THE ROUTLEDGE HANDBOOK OF MORAL EPISTEMOLOGY
The Routledge Handbook of Moral Epistemology brings together philosophers, cognitive scientists, developmental and evolutionary psychologists, animal ethologists, intellectual historians, and educators to provide the most comprehensive analysis of the prospects for moral knowledge ever assembled in print. The book's thirty chapters feature leading experts describing the nature of moral thought, its evolution, childhood development, and neurological realization. Various forms of moral skepticism are addressed along with the historical development of ideals of moral knowledge and their role in law, education, legal policy, and other areas of social life. Highlights include:

• Analyses of moral cognition and moral learning by leading cognitive scientists
• Accounts of the normative practices of animals by expert animal ethologists
• An overview of the evolution of cooperation by preeminent evolutionary psychologists
• Sophisticated treatments of moral skepticism, relativism, moral uncertainty, and know-how by renowned philosophers
• Scholarly accounts of the development of Western moral thinking by eminent intellectual historians
• Careful analyses of the role played by conceptions of moral knowledge in political liberation movements, religious institutions, criminal law, secondary education, and professional codes of ethics articulated by cutting-edge social and moral philosophers
Aaron Zimmerman is Professor of Philosophy at the University of California, Santa Barbara, and the author of two books: Moral Epistemology (2010) and Belief: A Pragmatic Picture (2018).

Karen Jones is Senior Lecturer at the University of Melbourne. She has written extensively about trust, what it is, and when it is justified. She is the coeditor, with François Schroeter, of The Many Moral Rationalisms (2018). Much of her work is from a feminist perspective.

Mark Timmons is Professor of Philosophy at the University of Arizona. He specializes in Kant's ethics and metaethics. A collection of his essays on Kant, Significance and System: Essays on Kant's Ethics, was published in 2017. He is currently at work on two books: one on Kant's doctrine of virtue and another (with Terry Horgan) on moral phenomenology.
ROUTLEDGE HANDBOOKS IN PHILOSOPHY
Routledge Handbooks in Philosophy are state-of-the-art surveys of emerging, newly refreshed, and important fields in philosophy, providing accessible yet thorough assessments of key problems, themes, thinkers, and recent developments in research. All chapters for each volume are specially commissioned and written by leading scholars in the field. Carefully edited and organized, Routledge Handbooks in Philosophy provide indispensable reference tools for students and researchers seeking a comprehensive overview of new and exciting topics in philosophy. They are also valuable teaching resources as accompaniments to textbooks, anthologies, and research-orientated publications.

ALSO AVAILABLE:

THE ROUTLEDGE HANDBOOK OF COLLECTIVE INTENTIONALITY
Edited by Marija Jankovic and Kirk Ludwig

THE ROUTLEDGE HANDBOOK OF SCIENTIFIC REALISM
Edited by Juha Saatsi

THE ROUTLEDGE HANDBOOK OF PACIFISM AND NON-VIOLENCE
Edited by Andrew Fiala

THE ROUTLEDGE HANDBOOK OF CONSCIOUSNESS
Edited by Rocco J. Gennaro

THE ROUTLEDGE HANDBOOK OF PHILOSOPHY AND SCIENCE OF ADDICTION
Edited by Hanna Pickard and Serge Ahmed

THE ROUTLEDGE HANDBOOK OF MORAL EPISTEMOLOGY
Edited by Aaron Zimmerman, Karen Jones, and Mark Timmons

For more information about this series, please visit: www.routledge.com/Routledge-Handbooks-in-Philosophy/book-series/RHP
THE ROUTLEDGE HANDBOOK OF MORAL EPISTEMOLOGY
Edited by Aaron Zimmerman, Karen Jones, and Mark Timmons
First published 2019
by Routledge
711 Third Avenue, New York, NY 10017

and by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 Taylor & Francis

The right of Aaron Zimmerman, Karen Jones, and Mark Timmons to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested

ISBN: 978-1-138-81612-1 (hbk)
ISBN: 978-1-315-71969-6 (ebk)

Typeset in Bembo by Apex CoVantage, LLC
CONTENTS
Contributors
Preface to Routledge Handbook of Moral Epistemology

SECTION I: Science

1 The Quest for the Boundaries of Morality (Stephen Stich)
2 The Normative Sense: What is Universal? What Varies? (Elizabeth O'Neill and Edouard Machery)
3 Normative Practices of Other Animals (Sarah Vincent, Rebecca Ring, and Kristin Andrews)
4 The Neuroscience of Moral Judgment (Joanna Demaree-Cotton and Guy Kahane)
5 Moral Development in Humans (Julia W. Van de Vondervoort and J. Kiley Hamlin)
6 Moral Learning (Shaun Nichols)
7 Moral Reasoning and Emotion (Joshua May and Victor Kumar)
8 Moral Intuitions and Heuristics (Piotr M. Patrzyk)
9 The Evolution of Moral Cognition (Leda Cosmides, Ricardo Andrés Guzmán, and John Tooby)

SECTION II: Normative Theory

10 Ancient and Medieval Moral Epistemology (Matthias Perkams)
11 Modern Moral Epistemology (Kenneth R. Westphal)
12 Contemporary Moral Epistemology (Robert Shaver)
13 The Denial of Moral Knowledge (Richard Joyce)
14 Nihilism and the Epistemic Profile of Moral Judgment (Jonas Olson)
15 Relativism and Pluralism in Moral Epistemology (David B. Wong)
16 Rationalism and Intuitionism—Assessing Three Views about the Psychology of Moral Judgments (Christian B. Miller)
17 Moral Perception (Robert Audi)
18 Moral Intuition (Matthew S. Bedke)
19 Foundationalism and Coherentism in Moral Epistemology (Noah Lemos)
20 Moral Theory and Its Role in Everyday Moral Thought and Action (Brad Hooker)

SECTION III: Applications

21 Methods, Goals, and Data in Moral Theorizing (John Bengson, Terence Cuneo, and Russ Shafer-Landau)
22 Moral Knowledge as Know-How (Jennifer Cole Wright)
23 Group Moral Knowledge (Deborah Tollefsen and Christopher Lucibella)
24 Moral Epistemology and Liberation Movements (Lauren Woomer)
25 Moral Expertise (Alison Hills)
26 Moral Epistemology and Professional Codes of Ethics (Alan Goldman)
27 Teaching Virtue (Nancy E. Snow and Scott Beck)
28 Decision Making under Moral Uncertainty (Andrew Sepielli)
29 Public Policy and Philosophical Accounts of Desert (Steven Sverdlik)
30 Religion and Moral Knowledge (C.A.J. Coady)

Index
CONTRIBUTORS
Kristin Andrews is York Research Chair in the Philosophy of Animal Minds and Associate Professor of Philosophy at York University in Toronto, Canada, and is the author of two books: Do Apes Read Minds? Toward a New Folk Psychology (MIT 2012) and The Animal Mind (Routledge 2015).

Robert Audi is John A. O'Brien Professor of Philosophy at the University of Notre Dame. His research interests focus on ethics, political philosophy, epistemology, religious epistemology, and philosophy of mind and action. He is the author of numerous books and articles, including Moral Perception (Oxford University Press, 2013) and Means, Ends, and Persons: The Meaning and Psychological Dimensions of Kant's Humanity Formula (Oxford University Press, 2015).

Scott Beck is the Head Principal at Norman High School in Norman, Oklahoma, and holds a PhD in educational leadership and policy studies. He serves on the Leadership Team for the Institute for the Study of Human Flourishing at the University of Oklahoma.

Matthew S. Bedke is Associate Professor of Philosophy at the University of British Columbia. He has published widely in metaethics, with particular interest in the nature of normativity, moral intuitions and their epistemology, reasons internalism, motivational internalism, and non-representationalist theories of normative thought and language. He has recently edited a special volume with the Canadian Journal of Philosophy, Representation and Evaluation.

John Bengson is Associate Professor of Philosophy at the University of Wisconsin, Madison, working primarily in epistemology, philosophy of mind, philosophy of action, and philosophical methodology. He is coeditor of Knowing How: Essays on Knowledge, Mind, and Action (Oxford University Press, 2011).

Terence Cuneo is Marsh Professor of Intellectual and Moral Philosophy at the University of Vermont. He is the author of The Normative Web (Oxford University Press, 2007), Speech and Morality (Oxford University Press, 2014), and Ritualized Faith (Oxford University Press, 2016).

Russ Shafer-Landau is Professor of Philosophy at the University of Wisconsin, Madison. He is the author of Moral Realism: A Defense (Oxford University Press, 2003) and the editor of Oxford Studies in Metaethics.

C.A.J. (Tony) Coady is Emeritus Professor of Philosophy at the University of Melbourne. His books include the influential Testimony: A Philosophical Study (1992) and the widely cited Morality and Political Violence (2008). In 2005, he gave the Uehiro Lectures on Practical Ethics at the University of Oxford, and in 2012 he delivered the Leverhulme lectures at Oxford.

Leda Cosmides is Distinguished Professor of Psychological & Brain Sciences at the University of California, Santa Barbara, where she codirects the Center for Evolutionary Psychology with John Tooby. She received the AAAS Prize for Behavioral Science Research, the American Psychological Association's Distinguished Scientific Award for an Early Career Contribution to Psychology, and a Lifetime Career Award from the Human Behavior & Evolution Society.

Joanna Demaree-Cotton is a PhD candidate in philosophy at Yale University. Her research focuses especially on how empirical psychology can shed light on the moral and epistemic status of judgment-forming processes. Recent work examines whether framing effects affect the reliability of moral intuition and asks how psychology can help us identify distorting influences on intuitions used in philosophical argument.

Alan Goldman is Kenan Professor of Humanities Emeritus at the College of William & Mary. He is the author of eight books, the most recent being Reasons from Within and Philosophy and the Novel. Emulating Beethoven, Schubert, and Dvořák, he is currently working on a final ninth major work on pleasure, happiness, well-being, and meaning in life, topics suitable for an old man.

Ricardo Andrés Guzmán is Associate Professor of Economics at the Centro de Investigación en Complejidad Social (Center for Research on Social Complexity) at Universidad del Desarrollo in Santiago, Chile. He is interested in the intersection of behavioral economics and evolutionary psychology, moral philosophy, and the application of computational methods for understanding social change and history.

J. Kiley Hamlin is Associate Professor of Psychology at the University of British Columbia in Canada. Her research aims to help tease apart the roles of nature and nurture in humans' social and moral lives by examining the developmental foundations of humans' tendency to evaluate others as prosocial or antisocial and to engage in cooperative and uncooperative behaviors themselves.

Alison Hills is Professor of Philosophy at Oxford University and Fellow and Tutor at St John's College. Her recent research has focused on the intersection between ethics and epistemology. Her book, The Beloved Self, was published by Oxford University Press in 2010.
Brad Hooker is Professor of Philosophy at the University of Reading. He is best known for his defense of rule-consequentialism, his criticisms of particularism, and his more recent research on impartiality and fairness.

Richard Joyce is Professor of Philosophy at Victoria University of Wellington (New Zealand). He is author of The Myth of Morality (Cambridge University Press 2001), The Evolution of Morality (MIT Press 2006), and Essays in Moral Skepticism (Oxford University Press 2016) as well as numerous articles and book chapters on metaethics and moral psychology.

Guy Kahane is Associate Professor at the Philosophy Faculty at the University of Oxford and Fellow and Tutor in Philosophy at Pembroke College, Oxford. Kahane is also Director of Studies and Research Fellow at the Uehiro Centre for Practical Ethics, Oxford. He works in metaethics, applied ethics, and moral psychology.

Victor Kumar is Assistant Professor of Philosophy at Boston University. He works on topics in moral philosophy that can be illuminated by work in cognitive science and evolutionary theory. His research has appeared in venues such as Ethics, Cognition, Philosophers' Imprint, and Philosophical Studies.

Noah Lemos is the Leslie and Naomi Legum Distinguished Professor of Philosophy at The College of William and Mary. He is the author of three books, Intrinsic Value, Common Sense, and An Introduction to the Theory of Knowledge (all with Cambridge University Press). His publications include "Ethics and Epistemology" in The Oxford Handbook of Epistemology and "Moore and Skepticism" in The Oxford Handbook of Skepticism.

Christopher Lucibella is a doctoral student at the University of Memphis. His areas of research include epistemology, social/political philosophy, and continental philosophy. He is currently writing a dissertation on the epistemology of group testimony.

Edouard Machery is a Distinguished Professor of History and Philosophy of Science in the Department of History and Philosophy of Science as well as the Director of the Center for Philosophy of Science at the University of Pittsburgh. He works in the areas of philosophy of cognitive science, moral psychology, experimental philosophy, metaphilosophy, and philosophy of science.

Joshua May is Assistant Professor at the University of Alabama at Birmingham. His book Regard for Reason in the Moral Mind (Oxford University Press) draws on empirical research to show that ordinary ethical thought and motivation are fundamentally rational activities.

Christian B. Miller is the A. C. Reid Professor of Philosophy at Wake Forest University. He is the author of over 80 papers and editor or coeditor of five volumes. His three books with Oxford University Press are Moral Character: An Empirical Theory (2013), Character and Moral Psychology (2014), and The Character Gap: How Good Are We? (2017).
Shaun Nichols is Professor of Philosophy and Cognitive Science at the University of Arizona. His research focuses on the psychological underpinnings of ordinary thinking about philosophical issues. He is the author of Mindreading (with Stephen Stich, Oxford 2003), Sentimental Rules (Oxford 2004), and Bound (Oxford 2015) as well as several articles at the intersection of philosophy and psychology.

Jonas Olson is Professor of Practical Philosophy at Stockholm University. He works mainly on metaethics and the history of moral philosophy. He is the author of Moral Error Theory: History, Critique, Defence (Oxford University Press, 2014) and co-editor of The Oxford Handbook of Value Theory (Oxford University Press, 2015).

Elizabeth O'Neill is Assistant Professor of Philosophy at Eindhoven University of Technology in the Netherlands. She works in the areas of moral psychology, moral epistemology, philosophy of biology, and applied ethics.

Piotr M. Patrzyk is a PhD student at the Faculty of Business and Economics, University of Lausanne, Switzerland. He is primarily interested in cognitive process models of criminal and moral decision making.

Matthias Perkams is Professor of Philosophy (especially Ancient and Medieval philosophy) at Friedrich-Schiller-Universität, Jena, Germany. He has published widely on Greek, Syriac, Arabic, and Latin philosophy from Socrates to early modern times, especially on practical philosophy, anthropology, and metaphysics.

Rebecca Ring is a PhD student in philosophy at York University writing her dissertation on moral practice in cetaceans.

Andrew Sepielli is Associate Professor of Philosophy at the University of Toronto. He has published papers in both normative ethics and metaethics and is currently writing a book about moral objectivity.

Robert Shaver is Professor of Philosophy at the University of Manitoba. Recent publications include "Sidgwick's Axioms and Consequentialism," Philosophical Review 123, 2014, and "Sidgwick on Pleasure," Ethics 126, 2016. He has also written recently on nonnaturalism, experimental philosophy, the origins of deontology, Sidgwick on well-being, Ross on the duty to give pleasure to oneself, and Prichard on consequentialism.

Nancy E. Snow is Professor of Philosophy and Director of the Institute for the Study of Human Flourishing at the University of Oklahoma. She is the author of Virtue as Social Intelligence: An Empirically Grounded Theory (Routledge, 2009) and edited The Oxford Handbook of Virtue.

Stephen Stich is Board of Governors Distinguished Professor of Philosophy and Cognitive Science at Rutgers University. He is a recipient of the Jean Nicod Prize, the Gittler Award for Outstanding Scholarly Contribution in the Philosophy of the Social Sciences, and the Lebowitz Prize for Philosophical Achievement. His most recent book is Collected Papers, Volume 2: Knowledge, Rationality and Morality.

Steven Sverdlik is Professor of Philosophy at Southern Methodist University in Dallas, Texas. He is the author of Motive and Rightness (2011) and papers on moral epistemology, moral responsibility, the history of moral philosophy, and the philosophy of criminal law.

Deborah Tollefsen is Professor of Philosophy at the University of Memphis. Her research and teaching interests include social epistemology, collective intentionality, and philosophy of mind. She is currently working on a book manuscript on the epistemology of groups.

John Tooby is Distinguished Professor of Anthropology at the University of California, Santa Barbara, where he codirects the Center for Evolutionary Psychology with Leda Cosmides. He received the National Science Foundation's Presidential Young Investigator Award, a J.S. Guggenheim Fellowship, and a Lifetime Career Award from the Human Behavior & Evolution Society.

Julia W. Van de Vondervoort is currently a graduate student studying developmental psychology at the University of British Columbia in Canada.

Sarah Vincent is the Florida Blue Center for Ethics Post-Doctoral Fellow at the University of North Florida (Jacksonville, FL). Her areas of research are philosophy of psychology/cognitive science and applied ethics, most especially with respect to nonhuman animals. She was previously a Post-Doctoral Visitor at York University (Toronto), and she earned her doctorate at the University of Memphis in 2015.

Kenneth R. Westphal is Professor of Philosophy, Boğaziçi Üniversitesi (İstanbul). His research focuses on the character and scope of rational justification in nonformal, substantive domains, both moral (ethics, justice, history and philosophy of law, philosophy of education) and theoretical (epistemology, history and philosophy of science). His books include How Hume and Kant Reconstruct Natural Law: Justifying Strict Objectivity without Debating Moral Realism (Clarendon, 2016) and Grounds of Pragmatic Realism: Hegel's Internal Critique and Transformation of Kant's Critical Philosophy (Brill, 2017). He is completing a new book, Normative Justification, Natural Law and Kant's Constructivism in Hegel's Moral Philosophy, and plans a systematic study in history and philosophy of law focusing on Montesquieu, G.W.F. Hegel, and Rudolf von Jhering.

David B. Wong is the Susan Fox Beischer and George D. Beischer Professor of Philosophy at Duke University. In addition to his books, Moral Relativity and Natural Moralities, he has written essays in contemporary ethical theory, moral psychology, and on classical Chinese philosophy.

Lauren Woomer received her PhD from Michigan State University in 2015. Her research operates at the intersection of epistemology, feminist philosophy, and critical philosophy of race and focuses on ignorance that is enabled by unjust social structures.
Jennifer Cole Wright is Associate Professor of Psychology at the College of Charleston, as well as an Affiliate Member of both the Philosophy Department and the Environmental and Sustainability Studies Program. Her area of research is moral development and moral psychology more generally. Specifically, she studies humility, meta-ethical pluralism, the influence of individual and social "liberal vs. conservative" mindsets on moral judgments, and young children's early moral development. She co-edited, with Hagop Sarkissian, Advances in Experimental Moral Psychology and is currently co-authoring a book, Understanding Virtue: Theory and Measurement, with Nancy E. Snow, as well as editing an interdisciplinary volume on Humility: Reflections on its Nature and Function (both with Oxford Press). When she's not writing, she is usually busy warping young minds in the classroom, brainstorming experiments in her lab, or satisfying her lust for travel by backpacking across Europe or SE Asia—or sometimes just trekking (with the help of a fuel-efficient car) across the US.
PREFACE TO ROUTLEDGE HANDBOOK OF MORAL EPISTEMOLOGY
As epistemology is the study of knowledge, so moral epistemology is the study of moral knowledge. The subject occupies a central place in the history of Western philosophy, as ancient philosophers debated whether virtue is itself a kind of knowledge and how the knowledge that manifests itself in the lives of those deemed morally excellent might be taught to children. During the medieval period, philosophers advanced the answers Aristotle had formulated to these questions by developing detailed analyses of human nature, moral instruction, the effects of instruction on children, and the differing motivations behind disobedience on the one hand and conformity to operative norms and values on the other. This was essential moral epistemology, shared with anxious parents from both the pulpit and lectern.

Moral epistemology acquired particular relevance throughout the modern period, as philosophers sought a method of moral reflection to support a system of "natural" rights and duties not easily found in those ancient texts and traditions they sought to challenge. And the nineteenth century brought the Darwinian revolution in biology, which spread greater understanding of the causal source of our most basic moral capacities while further undermining public belief in the divinity of accepted moral texts, codes, and mores. To assuage these concerns, some philosophers turned to secular moral epistemologies extracted from those that had been developed by Hobbes, Locke, Rousseau, Hume, and Kant in the modern period. Others rejected the idea of moral knowledge altogether, embracing nihilism or Social Darwinism. Most intellectuals writing in the wake of modernity sought more constructive ways to reconceptualize what they took to be the social virtues and the means by which we encourage them in each other, where the integration of diverse cultures added an additional impetus for their research. Twentieth-century moral epistemologists wanted to know how they could resolve disputes between rival moral theories, values, and traditions in as reasonable a way as possible. Indeed, John Rawls, one of the most influential moral philosophers writing during this period, spoke of the plurality of "conceptions of the good" as a fact of life, here to stay for the foreseeable future.

But by the end of the twentieth century, few intellectuals were pursuing projects in moral epistemology. It was often disparaged as the "poor cousin" of the other two central branches of metaethics: moral metaphysics and semantics. In hindsight, this comparative lack of attention is puzzling given that epistemological concerns continued to lie in the background, driving argumentative moves in debates about the nature of moral judgment, the semantics of moral terms, and the metaphysics of moral properties. In any event, we are thankful that the tide has now decisively turned, and moral epistemology is reclaiming its place as a third, coequal branch of metaethics. This handbook is a testament to this fact and to the renewed interest in moral epistemology more generally.

In virtue of what is a question appropriately considered a question in moral epistemology? We can draw the boundaries of moral epistemology either narrowly or broadly. Drawn narrowly, we find the cluster of issues that are the focus of Section II of this volume, "Normative Theory," which centers on the questions of whether moral knowledge is possible and, if it is, how we might acquire it. We are taken into the territory, both familiar and important, of debates over whether there are conceptual, metaphysical, or other obstacles to the very possibility of moral knowledge, whether there are non-inferentially justified moral beliefs that provide the foundation for further inferentially justified moral beliefs, or whether all moral justification is inferential, as well as debates about what sources of moral belief are reliable, whether reason, intuition, or perception. Moral epistemology, done in this vein and rich in history, remains the sole province of philosophers.

But it is notable that several of our philosopher authors have drawn from other disciplines to inform their work. They have found that their answers to traditional questions about the reliability of moral intuition are deepened when integrated with the latest work on the evolution of these intuitions; that their answers to traditional questions about the extent to which we can use reason or reflection to shape our moral intuitions and principles are deepened when integrated with the latest cognitive science on the nature of reason and reflection; and so on. Because integration with related research deepens our understanding of even the most traditional questions about moral knowledge, we have purposefully chosen to recapture the broad approach to moral epistemology introduced by the ancients.

Of course, while Aristotle was both the greatest moral philosopher and greatest biologist of his day, the modern academy effects a division of intellectual labor. Thus, to present the biology relevant to our questions about moral knowledge, not to mention the relevant developmental psychology, educational theory, cognitive neuroscience, and sociology, we must involve authors from other disciplines beyond philosophy. But not all of these disciplines are sciences. While many scientists are now writing about morality, moral thought and talk is not primarily academic. Ideas about moral knowledge—and talk of "knowing right from wrong"—continue to play a large role in the thought and speech of teachers, judges, political activists, and others. To better develop our understanding of what moral knowledge is taken to be and how these conceptions of its nature shape our social interactions in the courthouse, classroom, and legislature, we must consult the work of philosophers who have focused their analyses on these phenomena. Since these aspects of the field have not been center stage in contemporary analytic metaethics, we have turned to work in social epistemology and social ontology to this end. This more open approach to the area is vindicated by both the new light it sheds on the old questions and by the new questions that it brings into focus.

Section I of the volume explores the potential for scientists to contribute to moral epistemology. Philosophical naturalists are committed to locating moral phenomena within the natural world that is investigated by the natural and social sciences. Unlike traditional Kantian rationalists, who take the intellectual and reflective capacities characteristic of human cognition to be the key to morality and so embrace a form of human exceptionalism, naturalists seek to explain how uniquely human capacities could have emerged from the capacities of our nonhuman animal relatives and to understand the ways in which human moral practices are similar to and different from the normative practices of other social animals. Answering these questions calls for the expertise of evolutionary biology and animal ethology. Philosophers have long recognized the importance of answering the question of whether our moral judgments are reliable or not, but we have been slower to see that an interdisciplinary approach might make answering it more tractable. If we can understand the developmental story behind our acquisition of moral concepts and moral competence, we will be in a better position to assess the reliability of moral judgments that result from their exercise. Together, the entries in the first section of the volume reveal the fruitfulness of consulting the sciences in our efforts to tackle the distinctive problems of moral epistemology.

Traditional moral epistemology is interested in both doxastic justification and propositional justification, or, less technically, in the question of what it takes for an individual to know a moral proposition and in the question of theory acceptance in ethics. But these questions assume that moral knowledge (if any there be) is propositional and is had by individuals, when perhaps at least some moral knowledge might be better modeled on know-how, or might be embedded in social practices, or might be the property not of individuals but of groups. These questions are among those taken up in Section III, which includes an entry on decision making under distinctively moral uncertainty along with analyses of problems that have been addressed in social ontology, social epistemology, feminism, and critical-race theory to identify and illuminate comparatively new problems in moral epistemology.

Each of the thirty chapters in this volume is a state-of-the-art overview of its topic. Read together, they make the case for redrawing the boundaries of what counts as moral epistemology and of who counts as doing moral epistemology. We think the result is itself a defense of the usefulness and interest of defining the field broadly, and we invite our readers to confirm this for themselves.
SECTION I
Science
"Hume's Law" urges us to distinguish statements of how things are and attempts to explain why they are that way from evaluations of the world so described and any policy proposals, recommendations, or decisions we might premise on our evaluations (Hume, T 3.1.1). So understood, Hume's Law poses challenges for a putative science of moral epistemology, for it is extraordinarily difficult to advance claims about morality without therein taking a stance on what is and what is not immoral, where statements of immorality are commonly treated as evaluative if not prescriptive in character. Nevertheless, despite the historical challenges to the enterprise, academic scientists have returned to the study of morality en masse. Is this because we have found a way to respect Hume's Law? Or have we simply grown comfortable flouting its demands? Is a science of morality really possible?

The scientist authors of this section's chapters have values; and, like the rest of us, they have views about what we ought to be doing as a community. Surely these values have affected their choice of what to describe, how to describe it, and which explanations of the putative data to report. Moreover, though these authors have attempted to describe and explain things as they have found them, they have had to assess whether one putative explanation of what they've described is better than another. To be fair, the judgment that one explanation of the data is better than another is supposed to emerge from an epistemic assessment of the theories in play, not a moral evaluation of the goodness or badness of the reality these theories are meant to explain. But the theories in question have "our" moral views and practices as their object, and the authors in question are members of "us." So it is entirely appropriate to wonder whether they've succeeded in isolating their epistemic evaluations from the moral values or principles that find expression in their nonacademic lives, as when they reprimand others or defend their actions from judgment, or endorse certain political candidates and criticize others, or advocate for changes in our laws or public policies.

To their credit, academics have developed methods for achieving some level of "objectivity" in their assessment of morality. First, philosophers have attempted to give accounts of what knowledge of right and wrong would have to be were we to have such knowledge without taking a stand on the reality of what they've described. These projects are usually framed as analyses of our concepts of moral knowledge or accounts of the social practices in which these concepts are applied. Following J. L. Mackie, many label these projects "second-order" or "metaethical" theories. It should be noted that some of those who advance metaethical theories do not think of themselves as scientists, perhaps because they don't feel the need to conduct or analyze experiments to lend credence to their claims. But many philosophers do consider metaethics a science. And even those uncomfortable with this label tend to advance their accounts as true or accurate representations of our moral thinking. Authors in both these camps must either reject Hume's Law or show that their metaethical hypotheses have no implication for the "first-order" morality of reader or author. Since philosophers agree on very little, it is not surprising that they continue to debate whether an author's metaethics can be isolated from her ethics in the manner proposed.

But a second approach to the scientific study of morality is suggested by the traditional analysis of knowledge itself. Though E. Gettier demonstrated to the satisfaction of most contemporary epistemologists that knowledge does not reduce to justified, true belief, most of us continue to posit belief as the core psychological component of knowledge. (It seems reasonable to suppose that you must be convinced of something to know it.) So a scientist of morality might eschew talk of "moral knowledge" in favor of "moral belief" or "moral judgment." Or, in an attempt to denote her target subject in full generality, the theorist might write of the genesis, development, and operations of "moral cognition." On this understanding of the terrain, psychology, ethology, anthropology, and sociology are the scientific components of moral epistemology.

Stephen Stich uses Chapter 1 of this volume to recount the recent history of this "Philosopher's Project": the attempt to distinguish our moral psychologies from other components of our minds without making substantive assumptions about what is right and wrong. Many philosophers analyzed paradigmatic moral cognitions as representations of rules, but to distinguish moral rules from rules of etiquette and the like, R. M. Hare, inspired by Kant, argued that you don't think of a norm as moral in character unless you treat it as "universalizable" and "prescriptive." Other theorists turned their attention to isolating distinctively moral modes of thinking. For instance, to distinguish genuinely moral reasoning from prudential calculation, W. Frankena proposed that some consideration of others and their interests is essential. N. Cooper and P. Taylor joined Frankena in hypothesizing that a person's moral code consists of those prescriptions she treats as "overriding or supremely important." And A. Gewirth added Kant's idea of categoricity: to think of a rule as moral you must think that you are bound to follow it even when obedience would thwart your ends or frustrate your desires. Several theorists added some susceptibility to guilt or ostracism in the wake of a norm's violation, and additional conditions were proposed. Predictably, the philosophers failed to achieve consensus. But when psychologists, led by E. Turiel, eventually extracted a working definition of "moral cognition" from the philosophical literature, they abandoned the attempted neutrality of most analyses by requiring some relation to harm, welfare, justice, or rights.
Though many psychologists found evidence that people conceptualize distinctively moral rules in the way Turiel supposed, and many theorists still distinguish moral rules from "mere" conventions in this way, Stich articulates the seeds of the project's ruin. First, the formal analyses or definitions of "moral cognition" have fallen into question. For example, Stich reports evidence that some people think of rules against corporal punishment as authority-dependent. Are prohibitions on the practice not then moral rules? Stich rejects this conclusion and proposes that "moral cognition" is not associated with a unitary concept. Instead, this phrase and others like it correspond to different concepts in the minds of different people, though our paradigms of moral rules, moral reasoning, or moral motivation may be similar. Second, Stich describes how J. Haidt and others demonstrated that morality is not limited to harm, welfare, rights, and justice in the minds of illiberal people. For instance, disgusting acts are often "moralized," even when they neither cause harm nor constitute injustice.

Elizabeth O'Neill and Edouard Machery, the authors of Chapter 2, agree with Stich's critique of the moral/conventional distinction. They join Haidt in broadening the category of "moral cognition" beyond rules related to harm, care, fairness, and reciprocity to include also "groupish" norms of patriotism, ideals of loyalty, authority, and respect, and rules preserving purity or sanctity. They go on to add to Haidt's list our concern for privacy and honesty.1 There is even evidence that some non-Western people "lump" all of their rules together; that there is no difference in their minds between norms associated with these "foundations" and norms of other kinds. They conclude that some people fail to draw a distinction of any kind between moral norms and nonmoral conventions.

Instead of trying to define distinctively moral cognition, O'Neill and Machery try to isolate a broader phenomenon: normative cognition, which they define as the capacity to learn social rules, the disposition to follow them, the tendency to punish rule-breakers, and some susceptibility to a range of characteristic emotions, including admiration, disgust, guilt, and shame. When it is defined in this way, normative cognition is indeed a human universal, and O'Neill and Machery report evidence that all people "externalize" at least some of their norms to some degree. They also describe a significant overlap in the contents of norms embraced by diverse cultures; recount how almost every culture judges an agent's intentions relevant to the propriety of punishing her for a given violation; and report a study suggesting that no community treats ignorance of the norms in play as an excuse for violating its rules. But O'Neill and Machery find a great deal of variation in the content of norms beyond these areas of overlap, along with significant differences with regard to the importance of an agent's intentions for judgments of her blameworthiness. They also report substantive variation with regard to how much of a community's life is governed by norms of any kind. Some societies are more rule-heavy than others.

Though the universality of normative cognition among humans does not imply the innateness of a shared "normative sense," it lends credence to a continuity hypothesis of some sort. Mightn't Homo sapiens have inherited our proclivity toward rule governance from the human-like apes from whom we evolved? In Chapter 3, Sarah Vincent, Rebecca Ring, and Kristin Andrews shed light on this question by describing the "ought thoughts" of other animals.
They find these normative cognitions implicated in various "normative practices," which are defined as "patterns of behavior shared by members of a community that demonstrate they value certain ways of doing things as opposed to others."

According to Vincent, Ring, and Andrews, when the leader of a wolf pack prevents a female member from breeding with a strange male, she is enforcing a norm of obedience. When a dog tucks her tail and hides her face because she anticipates a scolding for stealing cake, her guilt evidences her susceptibility to these same norms. Indeed, when an older ape critiques a youngster's initially unsuccessful attempts at termite fishing, norms of obedience are expressed and taken to heart in the process. In partial contrast, when capuchin monkeys protest getting a cucumber after observing a fellow receive a more highly valued grape for the same task, the capuchin has expressed her aversion to injustice, implicating a norm of reciprocity. This second group of norms is supposed to guide monkeys' exchanges of food and grooming services and their expressed dissatisfaction with unfair deals.

Animal acts of self-sacrifice and consolation are instead guided by norms of altruism or caring, as when orcas attack ships to save their pod-mates, humpback whales save a seal from the pursuit of these same orcas, or a polar bear mourns the death of a mate. Norms of social responsibility are manifested in various distributions of goods and divisions of labor. One example might be the sentinels among a scurry of Belding's ground squirrels, who draw danger upon their own heads by whistling warning of a hawk's approach. In this vein, Vincent et al. report a chimp in the Kansas City Zoo who propped a log against the enclosure's wall to serve as a ladder and then "beckoned to another six chimps to join him" in his escape. Finally, the authors posit norms of solidarity that reinforce the common identity of the communities in which they live. These norms are invoked to help explain why a group of cetaceans might develop an in-group language of whistles and clicks or beach themselves collectively.

The authors go on to examine the lives of chimpanzees and cetaceans in detail, reporting an array of normative practices, which are in turn supposed to provide evidence of a similarly complex manifold of "ought thoughts" in the minds of those animals enacting them. They conclude by rebutting various deflationary explanations of the practices they report and then arguing, against C. Korsgaard (but in keeping with M. Rowlands), that an animal can be guided by norms even if she doesn't have the ability to introspect, interrogate, and modify her initial reactions to the behavior of a conspecific so as to comply with rules she accepts "from" her reflective endorsement of them.

So as not to beg questions against the parties to this dispute, let us use "explicit norm guidance" to refer to the reflective capacity we have just described. If one chimp wants to mate with another and refrains from doing so because she represents this as something forbidden—or something she ought not do—we will say that she is explicitly guided by the norm in question. But what is it to think of some proposed action as wrong or forbidden? If we analyze this thought in the way proposed by O'Neill and Machery in Chapter 2, we must look for evidence that the chimp in question enforces the mating hierarchy in cases in which she is not personally implicated or that she now complies with it because of the guilt and remorse she experienced after prior indiscretions. Vincent, Ring, and Andrews do not argue that explicit rule guidance (so understood) is manifest among populations of nonhuman animals, but they also don't rule it out, and there is some intriguing evidence in favor of the hypothesis. De Waal, for example, argues that other animals exhibit "willpower," as when they forgo a present reward to secure more remote advantages (2016, 221–229), and willpower would seem to implicate explicit norm guidance of a sort.
To be fair, when an animal suppresses an experienced appetite for one grape in the hopes of therein securing ten, she is explicitly guided by prudential rather than moral norms. But when these norms implicate others, suppression of appetites and aversions in their service might be thought to constitute genuinely "moral" norm guidance. (This is a possibility to which Vincent et al. remain open.) In the end, we might join Darwin in awarding full marks to dogs who overcome fear to save their owners.
The conception of normative cognition that emerges from an evolutionary perspective is more detailed, less idealized, and more realistic than those assumed by traditional moral epistemologists. But these theories remain relatively abstract and conjectural. Since the posited psychological processes of norm guidance are often supposed to be introspectively inaccessible to us, only neuroscience can tell us whether aspects of our moral lives that might have evolved via natural mechanisms of selection operating upon populations of our hunter-gatherer ancestors really did so evolve and persist to this day. Unfortunately, to describe the characteristic inputs, functions, and outputs of a neurological process in psychological or computational terms, we need to utilize concepts drawn from outside neuroscience. Joanna Demaree-Cotton and Guy Kahane explain the relevance of this realization to normative moral epistemology in Chapter 4 when they argue that findings in neuroscience cannot be used to evaluate the reliability or adaptivity of a set of moral intuitions unless we can infer which psychological process a neural network is implementing. The authors go on to assess the role that neuroscience has played within the cognitive science of morality (CSM) more generally and the prospects that the CSM will have "normative significance" by affecting the first-order moralities of those of us who have been exposed to it.

Demaree-Cotton and Kahane describe how difficult it is to "map" neurological processes onto "higher-level" psychological processes to confirm or infirm hypotheses about the proximate causes of our relatively automatic normative intuitions and judgments. First, a psychological process may be differently realized in the nervous systems of different people or groups of people. "For example, emotional processing that is normally supported by paralimbic brain areas in nonclinical populations might be supported by the lateral frontal cortex in psychopaths." Second, a discrete brain area or neural network can support many different processes, which both complicates the attempt to assign that area a unified computational function and undermines efforts to assess whether the cognition it enables is reliable or adaptive. It may turn out that a single network participates in both reliable and unreliable (or adaptive and maladaptive) cognitive processes. Finally, we may have good reason to recognize cognitive processes that are not neurologically discrete.

As Demaree-Cotton and Kahane see it, CSM is organized around three main paradigms. Two of these are the "two-systems" or dual-process models of Haidt and Greene discussed in detail in many of this volume's chapters. The third is J. Mikhail's "universal moral grammar" approach—critiqued by Stich in Chapter 1—which posits an innate faculty for representing intent, harm, and the intuitive wrongness of intentionally inflicting harm. The authors are relatively dubious of the use to which neuroscience has been put in defense of these models, arguing that a range of different neural networks are implicated in moral judgment. According to their review of the relevant literature, we currently lack neurological evidence of those specialized or "dedicated" mechanisms of normative judgment posited by evolutionary psychologists. Instead, the evidence suggests that moral rules are learned and imbued with emotion in the way Nichols describes in Chapter 6.
Demaree-Cotton and Kahane also use neurological evidence to further undercut the reason-emotion dichotomy placed into question by May and Kumar in Chapter 7. The neurological evidence suggests that emotions are key components of reflection, decision, and choice. And this undermines Greene's attempt to identify utilitarian calculation with the neural correlates of "reason" in order to dismiss the processes responsible for deontic intuition as unreliable "passions" or emotions. The neurology responsible for aversion to killing one to save five isn't correlated with knee-jerk reactions of disgust or fear. Instead, these "emotional" processes, mediated by areas such as the right temporoparietal junction (rTPJ), amygdala, and ventromedial prefrontal cortex (vmPFC), allow us to assign intention, distinguish relevant from irrelevant information, and weigh different pieces of relevant information against others to arrive at "all things considered" judgments. What emerges is not a "prepotent" emotional response, as Greene has maintained, but a deployment of Aristotelian phronesis or practical wisdom.

Evidence for this reconceptualization is provided by observation of clinical populations. Damage to the aforementioned brain regions is indeed correlated with higher rates of utilitarian judgment, but it is also correlated with psychopathology, blindness to the import of intention, diminished empathy, and an increased tendency to punish perceived slights. "In nonclinical populations, so-called 'utilitarian' judgments that Greene associated with the dorsolateral prefrontal cortex (dlPFC) are not associated with genuinely utilitarian, impartial concern for others but rather with rational egoism, endorsement of clear ethical transgressions, and lower levels of altruism and identification with humanity." For this reason, "current evidence suggests that both emotions and reasoning contribute to moral judgment and that moral judgment may operate at its best when reasoning and emotion interact."

To adopt an evolutionary perspective on morality we must focus on the accumulation of changes in populations of animals over "deep" time. By studying the norms enacted by our ancestor species, trying to correlate these behaviors with similar activities in humans, and looking for the shared neurological structures that enable these interactions to proceed as they do, scientists are trying to uncover the origins and underlying reality of the human moralities enacted across the globe today. But to fully understand our moralities we must incorporate a more proximate developmental perspective. Each person develops a set of moral views and dispositions over her lifetime. Developmental psychologists examine these processes. What have they discovered? And how do these discoveries mesh with the other sciences of morality that we have examined?

In Chapter 5, Julia W. Van de Vondervoort and J. Kiley Hamlin address these questions by recounting relatively recent paradigms in developmental moral psychology. Their history overlaps with Stich's to some extent as we read of how psychologists from Piaget to Kohlberg projected the history of political thought from Hobbes to Kant onto the activities of the children they studied. Moral reasoning was supposed to emerge from self-interested calculation when the needs and interests of other children were made salient during playground disputes. The "highest" form of moral development was to be found in Rawlsian calls to limit rules to those no reasonable person would reject. Though Van de Vondervoort and Hamlin are less critical than our other authors of the Kohlbergian tradition in developmental moral psychology, they report a number of the criticisms that have been brought against it.
Children do not always focus on outcomes to the exclusion of intentions in their evaluations of behavior, and when a child's parents distinguish moral rules from nonmoral conventions by treating violations of the former as more serious than violations of the latter, their children cognize this distinction at a much earlier age than Kohlberg allowed.

Van de Vondervoort and Hamlin also question Kohlberg's assumption that Kantian reasoning marks the pinnacle of moral development by reviewing a large body of evidence linking emotion to moral cognition. They begin with the analyses of Hume and Adam Smith, who both argued that emotional reactions of approval and disapproval acquire moral content when we judge that we would continue to experience them were we to adopt a "general view" or imagine ourselves impartial spectators to the events to which we are reacting. Hypothesizing that judgment-sculpted condemnation and approval of these kinds evolved to facilitate cooperation, Van de Vondervoort and Hamlin report evidence that our attraction to helpers and aversion to hinderers originates in preverbal infancy. Two-year-olds distinguish intentional harm from accidental damage, and they protest intentional acts of harm and injustice no matter who commits them, but they limit their condemnation of unconventional acts to those "in-group" members who are party to the convention. Infants as young as 3 months old seem to track the social valence of an act, preferring helpful graphics and puppets to characters shown hindering the pursuits of others. And infants as young as 18 months manifest intuitions of fairness, exhibiting a preference for equal distributions of goods except when the labor or success of some would merit their receiving a larger share (a finding that resonates with the evolutionary hypotheses defended by Cosmides et al. in Chapter 9).

Chapter 5 concludes with an assessment of the evidence that infants distinguish morality from prudence. In one study, children in their first year rejected more snacks from a hinderer, choosing instead fewer snacks offered by a helper. Do these infants prefer helpers over hinderers because they hope to benefit from future interactions with the partners they've chosen? Are infants capable of the more disinterested evaluations that Hume and Smith equated with distinctively "moral" judgment? According to our authors, the jury is still out on these matters.

Van de Vondervoort and Hamlin conclude that both emotions and reasoning are implicated in moral development. But exactly how do these elements interact in the development of the reader's more or less "mature" morality? In Chapter 6, Shaun Nichols tries to put emotions in their proper place by embracing a relatively cognitivist account of the genesis of our intuitive moral judgments. On J. Greene's dual-processing account, our resistance to killing one to save five is primarily constituted by a "prepotent" aversion to the act. But why, asks Nichols, do we feel this way toward killing as a means? Drawing on work in machine learning, Nichols distinguishes the kind of "model-free" learning that inculcates habits and instinctive responses from the "model-based" learning that provides animals with the kind of information they need to navigate their environments in more flexible ways. Habits tend to persist for a time even when they don't serve our ends, but rational animals overcome their habits when the benefits of doing so outweigh the costs. When an animal makes this calculation but nevertheless "defers" to the habit blocking her ends, Nichols classifies her behavior as weakness of will. He cites, as an example, someone who desperately wants to scuba dive but gives up trying because she finds it difficult to surmount her instinctive aversion to breathing underwater. Do our deontic intuitions originate in the kind of model-free learning responsible for our instinctive aversions? Nichols reports work by F. Cushman that is supposed to support a positive answer.
For instance, subjects are reluctant to smash an obviously fake hand when they have no qualms about striking a nut. Of course, subjects are even more averse to striking real hands, but the point remains: people are averse to an action that tends to be harmful (e.g., hitting what looks to be a hand) even in cases in which it is manifestly not harmful.
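The model-free/model-based contrast that Nichols borrows from machine learning can be made concrete with a toy computation. The sketch below is illustrative only and is not drawn from Nichols's chapter or Cushman's studies; the cached values, the miniature world model, and all the numbers are invented assumptions. It encodes the pattern the fake-hand result suggests: a habit system that has cached punishment for hand-striking keeps its aversion even when an explicit model of the situation says no harm will result.

```python
# Toy contrast between model-free (habit-like) and model-based evaluation.
# Everything here -- the actions, values, and miniature "world model" -- is
# an invented illustration, not data from Nichols or Cushman.

def model_free_value(cached_q, action):
    """Return a value cached from past reward and punishment alone."""
    return cached_q[action]

def model_based_value(world_model, outcome_values, action):
    """Evaluate an action by simulating outcomes with an explicit model."""
    return sum(prob * outcome_values[outcome]
               for outcome, prob in world_model[action].items())

# Past experience: striking hand-shaped things was punished, so the habit
# system carries a strongly negative cached value for that class of acts.
cached_q = {"strike_hand_shaped_object": -10.0, "strike_nut": 1.0}

# The explicit model "knows" the hand is fake: striking it harms no one.
world_model = {
    "strike_hand_shaped_object": {"no_harm": 1.0},
    "strike_nut": {"no_harm": 1.0},
}
outcome_values = {"no_harm": 0.0, "harm": -10.0}

for action in ("strike_hand_shaped_object", "strike_nut"):
    print(action,
          "| habit value:", model_free_value(cached_q, action),
          "| model value:", model_based_value(world_model, outcome_values, action))

# The habit value for the fake hand stays negative even though the
# model-based evaluation is neutral: the "overlearned" aversion persists.
```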
But Nichols provides reasons for doubting consequentialist attempts to debunk deontic judgment as an "overlearned" response of this same kind. Though instinctive or habituated aversion is a regular component of deontic intuition, we are often averse to an action we do not judge wrong. (Think of someone spitting into a cup and then drinking it. This is disgusting but not obviously immoral.) Nichols's positive proposal is that we don't judge an act wrong unless we represent it as the intentional performance of an action known to be prohibited by rule, where model-free learning is insufficiently robust to account for our knowledge of social rules and agents' intentions.

To argue that an understanding of social rules is necessary for moral judgment, Nichols critiques P. Railton's cognitive account of the "broad affective system" implicated in those of our moral judgments that are not grounded in conscious reasoning. Railton insists that emotions are not "dumb," as they are often attuned to risks and rewards, obstacles and affordances. For example, mightn't our relatively automatic aversion to incest attune us to the dangers of this practice? According to Railton, this assessment is not undermined by the observations of Haidt and colleagues that subjects remain averse to a described act of incest even when researchers stipulate that it hasn't caused harm. After all, we are similarly averse to games of "Russian roulette" that do not end in suicide. But Nichols insists that affect is "less flexible and sensitive to evidence" than "general cognition," and he casts doubt on Railton's diagnosis. Risky behavior isn't typically conceptualized as immoral, and few of us have the kind of experience with incest that would engender an emotional memory of its deleterious effects, so it's unlikely that our belief in incest's immorality can be chalked up to "affective attunement." Instead, variation in incest norms across cultures suggests that most of us have been taught the particular incest prohibitions operative in our communities.

Nichols concludes his chapter with a description of the kind of statistical learning implicated in a child's mastery of the social rules operative in her milieu. Though we are prone to various inductive and probabilistic fallacies, recent studies show that children utilize valid heuristics like a "size principle" to shape their expectations. Nichols suggests, on this basis, that youngsters can extract subtle principles (such as the greater immorality of harming in comparison with allowing harm to occur) from their exposure to a range of moral judgments without explicit instruction in these principles. Perhaps parental and religious prohibitions on incest are facilitated by this kind of implicit pattern recognition, which functions alongside a more or less innate aversion to sexual encounters with siblings to yield the kind of aversion Haidt has recorded. In either event, a typical person's belief in the immorality of incest is not a wholly "system 1" product.

In Chapter 7, Joshua May and Victor Kumar look at how reasoning and emotions interact in mature moral agents who have learned a morality in the ways Nichols describes. They begin by endorsing a "two-systems" model of cognition in general and moral cognition in particular.
In Chapter 7, Joshua May and Victor Kumar look at how reasoning and emotions interact in mature moral agents who have learned a morality in the ways Nichols describes. They begin by endorsing a "two-systems" model of cognition in general and moral cognition in particular. According to May and Kumar, when Humean philosophers and psychologists deemphasize the role of reasoning in the genesis, modification, and entrenchment of our moral judgments, they are neglecting unconscious or "system 1" reasoning, which is supposed to be quicker, more automatic, and less flexible than the "system 2" reasoning of which we are introspectively aware. When you think like a utilitarian, calculating the likely impact of a proposed course of action on those you know will be affected, you are utilizing your slow, effortful system 2. In contrast, system 1 processes account for the automatic aversion you experience to the prospect of killing one person to save five others.
While they admit that your reaction to a "Trolley case" of the relevant sort will have an emotional component, May and Kumar suggest this feeling might be an effect of your belief that the ends don't justify the means rather than its cause. As an example of this phenomenon, they discuss ideological vegetarians who only come to experience disgust at the sight of meat after accepting arguments against killing animals for food. The authors also consider the possibility that affective processing runs "in parallel" with the unconscious reasoning responsible for our deontic intuitions.

In offering their analysis, May and Kumar take aim at other "two-systems" theorists, like Haidt, who identify system 1 with "emotion" rather than "reason." But the authors also assign a substantive role to conscious inference, rejecting Haidt's suggestion that system 2 reasoning is limited to the lawyerly defense of a moral judgment that has been challenged. System 2 reasoning is supposed to allow us to achieve greater levels of consistency between our moral intuitions, attain reflective equilibrium between our intuitive judgments of particular actions and the moral principles we embrace, and help us suppress or even eliminate automatic reactions to one another that we reject as racist, sexist, or unduly prejudicial.

What then of emotion? Psychopaths have a diminished capacity for sympathy and guilt. Doesn't this distort their moral thinking? May and Kumar are not convinced, hypothesizing that deficits in empathic concern and allied emotions may adversely affect the moral development of psychopathic children, even if these emotions do not play a significant, proximate role in our adult capacity for moral thought. As evidence for this, they focus on those patients analyzed by Demaree-Cotton and Kahane in Chapter 4, who are thought to retain their capacity for moral judgment despite sustaining damage to the ventromedial cortex in adulthood, which deadens their emotional sensitivities. The authors also cast doubt on the significance and replicability of studies that are supposed to show that disgust and anger magnify moral condemnation. But May and Kumar allow that emotions influence reasoning in cases of wishful thinking, self-deception, and confirmation bias, and that damage to the brain areas most directly implicated in the experience of emotion is correlated with impairments in deliberation and decision. They therefore conclude "that the way to attain and maintain moral knowledge will require improving both reasoning and emotion."

To account for our intuitive judgments, May and Kumar posit automatic, effortless bouts of reasoning that we cannot introspectively identify or describe. In Chapter 8, Piotr Patrzyk explores morally relevant heuristics in greater depth, recounting the pioneering work of G. Gigerenzer and colleagues. He begins with a critique of the kind of excessive idealization in moral psychology that results from modeling people who have not been exposed to Kant and Mill as "tacit" deontologists or consequentialists. Kohlberg, in particular, is criticized for not distinguishing the reasoning utilized to defend or justify a judgment from the cognitive processes implicated in its genesis. To get at the real causes of our moral verdicts, Patrzyk insists that we begin with an assessment of the "computational feasibility" of a proposed mechanism or decision rule. For example, we rarely have the information we would need to calculate expected utilities, so it is reasonable to suppose that we rarely do so.
Instead, limited bodies of information trigger "domain-specific" mechanisms that in turn account for the different sorts of normative judgment we render, where we can assume, in advance, that the calculations or inferences instantiated by these mechanisms are tractable, robust, frugal, and quick.
Patrzyk extends his critique to prominent advocates of the "two systems" approach to moral judgment. Haidt is criticized for saying nothing about how system 1 takes us from the description of a scenario to an intuition of the rightness or wrongness of the actions portrayed, an allegation lent force by the cognitivist accounts of system 1 advanced by May, Kumar, and Nichols in previous chapters. Theorists have also tended to assume that system 2 processing "corrects" system 1 intuitions, rendering reflective moral judgment more reliable, but studies show that time pressure sometimes augments virtuous choice, as in a public goods game. The problem, Patrzyk claims, is that "system 1" and "system 2" are vague, overly idealized labels. But instead of replacing these terms with more descriptively adequate theories that might account for the variable effects of time pressure, some two-systems theorists implausibly claim that as a general matter our intuitive responses are self-interested, then quickly change to incorporate the interests of others, and then revert to amorality when more time is devoted to choice. According to Patrzyk, this is "data fitting" at its worst, as experimental results are shoe-horned into the two-systems framework they in fact undermine.

The dominant paradigm in behavioral economics is even more idealized than the two-systems view, and Patrzyk goes on to offer devastating criticisms of the economist's penchant for claiming that we make our decisions "as if" we are trying to maximize expected utility. "Such research does little to answer questions about how humans perceive dilemmas, what information they look for and in what order, and how they combine information to make decisions."

How then should we model the mechanisms or processes implicated in the genesis and revision of our moral intuitions? To answer this question, Patrzyk cites work by Delton, Krasnow, Cosmides, and Tooby on why people cooperate with strangers. Delton et al. "contextualize" the decision problem by assuming that the mechanisms responsible for a decision to cooperate initially evolved under conditions of selection. A disposition to cooperate is an adaptation, as hunter-gatherers augmented their reproductive fitness by cooperating with fellow tribe members for mutual benefit. But when we utilize these strategies in our present context, we cooperate in ways that often fail to advance individual fitness. Those who ignore human history when crafting their models of moral judgment and choice entirely overlook this possibility.

Patrzyk concludes by urging researchers to use what is known about the evolution of humanity to frame more realistic accounts of judgment and choice. We should assume, in particular, that processes of judgment and choice are domain specific, that the mechanisms executing these processes evolved under conditions of selection because they solved the problems humans faced in those conditions, and that utilizing these mechanisms was "rational" in these environments insofar as it secured solutions that were better or more adaptive in comparison with their tractable alternatives. But there is no way to determine whether a mechanism or process of judgment or choice is both tractable and plausibly realized in a human's mind or brain without describing that mechanism in algorithmic detail.
Those advancing serious hypotheses need to describe the "search rules" that guide acquisition of the inputs to decision, the "stopping rules" that determine when the search for information gives way to decision, and the "decision rules" that take a mental mechanism from premises to conclusion. As an example of best practices, Patrzyk describes a 2017 study conducted by Tan, Luan, and Katsikopoulos on the conditions under which we will forgive someone for a perceived indiscretion.
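To make this anatomy concrete, here is a minimal sketch (in Python) of a decision mechanism with explicit search, stopping, and decision rules. The cues, their ordering, and the default are invented for illustration and should not be attributed to Patrzyk or to the Tan, Luan, and Katsikopoulos study.

```python
# A minimal sketch of a fast-and-frugal decision mechanism in the
# Gigerenzer tradition. The cues, their ordering, and the default are
# invented for illustration, not taken from the study discussed above.

def forgive(offense: dict) -> bool:
    """Decide whether to forgive a perceived indiscretion."""
    # Search rule: consult cues one at a time, in a fixed order of
    # assumed importance.
    cues = [
        ("apologized",          True),   # forgive if the offender apologized
        ("intentional",         False),  # forgive if the harm was unintentional
        ("valued_relationship", True),   # forgive if the relationship matters
    ]
    for name, forgiving_value in cues:
        value = offense.get(name)
        # Stopping rule: stop at the first cue with a known value.
        if value is not None:
            # Decision rule: that single cue fixes the verdict.
            return value == forgiving_value
    return False  # default when no cue is available

print(forgive({"apologized": True}))   # True: the first cue decides
print(forgive({"intentional": True}))  # False: the harm was intentional
```

Because the mechanism consults cues one at a time and commits at the first cue that discriminates, it is tractable and frugal in exactly the sense Patrzyk demands, and each of its three rules can be evaluated separately for psychological plausibility.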
In Chapter 9, Leda Cosmides and John Tooby, leading researchers in this field of study, join Ricardo Guzmán to provide a masterful overview of what evolutionary moral psychology has achieved to date. They begin by arguing for the domain specificity of various mechanisms of normative judgment on evolutionary grounds. "It is hard to see how natural selection would favor a single, unitary system for generating and regulating our choices—moral or otherwise—when programs tailored for 'tracking' and promoting fitness in one domain (e.g., cooperative hunting, followed by sharing) require features that are not required to track and promote fitness in other domains (e.g., courtship, with competition for exclusive access to mates) . . . it is reasonable to predict as many domain-specific cognitive adaptations as there are domains in which the definitions of (evolutionarily) 'successful' behavioral outcomes are incommensurate."

After describing the foundations of the evolutionary approach, Cosmides, Guzmán, and Tooby describe its application to norms of incest and familial obligation. Inbreeding diminishes the reproductive fitness of families over time. Because of this, "natural selection will favor mutations that introduce motivational design features that cost-effectively reduce the probability of incest." Some primates have solved this problem by mixing populations, as animals of one sex (typically males) leave the troop to breed. "But for species like ours, in which close genetic relatives who are reproductively mature are commonly exposed to each other, an effective way of reducing incest is to make cues of genetic relatedness reduce sexual attraction." And this dynamic is not limited to sexual intercourse. Because our foraging ancestors typically lived with close kin throughout their lives, opportunities abounded for helping and hurting those related to them, where the fitness benefits of aiding or hindering kin often coincided with their degree of genetic relatedness.

Cosmides et al. sketch the kind of "kin selection" operative in these contexts in detail and argue that foragers needed some means for discerning the genetic relatedness of individuals in their tribes to settle on adaptive policies for social interaction. A "kin detection mechanism" evolved, consisting of some "monitoring circuitry" designed to register evidence of genetic relatedness and a "kinship estimator" that transformed these cues into an index of family ties. On the basis of studies conducted by Lieberman and others, the authors suggest that maternal perinatal association (MPA) was used to discern the genetic relatedness of mother and child but that younger children used the cumulative duration of coresidence to detect the relatedness of their older siblings.

As we saw in Nichols's Chapter 6, several authors have used cultural variability in incest norms as evidence that they're learned or enculturated. Cosmides et al. try to deepen this analysis by providing a principled basis for predicting the variation in question. Lieberman et al. report that the degree of disgustingness and moral wrongness that older siblings assign to incest with younger opposite-sex siblings is directly proportional to the "MPA cue" provided by maternal care for the sibling in question but that the judgments of disgustingness and wrongness that younger siblings assign to incest with older siblings are better correlated with duration of coresidence.
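A schematic rendering may help fix ideas. The following sketch is mine, not Lieberman's or the chapter authors'; the combination rule and constants are invented. It shows only the general shape of an architecture in which monitoring circuitry registers two cues and a kinship estimator converts them into a single index:

```python
# A schematic sketch of the proposed kin detection architecture.
# The combination rule and constants are invented for illustration;
# Lieberman and colleagues publish no formula of this form.

def kinship_index(observed_mpa: bool, coresidence_years: float) -> float:
    """Monitoring circuitry registers two cues; the kinship estimator
    converts them into a single index of relatedness in [0, 1]."""
    if observed_mpa:
        # Seeing one's own mother care for a newborn (the MPA cue) is
        # treated as near-conclusive evidence of siblinghood.
        return 1.0
    # Younger siblings, who cannot observe their own birth, fall back
    # on cumulative coresidence, saturating after ~15 years.
    return min(coresidence_years / 15.0, 1.0)

# The same index is hypothesized to regulate distinct outputs:
def sexual_aversion(index: float) -> float:
    return index  # stronger aversion (and condemnation) as the index rises

def willingness_to_help(index: float) -> float:
    return index  # more sibling-directed altruism as the index rises

print(kinship_index(True, 3.0))    # 1.0 (older sibling with the MPA cue)
print(kinship_index(False, 12.0))  # 0.8 (younger sibling, coresidence only)
```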
Lieberman et al. also report that MPA and coresidence do not predict judgments of disgustingness or wrongness of same-sex incest. Moreover, "The same cues that regulate moral intuitions about incest—MPA and coresidence duration—regulate how often people sacrifice to help their siblings (as measured by favors done in the last month) and their willingness to incur large costs (such as donating a kidney), whether they are true
biological siblings, stepsiblings, or unrelated children being raised together on a kibbutz." The concept of kin selection is then used to make general predictions about the way in which judgments of familial obligation will vary in accordance with genetic relatedness and other factors.

Chapter 9 concludes with an extensive "cook's tour" of the psychological mechanisms that have been proposed by evolutionary psychologists to account for the various ways in which humans cooperate with one another. When distantly related people cooperate, one party often enough augments the reproductive fitness of another at some cost to herself. Because the parties are not closely related, the psychology responsible for this behavior will not have its origins in kin selection. Famously, Darwin invoked non-kin group selection to account for patriotic sacrifice and the triumph of "civilized" moralities over those operative among "savages." Indeed, the power of group selection led Darwin to an optimistic assessment of the prospects for Christian ideals of universal brotherhood and their secular counterpart: the generalized benevolence preached by Utilitarians. In contrast, our authors argue that we do not need to appeal to selection among extrafamilial groups to explain why some people sacrifice for the benefit of relative strangers. Cooperation evolved between unrelated individuals when we developed a way of detecting cheaters and a way of distinguishing dedicated cheaters from those who have accidentally failed to keep their ends of mutually beneficial bargains. "Generosity in one-shot interactions evolves easily when natural selection shapes decision systems for regulating two-person reciprocity (exchange) under conditions of uncertainty." If the costs of an initial act of altruism were relatively low, the costs of missing out on reciprocation were fairly high, and the opportunities for reciprocation in hunter-gatherer societies were sufficiently great, cooperating by default with those who did not strike us as cheaters would have been adaptive and so favored by selection amongst individuals within a unitary population.

Our authors are similarly dismissive of the use of group selection to explain why people engage in costly acts of rule enforcement. Studies suggest that people mainly blame and punish those with whom they plan to interact in the future, and this suggests to Cosmides et al. that our investment of resources in punishment is similarly conditional in origin. The authors use "partner choice" of this kind to account for a wide variety of moral intuitions, including judgments of virtue and the respective roles we assign to effort and luck when evaluating the fairness of a distribution of goods or the character of a compatriot. In contrast, they argue that "partner control" is necessary to mitigate free riding in large societies. They conclude that each form of social interaction they have addressed is enabled by domain-specific modes of social conceptualization and dedicated modes of inference. Again, there are exactly as many "modules" of moral judgment as there were problems of cooperation to be surmounted by our foraging ancestors. These conclusions help Cosmides et al. explain why our moral intuitions are better preserved by particularist and pluralistic normative theories than by those frameworks derived from a "first principle" of morality.
Of course, as Hume would insist, the evolutionary psychologists' account of why we have the moral intuitions we do is neither a vindication of those intuitions nor a reason to abandon them. To address this frankly philosophical matter we must turn to Section II of this volume, where questions about the epistemological significance of what is now known about our moral psychologies are addressed in some detail.
Note
1. Note that in the third chapter of this volume, Vincent et al. cite Iyer et al. (2012), who add concern with liberty and oppression as a distinct category of normative cognition.
Sources Cited
de Waal, F. B. M. (2016). Are We Smart Enough to Know How Smart Animals Are? New York: W. W. Norton.
Iyer, R., Koleva, S., Graham, J., Ditto, P. and Haidt, J. (2012). "Understanding libertarian morality: The psychological dispositions of self-identified libertarians." PLoS ONE 7(8): e42366.
1
THE QUEST FOR THE BOUNDARIES OF MORALITY*
Stephen Stich
Alasdair MacIntyre begins his paper, "What Morality Is Not," with a claim that may strike many philosophers as very surprising indeed.

The central task to which contemporary moral philosophers have addressed themselves is that of listing the distinctive characteristics of moral utterances. (1957, 26)

MacIntyre is indulging in a bit of literary license here. The philosophers he has in mind were not just concerned with moral utterances; they were also concerned to give accounts of moral judgments, moral principles, moral norms and moral issues, and of what is required for a set of "action guiding"1 principles to be a moral code—a morality. With this caveat noted, MacIntyre was surely right. The philosophical literature in the late 1950s was chock-a-block with discussion of what is required for an utterance (or judgment, or principle, etc.) to count as moral. Much of this literature was inspired by R. M. Hare's enormously influential book, The Language of Morals (1952), and his article, "Universalizability" (1954–1955). Moreover, the outpouring of philosophical literature in this area continued long after MacIntyre's essay, with important articles appearing throughout the 1960s, '70s and '80s. The existence of this bountiful literature—which has largely disappeared from the philosophical curriculum over the last quarter century—raises a number of questions, including:

1. What were these philosophers trying to do?
2. Why did they want to do it? Why was it thought to be important?
3. How did they propose to discover the distinctive characteristics of moral judgments, principles and the rest? What sorts of evidence or argument did they rely on?
4. What characteristics were proposed; which were agreed on?
5. Why did contributions to this literature gradually diminish?
6. How is more recent work relevant to the project that these philosophers were pursuing?
I’ll try to answer the first five of these questions in part 1, and the sixth in part 2. In part 3, I’ll explain how this philosophical literature—a bit of it—was woven into the foundation of a psychological project that also sought to characterize the “distinctive characteristics” of moral judgments, rules and transgressions and that has had an important influence on contemporary empirical moral psychology. In part 4, a preliminary conclusion, I’ll review what we’ve done. Subsequent parts address more contemporary theories as I ask what lessons can be learned from the six decades of philosophical and psychological research we’ll be reviewing.
1. The Philosophers' Project (≈1952–≈1990): What Were These Philosophers Trying to Do?

To understand what these philosophers were trying to do, we must begin with a crucial distinction. Often, when we ask whether a person's judgment is moral, what we want to know is whether her moral judgment is true—or something in that vicinity: correct, or valid, or justified, or wise. What we are asking, to use Frankena's (1967) useful terminology, is whether the judgment is moral as opposed to immoral. It is hardly surprising that philosophers often want to know whether a judgment or a principle is moral (as opposed to immoral). Limning the contours of the moral (in this sense) has been a goal of philosophy since antiquity.2 But it is very important to keep in mind that this was not the goal of the writers engaged in what I'm calling "The Philosophers' Project." Rather, borrowing again from Frankena, what they were trying to do was to distinguish moral judgments, principles, etc. from nonmoral judgments or principles. So, for example, they wanted to know how to determine whether an action guiding rule that is widely accepted in a given culture is a moral rule or some other sort of rule—a religious rule, for example, or an aesthetic rule, or a prudential rule. Whether the rule is true, or valid, or justified, etc., was simply not their concern.

Similarly, confronted with the unfamiliar, largely egoistic action guiding rules described in John Ladd's (1957) detailed study of the Navajo, they wanted to know whether this system of rules was a morality. If it was not, then, arguably, the Navajo did not have a moral code at all, and thus having a moral code is not a human universal. Closer to home, these philosophers wanted to specify how to distinguish a moral rule from a rule of etiquette. Are the tacit rules specifying appropriate behavior for people waiting on line to board a bus or to buy a coffee at Starbucks moral rules or just rules of etiquette?3 How about rules specifying appropriate clothing to wear at important events, like funerals? They also wanted some principled way of determining which legal rules are also moral rules.
2. Why Did They Want to Do It? Why Was It Thought to Be Important?

The philosophers we are concerned with wanted to give an account of the conditions required for a judgment or a rule to be moral as opposed to nonmoral. Why? One reason, on which there was wide agreement, was that the account would enable us to give principled answers to the sorts of questions mentioned in the previous paragraph. It would, for example, tell us whether the Navajo, as described by Ladd, had a moral code.4 It would also tell us whether rules about how to behave while waiting on line are moral rules, whether a specified legal rule is also a moral rule, etc. Another, more controversial reason was that the account would be a specification
of the essence of morality. While a number of authors endorsed this view,5 others adamantly rejected it. According to Paul Taylor, "The importance of classifying moral principles . . . does not lie in the discovery of the essence of morality. (There is no such essence)" (1978, 52).

With the explosion of research in empirical moral psychology over the last two decades and philosophers' growing interest in the area, many new questions have been raised that seem to require the sort of account that philosophers engaged in the Philosophers' Project were seeking. One clear example can be found in Richard Joyce's influential book, The Evolution of Morality (2006). Joyce wants to provide an account of the evolution of the "moral sense," which he characterizes as "a faculty for making moral judgments" (44). But we can't undertake an inquiry into the evolution of the moral sense, Joyce maintains, without an account of what moral judgments are.

Any attempt to understand how our ability to make moral judgments evolved will not get far if we lack a secure understanding of what a moral judgment is. (To neglect this would be like writing a book called The Origin of Virtue without any substantial discussion of what virtue is). (44)

He goes on to offer his own chapter-length account of "the nature of morality," which includes a detailed attempt to answer the question, "What is a moral judgment?"6

Another example that has garnered a great deal of attention grows out of some provocative and problematic claims by Jonathan Haidt. About a decade ago, Haidt, who has been one of the most influential moral psychologists in recent years, accused his fellow moral psychologists of politically motivated bias. Here is a quote that nicely summarizes Haidt's critique.

[S]tudents of morality are often biased by their own moral commitments. . . . One problem is that the psychological study of morality, like psychology itself, has been dominated by politically liberal researchers (which includes us). The lack of moral and political diversity among researchers has led to an inappropriate narrowing of the moral domain to issues of harm/care and fairness/reciprocity/justice. . . . Morality in most cultures (and for social conservatives in Western cultures), is in fact much broader, including issues of in-group/loyalty, authority/respect, and purity/sanctity. . . . This article is about how morality might be partially innate. . . . We begin by arguing for a broader conception of morality and suggesting that most of the discussion of innateness to date has not been about morality per se; it has been about whether the psychology of harm and fairness is innate. (Haidt & Joseph, 2007, 367)

To make their case for a broader conception of morality, Haidt and Joseph offer a brief overview of norms that prevail in other cultures. These norms include "rules about clothing, gender roles, food, and forms of address" and a host of other matters as well (371). They emphasize that people in these cultures care deeply about whether or not others follow these rules. But this is a puzzling way to defend their accusation. For surely Haidt
and Joseph don't think that the "politically liberal researchers" responsible for the "inappropriate narrowing" of the moral domain are unaware that rules governing these matters are widespread in other cultures. They don't think that these liberal researchers don't read the newspaper or that they are anthropological ignoramuses. The issue in dispute is not whether rules like these exist or whether people care deeply about them. What is in dispute is whether these rules are moral rules. To resolve that dispute, we need an account of what it is for a rule to be a moral rule.

In recent years, the philosophical literature has been awash in claims about the semantics of moral judgments (Boyd, 1988; Horgan & Timmons, 1992; Schroeder, 2008), the function of moral judgments (Roskies, 2003; Prinz, 2015), the evolutionary history of moral judgments (Joyce, 2006; Kitcher, 2011) and the psychological mechanisms underlying moral judgments (Nichols, 2004a; Prinz, 2007). In order to evaluate these claims, we need to know which normative judgments they apply to—which ones are moral judgments. And that is exactly what the Philosophers' Project is trying to provide.
3. How Did They Propose to Discover the Distinctive Characteristics of Moral Judgments?

Most of the philosophers who participated in the debate over the definition of morality during the last half of the twentieth century agreed that an analysis of ordinary linguistic usage had an important role to play in discovering and defending an appropriate definition. If a proposed definition classified as moral a judgment that we would not ordinarily describe as a moral judgment—or if it classified as nonmoral a judgment that we would ordinarily describe as moral—that was a consideration that counted against the definition. For Hare and some of the other leading figures in the debate, these sorts of linguistic considerations were the only source of evidence relevant to evaluating a definition, since the goal of the exercise was to capture the concept of moral judgment underlying ordinary usage. However, other central figures in the debate urged that this sort of descriptive conceptual analysis is one of two quite different goals that a philosopher might have when attempting to defend a definition of morality. The other goal is conceptual revision—characterizing a new concept of morality that will be better suited to playing a role in philosophical theory construction. William Frankena drew the distinction between these two projects very clearly and argued that conceptual revision is both legitimate and important.

[O]ur question and our answer to it may take two forms. For when we ask what morality is or what is to be regarded as built into the concept of morality, we may be asking what our ordinary concept of it is or entails, what we actually mean by "moral" and "morality" in their relevant uses, or what the prevailing rules are for the use of these terms. . . . However, when one asks what morality is or how it is to be conceived, one may be interested, not so much in our actual concept or linguistic rules, as in proposing a way of conceiving it or a set of rules for talking about it, not so much in what our concept and uses are, as in what they should be. If the questions are taken in the first way, the discussion will be a descriptive-elucidatory one, and the arguments pro and con will have a corresponding character; if they are taken in the second sense, the inquiry will be normative, and the arguments will have a different character, though, of
course, one may still take the fact that we actually think and talk in a certain way as an argument for continuing to do so. Now, most recent philosophers who have dealt with our topic have been shy about making proposals of a normative sort. . . . Though some of them do at least favor one way of speaking against another, they tend to try to rest wholly on the basis of actual use and its rules. Indeed, they have tended to think that philosophers as such should not venture to propose revisions of our moral concepts, since to do so is to make a normative or value judgment, . . . and the business of philosophy is or should be (a normative judgment!) "analysis" or "logic." . . . But if one may or must be normative at all, then in principle there is no reason why one may not be revisionary, especially if one finds difficulties and puzzles in our ordinary manners of thought and expression. In what follows, at any rate, I shall take it to be appropriate for a philosopher to ask whether something should be built into our concept of morality, even if it is not. . . . I shall take our problem to be primarily a normative rather than a descriptive-elucidatory one. (Frankena, 1967, 149–150)

In an earlier paper, Frankena offers a memorable summary of this approach:

Defining terms like "moral judgment" may be part of an attempt to understand, rethink, and possibly even to revise the whole institution which we call morality, just as defining "scientific judgment" may be part of an attempt to do this for science. (1958, 45)

As Frankena notes, he is not alone in viewing the project of defining "moral" and "morality" as primarily revisionary and normative. Von Wright (1963, 4–5) had adopted a similar view, and in later years Cooper (1970, 93), Rawls (1971, §23) and Paul Taylor (1978) did so as well.
4. What Characteristics Were Proposed; Which Were Agreed on?

Since some of the philosophers engaged in the debate over the definition of morality adopted a "descriptive-elucidatory" approach while others viewed the project as revisionary and normative, it is hardly surprising that no consensus was reached on how "moral rule," "moral judgment" and the rest should be defined. There is a long list of features that were argued to be necessary conditions. Perhaps the most widely discussed of these was Hare's proposal that moral rules must be "universalizable." As Hare unpacked the notion, it required that there be no names or definite descriptions in moral rules, only predicates. While the predicates could have a very restricted extension—"people who have four left-handed grandparents" would be fine—the rule applies to everyone to whom the predicate applies, no matter where they might be or when they might live. Another widely discussed proposal, also due to Hare, was that moral judgments are "prescriptive." What this means is that

the action-guiding force [of moral rules] derives from the fact that they entail imperatives: my acceptance of the principle "One ought to do X" commits me to accepting
the imperative "Let me do X"; and my acceptance of the imperative commits me in turn to doing X in the appropriate circumstances. (Wallace & Walker, 1970, 9)

A third proposal was that if an action guiding principle is a moral principle for a person, then she must regard it as "overriding or supremely important" (Frankena, 1967, 155). Moral norms "outweigh, as grounds of reasons-for-action, all other kinds of norms. In cases of conflict between moral and nonmoral principles, the former are necessarily overriding" (Taylor, 1978, 44; for a similar proposal, see Cooper, 1970, 95). A related idea is that moral judgments are "categorical." According to Gewirth (1978, 24), "Judgments of moral obligation are categorical in that what persons ought to do sets requirements for them that they cannot rightly evade by consulting their own self-interested desires or variable opinions, ideals, or institutional practices."

Another frequently discussed necessary condition was that moral rules are behavior guiding rules whose violation is met with social sanctions, "the reproach of one's neighbors" (Cooper, 1966, 73) or something more serious, like ostracism (Sprigge, 1964, 129 ff). This was sometimes paired with the idea that moral transgressions are followed by the transgressor sanctioning himself with feelings of guilt or shame or disliking himself (Wallace & Walker, 1970, 14; Sprigge, 1964, 130). Yet another proposed necessary condition was that if two people share the same factual beliefs then their moral judgments will be the same. So if people who share their factual beliefs continue to disagree, then at least one of them is not really expressing a moral judgment (Frankena, 1963, 5–6).

All of these proposals were "formal" in the sense that they did not impose any constraints on the contents of moral rules or moral judgments. And this is far from a complete list of the formal conditions that were proposed; there were many more.7 There was no shortage of critics for these formal conditions. Wittgensteinians, who maintained that "moral" was a family resemblance term, denied that there are any strictly necessary conditions for the application of the term. MacIntyre (1957), inspired by Sartre, argued that many moral judgments were neither universalizable nor (in Hare's sense) prescriptive. Sprigge (1964) offered a quite different argument against universalizability. And so it went. I think it is fair to say that nothing on this list of proposed formal conditions achieved anything even close to consensus during the three decades during which the Philosophers' Project was most active.

Even more controversial was the question of whether more substantive social requirements should be built into the definition of morality. For example, Frankena urged that a necessary condition for a set of rules being a morality should be that it includes

judgments, rules, principles, ideals, etc., which [(i)] concern the relations of one individual . . . to others [and (ii)] involve[s] or call[s] for a consideration of the effects of his actions on others (not necessarily all others), not from the point of view of his own interests or aesthetic enjoyments, but from their own point of view. (Frankena, 1967, 156)

This condition allows in a wide variety of deontological and utilitarian moralities, but "it rules out as non-moral . . . such [action guiding systems] as pure egoism or prudentialism,
pure aestheticism, and pure religion" (157). It does not rule out "Nazi ethics," which requires an individual to consider the effects of his actions on fellow Germans, but on some readings of Nietzsche, on which the proposed action guiding rules are purely egoistic or aesthetic, the condition entails that Nietzsche is not proposing a morality at all. Baier (1958, 199 ff) proposed a similar but stronger condition on which moral rules "must be for the good of everyone alike." Earlier, Toulmin (1950) had argued that a concern for the harmony of society is part of the meaning of "moral." On these substantive principles, too, it is clear that no agreement was reached.
5. Why Did Contributions to this Literature Gradually Diminish?

According to General Douglas MacArthur, "Old soldiers never die, they just fade away." Much the same could be said for many philosophical debates. During the last decade of the twentieth century, discussion of the definition of morality gradually faded from the philosophical literature.8 The reason for this was certainly not that the problem of defining morality had been solved or that agreement had been reached. Nor was it the case that the importance of the issue had declined. Quite the opposite, as we saw in §2. Rather, I suspect, it was because most of the main options had been pretty thoroughly explored and promising new ideas and arguments were hard to come by. Moral philosophers turned their attention to newer issues. Perhaps the waning of the positivist-inspired prohibition against philosophers making "value judgments" also played a role. Whatever the reason, debates over the definition of morality no longer loomed large in leading journals.

However, as philosophical discussion of the definition of morality wound down, the topic was moving to center stage in empirical moral psychology. That will be our topic in subsequent sections. But before getting to that, I want to briefly discuss a more recent challenge to the Philosophers' Project.
6. Some Recent Work Relevant to the Philosophers' Project

Those engaged in the Philosophers' Project were trying to provide an analysis of concepts like moral judgment and moral rule, and it is clear that for most of these philosophers, the analysis they sought would provide necessary and sufficient conditions.9 Moreover, those who took the project to be "descriptive-elucidatory" rather than normative wanted their account to capture the concept we actually use. That project did not meet with much success. As Jerry Fodor has famously noted, such projects rarely do (Fodor, 1981, 283). However, it might be thought that the failure of the Philosophers' Project could be traced to the quest for an analysis providing necessary and sufficient conditions. The view that most concepts can be analyzed in this way has become known as the classical theory of concepts, and both empirical and philosophical work on concepts over the last four decades has made a convincing case that the classical theory of concepts is false for most ordinary concepts (Smith & Medin, 1981; Laurence & Margolis, 1999). There are, however, a variety of other ways of analyzing concepts utilizing prototypes, exemplars, commonsense theories or other approaches (Machery, 2009). So perhaps the descriptive-elucidatory project could be successfully revived by dropping the demand for necessary and sufficient conditions and adopting one of these alternative approaches to conceptual analysis.
However, recent work in moral psychology and experimental philosophy poses a challenge to this hopeful thought. It raises another, less tractable, problem for the descriptive-elucidatory project. Inspired by the work of cultural psychologists, experimental philosophers have been exploring the possibility that philosophical intuitions—spontaneous judgments about whether a familiar term applies to a real or hypothetical case—may vary in different demographic groups.10 It is widely assumed that concepts play a central role in generating philosophical intuitions (Goldman, 2007). So if intuitions vary across demographic groups—and there is a growing body of evidence that they do—then philosophically important concepts may also vary in different demographic groups.

In a recent study, Levine et al. (under review) explored whether there were demographic differences in people's concept of a moral judgment. They asked American participants of different religious faiths—Mormon, Muslim, Hindu, Jewish and secular—to judge whether a long list of normative judgments were moral judgments or some other kind of judgment, and they found striking differences between these five groups. On the basis of this work, the authors suggest that there are important differences in how the adherents of different religions conceive of morality. Using very different methods, Buchtel et al. (2015) have shown that Chinese and Westerners classify different transgressions as moral, and Wright et al. (2013) have shown that there is considerable variation when American college students are asked whether an issue is a moral issue.11

If, as this work suggests, different people and different groups have different concepts of morality, then the goal of the descriptive-elucidatory project is underspecified in an important way. That goal, as we've seen, is to capture "our" concept of morality, the concept of morality that "we" actually use. But who are "we"—secular people, Jews, Mormons, Muslims or Hindus? Chinese or Westerners? And however this question is answered, why is our concept of morality more important than the concept employed by other groups? Why is it that our concept provides the answer to the philosophical questions posed in parts 1 and 2 of this chapter? I have no idea how to answer these questions. Without convincing answers, the descriptive-elucidatory project, when no longer committed to the classical theory of concepts, may be a fascinating exercise in cognitive anthropology, but it is hard to see why it is of any philosophical interest.
7. The Psychologists' Project (≈1970–the present): Turiel's Account of Moral Judgment

The psychologists' project that will be center stage in this section grows out of the work of Elliot Turiel and his colleagues. Turiel was a student of Lawrence Kohlberg, whose work on moral reasoning and moral development was widely discussed and enormously influential in the 1970s, '80s and '90s. Following Piaget, Kohlberg held that moral reasoning emerged in stages. For young children, according to Kohlberg, morality is largely a matter of obedience and punishment. Children judge that certain behaviors are wrong because they know those behaviors are likely to be punished, and their understanding of wrongness is, near enough, exhausted by the idea of punishment: wrong behavior just is behavior that is typically punished.12 Turiel, by contrast, was convinced that moral cognition is distinct from other sorts of cognition and that it emerges quite early in development. In order to make the case for this claim, he had to show that children could make characteristically
moral judgments. And to do that Turiel needed a test that would indicate when an experimental participant—child or adult—was making a moral judgment. It was at this point that the Philosophers' Project played a crucial role in the development of the Psychologists' Project, as Turiel turned to the philosophical literature for a characterization of moral judgments.

Several of the necessary conditions that philosophers had proposed were endorsed by Turiel and incorporated into his own account of moral judgments. One of these was universalizability. "Moral prescriptions," he tells us, "are universally applicable in that they apply to everyone in similar circumstances" (Turiel, 1983, 36; italics in the original). So if a young participant in an experiment judges that it is wrong for a child in her own school to push someone off a swing, and if that judgment is a moral judgment, we would expect the participant to say that it is also wrong for a child in another school to push someone off a swing.

A second feature discussed in the philosophical literature that was adopted by Turiel was the categoricalness of moral judgments. He quotes with approval the passage from Gewirth that I quoted in §4:

Judgments of moral obligation are categorical in that what persons ought to do sets requirements for them that they cannot rightly evade by consulting their own self-interested desires or variable opinions, ideals, or institutional practices. (Gewirth, 1978, 24, quoted in Turiel, 1983, 35)

Since institutional practices cannot alter moral obligations, we should expect that if an experimental participant has judged that it is wrong to push someone off a swing and that judgment is a moral judgment, then the participant would judge that it would be wrong in another school where there was no rule against pushing people off a swing, and it would be wrong even if the principal in her own school said that there was no rule against it. In the jargon that has developed in the literature growing out of Turiel's work, these questions are said to probe for "authority independence." The test that Turiel proposed to determine whether a judgment is a moral judgment includes one or more questions assessing whether the participant takes her judgment to be universalizable and one or more questions assessing whether she takes her judgment to be authority independent.

Both universalizability and categoricalness are "formal"—they do not impose any constraints on the content of moral rules or moral judgments. But Turiel also held that there are substantive features that all moral judgments share. They all, he maintained, deal with issues linked to harm, justice or rights. Thus if an experimental participant has made a genuinely moral judgment and is asked to explain why the behavior in question is wrong, she will typically appeal to the harm that has been done or to injustice or the violation of someone's rights. In building substantive features into his characterization of moral judgments, Turiel is siding with Toulmin, Frankena, Baier and others who argued against a purely formal characterization of morality, though there is no indication that Turiel was aware of the debate between the formalists and their philosophical critics. Moreover, Turiel's choice of substantive features—those linked to harm, justice and rights—was quite different from those proposed by the philosophical anti-formalists and was motivated by his account of how children acquire moral rules.
With these three putative features of moral judgments in hand, Turiel proceeded to construct an empirical test to determine whether an experimental participant's judgment
about a transgression is a moral judgment. The test typically begins with a brief vignette describing a hypothetical transgression. Since Turiel was interested in determining whether young children made moral judgments, the transgressions almost always involve events that would be familiar to kids. The participant is then asked a series of questions aimed at determining whether she thinks the action described is wrong, whether she thinks the wrongness of the action in the vignette is "authority independent," and whether the participant would universalize the judgment, making the same judgment if the transgression occurred at another place or time. These questions can be asked in a variety of ways depending on the age of the participant and the goals of the study. The participant is also asked to explain why the transgression is wrong, and responses are assessed to determine whether the participant invokes harm, justice or rights or whether she invokes other considerations (including custom, tradition, appeal to authority, disrupting social coordination or the likelihood of punishment) that, Turiel maintains, are the sorts of justifications that are to be expected for "conventional" transgressions (Turiel, 1983, 67).

This experimental paradigm, in which a transgression is described and participants are asked questions to determine (i) whether they think it is wrong, (ii) how they would justify that judgment, (iii) whether their judgment is authority independent and (iv) whether they universalize the judgment, is frequently referred to as the moral/conventional task. Another question often asked along with the four listed here is aimed at determining how serious the participant thinks the transgression is. Of course some moral transgressions are more serious than others, and some conventional transgressions are more serious than others. But since a number of philosophers have proposed that moral considerations are "overriding," one might think that moral transgressions should always be considered more serious than conventional transgressions. Turiel and his followers reject this idea (Tisak & Turiel, 1988, 356) and report a number of studies in which participants judge that egregious conventional transgressions, like a boy wearing a dress to school, are more serious than minor moral transgressions like stealing an eraser (Turiel, 1983, 71). Thus, as Smetana notes, "the severity of the transgression is not considered to be a formal criterion for distinguishing moral and conventional rules and transgressions" (1993, 117).

Before proceeding, let me introduce a bit of jargon (mine, not Turiel's) that will prove useful. The pattern of responses in the moral/conventional task that Turiel takes to be characteristic of a moral judgment is universalizability (U), authority independence (I) and justification by appeal to harm, justice or rights (H). I will call this the UIH response pattern. Turiel takes the opposite pattern—not universalizable, not authority independent and not justified by appeal to harm, justice or rights—to be characteristic of conventional normative judgments. I'll call that the ~U~I~H response pattern. By using the moral/conventional task with youngsters, Turiel and his collaborators were able to show that they typically gave the UIH response pattern to vignettes describing what they thought adults would consider moral transgressions, and the ~U~I~H response pattern to vignettes describing what they thought adults would describe as conventional transgressions.
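For readers who find the shorthand easier to grasp in schematic form, here is a small sketch (in Python, with invented field names; the mapping, not the code, is Stich's) of how responses on the moral/conventional task line up with these labels:

```python
# Hypothetical scoring of moral/conventional task responses in the UIH
# shorthand; the field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class TaskResponse:
    universalizes: bool              # wrong at other places and times too?
    authority_independent: bool      # wrong even if an authority permits it?
    cites_harm_justice_rights: bool  # justified by harm, justice, or rights?

def response_pattern(r: TaskResponse) -> str:
    """Render a response as a pattern like 'UIH' or '~U~I~H'."""
    return "".join(
        label if present else "~" + label
        for present, label in [
            (r.universalizes, "U"),
            (r.authority_independent, "I"),
            (r.cites_harm_justice_rights, "H"),
        ]
    )

print(response_pattern(TaskResponse(True, True, True)))     # UIH
print(response_pattern(TaskResponse(False, False, False)))  # ~U~I~H
```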
Turiel concluded that children can indeed make moral judgments at an age when Kohlberg's theory predicted that they were only capable of conceptualizing morality in terms of punishment. More importantly, he concluded that young children have a basic grasp of the distinction between moral and conventional rules and transgressions.
8. A Critique of Turiel's Account of Moral Judgment, and a Response

Against the backdrop of the philosophical literature discussed in §1, one might well think that there is something seriously wrong with all this. Philosophers spent decades debating how "moral judgment," "moral rule" and the rest should be defined without reaching any widely accepted conclusion. Turiel offered no additional evidence about the ordinary usage of these terms; he contributed nothing to the "descriptive-elucidatory" project of analyzing our ordinary concept of moral judgment. Nor did he offer any normative argument aimed at showing how our ordinary concept should be revised. Rather, it seems, he simply stipulated that moral judgments are universalizable, authority independent and justified by appeal to harm, justice or rights, and that the UIH response pattern can be used to identify moral judgments. But if the term "moral judgment" is supposed to have its ordinary meaning, then one can't just stipulate how moral judgments are to be identified. If, on the other hand, Turiel proposes to use "moral judgment" as a technical term, he is free to make whatever stipulations he wishes about how moral judgments (in the technical sense) are to be identified. However, if "moral judgment" is a technical term, then one cannot infer that moral judgments (in this technical sense) have anything to do with moral judgments as the term is usually used. So showing that children make moral judgments (in the technical sense of judgments exhibiting the UIH pattern) tells us exactly nothing about whether they make moral judgments in the ordinary sense. And, of course, the same is true of adults. Without some further argument, one cannot infer from the fact that an adult's judgment exhibits the UIH pattern to the conclusion that the adult has made a moral judgment.13

All of this, I think, is exactly right. But there is another way of construing Turiel's project—and much of the literature that it generated—that avoids these problems.14 In a seminal paper published 40 years ago, Hilary Putnam (1975) famously argued that in many cases, "meanings just ain't in the head." When the term in question is a natural kind term, like "water" or "fish" or "gold," Putnam urged, it is the job of empirical science to determine the essential features of the natural kind, and these essential features constitute the correct definition of the kind. Other philosophers, notably Devitt (1996) and Kornblith (1998), have provided insightful accounts of how this process works. Very roughly, their story goes like this. To begin, the scientist focuses on intuitively prototypical examples of the kind in question. She then looks for properties that are shared by most of these prototypical examples. If she finds a cluster of properties that are present in most prototypical examples and absent in most things that, intuitively, are not members of the kind, she hypothesizes that that cluster of properties comprises the essential features of the kind. It is a reasonable hypothesis that the ordinary term "moral judgment" is a natural kind term, picking out a psychological natural kind. If so, it is the job of science—psychology in this case—to determine the essential features of the natural kind.
One way to do this would be for psychologists to discover a cluster of nomologically linked properties that are shared by many (but perhaps not all) cases of what they would intuitively take to be prototypical moral judgments and that are missing in many (but perhaps not all) cases of what they would intuitively take not to be a moral judgment.15

With this by way of background, let's return to Turiel. In his book-length exposition of his research program, Turiel tells us that the "strategy of the research" he reviews in several chapters was to present subjects with "prototypical examples" of moral and conventional
transgressions as a means of investigating whether UIH judgments16 are evoked by moral transgressions and ~U~I~H judgments are evoked by conventional transgressions (Turiel, 1983, 55). His claim that UIH judgments are moral judgments can be interpreted as a hypothesis about the essential features of moral judgments. If the hypothesis is true, we would expect that the three components of UIH judgments are nomologically linked—they typically occur together. We would also expect that many UIH judgments are intuitively classified as prototypical moral judgments, and many ~U~I~H judgments are intuitively classified as prototypical conventional judgments. To make a persuasive case for that hypothesis, we would need lots of experiments, using a wide range of prototypical transgressions and many different participant populations.

Over the years Turiel and his colleagues have conducted moral/conventional task experiments on many different groups of experimental participants. Findings supporting the hypothesis that the UIH response pattern is a nomological cluster, and thus that the UIH pattern captures the essence of moral judgments, have been found in participants ranging in age from toddlers to adults (Nucci & Turiel, 1978; Smetana, 1981; Nucci & Nucci, 1982), in participants of a number of different nationalities and religions (Nucci et al., 1983; Hollos et al., 1986; Yau & Smetana, 2003; for reviews, see Smetana, 1993; Tisak, 1995; Nucci, 2001) and in children with a variety of developmental disorders, including autism (Blair, 1996; Blair et al., 2001; Nucci & Herman, 1982; Smetana et al., 1984; Smetana et al., 1999). In response to this impressive body of evidence, many psychologists and a growing number of philosophers have accepted the moral/conventional task as a reliable way of identifying moral judgments.17
9. The Case against the Hypothesis that Moral Judgments Are a Natural Kind Evoking the UIH Response

While there are many studies that can be interpreted as supporting the hypothesis that moral judgments are a natural kind evoking the UIH response, the evidence for the claim that the components of the UIH package form a nomological cluster is far from uniform. Early studies indicating that UIH components do not always occur together focused on transgressions that do not involve harm (or justice or rights). Nisan (1987) used the moral/conventional task in a study that included children in traditional Arab villages in Israel. Among the transgressions that Nisan used were mixed-sex bathing and addressing a teacher by his first name—behaviors in which no one is harmed. He found that these children considered those transgressions to be universalizable (U) and authority independent (I). So, contrary to the hypothesis that the UIH package is a nomological cluster, in this study, U and I are not linked to H.

In another study, Nucci and Turiel (1993) found that Orthodox Jewish children in the USA judged a number of religious rules to be authority independent (I) even though they did not involve harm (or justice or rights). So in this study, I and H are not linked, contrary to the nomological cluster hypothesis. And in what is surely the most famous and most memorable study aimed at showing that the UIH cluster comes apart, Jonathan Haidt and colleagues used transgressions like washing the toilet bowl with the national flag and masturbating with a dead chicken (Haidt et al., 1993). Though Haidt's participants agreed that none of these behaviors were harmful, his low socioeconomic status participants in Brazil and in the USA nonetheless said that the behaviors were wrong and indicated that their judgment was universalizable (U) and authority
Stephen Stich
Though Haidt's participants agreed that none of these behaviors were harmful, his low socioeconomic status participants in Brazil and in the USA nonetheless said that the behaviors were wrong and indicated that their judgment was universalizable (U) and authority independent (I)—again U and I without H. In another important study, Nichols (2002) used examples of disgusting but harmless etiquette transgressions. He found that American children judged them to be universalizable (U) and authority independent (I)—still another example of U and I without H. Moreover, in the same study, Nichols found that American college students judged these etiquette transgressions to be authority independent though not universalizable. So with these participants, I has become detached from both U and H. Taken together, these studies pose a serious challenge to the claim that the elements of the UIH package form a nomological cluster. All of the studies mentioned in the previous paragraph used transgressions that did not involve harm but nonetheless evoked other elements of the UIH package. In a 2007 study, Kelly et al. set out to explore participants' reactions to transgressions that do involve harm. There had, of course, been many studies by Turiel and his followers in which a harmful transgression was linked to U and I. But Kelly and colleagues noted that in almost all of these studies the harmful transgressions were restricted to the sorts of behaviors that young children might encounter. This was true even of a study in which the participants included incarcerated psychopathic murderers (Blair, 1995). So Kelly and colleagues decided to focus on transgressions that are not encountered in the schoolyard, including slavery, serious corporal punishment (whipping a sailor who was drunk on duty) and physically abusing military trainees. They found that many participants judged that these sorts of transgressions were not authority independent. According to these participants, it is OK to physically abuse military trainees if it is not prohibited by the authorities, but it is not OK if it is prohibited. Kelly and colleagues also found that the judgments of many participants do not generalize over time and space. Whipping a drunken sailor is not acceptable now but was acceptable 300 years ago. Slavery is not acceptable now but was acceptable in ancient Greece and Rome. So in this study, too, the UIH package comes unstuck. We find H without U or I. The Kelly et al. study was motivated by the observation that previous moral/conventional task studies had not used a wide range of harmful transgressions; they were almost all of the "schoolyard" variety. Another, more recent, study, which also used "grown-up" transgressions, was undertaken because previous studies, though they included a number of different demographic groups, had all focused on participants in large-scale, relatively modern societies (Fessler et al., 2015). Fessler and colleagues decided to explore what would happen to the UIH package if grown-up transgressions were used in small-scale societies. Using transgressions like stealing, wife battery, marketplace cheating, defamation, perjury and rape, they collected data in five small-scale societies and two large-scale modern societies. They found that participants in all seven societies viewed the described actions as less bad when they occurred long ago and when they occurred far away, again challenging the claim that there is a nomological link between H and U. Endorsement by an authority figure had this effect in four of the seven societies, with the remaining three showing nonsignificant trends in the direction of reduced severity—another challenge to the nomological link between H and I.
So we now have evidence that Turiel's putative nomological cluster comes apart with grown-up transgressions in a number of societies, including small-scale societies. The lesson that I am inclined to draw from the studies discussed in the last three paragraphs is that the UIH pattern is not a nomological cluster and thus that the elements of that cluster are not the essential features of a natural kind. If that's right, then they can't be used to construct an empirically supported definition of morality.
One way in which this conclusion might be challenged is to critique the methods or analyses of the studies cited. This has been done by a number of authors, and lively debates have ensued. My own view is that the critics have not been very successful. But I am hardly an impartial observer, so I'd encourage you to make your own assessment.18
10. Another Natural Kind Account of Moral Judgment
Another reaction to the studies reviewed in the previous section would be to offer an empirically informed alternative to the UIH cluster—a different account of the essential features of moral judgments. That's the strategy adopted by Kumar (2015). The first step in Kumar's proposed revision is to urge that the third element in Turiel's cluster, the requirement that moral judgments be justified by appeal to harm (or justice or rights), should be abandoned. His argument for this move seems to turn on intuition, or on how things "seem":

[F]olk theories about how moral claims are justified do not seem to be part of the concept of morality. . . . [I]t would seem that many people gain a facility with moral concepts before they have any theory about what grounds them. Justificatory grounds, whatever role they may play in marking important boundaries in moral philosophy, are not internal to the ordinary concept of morality. (Kumar, 2015, §3)

I confess that I do not have intuitions on such rarefied matters as what is internal to the ordinary concept of morality. But there is no need to dispute these claims since, as Kumar makes clear, he is offering an alternative to Turiel's hypothesis about the essential features of moral judgment, and he is free to include, or exclude, whatever features he wishes. The crucial question is whether the set of features he proposes actually do form a nomological cluster. Dropping the requirement that moral judgments must be justified by harm is certainly a strategically wise move for Kumar, for it enables him to ignore some of the best-known and most persuasive critiques of Turiel. The fact that Haidt's low SES participants judge that transgressions not involving harm are authority independent and universalizable is not a problem for Kumar, since his theory—which he calls "MCT"—does not predict that U and I will be nomologically linked to harm. The second step in Kumar's revision is to add a feature that does not occur in Turiel's account. Over the last decade, there has been growing interest in the question of whether ordinary folk are moral objectivists or moral relativists. To explore the issue, a number of investigators have presented participants with moral claims like "Consciously discriminating against someone on the basis of race is morally wrong," along with factual claims like "Homo sapiens evolved from more primitive primate species" and conventional claims like "Wearing pajamas and bathrobe to a seminar meeting is wrong behavior." After determining that a participant agrees with the statement, the participant is told about someone who disagrees, and asked to choose among the following options:
1. The other person is surely mistaken.
2. It is possible that neither you nor the other person is mistaken.
3. It could be that you are mistaken, and the other person is correct.
If the participant selects (1) or (3), it is taken to be evidence that the participant is an objectivist about the claim. Selecting (2) is taken to be evidence that the participant is a relativist.19
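To make the paradigm concrete, here is a minimal sketch in Python of the coding convention just described. The responses and the tallying function are hypothetical illustrations, not code from any of the studies cited; real studies use many claims, many participants and additional measures.

```python
# Minimal sketch of the response coding just described (responses are
# hypothetical; real studies involve many claims and participants).

from collections import Counter

def code_response(option: int) -> str:
    """Options (1) and (3) are coded as objectivist; option (2) as relativist."""
    if option in (1, 3):
        return "objectivist"
    if option == 2:
        return "relativist"
    raise ValueError("option must be 1, 2, or 3")

# Hypothetical choices by six participants for one moral claim:
responses = [1, 2, 1, 3, 2, 1]
print(Counter(code_response(r) for r in responses))
# -> Counter({'objectivist': 4, 'relativist': 2})
```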
In the earliest studies (Nichols, 2004b; Goodwin & Darley, 2008), participants' responses in what the investigators took to be prototypical moral cases were usually similar to their responses in the scientific cases—they thought that one of the disputants must be wrong. Based on these findings, Kumar hypothesizes that objectivity is a feature of the nomological cluster that defines the concept MORAL. A third step in Kumar's revision is to upgrade seriousness to an essential feature of moral judgments. To justify the move, he says that "research suggests that morality is unlike convention in that morality is serious" (Kumar, 2015, §2) and cites several studies in the Turiel tradition in which a seriousness question was included in a moral/conventional task experiment. His review of the moral/conventional task literature also leads him to endorse the claims that universality (being "general") and authority independence are features of moral judgments. The upshot of all this is summarized in the following passage:

We are now in a position to say what defines MORAL. A moral wrong, for instance, is a wrong that is
(1) serious
(2) general
(3) authority-independent
(4) objective
[T]he four features that define MORAL are stable and mutually reinforcing. Moral judgment, like other natural kinds, is a homeostatic property cluster. . . . The human cognitive system is organized in such a way that the four features have a nomological tendency to cluster together. (Kumar, 2015, §4)

For three quite different reasons, I find Kumar's hypothesis unconvincing. First, the literature on folk moral objectivism is much more contested than Kumar suggests. Since the Nichols (2004b) and Goodwin and Darley (2008) papers were published, there have been a number of studies suggesting that the folk are not moral objectivists (Sarkissian et al., 2011), or that they are objectivists on some moral issues and not on others (Goodwin & Darley, 2012), and that participants' responses in experiments like these are influenced by a wide range of factors, including the age of the participant, how the person who disagrees is described, how controversial the issue is and whether the moral claim in question is about a bad action or a good one (Wright et al., 2013; Beebe, 2014, 2015; Beebe & Sackris, 2016). The second reason is that the format of the studies cited in the previous paragraph offers no evidence that objectivity forms a nomological cluster with other items on Kumar's list. In these studies participants are presented with a sentence like "Consciously discriminating against someone on the basis of race is morally wrong" and asked questions designed to determine whether they are objectivists about those statements. Participants are not asked anything about seriousness, generalizability or authority independence. So Kumar is simply speculating that, if they had been asked, participants would judge that the transgression described is serious, generalizable and authority independent.20 Of course, Kumar's speculation might be true. But at this point there is no evidence at all that it is. To make a serious case that Kumar's four features form a nomological cluster, we would need studies that test participants on all four features, and at this writing there are no such studies. Finally, Kumar has misinterpreted the role that seriousness plays in the Turiel tradition.21 As noted in §7, Turiel and his followers do not take seriousness to be "a formal criterion for distinguishing moral and conventional rules and transgressions." This was clearly a wise move on their part. For while there are a number of studies in which participants judge that the schoolyard moral transgressions used are more serious than the conventional, there are also studies in which conventional transgressions are judged to be more serious than moral transgressions. Moreover, when one reflects on the vast range of possible non-schoolyard transgressions, the claim that moral transgressions are more serious than conventional transgressions is singularly implausible. Though no one has done the experiment, I would be willing to bet that most people would judge that showing up naked at your grandmother's funeral is more serious than stealing an eraser! Kumar tells us that his "moral/conventional pattern is not supposed to be exceptionless," and that his MCT only claims that "the features usually cluster together" (Kumar, 2015, §5). But it is hard to come up with a sensible interpretation of what "usually" could mean here. If the claim is that most actual moral transgressions are more serious than most actual conventional transgressions, then we have no evidence that would support the claim, and we never will, since most transgressions of both sorts have never been recorded. If the claim is that most possible moral transgressions are more serious than most possible conventional transgressions, then Kumar will have to explain how we are to go about comparing these two infinite sets of transgressions. Perhaps there is some more plausible interpretation of Kumar's claim. But I have no idea what it is. The bottom line, I think, is that Kumar's MCT is no more successful than Turiel's theory in providing a defensible account of a nomological cluster of properties that can be used in an empirically supported definition of moral judgment.
11. Summing Up and Looking Beyond: A Future Without "Morality"
Most of the philosophers who contributed to the Philosophers' Project were convinced that there is a correct or well-motivated way of dividing normative judgments into those that are moral and those that are nonmoral. But, as we saw in §1–§5, those philosophers who took their project to be "descriptive-elucidatory"—aimed at providing an analysis of the concept of moral judgment that we actually use—did not meet with much success. Though some of the necessary conditions that were proposed were widely accepted, no set of necessary and sufficient conditions convinced more than a handful of contributors to the literature. Those who took their project to be normative were, if anything, less successful. Most of the normative analyses were, at best, very sketchy. And more often than not they were not endorsed by anyone but the author. In §6 we noted that the failure of both the descriptive-elucidatory and the normative projects might be blamed on a commitment to the classical theory of concepts and that things might go better if that commitment was abandoned in favor of some other account of concepts. But there are other challenges facing those pursuing the descriptive-elucidatory project. There is some evidence that people in different religious or cultural groups, and perhaps even people who share their religion and culture, have notably different concepts of moral judgment. It is, I believe, too early to draw any confident conclusions from the evidence available; much more work is needed. But if it is true that there are religious, cultural and individual differences in people's concept of moral judgment, then the descriptive-elucidatory project is both poorly specified and poorly motivated. The goal of that project is to analyze the concept of moral judgment that we actually use. But if there are significant interpersonal and intergroup differences, we need to be told who "we" refers to. We also need to be told why our concept—however "our" is unpacked—is of any special philosophical importance. Why, for example, should our concept be the one to use in deciding whether the Navajo have a moral code? Philosophers are very clever people. So perhaps this challenge can be met. But at this point, I know of no serious attempts. In §7–§10 we explored the idea that "moral judgment" might be a natural kind term with a definition that can be discovered by psychologists. Turiel's project fits comfortably into this picture. But a growing body of evidence suggests that Turiel's UIH cluster shatters in a variety of ways and thus that it is not a nomological cluster at all. Here too, much more work is needed. For as John Doris has eloquently reminded us, in any given experiment in psychology, there is a lot that can go wrong. So it is wise to wait until there are many experiments all pointing in the same direction (Doris, 2015, 44–49). Kumar's alternative natural kind account of moral judgment is, I think, less promising than Turiel's. It requires that objectivity judgments form a nomological cluster with seriousness, universality and authority independence judgments, and at this writing there is no evidence at all for that claim. But though I'm critical of Kumar's theory, I think his strategy is a good one. If we are to find a well-motivated way of defining "moral judgment" and related terms, our best hope is to locate a nomological cluster of properties exhibited by many intuitively prototypical moral judgments but not by most intuitively prototypical nonmoral normative judgments. Finding such a cluster would be an important discovery for both moral philosophy and moral psychology. There is, of course, no guarantee that the quest for a nomological cluster account of "moral judgment" will succeed. For it may turn out that there simply is no natural kind to be found in this vicinity or that there are numerous natural kinds, none of which can sustain a compelling argument that it specifies the essential features of moral judgments. What would be the consequences if that is how things unfold? To make things easier, let's also assume that neither the descriptive-elucidatory nor the normative project is successful, and that these three projects are the only options available for those who seek a well-motivated way of defining "moral judgment." Perhaps the most obvious implication of the failure of these projects is that debates that turn on whether specific normative judgments are really moral judgments will turn out to be irresolvable because they are based on a mistaken assumption.
Consider, for example, Jonathan Haidt's accusation that the preponderance of politically liberal researchers has led to "an inappropriate narrowing of the moral domain." As we saw in §2, Haidt's accusation turns on whether norms governing such matters as clothing, gender roles, food and forms of address are moral norms, and whether judgments about such matters are moral judgments. Haidt insists they are. Turiel insists they aren't. If our assumptions are correct, then there is simply no fact of the matter. Much the same is true for those who would debate whether the Navajo, as described by Ladd, have a moral code at all.
Let's turn, now, to those many philosophers who debate the semantics of moral judgments, the function of moral judgments, the evolutionary history of moral judgments and the psychological mechanisms underlying moral judgments. How would their projects be impacted if our assumptions are correct? Here the consequences are less dire. To be sure, if the parties to these debates focus on different examples, and if one side insists that the examples used by the other side are not really moral judgments at all, then the debate is irresolvable, since once again there is no fact of the matter. But this is not how most debates on these topics unfold. Rather, in most cases at least, the philosophers involved agree that the examples of moral judgments advanced by their opponents really are moral judgments. So what they are debating is the semantics, or the function, or the evolutionary history or the psychological mechanisms of judgments like those. And progress can be made without specifying the boundaries of that class. However, if it turns out, as I'm betting it will, that there are actually a number of different natural kinds included in that vaguely specified class, then future philosophers and psychologists may simply drop the term "moral judgment" and focus instead on judgments of these separate natural kinds. If that's the way things unfold, both philosophers and psychologists may be destined for a future without "morality."22
Notes
* This paper is dedicated to the memory of William Frankena, my esteemed colleague at the University of Michigan during the first decade of my career.
1. The terms "action guide" and "action guiding" are borrowed from Frankena (1967).
2. Oddly, this project was not the primary focus of moral philosophers in the analytic tradition during the 1950s and 1960s. More on this later.
3. The example is borrowed from Stohr (2012).
4. Many of these philosophers would have agreed with Frankena, who maintained that having an account is the only way to settle this question. "One cannot say that the Navaho have a morality until after one has formed some conception of morality and found that the Navajo have such an institution" (Frankena, 1963, 17).
5. See, for example, Wallace and Walker (1970, 1) and MacIntyre (1957, 26).
6. Joyce's account is one of the few philosophically sophisticated analyses to appear since the turn of the century. For some critical thoughts about that analysis, see Stich (2008). Southwood (2011) offers another philosophically sophisticated analysis. See too, this volume, Chapter 2 ("The Normative Sense: What is Universal? What Varies?"); Chapter 3 ("Normative Practices of Animals"); Chapter 12 ("The Denial of Moral Knowledge"); and Chapter 14 ("Nihilism and the Epistemic Profile of Moral Judgment") for analyses of moral judgment.
7. Frankena (1963) offers a much more extensive list, along with many references.
8. Though they did not completely disappear from the literature. See Gert's Stanford Encyclopedia of Philosophy article, "The Definition of Morality," which was first published in 2002 and has undergone four "substantive content changes," in 2005, 2008, 2011 and 2016.
9. Frankena, for example, tells us that the question he is asking is "What are we to take as the necessary and sufficient conditions for something's being or being called moral or a morality" (Frankena, 1967, 146–147).
10. See Chapter 18 of this volume for more on the variability of intuitions.
11. For further discussion of research on variability in conceptions of morality, see Chapter 2 of this volume.
12. For an informed and insightful account of Kohlberg's work, see Lapsley (1996), chapters 3 & 4; see too Chapters 5 and 16 of this volume.
13. It is a striking fact that a number of philosophers engaged in the Philosophers' Project insisted that the definition of "moral judgment" must include a "material condition" that reflects "a concern for others or a consideration of social cohesiveness and the common good" (Frankena, 1963, 9). For Turiel, being justified by appeal to social cohesiveness is part of the definition of a conventional judgment.
14. I know of no evidence that Turiel or any of his followers would construe their project in this way. It is offered here as a friendly amendment that avoids the challenge posed in the previous paragraph.
15. Why "many (but perhaps not all)"? Because commonsense intuition can't be counted on to be a flawless detector of natural kinds. Intuition told people that fool's gold was gold and that whales were fish. But when the relevant sciences discovered the essential features of gold and fish, it turned out that intuition was wrong about fool's gold and whales. For more on the way psychologists and other scientists might discover the essential features of a natural kind, see Stich (2018), §3.
16. I'll use this as shorthand for judgments that exhibit the UIH response pattern.
17. Philosophers include Dwyer (2006); Dwyer et al. (2010); Joyce (2006); Levy (2005); Nichols (2004a); Prinz (2007). Psychologists are too numerous to mention.
18. For a critique of Nissan (1987), see Turiel et al. (1988). For a critique of Kelly et al. (2007), see Sousa et al. (2009); for a response, see Stich et al. (2009). Kumar (2015) offers a rather different critique of Kelly et al. (2007). For a critique of Fessler et al. (2015), see Piazza and Sousa (2016); for a response, see Fessler et al. (2016).
19. This is a somewhat simplified version of the method employed in Goodwin and Darley (2008). Other investigators have used similar methods.
20. Isn't that speculation supported by the findings in the Turiel tradition? No, it's not. Turiel and his followers describe a behavior, but they never ask participants whether they think that behavior is "morally wrong."
21. In earlier papers, including Kelly et al. (2007), I have made the same mistake!
22. The authors of Chapter 2 of this volume adopt a strategy of this kind by shifting focus from the category of moral judgments to an analysis of normative judgments more generally.
References
Baier, K. (1958). The Moral Point of View. Ithaca, NY: Cornell University Press. Excerpts reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 188–210. Page references are to the Wallace and Walker reprint.
Beebe, J. (2014). "How Different Kinds of Disagreement Impact Folk Metaethical Judgments," in H. Sarkissian and J. Wright (eds.), Advances in Experimental Moral Psychology. London: Bloomsbury.
———. (2015). "The Empirical Study of Folk Metaethics," Etyka, 15, 11–28.
Beebe, J. and Sackris, D. (2016). "Moral Objectivism Across the Lifespan," Philosophical Psychology, 29, 6, 912–929.
Blair, R. (1995). "A Cognitive Developmental Approach to Morality: Investigating the Psychopath," Cognition, 57, 1–29.
Blair, R. (1996). "Brief Report: Morality in the Autistic Child," Journal of Autism and Developmental Disorders, 26, 571–579.
Blair, R., Monson, J. and Frederickson, N. (2001). "Moral Reasoning and Conduct Problems in Children with Emotional and Behavioural Difficulties," Personality and Individual Differences, 31, 799–811.
Boyd, R. (1988). "How to Be a Moral Realist," in G. Sayre-McCord (ed.), Essays on Moral Realism. Ithaca, NY: Cornell University Press.
Buchtel, E., Guan, Y., Peng, Q., Su, Y., Sang, B., Chen, S. and Bond, M. (2015). "Immorality East and West: Are Prototypically Immoral Behaviors Especially Harmful, or Especially Uncultured?" Personality and Social Psychology Bulletin, 41, 10, 1382–1394.
Cooper, N. (1966). "Two Concepts of Morality," Philosophy, 19–33. Reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 72–90. Page references are to the Wallace and Walker reprint.
———. (1970). "Morality and Importance," in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 91–97.
Devitt, M. (1996). Coming to Our Senses. Cambridge: Cambridge University Press.
Doris, J. (2015). Talking to Our Selves: Reflection, Ignorance, and Agency. Oxford: Oxford University Press.
Dwyer, S. (2006). "How Good Is the Linguistic Analogy?" in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind: Culture and Cognition. New York: Oxford University Press.
Dwyer, S., Huebner, B. and Hauser, M. (2010). "The Linguistic Analogy: Motivations, Results, and Speculations," Topics in Cognitive Science, 2, 486–510.
Fessler, D., Barrett, H., Kanovsky, M., Stich, S., Holbrook, C., Henrich, J., Bolyanatz, A., Gervais, M., Gurven, M., Kushnick, G., Pisor, A., von Rueden, C. and Laurence, S. (2015). "Moral Parochialism and Contextual Contingency Across Seven Disparate Societies," Proceedings of the Royal Society B, 282 (1813) (August 22). doi:10.1098/rspb.2015.0907.
———. (2016). "Moral Parochialism Misunderstood: A Reply to Piazza and Sousa," Proceedings of the Royal Society B, 283. http://dx.doi.org/10.1098/rspb.2015.2628
Fodor, J. (1981). "The Present Status of the Innateness Controversy," in J. Fodor (ed.), Representations. Cambridge, MA: Bradford Books.
Frankena, W. (1958). "MacIntyre on Defining Morality," Philosophy, 158–162. Reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 40–46. Page references are to the Wallace and Walker reprint.
———. (1963). "Recent Conceptions of Morality," in H. Castañeda and G. Nakhnikian (eds.), Morality and the Language of Conduct. Detroit: Wayne State University Press, 1–24.
———. (1967). "The Concept of Morality," in University of Colorado Studies, Series in Philosophy, No. 3, 1–22. Reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 146–173. Page references are to the Wallace and Walker reprint.
Gert, B. (2002). "The Definition of Morality," in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/sum2002/entries/morality-definition/
Gewirth, A. (1978). Reason and Morality. Chicago: University of Chicago Press.
Goldman, A. (2007). "Philosophical Intuitions: Their Target, Their Source, and Their Epistemic Status," Grazer Philosophische Studien, 74.
Goodwin, G. and Darley, J. (2008). "The Psychology of Meta-Ethics: Exploring Objectivism," Cognition, 106, 1339–1366.
Goodwin, G. and Darley, J. (2012). "Why Are Some Moral Beliefs Perceived to Be More Objective Than Others?" Journal of Experimental Social Psychology, 48, 1, 250–256.
Haidt, J. and Joseph, C. (2007). "The Moral Mind: How 5 Sets of Innate Moral Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules," in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Volume 3. New York: Oxford University Press, 367–391.
Haidt, J., Koller, S. and Dias, M. (1993). "Affect, Culture and Morality, or Is It Wrong to Eat Your Dog?" Journal of Personality and Social Psychology, 65, 613–628.
Hare, R. (1952). The Language of Morals. Oxford: Clarendon Press.
———. (1954/1955). "Universalizability," Proceedings of the Aristotelian Society, 55, 295–312.
Hollos, M., Leis, P. and Turiel, E. (1986). "Social Reasoning in Ijo Children and Adolescents in Nigerian Communities," Journal of Cross-Cultural Psychology, 17, 352–376.
Horgan, T. and Timmons, M. (1992). "Troubles for New Wave Moral Semantics: The 'Open Question Argument' Revived," Philosophical Papers, 21, 153–175.
Joyce, R. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
Kelly, D., Stich, S., Haley, K., Eng, S. and Fessler, D. (2007). "Harm, Affect and the Moral/Conventional Distinction," Mind and Language, 22, 117–131.
Kitcher, P. (2011). The Ethical Project. Cambridge, MA: MIT Press.
Kornblith, H. (1998). "The Role of Intuition in Philosophical Inquiry: An Account with No Unnatural Ingredients," in M. DePaul and W. Ramsey (eds.), Rethinking Intuition. Lanham, MD: Rowman & Littlefield.
Kumar, V. (2015). "Moral Judgment as a Natural Kind," Philosophical Studies, published online February 5. doi:10.1007/s11098-015-0448-7.
Ladd, J. (1957). The Structure of a Moral Code: A Philosophical Analysis of Ethical Discourse Applied to the Ethics of the Navaho. Cambridge, MA: Harvard University Press.
Lapsley, D. (1996). Moral Psychology. Boulder, CO: Westview Press.
Laurence, S. and Margolis, E. (1999). "Concepts and Cognitive Science," in E. Margolis and S. Laurence (eds.), Concepts: Core Readings. Cambridge, MA: MIT Press.
Levine, S., Machery, E., Rottman, J., Davis, T. and Stich, S. (under review). "Religion's Impact on the Moral Sense."
Levy, N. (2005). "Imaginative Resistance and the Moral/Conventional Distinction," Philosophical Psychology, 18, 231–241.
Machery, E. (2009). Doing Without Concepts. Oxford: Oxford University Press.
MacIntyre, A. (1957). "What Morality Is Not," Philosophy, 32, 325–335. Reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 26–39. Page references are to the Wallace and Walker reprint.
Nichols, S. (2002). "Norms with Feeling: Towards a Psychological Account of Moral Judgment," Cognition, 84, 2, 221–236.
Nichols, S. (2004a). Sentimental Rules: On the Natural Foundations of Moral Judgment. Oxford: Oxford University Press.
———. (2004b). "After Objectivity: An Empirical Study of Moral Judgment," Philosophical Psychology, 17, 3–26.
Nissan, M. (1987). "Moral Norms and Social Conventions: A Cross-Cultural Comparison," Developmental Psychology, 23, 719–725.
Nucci, L. (2001). Education in the Moral Domain. Cambridge: Cambridge University Press.
Nucci, L. and Herman, S. (1982). "Behavioral Disordered Children's Conceptions of Moral, Conventional, and Personal Issues," Journal of Abnormal Child Psychology, 10, 411–425.
Nucci, L. and Nucci, M. (1982). "Children's Social Interactions in the Context of Moral and Conventional Transgressions," Child Development, 53, 403–412.
Nucci, L. and Turiel, E. (1978). "Social Interactions and the Development of Social Concepts in Preschool Children," Child Development, 49, 400–407.
Nucci, L. and Turiel, E. (1993). "God's Word, Religious Rules, and Their Relation to Christian and Jewish Children's Concepts of Morality," Child Development, 64, 5, 1475–1491.
Nucci, L., Turiel, E. and Encarnacion-Gawrych, G. (1983). "Children's Social Interactions and Social Concepts in the Virgin Islands," Journal of Cross-Cultural Psychology, 14, 469–487.
Piazza, J. and Sousa, P. (2016). "When Injustice Is at Stake, Moral Judgments Are Not Parochial," Proceedings of the Royal Society B, 283, 20152037.
Prinz, J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
———. (2015). "An Empirical Case for Motivational Internalism," in G. Bjornsson, C. Strandberg, R. Ollinder, J. Eriksson and F. Bjorklund (eds.), Motivational Internalism. Oxford: Oxford University Press.
Putnam, H. (1975). "The Meaning of 'Meaning'," in K. Gunderson (ed.), Language, Mind and Knowledge. Minnesota Studies in the Philosophy of Science, vol. 7. Minneapolis: University of Minnesota Press.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Roskies, A. (2003). "Are Ethical Judgments Intrinsically Motivational? Lessons from 'Acquired Sociopathy'," Philosophical Psychology, 16, 51–66.
Sarkissian, H., Park, J., Tien, D., Wright, J. C. and Knobe, J. (2011). "Folk Moral Relativism," Mind and Language, 26, 4, 482–505.
Schroeder, M. (2008). Being For: Evaluating the Semantic Program of Expressivism. Oxford: Oxford University Press.
Smetana, J. (1981). "Preschool Children's Conceptions of Moral and Social Rules," Child Development, 52, 1333–1336.
———. (1993). "Understanding of Social Rules," in M. Bennett (ed.), The Development of Social Cognition: The Child as Psychologist. New York: Guildford Press.
Smetana, J., Kelly, M. and Twentyman, C. (1984). "Abused, Neglected, and Nonmaltreated Children's Conceptions of Moral and Social-Conventional Transgressions," Child Development, 55, 277–287.
Smetana, J., Toth, S., Cicchetti, D., Bruce, J., Kane, P. and Daddis, C. (1999). "Maltreated and Nonmaltreated Preschoolers' Conceptions of Hypothetical and Actual Moral Transgressions," Developmental Psychology, 35, 269–281.
Smith, E. and Medin, D. (1981). Categories and Concepts. Cambridge, MA: Harvard University Press.
Sousa, P., Holbrook, C. and Piazza, J. (2009). "The Morality of Harm," Cognition, 113, 80–92.
Southwood, N. (2011). "The Moral/Conventional Distinction," Mind, 120, 761–802.
Sprigge, T. (1964). "Definition of a Moral Judgment," Philosophy, 301–322. Reprinted in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen, 1970, 118–145. Page references are to the Wallace and Walker reprint.
Stich, S. (2008). "Some Questions About the Evolution of Morality," Philosophy and Phenomenological Research, 77 (1), 228–236.
Stich, S. (2018). "The Moral Domain," in K. Gray and J. Graham (eds.), The Atlas of Moral Psychology. New York: Guilford Press, 547–555.
Stich, S., Fessler, D. and Kelly, D. (2009). "On the Morality of Harm: A Response to Sousa, Holbrook and Piazza," Cognition, 113, 93–97.
Stohr, K. (2012). On Manners. New York: Routledge.
Taylor, P. (1978). "On Taking the Moral Point of View," Midwest Studies in Philosophy, III, 35–61.
Tisak, M. (1995). "Domains of Social Reasoning and Beyond," in R. Vasta (ed.), Annals of Child Development, vol. 11. London: Jessica Kingsley.
Tisak, M. and Turiel, E. (1988). "Variations in Seriousness of Transgressions and Children's Moral and Conventional Concepts," Developmental Psychology, 24, 352–357.
Toulmin, S. (1950). An Examination of the Place of Reason in Ethics. Cambridge: Cambridge University Press.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. Cambridge: Cambridge University Press.
Turiel, E., Nucci, L. and Smetana, J. (1988). "A Cross-Cultural Comparison About What? A Critique of Nissan's (1987) Study of Morality and Convention," Developmental Psychology, 24, 140–143.
von Wright, G. (1963). The Varieties of Goodness. New York: The Humanities Press.
Wallace, G. and Walker, D. (1970). "Introduction," in G. Wallace and D. Walker (eds.), The Definition of Morality. London: Methuen.
Wright, J., Grandjean, P. and McWhite, C. (2013). "The Meta-ethical Grounding of Our Moral Beliefs: Evidence for Meta-Ethical Pluralism," Philosophical Psychology, 26, 336–361.
Yau, J. and Smetana, J. (2003). "Conceptions of Moral, Social-Conventional, and Personal Events Among Chinese Preschoolers in Hong Kong," Child Development, 74 (1), 647–658.
Further Readings
For an excellent overview of early work on the Philosophers' Project, see W. Frankena, "Recent Conceptions of Morality," in H. Castañeda and G. Nakhnikian (eds.), Morality and the Language of Conduct (Detroit: Wayne State University Press, 1963), 1–24. G. Wallace and D. Walker, The Definition of Morality (London: Methuen, 1970) is a collection of important papers debating the Philosophers' Project. For a definitive account of Turiel's version of the Psychologists' Project, see E. Turiel, The Development of Social Knowledge: Morality and Convention (Cambridge: Cambridge University Press, 1983). Two important empirical critiques of the Psychologists' Project are J. Haidt, S. Koller, and M. Dias, "Affect, Culture and Morality, or Is It Wrong to Eat Your Dog?" Journal of Personality and Social Psychology, 65, 613–628, 1993, and D. Kelly, S. Stich, K. Haley, S. Eng, and D. Fessler, "Harm, Affect and the Moral/Conventional Distinction," Mind and Language, 22, 117–131, 2007. V. Kumar, "Moral Judgment as a Natural Kind," Philosophical Studies, published online February 5, 2015 (doi: 10.1007/s11098-015-0448-7) is a recent attempt to avoid the problems that beset the Psychologists' Project. E. O'Neill, "Kinds of Norms," Philosophy Compass, 12, e12416, 2017 (doi: 10.1111/phc3.12416) is a valuable discussion of the many different kinds of norms found in cultures around the world.
Related Chapters
Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 14 Nihilism and the Epistemic Profile of Moral Judgment; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How; Chapter 25 Moral Expertise; Chapter 30 Religion and Moral Knowledge.
2
THE NORMATIVE SENSE
What is Universal? What Varies?
Elizabeth O'Neill and Edouard Machery
1. Introduction
Normative cognition, as we define it, involves the capacity to make normative judgments, to remember the norms one is committed to, to learn new norms, and to be motivated in various ways (including to be motivated to comply with the norms one endorses and to punish norm violators); it also involves emotions (e.g., admiration or even awe at normative behaviors, outrage or disgust elicited by norm violations, guilt and shame elicited by one's own norm violations). Some components of normative cognition are specific to the domain of normativity; others may be domain-general traits harnessed for normative purposes: Among the latter, for instance, outrage elicited by norm violation is just a form of anger.1 The "normative sense hypothesis" proposes that normative cognition, so understood, is shared by typical adult humans across cultures, develops early and reliably, has evolved, and may well be specific to human beings (Machery & Mallon, 2010; Machery, 2018).2 The normative sense hypothesis contrasts with a further hypothesis that proposes that there is a distinctively moral sense that is universal, develops early and reliably, and is the product of its own evolutionary trajectory (Joyce, 2006; Tomasello, 2016; Stanford, 2017). In turn, both of these hypotheses contrast with the claim that our capacity to deal with norms (to be motivated by them, comply with them, etc.) is best explained as a product of domain-general cognitive resources, including social learning (Sterelny, 2010; Prinz, 2008). Regardless of its origins, there is no doubt that the normative sense of a typical adult human is complex and highly structured. It involves many normative concepts, values, and more or less abstract norms, and it is connected with motivational structures and emotions. In this chapter, we will be focusing on how much cultural variation there is in the normative sense: Which components of the normative sense are universal and which vary across cultures? While this question is interesting in its own right, it is also tied to traditional philosophical questions. There is a clear connection between normative diversity and metaethical questions. Some take the existence of widespread, deep, irresolvable moral disagreement as evidence against moral realism (Mackie, 1977; Loeb, 1998; Harman, 1996; Prinz, 2007). Establishing the existence of deep and irresolvable disagreement is not straightforward (Brandt, 1954; Machery et al., 2005) and often involves determining whether the disagreement would endure "in ideal conditions, among fully informed, fully rational discussants" (Doris & Plakias, 2008, 305).3 In addition, projects that analyze human normative concepts (e.g., hypocrisy (Kittay, 1982) or forgiveness (Murphy, 1982)) must take into account variation in how these concepts are used by different individuals or cultures. Actual diversity is also relevant when philosophers appeal to normative intuitions or judgments they take to be broadly shared, whether as prima facie evidence in favor of an intuition, as common ground shared with their interlocutor, or as a way of putting the burden of proof on their opponents (O'Neill & Machery, 2014; Colaço & Machery, 2017; Machery, 2017).4 Lastly, cross-cultural diversity and commonality have practical import for questions about which rules, values, or virtues should be inculcated as part of moral education and which should guide the design of "ethical" artificial intelligence systems (Nucci et al., 2008; Anderson & Anderson, 2007; Allen et al., 2006). Deep disagreement about the relevance, legitimacy, or significance of principles, values, and character traits or about what is right in particular cases poses a very real challenge to educators who want to impart a shared ethical system to children and to engineers who have similar goals for AI.5 We now turn to the main goal of this chapter: Presenting a few striking commonalities and differences in the normative sense across cultures. We first comment on the state of the empirical evidence on normative commonality and diversity. We then focus on four aspects of the normative sense: whether there is a universal typology of norms (part 3), whether norms are externalized (part 4), which aspects of life are the object of norms (part 5), and whether there are universal concepts used to think about the normative domain (part 6).
2. The State of the Evidence About Normativity and Morality
For much of the twentieth century, the discipline perhaps best positioned to study the cross-cultural diversity of human normative and specifically moral views—anthropology—deemphasized questions of morality and its degree of diversity. This de-emphasis resulted in part from methodological worries. A primary and still pressing obstacle for cross-cultural comparison is the difficulty of interpreting concepts and behaviors of other cultures (Brown, 1991). An influential view from Malinowski holds that understanding any utterance from another culture requires an extensive understanding of its cultural context (Boehm, 1999; Moody-Adams, 1997). Much recent cross-cultural experimental work on normative concepts, including work we discuss later, takes a number of precautions to head off objections of this type. Another issue is the challenge of identifying criteria for isolating morality—moral norms, practices, etc.—as a distinct object of study.6 The influence of Durkheim produced a tendency to reject any distinction between the moral and the social (Heywood, 2015; Laidlaw, 2002; Yan, 2011). A benefit of this, in our view, is that much of the ethnographic study of norms has avoided imposing disputed and likely parochial distinctions between moral and nonmoral onto other cultures, potentially making it easier to detect cross-cultural differences in the ways humans "carve up" types of norms. Another set of difficulties resulted from the potential political consequences of claims about the anthropology of morality (Brown, 1991; Fassin, 2012b). On the one hand, if the core features of Western value systems are reflected across cultures and value systems are not incommensurable, then there is a shared basis from which the systems of other cultures can be evaluated and found wanting. On the other hand, there are hazards associated with claiming that other cultures lack components of Western value systems (Heintz, 2009b; Zigon, 2008). Westerners have a long history of using such claims as a rationale for racism, cultural supremacism, and colonialism. The idea that some demographic differences might be due not to culture but rather to genetic factors is even more closely tied to racist ideologies. In the domain of values and morality, a domain where many are particularly hesitant to tolerate difference, suggestions of demographic differences due to culture or, even worse, genetics have unique potential for misinterpretation and misuse. Relatedly, each anthropologist has his or her own set of values, which often conflicts with the value system of the culture under study. Such a conflict can make it difficult for an anthropologist to study a value system without feeling an obligation to take a normative stance on it or even intervene in traditions they find repugnant, such as female genital mutilation.7 At the other extreme, anthropologists who seek to explain traditions viewed as repugnant in the West sometimes come in for criticism for their apparent failure to take a moral stand (Fassin, 2008). Thus, for a number of reasons, until the 1990s, anthropologists rarely treated moral and normative views as distinct objects of study warranting their own research program. The current situation is not drastically different from what Edel and Edel described in the 1950s:

It is true that in a great deal of its [anthropology's] materials—in accounts of family quarrels, in details of the instructions given youths at initiation schools, between the lines of a speech in honor of a returning warrior—there is buried a wealth of data on moral rules and moral attitudes, on sanctions and justifications, and the way morality operates in relation to daily living. But apart from a few early attempts set in an evolutionary framework (e.g., Westermarck, 1906) these data have rarely been organized and analyzed around themes relating to ethics. (4)

As a result, assessment of diversity and commonality of human normative views based on ethnography must be done piecemeal by assembling evidence from diverse studies that addressed normativity and morality incidentally, in the course of examining other topics. Such topics include, for instance, taboos, rituals, symbolic behavior, traditions, institutions, and cultural systems (e.g., legal systems and religion but also traditions such as witchcraft) (Zigon, 2008).8 However, in recent years, there has been a resurgence of interest in morality among anthropologists and a discussion about how fieldwork over the past century can bear on conclusions about morality and how to approach the study of morality in the future (for instance, Csordas, 2013; Keane, 2015; Fassin, 2012a; Faubion, 2011; Heintz, 2009a; Mattingly, 2012; Zigon, 2008). Similarly, the field of sociology paid little explicit attention to morality for much of the twentieth century but has recently seen an increase in interest (Therborn, 2002; Keane, 2015; Hitlin & Vaisey, 2010, 2013). Despite these recent trends in anthropology and sociology, though, most of the explicitly cross-cultural study of human normative views and morality has been done in psychology and economics (Graham et al., 2016).
So, what components of the human normative sense are present across cultures and individuals? What components vary? The remainder of this chapter highlights some striking findings that can be garnered from the anthropological, economic, and psychological literature. We argue that there is diversity in views on the moral-conventional distinction and in several concepts and principles that have been suggested as candidate universals. But we nevertheless isolate two possible universals: the phenomenon of externalizing and the normativization of a preferred class of subject matters.
3. Kinds of Norms
Does a shared normative sense lead people across demographic groups to distinguish similar kinds of norms (Machery, 2012; O'Neill, 2017)? In particular, is the distinction between moral and nonmoral norms universal, or rather is it specific to some cultures? Machery (2018) distinguishes between the "fundamentalist view of morality," which claims, among other things, that people across cultures and times intuitively distinguish between moral and nonmoral norms, and the "historicist view of morality," which, among other things, denies this. Note that the hypothesis of a universal distinction between kinds of norms is consistent with the norms themselves varying across cultures and other demographic groups. Everyone acknowledges that particular norms vary across populations and times. But it is consistent with this that each such population sorts their norms into two distinct classes: moral and nonmoral norms. The hypothesis under debate is whether the proclivity to draw a distinction of this kind is itself universal. The hypothesis of a universal distinction between moral and nonmoral norms has been influential in moral psychology (e.g., Turiel, 1983; Blair, 1995; Sousa et al., 2009; Gray et al., 2012; Stanford, 2017) and in moral philosophy (Nichols, 2004; Southwood, 2011; Fraser, 2012; Kumar, 2015; for further discussion, see Stich, Chapter 1, this volume). According to Turiel and colleagues' social moral domain theory (e.g., Turiel, 1983), from a very early age on, people distinguish two kinds of wrong action. Those actions (e.g., hitting another child) that are judged worse are also judged to be "authority-independent" in that they would still be wrong if the relevant authority allowed people to act this way, and to be wrong everywhere. The theory also posits that people justify their opinion that these actions are wrong by appealing to considerations of harm, justice, or rights. Turiel and colleagues call the norms prohibiting these actions "moral norms." By contrast, those actions (e.g., leaving the classroom without asking permission) that are judged to be less wrong are judged to be wrong only locally and to be authority-dependent in that they would not be wrong if the relevant authority allowed people to act this way. According to the theory, people justify their opinion that these actions are wrong by appealing to authority and convention. Turiel and colleagues call the norms prohibiting these actions "conventional norms." On the basis of their substantial empirical research, they argue that the distinction between moral and conventional norms is a universal feature of the human mind. Turiel and colleagues' social moral domain theory has been widely endorsed (e.g., Blair, 1995; Nichols, 2004), but recent findings suggest that the separation of wrong actions into these two kinds is an artifact of the restricted class of actions used by Turiel and colleagues to distinguish moral norms from conventions. When a larger class of actions is used, the different features that are meant to characterize moral and nonmoral norms (wrongness, authority dependence, universality, justification type) come apart, and the conjunction of these properties fails to distinguish moral from nonmoral norms (Shweder et al., 1987; Haidt et al., 1993; Kelly et al., 2007). However, there is no doubt that Westerners draw a distinction between moral and nonmoral norms: Although they may be unable to define what makes a norm moral or an action morally wrong (in contrast to just wrong), Westerners find it natural or intuitive to classify an assertion such as "Thou shall not kill" as expressing a moral norm and assertions such as "Look left before crossing the street" and "Men should wear a tie at work" as expressing nonmoral norms. But this way of conceptualizing the normative domain appears to be culturally parochial. The first body of evidence to support this claim comes from linguistics. In line with the proposal that normative cognition is a cultural universal, deontic modals—that is, words translating as "ought"—and translations of the normative predicates good and bad are apparently found in every language (Wierzbicka, 2001, 167–169; Wierzbicka, 2007). By contrast, some expressions that are closely tied to the moral domain in the United States are not found in all languages. Whereas in the United States judgments about whether an action is "right" or "wrong" are tightly connected to whether it belongs to the moral domain (Skitka, 2010), translations of "right" and "wrong" are not found in every language.9 Furthermore, many languages do not have a translation of "moral" and thus do not lexicalize the distinction between moral and nonmoral norms (Wierzbicka, 2007, 68). If the moral domain were a fundamental feature of human cognition, we would expect the distinction between moral and nonmoral norms to be lexicalized in every language, as are deontic modals and the distinction between good and bad.10 The second body of evidence comes from an ongoing research program meant to determine how people divide the norms they endorse into different kinds (see Machery, 2012, for a description). Unpublished preliminary results suggest that Americans draw a distinction between moral and nonmoral norms. In contrast, Indian participants as well as Muslim participants (of various national origins) do not seem to draw the same distinction, suggesting that American ways of delimiting the moral domain may not be universal.
4. The Externalization of Norms
Does the normative sense generate a specific attitude about the nature of the norms people endorse? Are these norms universally viewed as "objective," in some sense? Or are some norms, but not all, so viewed? If so, which and why? Is there cultural variation in this respect? How and why? Both realist and anti-realist metaethicists often hold that ordinary people are moral objectivists (Mackie, 1977; Smith, 1994; Huemer, 2005), and this claim often plays an important role in metaethical arguments (Gill, 2009). Tomasello (2016) and Stanford (2017) have recently incorporated the idea that humans tend to objectivize some norms as part of their theories of the evolution of normative cognition (for discussion of Stanford (2017), see O'Neill, 2018; Patel & Machery, 2018). They hypothesize that the capacity and inclination to view some norms as objective is a cross-cultural, universal part of the evolved normative (specifically, moral) sense.
Here we examine one aspect of objectivism—the externalization of norms. In contrast to the scholars debating whether the folk are moral objectivists, though, we are attracted to a weaker claim: every population “externalizes” some of its norms, though these may not be moral. The concept associated with “externalization” is somewhat underspecified in the literature. According to Stanford, Humans experience the demands of morality as somehow imposed on us externally: we do not simply enjoy or prefer to act in ways that satisfy the demands of morality, we see ourselves as obligated to do so regardless of our subjective preferences and desires, and we regard such demands as imposing unconditional obligations not only on ourselves but on any and all agents whatsoever, regardless of their preferences and desires. (2017, 3–4, italics in original) People tend to experience some obligations as imposed on them from without, in a way that involves a form of desire or preference-independence. In addition, they tend to generalize: That is, they tend to think that any suitably similar person in roughly the same situation would be subject to the same sort of obligation. Conceptually speaking, the tendency to generalize can be separated from the idea that moral demands are imposed externally: Someone might judge that a highly particular obligation was imposed externally (e.g., as the God of the Judeo-Christian tradition gives particular prophets particular obligations); whereas many in the Kantian tradition argue for generalized obligations that we somehow give (or “legislate for”) ourselves. Reflecting this distinction, Tomasello’s discussion divides the objectivization of morality into three categories, which he presents as a “three-way generality—agent, target, and standards” (2016, 102–103). Tomasello’s agent generality corresponds to what we are referring to here as generality, and his standards generality has to do with externalization. Target generality is the view that there is a universal and evolved tendency for individuals to treat partners and members of their community as equals. We are not convinced that target generality is universal, and we will not evaluate here the alleged universality of the tendency to generalize. However, there are still multiple ways to interpret the idea of externalization. The core idea is that many obligations seem independent of one’s endorsement of them. For instance, a parent may feel as though he ought to care for his young child, and a grown child may feel as if she ought to care for her aging parents, irrespective of whether discharging the obligation in question will secure something the agent wants. This by itself is consistent with the possibility that obligations (or normative phenomena more generally) are contingent on the attitudes or commands of deities, an agent’s ancestors, or some similar entity (see also Keane, 2015 on this topic, p. 37). Perhaps the desire-independence of a norm can also be reconciled with a belief in its social or interpersonal origins, although, as Tomasello notes, a key feature of externalization seems to be that norms issue from a more authoritative source than the people involved in a given interaction. It may be that externalization invariably involves the belief that individuals are not capable of altering the norms from which they derive their particular obligations to one another (Tomasello, 2016, 96). On a stronger version of externalization, normative properties appear to be in the world and
mind-independent in a sense that precludes their dependence even on entities like deities or ancestors. Stanford seems to have this particularly strong idea of externalization in mind when he writes,

We experience prototypical moral norms, demands, and obligations as imposed unconditionally, not only irrespective of our own preferences and desires but also those of any and all agents whatsoever, including those of any social group or arrangement to which we belong. (2017, 32)

We think there is good evidence for the universality of some relatively weak form of externalization as well as at least some evidence against the universality of its stronger forms.

Both Tomasello and Stanford offer possible evolutionary accounts of the origins of normativity and specifically of norms required for cooperation.11 We focus here on Stanford's proposal, but in both accounts, the emergence of the tendency to externalize certain norms plays an important role in explaining how characteristically human normativity originated. Both theorists argue that the tendency to externalize morality emerged in a context where there was substantial pressure on humans to cooperate with others in a variety of changing situations and where potential cooperators were increasingly unfamiliar, as the size of human populations expanded. On Stanford's account, individuals with an externalized understanding of their obligations, rather than a view of obligations that hinged on personal preferences (or even on an authority's or society's potentially changeable preferences), were more attractive partners for both cooperators and would-be exploiters. Generalizing norms—taking them to apply also to others in similar circumstances—and an associated demand that potential cooperation partners comply with the same norms helped cooperative humans discriminate against those who would exploit them.

Several types of experimental studies bear on our assessment of the prevalence of the human tendency to externalize norms (e.g., Wainryb et al., 2004; Nichols, 2004; Goodwin & Darley, 2008, 2010, 2012; Sarkissian et al., 2011; Quintelier et al., 2013; Wright et al., 2013, 2014). Folk objectivity has been studied empirically by investigating whether subjects think two individuals can disagree about the truth of the claim in question without one of them being wrong: If people cannot disagree about a normative claim without fault, then its truth does not depend on the disputants' attitudes toward it. So the perceived impossibility of faultless disagreement implies externalization. People from North America, Europe, China, Ecuador, and Singapore tend to classify some normative claims (e.g., "hitting someone just because you feel like it is wrong") as claims that two individuals cannot disagree about without one of them being mistaken—at least if these two individuals are from the same background or culture (Goodwin & Darley, 2008; Beebe & Sackris, 2016; Beebe et al., 2015; Sarkissian et al., 2011). This finding supports the hypothesis that humans universally externalize some norms to at least some degree. But people are more likely to say that disputants can disagree over a norm and both be right if these parties are members of distinct cultures or populations (Nichols, 2004; Sarkissian et al., 2011; Sarkissian, 2016; Khoo & Knobe, 2018).
This is evidence that ordinary people view the truth of such claims as dependent in some way on cultural beliefs, traditions, or normative frameworks, suggesting that people do not externalize all norms to the strongest degree imaginable.
There is further evidence that publicizing the externalized or generalized nature of norms promotes compliance with them. (Whether it is externalization or generalization that produces the effect is hard to disentangle.) This phenomenon, which is consistent with Stanford's explanation for the evolution of morality, suggests that externalization or generalization may play an important role in the maintenance of normative systems over time. Young and Durwin (2013) found that people were much more willing to donate to a charity when they were primed with the question, "Do you agree that some things are just morally right or wrong, good, or bad, wherever you happen to be from in the world?" in contrast to a prime that asked whether they agree that "there are no absolute right answers to any moral question" (303). Rai and Holyoak (2013) primed subjects to think about the truth of moral claims as independent of individual or group preferences and found that such subjects were less likely to cheat on a task and expressed less willingness to make a norm-violating purchase compared with subjects in a control condition or a condition that primed an idea of morality as mind-dependent. A limitation of this research for present purposes is that it has only been conducted with Western populations.

There is also some evidence relevant to the development of norm objectivization within an individual's psychology, at least in the West (Nichols & Folds-Bennett, 2003). Wainryb et al. (2004) found evidence that children externalize all norms and later learn to view some norms (e.g., those traditionally categorized as conventional norms) as mind- or culture-dependent. In Heiphetz and Young (2017), preschoolers were more likely than adults to judge that only one person could be correct in cases of disagreement about norms about hurting or helping others. Schmidt et al. (2017) report similar results: Children aged 4 and 6 were less likely than 9-year-olds to think that two parties—an alien and a non-alien—who disagreed about a normative question could both be correct. In addition, as Heiphetz and Young (2017) note, younger children tend in general, across domains, to be objectivist even when older children are not.

Some sociologists and anthropologists have theorized that there is a cross-cultural inclination toward reification (Berger & Luckmann, 1991). Reification occurs when a social entity is taken to be a natural one: Its social nature, including its mind-dependent mode of existence, is not recognized. For example, currency is reified if people do not grasp the social and mind-dependent nature of its status as money. Social kinds and social roles are commonly reified (Machery, 2014). Social norms get reified, too: as when people ignore the social origins of etiquette norms, treating them instead as mind-independent or natural. For instance, people often forget that the correct physical distance between two interlocutors is a matter of convention that varies widely across cultures. Much sociological research supports this claim. Gabennesch (1990, 2047) refers to the "evidence that children and adults 'reify' social formations by apprehending them as something other than social products." The tendency to externalize some norms may be part of this broader inclination to reify.
Lastly, although we have doubts about the moral-conventional distinction, some of the evidence associated with it may support the hypothesis that every population has amongst its norms a subgroup that is conceptualized as independent of various authorities, and so mind-independent in that sense (Nichols, 2004).

Some aspects of externalization need further cross-cultural exploration; other aspects have received little empirical study. Nonetheless, the current experimental evidence
supports the claim that people across cultures tend to externalize at least some norms in the weak sense we have identified.
5. The Breadth of Normativizing

A further question about the normative sense is the extent to which its subject matter is constrained and the extent to which people are inclined across cultures to normativize the same activities or domains. Philosophers have mostly focused on whether different cultures or other demographic groups have the same normative attitude toward a given type of action (e.g., allow or forbid it) rather than on whether similar subject matters are normativized across cultures.

Concerning the former question, philosophers have put forward contrasting positions. Famously, in his defense of moral relativism, Montaigne (1993) repeatedly highlighted how much conceptions of right and wrong depend on habit or custom and as a result vary across cultures. Consider the dazzling diversity in norms related to prudery and chastity (1993, 126): "In one and the same country virgins openly display their private parts whilst the married women carefully cover them and hide them; and there is another custom somewhat related to it: in this case chastity is only valued in the service of matrimony; girls can give themselves to whom they wish and, once pregnant, can openly abort themselves with special drugs." Prinz follows in Montaigne's footsteps (2007, 187–195) when he writes about marriage (188): "We tend to have moral attitudes toward marriage. We value monogamous, wedded, loving, pair bonds, and deviations from this idea are often considered immoral. Other cultures have different arrangements. Many cultures have arranged marriages rather than marriages out of love. Polygyny is also commonplace, in which a man has more than one wife." He concludes (191): "I do not think moral differences across cultures have been seriously exaggerated." By contrast, other philosophers have highlighted similarities across cultures (e.g., Wattles, 1996 on the golden rule); still others acknowledge that normative judgments differ but argue that such differences result from nonnormative differences in the circumstances in which judgments are made or in the diverse factual beliefs people hold across cultures (Moody-Adams, 1997).

What matters for us at this point is that cultures may have many conflicting norms, but there may nonetheless be substantial overlap in the subject matters that they normativize: sexual relations, for instance. Recent empirical work inspired by Haidt and colleagues' moral foundations theory indeed suggests that there is both a shared set of broad normativized concerns or subject matters and very substantial differences across cultures, including political cultures, in how much emphasis is placed on these concerns (Haidt & Graham, 2007; Graham et al., 2009, 2011).12 Five "foundations" have been identified: harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity. It has long been agreed that all cultures have values and norms related to harm, care, fairness, and justice (for harm, see Turiel, 1983; for fairness, see Baumard, 2010). More recently, moral foundations theory has proposed that in addition to harm/care and fairness/reciprocity, all humans also have values and norms related to ingroup/loyalty, authority/respect, and purity/sanctity. (A domain associated with liberty is sometimes added.) The first two foundations are proposed to be related to the concerns of individuals—they are "individualizing" in Graham et al.'s (2009) terminology—while the latter three are focused on strengthening groups and institutions—they are "binding."
The Moral Foundations Questionnaire, whose internal and external validity is discussed in Graham et al. (2011), measures people's endorsement of the five foundations. While moral foundations theory and the Questionnaire were originally validated by means of samples of mostly white, male, educated Americans, Graham et al. (2011) report that their five-foundations model fits reasonably well with participants' answers to the Moral Foundations Questionnaire for each of 11 world regions, including the USA, Canada, the UK, Australia, East Asia, and the Middle East. They conclude, "the measurement and theory of five foundational moral concerns is not specific to U.S. or Western participants" (2011, 379). Kim et al. (2012) in Korea, Bowman (2010) in Germany (cited in Nilsson & Erlandsson, 2015), Davies et al. (2014) in New Zealand, and Nilsson and Erlandsson (2015) in Sweden have also reported that the five-foundations model best fit participants' answers to the Moral Foundations Questionnaire (though this fit was imperfect). In addition, there are some other types of concerns that do not fit neatly into moral foundations theory yet also appear (based on anthropological evidence) to be cross-cultural universals. These include concern for some form of privacy (Altman, 1977; Margulis, 2009; Vincent, 2016), respect for property (Morris, 2015; Stake, 2004; Rochat et al., 2014), and norms regulating communication and honesty (Brown, 1991).

Moral foundations theory also holds that there is substantial variation in how much importance people from different backgrounds place on each of its foundations. Graham et al. (2011) report that world region predicted participants' answers to the Moral Foundations Questionnaire. Participants from Eastern cultures (South Asia, East Asia, Southeast Asia) endorse the ingroup and purity foundations more than Western participants (from North America and Europe). Eastern populations endorse the harm, fairness, and authority foundations only slightly more than Western participants. Graham et al. also report that the effect sizes for East-West differences are small (and smaller than the effect size for gender). Furthermore, Graham et al. (2009) have shown that American liberals endorse the harm and fairness foundations more than conservatives and that conservatives endorse the ingroup, authority, and purity foundations more than liberals. Graham et al. (2011) report that this pattern generalizes outside the USA (see also Nilsson and Erlandsson (2015) in Sweden; but see Davis et al. (2016) for black Americans). They also note that political orientation makes more of a difference for the authority and purity foundations than for harm and fairness: "across cultures, the most intractable political debates are likely to involve concerns related to respect for traditions/authorities and physical/spiritual purity, while the greatest degree of moral commonality may be found in issues related to harm and care" (2011, 379).

In addition to the variation in emphasis on different domains of normativity, there is noteworthy variation by culture in how much of life is governed by norms. Edel and Edel (1968) note that cultures differ on what sort of behavior may be evaluated; some societies tend to pass judgment on norm violations more than others.
Among the societies that normativize more activities are "Puritan" cultures (such as the Manus people of New Guinea and the Yurok of California), which emphasize thrift, sobriety, and hard work but also, more generally, individual responsibility and obligations. In contrast, in many cultures, members regulate each other less on these dimensions. Pueblo Indians may pass judgment on others' observance of rituals but tend to refrain from interfering in their economic or sexual business (Edel & Edel, 1968, 100). Along with variation in how many activities are subject
to evaluation, there is variation in who is expected to take action to enforce norms against whom in a given domain. The Chiga of western Uganda tolerate substantial "misbehavior" from a brother. Interestingly, Edel and Edel (1968) also think that the Western view that "it is the 'disinterested spectator,' non-kin, non-involved, who is most fit to pass critical moral judgments" is not widely shared (100–101).

Finally, cultures vary in the "tightness (rigidly enforced rules and norms) vs. looseness (less rigid norms, more tolerance of deviance)" of their normative systems. States within the USA and nations around the world vary with respect to how tight or loose the dominant normative system is, where tightness and looseness are a matter of the "strength of social norms and tolerance of deviant behavior" (Gelfand et al., 2011, 1101). Variation in tightness corresponds to people's subjective sense of whether they can behave in various ways in a given situation (a construct called "situational constraint") and to people's self-regulation, dutifulness, and self-monitoring (Gelfand et al., 2011; Harrington et al., 2014). Normative tightness is predicted by social and ecological factors, including ecological and historical threats.

In summary, while there is tremendous demographic (including cultural) variation in the norms people hold, as well as in the tightness and looseness of normative systems, this variation is constrained by universals: There is a core group of subjects that is invariably normativized. We might say that the normative sense has its preferred subject matters.
6. Normative Concepts and Principles

An additional set of questions concerns the concepts employed in normative cognition. Examples of such concepts include relatively "thin" or abstract concepts such as intention, side effect, and harm along with "thicker" or less abstract concepts associated, e.g., with virtue-theoretic vocabulary such as cruel, depraved, kind, and compassionate. How much commonality is there in the possession and use of these concepts across cultures?

Mikhail's (2011) hypothesis of a universal moral grammar proposes that a deontic system of concepts and rules is universally shared (see also Dwyer, 2009; Roeder and Harman, 2010). (For our purposes, we can set aside whether there is a special moral domain within the domain of the normative.) Mikhail hypothesizes that ordinary people have unconscious knowledge of a shared set of concepts, rules, and principles, as well as a "natural readiness to compute mental representations of human acts and omissions in legally cognizable terms" (2011, 39). Among the relevant concepts that he proposes humans share are the concepts of "agent, patient, act, consequence, motive, intention, circumstance, [and] proximate cause" as well as notions of voluntariness, ends, means, side effects, and prima facie wrongs, such as battery. This is an extremely bold hypothesis: On this view, people all over the world come to think of actions in normative terms through the same set of concepts.

The empirical evidence on Mikhail's proposal, particularly on the extent to which the use of normative concepts is similar across cultures, remains limited. Here we will focus on just one concept: intent. One historically important principle linking attributions of intent with moral evaluations says that actions intended to produce bad outcomes are worse than actions not intended to produce bad outcomes, whether the bad outcome is accidental or foreseen (Cushman et al., 2006; Cushman, 2008); another traditional principle states that it is wrong to intend to produce a bad outcome even when one's attempts at producing that outcome fail. A closely related idea has to do with intent's exculpatory role:
Although battery is prima facie wrong, if one did not intend to commit battery, that may mitigate how much punishment one is judged to deserve or excuse one from punishment altogether.

Much evidence for the principle that intended harms are worse than accidental and unforeseen harms has been found in the West, though the applicability of the principle varies in perhaps surprising ways: For instance, the absence of intention does less to mitigate perceived blameworthiness for bad outcomes related to impurity (Young & Saxe, 2009, 2011; Young & Tsoi, 2013). Furthermore, many legal systems have a category of crimes where consideration of intent is excluded: namely, harms covered by strict liability (Barrett et al., 2016). The observation that intent's role varies across different normative domains within a culture is consistent with the universality of our conceptions of intent and its relevance to blame and punishment. After all, the same pattern of variation could appear in each culture. But since some cultures place more importance on purity-related concerns than do others, it may be that modulation of punishment by assessments of intent plays less of a role in some cultures than in others.

Mikhail (2009) looked at a sample of forty-one legal codes in order to evaluate the prevalence of prohibitions on intentional killing across legal systems, the proportion of systems with definitions of criminal homicide that refer to mental states, and also the prevalence of "self-defense, insanity/mental illness, necessity, duress, and provocation" (505) as excuses or justifications for intentional killing. The sample was drawn from the 204 member-states of the United Nations and the Rome Statute of the International Criminal Court. He found that all these societies explicitly or implicitly had a mental element in their definition of criminal homicide. Among possible sources of "justification, excuse, or reduced punishment," 93% of the examined societies included self-defense, 93% included insanity/mental illness, 80% included necessity, 83% included duress, and 68% included provocation. Mikhail takes this as at least some preliminary evidence in support of his thesis, and it is indeed interesting that these components have been incorporated into so many legal systems. But of course, these nations have influenced each other substantially—evidence from groups with more distinct histories, especially traditional societies, is also necessary to support the idea of a universal moral or normative grammar.

Recently, Barrett et al. (2016) examined the role of intent in ten societies across six continents (eight traditional, two Western) (see also commentary in Saxe, 2016). In one task, they varied whether an action was intentional, accidental, or produced by an agent who had a motive to act or by an agent who had a motive to refrain from acting. They elicited subjects' judgments about the badness of the action, how much punishment the action warrants, and how the action would affect reputation. In nine out of the ten societies they studied, in at least some of the four types of cases they examined, intent was associated with increased severity in negative judgments about the action. The two Western groups studied (from Los Angeles and Storozhnitsa, Ukraine) showed the biggest effect of intent on mean severity judgments. The only group where intent was not associated with any difference in severity of judgments was the Yasawa, a fishing-horticulturalist group from the Pacific. As Barrett et al.
(2016) point out, this group is unusual for its "mental opacity" norms, which prohibit speculation about others' reasons under some circumstances. The study used four kinds of impermissible action: (i) an act of physical harm against another person, (ii) harm committed against a group (poisoning a village's well), (iii) theft,
and (iv) a violation of a food taboo. Consistent with what has been found among Western populations, in the purity domain—the case of the food taboo violation—intent had the smallest effect on the severity of judgment. Intent had the largest effect on the severity of judgment in the case of theft. However, there were also noteworthy differences across cultures in how much of a difference intent made in evaluations of areas (i)–(iii): personal harm, group harm, and theft. In sum, this study suggests significant cultural variation in the extent to which intent is taken into account in the severity of negative judgments about norm violations.

In a second task, Barrett et al. (2016) examined what role several types of potentially mitigating factors play in the severity of judgments about an individual who commits battery. More specifically, they looked at how severely subjects judged a harmful act: (a) committed in self-defense, (b) committed out of necessity (in order to prevent a harm), (c) due to insanity, (d) attributable to a mistake of fact, or (e) due to possession of different moral beliefs. Evaluations of these vignettes were contrasted with subjects' evaluation of an instance of intentional battery for which no reason is stated. Barrett et al. (2016) found that all ten groups treated self-defense and necessity as mitigating. Insanity and mistakes of fact were treated as highly mitigating by subjects in the two Western societies but did not mitigate severity of judgments at all among subjects from some other societies, such as the Yasawa. Possession of different moral beliefs was not treated as mitigating in any society. (Interestingly, the authors suggest that this may support the hypothesis that a weak type of moral objectivism is a cross-cultural universal.) All of the groups studied—including the Yasawa—treated some reasons for the harmful action as mitigating. Thus, all of the subjects studied take an agent's intent into account in some contexts of judgment, although there is significant variation in which reasons are treated as mitigating factors.

On this basis, the authors speculate, "The particular patterns of assessment characteristic of large-scale industrialized societies may thus reflect relatively recently culturally evolved norms rather than inherent features of human moral judgment" (4688). Whether or not this turns out to be true, current evidence suggests that some concept of intent is shared across cultures, that intent is assigned a more important role in some normative domains than others, and that it plays a role in mitigating the severity of judgments about norm violations, even if the details of its application also show surprising patterns of variation.
7. Conclusion

In this chapter, we have examined whether the normative sense universally prescribes a specific categorization scheme of norms (particularly a distinction between moral and nonmoral norms), whether norms are thought about and experienced similarly (particularly whether they are externalized), whether some subject matters are preferentially normativized, and whether the normative sense employs the same concepts across cultures. We have argued that there is no universal categorization scheme; that everywhere some norms are externalized in a weak sense; and that despite tremendous variation across cultures in emphasis and severity or "tightness," there is a set of preferred themes or concerns that is invariably normativized. Lastly, while people across cultures appear to assign some role to intent in the evaluation of behavior, this role varies substantially across populations.
Notes
1. Not all episodes of anger follow the violation of a norm; people are often angry when their desires are frustrated.
2. It is controversial how much of normative cognition is found in other primates. For discussion, see Chapter 3 of this volume.
3. For further discussion of these issues see Chapters 13, 14, and 15 of this volume.
4. For analyses of moral intuitions see Chapter 18 of this volume.
5. See Chapter 27 of this volume for a discussion of various approaches to moral education and Chapters 23 and 25 for analyses of moral expertise that might be implemented by AI.
6. See Chapter 1 of this volume for extensive discussion of this issue.
7. See Chapter 15 of this volume for discussion of the limits of pluralistic tolerance for alternative moralities.
8. For more on the idea that traditions and rituals constitute a kind of moral belief or knowledge, see Chapter 22 of this volume.
9. This is not to say that "right" and "wrong" are only used in relation to moral assessment (e.g., one can make the right call or use the right tool), but when they are used to assess actions, they tend to have a moral connotation.
10. It could of course be the case that the concepts are shared across all cultures, but not lexicalized in all languages.
11. See Chapter 9 of this volume for an extensive review of evolutionary accounts of norms of cooperation.
12. See too Chapter 15 of this volume, where this pluralistic perspective is elaborated.
References
Allen, C., Wallach, W. and Smit, I. (2006). "Why Machine Ethics?" IEEE Intelligent Systems, 21 (4), 12–17.
Altman, I. (1977). "Privacy Regulation: Culturally Universal or Culturally Specific?" Journal of Social Issues, 33 (3), 66–84.
Anderson, M. and Anderson, S. L. (2007). "Machine Ethics: Creating an Ethical Intelligent Agent," AI Magazine, 28 (4), 15.
Barrett, H. C., Bolyanatz, A., Crittenden, A. N., Fessler, D. M., Fitzpatrick, S., Gurven, M., Henrich, J., Kanovsky, M., Kushnick, G., Pisor, A. and Scelza, B. A. (2016). "Small-Scale Societies Exhibit Fundamental Variation in the Role of Intentions in Moral Judgment," Proceedings of the National Academy of Sciences, 113 (17), 4688–4693.
Baumard, N. (2010). Comment nous sommes devenus moraux: Une histoire naturelle du bien et du mal. Paris: Odile Jacob.
Beebe, J., Qiaoan, R., Wysocki, T. and Endara, M. A. (2015). "Moral Objectivism in Cross-Cultural Perspective," Journal of Cognition and Culture, 15 (3–4), 386–401.
Beebe, J. R. and Sackris, D. (2016). "Moral Objectivism Across the Lifespan," Philosophical Psychology, 29 (6), 912–929.
Berger, P. L. and Luckmann, T. (1991). The Social Construction of Reality: A Treatise in the Sociology of Knowledge (No. 10). New York: Penguin Classics.
Blair, R. J. R. (1995). "A Cognitive Developmental Approach to Morality: Investigating the Psychopath," Cognition, 57 (1), 1–29.
Boehm, C. (1999). "Fieldwork in Familiar Places: Morality, Culture, and Philosophy," American Anthropologist, 101 (2), 468–469.
Bowman, N. (2010). German Translation of the Moral Foundations Questionnaire—Some Preliminary Results. http://onmediatheory.blogspot.com.tr/2010/07/german-translation-of-moral-foundations.html
Brandt, R. B. (1954). Hopi Ethics: A Theoretical Analysis. Chicago: University of Chicago Press.
Brink, D. O. (1994). "Moral Conflict and Its Structure," The Philosophical Review, 103 (2), 215–247.
Brown, D. E. (1991). Human Universals. New York: McGraw-Hill.
Colaço, D. and Machery, E. (2017). "The Intuitive Is a Red Herring," Inquiry, 60 (4), 403–419.
Csordas, T. J. (2013). "Morality as a Cultural System?" Current Anthropology, 54 (5), 523–546. http://doi.org/10.1086/672210
Cushman, F. (2008). "Crime and Punishment: Distinguishing the Roles of Causal and Intentional Analyses in Moral Judgment," Cognition, 108 (2), 353–380.
Cushman, F., Young, L. and Hauser, M. (2006). "The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm," Psychological Science, 17 (12), 1082–1089.
Davies, C. L., Sibley, C. G. and Liu, J. H. (2014). "Confirmatory Factor Analysis of the Moral Foundations Questionnaire: Independent Scale Validation in a New Zealand Sample," Social Psychology, 45, 431–436.
Davis, D. E., Rice, K., Van Tongeren, D. R., Hook, J. N., DeBlaere, C., Worthington Jr., E. L. and Choe, E. (2016). "The Moral Foundations Hypothesis Does Not Replicate Well in Black Samples," Journal of Personality and Social Psychology, 110 (4), e23.
Doris, J. M. and Plakias, A. (2008). "How to Argue About Disagreement: Evaluative Diversity and Moral Realism," in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 2. The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 303–331.
Dwyer, S. (2009). "Moral Dumbfounding and the Linguistic Analogy: Methodological Implications for the Study of Moral Judgment," Mind & Language, 24 (3), 274–296.
Edel, M. and Edel, A. (1968). Anthropology and Ethics: The Quest for Moral Understanding, revised edition. Cleveland: The Press of Case Western Reserve University.
Fassin, D. (2008). "Beyond Good and Evil? Questioning the Anthropological Discomfort with Morals," Anthropological Theory, 8 (4), 333–344. http://doi.org/10.1177/1463499608096642
———. (ed.) (2012a). A Companion to Moral Anthropology. Hoboken: John Wiley & Sons.
———. (2012b). "Introduction: Toward a Critical Moral Anthropology," in D. Fassin (ed.), A Companion to Moral Anthropology. Hoboken: John Wiley & Sons.
Faubion, J. D. (2011). An Anthropology of Ethics. Cambridge: Cambridge University Press.
Fraser, B. (2012). "The Nature of Moral Judgements and the Extent of the Moral Domain," Philosophical Explorations, 15 (1), 1–16.
Gabennesch, H. (1990). "The Perception of Social Conventionality by Children and Adults," Child Development, 61 (6), 2047–2059.
Gelfand, M. J., Raver, J. L., Nishii, L., Leslie, L. M., Lun, J., Lim, B. C., Duan, L., Almaliach, A., Ang, S., Arnadottir, J. and Aycan, Z. (2011). "Differences Between Tight and Loose Cultures: A 33-Nation Study," Science, 332 (6033), 1100–1104.
Gill, M. B. (2009). "Indeterminacy and Variability in Meta-Ethics," Philosophical Studies, 145 (2), 215–234.
Goodwin, G. P. and Darley, J. M. (2008). "The Psychology of Meta-Ethics: Exploring Objectivism," Cognition, 106 (3), 1339–1366.
———. (2010). "The Perceived Objectivity of Ethical Beliefs: Psychological Findings and Implications for Public Policy," Review of Philosophy and Psychology, 1 (1), 1–28.
———. (2012). "Why Are Some Moral Beliefs Perceived to be More Objective Than Others?" Journal of Experimental Social Psychology, 48 (1), 250–256.
Graham, J., et al. (2016). "Cultural Differences in Moral Judgment and Behavior, Across and Within Societies," Current Opinion in Psychology, 8, 125–130.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S. and Ditto, P. H. (2011). "Mapping the Moral Domain," Journal of Personality and Social Psychology, 101 (2), 366.
Graham, J., Haidt, J. and Nosek, B. A. (2009). "Liberals and Conservatives Rely on Different Sets of Moral Foundations," Journal of Personality and Social Psychology, 96, 1029–1046.
Gray, K., Young, L. and Waytz, A. (2012). "Mind Perception Is the Essence of Morality," Psychological Inquiry, 23, 101–124.
Haidt, J. and Graham, J. (2007). "When Morality Opposes Justice: Conservatives Have Moral Intuitions That Liberals May Not Recognize," Social Justice Research, 20, 98–116.
Haidt, J., Koller, S. and Dias, M. (1993). "Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?" Journal of Personality and Social Psychology, 65, 613–628.
Harman, G. (1996). "Moral Relativism," in G. Harman and J. J. Thomson (eds.), Moral Relativism and Moral Objectivity. Cambridge, MA: Blackwell Publishers, 3–64.
Harrington, J. R. and Gelfand, M. J. (2014). "Tightness–Looseness Across the 50 United States," Proceedings of the National Academy of Sciences, 111 (22), 7990–7995.
Heintz, M. (ed.) (2009a). The Anthropology of Moralities. New York: Berghahn Books.
———. (2009b). "Introduction: Why There Should Be an Anthropology of Moralities," in M. Heintz (ed.), The Anthropology of Moralities. New York: Berghahn Books.
Heiphetz, L. and Young, L. L. (2017). "Can Only One Person Be Right? The Development of Objectivism and Social Preferences Regarding Widely Shared and Controversial Moral Beliefs," Cognition, 167, 78–90.
Heywood, P. (2015). "Freedom in the Code: The Anthropology of (Double) Morality," Anthropological Theory, 15 (2), 200–217.
Hitlin, S. and Vaisey, S. (2010). "Back to the Future," in Handbook of the Sociology of Morality. New York: Springer, 3–14.
———. (2013). "The New Sociology of Morality," Annual Review of Sociology, 39.
Huemer, M. (2005). Ethical Intuitionism. Basingstoke: Palgrave Macmillan.
Joyce, R. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
Keane, W. (2015). Ethical Life: Its Natural and Social Histories. Princeton: Princeton University Press.
Kim, K. R., Kang, J-S. and Yun, S. (2012). "Moral Intuitions and Political Orientation: Similarities and Differences Between Korea and the United States," Psychological Reports: Sociocultural Issues in Psychology, 111, 173–185.
Kittay, E. F. (1982). "On Hypocrisy," Metaphilosophy, 13 (3–4), 277–289.
Kelly, D., Stich, S., Haley, K. J., Eng, S. J. and Fessler, D. M. (2007). "Harm, Affect, and the Moral/Conventional Distinction," Mind & Language, 22 (2), 117–131.
Khoo, J. and Knobe, J. (2018). "Moral Disagreement and Moral Semantics," Noûs, 52, 109–143.
Kumar, V. (2015). "Moral Judgment as a Natural Kind," Philosophical Studies, 172 (11), 2887–2910.
Laidlaw, J. (2002). "For an Anthropology of Ethics and Freedom," Journal of the Royal Anthropological Institute, 8 (2), 311–332.
Loeb, D. (1998). "Moral Realism and the Argument from Disagreement," Philosophical Studies, 90, 281–303.
Machery, E. (2012). "Delineating the Moral Domain," The Baltic International Yearbook of Cognition, Logic and Communication, 7. doi:10.4148/biyclc.v7i0.1777
———. (2014). "Social Ontology and the Objection from Reification," in M. Gallotti and J. Michael (eds.), Perspectives on Social Ontology and Social Cognition. Dordrecht: Springer, 87–100.
———. (2017). Philosophy Within Its Proper Bounds. Oxford: Oxford University Press.
———. (2018). "Morality: A Historical Invention," in J. Graham and K. Gray (eds.), The Atlas of Moral Psychology. New York: Guilford Press, 259–265.
Machery, E., Kelly, D. and Stich, S. P. (2005). "Moral Realism and Cross-Cultural Normative Diversity," Behavioral and Brain Sciences, 28 (6), 830–830.
Machery, E. and Mallon, R. (2010). "Evolution of Morality," in John M. Doris and The Moral Psychology Research Group (eds.), The Moral Psychology Handbook. Oxford: Oxford University Press, 3–46.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. New York: Penguin Classics.
Margulis, S. T. (2009). "Psychology and Privacy," in David Matheson (ed.), Contours of Privacy. Newcastle: Cambridge Scholars Publishing.
Mattingly, C. (2012). "Two Virtue Ethics and the Anthropology of Morality," Anthropological Theory, 12 (2), 161–184.
Mikhail, J. (2009). "Is the Prohibition of Homicide Universal? Evidence from Comparative Criminal Law," Brooklyn Law Review, 75, 497.
———. (2011). Elements of Moral Cognition: Rawls' Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge: Cambridge University Press.
Montaigne, M. (1993). The Complete Essays. London: Penguin Classics.
Moody-Adams, M. M. (1997). Fieldwork in Familiar Places: Morality, Culture, and Philosophy. Cambridge, MA: Harvard University Press.
Morris, I. (2015). Foragers, Farmers, and Fossil Fuels: How Human Values Evolve. Princeton: Princeton University Press.
Murphy, J. G. (1982). "Forgiveness and Resentment," Midwest Studies in Philosophy, 7 (1), 503–516.
Nichols, S. (2004). Sentimental Rules. Oxford: Oxford University Press.
Nichols, S. and Folds-Bennett, T. (2003). "Are Children Moral Objectivists? Children's Judgments About Moral and Response-Dependent Properties," Cognition, 90, B23–B32.
Nilsson, A. and Erlandsson, A. (2015). "The Moral Foundations Taxonomy: Structural Validity and Relation to Political Ideology in Sweden," Personality and Individual Differences, 76, 28–32.
Nucci, L. P., Krettenauer, T. and Narváez, D. (eds.) (2008). Handbook of Moral and Character Education. London: Routledge.
O'Neill, E. (2017). "Kinds of Norms," Philosophy Compass, 12 (5).
———. (2018). "Generalization and the Experience of Obligations as Externally Imposed: Distinct Contributors to the Evolution of Human Cooperation," Behavioral and Brain Sciences, 41, e108.
O'Neill, E. and Machery, E. (2014). "Experimental Philosophy: What Is It Good for?" in Current Controversies in Experimental Philosophy. New York: Routledge, vii–xxix.
Patel, S. and Machery, E. (2018). "Do the Folk Need a Metaethics?" Behavioral and Brain Sciences, 41, e109.
Prinz, J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
———. (2008). "Resisting the Linguistic Analogy: A Commentary on Hauser, Young, and Cushman," Moral Psychology, 2, 157–170.
Quintelier, K. J., De Smet, D. and Fessler, D. M. (2013). "The Moral Universalism-Relativism Debate," KLESIS—Revue Philosophique, 27, 211–262.
Rai, T. S. and Holyoak, K. J. (2013). "Exposure to Moral Relativism Compromises Moral Behavior," Journal of Experimental Social Psychology, 49 (6), 995–1001.
Rochat, P., Robbins, E., Passos-Ferreira, C., Oliva, A. D., Dias, M. D. and Guo, L. (2014). "Ownership Reasoning in Children Across Cultures," Cognition, 132 (3), 471–484.
Roeder, E. and Harman, G. (2010). "Linguistics and Moral Theory," in John Doris and the Moral Psychology Research Group (eds.), The Moral Psychology Handbook. Oxford: Oxford University Press, 272–295.
Sarkissian, H. (2016). "Aspects of Folk Morality," A Companion to Experimental Philosophy, 212–224.
Sarkissian, H., Park, J., Tien, D., Wright, J. C. and Knobe, J. (2011). "Folk Moral Relativism," Mind & Language, 26, 482.
Saxe, R. (2016). "Moral Status of Accidents," Proceedings of the National Academy of Sciences, 113 (17), 4555–4557.
Schmidt, M. F., Gonzalez-Cabrera, I. and Tomasello, M. (2017). "Children's Developing Metaethical Judgments," Journal of Experimental Child Psychology, 164, 163–177.
Shafer-Landau, R. (2003). Moral Realism: A Defence. Oxford: Oxford University Press.
———. (2012). "Evolutionary Debunking, Moral Realism and Moral Knowledge," Journal of Ethics & Social Philosophy, 7.
Shweder, R. A., Mahapatra, M. and Miller, J. G. (1987). "Culture and Moral Development," The Emergence of Morality in Young Children, 1–83.
Skitka, L. J. (2010). "The Psychology of Moral Conviction," Social and Personality Psychology Compass, 4 (4), 267–281.
Smith, M. (1994). The Moral Problem. Oxford: Blackwell.
Sousa, P., Holbrook, C. and Piazza, J. (2009). "The Morality of Harm," Cognition, 113 (1), 80–92.
Southwood, N. (2011). "The Moral/Conventional Distinction," Mind, 120 (479), 761–802.
Stake, J. E. (2004). "The Property 'Instinct'," Philosophical Transactions of the Royal Society London B, 359, 1763–1774.
Stanford, P. K. (2017). "The Difference Between Ice Cream and Nazis: Moral Externalization and the Evolution of Human Cooperation," Behavioral and Brain Sciences, 1–57. doi:10.1017/S0140525X17001911
Sterelny, K. (2010). "Moral Nativism: A Sceptical Response," Mind & Language, 25 (3), 279–297.
Stich, S. P. (2018). "The Quest for the Boundaries of Morality," in A. Zimmerman, K. Jones and M. Timmons (eds.), Routledge Handbook of Moral Epistemology. London: Routledge.
Street, S. (2006). "A Darwinian Dilemma for Realist Theories of Value," Philosophical Studies, 127, 109–166.
Therborn, G. (2002). "Back to Norms! On the Scope and Dynamics of Norms and Normative Action," Current Sociology, 50 (6), 863–880.
Tomasello, M. (2016). A Natural History of Human Morality. Cambridge, MA: Harvard University Press.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. Cambridge: Cambridge University Press.
Vincent, D. (2016). Privacy: A Short History. Hoboken: John Wiley & Sons.
Vincent, S., Ring, R. and Andrews, K. (2017). "Normative Practices of Other Animals," in A. Zimmerman, K. Jones and M. Timmons (eds.), Routledge Handbook of Moral Epistemology. London: Routledge.
Wainryb, C., Shaw, L. A., Langley, M., Cottam, K. and Lewis, R. (2004). "Children's Thinking About Diversity of Belief in the Early School Years: Judgments of Relativism, Tolerance, and Disagreeing Persons," Child Development, 75, 687–703.
Wattles, J. (1996). The Golden Rule. Oxford: Oxford University Press.
Wierzbicka, A. (2001). What Did Jesus Mean? Explaining the Sermon on the Mount and the Parables in Simple and Universal Human Concepts. New York: Oxford University Press.
———. (2007). "Moral Sense," Journal of Social, Evolutionary, and Cultural Psychology, 1, 66–85.
Wright, J. C., Cullum, J. and Grandjean, P. (2014). "The Cognitive Mechanisms of Intolerance," Oxford Studies in Experimental Philosophy, 1.
Wright, J. C., Grandjean, P. T. and McWhite, C. B. (2013). "The Meta-Ethical Grounding of Our Moral Beliefs: Evidence for Meta-Ethical Pluralism," Philosophical Psychology, 26 (3), 336–361.
Yan, Y. (2011). "How Far Away Can We Move from Durkheim?—Reflections on the New Anthropology of Morality," Anthropology of this Century, 2. http://aotcpress.com/articles/move-durkheim-reflections-anthropology-morality/
Young, L. and Durwin, A. J. (2013). "Moral Realism as Moral Motivation: The Impact of Meta-Ethics on Everyday Decision-Making," Journal of Experimental Social Psychology, 49 (2), 302–306.
Young, L. and Saxe, R. (2009). "Innocent Intentions: A Correlation Between Forgiveness for Accidental Harm and Neural Activity," Neuropsychologia, 47 (10), 2065–2072.
———. (2011). "When Ignorance Is No Excuse: Different Roles for Intent Across Moral Domains," Cognition, 120 (2), 202–214.
Young, L. and Tsoi, L. (2013). "When Mental States Matter, When They Don't, and What That Means for Morality," Social and Personal Psychology Compass, 7 (8), 585–604.
Zigon, J. (2008). Morality: An Anthropological Perspective. New York: Berg.
Further Readings
An important early effort to bring moral philosophy and anthropology into contact is M. Edel and A. Edel, Anthropology and Ethics, revised edition (Cleveland: The Press of Case Western Reserve University, 1968). For a classic overview of ethnographic evidence about universals, see D. E. Brown, Human Universals (New York: McGraw-Hill, 1991). There are several recent anthologies in anthropology addressing the topic of morality, including M. Heintz, ed., The Anthropology of Moralities (New York: Berghahn Books, 2009) and S. Howell, ed., The Ethnography of Moralities (London: Routledge, 2005). For a brief survey of the field, see J. Zigon, Morality: An Anthropological Perspective (New York: Berg, 2008). For a review of recent psychology research on diversity in moral judgments and behaviors, see J. Graham,
et al., “Cultural Differences in Moral Judgment and Behavior, Across and Within Societies,” Current Opinion in Psychology, 8, 125–130, 2016. Mikhail’s proposal for a universal moral grammar is laid out in J. Mikhail, Elements of Moral Cognition: Rawls’ Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment (Cambridge: Cambridge University Press, 2011). See also W. Keane, Ethical Life: Its Natural and Social Histories (Princeton: Princeton University Press, 2015) for an interesting proposal on the nature and origins of human ethics that integrates a wide variety of empirical work.
Related Chapters
Chapter 1 The Quest for the Boundaries of Morality; Chapter 3 Normative Practices of Other Animals; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How; Chapter 25 Moral Expertise; Chapter 30 Religion and Moral Knowledge.
3 NORMATIVE PRACTICES OF OTHER ANIMALS
Sarah Vincent, Rebecca Ring, and Kristin Andrews
Introduction

Traditionally, discussions of moral participation—and in particular moral agency—have focused on fully formed human actors. There has been some interest in the development of morality in humans, as well as interest in cultural differences when it comes to moral practices, commitments, and actions. However, until relatively recently, there has been little focus on the possibility that nonhuman animals have any role to play in morality, save being the objects of moral concern (e.g., DeGrazia, 1996; Gruen, 2002; Rollin, 2007; Singer, 1975). Moreover, when nonhuman cases are considered as evidence of moral agency or subjecthood,1 there has been an anthropocentric tendency to focus on those behaviors that inform our attributions of moral agency to humans. For example, some argue that the ability to evaluate the principles upon which a moral norm is grounded is required for full moral agency (e.g., Korsgaard, 1992, 2006, 2010; Rowlands, 2012). Certainly, if a moral agent must understand what makes an action right or wrong, then most nonhuman animals would not qualify (and perhaps some humans, too). However, if we are to understand the evolution of moral psychology and moral practice, we need to turn our attention to the foundations of full moral agency. We must first pay attention to the more broadly normative practices of other animals.2

In Part 1 of this chapter, we will examine the recent attention to animal moral practice by philosophers and animal cognition researchers and argue that their approach underestimates the distribution of normative practice in animals by focusing on highly developed versions of morality. In Part 2, we will argue for an approach to examining animal normative participation that begins with a categorization of the practices that may evidence valuing. Parts 3 and 4 will consider evidence that great apes and cetaceans participate in normative practice. We will conclude in Part 5 by considering some implications of our view.
1. Current Theorizing of Animal Moral Participation

Philosophical and psychological interest in the evolution of morality and the possibility of moral participation among other animals has been growing in recent years (Andrews &
Gruen, 2014; Bekoff & Pierce, 2009; Flack & de Waal, 2000; Hauser, 2006; Kitcher, 2011; Korsgaard, 2006; Plutchik, 1987; Preston and de Waal, 2002; Rowlands, 2012; Tomasello, 2016; Varner, 2012; de Waal, 1996, 2006, 2009). While these approaches start with different assumptions and draw different conclusions about animal moral participation, they all ground their approaches in some recognized philosophical moral theory. Hauser adopts a contractarian approach to ethics, both Kitcher and Korsgaard accept versions of deontology, Varner assumes Hare's version of utilitarianism, and so on. Arguments that go on to suggest that animals do have some degree of moral participation, beyond being objects of concern, are often framed in terms of animals having empathy or sympathy (with Rowlands, de Waal, Andrews, and Gruen aligning in this respect). On the other hand, arguments suggesting that animals lack moral participation are often based on a pair of assumptions: (a) that metacognition is required to govern oneself autonomously and (b) that self-governance is essential to morality (as Korsgaard and Kitcher would have it). In fact, rather than investigating moral practice more generally, these projects typically look to see whether a nonhuman animal has what it takes to be a good Humean, a good Kantian, a good Rawlsian, etc. Additionally, philosophical discussions of animal morality center on four sets of psychological properties that are proposed to be cognitive requirements for moral participation: (i) consciousness, observation, and metacognition (Kantianism, contractarianism, naturalism), (ii) empathy or other-regarding emotions (sentimentalism, utilitarianism), (iii) personality traits and the ability to improve them (virtue ethics), and (iv) social roles and relations (feminist ethics, care ethics).

The empirical data given to support the view that nonhuman animals have a proto-ethics (or are moral subjects or agents) often consists in observations of behavior that would be deemed praiseworthy if performed by a human. For instance, in his discussion of the phylogenetic building blocks of morality, de Waal (2014) describes morality as a system of rules that revolves around helping and not hurting, emphasizing the well-being of others and the value of the community above the value of the individual.3 Given this framework, de Waal argues that chimpanzees display the kinds of empathy and reciprocity necessary to meet the demands of morality (de Waal, 2013). Rowlands (2012) argues that animals can be moral subjects insofar as their actions track objective moral reasons for good action, evidenced by their demonstration of concern. Bekoff and Pierce (2009) focus on behaviors that they deem consistent with cooperation, empathy, and justice.

This focus on what we might take to be laudable animal action reflects our common practice when we use the term 'moral,' because when we call someone 'moral,' we typically do so with the intention to offer praise. A moral person helps others and refrains from harming others out of her concern for well-being or the greater good. Or a moral person recognizes the intrinsic value of others and treats them accordingly. Likewise, when we call someone 'immoral,' we place them into the sphere of morality, but we do so in order to offer condemnation or at least correction. However, this focus on laudable acts hinders our examination of the evolution of morality, given that the entryway into morality need not require objectively good behavior.
When the investigation into animal morality only identifies laudable acts as evidence of moral participation, and when we look for evidence of specifically moral norms, we lose sight of the basic cognitive requirement for moral agency—namely, ought-thought, which is a cognitive modality much like mental time travel or counterfactual thinking. Thinking about what ought to be the case—like thinking about what happened in the past, what
might happen in the future, and what might be the case under various circumstances—is a cognitive mode that requires the thinker to do more than represent what is currently the case. The cognitive mode of thinking about what ought to be the case is what we will refer to here as naïve normativity (Andrews, in preparation). Naïve normativity is meant to be a broader category of ought-thought than specifically moral thought, though it is a cognitive building block that makes moral thought possible. We understand naïve normativity to include diverse instances of valuing, some of which are not moral. For example, if someone wears the shoes of her favorite celebrity, she thinks of this celebrity as a fashion ideal. That is a kind of normative thought. If a person uses toilet paper because she implicitly recognizes that this is a sanitary expectation of those with whom she interacts, she is influenced by normative thinking. The same might be said if she takes off her shoes before entering a home to honor the wishes of the homeowner, or the gods, or the community at large. If we begin our theorizing with a focus on normative thought and participation understood in this broad way, we can better reconstruct the emergence of moral thinking across and within species—without having to identify this early stage of moral evolution with any particular moral theory.

Let us clarify what we mean by 'valuing' in terms of naïve normativity. Some may object that normative thought should not be understood as valuing, since we value what we desire, and desire is too widespread an attitude to be considered properly normative. We do not contend that 'valuing' and 'normative thought' are synonymous, though we do think of valuing as necessary (though not sufficient) for normative thought. Nevertheless, introducing the language of 'valuing' allows us to begin to wrestle with the difficulty of delineating the normative sphere. When we speak of normative practices, we mean to signal patterns of behavior shared by members of a community that demonstrate they value certain ways of doing things as opposed to others. Thus, we would not say that an individual preference (though perhaps an instance of valuing) is a normative practice. Still, by adopting the language of valuing as opposed to merely talking about normativity as ought-thought, we hope to push back on the anthropocentrism that sometimes lurks behind discussions of ought-thought that focus on its articulation in language. In addition, by talking about 'valuing,' we are able to emphasize that normative behaviors can be observed within group practice. For example, when we see a group of meerkats mobbing a snake in their midst, we can see that they value eliminating the snake.

By reframing the discussion in terms of normativity rather than morality, we can leave behind a number of traditional distinctions that are often invoked in the discussion of moral development and evolution. The moral/conventional, prudential/moral, and etiquette/moral distinctions can all be set aside, as the practices of developing and following group norms are all cases of ought-thought in action.4 Norms, regardless of their content, are all action-guiding, aspirational ideals that individuals work toward, whether they are the norms of how best to open a coconut or the norms of how to be a reliable friend. By focusing on the normative rather than the specifically moral, we can also set aside traditional worries about the evolution of morality.
It does not matter whether an action is self- or other-directed, whether the norm guiding an animal’s behavior is properly cultural or ‘merely’ biological, or whether her motivation to conform to it is internal or external. A behavior may be in some sense self-directed, biological, and externally motivated—and still count as guided by ought-thought in the sense at issue.
Furthermore, researchers should take note that norms can have a dark side. Norms lead us to express empathy and behave fairly with others, but they can also lead us to express disgust inappropriately and to behave unfairly with others. For example, revenge can be a manifestation of normative thought, even when based on an inaccurate assessment of the crimes one is seeking to redress. More broadly, norms are appealed to in order to justify wars, terrorism, slavery, and oppression of all sorts. We claim that there is evidence that great apes and cetaceans participate in normative practices and that many other species might as well. Whether they participate in morality is another topic that depends on a number of additional factors, not least of which is one’s ethical theory. Instead of asking whether or not animals engage in moral practice, we will investigate the more general question of whether or not animals engage in normative practice, ultimately defending an affirmative response to this question.
2. Types of Normative Practice

Thus far, we have suggested that philosophers and animal cognition researchers underestimate the distribution of more basic normative practices in animals by focusing on moral behaviors. Still, the important work on morality can shed light on normativity. In this section, we sketch various categories of normative practice5 (some of which are also moral) in order to examine whether or not we see evidence of the relevant kind of valuing in the actions of members of other species.

By reframing the question to focus on normativity as opposed to morality, we mean to broaden the space of consideration. That is, normativity includes a variety of practices involving valuing or ought-thought—whether or not that valuing or ought-thought manifests in concern for another, involves the attribution of praise or blame, or can be defended through the provision of reasons for acting some way or another. Consider the following cases: correcting the way a child holds her dining utensils, caring about our friends’ allegiance to our city’s football team, helping our partner fold clothes the right way, or pulling over to the side of the road to accommodate a funeral procession. These actions or attitudes matter to us, and we care how they are performed or adopted by others—but this kind of feeling is generally not taken to be sufficient for morality.

Though their focus is on moral behaviors, psychologists Haidt et al. (2009) state that there is “some evidence of continuity with the social psychology of other primates,” albeit stopping short of calling this a continuity of morality (Haidt et al., 2009, 111). Their project, building upon Shweder and Haidt (1993), offers an account of the psychological foundations that underpin moral systems, despite the diversity of these systems. Initially, these researchers posited that five such foundations exist: care, fairness, loyalty, authority, and purity. These foundations manifest in concerns about the suffering of others; inequality, unfair practice, and justice; loyalty, self-sacrifice, and betrayal; obedience, respect, and role fulfillment; and contagion and control of desires, respectively. Perhaps to capture both the good and bad sides of the moral story, the labels assigned to these foundations have since been modified to emphasize harm, cheating, betrayal, subversion, and degradation as the respective counterparts to care, fairness, loyalty, authority, and sanctity.6 A sixth foundation has also been suggested by Iyer et al. (2012):
liberty/oppression, involving concerns about restrictions to freedom and autonomy, which often come into conflict with the authority foundation.7

Also concerned with the evolution of morality, psychologists Krebs and Janicki (2002) describe five categories of moral norms: obedience norms, reciprocity norms, care-based and altruistic norms, social responsibility norms, and norms of solidarity. There is a great deal of overlap between this account and the moral foundations theory of Haidt et al., with more or less direct correspondence between obedience norms and the authority/subversion foundation, between reciprocity norms and the fairness/cheating foundation, between care-based and altruistic norms and the care/harm foundation, and between social responsibility norms and the loyalty/betrayal foundation.

This breadth of scope is not always found in accounts that focus on moral practice in other animals. For example, ethologist Marc Bekoff and philosopher Jessica Pierce limit their analysis to three ‘clusters’ of behavior: “the cooperation cluster (including altruism, reciprocity, honesty, and trust), the empathy cluster (including sympathy, compassion, grief, and consolation), and the justice cluster (including sharing, equity, fair play, and forgiveness)” (Bekoff & Pierce, 2009, xiv). These clusters capture only two of the categories of norms and foundations on offer from Haidt et al., Iyer et al., and Krebs and Janicki—with altruism, sympathy, compassion, grief, consolation, and forgiveness being accommodated by the care-based norms and the care/harm foundation, while reciprocity norms and the fairness/cheating foundation incorporate reciprocity, honesty, trust, sharing, equity, and fair play. Our point is not to diminish the importance of Bekoff and Pierce’s work, which is remarkable both in its insistence that nonhuman animals engage in a panoply of moral behaviors and in its provocative discussion of species-relative moral agency. Rather, the point is that Bekoff and Pierce, like many other philosophers and researchers, focus on the kinds of behaviors associated with laudable moral actions rather than thinking more broadly about the more general class of normative practices of which these form a part.

We combine the theoretical frameworks of Haidt et al., Iyer et al., and Krebs and Janicki to establish a range of normative practices that we can examine conceptually and empirically. In the following sections, we will argue that these kinds of normative practices are present among at least some nonhuman animals, but first we must get our conceptual footing.

Obedience norms (Figure 3.1) can be reflected in the following kinds of behaviors: (a) displays of authority and respect, policing, or subversion (such as when wolf pack leaders police and interrupt sex acts between subversive female members and male outsiders), (b) demonstrations of guilt (including displays of submission in response to correction, as when a dog tucks her tail when being scolded for toppling the trash), (c) the meting out of punishments (such as when chimps destroy food that was taken by a thieving conspecific), or (d) more general teaching and obedience cases (including nonmoral cases of teaching practices like instruction on how to correctly use a tool and praise of successful usage).
Reciprocity norms (Figure 3.2) are at play in the following behaviors: (a) demands for fairness or cases of cheating (such as capuchin monkeys protesting when they are given a less desirable food for completing the same task for which a conspecific is rewarded with a more desirable food), (b) instances of direct reciprocity, cooperation, mutualism, or proportionality in dyadic exchanges (including sharing or exchanging goods for mutual benefit), or (c) preferential selection of or treatment of individuals (such as when chimps choose to beg from a generous human as opposed to a selfish one).
Obedience Norms

Authority and subversion
• Chimps: Hierarchical societies in which the dominant male must be deferred to (de Waal, 1982)
• Cetaceans: Male bottlenose dolphins establish hierarchical dominance relationships (Connor & Norris, 1982; Connor et al., 2000)

Punishment
• Chimps: Destroy food stolen from them but not food given to the other (Jensen et al., 2007b); lack of evidence of third-party punishment in an experimental captive setting (Riedl et al., 2012)
• Cetaceans: After being trained by ‘time-outs,’ a dolphin gives a ‘time-out’ to a researcher whenever offered food has unwanted parts (Reiss, 2011)

Teaching and obedience
• Chimps: Demonstration teaching, with correction (Pruetz & Bertolani, 2007; Boesch, 1991, 1993); teach by inhibition, preventing another individual from acting (e.g., mothers pull infants away from plants not normally in diet) (Hiraiwa-Hasegawa, 1990); mothers intervene when infants play with unusual or dangerous objects (Hirata, 2009); adults tolerate youngsters closely watching them perform tasks and permit touching or taking tools (see Van Schaik, 2003 for a review)
• Cetaceans: Dolphin mothers teach calves to produce and manipulate bubbles that are used in hunting (Kuczaj II & Walker, 2006); dolphin mothers teach foraging tactics to calves: pursue prey longer, make more referential body-orienting movements, and manipulate prey longer while calves observe (Bender et al., 2008); orca mothers teach hunting techniques to calves: push them on and off beach and orient them toward prey (Whitehead & Rendell, 2015)

Figure 3.1 Obedience norms: regarding relationships of authority or dominance
Many kinds of behaviors suggest the presence of caring or altruistic norms (Figure 3.3): (a) acts of care-giving and consolation by an observer (including responses to harm/injury, loss, or illness), (b) targeted helping acts on the part of an agent (which often involve the agent putting herself in immediate danger, such as when whales capsize hunting boats in response to the distress of injured conspecifics), (c) responses to one’s own loss (which can refer to the loss of anything one values, as diverse as the loss of food or the death of a conspecific; e.g., when captive polar bear Arturo exhibited behaviors that were widely described as consistent with depression following the death of his cage-mate Pelusa), or (d) emotion recognition (such as identifying emotions in conspecifics via direct perception of their facial expressions or behaviors).

While reciprocity norms typically occur in the context of dyadic relationships, social responsibility norms (Figure 3.4) are manifested in behaviors that are aimed at benefitting all members of one’s in-group, such as: (a) cases of indirect reciprocity or cooperation (like distributing acquired goods to one’s group members or using divisions of labor), (b) acts of loyalty to or betrayal of one’s group, or (c) acts of aversion and protesting8 (including aversions to incest, killing, or pollution).
Reciprocity Norms

Fairness and cheating
• Chimps: Share food that is easily divided (Hare et al., 2007); refuse to participate in tasks upon witnessing another receive a higher-valued reward (Brosnan et al., 2005, 2010); accept all offers and fail to reject unfair offers in ultimatum game (Jensen et al., 2007a)

Direct reciprocity, cooperation, mutualism, and proportionality
• Chimps: Coordinate rope pulling to access food (Crawford, 1937; Hirata & Fuwa, 2007); share food gained after hunting monkeys proportional to effort (Boesch, 1994); dyads with strong social bonds cooperate to get food in an experimental setting (Melis et al., 2006); dominant male and infant coordinate lever pulling to access food, but others fail to work with dominant (Chalmeau, 1994; Chalmeau & Gallo, 1996a, b); share and coordinate tool use in order to gain access to food (Melis & Tomasello, 2013); chimpanzees in long-term relationships share food and engage in grooming (Jaeggi et al., 2013)
• Cetaceans: Two dominant male dolphins, but not subordinates, coordinate rope-pulling to access and share food, and then synchronously interact with the emptied container (Kuczaj et al., 2015); orcas share prey non-aggressively: each takes a piece of prey and swims in opposite directions, tearing the meat (Guinet et al., 2000); male bottlenose dolphins form alliances that collaborate in securing consortships of females, competing with other groups to do so (Connor et al., 2000)

Preference for individuals; discrimination
• Chimps: Keep track of and tend to support past supporters (de Waal & Luttrell, 1988); adults more likely to share food with individuals who had groomed them (Brosnan & de Waal, 2002); chimpanzees, bonobos, and orangutans distinguish between true and false beliefs in their helping behavior; they infer a human’s goal and help them achieve it (Buttelmann et al., 2017); prefer to beg from a generous human donor over a selfish one (Subiaul et al., 2008); prefer to select more skillful collaborators in a rope-pulling cooperation task (Hirata & Fuwa, 2007; Melis et al., 2006); juveniles self-handicap when playing with weaker individuals; also evidence of role reversal (Hayaki, 1985); remember who attacked them and are more likely to attack former attackers (de Waal & Luttrell, 1988); prefer to cooperate with partners who share rewards more equitably (Melis et al., 2009)
• Cetaceans: Bonded male dolphins perform specific affiliative behaviors with each other: synchronous swimming, petting, and adjusting signal whistles to match (Stanton & Mann, 2014; Tyack, 2000)

Figure 3.2 Reciprocity norms: regarding relationships of support or mutual benefit
Care Norms

Caring and consolation
• Chimps: Console those who lose fights and reconcile after fights (de Waal & van Roosmalen, 1979; Kutsukake & Castles, 2004; de Waal, 2009); console bonded individuals in distress (Fraser et al., 2008)
• Cetaceans: ‘Stand by’ others in distress, staying close but not offering aid, often in dangerous situations such as whaling (Connor & Norris, 1982)

Targeted helping/hurting
• Chimps: No preference for food delivery method that also delivered food to a conspecific (Silk et al., 2005); help a human obtain out-of-reach objects (Warneken et al., 2007); prefer to use a token that supplied food to self and conspecific rather than only to self (Horner et al., 2011); note Skoyles’ (2011) interpretation of this behavior as mean-spirited, not pro-social (but still normative); help another chimpanzee even when there is no direct benefit to self (Yamamoto et al., 2009); target individuals to kill, castrate, and disembowel (Peterson & Wrangham, 2003; Boesch et al., 2008; Wilson et al., 2014); males and dominants aid females and youth in road crossing (Hockings et al., 2006)
• Cetaceans: ‘Support’ others in distress, pressing them to the surface until the supported recovers or dies; observed intra- and interspecifically (Connor & Norris, 1982; Williams, 2013); help others deliver infants and help raise newborns to surface (Connor & Norris, 1982; McKenna, 2015; Whitehead & Rendell, 2015); approach injured individuals, show violent or excited behavior, come between captors and the injured, bite or attack capture vessels, and push the injured away from captors; observed intra- and interspecifically (Connor & Norris, 1982); dolphins approached a sailor who fell overboard then approached search boats, going back and forth, thereby leading human rescuers to the sailor (Whitehead & Rendell, 2015); orcas guided lost researchers by surrounding and staying with the boat until they reached home, then swam away in opposite direction (Morton, 2002); humpback whales interfere with orca predatory attacks on various species, sometimes rescuing the prey (Pitman & Durban, 2009; Pitman et al., 2016); a bottlenose dolphin guided a mother/calf pygmy sperm whale pair out of an area of sandbars upon which they were repeatedly stranding (Lilley, 2008); a captive orca attacked and killed a human trainer at SeaWorld, holding the trainer underwater too long (Kirby, 2012; Neiwert, 2015)

Response to loss (grief)
• Chimps: Mothers carry dead infants until they are mummified (Biro et al., 2010); responses to dying and death include caring for the dying individual, examining for signs of life, male aggression to the corpse, all-night attendance by adult daughter, cleaning the corpse, and subsequent avoidance of the place of death (Anderson et al., 2010)
• Cetaceans: Adults carry dead calves and juveniles, sometimes until they decompose (Connor & Norris, 1982; Reggente et al., 2016); captive orca Bjossa remained with her dead calf for days, touching her and preventing humans from approaching (www.apnewsarchive.com/1995/Killer-Whale-Calf-Loses-Fight-for-Life/id-0a2a8961200d44de8938963260ce058b); captive orca Corky made specific distress vocalizations and refused food for days after calf died (Morton, 2002)

Emotion recognition
• Chimps: Recognize basic emotions in facial expressions (Parr et al., 2007)

Figure 3.3 Care norms: regarding the wellbeing of others
Social Responsibility Norms

Loyalty/betrayal
• Chimps: Trust friends but not nonfriends to share food (Engelmann & Herrmann, 2016); form alliances with intragroup support (de Waal, 1982)
• Cetaceans: When transient orcas are detected nearby, resident orca groups move into and hold a defensive formation and vocalize in low grunts (Morton, 2002); a resident orca group aggressively chased and attacked a transient group, driving them into a harbor toward the beach (Ford & Ellis, 1999)

Aversion and protesting
• Chimps: In an ultimatum game, make more equitable divisions after partner protests (Proctor et al., 2013); protest infanticide (Rudolf von Rohr et al., 2011); bonobos protest unexpected social violations (Clay et al., 2016)
• Cetaceans: Neither sex disperses from resident orca natal groups; with no inbreeding, mating occurs within community and sometimes clan but never the same pod (Barrett-Lennard, 2000); after a human approached a dolphin calf, the mother approached the familiar tour group leader, rather than the trespasser, and tail slapped the water; authors interpret this as protesting norm violations (White, 2007; Whitehead & Rendell, 2015)

Distribution of labor based on skill
• Chimps: Cooperatively hunt monkeys in groups of four after years of training (Boesch, 1994)
• Cetaceans: One dolphin (‘the driver’) herds fish against a wall of conspecifics; the same individual in each group repeatedly serves as driver (Gazda et al., 2005); one dolphin swims in circles around a shoal of fish and strikes the muddy bottom with its tail, creating a mud-ring around the fish, while the rest of the group gathers outside of the ring, catching jumping fish (Torres & Read, 2009); humpback whales specialize in different elements of cooperative foraging; particular individuals are bubble-blowers or trumpeters (Whitehead & Rendell, 2015)

Indirect reciprocity; cooperation for the benefit of the group
• Chimps: Break hunting snares, thereby protecting group members (Ohashi & Matsuzawa, 2011)
• Cetaceans: Transient orcas coordinate hunting and share prey (Saulitis et al., 2000); both orca and dolphin groups herd fish into balls and take turns feeding (Similä & Ugarte, 1993); humpback whales cooperate to corral herring, blowing encircling bubble nets, blasting herring with sound, and using their flippers (Whitehead & Rendell, 2015); sperm whale females take turns babysitting each other’s calves while mothers dive to hunt (Whitehead & Rendell, 2015)

Figure 3.4 Social responsibility norms: regarding social roles and duties that benefit the group
Finally, solidarity norms (Figure 3.5), though perhaps less recognized in other species than the other norms we have discussed, may be manifested in (a) practices that reinforce group identity or culture, (b) instances of self-sacrifice in solidarity with one’s group (like cetaceans beaching themselves collectively), or (c) displays of stress or tension in response to individual freedom running counter to group interests, demands, or expectations.9
Solidarity Norms

Sanctity/degradation
• Chimps: Throw feces and wet food at humans (Hopkins et al., 2012)

Liberty/oppression
• Chimps: Police conspecifics by intervening to stop fights (Rudolf von Rohr et al., 2012; de Waal, 1982); look longer at images of infanticide; interpreted as bystander effect by authors (Rudolf von Rohr et al., 2015)

Group identity/culture
• Chimps: Demonstrate 39 patterns of behavior that differ between communities in tool usage, food processing, grooming, and courtship; differences not due to ecological features (Whiten et al., 1999; McGrew & Tutin, 1978); patrol boundaries between chimpanzee communities, sometimes invading and killing adult males and infants and stealing females (Mitani & Watts, 2001; Watts et al., 2006); throw rocks in particular trees, resulting in a cairn-like structure; authors interpret as ritual or communication behavior (Kühl et al., 2016)
• Cetaceans: Greeting ceremony: southern resident orca pods each form a rank, swim toward each other, come to a halt and face each other, pause, then dive and swim together in tight subgroups, with lots of vocalization, social excitement, and no hostility (Whitehead & Rendell, 2015); sympatric orca social groups are differentiated by dialects and diets (Ford, 2002; Barrett-Lennard, 2000); sympatric sperm whale social groups are differentiated by dialect (Whitehead & Rendell, 2015); humpback whale communities have specific songs, synchronously performed by males; songs change between and within generations and over distance as innovations are introduced (Whitehead & Rendell, 2015); signature whistles, petting, and synchronous swimming differentiate stable social units of bottlenose dolphins from more loosely associated community members (Connor et al., 2000; Pack, 2010); northern resident orcas rub their bodies on particular underwater-pebble beaches, whereas other resident communities or sympatric transients do not; the same beaches are revisited throughout generations (Ford et al., 2000; Whitehead & Rendell, 2015); a subgroup of the larger Shark Bay dolphin community uses sponges as foraging tools, attaching sponges to their rostrums to forage amongst sharp rocks; others sharing the same habitat do not exhibit this socially learned behavior (Mann et al., 2012)

Self-sacrifice
• Chimps: Lack of evidence of self-sacrifice accounted for by a lack of cultural systems of reward; otherwise chimpanzee intergroup aggression is a good model of early human warfare (Wrangham & Glowacki, 2012)
• Cetaceans: Some highly socially structured cetacean groups beach themselves in mass strandings, following each other onto the beach in a deliberate manner; typically won’t leave the beach by themselves (Connor & Norris, 1982; Simmonds, 1997; Evans et al., 2002; Whitehead & Rendell, 2015)

Figure 3.5 Solidarity norms: regarding social cohesion, group identity, and belonging
These normative practices are more varied and will likely be more widespread than specifically moral practices. A few clarifications should be noted before we consider the normative practices of chimpanzees and cetaceans in the next two sections. First, some practices may exhibit more than one norm. In such cases, we will classify the practice in the category that seems like the best fit. Second, some normative practices may also be moral, though they need not be. Finally, the research we report should be taken for what it is: namely, a report of particular studies or observations. These observations may be mistaken, so none of them should be taken as definitive evidence that the species in question has the identified capacity. What follows is a first pass at cataloging the kinds of behaviors that have been reported in other species that map onto the kinds of normative practices reported by moral psychologists and anthropologists. Nonetheless, we think that, as a whole, the body of evidence reported in the next two sections both supports the claim that these animals engage in normative practice and warrants further investigation into the normative capacities of other animals.
3. Chimpanzee Normative Practice

Chimpanzees are only one of the five great ape species; humans, orangutans, gorillas, and bonobos are the others. But we know more about chimpanzees than the other nonhuman great ape species, and philosophers have long been interested in their social abilities. If we were to examine the ‘nicest’ great ape, however, we might instead turn our attention to bonobos, a matriarchal species that resolves conflict more by touching than by fighting and is known to be more tolerant in areas such as food sharing than the chimpanzee. However, as we are looking for evidence that chimpanzees engage in normative practice (rather than evidence that they are kindly or empathetic to one another), and as we have decades of data on chimpanzee behavior both in the wild and in captivity, our focus here will be the chimpanzee.10

Chimpanzees are native to Africa and live in patriarchal fission-fusion groups, which consist of a large community (perhaps up to 60 individuals) that separates into a number of smaller groups (of up to 10 individuals) who will travel together for a time (a day or a few hours). Movement between smaller groups can be fluid, though strong family and affiliate bonds will affect the make-up of these smaller groups. When female chimpanzees mature, they leave their natal group and seek membership in a new community, where they find mates and raise offspring, usually for the rest of their lives. Males remain in their natal community and participate in dominance hierarchies that can be established and destroyed via intragroup aggression. In addition to the violence within communities, chimpanzee males engage in violent encounters with other communities. Goodall (2010) reported observing what she calls a territory war between two chimpanzee communities that lasted for four years. Chimpanzee intergroup aggression is now well established (Boesch & Boesch-Achermann, 2000; Watts & Mitani, 2001; Watts et al., 2006) (see Figure 3.5).

The social structure of the chimpanzee offers the first evidence that chimpanzees might engage in normative practice. The family identities, male alliances, and community identities suggest that chimpanzees might prefer certain ‘in-group’ ways of doing things over ‘out-group’ practices. Furthermore, it suggests that chimpanzees are able to identify themselves as members of groups and that they are able to keep track of the different groups to which
they belong (e.g., both intra-group alliance and inter-group identity). In addition, the existence of cultural differences (Whiten et al., 1999) between chimpanzee communities offers a possible mechanism for both delineating group identities and identifying out-group individuals, much in the way language, ritual, dress, etc. serve this purpose in human cultures. As females immigrate into new communities, they are at first typically very low-ranking, and in order to become integrated into the group, they may be forced to learn new cultural traditions (Luncz et al., 2012; Luncz & Boesch, 2014).

Much of the recent research on chimpanzees has focused on caring norms (see Figure 3.3). Chimpanzees appear to experience empathy for their kin and affiliates, and they console individuals when they have suffered some loss. De Waal has done much to observe, elicit, and categorize these sorts of behaviors, and he suggests that chimpanzees have what he calls “the building blocks of morality,” which include empathy, reciprocity, conflict resolution, a sense of fairness, and cooperation. However, de Waal stops short of saying that animals are moral agents (de Waal, 2006).

Another area of normative participation that has been of much interest in chimpanzees is reciprocity norms (see Figure 3.2). Research on chimpanzee cooperation, punishment, and fairness has yielded mixed results. There is much evidence that chimpanzees seek to assist others, and they will engage in joint action to achieve a common goal. However, this claim has been explicitly disputed by Tomasello, who thinks that what looks like cooperation in chimpanzees is really competition. He argues that chimpanzees do not share a single goal in these cases; they just happen to have the same goal. He uses Searle’s (1995) example of humans running from the rain and ending up together under a roof as an analogy for what chimpanzees are doing when they appear to be cooperating (Tomasello, 2016).

However, there is a wide range of conditions in which we see chimpanzees engage in behavior that secures a joint goal, so we are not convinced by Tomasello’s skepticism. We note that studies of chimpanzee cooperation in captive settings are almost all focused on food, and chimpanzees may find cooperation particularly difficult in that context. In addition, captive chimpanzees are actively discouraged from cooperating in non-food contexts, in order to keep them under control. When chimpanzees do cooperate, this can cause a huge headache for caregivers, as when seven chimpanzees escaped from the Kansas City Zoo in 2014, after a male set up a log to be used as a ladder and then “beckoned to another six chimps to join him” (Millward, 2014). Furthermore, we know that for humans, social status can have substantial impacts on willingness to cooperate with and be charitable toward others (Kumru & Vesterlund, 2010), and the studies of chimpanzee cooperation that have failed to find cooperative behavior have not, to our knowledge, controlled for prestige. Moreover, recent studies of chimpanzee social cognition have found that chimpanzees are able to track human false beliefs in an active helping task (Buttelmann et al., 2017).

As for fairness, one experimental study of chimpanzees in an ultimatum game found that chimpanzees accept ‘unfair’ offers, while humans will reject them, resulting in a loss both to self and to other (Jensen et al., 2007a). Jensen thinks this behavior shows that chimpanzees are not concerned with fairness.
In the case of punishment, in experimental studies Jensen and colleagues (2007b) found that chimpanzees will punish others who directly target them, but another group failed to find evidence that chimpanzees will engage in third-party punishment (Riedl et al., 2012). However, in another experiment researchers found
that chimpanzees will start out by making selfish offers but shift to making an equitable offer when the partner protests in an iterated version of the ultimatum game (Proctor et al., 2013). It is hard to know what to make of these captive studies; the results may have to do more with the specific norms in these chimpanzee groups than with some general lack of fairness (Andrews & Gruen, 2014). Furthermore, if fairness is applying a norm to everyone to whom it should be applied (i.e., not making an exception for another or for oneself), the first step to investigating fairness in chimpanzees must involve identifying norms. That is just beginning (see Figures 3.1–3.5), and it requires significant interest among field researchers, as the field is where we would most expect to see cultural norms.

One element of Krebs and Janicki’s moral norm types and Haidt et al.’s and Iyer et al.’s moral foundations that seems to be missing in chimpanzee communities falls within obedience norms (see Figure 3.1). While chimpanzees clearly demonstrate some aspects of obedience norms, such as following dominance hierarchies, what we have not yet seen is evidence of guilt. This may be a part of normative practice in which they do not participate, or it may be that we have not yet found a way to uncover this emotion in other species. In addition, we see only sketchy evidence as yet for some aspects of social responsibility and solidarity norms (see Figures 3.4 and 3.5).

When being an individual who sees norms and oughts in the world is conflated with being an individual who acts in a good way, it is not surprising that most of the research on chimpanzee moral practice would focus on the issues of empathy, consolation, cooperation, and reciprocity. By shifting focus to the foundations of normativity, we hope to invite more research into issues of social responsibility and solidarity practices as well.
4. Cetacean Normative Practice

In the previous section, we focused on one species of great ape: chimpanzees. In this section, we will present evidence of normativity in several species of cetaceans. It is only over the last 50 years that cetaceans have become subjects of modern scientific research, so there is far less experimental and naturalistic data available on their behavior compared to apes. Cetaceans are marine mammals, including all whales, dolphins, and porpoises. They live entirely in aquatic environments, primarily in a world of sound, where some perceive and relate to their world using echolocation or sonar—a sensory system that we great apes do not share. Despite these differences, we argue that cetaceans share with the great apes the capacity for ought-thought.

As with chimpanzees, the social structures of cetaceans suggest that they might engage in normative practice. For example, bottlenose dolphins live in fission-fusion societies where individuals associate in small groups that can frequently change in composition (Connor et al., 2000). The social relationships within and between groups indicate that individuals can identify themselves as members of particular groups and can keep track of stable affiliations and shifting alliances. Communities, distinguished by home range and association, vary in structure and size. For example, the Shark Bay, Australia, community numbers in the thousands. Both sexes are highly social, but the basic social unit consists of life-long bonded pairs or trios of males, arranged in dominance hierarchies established through aggression. These first-order alliances form second-order alliances or ‘teams,’ usually consisting of related individuals. Sometimes second-order alliances form additional, shifting alliances
with other unrelated teams. Collaborating teams compete with others to secure females for reproduction (Stanton & Mann, 2014; Connor et al., 2000) (see Figure 3.2).

Behaviors that distinguish bonded units from less affiliated individuals include petting (e.g., one dolphin actively moves her pectoral fin on a body part of another dolphin), synchronous movement (e.g., swimming close together and surfacing at about the same time), and similarity in signature whistles (e.g., individually distinctive whistles believed to signal an individual’s identity) (Stanton & Mann, 2014; Tyack, 2000; Pack, 2010). Calves adopt the signature whistle of their mothers, but as they separate from her, their whistles become more individualized. Bonded males adapt their whistles to match each other’s. These socially learned and distinctive whistles are indicative of normativity because they signal not only individual identity but also group identity. As with chimpanzees, nested layers of group identities, affiliations, and dominance positions suggest that dolphins might prefer their in-group ways of doing things. For example, a subgroup of the Shark Bay community uses sponges as foraging tools, whereas other subgroups do not (Mann et al., 2012) (see Figure 3.5).

The social lives of orcas offer some of the most compelling evidence that cetaceans participate in normative practice (see Figures 3.1–3.5). In the Pacific Northwest, there are three distinct populations, but we will focus on the two most studied—resident and transient (Barrett-Lennard, 2000). Since they share the same geographic area, group differences are not likely due to ecological differences but rather due to cultural differences (Whitehead & Rendell, 2015).

The resident population consists of three socially bounded communities (Bigg et al., 1990; Leatherwood et al., 1990) that are further broken down into nested social units. Matrilines are the fundamental units and consist of a female and her descendants, usually four to 12 individuals from two to four generations. They always swim within acoustic reach of each other, often within touching distance. Both sexes stay in their matriline for life (Barrett-Lennard, 2000). Closely related and frequently associating groups of matrilines form pods, which share a distinctive dialect or set of vocal calls (Ford, 1989). Groups of pods with related, but not identical, dialects form acoustic clans (Ford, 1991). A community is made up of clans that share a common range. Pods freely associate within and between clans but never outside of their community, suggesting very strong group identities (Bigg et al., 1990). Like language in human cultures, differences in dialect between communities offer a possible mechanism for both delineating group identities and identifying out-group individuals (see Figure 3.5).

In-group/out-group differences are most apparent between the resident and transient orca populations. The basic transient social unit is the matriline, but unlike residents, the transient population is a fission-fusion society wherein juvenile and adult offspring may disperse, sometimes permanently. Transient groups tend to be smaller, echolocate less frequently, and use fewer discrete call types; and they do not have discrete vocal repertoires. However, subpopulations do use a similar set of calls, and some variants are shared between subpopulations (Ford, 2002; Barrett-Lennard, 2000). Residents and transients never associate despite sharing the same waters.
They usually avoid each other but have been observed in aggressive conflict (Ford & Ellis, 1999) (see Figure 3.4). They have completely different diets: transients eat only marine mammals and some seabirds, whereas residents eat only salmon, some other fish, and squid. Such dietary specialization has been described as extreme and unprecedented in sympatric species (Ford, 2002).
Resident orcas are one of only two mammalian species in which neither sex disperses from its natal group; the other is the long-finned pilot whale. In other species, including chimpanzees, dispersal is likely an incest avoidance adaptation.11 Using DNA analysis, Barrett-Lennard (2000) determined that the norm for residents is outbreeding and found no evidence of inbreeding. Individuals always mate within their community, sometimes within their clan,12 but never within the same pod. Barrett-Lennard posits that individuals are sexually attracted to others with a similar—but not too similar—dialect. We have classed this manner of incest avoidance as aversion (see Figure 3.4). We cannot make the empirical claim that orcas have social taboos regarding incest, but considering how many facets of their lives indicate normative participation (e.g., dialect, diet, foraging), it stands to reason that their incredibly successful incest aversion mechanisms include normative practices around mating.

A darker phenomenon that we think indicates normative practice in cetaceans, particularly norms of solidarity, is mass stranding or beaching (see Figure 3.5). One of the more complete accounts of this phenomenon involves three groups of sperm whales off the coast of South Australia (Evans et al., 2002). The sperm whale’s basic social unit is the matriline, consisting of several related adult females, as well as juveniles and calves of both sexes (Whitehead & Rendell, 2015). Females generally stay within their natal groups, whereas males leave at about 10 years of age to associate with other adult males or live solitarily. Matrilineal units have distinctive dialects consisting of echolocation clicks. In the Pacific, two or three units form acoustic clans distinguished by habitat use, dialect, movement strategies, and alloparenting (e.g., some groups suckle each other’s calves while babysitting).

At one stranding site, witnesses saw a tightly packed group of whales offshore. One whale separated and started swimming parallel to the shore. The whale then started swimming in a ‘frantic’ fashion and moved inshore until she stranded on the beach. The remaining whales followed in groups of two or three and seemed to passively let the surf strand them. The final two whales to strand swam parallel to the beach, then turned and swam back past all the stranded whales. Next, they turned inshore and appeared to actively strand together a little further down the beach. None could be rescued. The stranded whales consisted of adult females and juveniles and calves of both sexes, which suggests group membership. At another site, one male was rescued, then re-stranded, was rescued again, and finally swam away.

The reported behaviors suggest deliberate action. In these cases, there were no noxious sounds, which are sometimes correlated with strandings (Jepson et al., 2013). In other cases, changes in group behavior to avoid stranding were reported after larger individuals returned to the water. If the larger animals are also the leaders, then this suggests a leadership role might be involved. Whitehead and Rendell (2015) compare this kind of behavior to that of a human military group or mass suicide, where leaders are followed into certain death and individual interests are subjugated for the sake of group cohesion.
Whether on the bright side, such as rescuing those in peril, or the dark side, such as mass strandings, cetacean social practices exhibit norms of obedience, reciprocity, caring, social responsibility, and solidarity (see Figures 3.1–3.5). The social structures and behaviors of cetaceans indicate the cognitive capacity for normative ought-thought that is foundational to normative practice and to moral psychology.
5. Implications

We are aware of some deflationary explanations of the phenomena that we are describing as normative. We want to respond in particular to the ‘good-mood’ hypothesis and the ‘expectations of future help’ hypothesis, as well as to the objection that scrutiny is required for morality (and perhaps even proto-morality or normativity).

One such deflationary explanation of seemingly altruistic behavior is the ‘good-mood’ hypothesis. The suggestion is that receiving help improves your mood, leading you to help others indiscriminately. This explanation is deflationary when conjoined with the sense that we should only call helpful behaviors ‘altruistic’ when they are motivated in the ‘right’ way. People differ in the motivations they are willing to term altruistic, but altruistic motivation is generally linked to some nonderivative concern for the well-being of someone other than yourself. Of course, being in a good mood might explain why you are looking for people to help. But this explanation will nevertheless deflate attributions of altruism for those (like Kant) who thought positive mood and the benevolence to which it gives rise too transient and passively acquired to redound to the merit of the benefactor.

There is obvious empirical difficulty in trying to untangle mixed motives, particularly in beings who do not use sentential forms of language. But there is no reason why skepticism about animal altruism should operate as a default assumption. One might appeal to the principle of parsimony here, but the invocation of this explanatory virtue might be criticized in this context (e.g., Sober, 2015). Moreover, the good-mood hypothesis does not seem to apply to many apparent cases of nonhuman normativity. A monkey who has recently received grooming is more likely to then share food with conspecifics, but only with the monkeys who have groomed her. This is not indiscriminate sharing, so it cannot be explained by the ‘good-mood’ hypothesis (Brosnan & de Waal, 2002). We interpret this case as evidencing dyadic reciprocity norms.

But it is of course possible (indeed typical) to offer some distinct deflationary explanation of these cases, too. If not good mood, then one might invoke the ‘expectations of future help’ hypothesis, involving the kind of self-interested calculation of long-term gain equally thought incompatible with genuine altruism.13 Perhaps these self-interested motivations are inconsistent with an attribution of altruism, given customary usage of that term. But are they compatible with the more general hypothesis of normative thought? The ‘expectations of future help’ hypothesis suggests that you help others only because you expect their help in the future. This kind of enlightened self-interest is consistent with ethical egoism, the view that the right thing to do is whatever serves one’s own interests—but it is (at least potentially) incompatible with more widely accepted moral theories (e.g., utilitarianism, Kantianism, virtue ethics, care ethics, etc.).

As with the ‘good-mood’ account, there are a couple of reasons to resist the ‘expectations of future help’ hypothesis as a general account of the behaviors in question. First, we might be psychological egoists who think self-interest is all there is to moral cognition. Perhaps we just talk of human altruism but never display the phenomenon as we imagine it.
Second, even if a given species acts from self-directed ultimate motives in the absence of other-regarding sentiments (as Vonk et al., 2008 suggest is invariably the case), this does not foreclose the possibility of cooperation or collaboration within animal communities.
Grooming expectations might realize social norms that have little to do with altruism. Again, it is important to distinguish the more general category of normativity from the narrower category of moral normativity. Altruism is a narrower category still.

The role metacognition and reflective scrutiny play in normativity is closely related to these concerns. After all, we might think that what makes human morality unique is our ability to question our desires, drives, and motivations and to suppress or overcome them to act as we judge we ought. In fact, Korsgaard (1992, 2010) argues that this ability to identify one’s inclinations so as to question the propriety of acting as one is inclined to act is necessary for the possession of normative concepts (like ‘should’ and ‘obligation’) and the normative thoughts they make possible. It is only as we humans come to ask these questions and construct reflective value schemes on the basis of our answers to them that valuing comes to exist. According to Korsgaard (1992, 2010), only humans have the substantial kind of “normative self-government” that comes from an animal’s querying its own motivations. So we alone among the animals are moral beings.

To be clear, in criticizing Korsgaard, we are not arguing that nonhuman animals are moral agents. Our position is that members of other species engage in practices that evidence normative thought. We mean to pull apart the concepts of normativity and morality that Korsgaard fuses together. But more pointedly, there are reasons to be skeptical of the claim that the ability to scrutinize one’s own prospective motivations is essential to acting for moral reasons (Rowlands, 2012).

We have presented empirical evidence to support our claim that chimpanzees and cetaceans participate in normative practices (see Figures 3.1–3.5). Much of this evidence is anecdotal, and as such, one could object that it is insufficient to justify claims of nonhuman animal normativity. But under certain conditions, anecdotes can build into a reliable data set: a set of observations that can increase our knowledge of the species under observation and augment the ecological validity of subsequent experimental hypotheses and designs (Whitehead & Rendell, 2015; Bates & Byrne, 2007). To use anecdotes as data, Bates and Byrne (2007) recommend that (a) observers are experienced with the species, (b) original records are used because of the fallibility of human memory, and (c) multiple independent records of the same phenomenon are analyzed in combination because little can be concluded from a single observation. Thus, anecdotal data need not imply unscientific data. Further, much animal cognition data is unavoidably anecdotal, considering that some behaviors are rare or unpredictable (e.g., mass strandings) and some subject sets are small or inherently difficult to observe (e.g., noncaptive cetaceans who spend most of their time out of human view).

Whale researchers have recently put the recommendations of Bates and Byrne (2007) into practice. Pitman and Durban (2009) describe a humpback whale ‘rescuing’ a seal (see Figure 3.3). The seal was fleeing from predatory orcas and swam toward humpback whales. One humpback swept the seal onto her chest between her flippers. As the orcas approached, she arched and lifted the seal out of the water. Finally, the seal escaped to the safety of an ice floe. To find out if and why such ‘rescues’ are common practice for humpbacks, Pitman et al.
(2016) compiled and analyzed 115 accounts of humpbacks interacting with orcas. These include published and unpublished observations by scientists, naturalists, and laypeople, so the reports vary in accuracy, detail, and interpretation. Taking this variation into account, Pitman et al. identify a clear pattern of behavior for humpbacks. When individual humpbacks
detect an orca attack, they interfere. Witnesses observed prey including humpback calves, gray whale calves, seals, sea lions, and ocean sunfish. Pitman et al. conclude that the “mobbing behavior” is targeted toward mammal-eating orcas, as observed interactions with fish-eating orcas have been peaceful. Their interpretation of the behavior posits altruism. When the intended prey and its rescuer are conspecifics, kin selection can explain the evolution of the rescuer’s motivation. When prey and rescuer are likely to interact in the future and prey has the capacity to help the rescuer or her kin, reciprocal altruism is a reasonable hypothesis. But since these hypotheses do not fit a humpback’s rescue of a seal or sea lion, “interspecies altruism, even if unintentional, could not be ruled out” (Pitman et al., 2016, 2).

This case shows that compiling and analyzing anecdotal evidence plays an important role both in identifying important cases and in narrowing the field of prima facie explanations of those cases. We have been arguing that the evidence of animal normativity is already sufficiently compelling to warrant belief, but Pitman’s analysis suggests that animal altruism may be confirmed to this degree as well.

One could object that our interpretation of the evidence for nonhuman animal normative practices is vulnerable to anthropomorphism, which is the unwarranted or overly lax attribution of human traits to nonhuman entities. Some could argue that our use of terms such as ‘policing,’ ‘friendship,’ and ‘cooperation’ is anthropomorphic because there is something uniquely human about these capacities. For example, as we noted earlier, Tomasello (2016) claims that ‘true’ cooperation is unique to humans. He argues that to cooperate, with a joint goal in joint intention, we must know what the other has in mind—take the perspective of the other, know what the other believes and desires, and know that each knows this about the other. According to Tomasello, cooperation requires a certain kind of mindreading ability or a tacit theory of mind, which he claims14 is unique to humans. On his view, when chimpanzees or orcas appear to be cooperating, they simply happen to have the same goal and take advantage of the effects of each other’s actions to reach that goal.

We argue that anthropomorphism can be avoided by being careful about operational definitions. When searching for evidence of a trait or capacity that we know is present in humans, the operational definition should not demand more than what is typically regarded as sufficient in the human case (Andrews & Huss, 2014; Buckner, 2013). It is far from settled whether human cooperation requires the kind of sophisticated theory of mind Tomasello characterizes (Andrews, 2012). For example, when they cooperate to saw a piece of wood, Keenan does not need to take Fatima’s perspective or think about her beliefs in order to know that he should hold the wood while she saws. It is sufficient that they perceive that their goal is to cut wood, they have learned something about sawing wood, and they can modify their behavior according to the other’s actions. To deny that this is sufficient for cooperation would exclude many cases of human activity that we intuitively or pre-theoretically describe as cooperative in nature. Cooperation typically involves learning norms and responsibilities within group activities that have a common goal, such as team sports, mass production, predation, or communal defense.
Since cooperation in humans largely involves normative participation, we take evidence of cooperation in other cognitively flexible, intelligent, social species as evidence for normative practice. Support for this claim is strengthened where roles are specialized, such as in some dolphins and humpbacks (see Figure 3.4).
To avoid anthropomorphism, evidence should not be cherry-picked or utilized in isolation from our other commitments and observations. We recognize certain patterns of behavior in humans as constituting certain phenomena (e.g., friendship, policing, babysitting, etc.). When we see similar patterns effecting similar ends in other social species, we are warranted in classifying them as instances of these same phenomena. That is what consistency requires, so long as the attribution of friendship, policing, or babysitting still seems apt after all the known and likely capacities of the species in question have been taken into account (e.g., their capacities for emotion, social learning, creative problem solving, etc.). We need to develop or embrace definitions of the phenomena in question that allow for unambiguous attribution to both humans and other animals. For instance, if we define ‘friendship’ in terms of developing and maintaining affiliative social bonds, then it would be an error to deny that chimpanzees have norms of friendship. Such errors impair our knowledge of other species.

Eschewing anthropomorphism “at all costs” is a “well established convention” in science (Barrett-Lennard, 2000). However, the pursuit of knowledge should not be impaired for the sake of this convention. One of its costs in animal research is that it leads to preferring the error of false negative claims over the error of false positive claims (Andrews & Huss, 2014; Sheets-Johnstone, 1992; de Waal, 1999). The costs of this “anthropectomy” (Andrews & Huss, 2014), denying that animals have properties when they in fact have them, are just as great as—if not greater than—those of anthropomorphism. For example, consider the claim that pilot whales do not have culture. If this claim is erroneously accepted due to anthropectomy, then not only is knowledge impaired, but maintaining the “grindadrap”15 tradition of the Faroe Islands’ human culture is automatically privileged over maintaining the cultural traditions of the cetaceans (since their existence is denied). Since entire pods are killed, all the cultural information and traditions that are unique to those pods are lost forever.

We have constructed and defended a theoretical framework for examining the conceptual and empirical questions of normativity in nonhuman animals. To help determine what counts as normative practice, we merged moral foundations theory (Haidt et al., 2009; Iyer et al., 2012) with Krebs and Janicki’s (2002) categories of moral norms. If we find evidence of such norms in the social practices of nonhuman animals, then we have evidence of normative ought-thought—the kind of cognitive capacity that underpins moral cognition. When we remove the anthropocentric lens that has obscured some research, we can see that some claims of human uniqueness with regard to normative practice—and perhaps even the foundations of morality—may be spurious. By disentangling the question of morality from that of normativity, we can set aside theoretical commitments within moral philosophy so as to get a clear view of the normative capacities of nonhuman animals. From this vantage point, we can investigate the further question of what makes a normative practice a moral practice and see if any nonhuman normative practices count as moral ones.
Notes

1. Broadly, moral subjects are beings who can act for moral reasons, while moral agents can additionally scrutinize their motivations to act (see Rowlands, 2012 for an extended discussion of this distinction).
2. See Chapters 1 and 2 of this volume for further reasons to look beyond moral cognition to normative cognition more generally.
3. It is noteworthy that de Waal excludes conventions that do not evidence empathy, reciprocity, or altruism from the moral domain. We'll say more about why this matters shortly.
4. See Chapters 1 and 2 of this volume for in-depth discussion of these distinctions.
5. Our discussion will focus on what is meant by ‘normative.’ We should note, however, that by ‘practice,’ we are talking about patterns of behavior rather than behaviors isolated from one another and from the performer. We are not using ‘practice’ here in any more technical sense.
6. See the collaborative website http://moralfoundations.org/, a project of Peter Ditto, Jesse Graham, Jonathan Haidt, Ravi Iyer, Sena Koleva, Matt Motyl, Gary Sherman, and Sean Wojcik.
7. See Chapters 1, 2, 6, 7, 8, 9 and 16 of this volume for further discussion of the work of Haidt and colleagues on these issues.
8. Note that our inclusion of aversion cases within the social responsibility category is a departure from Haidt et al. (2009), who believe that sanctity/degradation behaviors warrant their own category. We are inclined to collapse the sanctity/degradation and in-group loyalty/betrayal foundations primarily because the former seems to involve too much cognitive sophistication (following Shweder's emphasis on divinity) to be productive for our present discussion.
9. Iyer et al.'s (2012) addition of the sixth foundation of liberty/oppression, as with some of Haidt et al.'s (2009) sanctity/degradation behaviors, seems too cognitively demanding for our purposes. Consider that Iyer et al. discuss this sixth foundation in the context of libertarian political ideology. Still, we think that some of the tensions they describe between desire for individual freedom and respect for authority may be felt by, and displayed in the behaviors of, some other animals. For our purposes, we thought it best to categorize such tensions as the darker complement of solidarity (as harm is to care or betrayal to loyalty).
10. Note that scientists only discovered the existence of bonobos in the mid-twentieth century.
11. See Chapter 9 of this volume for further discussion of incest avoidance in chimpanzees and humans.
12. There is only one clan in the southern resident community.
13. See Chapter 5 of this volume, where a similar set of hypotheses divides explanations of the normative behaviors of human infants.
14. While Tomasello defends this position in his 2016 book, he has subsequently been among the authors of two published papers purporting to show that chimpanzees understand false beliefs (Krupenye et al., 2016 and Buttelmann et al., 2017); it is unclear whether Tomasello will continue to hold this view.
15. The grindadrap is an ongoing Faroese tradition dating back to the sixteenth century. Participants harass entire pods of cetaceans into stranding on certain designated beaches, then slit their arteries with knives, causing them to bleed to death (www.seashepherd.org/faroes/aboutcampaign/the-grindadrap.html).
References
Anderson, J. R., Gillies, A. and Lock, L. C. (2010). “Pan thanatology,” Current Biology, 20, R349–R351.
Andrews, K. (in preparation). “Naïve Normativity.”
Andrews, K. (2012). Do Apes Read Minds? Toward a New Folk Psychology. Cambridge, MA: MIT Press.
Andrews, K. and Gruen, L. (2014). “Empathy in Other Apes,” in H. L. Maibom (ed.), Empathy and Morality. Oxford: Oxford University Press, 193–209.
Andrews, K. and Huss, B. (2014). “Anthropomorphism, Anthropectomy, and the Null Hypothesis,” Biology & Philosophy, 29 (5), 711–729.
Barrett-Lennard, L. G. (2000). “Population Structure and Mating Patterns of Killer Whales (Orcinus orca) as Revealed by DNA Analysis,” Doctoral dissertation, University of British Columbia. https://open.library.ubc.ca/cIRcle/collections/ubctheses/831/items/1.0099652
Bates, L. A. and Byrne, R. W. (2007). “Creative or Created: Using Anecdotes to Investigate Animal Cognition,” Methods, 42 (1), 12–21.
Bekoff, M. and Pierce, J. (2009). Wild Justice: The Moral Lives of Animals. Chicago: University of Chicago Press.
Bender, C. E., Herzing, D. L. and Bjorklund, D. F. (2008). “Evidence of Teaching in Atlantic Spotted Dolphins (Stenella frontalis) by Mother Dolphins Foraging in the Presence of Their Calves,” Animal Cognition, 12, 43–53.
Bigg, M. A., Olesiuk, P. F., Ellis, G. M., Ford, J. K. B. and Balcomb III, K. C. (1990). “Social Organization and Genealogy of Resident Killer Whales (Orcinus orca) in the Coastal Waters of British Columbia and Washington State,” in P. S. Hammond, S. A. Mizroch and G. P. Donovan (eds.), Individual Recognition of Cetaceans: Use of Photo-identification and Other Techniques to Estimate Population Parameters, 12, 386–406.
Biro, D., Humle, T., Koops, K., Sousa, C., Hayashi, M. and Matsuzawa, T. (2010). “Chimpanzee Mothers at Bossou, Guinea Carry the Mummified Remains of Their Dead Infants,” Current Biology, 20, R351–R352.
Boesch, C. (1991). “Teaching Among Wild Chimpanzees,” Animal Behaviour, 41, 530–532.
———. (1993). “Aspects of Transmission of Tool-Use in Wild Chimpanzees,” in K. R. Gibson and T. Ingold (eds.), Tools, Language and Cognition in Human Evolution. Cambridge: Cambridge University Press, 171–183.
———. (1994). “Cooperative Hunting in Wild Chimpanzees,” Animal Behaviour, 48, 653–667.
Boesch, C. and Boesch-Achermann, H. (2000). The Chimpanzees of the Tai Forest: Behavioural Ecology and Evolution. Oxford: Oxford University Press.
Boesch, C., Crockford, C., Herbinger, I., Wittig, R., Moebius, Y. and Normand, E. (2008). “Intergroup Conflicts Among Chimpanzees in Taï National Park: Lethal Violence and the Female Perspective,” American Journal of Primatology, 70 (6), 519–532.
Brosnan, S. F. and de Waal, F. B. M. (2002). “A Proximate Perspective on Reciprocal Altruism,” Human Nature, 13, 129–152.
Brosnan, S. F., Schiff, H. C. and de Waal, F. B. M. (2005). “Tolerance for Inequity May Increase with Social Closeness in Chimpanzees,” Proceedings of the Royal Society B: Biological Sciences, 272 (1560), 253–258.
Brosnan, S. F., Talbot, C., Ahlgren, M., Lambeth, S. P. and Schapiro, S. J. (2010). “Mechanisms Underlying Responses to Inequitable Outcomes in Chimpanzees, Pan troglodytes,” Animal Behaviour, 79 (6), 1229–1237.
Buckner, C. (2013). “Morgan's Canon, Meet Hume's Dictum: Avoiding Anthropofabulation in Cross-Species Comparisons,” Biology and Philosophy, 28 (5), 853–871.
Buttelmann, D., Buttelmann, F., Carpenter, M., Call, J. and Tomasello, M. (2017). “Great Apes Distinguish True from False Beliefs in an Interactive Helping Task,” PLOS One, 12 (4), e0173793.
Chalmeau, R. (1994). “Do Chimpanzees Cooperate in a Learning Task?” Primates, 35, 385–392.
Chalmeau, R. and Gallo, A. (1996a). “What Chimpanzees (Pan troglodytes) Learn in a Cooperative Task,” Primates, 37, 39–47.
———. (1996b). “Cooperation in Primates: Critical Analysis of Behavioral Criteria,” Behavioural Processes, 35, 101–111.
Clay, Z., Ravaux, L., de Waal, F. B. M. and Zuberbühler, K. (2016). “Bonobos (Pan paniscus) Vocally Protest Against Violations of Social Expectations,” Journal of Comparative Psychology, 130 (1), 44–54.
Connor, R. C. and Norris, K. (1982). “Are Dolphins Reciprocal Altruists?” The American Naturalist, 119 (3), 358–374.
Connor, R. C., Wells, R. S., Mann, J. and Read, A. J. (2000). “The Bottlenose Dolphin: Social Relationships in a Fission-Fusion Society,” in J. Mann, R. C. Connor, P. L. Tyack and H. Whitehead (eds.), Cetacean Societies: Field Studies of Dolphins and Whales. Chicago: University of Chicago Press, 91–126.
Crawford, M. P. (1937). The Cooperative Solving of Problems by Young Chimpanzees. Baltimore, MD: Johns Hopkins Press.
de Grazia, D. (1996). Taking Animals Seriously. Cambridge: Cambridge University Press.
de Waal, F. B. M. (1982). Chimpanzee Politics: Power and Sex Among Apes. London: Jonathan Cape.
———. (1996). Good Natured. Cambridge, MA: Harvard University Press.
———. (1999). “Anthropomorphism and Anthropodenial,” Philosophical Topics, 27 (1), 255–280.
———. (2006). Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press.
———. (2009). The Age of Empathy: Nature's Lessons for a Kinder Society. Toronto: McClelland & Stewart.
———. (2013). The Bonobo and the Atheist: In Search of Humanism Among the Primates. New York: W. W. Norton.
———. (2014). “Natural Normativity: The ‘Is’ and ‘Ought’ of Animal Behavior,” Behaviour, 151, 185–204.
de Waal, F. B. M. and Luttrell, L. M. (1988). “Mechanisms of Social Reciprocity in Three Primate Species: Symmetrical Relationship Characteristics or Cognition?” Ethology and Sociobiology, 9, 101–118.
de Waal, F. B. M. and van Roosmalen, A. (1979). “Reconciliation and Consolation Among Chimpanzees,” Behavioral Ecology and Sociobiology, 5 (1), 55–66.
Engelmann, J. M. and Herrmann, E. (2016). “Chimpanzees Trust Their Friends,” Current Biology, 26 (2), 252–256. http://doi.org/10.1016/j.cub.2015.11.037.
Evans, K., Morrice, M., Hindell, M. and Thiele, D. (2002). “Three Mass Strandings of Sperm Whales (Physeter macrocephalus) in Southern Australian Waters,” Marine Mammal Science, 18 (3), 622–643.
Flack, J. C. and de Waal, F. B. M. (2000). “‘Any Animal Whatever’: Darwinian Building Blocks of Morality in Monkeys and Apes,” Journal of Consciousness Studies, 7 (1–2), 1–29.
Ford, J. K. B. (1989). “Acoustic Behaviour of Resident Killer Whales (Orcinus orca) off Vancouver Island, British Columbia,” Canadian Journal of Zoology, 67, 727–745.
———. (1991). “Vocal Traditions Among Resident Killer Whales (Orcinus orca) in Coastal Waters of British Columbia,” Canadian Journal of Zoology, 69, 1454–1483.
———. (2002). “Killer Whale Orcinus orca,” in W. F. Perrin, B. Wursig and H. G. M. Thewissen (eds.), The Encyclopedia of Marine Mammals. New York: Academic Press, 669–676.
Ford, J. K. B. and Ellis, G. M. (1999). Transients: Mammal-Hunting Killer Whales of British Columbia, Washington, and Southeastern Alaska. Vancouver: University of British Columbia Press.
Ford, J. K. B., Ellis, G. M. and Balcomb, K. C. (2000). Killer Whales. Vancouver: University of British Columbia Press.
Fraser, O. N., Stahl, D. and Aureli, F. (2008). “Stress Reduction Through Consolation in Chimpanzees,” Proceedings of the National Academy of Sciences, 105, 8557–8562.
Gazda, S., Connor, R., Edgar, R. and Cox, F. (2005). “A Division of Labour with Role Specialization in Group-Hunting Bottlenose Dolphins (Tursiops truncatus) off Cedar Key, Florida,” Proceedings: Biological Sciences, 272 (1559), 135–140.
Goodall, J. (2010). Through a Window: My Thirty Years with the Chimpanzees of Gombe. Boston: Houghton Mifflin Harcourt.
Gruen, L. (2002). “The Morals of Animal Minds,” in C. Allen, M. Bekoff and G. Burghardt (eds.), The Cognitive Animal. Cambridge, MA: MIT Press, 437–442.
Guinet, C., Barrett-Lennard, L. and Loyer, B. (2000). “Co-ordinated Attack Behavior and Prey Sharing by Killer Whales at Crozet Archipelago: Strategies for Feeding on Negatively-Buoyant Prey,” Marine Mammal Science, 16 (4), 829–834.
Haidt, J., Graham, J. and Joseph, C. (2009). “Above and Below Left-Right: Ideological Narratives and Moral Foundations,” Psychological Inquiry, 20, 110–119.
Hare, B., Melis, A. P., Woods, V., Hastings, S. and Wrangham, R. W. (2007). “Tolerance Allows Bonobos to Outperform Chimpanzees on a Cooperative Task,” Current Biology, 17, 619–623.
Hauser, M. D. (2006). Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: Ecco and Harper Collins.
Hayaki, H. (1985). “Social Play of Juvenile and Adolescent Chimpanzees in the Mahale Mountains National Park, Tanzania,” Primates, 26 (4), 343–360. http://doi.org/10.1007/BF02382452.
Hiraiwa-Hasegawa, M. (1990). “A Note on the Ontogeny of Feeding,” in T. Nishida (ed.), The Chimpanzees of the Mahale Mountains. Tokyo: University of Tokyo Press, 279–283.
Hirata, S. (2009). “Chimpanzee Social Intelligence: Selfishness, Altruism, and the Mother-Infant Bond,” Primates, 50, 3–11.
Hirata, S. and Fuwa, K. (2007). “Chimpanzees (Pan troglodytes) Learn to Act with Other Individuals in a Cooperative Task,” Primates, 48, 13–21.
Hockings, K. J., Anderson, J. R. and Matsuzawa, T. (2006). “Road Crossing in Chimpanzees: A Risky Business,” Current Biology, 16 (17), R668–R670.
Hopkins, W. D., Russell, J. L. and Schaeffer, J. A. (2012). “The Neural and Cognitive Correlates of Aimed Throwing in Chimpanzees: A Magnetic Resonance Image and Behavioural Study on a Unique Form of Social Tool Use,” Philosophical Transactions of the Royal Society B: Biological Sciences, 367 (1585), 37–47. http://doi.org/10.1098/rstb.2011.0195.
Horner, V., Carter, J. D., Suchak, M. and de Waal, F. B. M. (2011). “Spontaneous Prosocial Choice by Chimpanzees,” Proceedings of the National Academy of Sciences of the United States of America, 108 (33), 13847–13851. http://doi.org/10.1073/pnas.1111088108.
Iyer, R., Koleva, S., Graham, J., Ditto, P. and Haidt, J. (2012). “Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians,” PLOS One, 7 (8), e42366.
Jaeggi, A. V., de Groot, E., Stevens, J. M. G. and van Schaik, C. P. (2013). “Mechanisms of Reciprocity in Primates: Testing for Short-Term Contingency of Grooming and Food Sharing in Bonobos and Chimpanzees,” Evolution and Human Behavior, 34 (2), 69–77.
Jensen, K., Call, J. and Tomasello, M. (2007a). “Chimpanzees Are Rational Maximizers in an Ultimatum Game,” Science, 318, 107–109.
———. (2007b). “Chimpanzees Are Vengeful but Not Spiteful,” Proceedings of the National Academy of Sciences, 104, 13046–13050.
Jepson, P., Deaville, R., Acevedo-Whitehouse, K., Barnett, J., Brownlow, A., Brownell Jr, R., Clare, F., Davison, N., Law, R., Loveridge, J., Macgregor, S., Morris, S., Murphy, S., Penrose, R., Perkins, M., Pinn, E., Seibel, H., Siebert, U., Sierra, E., Simpson, V., Tasker, M., Tregenza, N., Cunningham, A. and Fernández, A. (2013). “What Caused the UK's Largest Common Dolphin (Delphinus delphis) Mass Stranding Event?” PLOS One, 8 (4).
Kirby, D. (2012). Death at SeaWorld: Shamu and the Dark Side of Killer Whales in Captivity. New York: St. Martin's Press.
Kitcher, P. (2011). The Ethical Project. Cambridge, MA: Harvard University Press.
Korsgaard, C. (1992). “The Sources of Normativity,” in The Tanner Lectures on Human Values. Cambridge: Cambridge University Press.
———. (2006). “Morality and the Distinctiveness of Human Action,” in S. Macedo and J. Ober (eds.), Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press.
———. (2010). “Reflections on the Evolution of Morality,” The Amherst Lecture in Philosophy, 5, 1–29. www.amherstlecture.org/korsgaard2010/.
Krebs, D. L. and Janicki, M. (2002). “Biological Foundations of Moral Norms,” in M. Schaller and C. S. Crandall (eds.), Psychological Foundations of Culture. Mahwah, NJ: Lawrence Erlbaum Associates.
Krupenye, C., Kano, F., Hirata, S., Call, J. and Tomasello, M. (2016). “Great Apes Anticipate That Other Individuals Will Act According to False Beliefs,” Science, 354 (6308), 110–114.
Kuczaj, S. and Thames Walker, R. (2006). “How Do Dolphins Solve Problems?” in E. A. Wasserman and T. R. Zentall (eds.), Comparative Cognition: Experimental Explorations of Animal Intelligence. New York: Oxford University Press.
Kuczaj, S., Winship, A. and Eskelinen, K. (2015). “Can Bottlenose Dolphins (Tursiops truncatus) Cooperate When Solving a Novel Task?” Animal Cognition, 18 (2), 543–550.
Kühl, H. S., Kalan, A. K., Arandjelovic, M., Aubert, F., D'Auvergne, L., Goedmakers, A., Jones, S., Kehoe, L., Regnaut, S., Tickle, A., Ton, E., van Schijndel, J., Abwe, E. E.,
Angedakin, S., Agbor, A., Ayimisin, E. A., Bailey, E., Bessone, M., Bonnet, M., Brazolla, G., Ebua Buh, V., Chancellor, R., Cipoletta, C., Cohen, H., Corogenes, K., Coupland, C., Curran, B., Deschner, T., Dierks, K., Dieguez, P., Dilambaka, E., Diotoh, O., Dowd, D., Dunn, A., Eshuis, H., Fernandez, R., Ginath, Y., Hart, J., Hedwig, D., Ter Heegde, M., Hicks, T. C., Imong, I. S., Jeffery, K. J., Junker, J., Kadam, P., Kambi, M., Kienast, I., Kujirakwinja, D., Langergraber, K. E., Lapeyre, V., Lapuente, J., Lee, K., Leinert, V., Meier, A., Maretti, G., Marrocoli, S., Mbi, T. J., Mihindou, V., Möbius, Y. B., Morgan, D., Morgan, B., Mulindahabi, F., Murai, M., Niyigabae, P., Normand, E., Ntare, N., Orsmby, L. J., Piel, A., Pruetz, J., Rundus, A., Sanz, C., Sommer, V., Stewart, F., Tagg, N., Vanleeuwe, H., Vergnes, V.,
Willie, J., Wittig, R. M., Zuberbuehler, K. and Boesch, C. (2016). “Chimpanzee Accumulative Stone Throwing,” Scientific Reports, 6, 22219. http://doi.org/10.1038/srep22219.
Kumru, C. S. and Vesterlund, L. (2010). “The Effect of Status on Charitable Giving,” Journal of Public Economic Theory, 12 (4), 709–735.
Kutsukake, N. and Castles, D. L. (2004). “Reconciliation and Post-Conflict Third-Party Affiliation Among Wild Chimpanzees in the Mahale Mountains, Tanzania,” Primates, 45 (3), 157–165. http://doi.org/10.1007/s10329-004-0082-z.
Leatherwood, S., Matkin, C. O., Hall, J. D. and Ellis, G. M. (1990). “Killer Whales, Orcinus orca, Photo-Identified in Prince William Sound, Alaska, 1976 Through 1987,” Canadian Field Naturalist, 104, 362–371.
Lilley, R. (2008). “Dolphin Saves Stuck Whales, Guides Them Back to Sea,” http://news.nationalgeographic.com/news/2008/03/080312-AP-dolph-whal.html
Luncz, L. V. and Boesch, C. (2014). “Tradition Over Trend: Neighboring Chimpanzee Communities Maintain Differences in Cultural Behavior Despite Frequent Immigration of Adult Females,” American Journal of Primatology, 76 (7), 649–657.
Luncz, L. V., Mundry, R. and Boesch, C. (2012). “Evidence for Cultural Differences Between Neighboring Chimpanzee Communities,” Current Biology, 22 (10), 922–926.
Mann, J., Stanton, M. A., Patterson, E. M., Bienenstock, E. J. and Singh, L. O. (2012). “Social Networks Reveal Cultural Behaviour in Tool-Using Dolphins,” Nature Communications, 3, 980.
McGrew, W. C. and Tutin, C. E. G. (1978). “Evidence for a Social Custom in Wild Chimpanzees?” Man, 13, 234–251.
McKenna, C. (2015). “Marine Midwife? Endangered Orca in B.C. May Have Had Help Giving Birth,” www.ctvnews.ca/sci-tech/marine-midwife-endangered-orca-in-b-c-may-have-had-helpgiving-birth-1.217957
Melis, A. P., Hare, B. and Tomasello, M. (2006). “Chimpanzees Recruit the Best Collaborators,” Science, 311, 1297–1300.
———. (2009). “Chimpanzees Coordinate in a Negotiation Game,” Evolution and Human Behavior, 30, 381–392.
Melis, A. P. and Tomasello, M. (2013). “Chimpanzees' (Pan troglodytes) Strategic Helping in a Collaborative Task,” Biology Letters, 9, 1–4.
Millward, D. (2014). “Chimps Use Ingenuity to Make Great Escape Out of Zoo,” www.telegraph.co.uk/news/worldnews/northamerica/usa/10760267/Chimps-use-ingenuity-to-make-greatescape-out-of-zoo.html
Mitani, J. and Watts, D. (2001). “Boundary Patrols and Intergroup Encounters in Wild Chimpanzees,” Behaviour, 138 (3), 299–327. http://doi.org/10.1163/15685390152032488.
Morton, A. (2002). Listening to Whales: What the Orcas Have Taught Us. New York: Ballantine.
Neiwert, D. (2015). Of Orcas and Men: What Killer Whales Can Teach Us. New York: The Overlook Press.
Ohashi, G. and Matsuzawa, T. (2011). “Deactivation of Snares by Wild Chimpanzees,” Primates: Journal of Primatology, 52 (1), 1–5. http://doi.org/10.1007/s10329-010-0212-8.
Pack, A. A. (2010). “The Synergy of Laboratory and Field Studies of Dolphin Behavior and Cognition,” International Journal of Comparative Psychology, 23, 538–565.
Parr, L. A., Waller, B. M., Vick, S. J. and Bard, K. A. (2007). “Classifying Chimpanzee Facial Expressions Using Muscle Action,” Emotion, 7 (1), 172–181.
Peterson, D. and Wrangham, R. (2003). Demonic Males: Apes and the Origins of Human Violence. Boston: Mariner Books.
Pitman, R. L., Deecke, V. B., Gabriele, C. M., Srinivasan, M., Black, N., Denkinger, J., Durban, J. W., Mathews, E. A., Matkin, D. R., Neilson, J. L., Schulman-Janiger, A., Shearwater, D., Stap, P. and Ternullo, R. (2016). “Humpback Whales Interfering When Mammal-Eating Killer Whales Attack Other Species: Mobbing Behavior and Interspecific Altruism?” Marine Mammal Science, 33 (1), 7–58.
Pitman, R. L. and Durban, J. W. (2009). “Save the Seal! Whales Act Instinctively to Save Seals,” Natural History, 9, 48.
Plutchik, R. (1987). “Evolutionary Bases of Empathy,” in N. Eisenberg and J. Strayer (eds.), Empathy and Its Development. Cambridge: Cambridge University Press.
Preston, S. D. and de Waal, F. B. M. (2002). “Empathy: Its Ultimate and Proximate Bases,” Behavioral and Brain Sciences, 25 (1), 1–20.
Proctor, D., Williamson, R. A., de Waal, F. B. M. and Brosnan, S. F. (2013). “Chimpanzees Play the Ultimatum Game,” Proceedings of the National Academy of Sciences, 110 (6), 2070–2075.
Pruetz, J. D. and Bertolani, P. (2007). “Savanna Chimpanzees, Pan troglodytes verus, Hunt with Tools,” Current Biology, 17, 412–417.
Reggente, M. A. L., Alves, F., Nicolau, C., Freitas, L., Cagnazzi, D., Baird, R. W. and Galli, P. (2016). “Nurturant Behavior Toward Dead Conspecifics in Free-Ranging Mammals: New Records for Odontocetes and a General Review,” Journal of Mammalogy, 97. http://dx.doi.org/10.1093/jmammal/gyw089.
Reiss, D. (2011). “Dolphin Research: Educating the Public,” Science, 332 (6037), 1501.
Riedl, K., Jensen, K., Call, J. and Tomasello, M. (2012). “No Third-Party Punishment in Chimpanzees,” Proceedings of the National Academy of Sciences, 109, 14824–14829.
Rollin, B. E. (2007). “Animal Mind: Science, Philosophy, and Ethics,” The Journal of Ethics, 11, 253–274.
Rowlands, M. (2012). Can Animals Be Moral? Oxford: Oxford University Press.
Rudolf von Rohr, C., Burkart, J. M. and van Schaik, C. P. (2011). “Evolutionary Precursors of Social Norms in Chimpanzees: A New Approach,” Biology and Philosophy, 26, 1–30.
Rudolf von Rohr, C., Koski, S. E., Burkart, J. M., Caws, C., Fraser, O. N., Ziltener, A. and van Schaik, C. P. (2012). “Impartial Third-Party Interventions in Captive Chimpanzees: A Reflection of Community Concern,” PLOS One, 7 (3), e32494. http://doi.org/10.1371/journal.pone.0032494.
Rudolf von Rohr, C., van Schaik, C. P., Kissling, A. and Burkart, J. M. (2015). “Chimpanzees' Bystander Reactions to Infanticide,” Human Nature, 26 (2), 143–160.
Saulitis, E., Matkin, C., Barrett-Lennard, L., Heise, K. and Ellis, G. (2000). “Foraging Strategies of Sympatric Killer Whale (Orcinus orca) Populations in Prince William Sound, Alaska,” Marine Mammal Science, 16 (1), 94–109.
Searle, J. (1995). The Construction of Social Reality. New York: Free Press.
Sheets-Johnstone, M. (1992). “Taking Evolution Seriously,” American Philosophical Quarterly, 29 (4), 343–352.
Shweder, R. and Haidt, J. (1993). “The Future of Moral Psychology: Truth, Intuition, and the Pluralist Way,” Psychological Science, 4, 360–365.
Silk, J. B., Brosnan, S. F., Vonk, J., Henrich, J., Povinelli, D. J., Richardson, A. S., Lambeth, S. P., Mascaro, J. and Schapiro, S. J. (2005). “Chimpanzees Are Indifferent to the Welfare of Unrelated Group Members,” Nature, 437, 1357–1359.
Similä, T. and Ugarte, F. (1993). “Surface and Underwater Observations of Cooperatively Feeding Killer Whales in Northern Norway,” Canadian Journal of Zoology, 71 (8), 1494–1499.
Simmonds, M. P. (1997). “The Meaning of Cetacean Strandings,” Bulletin de L'Institut Royal des Sciences Naturelles de Belgique Biologie, 67, Suppl, 29–34.
Singer, P. (1975). Animal Liberation. New York: HarperCollins.
Skoyles, J. R. (2011). “Chimpanzees Make Mean-Spirited, Not Prosocial, Choices,” Proceedings of the National Academy of Sciences, 108 (42), E835.
Sober, E. (2015). Ockham's Razors: A User's Manual. New York: Cambridge University Press.
Stanton, M. A. and Mann, J. (2014). “Shark Bay Bottlenose Dolphins: A Case Study for Defining and Measuring Sociality,” in L. Karczmarski and J. Yamagiwa (eds.), Primates and Cetaceans: Field Research and Conservation of Complex Mammalian Societies. Tokyo, New York: Springer.
Subiaul, F., Vonk, J., Okamoto-Barth, S. and Barth, J. (2008). “Chimpanzees Learn the Reputation of Strangers by Observation,” Animal Cognition, 11, 611–623.
Tomasello, M. (2016). A Natural History of Human Morality. Cambridge, MA: Harvard University Press.
Torres, L. and Read, A. (2009). “Where to Catch a Fish? The Influence of Foraging Tactics on the Ecology of Bottlenose Dolphins (Tursiops truncatus) in Florida Bay, Florida,” Marine Mammal Science, 25 (4), 797–815.
Tyack, P. L. (2000). “Dolphins Whistle a Signature Tune,” Science, 289 (5483), 1310–1311.
van Schaik, C. P. (2003). “Local Traditions in Orangutans and Chimpanzees: Social Learning and Social Tolerance,” in D. M. Fragaszy and S. Perry (eds.), The Biology of Traditions: Models and Evidence. New York: Cambridge University Press, 297–328.
Varner, G. E. (2012). Personhood, Ethics, and Animal Cognition: Situating Animals in Hare's Two Level Utilitarianism. Oxford: Oxford University Press.
Vonk, J., Brosnan, S. F., Silk, J. B., Henrich, J., Richardson, A. S., Lambeth, S. P., Schapiro, S. J. and Povinelli, D. J. (2008). “Chimpanzees Do Not Take Advantage of Very Low Cost Opportunities to Deliver Food to Unrelated Group Members,” Animal Behaviour, 75 (5), 1757–1770. doi:10.1016/j.anbehav.2007.09.036.
Warneken, F., Hare, B., Melis, A. P., Hanus, D. and Tomasello, M. (2007). “Spontaneous Altruism by Chimpanzees and Young Children,” Public Library of Science Biology, 5, 1414–1420.
Watts, D. P. and Mitani, J. (2001). “Boundary Patrols and Intergroup Encounters in Wild Chimpanzees,” Behaviour, 138 (3), 299–327.
Watts, D. P., Muller, M., Amsler, S., Mbabazi, G. and Mitani, J. C. (2006). “Lethal Intergroup Aggression by Chimpanzees in Kibale National Park, Uganda,” American Journal of Primatology, 68, 161–180.
White, T. (2007). In Defense of Dolphins: The New Moral Frontier, Blackwell Public Philosophy. Malden, MA: Blackwell Publisher.
Whitehead, H. and Rendell, L. (2015). The Cultural Lives of Whales and Dolphins. Chicago: University of Chicago Press.
Whiten, A., Goodall, J., McGrew, W. C., Nishida, T., Reynolds, V., Sugiyama, Y., Tutin, C. E. G., Wrangham, R. W. and Boesch, C. (1999). “Cultures in Chimpanzees,” Nature, 399, 682–685.
Williams, A. (2013). “The Amazing Moment a Pod of Dolphins Formed ‘Life Raft’ to Save Sick Companion from Drowning,” www.dailymail.co.uk/sciencetech/article-2269111/Dolphinscreate-life-raft-sick-companion-VIDEO.html
Wilson, M. L., Boesch, C., Fruth, B., Furuichi, T., Gilby, I. C., Hashimoto, C., Hobaiter, C. L., Hohmann, G., Itoh, N., Koops, K., Lloyd, J. N., Matsuzawa, T., Mitani, J. C., Mjungu, D. C., Morgan, D., Muller, M. N., Mundry, R., Nakamura, M., Pruetz, J., Pusey, A. E., Riedel, J., Sanz, C., Schel, A. M., Simmons, N., Waller, M., Watts, D. P., White, F., Wittig, R. M., Zuberbühler, K. and Wrangham, R. W. (2014). “Lethal Aggression in Pan Is Better Explained by Adaptive Strategies Than Human Impacts,” Nature, 513 (7518), 414–417. http://doi.org/10.1038/nature13727.
Wrangham, R. W. and Glowacki, L. (2012). “Intergroup Aggression in Chimpanzees and War in Nomadic Hunter-Gatherers: Evaluating the Chimpanzee Model,” Human Nature, 23 (1), 5–29. http://doi.org/10.1007/s12110-012-9132-1.
Yamamoto, S., Humle, T. and Tanaka, M. (2009). “Chimpanzees Help Each Other Upon Request,” PLOS One, 4, e7416.
Further Readings
For analyses on the evolution of empathy in nonhumans, see both L. Gruen, Entangled Empathy: An Alternative Ethic for Our Relationships with Animals (Brooklyn: Lantern Books, 2015) and F. de Waal, The Age of Empathy: Nature's Lessons for a Kinder Society (Toronto: McClelland & Stewart, 2009). For a discussion of moral behaviors in other animals, see M. Bekoff and J. Pierce, Wild Justice: The Moral Lives of Animals (Chicago: University of Chicago Press, 2009). For an argument that animals can act for moral reasons without being moral agents, see M. Rowlands, Can Animals Be Moral? (Oxford: Oxford University Press, 2012). For additional information about chimpanzees and cetaceans, see—respectively—The Mind of the Chimpanzee: Ecological and Experimental Perspectives (E. V. Lonsdorf, S. R. Ross and T. Matsuzawa, eds.) (Chicago: University of Chicago Press, 2010) and H. Whitehead and L. Rendell, The Cultural Lives of Whales and Dolphins (Chicago: University of Chicago Press, 2015).
Related Chapters
Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 9 The Evolution of Moral Cognition; Chapter 12 Contemporary Moral Epistemology; Chapter 13 The Denial of Moral Knowledge; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 22 Moral Knowledge as Know-How.
4
THE NEUROSCIENCE OF MORAL JUDGMENT
Joanna Demaree-Cotton and Guy Kahane
1. Introduction
We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people's characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of these processes—to the role, for example, of our intuitions or emotions in shaping our moral views or to the consistency of a judgment about a case with more general moral beliefs. Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it's increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology but also to support ambitious normative arguments. Needless to say, these normative arguments are contentious, as is, more generally, the relation between the CSM and traditional philosophical accounts of moral judgment. Where some assert that empirical evidence could resolve longstanding ethical debates (see, e.g., Churchland, 2011; Greene, 2008, 2016), others argue that neuroscience has no normative significance whatsoever (Berker, 2009). The aim of the present chapter is to bring a measure of increased clarity to this debate.
We will proceed as follows. We shall begin with general reflections about the potential bearing of neuroscience on moral epistemology. Focusing on the issue of the reliability of our moral judgments, we shall suggest that neuroscientific findings have limited epistemic significance considered on their own; they are likely to make an epistemic difference only when “translated” into higher-level psychological claims.1 But neuroscientific findings are
anyway best understood as merely one stream of evidence feeding into the CSM, a broader scientific enterprise whose focus is primarily at a higher level of description. Questions about the normative significance of neuroscience are therefore unhelpful unless “neuroscience” is understood in this broader sense. These general reflections will guide the rest of the chapter. We will briefly introduce some key theories and findings that have dominated the “first wave” of recent cognitive science of morality (circa 2000–2010) that much of the philosophical debate has focused on: the “moral grammar” theory defended by Mikhail and others, Haidt’s social intuitionist model, and Greene’s dual process model.2 We then consider more closely several key themes and debates that have shaped this research, highlighting both their potential normative significance and important recent developments in empirical research that further complicate the scene, including the rejection of a sharp dichotomy between cognition and emotion,3 and a departure from strong nativist assumptions to interest in “moral learning”.4
2. Is Neuroscience Relevant to Moral Epistemology?
The epistemic status of our moral intuitions and judgments often depends on their immediate causal history. Most obviously, it is widely thought that moral judgments resulting from an unreliable process are not epistemically justified.5 Think, for example, of forming moral judgments about others' actions purely on the basis of their race, or whether you are fond of them, rather than on the basis of the circumstances, nature, and consequences of their action. Few, if any, hold that these are reliable ways of forming moral judgments.6 Reliability may not be the only feature of judgment-forming processes that affects the normative status of moral judgments. Many moral epistemologists hold that epistemic justification requires not only reliability but also forming one's judgments in response to appropriate, morally relevant reasons. This requirement may also be morally important. For example, a correct, reliably produced moral judgment might nevertheless fail to express a virtuous character unless it was formed in response to the reasons that make that moral judgment correct.7
This brief sketch of the epistemic significance of the processes that generate our moral judgments both offers a straightforward argument supporting an epistemic role for the CSM and sets important constraints on that role. First the argument. What kinds of processes our moral judgments depend on influences the normative status of those judgments in various ways. But what kinds of processes produce our moral judgments is an empirical matter. So insofar as the CSM sheds light on the processes that produce our moral judgments, it seems straightforwardly relevant to moral epistemology. But while the CSM obviously has much to say about these processes, to impact moral epistemology it must shed light on the epistemically relevant aspects of these processes. And here things get more complicated.
Think of the kinds of things the CSM can tell us about the sources of our moral judgments. It may clarify the personal-level processes that underlie them: what features of the case are consciously registered, whether or not these judgments result from explicit deliberation, whether they are based in felt emotions, etc. The CSM can also uncover the sub-personal information processing that underlies our judgments.8 Our judgments may
be the result of complex unconscious computations, shaped by implicitly held principles and responsive to features of a case in ways that escape our conscious awareness. Finally, the CSM can offer an account of these processes at the neural level: by identifying brain circuitry and patterns of neural activation involved in generating these judgments. Now when a judgment-forming process is specified in personal-level terms—and in many cases, also in information processing terms—we have philosophical and empirical resources available to us to make reasonable judgments as to whether that type of process is likely to produce reliable, reason-responsive moral judgments or not. Our earlier examples of unreliable processes took this form: racial prejudice or biased motives are paradigmatic examples of processes that undermine the epistemic standing of a judgment.9 Similarly, knowing which features of a moral case are being registered (and which ignored), what implicit or explicit principles are being applied, what kind of deliberative process is being followed—these can also be directly evaluated for epistemic respectability. So, insofar as we can infer what psychological process is being implemented by a given neural process, neuroscience can indirectly inform us about the reliability of moral judgments. But unless we can make such inferences, how can we determine whether a neural process is likely to issue in accurate moral judgments? It seems that we could only evaluate the reliability of a neural process by observing what moral judgments result from it and then using armchair methods to directly evaluate the judgments themselves. This is likely to require controversial commitments to substantive normative claims. It risks circularity, if we end up justifying a set of moral judgments by appeal to the reliability of a neural process that has been certified as reliable precisely because it has produced the relevant judgments. One could perhaps try to avoid such circularity by arguing that certain controversial moral judgments were produced by a reliable neural process because that process also reliably produces judgments that we all agree to be correct—though it is not obvious why the reliability in the latter context must carry over to the former (Kahane, 2016). But in any event, the work neuroscience does on this approach is pretty minimal: identifying types of processes that we then try to correlate with patterns of moral judgments. What exactly is involved in these processes, at the neural level, is irrelevant.10
These methodological considerations lead to a more substantive claim. The claim is that a moral judgment has the epistemic properties it does in virtue of higher-level psychological rather than neural properties of the judgment-forming process. On most accounts of moral justification, it is the psychological properties of judgment-forming processes that are epistemically relevant. It seems clear why this would be so for internalist accounts of justification, which stress the importance of forming beliefs on the basis of good reasons. After all, the kinds of states and processes that can constitute good or bad reasons are best described in personal-level terms, e.g.: processes of deliberation, the experience of certain emotions, weighing up of evidence, or the perception of certain nonmoral features. However, reliabilist accounts of justification also emphasize the importance of psychological processes.
This is because reliabilism has had to face the challenge (the so-called “generality problem”) of telling us which way of specifying “the” process leading up to a judgment is the relevant one for evaluating the judgment's justification. Attempted solutions have tended to stress the relevance of the reliability of the psychological process on which the judgment is founded.11 This is partly because it seems important to ensure
that beliefs are “well-founded” or “properly based”—i.e. produced by a process involving evidentially relevant mental states. By contrast, how a psychological process is physically implemented seems intuitively irrelevant to epistemology; if two people arrive at the same moral judgment by weighing up the same evidence in the same fashion, it seems irrelevant to the epistemic status of that judgment whether their neural hardware differed in any interesting way.12
So a purely neural description of the processes leading to a moral judgment will on its own tell us little about its epistemic status. We first need to map these neural properties onto higher-level psychological ones. Still, you might think that if there were a straightforward, one-to-one mapping between psychological processes and neural ones, we could—especially as neuroscience continues to progress—simply translate epistemically relevant psychological processes into neural terms. If so, questions about the epistemic status of the process leading up to a moral judgment could in principle be answered in exclusively neural terms. However—whilst we are increasingly able to make reasonable inferences about psychology from neuroscientific data—there are good reasons for thinking that the relation between psychological and neural types isn't going to be straightforward in this way.
First, many (if not all) types of psychological processes seem to be multiply realized—they may be realized by distinct neural arrangements in different individuals, or even within the same individual over time. For example, emotional processing that is normally supported by paralimbic brain areas in nonclinical populations might be supported by the lateral frontal cortex in psychopaths (Kiehl, 2008). This represents one way in which simple one-to-one mapping can fail.13
Secondly, any given brain area or network is likely to be involved in many distinct psychological processes in different contexts (see e.g. Pessoa, 2013), and it may make little sense to ask whether neural activation in such a network counts as a reliable process overall. For example, activity in the amygdala may contribute to a reliable psychological process-type in certain cases (e.g., representing abhorrent behavior as negative) and an unreliable psychological process-type in others (e.g., hostility to outgroup members).
Finally, many epistemically important distinctions that are salient when judgment-forming processes are described in psychological terms are unlikely to carve distinctions that are significant at the neural level. Of course, there must be some neural difference between reliable and unreliable psychological processes in a given context.14 But this difference needn't be one that is useful for describing the functional organization of the brain. For example, while there are competing accounts of the considerations that are relevant to the moral evaluation of some person, action, or institution, it seems unlikely that the consideration of these “morally relevant factors” on the one hand and of patently morally irrelevant factors on the other will map onto two interestingly distinct neural kinds on any such account. Indeed, the very same neural network may be involved in both, just with subtly different patterns of activation. Conversely, unless researchers embrace an exceedingly simplistic moral scheme on which (e.g.)
nothing but physical pain and pleasure are relevant to the morality of an action, person, or institution, the patterns of neural activity involved in perceiving or cognizing morally relevant considerations may be quite heterogeneous—at least as heterogeneous as the considerations themselves. So it may be only when we describe what these patterns of activity represent at a higher level that it will become salient that they fall into epistemically significant groups.
Consequently, psychological descriptions are likely to retain their primacy over neural ones in moral epistemology. There is thus little point in considering the normative significance of neuroscience independently of the specific role of neurobiological claims within the larger body of evidence and theorizing in the CSM, much of which is anyway, at least at this stage, largely focused on higher-level cognitive processes.15 Neurobiological evidence from, say, neuroimaging or psychopharmacology can support or challenge theories of moral psychology, but these theories ultimately stand or fall in light of the whole body of relevant empirical evidence, which will frequently involve traditional questionnaires, evidence about response times, introspective reports, and the like.
3. Three Approaches to the Cognitive Science of Morality
The turn of the century has witnessed a dramatic surge in scientific interest in moral judgment. After decades of seemingly fruitless attempts to solve the so-called problem of consciousness, cognitive science has turned its attention to morality. Spurred by the development of functional neuroimaging and general trends in cognitive science, cognitive scientists rejected the stale rationalist developmental theories of Piaget and Kohlberg and instead emphasized the role of innate, automatic and—somewhat more controversially—affective processes in driving moral judgment.16
The CSM has so far been dominated by three main approaches. The universal moral grammar approach—championed by Harman, Mikhail, Hauser, and Dwyer—builds on Rawls's early work to draw a direct analogy between moral psychology and Chomskian linguistics.17 On this view, our core moral judgments reflect the working of a “moral organ”: unconscious computations map input about causation, intention, and action onto innately represented moral principles to produce universally shared intuitions about moral permissibility and wrongness.
Jonathan Haidt's social intuitionist approach also emphasizes the centrality of automatic intuitions in shaping moral judgment. On Haidt's view, however, our moral intuitions result from rapid emotional reactions—a claim supported by studies purporting to show that we can manipulate moral judgments by triggering emotions such as disgust.18 The universal moral grammar and social intuitionist approaches sharply disagree about the role of emotion in moral judgment but agree that moral judgment is almost exclusively the product of automatic intuitions, not of conscious reasoning. For the universal moral grammar theorists, explicit reasoning plays a minimal role in our core moral “competence”—just as the conscious application of rules plays a minimal role in our grammatical competence. For Haidt, such reasoning is largely used to rationalize intuitive views we would hold anyway and to pressure others into sharing our views.
The third approach, Joshua Greene's dual process model, shares much with Haidt's but adds an important twist.19 Like Haidt, Greene holds that a great deal of moral judgment is shaped by immediate “alarm-bell”-like emotional reactions (a “system-1” type process) and that much of the justification offered in support of our moral views—including much of the theorizing of moral philosophers—is merely ex-post rationalization. But Greene also claims to find evidence for an important exception, arguing that utilitarian judgments are uniquely based in explicit reasoning (a “system-2” type process). And Greene has famously argued that this difference—which he traces to distinct neural structures—supports a normative argument favoring utilitarianism.
These are but brief sketches of these three main approaches.20 In what follows we will consider some of their claims more closely. The rest of the chapter is organized thematically. We will consider the epistemic import of several key debates in recent moral psychology: debates about the domain-specificity of moral cognition, about the respective roles of emotion and reason, and about nativist and learning approaches. We will review some of the core evidence for the three approaches but also ways in which more recent research has cast doubt on some of their key claims. In line with the discussion of the previous section, we will highlight key findings at the neural level but always as they bear on psychological theories that support claims about personal-level processes of potential epistemic importance.
4. Domain-Specificity and General Capacities
Psychologists distinguish between domain-specific and domain-general capacities. Domain-specific capacities are dedicated to a target domain (possible examples: grammatical competence, face recognition); domain-general capacities apply across domains (example: general intelligence). When we form moral intuitions and judgments, are we utilizing a capacity that is specific to morality—or even a “moral module”—or are we merely drawing on general psychological capacities? Relatedly, are there specific brain areas dedicated to moral cognition? If there is a moral module, it will be natural to expect it to be realized in distinctive neural circuitry (Suhler & Churchland, 2011), though in principle a module could be realized in a more distributed network. The universal moral grammar (UMG) approach is most clearly committed to the existence of such a moral module, but the other approaches have also been friendly to domain-specificity: Haidt's moral foundations theory, which seeks to develop the social intuitionist approach by describing in more detail the mechanisms that produce innate, affective intuitions, claims that they are produced by domain-specific, functional modules (Graham et al., 2012). Even Greene sometimes suggests that emotional “system 1” judgments are the product of evolved, domain-specific capacities.21
The evidence so far has not been very kind to the idea of a dedicated moral module, at least not one with unique neural correlates. Early moral neuroimaging studies used fMRI to investigate which brain areas were associated with specifically moral content by comparing conditions where participants assessed stimuli with moral content to conditions where they assessed nonmoral but otherwise similar content.22 These studies found that moral cognition employs many neural networks involving areas distributed around the brain, including the ventromedial and dorsomedial prefrontal cortex (vmPFC and dmPFC), the temporoparietal junction (TPJ), the precuneus, the posterior cingulate cortex (PCC), the amygdala, and the temporal pole. Moreover, these brain areas are not used exclusively for moral cognition. Rather, the brain networks involved in morality overlap extensively with those involved in theory of mind, emotion, and a host of other functions, including imagination, memory, and causal reasoning—all of which are capacities that we also use for nonmoral cognition.23
This has not, however, sounded the death knell of domain-specificity. Haidt and Joseph (2011) have defended moral modules against neuroscientific objections of this kind by highlighting that psychological modules are not the same thing as neurobiological modules.24 As noted earlier, psychological mechanisms might be domain-specific even if not based in
a specific area of the brain; activity in distributed, overlapping neural circuits can produce specialized mechanisms at the psychological level. For example, some UMG theorists proposed that morality relies on a unique interaction between theory of mind and emotions to produce distinctively moral cognition (Hauser, 2006, 219; Hauser & Young, 2008). And current research has not clearly distinguished the question of the existence of a mechanism dedicated to producing moral outputs from the question of the existence of a mechanism dedicated to producing evaluative or normative outputs more generally; the latter might exist even if the former doesn't.25 Nevertheless, the view that moral cognition is based in more general psychological capacities currently seems more plausible.
The empirical question of domain-specificity ties in with one common worry about moral intuitions, namely that it's hard to see how we could calibrate them in a noncircular fashion. If the processes in question produced nothing but moral outputs, this worry would be reinforced. By contrast, if moral intuitions result from domain-general capacities that also produce nonmoral outputs, then we have an independent way of assessing the reliability of the processes that generate our moral intuitions. This would also address the worry that our moral intuitions are produced by a mysterious faculty that is “utterly different from our ordinary ways of knowing everything else” (Mackie, 1977).
The empirical question of domain-specificity also relates in a fairly direct way to the question of epistemic reliability: to answer questions about reliability, we first need to identify the processes whose reliability we are trying to assess. For example, Nado (2014) has argued that evidence of domain-specificity of different categories of intuitions, including moral intuitions, means that the reliability of each category of intuitions must be assessed separately. Others have argued that moral intuitions inherit the reliability of the domain-general processes that produce them. One such strategy is an influential line of response to Sharon Street's Darwinian challenge to moral realism.26 In brief, Street (2006) argues that our moral beliefs are extensively shaped by evaluative dispositions that have an evolutionary origin. Since it's highly unlikely that such evolutionary pressures track an objective realm of moral facts, we have no reason to think that our moral judgments are reliable if we endorse moral realism. In reply, Parfit has argued that when we assess the intrinsic plausibility of our moral intuitions we are employing the same rational capacity we use when we assess epistemic, logical, and other a priori claims.27 Since there was evolutionary advantage in having a general rational capacity that tracks epistemic and logical facts, we have good reason to think that this capacity is reliable—a conclusion we can then carry over to the moral domain.
One may wonder, however, whether reliability in one domain must automatically carry over to a completely different one. But more importantly, for our purposes, this strategy relies on falsifiable empirical claims: it assumes that at least reflective moral intuitions are the product of a domain-general capacity. Now, as we saw earlier, the current balance of the evidence doesn't give much support to the idea of a domain-specific moral module. Unfortunately, however, it also doesn't lend much support to Parfit's aforementioned strategy.
As we shall see, evidence from psychopathy and lesion studies suggests that the content of our moral judgments is strongly dependent not on whether we possess general rational capacities but on whether we have certain emotional sensibilities. It is very unlikely that we form moral judgments simply by exercising general rational capacities.
These considerations take us directly to another debate at the center of recent empirical moral psychology—that about the respective roles of emotion and reason in moral judgment.
5. Emotion and Deliberation
A long tradition in moral philosophy, stretching back to Plato and Kant, emphasized a sharp distinction between reasoning and emotions, with cool reasoning the source of practical rationality and moral knowledge and emotions an irrational, distorting influence on moral judgment.28 This division between irrational emotion and rational reasoning dominated early work in the CSM. It's clearly in the background of Haidt's social intuitionist model and Greene's dual process model, with the main difference being that, according to the social intuitionist model, reasoning hardly ever influences our moral judgments. In contrast, according to Greene's model, genuine moral reasoning produces one category of moral judgment, namely “utilitarian” judgments—such as the judgment that it's permissible to sacrifice one person as a means to saving others—while “deontological” judgments result from emotions that function in an irrational, “alarm-like” way that often overrides rational processing. By contrast, UMG theorists, who are more favorable toward such “deontological” judgments, insist that the immediate intuitions on which they are based result from “cool” unconscious computations.
However, this sharp division between emotion and reason is out of touch with important recent work in moral epistemology. It has been argued, for example, that emotions are often needed to bring morally relevant features to our attention and that they may even be necessary for grasping their moral importance. Emotions, then, can play an epistemically beneficial, perhaps even essential, role in the pursuit of moral knowledge (e.g., Arpaly, 2003; Jaggar, 1989; Little, 1995; see also De Sousa, 2014).29 Whilst few deny that certain emotional experiences have an epistemically negative effect—for example, intense, overwhelming emotions that distort our perception of the situation and prevent us from considering relevant evidence—philosophers in this camp would argue that “cold” reasoning can also be subject to bias and blind spots, especially when entirely severed from emotional input.
Recent developments in neuroscience strongly support this alternative way of thinking about emotion and reason (Moll et al., 2008; Nichols, 2002; see also May and Kumar, Chapter 7 of this volume; Railton, 2017, especially 5.2). In the mid-twentieth century, it was thought that emotional and cognitive processes were supported by distinct, dedicated brain regions (the “limbic system” and the neocortex, respectively). However, there is growing consensus that there is no such sharp anatomical divide in the highly interconnected human brain (e.g. Barrett & Satpute, 2013; LeDoux, 2012; Lindquist & Barrett, 2012; Okon-Singer et al., 2015; Pessoa, 2008). Brain areas involved in emotion play a crucial role in cognitive functions such as learning, attention, and decision making, and emotions depend on brain regions involved in various cognitive operations. Consequently, cognitive scientists increasingly reject the characterization of psychological processes involved in moral judgment as either “emotional” or “cognitive” (e.g. Cushman, 2013; Huebner, 2015; Moll, De Oliveira-Souza & Zahn, 2008). If emotions are not easily separable from the “cognitive” means by which we represent, process, and evaluate the world, then this opens up the possibility
that emotional processes—not just “cold” reasoning—can make rational, evidence-sensitive contributions to moral judgment. This suggestion is supported by recent research on the network of brain areas that Greene associated with “emotional” moral judgments—in particular, the right temporoparietal junction (rTPJ),30 the amygdala, and the ventromedial prefrontal cortex (vmPFC) (Greene et al., 2001; Greene, 2007). The “emotional” process Greene identified appears to be based on an initial unconscious analysis of what seem like morally relevant factors concerning the intentions of agents and their causal relationship to harm.

A number of recent EEG and imaging studies suggest that moral violations are initially identified in the rTPJ (Harenski, Antonenko et al., 2010), prior to experienced emotion.31 The rTPJ is crucial for making moral distinctions on the basis of whether harm to persons is intended, unintended, or merely accidental (Schaich Borg et al., 2006; Chakroff & Young, 2015). Indeed, when rTPJ activity is artificially disrupted, participants fail to take intentions into account, judging, say, that failed deliberate attempts to harm are morally acceptable, or that harming someone by accident is wrong (Young, Camprodon et al., 2010).

The EEG studies suggest that processing continues next in the amygdala. While the amygdala is involved in affect, it is a mistake to think of it as simply generating gut feelings. The amygdala is involved in associative learning and is a crucial node in the “salience” network, which, given that we have limited time and computational resources, uses affective cues to help us pay greater attention to information that is likely to be contextually important—for example, because it has previously been associated with threats, norm violations, or benefits (Barrett & Satpute, 2013; Pessoa & Adolphs, 2010). This allows the amygdala to rapidly identify potentially relevant features and, through extensive connections to the visual and prefrontal cortex, to direct attention and processing resources to those features.

Both the amygdala and the rTPJ feed information about potentially morally relevant features to frontal areas including the vmPFC.32 Whilst the vmPFC allows emotionally processed information to influence moral judgment, recent fMRI studies suggest it doesn’t do this by generating “alarm-like” reactions designed to dominate judgment, as Greene’s dual process model claimed. Rather, it facilitates a productive interaction between emotions and reasoning. Firstly, the vmPFC is part of the so-called “default mode” brain network that is responsible for imagination, visualization, and empathic understanding.33 One way the affectively charged salience network directs attention to morally relevant features is by recruiting the default mode network to imaginatively simulate the perspectives of harmed parties (Chiong et al., 2013). Secondly, the vmPFC allows us to represent different pieces of information generated by different brain areas as evaluatively relevant to the moral question at hand, converting information into “common currency” (Huebner, 2015) and allowing us to feel its relevance (Young, Bechara et al., 2010).
This fits with imaging studies showing that its activity isn’t associated directly with specific inputs to moral judgment—such as emotional responses or the calculation of costs and benefits (narrowly construed)—but specifically with attempts to make an overall moral judgment (Hutcherson et al., 2015; Shenhav & Greene, 2014); nor is it associated with a specific type of conclusion, such as deontological or utilitarian judgments (Kahane et al., 2012) or the judgment that some controversial issue is wrong or not
wrong (Schaich Borg et al., 2011). Finally, there don’t appear to be inhibitory relationships between brain areas associated with emotions and those associated with cost-benefit analysis (Hutcherson et al., 2015); this again supports the hypothesis that the vmPFC does not facilitate the influence of emotions at the expense of other information but rather allows us to weigh information together to produce “all-things-considered” judgments—a view that is more consonant with common forms of moral deliberation (see Kahane, 2014).

So it seems as if “emotional” processes, mediated by areas such as the rTPJ, amygdala, and vmPFC, allow us to process morally relevant information and thus can contribute to good moral reasoning. This is supported by studies of clinical populations with neural abnormalities that disrupt these circuits, such as psychopaths, patients with vmPFC damage, and patients with behavioral-variant frontotemporal dementia (FTD). While much of their ability to reason remains intact, these clinical populations suffer from emotional deficits, including diminished empathy, that restrict their “cognitive” abilities to correctly perceive, attend to, and take into account morally relevant properties.34 For example, patients with vmPFC lesions and FTD patients have an impaired ability to infer which emotional states others are experiencing (Shamay-Tsoory & Aharon-Peretz, 2007); vmPFC patients struggle with decision making because they lack appropriate emotional reactions (Damasio, 1994) and fail to respond to harmful intentions (e.g., judging that attempted murder is permissible; Young, Bechara et al., 2010); and FTD patients display sociopathic tendencies. Psychopaths display abnormal functioning in the amygdala, vmPFC, and TPJ when presented with moral transgressions and consequently fail to attend to and correctly process morally salient properties such as harm and mental states (Decety et al., 2015; Harenski, Harenski et al., 2010; Hoppenbrouwers et al., 2016).

These clinical populations also produce so-called “utilitarian” judgments—judging that it is morally permissible to sacrifice one person as a means to saving others—at abnormally high rates. Rather than being the product of good moral reasoning unfettered by irrational emotion, it seems these judgments are associated with these patients’ failure to register complex facets of moral value (Gleichgerrcht et al., 2011). Indeed, in nonclinical populations, the so-called utilitarian judgments that Greene associated with the dorsolateral prefrontal cortex (dlPFC) are not associated with genuinely utilitarian, impartial concern for others but rather with rational egoism, endorsement of clear ethical transgressions, and lower levels of altruism and identification with humanity (Kahane et al., 2015). Furthermore, in a study by FeldmanHall and colleagues (2012) in which participants decided whether to give up money to stop someone receiving painful electric shocks, activity in the dlPFC was associated with self-interested decisions and decreased empathic concern, while the vmPFC was associated with prosocial decisions.35 Moreover, the “emotional” circuits seem to facilitate truly impartial, altruistic behavior: Marsh and colleagues (2014), for example, found that extraordinary altruists have a relatively enlarged right amygdala that is more active in response to other people’s emotions—which they are better at identifying.

This is not to say that reasoning supported by the dlPFC cannot contribute positively to moral judgment.
The dlPFC, part of the frontoparietal control network, is considered especially important for holding goals and norms in working memory, and thus for overriding intuitive responses on reflective grounds. For example, Cushman and colleagues (2012) found that activation of the dlPFC was associated with the condemnation
of harmful omissions—perhaps due to an attempt to treat acts and omissions equally on the basis of consistency (see Campbell & Kumar, 2012). So while reasoning sometimes suffers from moral blind spots or is used for egoistic goals, it can make an epistemically positive contribution to moral judgment, especially when conjoined with emotional processes (see also May & Kumar, Chapter 7 of this volume).

The neuroscientific work reviewed here is not conclusive. Recording brain activity through fMRI or EEG cannot directly establish the causal impact of emotions or reasoning on judgment, and we must be cautious in drawing conclusions about normal moral judgment from those suffering from neuropsychological impairments (e.g. Bartels & Pizarro, 2011; Huebner, 2015; Kiehl, 2008). Nevertheless, on balance, current evidence suggests that both emotions and reasoning contribute to moral judgment and that moral judgment may operate at its best when reasoning and emotion interact. Indeed, the influences of “emotion” and “reasoning” are not always cleanly separable, as when we use empathy to understand the effects of an action on others or when we weigh up moral reasons whose moral import was drawn to our attention by affective cues.36

These findings fit nicely with a variety of positions in moral epistemology according to which moral reasons, whether represented by intuitions or deliberated about in reasoning, should be weighed together to produce all-things-considered judgments, and according to which emotions and reasoning can work together to achieve this (Kahane, 2014). These findings also pose a serious challenge to attempts to debunk moral judgments on the ground that they are influenced by emotion, or to regard them as epistemically sound merely because they are based on reasoning. Rather, attempts to debunk moral judgments will have to depend on more fine-grained descriptions of the psychological process in question.

We have said a fair amount to show that it is both philosophically simplistic and empirically problematic to sharply contrast emotion with reason. This contrast is motivated not only by an unjustifiably dismissive view of emotion but also by a narrow understanding of reason. Longer response times are the central form of evidence for effortful explicit deliberation in moral judgment, but the mere fact that someone takes longer to reach a moral conclusion hardly shows that this process is more reliable. That surely also depends on what exactly the deliberation involves. Taking longer to form a moral judgment can be a bad sign—think of someone who takes a while to decide whether pushing a man off a footbridge just for fun is permissible.

Researchers in the CSM rarely give a detailed account of what moral reasoning is supposed to involve, but it is usually assumed to involve putting together an explicit moral argument.37 This would not help establish a sharp epistemic contrast between deliberation, on the one hand, and emotion and intuition, on the other, given that arguments require premises and, one might argue, these would ultimately need to be based on intuitions. But in any event, intuitions and emotions aren’t just necessary inputs to deliberation: they are often directly involved in it.
Deliberation can just involve forming more reflective intuitions about a given case, and a great deal of moral deliberation consists of weighing opposing reasons and considerations, a process that in large part involves higher-order intuitions to the effect that one set of reasons outweighs another (Kahane, 2014); we’ve discussed evidence about the neural basis of this process. Moreover, feelings such as certainty, confidence, and doubt play a key role in shaping such deliberation. Future work in the CSM will need to operate with richer conceptions of both emotion and reason.
6. Moral Nativism and Moral Learning

We turn, finally, to the question of innateness and learning. Biological traits are the product of a complex interaction between innate structure and environment. What is at issue is thus the extent to which aspects of our moral psychology are relatively determined in advance of experience. According to moral nativists, a full account of human moral psychology will need to make significant reference to features of it that are organized in advance of experience (see e.g. Haidt & Joseph, 2011; Graham et al., 2012). According to non-nativists, by contrast, moral judgments are best explained as the product of learning processes interacting with the environment, and little reference to specialized innate structure is needed.38

Much work in the CSM has been dominated by strong nativist assumptions. UMG theorists claim that moral judgment is produced by an innate moral module, while both the social intuitionist and dual process models explain patterns of moral intuitions by reference to emotional responses selected by evolution.39 However, evidence for these claims is fairly limited. Nativists frequently appeal to evidence that certain patterns in moral judgments are found universally across human cultures (e.g. Dwyer et al., 2010; Haidt & Joseph, 2004). But universal patterns of moral judgment could also be explained by learning mechanisms if we assume that relevant kinds of environmental input are universal.

Stronger evidence for nativism comes from the application of Chomskian poverty-of-the-stimulus arguments to morality. For example, UMG theorists (and Haidt, 2001, 826–827) appeal to developmental evidence regarding the speed with which young children develop moral judgments that conform to sophisticated, abstract moral rules—such as a rule against intentionally harming others as a means. They argue that children could not develop this sort of moral psychology without innate constraints, especially given the minimal sorts of explicit feedback children get about morality, which usually concerns highly specific actions (e.g. “You should not have hit your brother”) rather than general principles (Mikhail, 2007; Dwyer et al., 2010). However, Nichols et al. (2016; see also Nichols, Chapter 6 of this volume) have recently shown that simple Bayesian assumptions make it possible to quickly infer rules prohibiting acts (but not omissions) and intended harm (but not foreseen harm) from very few occasions of highly specific, minimal feedback. This suggests that we may acquire sophisticated, abstract moral rules from the application of domain-general learning mechanisms to minimal cultural input about what sort of behavior is prohibited.
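To see how this kind of inference could work, consider the following minimal sketch in Python. It is not Nichols and colleagues’ actual model: the two candidate rules, the uniform prior, and the “size principle” likelihood are simplifying assumptions introduced here for illustration.

```python
# A toy Bayesian rule learner. The hypothesis space, prior, and likelihood
# are illustrative assumptions, not the model from Nichols et al. (2016).

def posterior(hypotheses, priors, observations):
    """Compute P(rule | observed condemned acts) by Bayes' rule.

    hypotheses:   dict mapping rule name -> set of act types the rule forbids.
    priors:       dict mapping rule name -> prior probability.
    observations: list of act types observed being condemned.
    The "size principle": each condemned act is treated as sampled uniformly
    from the acts a rule forbids, so P(act | rule) = 1/|rule| if the act
    falls under the rule, and 0 otherwise.
    """
    unnormalized = {}
    for name, forbidden in hypotheses.items():
        likelihood = 1.0
        for act in observations:
            likelihood *= (1.0 / len(forbidden)) if act in forbidden else 0.0
        unnormalized[name] = priors[name] * likelihood
    total = sum(unnormalized.values())
    return {name: (v / total if total else 0.0) for name, v in unnormalized.items()}

# A narrow rule against intended harmful acts only, and a broad rule that
# also covers omissions and merely foreseen harms.
hypotheses = {
    "acts_intended_only": {"intended_act"},
    "acts_and_omissions": {"intended_act", "omission", "foreseen_side_effect"},
}
priors = {"acts_intended_only": 0.5, "acts_and_omissions": 0.5}

# Minimal, highly specific feedback: every condemned example the child
# encounters happens to be an intended harmful act.
observations = ["intended_act"] * 3

print(posterior(hypotheses, priors, observations))
# {'acts_intended_only': 0.964..., 'acts_and_omissions': 0.035...}
```

Because each observed example is three times more likely under the narrow rule than under the broad one, three examples suffice to push the posterior on the narrow rule above 0.96 in this toy setting; nothing like explicit instruction in general principles is required.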
Other recent work in the CSM similarly emphasizes the role of experience and learning in morality (e.g. Allman & Woodward, 2008; Crockett, 2013; Cushman, 2013; Railton, 2017; see also Campbell & Kumar, 2012), pushing against the earlier nativist status quo. This trend draws on a large body of recent work in neuroscience and computational modeling that suggests that our brains make wide use of prediction-error signals to facilitate powerful forms of learning. Evidence suggests that such learning underlies the detection of a variety of morally relevant features, including intentions and other mental states, causation, risk, reward, and expected value. Moreover, the neural mechanisms that encode reward and value (in the neuroscientist’s sense) in reinforcement learning are responsive not only to personal material reward but also to abstract social values relevant to morality, such as character assessments. Although direct evidence for neural coding of specifically “moral value” and its prediction errors is still lacking, it seems likely that such learning also shapes moral judgment itself.
Neuropsychological evidence from clinical populations is consonant with the hypothesis that domain-general reinforcement learning mechanisms play an important role in moral judgment. Given their emotional deficits, psychopaths are generally impaired in their ability to learn from negative affective reactions to predict future harms or modify their own behavior—a deficit also reflected in abnormal patterns of moral judgment and behavior. Early damage to the vmPFC similarly impairs moral judgment.

The literature on reinforcement learning typically distinguishes between two sorts of algorithms—model-free and model-based—that are thought to characterize learning and decision making in the brain in distinct but overlapping circuits (Crockett, 2013; Cushman, 2013; Huebner, 2015; Railton, 2017; Nichols, Chapter 6 of this volume). Model-based algorithms compare the value of candidate actions based on a detailed model of all of the expected outcomes associated with them. This is computationally costly (it involves going over a lot of information), but it is also far-sighted and very flexible. By contrast, model-free algorithms assign value to actions in specific contexts (“states”) simply on the basis of reinforcement history—i.e. on whether the action-state pair has previously been associated with good or bad outcomes. This reinforcement learning can be achieved through experience, observation, or possibly through simulating the consequences of actions (Miller & Cushman, 2013). Model-free algorithms are relatively inflexible, with the value assigned to action representations changing only gradually as new reinforcement accumulates.

Exactly how the model-free/model-based distinction contributes to moral judgment requires further research. Although we have been writing in terms of action selection, model-free and model-based systems can be defined over all sorts of representations (consequences, situations, etc.), so caution is advised against assuming that they cleanly underwrite the distinction between “deontological” and “consequentialist” judgments assumed by Greene’s model (Ayars, 2016; Cushman, 2013), and model-free learning may not be able to explain the persistence of certain deontological intuitions (Railton, 2017). Nevertheless, the distinction between model-free and model-based learning may play a role in the distinction between automatic action-based and controlled outcome-based moral assessment. In particular, model-free systems may in part be responsible for the greater moral condemnation of “personal” harm, of acts over omissions, and of intended harms over unintended side-effects (Crockett, 2013; Cushman, 2013).
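The contrast can be made concrete with a small sketch. The footbridge-style action labels, outcome values, and learning rate below are illustrative assumptions, not parameters drawn from the literature, and actual model-based and model-free systems in the brain are of course vastly more complex.

```python
# A toy contrast between model-based and model-free evaluation of actions.
# All names and numbers are illustrative assumptions.

# World model, used only by the model-based system: action -> [(outcome, prob)].
MODEL = {
    "push":      [("one_dies", 1.0)],
    "dont_push": [("five_die", 1.0)],
}
OUTCOME_VALUE = {"one_dies": -1.0, "five_die": -5.0}

def model_based_value(action):
    """Far-sighted and flexible: roll the model forward and sum the
    probability-weighted values of the expected outcomes."""
    return sum(p * OUTCOME_VALUE[o] for o, p in MODEL[action])

# Model-free system: a cache of action values for a given state, updated
# only from reinforcement history via a prediction-error signal.
q = {"push": 0.0, "dont_push": 0.0}
ALPHA = 0.1  # learning rate

def model_free_update(action, reward):
    """Nudge the cached value toward the reward actually received;
    the gap between the two is the prediction error."""
    prediction_error = reward - q[action]
    q[action] += ALPHA * prediction_error

# A reinforcement history in which personally pushing people was
# reliably punished.
for _ in range(50):
    model_free_update("push", -1.0)

print(model_based_value("push"), model_based_value("dont_push"))  # -1.0 -5.0
print(round(q["push"], 2), q["dont_push"])                        # -0.99 0.0
```

In this toy setting the model-based system favors pushing, since that minimizes expected deaths, while the model-free cache has simply learned an aversion to the act of pushing itself, whatever its consequences here. This is one way of picturing how the two systems could come apart in automatic action-based versus controlled outcome-based assessment.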
Debates about the evolutionary (or other) sources of moral judgment are obviously of great interest, but their epistemic significance isn’t straightforward. UMG theorists occasionally write as if the aim of moral philosophy is to uncover the innate “moral code” posited by UMG theory. But this is an odd idea. If this innate moral code is the product of natural selection, why should we let it guide our actions? After all, evolution “aims” at reproductive fitness, not at moral truth. Why think that dispositions that were reproductively advantageous to our ancestors in the savannah track any kind of moral truth? These kinds of considerations lead Greene (2008) to a contrary conclusion: if certain moral judgments have their source in our evolutionary history, then they should be treated with suspicion. Instead, we should use our general capacity for reason to arrive at independent, consequentialist conclusions.

Evolutionary debunking arguments of this kind have received a great deal of attention in recent years (see e.g. Kahane, 2011; Vavova, 2015).40 One worry is that if they work at all, they will support general moral skepticism (Kahane, 2011; Ruse, 1988). We should also
distinguish such debunking arguments from the different argument that evolutionarily selected dispositions were truth-tracking in our ancestral environment but lead us astray in the very different modern context (Singer, 2005, seems to conflate these two forms of argument).

Those wishing to resist evolutionary debunking arguments often seek to deny the nativist assumption that such arguments require. It might be thought that the recent shift to moral learning offers hope for such a strategy: if our moral judgments actually have their source in moral learning, then it seems that evolutionary debunking arguments cannot get off the ground. However, things aren’t so simple. To begin with, evolutionary pressures may still affect the direction of moral learning, especially if the learning operates on core environmental features that have remained constant. If so, then our current moral intuitions would still have an evolutionary source in the sense the debunkers assume. Relatedly, the contribution of some innate structure has not been ruled out. In particular, moral learning must operate on a set of goals or “values”, and these are almost certain to have an evolutionary source.41

Even if evolutionary forces did not shape our moral judgments, this hardly shows that they are truth-tracking. That depends on how moral learning operates and what it operates on. Advocates of moral learning often emphasize that such learning involves “rational” processes, in the sense that it is sensitive to evidence and feedback over time (e.g. Railton, 2017). So perhaps we needn’t worry, as Greene (2016) does, about hardwired intuitions that fail to adapt to modern moral problems. But there is a great gap between tracking the moral truth and being “rational” in the sense of effectively identifying general patterns in one’s environment and relating them to pre-set goals. If our deontological intuitions are, for example, merely the side-effect of a Bayesian learning heuristic interpreting the target of others’ condemnation (as Nichols et al. suggest), then this may be as debunking as an evolutionary explanation. On the other hand, if deontological intuitions arise through learning what behaviors are associated with callous, anti-social, and otherwise immoral character traits (as Railton, 2017, suggests—a hypothesis supported by Everett et al., 2016), then they may have a basis in morally relevant considerations. The bottom line is that we need more empirical research on the nature of moral learning before debunking worries can be dropped.

Whether or not this epistemic worry can be addressed, moral learning accounts seem rather far from the idea that our moral judgments have their source in the exercise of a general rational capacity to reflect on a priori matters (Parfit, 2011). On such accounts, general capacities are indeed involved, but these are capacities to detect robust regularities in our environment. It is hard to see how such learning processes could detect the intrinsic wrongness of certain acts—they would at best support a broadly consequentialist reading of deontological intuitions (see e.g. Railton, 2017). However, experience can play a role in a priori reflection—we may need relevant experience (and hence, learning) to properly comprehend the content of fundamental moral principles, principles that can nevertheless be known without reliance on evidence from experience. Whether emerging accounts of moral learning are compatible with this picture remains to be seen.
7. Concluding Remarks

While scientific theorizing about morality has a long history, the CSM is a fairly new field. The approaches that have dominated it in the first decade of this century already seem out
of date, or at least in need of major revision, while the exploration of alternative directions (e.g., relating to moral learning) has only just begun. We have tried to give a reasonably up-to-date survey of the key theories and findings in the area, though, inevitably, there is also a lot of interesting work we had to leave out. What does seem clear, however, is that we are seeing a rapid advance in the scientific understanding of moral psychology. It is unlikely that this growing understanding will leave moral epistemology unchanged, and we have tried to trace some of the key connections.

There are no simple knock-down arguments from findings in psychology and neuroscience to exciting moral conclusions. An argument from such findings to any kind of interesting moral conclusion will need some philosophical premises, and these will often be controversial. But this doesn’t show that such findings are irrelevant to moral epistemology. Arguments deploying such premises will be controversial and open to question—which is just to say that they will be no different from most arguments in moral epistemology.
Notes

1. One of us argues elsewhere (Demaree-Cotton, 2016) that support for one popular kind of skeptical argument for the general unreliability of moral judgment on the basis of results from cognitive science—namely, skeptical arguments appealing to findings that moral judgments are influenced by morally irrelevant ways of presenting information—has been overstated.
2. These models are also discussed in Chapters 1, 2, 5, 6, 7, 8, 9 and 16 of this volume.
3. The interaction of emotion and reasoning is also discussed at length in Chapter 7 of this volume.
4. Moral learning is also discussed at length in Chapter 6 of this volume.
5. For further discussion see Chapters 13, 17, 18 and 19 of this volume, where a variety of concepts of epistemic justification are analyzed along with the relation of reliability to them.
6. On externalism about moral justification (Shafer-Landau, 2003), unreliability may directly entail lack of justification. On internalism, unreliable moral judgments may be justified if we aren’t aware of this unreliability.
7. These views can take very different forms, e.g. Arpaly, 2003; Hills, 2010. See too Chapter 25 of this volume, where Hills argues that moral worth depends on understanding the reasons why what one is doing is good, moral, or just.
8. See Dennett, 2006, on the personal/subpersonal distinction.
9. See Chapters 13 and 14 of this volume for skeptical philosophical perspectives on such demonstrations of unreliability/reliability.
10. E.g., Berker, 2009; Kamm, 2009.
11. E.g., Alston, 1995; Comesaña, 2006. See Beebe, 2004, for an explicit argument for the relevance of psychological processes over physical realizers.
12. Cf. Davis, 2009, 35.
13. Beebe (2004) and Davis (2009) appeal to multiple realizability to argue that belief-forming processes should be specified psychologically, not physically.
14. We qualify this with “in a given context” because it is possible for a given psychological process that is reliable in one context to be unreliable in another context; in such a case you might have the same neural process supporting a psychological process that is reliable in one context but unreliable in another.
15. Greene (2016, 132, and fn. 9) criticizes Berker (2009) for assuming Greene attempts to draw normative conclusions from neuroscience directly.
16. For further discussion of why neuroscientific findings are of limited normative significance see Kahane, 2016. The work of Kohlberg and his students and colleagues is discussed in Chapters 1, 2, 5 and 6 of this volume.
17. Dwyer, 1999; Harman, 2008; Hauser et al., 2008; Mikhail, 2007, 2011. See too Chapter 2 of this volume.
18. Haidt, 2001, 2012. See too Chapters 1, 2, 7, 8, 9 and 16 of this volume.
19. Greene, 2008, 2016; Greene et al., 2001.
20. Other approaches defending the centrality of immediate emotional responses are Nichols, 2002 and Prinz, 2006.
21. E.g., Greene et al., 2004, 389; Greene & Haidt, 2002, Box 1. For an argument for many such modules, see Chapter 9 of this volume.
22. Bzdok et al., 2015; Greene, 2015; Greene & Haidt, 2002; Schaich Borg et al., 2006; Young & Dungan, 2012.
23. See previous note. Also, Greene, 2015, 198; Pascual et al., 2013.
24. See especially Haidt & Joseph, 2011, 2118–2119.
25. A similar hypothesis is defended in Chapter 2 of this volume.
26. Street’s argument is discussed at length in Chapter 12 of this volume.
27. Parfit, 2011, 492–497. See also de Lazari-Radek & Singer, 2012.
28. See Chapters 10 and 11 of this volume for the relevant history.
29. See Chapter 17 of this volume on moral perception and Chapter 18 on intuitions; both chapters analyze the role played by emotion in moral judgment. See too Chapter 7 of this volume on emotion and reasoning more generally.
30. The TPJ is sometimes referred to as the pSTS.
31. Decety & Cacioppo, 2012; also, Gui et al., 2016; Yoder & Decety, 2014.
32. See previous note.
33. See Li, Mai & Liu, 2014.
34. See Elliott et al., 2011, for a review.
35. Similarly, see Rand et al., 2014.
36. See Railton, 2017, especially 5.2.
37. See Saunders, 2015, for a critique of accounts of moral reasoning in the CSM.
38. The definition of “innateness” is a vexed issue in cognitive science. See Griffiths, 2009.
39. Evidence for normative understandings among chimps and other primates (Chapter 3 of this volume) opens up the possibility that aspects of moral cognition are both evolved and learned (i.e. naturally selected and culturally transmitted). For a general overview of the evolution of human moral psychology, see Chapter 9 of this volume.
40. See Chapters 12 and 13 of this volume.
41. See Chapter 9 of this volume for evolutionary explanations of moral intuitions about family obligation, incest, and a suite of phenomena related to cooperation.
References

Allman, J. and Woodward, J. (2008). “What Are Intuitions and Why Should We Care About Them? A Neurobiological Perspective,” Philosophical Issues, 18, 164–185.
Alston, W. P. (1995). “How to Think About Reliability,” Philosophical Topics, 23, 1–29.
Arpaly, N. (2003). Unprincipled Virtue: An Inquiry Into Moral Agency. New York: Oxford University Press.
Ayars, A. (2016). “Can Model-Free Reinforcement Learning Explain Deontological Moral Judgments?” Cognition, 150, 232–242.
Barrett, L. F. and Satpute, A. B. (2013). “Large-Scale Brain Networks in Affective and Social Neuroscience: Towards an Integrative Functional Architecture of the Human Brain,” Current Opinion in Neurobiology, 23, 361–372.
Bartels, D. M. and Pizarro, D. A. (2011). “The Mismeasure of Morals: Antisocial Personality Traits Predict Utilitarian Responses to Moral Dilemmas,” Cognition, 121, 154–161.
Beebe, J. R. (2004). “The Generality Problem, Statistical Relevance and the Tri-Level Hypothesis,” Noûs, 38, 177–195.
Berker, S. (2009). “The Normative Insignificance of Neuroscience,” Philosophy & Public Affairs, 37, 293–329.
Bzdok, D., Groß, D. and Eickhoff, S. B. (2015). “The Neurobiology of Moral Cognition: Relation to Theory of Mind, Empathy, and Mind-Wandering,” in J. Clausen and N. Levy (eds.), Handbook of Neuroethics. Dordrecht: Springer, 127–148.
Campbell, R. and Kumar, V. (2012). “Moral Reasoning on the Ground,” Ethics, 122, 273–312.
Chakroff, A. and Young, L. (2015). “How the Mind Matters for Morality,” AJOB Neuroscience, 6, 41–46.
Chiong, W., Wilson, S. M., D’Esposito, M., Kayser, A. S., Grossman, S. N., Poorzand, P., Seeley, W. W., Miller, B. L. and Rankin, K. P. (2013). “The Salience Network Causally Influences Default Mode Network Activity During Moral Reasoning,” Brain, 136, 1929–1941.
Churchland, P. S. (2011). Braintrust: What Neuroscience Tells Us About Morality. Princeton: Princeton University Press.
Comesaña, J. (2006). “A Well-Founded Solution to the Generality Problem,” Philosophical Studies, 129, 27–47.
Crockett, M. (2013). “Models of Morality,” Trends in Cognitive Sciences, 17, 363–366.
Cushman, F. (2013). “Action, Outcome, and Value: A Dual-System Framework for Morality,” Personality and Social Psychology Review, 17, 273–292.
Cushman, F., Murray, D., Gordon-McKeon, S., Wharton, S. and Greene, J. D. (2012). “Judgment Before Principle: Engagement of the Frontoparietal Control Network in Condemning Harms of Omission,” Social Cognitive and Affective Neuroscience, 7, 888–895.
Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam.
Davis, J. K. (2009). “Subjectivity, Judgment, and the Basing Relationship,” Pacific Philosophical Quarterly, 90, 21–40.
de Lazari-Radek, K. and Singer, P. (2012). “The Objectivity of Ethics and the Unity of Practical Reason,” Ethics, 123, 9–31.
de Sousa, R. (2014). “Emotion,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2014 Edition). https://plato.stanford.edu/archives/spr2014/entries/emotion/
Decety, J. and Cacioppo, S. (2012). “The Speed of Morality: A High-Density Electrical Neuroimaging Study,” Journal of Neurophysiology, 108, 3068–3072.
Decety, J., Chen, C., Harenski, C. L. and Kiehl, K. A. (2015). “Socioemotional Processing of Morally-Laden Behavior and Their Consequences on Others in Forensic Psychopaths,” Human Brain Mapping, 36, 2015–2026.
Demaree-Cotton, J. (2016). “Do Framing Effects Make Moral Intuitions Unreliable?” Philosophical Psychology, 29, 1–22.
Dennett, D. C. (2006). “Personal and Sub-Personal Levels of Explanation,” in J. L. Bermúdez (ed.), Philosophy of Psychology: Contemporary Readings. London: Routledge, 17–21.
Dwyer, S. (1999). “Moral Competence,” in K. Murasugi and R. Stainton (eds.), Philosophy and Linguistics. Boulder, CO: Westview Press, 169–190.
Dwyer, S., Huebner, B. and Hauser, M. D. (2010). “The Linguistic Analogy: Motivations, Results, and Speculations,” Topics in Cognitive Science, 2, 486–510.
Elliott, R., Zahn, R., Deakin, W. J. and Anderson, I. M. (2011). “Affective Cognition and Its Disruption in Mood Disorders,” Neuropsychopharmacology, 36, 153–182.
Everett, J. A. C., Pizarro, D. A. and Crockett, M. J. (2016). “Inference of Trustworthiness from Intuitive Moral Judgments,” Journal of Experimental Psychology: General, 145, 772–787.
FeldmanHall, O., Dalgleish, T., Thompson, R., Evans, D., Schweizer, S. and Mobbs, D. (2012). “Differential Neural Circuitry and Self-Interest in Real vs Hypothetical Moral Decisions,” Social Cognitive and Affective Neuroscience, 7, 743–751.
Gleichgerrcht, E., Torralva, T., Roca, M., Pose, M. and Manes, F. (2011). “The Role of Social Cognition in Moral Judgment in Frontotemporal Dementia,” Social Neuroscience, 2, 113–122.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. and Ditto, P. H. (2012). “Moral Foundations Theory: The Pragmatic Validity of Moral Pluralism,” Advances in Experimental Social Psychology, 47, 55–130.
Greene, J. D. (2007). “Why Are VMPFC Patients More Utilitarian? A Dual-Process Theory of Moral Judgment Explains,” Trends in Cognitive Sciences, 11, 322–323.
———. (2008). “The Secret Joke of Kant’s Soul,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. Cambridge, MA: MIT Press, 35–79.
———. (2015). “The Cognitive Neuroscience of Moral Judgment and Decision Making,” in J. Decety and T. Wheatley (eds.), The Moral Brain. Cambridge, MA: MIT Press, 197–220.
———. (2016). “Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics,” in S. M. Liao (ed.), Moral Brains: The Neuroscience of Morality. New York: Oxford University Press, 119–149.
Greene, J. D. and Haidt, J. (2002). “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences, 6, 517–523.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M. and Cohen, J. D. (2004). “The Neural Bases of Cognitive Conflict and Control in Moral Judgment,” Neuron, 44, 389–400.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. and Cohen, J. D. (2001). “An fMRI Investigation of Emotional Engagement in Moral Judgment,” Science, 293, 2105–2108.
Griffiths, P. (2009). “The Distinction Between Innate and Acquired Characteristics,” in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2009 Edition). https://plato.stanford.edu/archives/fall2009/entries/innate-acquired/
Gui, D. Y., Gan, T. and Liu, C. (2016). “Neural Evidence for Moral Intuition and the Temporal Dynamics of Interactions Between Emotional Processes and Moral Cognition,” Social Neuroscience, 11, 380–394.
Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 108, 814–834.
———. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon Books.
Haidt, J. and Joseph, C. (2004). “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues,” Daedalus, 133, 55–66.
———. (2011). “How Moral Foundations Theory Succeeded in Building on Sand: A Response to Suhler and Churchland,” Journal of Cognitive Neuroscience, 23, 2117–2122.
Harenski, C. L., Antonenko, O., Shane, M. S. and Kiehl, K. A. (2010). “A Functional Imaging Investigation of Moral Deliberation and Moral Intuition,” NeuroImage, 49, 2707–2716.
Harenski, C. L., Harenski, K. A., Shane, M. S. and Kiehl, K. A. (2010). “Aberrant Neural Processing of Moral Violations in Criminal Psychopaths,” Journal of Abnormal Psychology, 119, 863–874.
Harman, G. (2008). “Using a Linguistic Analogy to Study Morality,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 1: Evolution of Morality—Adaptation and Innateness. Cambridge, MA: MIT Press, 345–352.
Hauser, M. D. (2006). “The Liver and the Moral Organ,” SCAN, 1, 214–220.
Hauser, M. D. and Young, L. (2008). “Modules, Minds and Morality,” in D. W. Pfaff, C. Kordon, P. Chanson and Y. Christen (eds.), Hormones and Social Behavior. Berlin, Heidelberg: Springer, 1–11.
Hauser, M. D., Young, L. and Cushman, F. (2008). “Reviving Rawls’s Linguistic Analogy: Operative Principles and the Causal Structure of Moral Actions,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 107–143.
Hills, A. (2010). The Beloved Self: Morality and the Challenge from Egoism. Oxford: Oxford University Press.
Hoppenbrouwers, S. S., Bulten, B. H. and Brazil, I. A. (2016). “Parsing Fear: A Reassessment of the Evidence for Fear Deficits in Psychopathy,” Psychological Bulletin, 142, 573–600.
Huebner, B. (2015). “Do Emotions Play a Constitutive Role in Moral Cognition?” Topoi, 34, 427–440.
Hutcherson, C. A., Montaser-Kouhsari, L., Woodward, J. and Rangel, A. (2015). “Emotional and Utilitarian Appraisals of Moral Dilemmas Are Encoded in Separate Areas and Integrated in Ventromedial Prefrontal Cortex,” The Journal of Neuroscience, 35, 12593–12605.
Jaggar, A. M. (1989). “Love and Knowledge: Emotion in Feminist Epistemology,” Inquiry, 32, 151–176.
Kahane, G. (2011). “Evolutionary Debunking Arguments,” Noûs, 45, 103–125.
———. (2014). “Intuitive and Counterintuitive Morality,” in J. D’Arms and D. Jacobson (eds.), Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics. Oxford: Oxford University Press, 9–39.
———. (2016). “Is, Ought, and the Brain,” in S. M. Liao (ed.), Moral Brains: The Neuroscience of Morality. Oxford: Oxford University Press, 281–311.
Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M. and Savulescu, J. (2015). “‘Utilitarian’ Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good,” Cognition, 134, 193–209.
Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J. and Tracey, I. (2012). “The Neural Basis of Intuitive and Counterintuitive Moral Judgment,” SCAN, 7, 393–402.
Kamm, F. (2009). “Neuroscience and Moral Reasoning: A Note on Recent Research,” Philosophy & Public Affairs, 37, 330–345.
Kiehl, K. A. (2008). “A Reply to de Oliveira-Souza, Ignácio, and Moll and Schaich Borg,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, 165–171.
LeDoux, J. E. (2012). “A Neuroscientist’s Perspective on Debates About the Nature of Emotion,” Emotion Review, 4, 375–379.
Li, W., Mai, X. and Liu, C. (2014). “The Default Mode Network and Social Understanding of Others: What Do Brain Connectivity Studies Tell Us,” Frontiers in Human Neuroscience, 8. doi:10.3389/fnhum.2014.00074.
Lindquist, K. A. and Barrett, L. F. (2012). “A Functional Architecture of the Human Brain: Emerging Insights from the Science of Emotion,” Trends in Cognitive Sciences, 16, 533–540.
Little, M. O. (1995). “Seeing and Caring: The Role of Affect in Feminist Moral Epistemology,” Hypatia, 10, 117–137.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Harmondsworth: Penguin Classics.
Marsh, A. A., Stoycos, S. A., Brethel-Haurwitz, K. M., Robinson, P., VanMeter, J. W. and Cardinale, E. M. (2014). “Neural and Cognitive Characteristics of Extraordinary Altruists,” PNAS, 111, 15036–15041.
Mikhail, J. (2007). “Universal Moral Grammar: Theory, Evidence and the Future,” Trends in Cognitive Sciences, 11, 143–152.
———. (2011). Elements of Moral Cognition: Rawls’ Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. New York: Cambridge University Press.
Miller, R. and Cushman, F. (2013). “Aversive for Me, Wrong for You: First-Person Behavioral Aversions Underlie the Moral Condemnation of Harm,” Social and Personality Psychology Compass, 7, 707–718.
Moll, J., De Oliveira-Souza, R. and Zahn, R. (2008). “The Neural Basis of Moral Cognition: Sentiments, Concepts, and Values,” Annals of the New York Academy of Sciences, 1124, 161–180.
Nado, J. (2014). “Why Intuition?” Philosophy and Phenomenological Research, 89, 15–41.
Nichols, S. (2002). “Norms with Feeling: Towards a Psychological Account of Moral Judgment,” Cognition, 84, 221–236.
Nichols, S., Kumar, S., Lopez, T., Ayars, A. and Chan, H.-Y. (2016). “Rational Learners and Moral Rules,” Mind & Language, 31, 530–554.
Okon-Singer, H., Hendler, T., Pessoa, L. and Shackman, A. J. (2015). “The Neurobiology of Emotion-Cognition Interactions: Fundamental Questions and Strategies for Future Research,” Frontiers in Human Neuroscience, 9, 1–14. doi:10.3389/fnhum.2015.00058.
Parfit, D. (2011). On What Matters: Volume Two. New York: Oxford University Press.
Pascual, L., Gallardo-Pujol, D. and Rodrigues, P. (2013). “How Does Morality Work in the Brain? A Functional and Structural Perspective of Moral Behavior,” Frontiers in Integrative Neuroscience, 7, 1–8.
Pessoa, L. (2008). “On the Relationship Between Emotion and Cognition,” Nature Reviews Neuroscience, 9, 148–158.
———. (2013). The Cognitive-Emotional Brain: From Interactions to Integration. Cambridge, MA: MIT Press.
Pessoa, L. and Adolphs, R. (2010). “Emotion Processing and the Amygdala: From a ‘Low Road’ to ‘Many Roads’ of Evaluating Biological Significance,” Nature Reviews Neuroscience, 11, 773–782.
Prinz, J. (2006). “The Emotional Basis of Moral Judgments,” Philosophical Explorations, 9, 29–43.
Railton, P. (2017). “Moral Learning: Why Learning? Why Moral? And Why Now?” Cognition, 167, 172–190.
Rand, D. G., Peysakhovich, A., Kraft-Todd, G. T., Newman, G. E., Wurzbacher, O., Nowak, M. A. and Greene, J. D. (2014). “Social Heuristics Shape Intuitive Cooperation,” Nature Communications, 5, Article 3677.
Ruse, M. (1988). “Evolutionary Ethics: Healthy Prospect or Last Infirmity,” Canadian Journal of Philosophy, 14 (Supp), 27–73.
Saunders, L. F. (2015). “What Is Moral Reasoning?” Philosophical Psychology, 28, 1–20.
Schaich Borg, J., Hynes, C., Van Horn, J., Grafton, S. and Sinnott-Armstrong, W. (2006). “Consequences, Action, and Intention as Factors in Moral Judgments: An fMRI Investigation,” Journal of Cognitive Neuroscience, 18, 803–817.
Schaich Borg, J., Sinnott-Armstrong, W., Calhoun, V. D. and Kiehl, K. A. (2011). “Neural Basis of Moral Verdict and Moral Deliberation,” Social Neuroscience, 6, 398–413.
Shafer-Landau, R. (2003). Moral Realism: A Defence. Oxford: Oxford University Press.
Shamay-Tsoory, S. G. and Aharon-Peretz, J. (2007). “Dissociable Prefrontal Networks for Cognitive and Affective Theory of Mind: A Lesion Study,” Neuropsychologia, 45, 3054–3067.
Shenhav, A. and Greene, J. D. (2014). “Integrative Moral Judgment: Dissociating the Roles of the Amygdala and Ventromedial Prefrontal Cortex,” The Journal of Neuroscience, 34, 4741–4749.
Singer, P. (2005). “Ethics and Intuitions,” The Journal of Ethics, 9, 331–352.
Street, S. (2006). “A Darwinian Dilemma for Realist Theories of Value,” Philosophical Studies, 127, 109–166.
Suhler, C. L. and Churchland, P. (2011). “Can Innate, Modular ‘Foundations’ Explain Morality? Challenges for Haidt’s Moral Foundations Theory,” Journal of Cognitive Neuroscience, 23, 2103–2116.
Vavova, K. (2015). “Evolutionary Debunking of Moral Realism,” Philosophy Compass, 10, 104–116.
Yoder, K. J. and Decety, J. (2014). “Spatiotemporal Neural Dynamics of Moral Judgment: A High-Density ERP Study,” Neuropsychologia, 60, 39–45.
Young, L., Bechara, A., Tranel, D., Damasio, H., Hauser, M. and Damasio, A. (2010). “Damage to Ventromedial Prefrontal Cortex Impairs Judgment of Harmful Intent,” Neuron, 65, 845–851.
Young, L., Camprodon, J. A., Hauser, M., Pascual-Leone, A. and Saxe, R. (2010). “Disruption of the Right Temporoparietal Junction with Transcranial Magnetic Stimulation Reduces the Role of Beliefs in Moral Judgments,” Proceedings of the National Academy of Sciences, 107, 6753–6758.
Young, L. and Dungan, J. (2012). “Where in the Brain Is Morality? Everywhere and Maybe Nowhere,” Social Neuroscience, 7, 1–10.
Further Readings

For overviews of the highly influential “first wave” approaches to the cognitive science of morality, see Haidt, 2001 (for Haidt’s social intuitionist model) and Graham et al., 2012 (for moral foundations theory, a development of the SIM approach); Greene (2008, 2016) (for Greene’s dual-process theory and his argument that evidence for the theory has implications for normative ethics); and Dwyer, 1999, and Mikhail, 2007 (for introductions to the universal moral grammar approach). For arguments criticizing Greene’s claims regarding the relevance of neuroscience to ethics, see Berker (2009) and Kamm (2009), as well as Kahane (2014, 2016). See Suhler and Churchland (2011) for the claim that neuroscientific and neurobiological evidence counts against psychological claims made by Haidt’s work, including those regarding domain-specificity and innateness, and see Haidt and Joseph’s response (2011) for the argument that neurobiological evidence cannot refute their psychological theory. See Huebner (2015) for an argument that the functions performed by the brain in moral judgment cannot be classed as either “emotional” or “cognitive”. See Railton (2017) for an in-depth overview of current neuroscientific and other evidence pertaining to
new moral learning approaches and an argument that they may vindicate the rationality of moral judgment.
Related Chapters

Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What Is Universal? What Varies?; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 20 Moral Theory and Its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How.
5

MORAL DEVELOPMENT IN HUMANS

Julia W. Van de Vondervoort and J. Kiley Hamlin
1. The Role of Empirical Evidence in Philosophical Debates

Philosophers have long debated the origins of human morality. Important discussions have addressed topics such as which actions ought to be considered right versus wrong, how humans come to behave according to moral principles (or not), and the nature of moral judgments. Some of these discussions do not lend themselves to the construction of testable and falsifiable hypotheses—the basis of scientific investigations. Indeed, presumably no empirical evidence can determine which moral positions are objectively correct (cf. Harris, 2010). That said, many philosophical debates include claims about what is the case regarding human morality, such as when and how morality emerges, how best to foster its emergence, and how best to characterize moral cognition. In these cases, scientific investigation can adjudicate between various philosophical positions.
2. Evidence from Moral Development: The Study of Moral Behaviors

The scientific study of moral development seeks to understand how and when moral judgments and behaviors emerge during the lifespan. Researchers document children’s moral thoughts and acts at various ages, and the results of these studies inform our understanding of the basic foundations upon which morality builds and how competency develops over time.

To illustrate, extensive research has explored how, when, and why children come to perform moral behavior and why some children are more likely to engage in prosocial behaviors than others. Key theorists have elaborated upon the role of rewards and punishments (Skinner, 1950; Watson, 1930), observation and imitation of adult models (Bandura & McDonald, 1963), and the active transmission of standards of behavior from caregivers (Grusec & Goodnow, 1994) in children’s performance of prosocial and antisocial acts. Studies have also explored the emergence and unique developmental trajectories of various prosocial behaviors, including helping, sharing, and comforting (Dunfield et al., 2011). Still others have examined the extent to which prosocial behaviors are motivated by naturally occurring other-directed emotions, such as empathy and/or sympathy (Eisenberg et al.,
2014; Hepach et al., 2012; Hoffman, 2000), and whether these motivations are partially the result of evolved altruistic tendencies (Warneken & Tomasello, 2015). Along these lines, recent studies reveal that infants assist others in the absence of explicit requests, parental presence, or encouragement (Warneken, 2013; Warneken & Tomasello, 2013). Finally, the literature regarding individual differences in children’s performance of moral behaviors reveals that parents’ own antisocial behavior and coercive parenting styles positively predict children’s antisocial behaviors (Tremblay et al., 2004), as does children’s disregard for others’ distress (Rhee et al., 2013). In contrast, the degree of children’s empathetic responding positively predicts their performance of prosocial behaviors (Eisenberg & Miller, 1987).

The empirical study of moral behaviors has also revealed a relationship between moral judgments and tendencies to perform prosocial and antisocial acts. For example, young adults identified as showing “extraordinary moral commitment” through participation in social organizations show advanced moral reasoning (Matsuba & Walker, 2004). In contrast, juvenile offenders demonstrate poorer performance on moral reasoning measures than do non-offenders (Gavaghan et al., 1983), and psychopathy is associated with an inability to distinguish between moral and social or conventional concerns (Blair, 1995).

Though evidence regarding the emergence of moral action and its relationship to moral reasoning is certainly relevant to many philosophical questions, the aim of this chapter is to provide an empirical account of the emergence of moral judgment independent from moral acts. We first outline a rationalist account of morality, as described by Immanuel Kant (1785/1993), and explain how this conceptualization of morality has informed research regarding the emergence of children’s explicit moral judgments. We then consider an emotional and/or intuitive account of morality inspired by David Hume (1739/1969) and describe how this conceptualization of morality relates to a growing body of evidence that preverbal infants evaluate others’ prosocial and antisocial behavior. Finally, we consider the extent to which infants’ evaluative abilities reflect moral versus merely social concerns and outline areas for future work.
3. Developmental Evidence for Kantian Moral Judgments

To date, the majority of researchers interested in the development of moral judgments have subscribed to a Kantian definition of morality. Immanuel Kant argued that moral actions are—and must be—acts that are guided by universal and immutable moral principles (1785/1993). Mature moral judgments, then, assess to what extent others’ actions are motivated by such principles. Decades of work in the Kantian tradition have explored when children’s moral judgments explicitly demonstrate an appreciation of the universality and immutability of moral norms.

The earliest psychological research in moral development characterized the emergence of morality as a slow process—young children were thought to be entirely insensitive to moral concerns, either because they wholly lack a moral sense or because their moral sense is rendered useless by egocentrism or cognitive limitations. In this view, a true moral sense results from extensive maturation and experience, and it is not until later in development that children’s judgments reflect truly moral principles (e.g., Freud, 1930/1961; Kohlberg, 1981, 1984; Piaget, 1932). For instance, the seminal developmentalist Jean Piaget studied children’s explicit moral judgments through both direct interview and indirect observation
and argued that children actively construct their understanding of morality throughout childhood by consciously reflecting upon their experiences. According to Piaget, very young children lack any kind of moral sense. Between ages 4 and 7, children learn to follow moral rules, at first viewing these rules as unchallengeable constraints dictated by authority figures. At this stage, children interpret rules literally and fail to consider various additional factors; for example, whether a moral rule was broken intentionally or accidentally. To illustrate, Piaget famously observed that children under 6 years of age considered a child who accidentally broke 15 cups to be naughtier than a child who intentionally broke one cup, revealing an outcome focus when assigning blame; by age 6, children begin focusing on intent. By age 10, children’s experience of conflict and negotiations with peers leads to a more mature understanding of the complex and authority-independent nature of morality. This increased maturity in moral reasoning is accompanied by an appreciation of moral principles such as justice. Piaget believed that initially children’s conception of justice is based in egalitarian principles (i.e., all individuals are treated the same), but that children develop an appreciation of equity, in which individuals’ specific needs are considered, around 12 years of age (Piaget, 1932).

Lawrence Kohlberg extended Piaget’s findings by exploring shifts in moral reasoning across childhood and into adulthood. Kohlberg sought to determine when children’s moral judgments excluded egocentric, conventional, and legal concerns, instead focusing on universal moral principles. He investigated the development of moral judgments by presenting children with moral dilemmas (e.g., a man must decide whether to steal medicine to save his dying wife) and asking what the protagonist should do and why. Kohlberg was most interested in the reasoning behind children’s decisions and observed that the content of these responses progressed through reliable stages across development: before age 9, children focus on self-interest and external consequences (e.g., the man should not steal the drug because he might go to jail). In contrast, adolescents’ responses typically focus on societal values (e.g., the man should not steal the drug because it is against the law), and young adults increasingly describe moral principles that transcend social norms (e.g., he should not steal the drug because others might need that medicine). Like Piaget, Kohlberg believed that these changes resulted from interactions with others that inspire active reflection on the appropriate justifications for moral decisions. Because Kohlberg did not observe reasoning consistent with Kant’s universal moral principles until later in development, he concluded that mature moral judgments do not emerge until at least early adulthood (1981, 1984; Colby et al., 1983).

Kohlberg’s contemporaries explored how to more accurately and objectively quantify one’s stage of moral reasoning; for example, by rating and ranking responses to moral dilemmas (Rest, 1979) or refining how spontaneous justifications are evaluated (Gibbs & Widaman, 1982). Researchers also investigated the extent to which responses to Kohlberg’s dilemmas varied between males and females (see Gilligan, 1977 for the argument that men’s moral reasoning reflects concerns about justice while women’s reasoning reflects care for others) and across cultures.
This work revealed largely similar reasoning across genders (see Walker, 2006 for review) and, in the initial stages of moral reasoning, across cultures, as well as cross-cultural differences in the later stages of moral reasoning (e.g., the extent to which adults produce judgments that focus solely on moral principles rather than conventional or legal concerns; Gibbs et al., 2007; Nisan & Kohlberg, 1982; Snarey, 1985; see also Haidt & Joseph, 2008).
The work of Piaget and Kohlberg was and is instrumental in the study of children's moral judgments. That said, their conclusions regarding the precise developmental trajectory of children's reasoning have not always been supported by later work. To illustrate, while researchers continue to observe that the tendency to incorporate intention into one's moral judgments increases with age (e.g., Baird & Astington, 2004; Cushman et al., 2013), studies have shown that children under age 6 do not always focus on outcomes when assigning blame. For instance, 3- to 6-year-olds treat causing harm to others as more wrong than causing harm to oneself, even when both events produce the same outcome (Tisak, 1993). Further, when Piaget's intention versus outcome tasks are simplified and processing demands are reduced (e.g., scenarios are accompanied by pictures rather than being strictly verbal), children's explicit moral judgments can be sensitive to intentions by age 3 (e.g., Nelson, 1980; Nobes et al., 2009; Yuill & Perner, 1988). Overall, children's explicit moral judgments appear sensitive to the role of motivations rather than focused solely on outcomes. This attention to underlying motivations (rather than to actions themselves) when making moral judgments is consistent with a Kantian definition of morality.

More recent studies have also explored the extent to which Kohlberg underestimated the moral reasoning abilities of young children. These studies revealed that children consider moral concerns to be distinct from personal and social-conventional concerns far earlier than Kohlberg proposed. In fact, studies eliciting verbal responses and explanations regarding morally relevant scenarios have shown that even young children appreciate the normative aspect of moral principles. At 34 months, children judge moral transgressions (e.g., hitting another child) as more likely to be wrong across contexts than social-conventional transgressions (e.g., not saying "please"). At 42 months, children also report that moral transgressions, compared to social-conventional transgressions, are more serious, not contingent upon explicit rules, and wrong regardless of whether the transgression is witnessed by an authority figure (Smetana & Braeges, 1990; see Smetana, 2006; Smetana et al., 2014 for reviews). Such distinctions between moral and social-conventional transgressions have been observed cross-culturally; for example, Chinese preschoolers in Hong Kong distinguish between moral and social-conventional transgressions by age 4 (Yau & Smetana, 2003), as do Colombian children by age 6 (Ardila-Rey & Killen, 2001). Taken together, this extensive body of research shows that even young children demonstrate an appreciation for the universality of moral principles when age-appropriate methodologies are employed, satisfying a requirement of a Kantian definition of morality.

What developmental processes account for this early understanding of universal moral principles? Echoing Piaget and Kohlberg's proposals that moral development is driven by children's evaluations of key experiences, it has been suggested that children construct distinctions between moral and nonmoral concerns through interactions with the social environment (Turiel, 1983).
For example, observations of 2- and 3-year-olds' interactions with their mothers and peers revealed that mothers display different responses to moral transgressions (e.g., a focus on the victim's rights or welfare) and conventional transgressions (e.g., a focus on social order and regulation; Smetana, 1989; see also Nucci & Turiel, 1978 and Smetana, 1984). Reflecting upon these distinct responses may allow young children to distinguish between what is moral and what is conventional.
4. Expanding the Criteria for Moral Judgments

Although the study of children's explicit moral reasoning is critical to understanding the development of moral judgments, the conceptualization of moral judgments as primarily explicit places some restrictions upon moral theorizing. Specifically, a focus on explicit reasoning restricts the search for evidence of an evaluative moral sense to ages at which children can provide explicit, verbal responses. Expanding the definition of morality allows for the possibility that the moral sense emerges before humans are capable of verbal reasoning.

An expanded definition of morality may include the proposal that the moral sense is rooted in evaluative intuitions regarding morally relevant actions (i.e., the sense that certain people and actions are better than others and/or that some are good whereas others are bad). One possibility is that these intuitions are rooted in automatic emotional reactions that arise without conscious reasoning (e.g., Haidt & Joseph, 2008; but see Mikhail, 2011 for an account of non-emotional moral intuitions).1 An emotion-based account is initially appealing given the historical centrality of intuitions and emotions to several influential moral philosophers. For instance, David Hume argued that distinctions between what is right and wrong are based on feelings of approval and disapproval, which attain moral content when experienced after adopting a "general point of view" focused on the interests of the individual and his "general circle" (1739/1969). Relatedly, Adam Smith described the root of moral approval as empathizing with others' feelings—by spontaneously imagining the feelings of those affected by an action and how those feelings would lead a relatively "neutral" observer or "impartial spectator" to judge these acts (1759/1976).2

Empirical evidence from both adults and children supports the relationship between emotions and moral judgments. Adults' consideration of certain moral dilemmas activates brain areas associated with emotion rather than abstract reasoning (as would be expected if moral judgments always followed deliberate reasoning; Greene et al., 2001). The link between emotion and moral judgment may be especially strong early in life, with evidence showing that the observation of moral scenarios activates emotion-related brain areas in childhood whereas areas associated with abstract reasoning are increasingly active into adulthood (Decety et al., 2012). Further, individuals with early-onset lesions to emotion-related brain areas are more likely than typically developing individuals to endorse egocentric rather than moral responses to hypothetical scenarios (Taber-Thomas et al., 2014), suggesting that emotional processing is necessary for the development of moral judgments.

Why might moral intuitions emerge early in development? One possibility is that moral intuitions result from the socialization children receive regarding which behaviors are appropriate and/or from children's active construction of moral knowledge based on relevant experiences. Through the internalization of socialized norms (e.g., Kochanska & Aksan, 2006) or the active construction of moral concerns as distinct from social concerns (e.g., Dahl et al., 2018), children may come to feel that certain actions are good or bad even before they can explicitly justify these intuitions.
This possibility suggests that moral intuitions emerge along with early social (and moral) experiences: For instance, from the beginning of the second year of life, caregivers encourage and positively reinforce infants’ prosocial behaviors (Dahl, 2015) and are especially responsive to younger toddlers’ moral transgressions (Smetana, 1984).3
A second, not mutually exclusive, possibility is that moral intuitions may have evolved to support humans' unique tendency to engage in large-scale cooperation. While cooperation with others can be mutually beneficial, cooperative groups are open to exploitation by those who benefit from others' contributions without reciprocating. Humans' successful cooperation implies that systems have evolved to negate this exploitation. Such systems may include tendencies to evaluate others' cooperative and uncooperative behaviors, to positively evaluate and selectively cooperate with those likely to reciprocate, to negatively evaluate and avoid cooperation with defectors, and to actively discourage (e.g., punish) exploitative behaviors that do occur (Axelrod & Hamilton, 1981; Boyd & Richerson, 1992; Cosmides & Tooby, 1992; Henrich & Henrich, 2007; Trivers, 1971).4 These evolved evaluative tendencies may take the form of moral intuitions, which subsequently guide explicit moral judgments regarding which people and acts are good and bad. Notably, evolved intuitions may be evident extremely early in development, even before children experience explicit socialization or have the opportunity to reflect on their own social and morally relevant experiences. The rest of this chapter will explore evidence that certain features of moral intuitions are present in preverbal infants.
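The game-theoretic logic behind these proposals can be made concrete with a minimal simulation. The following Python sketch is purely illustrative (the payoff values and round count are assumptions for the example, not parameters from the literature cited above): it shows how a reciprocating strategy of the kind analyzed by Axelrod and Hamilton (1981) limits the long-run gains available to unconditional defectors.

```python
# Standard prisoner's dilemma payoffs for (my_move, partner_move).
# T > R > P > S, so defection dominates in a one-shot game; the values
# below are conventional illustrative choices.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual defection
}

def tit_for_tat(history):
    """Reciprocator: cooperate first, then mirror the partner's last move."""
    return "C" if not history else history[-1]

def always_defect(history):
    """Exploiter: never reciprocate."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return total payoffs for two strategies over a repeated game."""
    history_a, history_b = [], []  # each records the *partner's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

# Two reciprocators earn the full benefits of mutual cooperation (30 each);
# a defector can exploit a reciprocator only once (14 vs. 9), and pairs of
# defectors do worst of all (10 each).
print(play(tit_for_tat, tit_for_tat))      # (30, 30)
print(play(always_defect, tit_for_tat))    # (14, 9)
print(play(always_defect, always_defect))  # (10, 10)
```

In this toy setting, refusing to keep cooperating with a defector is what removes the payoff advantage of exploitation; partner choice and punishment, as proposed above, would amplify the same effect.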
5. Evidence for Evolved Sociomoral Intuitions

To explore humans' early moral capacities, researchers have developed methodologies that capitalize on infants' and young children's spontaneous reactions to morally relevant situations. Utilizing such methodologies, researchers have found that 2-year-olds spontaneously protest when they are the victims of moral transgressions but not following similar actions that do not involve violations (e.g., when a puppet attempts to throw away an object belonging to the child (a violation) versus its own object (not a violation); Rossano et al., 2011). Further, observations of 2-year-olds' social interactions reveal that children are more responsive to moral transgressions than conventional transgressions (e.g., more statements indicating pain or loss of property, emotional reactions such as crying, and physical retributions such as grabbing toys; Smetana, 1984). Together, these studies demonstrate that 2-year-olds distinguish between moral and nonmoral concerns when they are personally involved, even before they can provide explicit verbal explanations regarding moral reasoning.

Such methodologies have also been used to explore whether children appreciate the universal nature of moral concerns. These studies reveal that 3-year-olds spontaneously protest both when they and when others are the victim of moral transgressions (e.g., when a puppet attempts to throw away an object belonging to the child or belonging to a third party; Rossano et al., 2011; Vaish et al., 2011). Three-year-olds also protest moral transgressions committed by both ingroup and outgroup members but only protest conventional transgressions performed by members of the ingroup (Schmidt et al., 2012). Taken together, these studies show that young children's spontaneous responses are sensitive to the generalizability of moral principles, even when moral violations involve third parties.

Relatedly, even before children can generate active responses to moral events, researchers can explore infants' processing of social and moral events. Such studies typically involve measuring infants' differential attention to morally relevant scenarios presented live or via video. One classic study revealed that infants treat similarly valenced actions as more alike than actions that share physical characteristics (Premack & Premack, 1997).
In this study, 12-month-olds watched a gray circle repeatedly direct one of four actions toward a black circle: Helping (bumping the circle toward its goal), hindering (bumping the circle away from its goal), caressing, or hitting. When subsequently presented with a novel hitting action, infants who initially watched helping or caressing looked longer than infants who initially watched hindering or hitting (Premack & Premack, 1997). Since infants typically look longer at dissimilar versus similar presentations, this pattern of results suggests that infants categorized actions per their social or moral valence (i.e., helping and caressing versus hindering and hitting) rather than per the perceptual similarities between the actions (i.e., helping and hindering versus caressing and hitting).

Beyond an understanding of actions' sociomoral valence, infants form expectations regarding the social behavior of those who have been helped and hindered (Kuhlmeier et al., 2003; see also Fawcett & Liszkowski, 2012; Hamlin et al., 2007; Lee et al., 2015). In the first demonstration of this effect (Kuhlmeier et al., 2003), 12-month-olds watched a circle unsuccessfully attempting to climb a hill. The circle was then alternately pushed up the hill by a "helper" and pushed down the hill by a "hinderer." After observing several helping and hindering events, infants saw the circle approach the helper and hinderer in turn. Infants distinguished between scenarios in which the circle approached the helper and scenarios in which it approached the hinderer, suggesting that infants recognized that the circle's new social interactions would be influenced by the valence of its previous ones. This finding has been supported by subsequent studies using different stimuli and dependent variables (e.g., Fawcett & Liszkowski, 2012; Hamlin et al., 2007; Lee et al., 2015).

In addition to expectations regarding others' social interactions following helping and hindering, infants also have expectations regarding others' fairness. Fifteen-month-olds look longer when resources are distributed unequally, rather than equally, between two individuals, suggesting that infants are surprised when others do not divide resources equally between recipients (Schmidt & Sommerville, 2011; see also Meristo et al., 2016; Sommerville et al., 2013). Infants' fairness expectations are sensitive to context: 21-month-olds only expect fair distributions when recipients are equally meritorious, not when one individual completed an assigned task and the other did not (Sloane et al., 2012). Finally, infants' sensitivity to fair distributions extends to their social expectations, with 10-month-olds expecting third parties to reward fair rather than unfair distributors (Meristo & Surian, 2013; see also Meristo & Surian, 2014) and 16-month-olds expecting others to approach fair rather than unfair distributors (Geraci & Surian, 2011). Taken together, these studies show that infants readily interpret and form expectations based on sociomoral actions among third parties.
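The inferential logic of these looking-time measures can be illustrated with a short analysis sketch. The Python code below uses invented looking times (assumptions for illustration only, not data from the studies cited above) to show how researchers test whether infants look reliably longer at one type of outcome than another; longer looking at the unequal outcome is then interpreted as a violated expectation.

```python
from scipy import stats

# Hypothetical looking times (seconds) for two groups of infants, one shown
# an equal distribution of resources and one shown an unequal distribution.
equal_outcome   = [8.1, 7.4, 9.0, 6.8, 7.9, 8.3, 7.2, 8.6]
unequal_outcome = [12.4, 11.1, 13.0, 10.6, 12.8, 11.9, 12.2, 11.5]

# Welch's t-test: does mean looking time differ between the two outcomes?
t, p = stats.ttest_ind(unequal_outcome, equal_outcome, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")  # reliably longer looking -> "surprise"
```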
6. Infants Evaluate Third Parties' Helping and Hindering

While studies examining infants' expectations regarding sociomoral events certainly speak to infants' processing of sociomoral actions, they do not reveal whether infants themselves evaluate others based on their prosocial or antisocial actions. Perhaps infants expect others to approach prosocial others and/or avoid antisocial others, or to divide resources fairly, but are not personally concerned with how third parties are treated. If infants are not concerned with the treatment of others, then infants' sociomoral expectations should not be considered evidence for moral intuitions.5
To directly measure whether infants evaluate the morally relevant behaviors of third parties, researchers designed a series of scenarios that are enacted via live puppet shows. In each scenario, a protagonist repeatedly attempts to achieve a goal: To climb a steep hill, to open the lid on a box and reach an attractive toy, or to retrieve a dropped ball (Hamlin & Wynn, 2011; Hamlin et al., 2007; see also Buon et al., 2014; Scola et al., 2015). Following a series of unsuccessful attempts, the protagonist's goal is facilitated by a "helper" (who bumps the protagonist up the hill, helps him open the box, or returns the ball) or blocked by a "hinderer" (who bumps the protagonist down the hill, closes the lid on the box, or runs away holding the ball). After watching repeated helping and hindering events in alternation, infants are presented with the helper and hinderer, and their preference for either puppet is determined by which puppet they look toward longer or touch first.

By 3 months of age, infants selectively look toward puppets who bumped the protagonist up the hill versus puppets who bumped him down, and toward those who returned rather than stole the protagonist's ball (Hamlin & Wynn, 2011; Hamlin et al., 2010). This looking-time preference for helpers over hinderers is driven by a negative evaluation of hinderers rather than a positive evaluation of helpers: 3-month-olds look longer toward neutral puppets (who were not involved in the unfulfilled goal scenario) than toward hinderers but look equally toward neutral puppets and helpers (Hamlin et al., 2010). Once infants are physically capable of intentionally reaching for objects (4 to 5 months of age; McDonnell, 1975), infants selectively reach for helpers over hinderers in the hill, box, and ball scenarios described here (Hamlin & Wynn, 2011; Hamlin et al., 2007; see also Scola et al., 2015; but see Salvadori et al., 2015). These older infants both negatively evaluate the hinderer and positively evaluate the helper: They reach toward neutral puppets over hinderers and toward helpers over neutral puppets (Hamlin et al., 2007). Finally, as suggested by emotion-based moral theories, infants' responses may be driven by emotional reactions toward helping and hindering displays: Both 7- and 21- to 22-month-olds show more positive reactions after observing helping than after observing hindering (Steckler et al., 2017).

Infants' preferences for helpers over hinderers cannot be attributed to superficial characteristics of the puppets or to how the puppets are presented to infants. The shape and color of the helper and hinderer puppets, as well as the side (right or left) on which the puppets appear during the preference measurement, are varied across infants. Further, parents are instructed to close their eyes during critical parts of the procedure, and the experimenter presenting the helper and hinderer puppets during preference measurement is unaware of what actions the puppets performed. Thus, infants' reliable preference for helpers over hinderers is most likely due to the performance of helping or hindering actions during the puppet show (but see Scarf et al., 2012).
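How such forced-choice preferences are typically quantified can be shown with a short sketch. The Python code below runs an exact two-sided binomial test against the 50/50 chance expectation; the counts are hypothetical stand-ins for illustration, not data from the studies cited above.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes at most as likely as the one observed."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

# Hypothetical result: 14 of 16 infants reach for the helper puppet.
n_infants, chose_helper = 16, 14
print(binomial_two_sided_p(chose_helper, n_infants))  # ~0.004, well below .05
```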
7. The Social Nature of Infants' Evaluations

Given that the abovementioned factors are not responsible for infants' preferences for helpers over hinderers, we can ask what it is about these scenarios that drives infants' selective looking and reaching. One possibility is that infants' preferences are based on the sociomoral valence of the helping and hindering scenarios. For instance, infants may have reached toward the puppet that pushed the protagonist to the top of the hill because they prefer those who help others achieve their goals. However, infants' preferences could instead be due to perceptual aspects of the helping and hindering scenarios. For instance, infants may have reached toward the same puppet because they prefer those associated with particular movements and/or because they dislike other movement types (see Scarf et al., 2012).
To explore the possibility that infants' preferences are driven by low-level physical features of the helping and hindering scenarios, studies have included conditions in which helpers and hinderers interact with nonsocial targets, such as an eyeless shape that lacks self-propelled motion or a plastic non-agentive "claw." In these cases, "helpers" bump the shape up the hill, open a box with the claw, or roll a ball back to the claw, while "hinderers" bump the shape down the hill, close the box, or run away holding the ball; critically, the nonsocial target entails that these actions are neither socially nor morally relevant. When infants are presented with these characters, their looking and reaching behaviors reveal no preference for either the "helper" or the "hinderer" (Hamlin & Wynn, 2011; Hamlin et al., 2007, 2010), suggesting that their preferences in social versions of the scenarios were not due solely to perceptual aspects of the helper and hinderer displays. Further evidence for a sociomoral interpretation of infants' preferences comes from a study demonstrating that 10-month-olds prefer individuals who direct positive actions toward social targets (comforting a human) and negative actions toward nonsocial targets (pushing a backpack) rather than the reverse (Buon et al., 2014).

Beyond requiring social targets, infants' evaluations also require clear evidence that the helper and hinderer are facilitating and blocking a specific goal. For example, 6- to 11-month-olds prefer helpers over hinderers in the hill scenario when the protagonist's eyes are fixed pointing uphill, consistent with an intent to climb upward. In contrast, infants show no preference when the protagonist's eyes are unfixed and thus tend to point down the hill while he moves upward (Hamlin, 2015; see Scarf et al., 2012). Finally, infants' preferences are not restricted to the evaluation of helpers and hinderers: Infants in the second year of life also prefer individuals who distribute resources fairly over those who distribute resources unfairly (Burns & Sommerville, 2014; Geraci & Surian, 2011).

Overall, these studies reveal that infants' preference for prosocial over antisocial others disappears when the protagonist is not a social agent with a clear unfulfilled goal, and that infants' preference for prosocial over antisocial others is evident across different types of interactions. This suggests that infants' preferences are rooted in the social nature of such interactions rather than in any perceptual differences between them.
8. The Moral Nature of Infants' Evaluations

The evidence regarding infants' preferences for helpers over hinderers clearly demonstrates the social nature of infants' evaluations. However, this evidence does not strictly entail that infants' evaluative intuitions are rooted in a moral sense—that is, that infants have an impartial sense that helping is good and/or hindering is bad. For instance, infants' preference for helpers over hinderers may be based on which individuals infants view as more likely to help them in the future.6 Consistent with this social but not moral interpretation, 6-month-olds' neural activity while observing helping and hindering scenarios suggests that the detection of prosociality may be related to social processing (i.e., processes that support goal-directed grasping, pointing, and gaze direction; Gredebäck et al., 2015) rather than to specifically moral processing.
The remainder of this chapter will outline studies designed to explore whether infants' evaluations are simply social or whether these evaluations are sensitive to moral concerns. Such moral concerns include the intentions and epistemic states of those performing prosocial and antisocial actions, the previous behavior of those targeted by prosocial and antisocial actions, and the value of moral actions independent of benefit to oneself.
9. Infants' Evaluations Are Sensitive to the Individual's Intent

When evaluating whether an individual's action is morally acceptable, mature moral thinkers consider that individual's intention (e.g., Cushman, 2008). For instance, an individual who intentionally harmed someone is often judged more negatively than an individual who accidentally caused the same harm. As discussed earlier, studies exploring explicit moral judgments have shown that children can consider intentions from age 3 (Nelson, 1980; Nobes et al., 2009; Yuill & Perner, 1988). That said, the measurement of implicit evaluations might reveal sensitivity to the role of intentions in sociomoral evaluations earlier in development.

To determine whether infants' evaluations are sensitive to the intentions behind prosocial and antisocial actions, 5- and 8-month-olds were shown puppet shows featuring successful and unsuccessful helpers and hinderers (Hamlin, 2013). Successful helpers and hinderers brought about outcomes that matched their goals, assisting or preventing a protagonist in opening a box (as in Hamlin & Wynn, 2011). In contrast, unsuccessful helpers and hinderers failed to bring about their intended outcome: The unsuccessful helper attempted but failed to assist the protagonist in opening the box, and the unsuccessful hinderer unintentionally allowed the protagonist to open the box alone. Across the different combinations of successful and unsuccessful helpers and hinderers, 8-month-olds preferred characters with helpful intentions, regardless of the outcome. Interestingly, these infants did not distinguish between those who successfully and those who unsuccessfully carried out the same helpful or unhelpful intention (e.g., a successful helper versus an unsuccessful helper, or an unsuccessful hinderer versus a successful hinderer); these results suggest that 8-month-olds' sociomoral evaluations are relatively insensitive to outcomes. In contrast, 5-month-olds preferred successful helpers to successful hinderers but showed no preferences when presented with any contrast in which one or both characters failed to achieve an intended outcome (Hamlin, 2013). This is consistent with evidence that failed attempts in general are difficult to interpret early in development (Behne et al., 2005). Overall, infants' evaluations of helpful and unhelpful actions privilege intention by 8 months.
10. Infants' Evaluations Are Sensitive to the Individual's Epistemic States

In addition to intention, mature moral judgments are influenced by other mental states. For instance, whether one has knowledge of others' goals can vary the valence of the same physical action. Consider that in the box scenario described here, opening the box is prosocial if the helper knows that the protagonist wants to open the box to reach an attractive toy. Without knowledge of the protagonist's goal, box opening could be prosocial, antisocial, or morally neutral (e.g., if the protagonist wanted to keep his toy inside the box or if the protagonist had no goal associated with the box).
To determine whether infants' evaluations are influenced by knowledge of others' goals, 10-month-olds were shown scenarios featuring a protagonist and two observers (Hamlin et al., 2013a). The protagonist first displayed a preference for one of two toys by repeatedly grabbing that toy. The two observers watched the protagonist's toy selection and therefore could infer his goal (i.e., to reach the preferred toy). The protagonist was then blocked from reaching both toys. In alternation, the knowledgeable helper allowed the protagonist to reach his preferred toy, and the knowledgeable hinderer allowed the protagonist to reach his non-preferred toy. When presented with the knowledgeable helper and hinderer, 10-month-olds selectively reached toward the helper, consistent with past work demonstrating that infants prefer helpers to hinderers.

The nature of infants' evaluations was explored via two subsequent conditions. In one condition, only one object was present during the protagonist's toy grabs, and thus the observers could not infer the protagonist's toy preference later, when two toys were available. In a second condition, the "observers" were offstage while the protagonist displayed his toy preference and thus had no information regarding the protagonist's goal. Subsequently, each "observer" allowed the protagonist to reach one toy. When presented with the "observers" in these two conditions, 10-month-olds showed no preference, suggesting that infants' evaluations require that helpers' and hinderers' acts be related to others' goals and that helpers and hinderers be aware of this relationship (Hamlin et al., 2013a).
11. Infants' Evaluations Are Sensitive to the Target's Previous Behavior

Thus far we have reviewed evidence that infants' evaluations of helpers and hinderers are sensitive to prosocial and antisocial actors' mental states, including their intentions and knowledge. These studies reveal that infants prefer those who intentionally and knowingly facilitate rather than block others' goals. That said, there are cases in which mature moral thinkers positively evaluate goal blocking and/or negatively evaluate goal facilitation. Specifically, punishment directed toward deserving wrongdoers can be judged morally acceptable, and failure to punish when appropriate may be judged unacceptable. In terms of cooperative systems, punishment may be thought necessary to reduce the potential benefits of "cheating" the system—taking advantage of others' cooperative behaviors without reciprocating (e.g., Boyd & Richerson, 1992; O'Gorman et al., 2009).

To examine whether infants are sensitive to the context in which helping and hindering actions are performed, infants saw scenarios in which helpers and hinderers targeted protagonists who (infants knew) had previously been prosocial or antisocial themselves. Across several studies, 4.5-, 8-, and 19-month-olds preferred helpers who facilitated the goals of prosocial puppets and hinderers who blocked the goals of antisocial puppets (Hamlin, 2014; Hamlin et al., 2011). These studies demonstrate that infants' evaluations are sensitive to the context in which helpful and unhelpful actions are performed and suggest that infants positively evaluate those who reward and punish appropriately.
12. Infants' Evaluations Privilege Moral Value over Personal Interest

In addition to considering an individual's mental states and the previous behavior of those targeted by moral actions, mature moral judgments privilege the moral value of actions over their efficacy in satisfying self-interested concerns.
A recent study suggests that infants can also privilege moral concerns over their own self-interest (Tasimi & Wynn, 2016). This study capitalized on the fact that, in general, infants prefer to approach containers holding larger rather than smaller numbers of resources (e.g., crackers; Feigenson et al., 2002). In this study, 12- and 13-month-olds were offered one plate containing only one cracker and another plate containing two crackers. Critically, these plates were either offered by two neutral puppets, who had neither helped nor hindered anyone, or by two puppets of differing moral value: The plate with one cracker was offered by a previously helpful puppet, and the plate with two crackers was offered by a previously unhelpful puppet. Infants reliably approached the plate with one cracker offered by the prosocial puppet, suggesting that infants' moral concerns are sufficient to overcome their self-interest. In a condition in which the cost of siding with prosociality was much greater (one versus eight crackers), more infants did approach the antisocial puppet; however, this tendency did not reach significance. Overall, this study reveals that in some cases infants are willing to place moral (or reputational) considerations ahead of their own interests (Tasimi & Wynn, 2016). Critically, the subordination of egocentric self-interest to moral concerns is a hallmark of theories of morality and moral development.
13. Distinguishing Social, Personal, and Moral Interpretations of Infants' Sociomoral Evaluations

Taken together, a growing body of work reveals that infants readily form evaluations based on third parties' prosocial and antisocial behaviors. Certain aspects of these studies suggest a "moral" interpretation of infants' intuitions regarding these behaviors, including their sensitivity to the intentional and epistemic states of helpers and hinderers and the possibility that infants can privilege moral concerns over their own interests. That said, it remains possible that infants' responses reflect nuanced and powerful social and personal, but not moral, considerations.

For instance, the tendency to evaluate intentional and epistemic information (e.g., Hamlin, 2013; Hamlin et al., 2013a), while long considered a hallmark of moral maturity (e.g., Piaget, 1932), could also reflect mere self-interest. Specifically, infants may track others' mental states to gauge how likely those others are to benefit infants themselves in the future, and presumably those who have intended to help others in the past are more likely to help everyone, including infants, subsequently. Similarly, infants may have chosen to forgo one cracker to interact with a prosocial other (e.g., Tasimi & Wynn, 2016) because they viewed the effort dedicated to helping a third party as more indicative of future personal help than the cost associated with providing one additional cracker. Currently, it is not possible to determine whether infants are highly skilled at determining who is likely to benefit them later on or whether infants view helpful behavior as valuable in itself.

Relatedly, it is currently impossible to distinguish between social and moral interpretations of infants' preferences for those who hinder antisocial others (e.g., Hamlin, 2014; Hamlin et al., 2011). The moral interpretation is that infants prefer those who punish antisocial others because they see them as somehow deserving this preference.7 On the other hand, a plausible social interpretation is that infants view the "punisher's" negative behavior toward the former hinderer as indicating that the puppet and the infant share a negative opinion about the former hinderer.
This shared evaluation might provide a source of mutual liking and affiliation between the infant and the "punisher" (as illustrated by the common phrase "the enemy of my enemy is my friend"; e.g., Heider, 1958). Consistent with this social explanation, 9- and 14-month-olds prefer those who help others similar to them (i.e., who share their food preferences) and prefer those who hinder others dissimilar to them (i.e., who hold the opposite food preferences; Hamlin et al., 2013b). Similarly, 15-month-olds generally prefer to interact with those who fairly distribute resources over those who unfairly distribute resources; however, this preference is disrupted when the fair distributor belongs to a racial outgroup (Burns & Sommerville, 2014). Although the underlying motives for infants' preferences in these studies are unclear, one possibility is that infants show these tendencies because they view similar others and ingroup members as those most likely to benefit them in the future; that is, their social preferences may ultimately be based on self-interest.

That said, it is an open question whether a social interpretation of infants' preferences is necessarily incompatible with a moral interpretation. In recent years, scholars have increasingly noted that moral systems in some cultures include group-based concerns; that is, the moral-conventional distinction is somewhat less pronounced elsewhere than in North America (Haidt & Joseph, 2008; Shweder et al., 1997).8 For example, given a limited number of resources, it may be "moral" to privilege the welfare of one's ingroup (for example, the immediate family). This analysis need not include considerations of personal gain. To the extent that privileging the welfare of ingroup over outgroup members reflects a moral concern, infants' preference for those who hinder antisocial others and those who hinder dissimilar others could be viewed as evidence that infants make moral evaluations, albeit ones that will need revision in different cultural environments.
14. Conclusion

Overall, the conceptualization of moral judgments as reason-based and as intuition-based has motivated researchers to explore the emergence of these judgments. Researchers who subscribe to a Kantian, reason-based definition of morality have focused on the development of explicit moral reasoning and have shown that children's explicit moral judgments reflect universal moral principles in early childhood. While this traditional view of development entailed that preverbal infants are not capable of moral judgments, an expanded definition of morality as rooted in emotion-based intuitions (consistent with the philosophies of Hume and Smith) inspired researchers to explore infants' implicit evaluations. Researchers have since found that infants make complex evaluations of others' prosocial and antisocial actions. Consistent with the view that infants' intuitions regarding the morally relevant behaviors of other people reflect the early stages of moral evaluation, infants' evaluations are sensitive to the mental states of helpers and hinderers (i.e., intention and knowledge of others' goals), are sensitive to the previous behavior of those being helped and hindered, and can privilege moral or reputational value over immediate self-interest.

If we assume infants lack complex reasoning skills, these evaluative abilities are likely rooted in emotions or intuitions regarding the moral valence of actions rather than in a reasoned appreciation of moral principles. These moral intuitions may be influenced by early social experiences: Infants are attuned to others' feelings and behaviors from early in life (e.g., Sagi & Hoffman, 1976; Walden & Ogan, 1988) and can learn from others' distinct reactions to social versus moral concerns within the first two years of life (e.g., Dahl, 2015; Smetana, 1984).
That said, the early emergence of evaluations is also consistent with proposals that certain evaluative tendencies have evolved to support humans' cooperative systems (Axelrod & Hamilton, 1981; Boyd & Richerson, 1992; Cosmides & Tooby, 1992; Henrich & Henrich, 2007; Trivers, 1971).

Future work should empirically evaluate whether infants' evaluations of others' moral actions are rooted in emotional versus non-emotional intuitions, the extent to which infants' moral sense emerges without extensive socialization, the emergence of a moral sense within and across cultures, how experiences interact with early evaluative tendencies, and the continuity between infants' sociomoral evaluations and children's explicit moral judgments. The answers to these and other questions will allow us to determine the roles of both nature and nurture in the development of morally mature thinkers. These answers may even allow us to resolve some of the debates about the nature of human morality that have captivated philosophers and scientists alike.
Notes
1. See too Chapters 6, 7, 8, 16, 17 and 18 of this volume for analyses of moral intuitions and their relation to emotion.
2. See Chapter 11 of this volume for more on Hume's and Kant's moral epistemologies and the effects their theories have had on the history of the subject.
3. See Chapter 6 of this volume for further discussion of models of moral learning.
4. See Chapter 9 of this volume for an extensive review of the evolution of those components of human moral psychology implicated in various forms of cooperation.
5. See Chapter 1 for discussion of the intuition, there attributed to William Frankena, that moral cognition distinguishes itself from prudential calculation by essentially involving concern for the welfare or rights of others.
6. See Chapter 9 of this volume for an analysis of "reciprocal altruism."
7. See Chapters 9 and 29 of this volume on intuitions of desert and their role in justifying public policies.
8. For in-depth discussion of these findings, see Chapters 1 and 2 of this volume.
References
Ardila-Rey, A. and Killen, M. (2001). "Middle Class Colombian Children's Evaluations of Personal, Moral, and Social-Conventional Interactions in the Classroom," International Journal of Behavioral Development, 25 (3), 246–255.
Axelrod, R. and Hamilton, W. D. (1981). "The Evolution of Cooperation," Science, 211 (4489), 1390–1396.
Baird, J. A. and Astington, J. W. (2004). "The Role of Mental State Understanding in the Development of Moral Cognition and Moral Action," New Directions for Child and Adolescent Development, 2004 (103), 37–49.
Bandura, A. and McDonald, F. J. (1963). "The Influence of Social Reinforcement and the Behavior of Models in Shaping Children's Moral Judgments," Journal of Abnormal and Social Psychology, 67 (3), 274–281.
Behne, T., Carpenter, M., Call, J. and Tomasello, M. (2005). "Unwilling Versus Unable: Infants' Understanding of Intentional Action," Developmental Psychology, 41 (2), 328–337.
Blair, R. J. (1995). "A Cognitive Developmental Approach to Morality: Investigating the Psychopath," Cognition, 57 (1), 1–29.
Boyd, R. and Richerson, P. J. (1992). "Punishment Allows the Evolution of Cooperation (or Anything Else) in Sizable Groups," Ethology and Sociobiology, 13 (3), 171–195.
Buon, M., Jacob, P., Margules, S., Brunet, I., Dutat, M., Cabrol, D. and Dupoux, E. (2014). "Friend or Foe? Early Social Evaluation of Human Interactions," PLoS One, 9 (2), e88612.
Burns, M. P. and Sommerville, J. A. (2014). "'I Pick You': The Impact of Fairness and Race on Infants' Selection of Social Partners," Frontiers in Psychology, 5, 3.
Colby, A., Kohlberg, L., Gibbs, J., Lieberman, M., Fischer, K. and Saltzstein, H. D. (1983). "A Longitudinal Study of Moral Judgment," Monographs of the Society for Research in Child Development, 48 (1/2), 1–124.
Cosmides, L. and Tooby, J. (1992). "Cognitive Adaptations for Social Exchange," in J. Barkow, L. Cosmides and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press, 163–228.
Cushman, F. (2008). "Crime and Punishment: Distinguishing the Roles of Causal and Intentional Analyses in Moral Judgment," Cognition, 108 (2), 353–380.
Cushman, F., Sheketoff, R., Wharton, S. and Carey, S. (2013). "The Development of Intent-Based Moral Judgment," Cognition, 127 (1), 6–21.
Dahl, A. (2015). "The Developing Social Context of Infant Helping in Two U.S. Samples," Child Development, 86 (4), 1080–1093.
Dahl, A., Waltzer, T. and Gross, R. L. (2018). "Helping, Hitting and Developing: Toward a Constructivist-Interactionist Account of Early Morality," in C. C. Helwig (ed.), New Perspectives on Moral Development. New York: Routledge, 33–54.
Decety, J., Michalska, K. J. and Kinzler, K. D. (2012). "The Contribution of Emotion and Cognition to Moral Sensitivity: A Neurodevelopmental Study," Cerebral Cortex, 22 (1), 209–220.
Dunfield, K., Kuhlmeier, V. A., O'Connell, L. and Kelley, E. (2011). "Examining the Diversity of Prosocial Behavior," Infancy, 16 (3), 227–247.
Eisenberg, N. and Miller, P. A. (1987). "The Relation of Empathy to Prosocial and Related Behaviors," Psychological Bulletin, 101 (1), 91–119.
Eisenberg, N., Spinrad, T. and Morris, A. (2014). "Empathy-Related Responding in Children," in M. Killen and J. G. Smetana (eds.), Handbook of Moral Development (2nd edition). New York: Psychology Press, 184–207.
Fawcett, C. and Liszkowski, U. (2012). "Infants Anticipate Others' Social Preferences," Infant and Child Development, 21 (3), 239–249.
Feigenson, L., Carey, S. and Hauser, M. (2002). "The Representations Underlying Infants' Choice of More: Object Files Versus Analog Magnitudes," Psychological Science, 13 (2), 150–156.
Freud, S. (1930/1961) Civilization and Its Discontents. New York: W. W. Norton.
Gavaghan, M. P., Arnold, K. D. and Gibbs, J. C. (1983). "Moral Judgment in Delinquents and Nondelinquents: Recognition Versus Production Measures," The Journal of Psychology, 114 (2), 267–274.
Geraci, A. and Surian, L. (2011). "The Developmental Roots of Fairness: Infants' Reactions to Equal and Unequal Distributions of Resources," Developmental Science, 14 (5), 1012–1020.
Gibbs, J. C., Basinger, K. S., Grime, R. L. and Snarey, J. R. (2007). "Moral Judgment Development Across Cultures: Revisiting Kohlberg's Universality Claims," Developmental Review, 27 (4), 443–500.
Gibbs, J. C. and Widaman, K. F. (1982). Social Intelligence: Measuring the Development of Sociomoral Reflection. Englewood Cliffs, NJ: Prentice Hall.
Gilligan, C. (1977). "In a Different Voice: Women's Conception of the Self and of Morality," Harvard Educational Review, 47, 481–517.
Gredebäck, G., Kaduk, K., Bakker, M., Gottwald, J., Ekberg, T., Elsner, C., Reid, V. and Kenward, B. (2015). "The Neuropsychology of Infants' Pro-Social Preferences," Developmental Cognitive Neuroscience, 12, 106–113.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M. and Cohen, J. D. (2001). "An fMRI Investigation of Emotional Engagement in Moral Judgment," Science, 293 (5537), 2105–2108.
Grusec, J. E. and Goodnow, J. J. (1994). "Impact of Parental Discipline Methods on the Child's Internalization of Values: A Reconceptualization of Current Points of View," Developmental Psychology, 30 (1), 4–19.
Haidt, J. and Joseph, C. (2008). "The Moral Mind: How 5 Sets of Innate Moral Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules," in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Volume 3. New York: Oxford University Press, 367–391.
Hamlin, J. K. (2013). "Failed Attempts to Help and Harm: Intention Versus Outcome in Preverbal Infants' Social Evaluations," Cognition, 128 (3), 451–474.
———. (2014). "Context-Dependent Social Evaluation in 4.5-Month-Old Human Infants: The Role of Domain-General Versus Domain-Specific Processes in the Development of Evaluation," Frontiers in Psychology, 5, 614.
———. (2015). "The Case for Social Evaluation in Preverbal Infants: Gazing Toward One's Goal Drives Infants' Preferences for Helpers over Hinderers in the Hill Paradigm," Frontiers in Psychology, 5, 1563.
Hamlin, J. K., Ullman, T., Tenenbaum, J., Goodman, N. and Baker, C. (2013a). "The Mentalistic Basis of Core Social Cognition: Experiments in Preverbal Infants and a Computational Model," Developmental Science, 16 (2), 209–226.
Hamlin, J. K., Mahajan, N., Liberman, Z. and Wynn, K. (2013b). "Not Like Me = Bad: Infants Prefer Those Who Harm Dissimilar Others," Psychological Science, 24 (4), 589–594.
Hamlin, J. K. and Wynn, K. (2011). "Young Infants Prefer Prosocial to Antisocial Others," Cognitive Development, 26 (1), 30–39.
Hamlin, J. K., Wynn, K. and Bloom, P. (2007). "Social Evaluation by Preverbal Infants," Nature, 450, 557–559.
———. (2010). "Three-Month-Olds Show a Negativity Bias in Their Social Evaluations," Developmental Science, 13 (6), 923–929.
Hamlin, J. K., Wynn, K., Bloom, P. and Mahajan, N. (2011). "How Infants and Toddlers React to Antisocial Others," Proceedings of the National Academy of Sciences of the United States of America (PNAS), 108 (50), 19931–19936.
Harris, S. (2010). The Moral Landscape: How Science Can Determine Human Values. New York: Free Press.
Heider, F. (1958). The Psychology of Interpersonal Relations. New York: Wiley-Blackwell.
Henrich, N. and Henrich, J. (2007). Why Humans Cooperate: A Cultural and Evolutionary Explanation. Oxford: Oxford University Press.
Hepach, R., Vaish, A. and Tomasello, M. (2012). "Young Children Are Intrinsically Motivated to See Others Helped," Psychological Science, 23 (9), 967–972.
Hoffman, M. L. (2000). Empathy and Moral Development: Implications for Caring and Justice. Cambridge: Cambridge University Press.
Hume, D. (1739/1969) A Treatise of Human Nature. Harmondsworth: Penguin Classics.
Kant, I. (1785/1993) Grounding for the Metaphysics of Morals. Indianapolis, IN: Hackett Publishing.
Kochanska, G. and Aksan, N. (2006). "Children's Conscience and Self-Regulation," Journal of Personality, 74 (6), 1587–1617.
Kohlberg, L. (1981). Essays on Moral Development: Vol. 1. The Philosophy of Moral Development. San Francisco: Harper & Row.
———. (1984). Essays on Moral Development: Vol. 2. The Psychology of Moral Development. San Francisco: Harper & Row.
Kuhlmeier, V., Wynn, K. and Bloom, P. (2003). "Attribution of Dispositional States by 12-Month-Olds," Psychological Science, 14 (5), 402–408.
Lee, Y., Yun, J., Kim, E. and Song, H. (2015). "The Development of Infants' Sensitivity to Behavioral Intentions When Inferring Others' Social Preferences," PLoS One, 10 (9), e0135588.
Matsuba, M. K. and Walker, L. J. (2004). "Extraordinary Moral Commitment: Young Adults Involved in Social Organizations," Journal of Personality, 72 (2), 413–436.
McDonnell, P. M. (1975). "The Development of Visually Guided Reaching," Perception & Psychophysics, 18 (3), 181–185.
Meristo, M., Strid, K. and Surian, L. (2016). "Preverbal Infants' Ability to Encode the Outcome of Distributive Actions," Infancy, 21 (3), 353–372.
Meristo, M. and Surian, L. (2013). "Do Infants Detect Indirect Reciprocity?" Cognition, 129 (1), 102–113.
———. (2014). "Infants Distinguish Antisocial Actions Directed Towards Fair and Unfair Agents," PLoS One, 9 (10), e110553.
Mikhail, J. (2011). Elements of Moral Cognition: Rawls' Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. New York: Cambridge University Press.
Nelson, S. A. (1980). "Factors Influencing Young Children's Use of Motives and Outcomes as Moral Criteria," Child Development, 51 (3), 823–829.
Nisan, M. and Kohlberg, L. (1982). "Universality and Variation in Moral Judgment: A Longitudinal and Cross-Sectional Study in Turkey," Child Development, 53 (4), 865–876.
Nobes, G., Panagiotaki, G. and Pawson, C. (2009). "The Influence of Negligence, Intention, and Outcome on Children's Moral Judgments," Journal of Experimental Child Psychology, 104 (4), 382–397.
Nucci, L. P. and Turiel, E. (1978). "Social Interactions and the Development of Social Concepts in Preschool Children," Child Development, 49 (2), 400–407.
O'Gorman, R., Henrich, J. and Van Vugt, M. (2009). "Constraining Free Riding in Public Goods Games: Designated Solitary Punishers Can Sustain Human Cooperation," Proceedings of the Royal Society of London Series B, 276 (1655), 323–329.
Piaget, J. (1932). The Moral Judgment of the Child. New York: Free Press.
Premack, D. and Premack, A. J. (1997). "Infants Attribute Value± to the Goal-Directed Actions of Self-Propelled Objects," Journal of Cognitive Neuroscience, 9 (6), 848–856.
Rest, J. R. (1979). Development in Judging Moral Issues. Minneapolis: University of Minnesota Press.
Rhee, S. H., Friedman, N. P., Boeldt, D. L., Corley, R. P., Hewitt, J., Knafo, A., Lahey, B. B., Robinson, J., Van Hulle, C. A., Waldman, I. D. and Young, S. E. (2013). "Early Concern and Disregard for Others as Predictors of Antisocial Behavior," Journal of Child Psychology and Psychiatry, 54 (2), 157–166.
Rossano, F., Rakoczy, H. and Tomasello, M. (2011). "Young Children's Understanding of Violations of Property Rights," Cognition, 121 (2), 219–227.
Sagi, A. and Hoffman, M. L. (1976). "Empathic Distress in the Newborn," Developmental Psychology, 12 (2), 175–176.
Salvadori, E., Blazsekova, T., Volein, A., Karap, Z., Tatone, D., Mascaro, O. and Csibra, G. (2015). "Probing the Strength of Infants' Preference for Helpers over Hinderers: Two Replication Attempts of Hamlin and Wynn (2011)," PLoS One, 10 (11), e0140570.
Scarf, D., Imuta, K., Colombo, M. and Hayne, H. (2012). "Social Evaluation or Simple Association? Simple Associations May Explain Moral Reasoning in Infants," PLoS One, 7 (8), e42698.
Schmidt, M. F. H., Rakoczy, H. and Tomasello, M. (2012). "Young Children Enforce Social Norms Selectively Depending on the Violator's Group Affiliation," Cognition, 124 (3), 325–333.
Schmidt, M. F. H. and Sommerville, J. A. (2011). "Fairness Expectation and Altruistic Sharing in 15-Month-Old Human Infants," PLoS One, 6 (10), e23223.
Scola, C., Holvoet, C., Arciszewski, T. and Picard, D. (2015). "Further Evidence for Infants' Preference for Prosocial over Antisocial Behaviors," Infancy, 20 (6), 684–692.
Shweder, R. A., Much, N. C., Mahapatra, M. and Park, L. (1997). "The 'Big Three' of Morality (Autonomy, Community, Divinity) and the 'Big Three' Explanations of Suffering," in A. Brandt and P. Rozin (eds.), Morality and Health. New York: Routledge, 119–169.
Skinner, B. F. (1950). Beyond Freedom and Dignity. New York: Knopf.
Sloane, S., Baillargeon, R. and Premack, D. (2012). "Do Infants Have a Sense of Fairness?" Psychological Science, 23 (2), 196–204.
Smetana, J. G. (1984). "Toddlers' Social Interactions Regarding Moral and Social Transgressions," Child Development, 55 (5), 1767–1776.
———. (1989). "Toddlers' Social Interactions in the Context of Moral and Conventional Transgressions in the Home," Developmental Psychology, 25 (4), 499–508.
———. (2006). "Social-Cognitive Domain Theory: Consistencies and Variations in Children's Moral and Social Judgments," in M. Killen and J. G. Smetana (eds.), Handbook of Moral Development. Mahwah, NJ: Lawrence Erlbaum Associates, 119–154.
Smetana, J. G. and Braeges, J. L. (1990). "The Development of Toddlers' Moral and Conventional Judgments," Merrill-Palmer Quarterly, 36 (3), 329–346.
Smetana, J. G., Jambon, M. and Ball, C. (2014). "The Social Domain Approach to Children's Moral and Social Judgments," in M. Killen and J. G. Smetana (eds.), Handbook of Moral Development (2nd edition). New York: Psychology Press, 23–45.
Smith, A. (1759/1976) The Theory of Moral Sentiments, ed. D. D. Raphael and A. L. Macfie. Oxford: Clarendon Press.
Snarey, J. R. (1985). "Cross-Cultural Universality of Social-Moral Development: A Critical Review of Kohlbergian Research," Psychological Bulletin, 97 (2), 202–232.
Sommerville, J. A., Schmidt, M. F., Yun, J. and Burns, M. (2013). "The Development of Fairness Expectations and Prosocial Behavior in the Second Year of Life," Infancy, 18 (1), 40–66.
Steckler, C. M., Liberman, Z., Van de Vondervoort, J. W., Slevinsky, J., Le, D. T. and Hamlin, J. K. (2017). "Feeling Out a Link Between Feeling and Infant Sociomoral Evaluation," British Journal of Developmental Psychology. Epub ahead of print.
Taber-Thomas, B. C., Asp, E. W., Koenigs, M., Sutterer, M., Anderson, S. W. and Tranel, D. (2014). "Arrested Development: Early Prefrontal Lesions Impair the Maturation of Moral Judgment," Brain, 137 (4), 1254–1261.
Tasimi, A. and Wynn, K. (2016). "Costly Rejection of Wrongdoers by Infants and Children," Cognition, 151, 76–79.
Tisak, M. S. (1993). "Preschool Children's Judgments of Moral and Personal Events Involving Physical Harm and Property Damage," Merrill-Palmer Quarterly, 39 (3), 375–390.
Tremblay, R. E., Nagin, D. S., Seguin, J. R., Zoccolillo, M., Zelazo, P. D., Boivin, M., Perusse, D. and Japel, C. (2004). "Physical Aggression During Early Childhood: Trajectories and Predictors," Pediatrics, 114 (1), e43–e50.
Trivers, R. L. (1971). "The Evolution of Reciprocal Altruism," The Quarterly Review of Biology, 46 (1), 35–57.
Turiel, E. (1983). The Development of Social Knowledge: Morality and Convention. Cambridge: Cambridge University Press.
Vaish, A., Missana, M. and Tomasello, M. (2011). "Three-Year-Old Children Intervene in Third-Party Moral Transgressions," British Journal of Developmental Psychology, 29 (1), 124–130.
Walden, T. A. and Ogan, T. A. (1988). "The Development of Social Referencing," Child Development, 59 (5), 1230–1240.
Walker, L. J. (2006). "Morality and Gender," in M. Killen and J. G. Smetana (eds.), Handbook of Moral Development. Mahwah, NJ: Lawrence Erlbaum Associates, 93–118.
Warneken, F. (2013). "Young Children Proactively Remedy Unnoticed Accidents," Cognition, 126 (1), 101–108.
Warneken, F. and Tomasello, M. (2013). "Parental Presence and Encouragement Do Not Influence Helping in Young Children," Infancy, 18 (3), 345–368.
———. (2015). "The Developmental and Evolutionary Origins of Human Helping and Sharing," in D. A. Schroeder and W. G. Graziano (eds.), The Oxford Handbook of Prosocial Behavior. New York: Oxford University Press, 100–113.
Watson, J. B. (1930). Behaviorism (Rev. ed.). New York: W. W. Norton.
Yau, J. and Smetana, J. G. (2003). "Conceptions of Moral, Social Conventional, and Personal Events Among Chinese Preschoolers in Hong Kong," Child Development, 74 (3), 647–658.
Yuill, N. and Perner, J. (1988). "Intentionality and Knowledge in Children's Judgments of Actor's Responsibility and Recipient's Emotional Reaction," Developmental Psychology, 24 (3), 358–365.
Further Readings
Extended arguments for the innate origins of certain moral foundations are found in P. Bloom, Just Babies: The Origins of Good and Evil (New York: Crown Publishers, 2013) and M. Hauser, Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (New York: HarperCollins, 2006).
For an in-depth account of how moral intuitions could lead to moral judgments, see J. Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion (New York: Vintage Books, 2013). Four central papers and a series of commentaries discuss key issues in current evolutionary approaches to human morality in L. D. Katz, ed., Evolutionary Origins of Morality: Cross-Disciplinary Perspectives (Bowling Green: Imprint Academic, 2000). For a comprehensive overview of the psychological study of moral development, see both M. Killen and J. Smetana, eds., Handbook of Moral Development (Mahwah, NJ: Lawrence Erlbaum Associates, 2006) and M. Killen and J. Smetana, eds., Handbook of Moral Development (2nd ed.) (New York: Psychology Press, 2014).
Related Chapters
Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies? Chapter 3 Normative Practices of Other Animals; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How; Chapter 25 Moral Expertise; Chapter 27 Teaching Virtue.
6
MORAL LEARNING
Shaun Nichols
1. Introduction

The epistemic status of moral beliefs plausibly depends on the genesis of those beliefs. Imagine that the only reason we believe that killing aliens is wrong is because aliens wanted us to be more docile and installed the belief to achieve that end. In that case, our belief that killing aliens is wrong seems epistemically defective. At the other extreme, if we came to believe that killing aliens is wrong because it is a self-evident truth that we recognize through rational competence, then our belief is in very good epistemic standing. So the origins of our moral beliefs will be important for a moral epistemology.1

Many recent theories of moral judgment offer online processing accounts. That is, they attempt to explain moral judgment by invoking various cognitive and conative factors directly involved in generating a moral judgment in response to a situation or a vignette. Perhaps the most familiar such accounts draw on dual-process theories, according to which there are two broad classes of processes. System 1 processes tend to be fast, effortless, domain specific, inflexible, insensitive to new information, and generally ill-suited to effective long-term cost-benefit reasoning. System 2 processes are flexible, domain general, sensitive to new information, and better suited to long-term cost-benefit analysis but slow and effortful.

On Greene's dual-process account of moral judgment, when we are presented with the option of pushing one innocent person off of a footbridge to save five other innocent people, there is competition between a system 1 emotional process (which screams don't!) and a system 2 process that calculates the best outcome (which says "5 > 1, dummy"). Greene argues that if system 1 is indeed what leads people to judge that it's wrong to push in footbridge, we should discount the rational credentials of that judgment (2008).2

Jonathan Haidt's social intuitionist model is also a dual-process account of moral judgment. On his view, our moral reactions tend to be driven by system 1 affectively valenced intuitions. System 2 plays a subsidiary role—it primarily generates post hoc justifications for our affective intuitions (2001, 815). In one study, participants were presented with a vignette in which siblings Julie and Mark have a consensual and satisfying sexual encounter, using multiple forms of birth control. Participants overwhelmingly said that it was not okay for Julie and Mark to make love. When asked to defend their answers, Haidt reports,
participants often appealed to the risks of the encounter, but the experimenter effectively rebutted the justifications (e.g., by noting the use of contraceptives). Nonetheless, the participants continued to think that the act was wrong, even when they couldn't provide any undefeated justifications. A typical response was: "I don't know, I can't explain it, I just know it's wrong" (814). Haidt interprets this pattern as a manifestation of two processes: the moral condemnation is driven by an affective intuition (rather than reasoning) and the proffered justification comes from post hoc rationalizing.3

Both Greene's and Haidt's accounts of system 1 suggest that the process is not the kind of rational process that we might take to be epistemically vindicatory. There has been much debate about the adequacy and implications of these models, but there is a further question that researchers have begun to investigate: how did we come to have those emotions or intuitions?4 Recent work has explored the role of learning in the acquisition of the components of moral judgment. As we'll see, the way we learn morality might bear on the epistemic standing of our moral judgments.
2. Habit Learning

On Greene's model, our resistance to pushing the man off the bridge derives from an emotional reaction. But where does this emotional reaction come from? Reinforcement learning provides a potential answer to this question (Cushman, 2013; Crockett, 2013). To see how, we first need a brief review of reinforcement learning.

Researchers in machine learning distinguish between two kinds of reinforcement learning, model-based and model-free. In model-based reinforcement learning, the agent builds a model of their situation. Imagine a long maze with an I-shape that has one unit of cheese in the southwest corner and five units in the northwest corner. After running the maze several times, an agent with the requisite cognitive abilities can gradually learn a model of the maze. That model might include a map of the maze, where the cheese is, how much cheese is in each location, and the time it takes to get from one position to another.

Such a model can guide decision making in several ways. Suppose that the model includes the information that it takes 20 seconds to get from the midpoint of the maze to any of the corners. If the agent also knows that the time allowed in the maze is 25 seconds, then if the agent is placed at midpoint, the expected utility of going to the northwest is highest, and so the model provides a reason for going to that corner. Now imagine that the agent is placed at the southernmost point in the maze and it would take 35 seconds to get to the NW corner, but only 5 to get to the SW corner. In that case, the model would guide the agent to go to the SW corner since it will never make it to the NW corner and one unit of cheese is better than none. This model can also be the basis for planning. Thus, if obstacles are introduced to the maze—e.g., a button needs to be pushed in order to open a door to get into the corners—then given the goal of getting to the NW corner, the agent can incorporate into the plan of action the necessary steps for avoiding the obstacles. In addition, if the goals change, then the model might be used in a different way. If the agent is sated when put into the maze it might have a new goal of exploring the NE and SE corners of the maze.

Models can thus be a powerful instrument for good decision making. However, learning such models is computationally demanding. Animals also exhibit a much less computationally demanding form of learning—habit learning. This kind of learning is called
"model-free" because it doesn't depend on the construction of a model. Rather, the agent simply develops a value for particular actions given a situation. For instance, after getting food from pushing a blue lever several times, the reinforcement learning system might come to assign a positive value to the action of pushing a blue lever.5 Such model-free value representations drive habitual behavior, and this behavior can persist even when the original goal of the behavior is undermined.

For a simple action like pushing a lever to get food, it can be hard to determine whether the act is driven by a goal (get the food) or a habit (hit the lever). Hitting the lever to get the food would be the goal-driven response derived from model-based learning; hitting the lever because it's intrinsically rewarding would be the habitual response derived from model-free learning. We can see these explanations come apart when the goal is "devalued." In a characteristic devaluation experiment, a rat first learns that pushing the lever is the way to get food. The rat is then removed from the cage, fed until it is completely satiated and put back into the cage. In some conditions, rats will immediately start pushing the lever even though they no longer want the food. Fiery Cushman writes,

This apparently irrational action is easily explained by a model-free mechanism. The rat has a positive value representation associated with the action of pressing the lever in the "state" of being in the apparatus. This value representation is tied directly to the performance of the action, without any model linking it to a particular outcome. The rat does not press the lever expecting food; rather, it simply rates lever pressing as the behavioral choice with the highest value. (279)

This habit-like behavior can also be observed if a hungry rat is led to the clear knowledge that there is no food available, such that pressing the lever will not lead to food. Nonetheless, under certain conditions, the rat will still press the bar.
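To fix ideas, here is a minimal computational sketch of the contrast just described. It is purely illustrative: the maze layout, time costs, reward magnitudes, and learning rate are assumptions, not values from any experiment or from Cushman's paper. It shows how a model-based agent consults an explicit model while a model-free agent merely caches a value for the action itself (compare the lever values mentioned in note 5).

```python
# Illustrative sketch only: all numbers below are assumptions.

# --- Model-based agent: decisions consult an explicit model of the maze. ---
maze_model = {
    "NW": {"cheese": 5, "time_from": {"midpoint": 20, "south": 35}},
    "SW": {"cheese": 1, "time_from": {"midpoint": 20, "south": 5}},
}

def model_based_choice(start, time_allowed):
    """Choose the reachable corner with the most cheese, per the model."""
    best, best_payoff = None, 0
    for corner, info in maze_model.items():
        if info["time_from"][start] <= time_allowed and info["cheese"] > best_payoff:
            best, best_payoff = corner, info["cheese"]
    return best

print(model_based_choice("midpoint", 25))  # NW: both reachable, 5 beats 1
print(model_based_choice("south", 25))     # SW: NW is unreachable in time

# --- Model-free agent: a cached value attaches to the action itself. ---
lever_value = 0.0   # value of "press lever" in this situation
ALPHA = 0.3         # learning rate (assumed)

def press_lever(reward):
    """Nudge the cached action value toward the reward just received.
    No model connects the action to the food outcome."""
    global lever_value
    lever_value += ALPHA * (reward - lever_value)

for _ in range(10):          # training: pressing yields food
    press_lever(reward=1.0)

# Devaluation: the rat is satiated and no longer wants food. Nothing in
# the cached value refers to food, so the action still looks attractive.
print(round(lever_value, 2))  # ~0.97: lever pressing persists
```

The point of the sketch is structural: devaluing the outcome changes nothing in the model-free agent's cached value, which is just Cushman's explanation of the rat's perseverative pressing.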
Model-Free Learning and Rationality

As Cushman notes, there's something apparently irrational about the rat's behavior when it persists in hitting the lever even when the end that inspired the behavior is no longer desired. The model-free system doesn't draw on background knowledge and goals in updating values. As a result, when the rat's behavior is governed by the model-free value representation, it will not be sensitive to other evidence available to the rat. As a consequence of this, the system can generate perseverative behavior like habitual bar pressing. This all coheres with the kinds of rational shortcomings highlighted by Greene's account of system 1 processes—the model-free system is insensitive to background information and long-term goals and is generally ill-suited to cost-benefit reasoning.

Although model-free learning is poorly suited to cost-benefit analysis, model-free values can still contribute to an agent's decisions in rationally appropriate ways. To see this, it will be helpful to consider a new example, the instinctive aversion to breathing under water (see, e.g., Pedroso, 2012). Our aversion to breathing under water has a good goal-based origin, since typically breathing under water will have a bad consequence. But our aversion to breathing under water has also acquired a model-free value representation. This is
revealed by the fact that many people learning to scuba dive have difficulty breathing under water, even though they know that there is oxygen available through the mouthpiece. This aversion actually poses a hazard to the novice diver because the habitual tendency to hold one's breath can lead to a wide range of problems while diving. Divers learn to overcome this aversion.

To link this up with rational choice, imagine three people who have very strong (model-free generated) aversions to the action breathing under water. This aversion can be extinguished provided the learner gets enough practice. Two of these people, the resolute diver and the diffident diver, each has a strong desire to scuba dive, such that he believes it would greatly improve his life. The resolute diver decides to work to extinguish the aversion to breathing under water, which makes good rational sense given the value he places on diving. The diffident diver forgoes diving because of the action-based aversion, and this does not look rational since he is giving up something he regards as highly valuable; indeed, it plausibly counts as a case of weakness of will. The third person, the indifferent diver, has only a minimal desire to scuba dive, and he decides not to work to extinguish the aversion to breathing under water. This makes rational sense for him—the rewards of diving aren't worth the aversive experiences that would be incurred in extinguishing the aversion. Thus, while the process of model-free learning is inflexible and insensitive to background knowledge and goals, the values that result from this learning can serve as inputs to an agent's rationally apt decision making (as in the resolute and indifferent divers) or rationally defective decision making (as in the diffident diver).
Model-Free Learning and Moral Judgment

Cushman applies the model-free framework to explain judgments about moral dilemmas. First, he suggests that we can think of model-free reinforcement learning as generating "action-based" value representations and model-based reinforcement learning as generating "outcome-based" value representations:

the functional role of value representation in a model-free system is to select actions without any knowledge of their actual consequences, whereas the functional role of value representation in a model-based system is to select actions precisely in virtue of their expected consequences. This is the sense in which modern theories of learning and decision making rest on a distinction between action- and outcome-based value representations. (279)

Cushman then suggests that this distinction can explain responses to the kinds of moral dilemmas with which we started. When presented with the possibility of pushing a man in front of a train to save five people, we resist the pushing because our model-free system has assigned a negative value to the action-type pushing, and we have this negative value because pushing typically led to negative outcomes (e.g., harm to victim) (282).

In a series of clever experiments, Cushman and colleagues show that participants are in fact averse to performing actions that are typically harmful but happen to be harmless in the experiment. For instance, they asked subjects to use a rock to hit a manifestly fake hand.
They found stronger physiological responses to such actions as compared to parallel actions that aren't typically harmful (e.g., using a rock to smash a nut) (Cushman et al., 2012). This indicates that we do indeed have an action-based aversion to action types that are typically associated with causing harm (Cushman, 2013, 286).

We saw earlier with the divers that model-free values can serve as inputs to rational choice. So even if the resistance to pushing in footbridge is driven by an action-based aversion, that alone would not entail that the decision is irrational. However, it would contribute to an argument for the irrationality of this choice, in keeping with Greene's original argument that we should discount our intuition that it's wrong to push (2008). The model-free system is not sensitive to morally relevant factors like the known benefits of pushing (in this case, a net savings of four lives). And if an agent's judgment ignores such weighty factors in favor of an aversion to pushing, their judgment is rationally suspect. The contrast with the indifferent diver case is stark—his aversion to breathing under water is a rational basis for the indifferent diver to forgo scuba diving; but it does not seem rational to forgo a net savings of four lives to avoid the aversive experience associated with the action of pushing. Indeed, it would seem to be the epitome of self-indulgent squeamishness.
Descriptive Adequacy

If responses to dilemmas like footbridge were driven by action-based aversion, this would provide a basis for challenging the rational basis of those judgments. However, there is reason to doubt that action-based aversion can explain moral judgment. First, finding something aversive is not the same as judging it wrong (Nichols, 2004). The novice diver finds it aversive to breathe under water without judging that it is wrong for him to do so, much less judging the act morally wrong or immoral. Thus, to understand our judgments of wrongness (e.g., that it is wrong to push the man in front of the train), we apparently need something more than aversion. Indeed, this point applies to the very experiments that Cushman and colleagues report. Subjects are averse to pretending to smash a person's hand with a rock, but it's unlikely that they judge this pretense morally wrong. Second, very atypical actions (e.g., pushing someone off a bridge with a box of zucchini) would be judged wrong despite the absence of reinforcement history with these specific act-types (Ayars, 2016). Finally, whether a harm is intentional exerts an effect on moral judgment even when the mode of harm is exactly the same—e.g., bombing civilians intentionally is considered worse than bombing civilians as a side-effect of bombing an enemy. Yet this is difficult to explain on a model-free account, since intended and unintended harm presumably result in the same aversive consequences: putting your hand in a fire is no less aversive when the pain is a foreseen side-effect of retrieving your hot dog (Ayars, 2016).6

There is an easy way to address these deficiencies—by appealing to rules as a critical component of moral judgment (e.g., Nichols & Mallon, 2006; Nichols et al., 2016). Action-based aversion is insufficient for moral judgment since moral judgment is generated not merely by registering aversive feelings but by categorizing an act as a violation of a represented prohibition. Atypical actions can be registered as violations so long as the unfamiliar act falls under the category of action prohibited by the rule. And the role of intention in moral judgment can be explained if INTENTION is part of the representation of the moral rule.
It is of course consistent with a rule-based account that action-based aversions play an important role in moral judgments of wrongness. Moral rules that forbid actions that are intrinsically aversive might be treated as especially significant (Nichols, 2004). In addition, reinforcement learning might play an essential role in the internalization of the rules. However, if indeed rules play a critical role in moral judgment concerning moral dilemmas, then there is no direct argument from model-free learning to the conclusion that people's moral judgments aren't rational. The situation looks to be disanalogous to the irrationality of the diffident diver. For when we judge that it is wrong to push someone off of a bridge, the bare aversion to pushing is not the only thing that leads us to judge that it's wrong to push. We also have an internalized rule that prohibits the action. As a result, we can't discount the judgment that it is wrong to push unless we give some reason why this rule, or its role in the judgment, is rationally problematic.
3. Emotion Learning

If model-free learning would threaten to undermine the rational credentials of moral judgment, model-based learning seems better suited to rationally vindicating moral judgment. Peter Railton has recently promoted the rational basis of moral judgment by drawing on such resources (Railton, 2014, 837–838). As noted in the introduction, dual-process theories often characterize system 1 as rationally defective—inflexible, domain specific, insensitive to new information, and ill-suited to effective long-term cost-benefit reasoning. Railton maintains that recent work paints a very different picture. We do have a set of resources for unconscious decision making, which Railton calls the "broad affective system," and this system incorporates affective factors (2014, 827). But the system is a flexible learning system (813) that can incorporate information from multiple domains (2014, 817, 823) and is capable of "guiding behavioral selection via the balancing of costs, benefits, and risks" (2014, 833).7
The Broad Affective System and Rationality

How does the broad affective system fare epistemically? To be sure, the broad affective system is sensitive to a broader range of evidence than model-free reinforcement learning. However, the process by which we come to attune our emotions to risks and benefits is still critically less flexible and sensitive to new information than general cognition. For instance, if I tell you that the blue pill will make you ill, you will refrain from taking it, but not because my testimony generated an attuned fear or disgust response to the pill. We can immediately incorporate such testimonial evidence into our decision making without the attunement of the broad affective system.8

Nonetheless, Railton maintains that the broad affective system is rational in an important way: "the overall picture of the broad affective system in animals and humans is remarkably congruent with our philosophical understanding of the operation of rational procedures for learning and decision making" (2014, 835). As Railton notes, this system "is a learned information structure rather than a set of stimulus-response connections (for example, it separately encodes and updates value, risk, expected value, and relational and absolute space)," and thus, "it can properly be spoken of as more or less accurate, complete, reliable, grounded,
or experience-tested." As a result, Railton says, the broad affective system "has the necessary features to constitute a proto-form of implicit practical knowledge" (2014, 838).

Although the broad affective system is not nearly so limited as the model-free system, it remains the case that agents are plausibly characterized as irrational when they are driven by this system to act in ways they acknowledge to be imprudent or sub-optimal upon reflection. For example, many people have an attuned aversion to exercise because of the discomfort they experience when beginning an exercise regimen. This attuned aversion can lead agents to avoid exercising even when they know that moderate exercise would alleviate various ailments (e.g., back pain). Such an agent is arguably being irrational in allowing her broad affective system to drive decisions she would otherwise make differently.
The Broad Affective System and Moral Judgment

The broad affective system plays a key role in how we update our values. This is obviously true for nonmoral values. Rats acquire taste aversions when they come to associate tastes with subsequent nausea. The rat learns to assign a negative affective value to the taste, and this value might be incorporated into a model of a maze with different food options. Similarly, values that seem morally significant can also presumably be shaped by the broad affective system. Consider, for instance, the natural aversion in rats and monkeys to distress cues of their conspecifics (Masserman et al., 1964; Greene, 1969). In one devious experiment, a monkey learned that it needed to pull a chain to get food; subsequently the experimenter made it such that pulling the chain would yield food but it would also trigger a shock to a conspecific in an adjoining cage. In this task, several of the monkeys stopped pulling the chain. Their experience of witnessing the distress cues of a conspecific leads them to behave in a way that has a good moral outcome. One explanation for this behavior is that experiences of witnessing the distress cues of conspecifics consequent upon pulling the chain generates an affectively attuned appreciation that pulling the chain causes these outcomes: outcomes to which they are independently averse.

The broad affective system surely plays an important role in determining what kinds of things we will find good and bad. This is to be expected since, as Railton notes, this system balances costs, benefits, and risks (2014, 833). But what about moral judgments of wrongness, the kinds of examples with which we started? Railton suggests that the broad affective system can explain these judgments as well. Recall Haidt's case of siblings Julie and Mark having consensual sex. Haidt maintains that when people defend their condemnation by adverting to the riskiness of the encounter, this is nothing more than post hoc confabulation. Railton suggests otherwise and illustrates the point with a different sibling case, Jane and Matthew, who

decide that it would be interesting and fun if they tried playing Russian roulette with the revolver they are carrying with them for protection from bears. At very least it would be a new experience for each of them. As it happens, the gun does not go off, and neither suffers any lasting trauma from the experience. They both enjoyed the game, but decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that, was it OK for them to play Russian roulette with a loaded revolver? (2014, 849)
Most people think it obvious that it was not okay for the siblings to play Russian roulette. Railton goes on to draw the parallel to Julie and Mark:

Julie and Mark played Russian roulette with their psyches, arguably with more than a one-in-six chance of serious harm. The fact that experimental subjects had such harms uppermost in their minds when queried about their disapproval need not show mere confabulation, since running the risk of these harms is relevant to the question whether their conduct was "OK." (2014, 849)

The idea seems to be that participants' responses to the vignette reflect the kinds of risks that were aptly registered by the broad affective system. This account promises to give a kind of vindicatory explanation for people's judgments about Julie and Mark having sex. The judgments themselves derive from our becoming emotionally attuned to the costs, benefits, and risks associated with such behavior.
Descriptive Adequacy

Is the typical subject's judgment about Julie and Mark generated by the attunement of the broad affective system to the harms, benefits, and risks of incest? There's reason to be skeptical. Like Haidt's subjects, my immediate judgment about the case was that it was wrong for Julie and Mark to have sex. Why? Well, I can assure you that it wasn't from experiences making out with my sister. Most of us don't learn to condemn incest via our experiences with incestuous activities or by learning about the bad consequences from others who have had the requisite experiences.

What about the psychic costs that are risked by incest? First of all, psychic costs of sexual intercourse often aren't sufficient to generate condemnation. If two friends have sex despite knowing that there is a high risk of psychic harm, we might say that they exhibited bad judgment, but this isn't the same as what we find in the Haidt study, where participants say of Mark and Julie "I can't explain it, I just know it's wrong." Second, when presented with the case of Julie and Mark, a key part of the condemnation plausibly comes from the fact that it's categorized as incest. We learn to condemn incest because we are told that it's wrong. And the idea that there is a psychic risk here plausibly depends on the fact that we think incest is wrong (as opposed to just registering the naturally emerging costs and benefits of sibling sex). In a group where there is no stigma against sibling sex (e.g., ancient Egyptians [Hopkins, 1980]), there would be significantly less cost to the practice.

It's worth noting that the role of testimony in learning to condemn incest is incontestable. Otherwise we can't explain the variation in specific incest norms across cultures. In some cultures (parts of Korea and India), first-cousin marriage is absolutely forbidden; in other cultures (e.g., in Saudi Arabia), it is permitted; in other cultures, it is wrong to marry one's parallel cousin (i.e., the child of a parent's same-sex sibling), but not a cross-cousin (i.e., the child of a parent's opposite-sex sibling). These different norms and the different practices that flow from these norms are the product of cultural norms being passed down from generation to generation.9

The importance of categorizing an act as a violation is also evident from people's concern about whether an act falls under a proscribed category.10 For instance, people care
about whether a sexual encounter counts as incest. This is apparent from a casual web search for "is it incest," which returns thousands of hits. Here are some representative queries:

I stayed at my cousins house a few nights ago and hooked up with her step brother who is a year older than me. . . . I'm not sure how to feel about it, is it incest because he's my step cousin or just kind of weird haha.11

Is it incest if i have sexual relations with my cousin?12

Ugh. Is it incest if you have sex with your adopted brother? (Asking for a friend)13

I've suggested that the condemnation of incest does not emerge through learning the natural rewards and punishments of engaging in the behavior. We don't practice the behavior and thereby develop the recognition that the act is wrong. This is true for much of the moral domain. Consider cheating on tests. Most people judge that this is wrong before they ever cheat on a test. Why? Because they are told that it's wrong to cheat on tests. Or consider theft. Children typically don't try out stealing and have a gradual affective attunement to the costs of stealing that inclines them against theft. Again, children come to think stealing is wrong because we tell them that it is. In none of these cases do we find the appreciation of wrongness to emerge from a calculation of the costs, benefits, and risks. Rather, we learn rules that proscribe these various behaviors. Violating the rules often does, of course, result in aversive consequences like punishment. But those aversive consequences depend on the social recognition of the rule violation, and not solely on people's becoming attuned to the natural risks of incest, cheating, or stealing. So even though the broad affective system is presumably critical to learning values, it seems like it can't do all of the lifting. Once again, rules seem to be a key component of our capacity for moral judgment.14
4. Statistical Learning

I've argued that rules or norms play an essential role in moral judgment. However, it has been quite unclear how these kinds of rules are acquired, which is especially pressing given the apparent complexity of those principles operative in responses to moral dilemmas studied to date. For instance, people judge that actions that produce harms are worse than allowings that produce equal harm; people also distinguish intentional harms from unintended but foreseen harms (see, e.g., Cushman et al., 2006; Mikhail, 2011). These are subtle distinctions to acquire, and children are presumably never given any explicit instruction on the matter. Few parents tell children, "It is wrong to intend to do X, but it is permissible to allow X to occur." So if we are to explain people's facility with moral distinctions in terms of structured rules, we need some account of how they arrive at such complex rules despite scant (explicit) instruction. The most prominent accounts explain the complexity of rules by appealing to innate contributions (e.g., Dwyer et al., 2010). But recent work on statistical learning suggests an alternative.
132
suggests that children actually have an early facility with statistical reasoning. I'll present two sets of findings from this emerging research.

It is normatively appropriate to draw inferences from samples to populations when samples are randomly drawn from that population but typically not otherwise. To see whether children appreciated this aspect of statistical inference, Xu and Garcia (2008) showed infants a person pulling 4 red ping-pong balls and 1 white one from a box without looking in the box. In that case, it's statistically appropriate to infer that the box has mostly red balls. In keeping with this, when infants were then shown the contents of the box, they looked longer when the box contained mostly white balls than when the box contained mostly red balls. Xu and Denison (2009) then investigated whether the random sampling made a difference. At the beginning of the task, the experimenter showed a preference for selecting red balls over white ones. Then the experimenter selected balls from an opaque container as in the experiment reported earlier. But in this study, in one condition the experimenter was blindfolded while in the other she had visual access to the contents of the box. Xu and Denison found that children expected that the population would resemble the sample in the blindfolded condition more often than they did when the experimenter could see the balls she was choosing. Building on these findings, Kushnir and colleagues found that children use sampling considerations to draw inferences about preferences. When a puppet took 5 toy frogs from a population with few frogs, the children tended to think the puppet preferred toy frogs; but children tended not to make this inference when a puppet took 5 toy frogs from a population that consisted entirely of frogs (Kushnir et al., 2010).

In a rather different kind of case, Xu and Tenenbaum (2007) explain word learning in terms of statistical inference based on the "size principle." To get an intuitive sense of the principle, imagine your friend has two dice—a 4-sided die and an 8-sided one. He picks one at random, hides it from your view and rolls it 10 times. He reports the results: 3, 2, 2, 3, 1, 1, 1, 4, 3, 2. Which die do you think it is? Intuitively, it's probably the 4-sided die. But all of the evidence is consistent with it being the 8-sided die, so why is it more probable that it's the 4-sided one? Because otherwise it's a suspicious coincidence that all of the rolls were 4 or under. One way to think about this is that the 4-sided die hypothesis generates a proper subset of the predictions generated by the 8-sided die hypothesis. And the size principle states that when all of the evidence is consistent with the "smaller hypothesis," that hypothesis should be preferred.

Xu and Tenenbaum use the size principle to explain how the absence of evidence might play a role in word learning. When learning the word "dog," children need only a few positive examples in which different dogs are called "dog" to infer that the extension of the term is "dog" rather than "animal". Pointing to a poodle, a Labrador, and a Chihuahua suffices. You don't also need to point to a blue jay or a halibut and say, "That's not a dog." Xu and Tenenbaum explain this in terms of the size principle: the likelihood of getting those particular examples (a poodle, a Labrador, and a Chihuahua) is higher if the extension of the word is "dog" as compared with "animal".
Xu and Tenenbaum conducted word learning experiments to confirm that children and adults use the absence of evidence to infer word meanings. For example, participants were told "these are feps" while being shown three Dalmatians and no other dogs. In that case, people tended to think that fep refers to Dalmatians rather than dogs. The absence of other dogs in the sample provides evidence for the more restricted hypothesis that fep refers to Dalmatians.
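The arithmetic behind the dice example can be made explicit. The following is a reconstruction, not a calculation taken from Xu and Tenenbaum: it simply applies Bayes' rule to the two hypotheses, assuming (as the example implies) a uniform prior over the dice.

$$P(D \mid \text{4-sided}) = \left(\tfrac{1}{4}\right)^{10}, \qquad P(D \mid \text{8-sided}) = \left(\tfrac{1}{8}\right)^{10},$$

$$\frac{P(\text{4-sided} \mid D)}{P(\text{8-sided} \mid D)} = \frac{(1/4)^{10}}{(1/8)^{10}} = 2^{10} = 1024.$$

With equal priors the posterior odds just are the likelihood ratio, so the 4-sided die is over a thousand times more probable, and every additional roll of 4 or under doubles the odds again. That is the size principle in quantitative form: the hypothesis with the smaller extension assigns a higher probability to each data set it can generate at all.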
Statistical Learning and Rationality

The dominant paradigm for explaining these results is Bayesian learning (see, e.g., Perfors et al., 2011). The advocates of this view stress the rational nature of Bayesian inference. For example, Amy Perfors and colleagues write:

Bayesian probability theory is not simply a set of ad hoc rules useful for manipulating and evaluating statistical information: it is also the set of unique, consistent rules for conducting plausible inference. . . . Just as formal logic describes a deductively correct way of thinking, Bayesian probability theory describes an inductively correct way of thinking. (Perfors et al., 2011)

It's unclear whether these experiments actually support the view that children are engaged in a form of Bayesian updating (see, e.g., Nichols & Samuels, 2017). But there is little doubt that the inferences in the tasks reviewed above are plausible candidates for meeting familiar notions of global rationality. Critically, in these tasks, the child makes categorical inferences that are appropriate given the evidence available to the agent. For instance, in the ping-pong ball task, the infant is right to infer from a random sample of mostly red balls that the population is mostly red. All of the evidence she has supports this conclusion. In addition, these tasks take exactly zero training. The normatively appropriate pattern appears on the first (and only) trial.
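A toy calculation shows how strongly the evidence speaks in the ping-pong task. Suppose, purely for illustration (these proportions are not from the study), that the box is either mostly red (80% red) or mostly white (20% red), with equal priors, and that 4 red balls and 1 white ball are drawn at random:

$$\frac{P(\text{mostly red} \mid D)}{P(\text{mostly white} \mid D)} = \frac{(0.8)^4 (0.2)}{(0.2)^4 (0.8)} = 4^{3} = 64.$$

On these assumptions the mostly-red hypothesis is 64 times more probable given the sample. Note that the calculation is only licensed if the sample is random: if the experimenter deliberately selects red balls, the likelihoods no longer reflect the box's contents, which is precisely the sensitivity that the blindfold manipulation in Xu and Denison (2009) tests.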
Statistical Learning and Morality

As noted earlier, children seem to deploy subtle distinctions in the normative domain, such as the distinction that many rules apply to what an agent does but not to what that agent allows. But it's puzzling how children acquire these distinctions given that they don't receive any explicit instructions on the matter (Dwyer et al., 2010, 492). Even though children don't get explicit instruction on the doing/allowing distinction, it's possible that they can infer the distinction based on the kinds of examples of violations they witness. If all of the sanctioned violations are cases in which the agent intentionally produced the negative outcome, this might count as evidence that the operative rule does not forbid allowing these outcomes to persist. This is just the basic insight of the size principle (introduced above). In the case of the dice, the fact that none of the rolls was over 4 would be a suspicious coincidence if the die were the 8-sided one. Similarly, when none of the violations children have observed are "allowings," this would be a suspicious coincidence if the rule prohibited allowings. The absence of evidence is itself evidence.

So, what does the child's evidence look like? We coded a portion of the standard database for child-directed speech (CHILDES) and found that the vast majority (over 99%) of observed violations concerned intentional actions. For the vast majority of rules the child learns, there is a conspicuous lack of evidence in favor of the hypothesis that the rule applies both to acting and allowing. And this counts as evidence that the rules do not apply to allowings.
It is a further question whether people are sensitive to evidence about whether a rule applies to producing an outcome (acting) or to both producing and allowing an outcome. We investigated this using novel and unemotional rules. Participants were asked to infer the content of a novel rule based on sample violations. In one condition, they were given examples like the following: (i) John violated the rule by putting a truck on the shelf, (ii) Jill violated the same rule by putting a ball on the shelf, and (iii) Mike violated the rule by putting a doll on the shelf. Then they were asked to determine whether the rule applied to other cases. For instance, is Mary violating the rule when she sees a puzzle on the shelf and doesn't remove it? Note that sample violations (i)–(iii) above are all examples of a person intentionally producing the outcome, and when the rule is taught through such examples, participants tend to infer that Mary (who allowed the outcome) did not break the rule. However, when given two examples in which a person who allows the outcome to persist (e.g., leaving a jump rope on the shelf) is said to have broken the rule, participants overwhelmingly infer that Mary is breaking the rule by leaving a puzzle on the shelf (Nichols et al., forthcoming). This suggests that people are sensitive to evidence that bears on whether the rule applies only to what an agent does or also to what an agent allows.

The foregoing provides an account of how children might learn the act/allow distinction for rules. Moreover, the learning that would be involved here is plausibly rational in a robust sense. If the child is deciding whether a rule prohibits acting or both acting and allowing, and all of the evidence is consistent with the rule being an act-based rule, then using the size principle to infer that the rule is solely act-based is the correct procedure. A rational agent in such a situation should infer that the rule only prohibits acting in the way proscribed.

Although the learning here would be rational, that doesn't mean that the rule that is learned is itself rationally justified. We can still ask why parents accept and teach prohibitions on actions without accepting correlative prohibitions on allowances. The answer to that question might suggest that these kinds of rules are themselves rationally defective in some important way. But when we focus on the child trying to figure out these rules, whatever their status, given a body of evidence limited to condemnations of actions, she is right to infer that the rule prohibits acts alone.
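The same size-principle logic can be put in miniature for the act/allow case. The snippet below is an illustrative sketch, not the model from Nichols et al.: it assumes a uniform prior over just two candidate rules and assumes, arbitrarily, that under the broader rule half of any sampled violations would be allowings.

```python
# Illustrative only: the 50/50 likelihood under the broader rule and the
# uniform prior are assumptions, not measured or published values.

def posterior_act_only(n):
    """P(act-only rule | n sampled violations, every one an acting)."""
    like_act_only = 1.0 ** n       # act-only rule: violations can only be actings
    like_act_or_allow = 0.5 ** n   # broader rule: half would be allowings
    return like_act_only / (like_act_only + like_act_or_allow)

for n in (1, 3, 10):
    print(n, round(posterior_act_only(n), 3))
# 1 0.667
# 3 0.889
# 10 0.999
```

Each additional act-only violation halves the likelihood of the broader rule, so the "suspicious coincidence" compounds with the sample, just as each low roll compounds the evidence for the 4-sided die.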
Descriptive Adequacy

I've suggested that statistical learning might explain the acquisition of subtle features of rules. Two immediate qualifications apply. First, thus far there is no experimental work on children using statistical learning to infer moral rules. Second, even in adults, it's not clear exactly what the mechanism is for inferring the nature of the rules. In particular, it is not clear what algorithm(s) people are using when they judge in accordance with the size principle.

In addition to these unresolved empirical questions, there is a theoretical reason that the foregoing is not a complete theory of moral judgment. Even if rules play an essential role in moral judgment, they don't provide a full theory. At a minimum, values also play a vital role. This is evident from the fact that rules are overridden in all-things-considered judgment when adherence to the rule would cost something of sufficient value. For example, when asked about a version of the footbridge case in which someone pushes a stranger in
front of a trolley to save billions of lives, participants tend to say that (i) the person violated a moral rule and (ii) all things considered this was the right thing to do (Nichols & Mallon, 2006). Emotional responses to vignettes might play an independently significant role in generating all-things-considered moral judgment (see, e.g., Bartels & Pizarro, 2011). These emotions might derive from model-free reinforcement learning, emotional attunement, or something else. In any case, it seems that even if children learn rules through rational inference, this does not tell the whole story about their moral judgments.15
5. Conclusion

Work on moral learning is so new that it's difficult to be confident about any of the results and interpretations advanced to date. It's likely, though, that each type of learning explored here—model-free reinforcement learning, emotional attunement, and statistical inference—plays some role in the acquisition of the components of a mature adult's capacity for moral judgment. Insofar as emotional attunement provides us with a flexible means for updating our values and statistical learning provides us with a rational basis for learning moral rules, the epistemic standing of our moral judgments might not be as execrable as some have suggested.
Notes
1. See Chapters 4, 13, 18, and 19 of this volume for further discussion of the reliability of moral judgment and its relation to moral knowledge.
2. For further discussion of Greene's model, see Chapters 4 and 7 of this volume.
3. The model of moral judgment advanced by Haidt and colleagues is discussed in many of this volume's chapters, including 1, 2, 4, 5, 7, 8, 9, 16, and 18.
4. This question is also addressed in Chapters 7 and 8 of this volume.
5. This value will change with experience, so theorists assign numbers to quantify how much value the action has. Pushing the lever might start with a value of 1.2 and gradually grow to 1.8 after several experiences.
6. For further criticism of Greene's analysis of deontic intuitions see Chapters 4 and 7 of this volume.
7. See too the model of system 1 processing articulated in Chapter 7 of this volume and the heuristics articulated in Chapter 8.
8. Of course, this testimonial evidence ("The blue pill will make you ill") and the subsequent belief (the blue pill will make me ill) can itself contribute to later processing by the broad affective system. We might acquire an aversion to the blue pill. However, the key point is that the incorporation of testimony here looks very different from the kind of reinforcement learning found in the broad affective system. We move directly from testimony to belief in a kind of one-shot learning. This interpretation is bolstered by the fact that changing the words will change the effect of the testimony. Replace "blue" with "red," "pill" with "candy," or "ill" with "well" and the behavior shifts accordingly. This is naturally explained by the direct acquisition of the corresponding belief from the testimony.
9. There is obviously a question about why siblings tend to lack sexual interest in each other. One prominent explanation is that we have a mechanism with the evolved function of generating sexual disinterest between co-reared children (e.g., Lieberman et al., 2003). But that isn't the same question as why we condemn the act. For many actions that I find unappealing (e.g., eating Vegemite), I certainly don't morally condemn those who do it. It is true, of course, that incest prohibitions are culturally widespread, and it's possible that the prevalence of such norms depends on an evolved mechanism that makes sibling sex unappealing. For instance, we might expect norms that
prohibit unpleasant acts to have a cultural advantage over other norms (cf. Nichols, 2004). But even in this case, the norms are not the same as the affective response. See Chapter 9 of this volume for further discussion of the evolution of incest norms and intuitions of incest's immorality.
10. I'm indebted to Alisabeth Ayars for this point.
11. https://glowing.com/community/topic/72057594038730600/is-this-incest-or-just-weird
12. https://answers.yahoo.com/question/index?qid=20090109153158AAecIl6
13. https://answers.yahoo.com/question/index?qid=20111005141957AAzJozL
14. For more on the role testimony plays in the acquisition and extension of morality, see Chapters 23 and 25 of this volume.
15. See Chapter 5 of this volume for further discussion of moral development.
References
Ayars, A. (2016). "Can Model-Free Reinforcement Learning Explain Deontological Moral Judgments?" Cognition, 150, 232–242.
Bartels, D. M. and Pizarro, D. A. (2011). "The Mismeasure of Morals: Antisocial Personality Traits Predict Utilitarian Responses to Moral Dilemmas," Cognition, 121 (1), 154–161.
Crockett, M. J. (2013). "Models of Morality," Trends in Cognitive Sciences, 17 (8), 363–366.
Cushman, F. (2013). "Action, Outcome, and Value: A Dual-System Framework for Morality," Personality and Social Psychology Review, 17 (3), 273–292.
Cushman, F., Gray, K., Gaffey, A. and Mendes, W. B. (2012). "Simulating Murder: The Aversion to Harmful Action," Emotion, 12 (1), 2.
Cushman, F., Young, L. and Hauser, M. (2006). "The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm," Psychological Science, 17 (12), 1082–1089.
Dwyer, S., Huebner, B. and Hauser, M. D. (2010). "The Linguistic Analogy: Motivations, Results, and Speculations," Topics in Cognitive Science, 2 (3), 486–510.
Greene, J. D. (2008). "The Secret Joke of Kant's Soul," in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3. Cambridge, MA: MIT Press.
Greene, J. T. (1969). "Altruistic Behavior in the Albino Rat," Psychonomic Science, 14 (1), 47–48.
Haidt, J. (2001). "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment," Psychological Review, 108 (4), 814.
Hopkins, K. (1980). "Brother-Sister Marriage in Roman Egypt," Comparative Studies in Society and History, 22 (3), 303–354.
Kahneman, D. and Tversky, A. (1973). "On the Psychology of Prediction," Psychological Review, 80, 237–251.
Kushnir, T., Xu, F. and Wellman, H. M. (2010). "Young Children Use Statistical Sampling to Infer the Preferences of Other People," Psychological Science, 21 (8), 1134–1140.
Lieberman, D., Tooby, J. and Cosmides, L. (2003). "Does Morality Have a Biological Basis? An Empirical Test of the Factors Governing Moral Sentiments Relating to Incest," Proceedings of the Royal Society of London B: Biological Sciences, 270 (1517), 819–826.
Masserman, J. H., Wechkin, S. and Terris, W. (1964). "Altruistic Behavior in Rhesus Monkeys," The American Journal of Psychiatry, 121 (6), 584–585.
Mikhail, J. (2011). Elements of Moral Cognition: Rawls' Linguistic Analogy and the Cognitive Science of Moral and Legal Judgment. Cambridge: Cambridge University Press.
Nichols, S. (2004). Sentimental Rules: On the Natural Foundations of Moral Judgment. Oxford: Oxford University Press.
Nichols, S., Kumar, S., Lopez, S., Ayars, A. and Chan, H. (2016). "Rational Learners and Moral Rules," Mind & Language, 31 (5), 530–554.
Nichols, S. and Mallon, R. (2006). "Moral Dilemmas and Moral Rules," Cognition, 100 (3), 530–542.
Nichols, S. and Samuels, R. (2017). "Bayesian Psychology and Human Rationality," in T. Hung and T. Lane (eds.), Rationality: Constraints and Contexts. London, UK: Elsevier.
Pedroso, F. S. (2012). "The Diving Reflex in Healthy Infants in the First Year of Life," Journal of Child Neurology, 27 (2), 168–171.
Perfors, A., Tenenbaum, J. B., Griffiths, T. L. and Xu, F. (2011). "A Tutorial Introduction to Bayesian Models of Cognitive Development," Cognition, 120 (3), 302–321.
Railton, P. (2014). "The Affective Dog and Its Rational Tale: Intuition and Attunement," Ethics, 124 (4), 813–859.
Xu, F. and Denison, S. (2009). "Statistical Inference and Sensitivity to Sampling in 11-Month-Old Infants," Cognition, 112 (1), 97–104.
Xu, F. and Garcia, V. (2008). "Intuitive Statistics by 8-Month-Old Infants," Proceedings of the National Academy of Sciences, 105 (13), 5012–5015.
Xu, F. and Tenenbaum, J. B. (2007). "Word Learning as Bayesian Inference," Psychological Review, 114 (2), 245.
Acknowledgments
I am grateful to Victor Kumar, Rachana Kamtekar, Jonathan Weinberg, Aaron Zimmerman, and especially Alisabeth Ayars for comments on an earlier version of this chapter. Research for this paper was supported by Office of Naval Research grant #11492159. I would also like to thank the John Templeton Foundation for supporting this project. Opinions expressed here are those of the author and do not necessarily reflect those of the Templeton Foundation.
Further Readings
For a general introduction to moral learning, see F. Cushman, V. Kumar and P. Railton, "Moral Learning: Psychological and Philosophical Perspectives," Cognition, 167, 1, 2017. A technical introduction to reinforcement learning can be found in R. Sutton and A. Barto, Reinforcement Learning: An Introduction (Cambridge, MA: MIT Press, 1998). Perfors et al. (2011) provides an accessible introduction to Bayesian learning.
Related Chapters
Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies? Chapter 5 Moral Development in Humans; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 17 Moral Perception; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 22 Moral Knowledge as Know-How; Chapter 27 Teaching Virtue; Chapter 28 Decision Making Under Moral Uncertainty.
7
MORAL REASONING AND EMOTION
Joshua May and Victor Kumar
1. Introduction

Unless presently in a coma, you cannot avoid witnessing injustice.1 You will find yourself judging that a citizen or a police officer has acted wrongly by killing someone, that a politician is corrupt, that a social institution is discriminatory. In all these cases, you are making a moral judgment. But what is it that drives your judgment? Have you reasoned your way to the conclusion that something is morally wrong? Or have you reached a verdict because you feel indignation or outrage?

Rationalists in moral philosophy hold that moral judgment can be based on reasoning alone. Kant argued that one can arrive at a moral belief by reasoning from principles articulating one's duties. Sentimentalists hold instead that emotion is essential to distinctively moral judgment. Hume, Smith, and their British contemporaries argued that one cannot arrive at a moral belief without experiencing appropriate feelings at some point—e.g., by feeling compassion toward victims or anger toward perpetrators. While many theorists agree that both reason and emotion play a role in ordinary moral cognition, the dispute is ultimately about which process is most central.

Controversies about the comparative roles played by reasoning and emotion in moral judgment have important implications for the nature of moral knowledge. Some sentimentalists suggest that there are no moral facts to be known, for ethics is ultimately a matter of merely having or expressing one's feelings.2 Even if rationalism isn't the only way to support a more objectivist conception of morality, it may allow more room for knowing right from wrong. After all, if moral judgment is fundamentally a rational enterprise, then we can ultimately form moral beliefs that are based on good reasons. However, sentimentalists might argue that they also make moral knowledge possible, perhaps even easier to attain: you just need to have the right emotional responses. Ethical knowledge may just resemble perception more than mathematical reasoning.3

Such theoretical disputes also impact several practical questions about how to resolve moral disagreements, possess ethical wisdom, or even build a robot capable of making genuine moral judgments. Do the morally inept just need to be more compassionate or angry? Or do we become ethically wise by reasoning better,
by freeing ourselves of cognitive biases that make us overconfident, inconsistent, wishful thinkers who neglect base rates?4

The debate between rationalists and sentimentalists is partly an empirical one. Philosophers in centuries past addressed the debate by drawing on the understanding of the mind available at that time. However, there is now a large and growing body of rigorous empirical research in cognitive science that promises to shed new light on the psychological and neurological mechanisms underlying moral judgment.5

To interpret empirical research on moral judgment, we need to adopt at least a provisional distinction between reasoning and emotion. It's tempting to assume that reasoning is always conscious or that reasoning is the only psychological process that involves information processing. Neither of these assumptions is useful. Quite a bit of reasoning is unconscious, and many if not all emotions involve information processing. The assumption that reasoning is always conscious stacks the deck in favor of sentimentalism, while the assumption that only reasoning involves information processing stacks it in favor of rationalism. We will attempt to contrast reason and emotion in a more neutral way: emotional processing involves states that are inherently affective or valenced; reasoning is an inferential process that forms new beliefs on the basis of existing ones. While reason and emotion may be distinct, they can be intimately bound together, as when feelings facilitate inference. Accordingly, as we'll see, it's difficult to empirically pry apart reason and emotion to determine which is more fundamental or essential to moral cognition.

In this chapter, we review the state of play in contemporary research on moral judgment in cognitive science, formulate hypotheses about where this research is heading, and unpack its philosophical significance. We canvass evidence that moral judgment is influenced by both reasoning and emotion separately, but we also examine emerging evidence of the interaction between the two. Throughout the chapter, we discuss the implications of this body of evidence for the rationalism-sentimentalism debate and ultimately conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.
2. Reasoning

The paradigm of reasoning is conscious and deliberate (Haidt, 2001; Paxton et al., 2011). You calculate your portion of the dinner bill by applying the principles of arithmetic; you lie awake at night deliberating about how to pitch the proposal in the board meeting; you stare at the gizmo and run through all the possibilities of what might be causing it to malfunction. In such cases, you're consciously aware of your reasoning and of at least some of the steps and principles applied.

Reasoning can most certainly be unconscious as well. Suppose ten days ago you resolved to avoid Mr. Williams, and upon seeing him today in the lobby you immediately jump behind a wall to avoid being detected. Or suppose you're watching a film, apparently passively, and the first fifteen minutes include no dialog but merely ambiguous social interactions and facial expressions; you find yourself suspecting there is a checkered romantic history between the two characters. These examples require inference, but the reasoning is automatic and largely unconscious.
Like other forms of cognition, moral judgment seems to involve both implicit and explicit reasoning. This is likely a consequence of our dual-process minds. We have two general systems for information processing: one is faster, more automatic, and perhaps based on fairly rigid heuristics; the other is slower, more controlled, and less constrained by heuristics (Kahneman, 2011; cf. Railton, 2014; Kumar, 2017b). Automatic processing is more efficient, since it’s fast and uses fewer resources, but controlled processing is more flexible (Greene, 2014). Our minds presumably evolved this dual-process architecture in order to allow for both advantages. However, a dual-processing architecture may not be required to capture the distinction between conscious and unconscious moral reasoning (Kennett & Fine, 2009; Mikhail, 2011).6
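To make the efficiency-flexibility trade-off concrete, here is a toy sketch of how such an arbitration between the two systems might be programmed. It is purely illustrative, not a model drawn from the literature cited here, and the cue name and time threshold in it are hypothetical.

```python
# Toy dual-process arbitration (illustrative only; all names hypothetical).
# The automatic system answers quickly from rigid heuristics; the controlled
# system deliberates when time permits, trading efficiency for flexibility.

def automatic_verdict(case, heuristics):
    """Fast and frugal: return the verdict of the first heuristic that fires."""
    for applies, verdict in heuristics:
        if applies(case):
            return verdict
    return None  # no heuristic matched; the automatic system stays silent

def moral_judgment(case, heuristics, deliberate, time_budget):
    """Use the cheap automatic answer under time pressure; otherwise fall
    back on slower but more flexible controlled processing."""
    quick = automatic_verdict(case, heuristics)
    if quick is not None and time_budget < 1.0:  # seconds; arbitrary cutoff
        return quick
    return deliberate(case)

# Hypothetical usage: under time pressure the automatic verdict wins.
heuristics = [(lambda c: c.get("personal_harm"), "impermissible")]
print(moral_judgment({"personal_harm": True}, heuristics,
                     deliberate=lambda c: "permissible", time_budget=0.5))
# -> impermissible
```

Nothing hangs on the details; the sketch only fixes the shape of the claim that automatic processing is cheap but rigid while controlled processing is costly but flexible.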
3. Conscious Reasoning

We now have ample evidence that moral cognition likewise exhibits a duality in the types of cognitive processes that generate it. The most prominent tool for empirically probing moral judgment involves sacrificial dilemmas in which one is effectively asked whether the needs of the many outweigh those of the few. In the famous trolley scenarios, one is typically supposed to imagine that an individual has the opportunity to save five innocent people from impending death, but at the cost of the death of one innocent person. The protagonist can either let the five die or sacrifice the one. Participants are asked to report which choice they believe is morally appropriate.

A growing body of evidence suggests that as various factors change, so do our modes of moral thinking. Consider the contrast between two famous trolley cases. In Switch, an individual in the vignette can throw a switch to divert a train from killing five innocent people, but then it will kill one innocent person on a side track. The vast majority of people from various countries and cultures believe it's morally acceptable to sacrifice the one and save the five (Hauser et al., 2007; Mikhail, 2011). However, most people do not believe it's morally permissible to sacrifice the one if he must be pushed in the way of the train in order to stop it, as in the Footbridge case.

Neuroimaging indicates that distinct areas of the brain are more active when evaluating such personal versus impersonal dilemmas (Greene, 2014). In impersonal dilemmas like Switch, where we simply weigh up the outcomes (one death is better than five), there's greater activation of brain areas generally involved in calculative reasoning and working memory, especially the dorsolateral prefrontal cortex (dlPFC). This appears to be moral thinking that's more controlled than automatic. When evaluating personal dilemmas, such as Footbridge, we consider that the protagonist must get more physically involved in bringing about one death to prevent five. This involves greater activity in brain areas generally associated with more automatic and emotional processing, particularly the ventromedial prefrontal cortex (vmPFC).

Now, it's quite controversial what kinds of moral intuitions are being measured here. Greene argues that the controlled processes generally lead to characteristically utilitarian moral intuitions concerned with promoting the greater good, while the automatic processes lead to nonutilitarian (e.g., deontological) intuitions concerned with avoiding up-close and personal harm. Other researchers argue that the dilemmas really contrast reasoning about intuitive versus counterintuitive moral dilemmas (Kahane et al., 2012) and that the "utilitarian" choice does not reflect impartial concern for maximizing welfare (Kahane et al., 2015). What's clear, though, is that there is some neurophysiological evidence for the independently plausible claim that the cognitive processes in moral cognition can be either automatic or controlled.7

Other research might seem to suggest that conscious moral reasoning is causally impotent. Moral judgments may be driven primarily by automatic intuitions, while deliberate reasoning is merely post hoc rationalization, used to justify what one already believes on intuitive grounds (Haidt, 2001). Consider, for example, evaluating a so-called harmless taboo violation such as eating an already dead dog, cleaning a toilet with your national flag, or passionately kissing your sibling. When participants are presented with brief descriptions of such scenarios, they often think the acts are immoral but struggle to explain why—they're in a state of moral dumbfounding (Haidt et al., 1993). As Haidt puts it, one is like a "lawyer trying to build a case rather than a judge searching for the truth" (2001, 814).

There are several reasons why moral dumbfounding does not undermine the existence of moral reasoning. First, one's inability to articulate a justification for one's moral belief doesn't rule out that one arrived at it by unconscious reasoning or computation (Dwyer, 2009). Further research does suggest that taboo violations are seen as causing harm, including long-term psychological damage (Royzman et al., 2009; Royzman et al., 2015). Of course you will be dumbfounded when an experimenter challenges your quite reasonable assumption that there is harm in eating a deceased pet; ordinarily, this act would bring pet lovers a great deal of psychological harm or distress.

There are also a number of ways in which consciously controlled reasoning can influence moral judgment. Automatic and implicit racial biases, for example, can be deliberately suppressed or even eliminated (see discussion in Kennett & Fine, 2009). When making snap judgments, many of us are more prone to misidentify a tool as a gun when it is held by a black man rather than a white man. However, there is some evidence that one can become less susceptible to such biases by consciously thinking "Whenever I see a Black face on the screen, I will think the word, safe" (Stewart & Payne, 2008, 1336). While the relevant studies on implicit bias don't clearly measure moral judgments, other experiments do. For example, more reflective people provide more "utilitarian" responses to sacrificial moral dilemmas, and judgments are further modulated by the strength of an argument and the length of time allowed for deliberation (e.g. Paxton et al., 2011; for discussion of further studies, see Greene, 2014, 704 and Chapter 16 of this volume).
4. Unconscious Reasoning

Moral psychology used to emphasize the kind of conscious reasoning that we articulate when justifying a moral judgment (Kohlberg, 1973). The focus of research in this area has rightly widened in response to evidence that a surprising amount of moral thinking is automatic, implicit, and unconscious. Instead of asking participants to justify moral beliefs about a particular case, researchers now focus on patterns of intuitive responses to hypothetical moral dilemmas in order to reveal the structure of moral cognition. However, contrary to some accounts (e.g. Haidt, 2001), these unconscious processes can be quite sophisticated, and this may provide good grounds on which to count them as a form of reasoning (Horgan & Timmons, 2007; Mallon & Nichols, 2010; Mikhail, 2011).8
One important element to survive the shift in focus is the historically (and legally) foundational contrast between an intentional action and an accident. For example, we are more inclined to condemn poisoning a coworker by mixing a deadly white substance into his coffee when it's done purposefully rather than accidentally. Moreover, the consequences are not entirely paramount here, even when evaluating the morality of the individual's action and not her motives or character. For example, many people think the protagonist's action is wrong so long as she intended harm, even if she didn't in fact inflict any because what she took to be arsenic was only sugar (Young et al., 2007). It seems "no harm, no foul" applies only when harm fails to result from an accidental risk of damage. Studies using brain imaging and stimulation suggest that these relatively automatic judgments employ brain areas associated with perceiving others' mental states, particularly the right temporoparietal junction (Young et al., 2007; Young et al., 2010). Indeed, some researchers claim that this capacity for "mind perception" is essential to all moral judgment (Gray et al., 2012).

There are certainly other computational elements of moral cognition as well. In one experiment, participants judged a battery of moral dilemmas that varied along three dimensions: whether an outcome arose from an action or an omission, from an intention or a side effect, and through bodily contact or at a distance (Cushman et al., 2006). Each factor tended to affect people's moral judgments, but participants weren't always able to consciously identify them. Further studies and meta-analyses confirm that a variety of moral responses are reliably influenced, even if only slightly, by these factors across various cultures (see e.g. Hauser et al., 2007; May, 2014b; Feltz & May, 2017).

Even if we can sometimes later identify the relevant factors affecting our earlier moral judgments, plausibly their initial influence on moral judgment is largely unconscious. Of course, if a moral dilemma is particularly complex or one's consideration of it drawn out, one may be consciously aware of some inferential steps leading to the moral judgment. But ordinary experience and rigorous empirical evidence suggest that some moral beliefs are generated by rapid inference even about relatively simple scenarios. Some theorists suggest moral cognition in many ways resembles knowledge of a language in being underwritten by largely automatic, yet complex, computations in the brain (Dwyer, 2009; Mikhail, 2011). Studies in cognitive neuroscience are beginning to identify the brain regions involved in moral competence, which include neural structures independently correlated with our efforts to rapidly discern the intentions of others and the causal effects of actions (Young et al., 2007; Crockett, 2013; Yoder & Decety, 2014).9

Evidence for relatively unconscious moral reasoning isn't limited to unrealistic trolley scenarios. Some of the studies already cited measure participants' reactions to more ordinary situations, such as drowning and overdosing (e.g. Cushman et al., 2006). Studies of rationalization and motivated reasoning also reveal rapid and unconscious moral reasoning. For example, in a series of clever experiments (e.g. Batson et al., 1999), participants had to assign a rewarding task or benefit to either themselves or a stranger.
If given the choice in public, many participants opted for a fair procedure, such as flipping a coin, to determine who would receive the benefit. However, when allowed to employ this procedure in private, participants overwhelmingly assigned themselves the benefit, at rates much greater than chance (around 90%). Clearly there was some fiddling of the flip or fudging of the results.
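Just how far 90% departs from a fair coin is easy to check. The calculation below uses a hypothetical sample size, since the Ns in the relevant studies vary; the point is only the order of magnitude.

```python
# Back-of-the-envelope check (hypothetical n; the actual sample sizes in
# the Batson et al. studies vary). If the flips were honored, the number
# of self-assignments should follow Binomial(n, 0.5).
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 20, 18  # e.g., 90% of 20 private coin-flippers favoring themselves
print(f"{binom_tail(n, k):.6f}")  # 0.000201 -- wildly improbable by chance
```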
This is evidence of a motivation to merely appear moral without bearing the costs of actually being moral—a kind of egoistic "moral hypocrisy" that many people fail to recognize in themselves. More interesting for our purposes is that those who flipped the coin, yet disregarded its dictates, appeared to rationalize their behavior: they rated their assignment as significantly more moral than did those who simply chose not to flip and assigned themselves the benefit outright. Albeit self-deceived and relatively unconscious, this is reasoning ("motivated reasoning") that leads to a change in moral belief (see May, 2018).

Such studies reveal that even if rationalism is true, moral judgment isn't necessarily "rational" (cf. Greene, 2008). Moral reasoning is susceptible to biases that can debunk the judgments it produces, at least once we recognize the influence of these biases (see Kumar & May, forthcoming).

Of course, sentimentalists might attempt to ground unconscious moral cognition in emotional processing. Perhaps, for example, reasoning about a person's motives depends ultimately on the more emotional aspects of sympathy or empathy. There is at best a fine line between affectively valenced information processing and unconscious reasoning about motives and consequences. Extant evidence may not be sufficiently fine-grained to favor one of these accounts.
5. Emotion

According to Hume, Smith, and their philosophical heirs, empathy is an essential foundation of moral judgment. Reasoning may be required to discern the consequences of an act, but one can't make a distinctively moral judgment if one is incapable of feeling some semblance of the pain or pleasure caused by the act. In this context, "empathy" denotes the emotional capacity to imaginatively project oneself into another person's situation and feel what they feel. It is because you feel bad when you imagine the pain of a victim that you are led to condemn the person who harmed her.

According to some sentimentalists, however, emotional capacities other than empathy are also implicated in moral thought (Nichols, 2004; Prinz, 2007; Kauppinen, 2013). We feel anger toward those who violate a moral norm, even if we do not empathize with the victim, even when the crime is victimless. We feel contempt toward those who presume an illegitimate air of authority. We feel disgust toward people who commit crimes against "nature" but also toward dishonest lawyers and hypocritical politicians.

Moral judgment is, of course, partly evaluative rather than entirely descriptive. And many philosophers and psychologists have long held that human values are in some sense "grounded in" emotion. Anti-realists in ethics hold that emotions are used to paint the world with value, much as one's visual experiences are used to paint the world with color.10 Psychologists since the 1980s have thought that emotion is "the currency of value" in the brain, the vehicle it employs to represent what's good or bad. So there is good reason to explore whether there is any empirical evidence for the idea that emotions play a distinctive role in moral judgment. We will begin by examining relatively indirect evidence and then turn to more direct evidence for this sentimentalist idea.
6. Indirect Evidence of Emotion's Role in Moral Judgment

The idea that emotions are implicated in moral judgment is reinforced by evolutionary and developmental considerations. Arguably, the evolutionary roots of morality lie in emotion (Kumar and Campbell, 2018). For example, we feel sympathy or concern for many of our fellow humans, and this is what underlies values that enjoin care and discourage harm.

Individuals who lack properly functioning emotions are said to exhibit disorders in moral agency. Psychopathy is a developmental disorder characterized in part by diminished sympathy and guilt and by dysfunction in the vmPFC and amygdala. Besides being cruel and callous, psychopaths appear to show deficits in moral judgment. In some studies, psychopaths are unable to reliably distinguish between violations of moral norms and violations of mere conventions (Blair, 1995, 1997; Blair et al., 1995; cf. Aharoni et al., 2012).

However, the roots of psychopathy lie in childhood, which means that emotional deficits may disrupt only the development, not the deployment, of moral judgment (see Kumar, 2016a). So, even if we accept a developmental link between emotion and valuation, rationalists might still deny that emotions play a role in mature moral judgment. What non-psychopaths retain may simply be a capacity for moral reasoning that is shaped in development by emotion. In support of this idea, patients who suffer damage to the vmPFC in adulthood engage in anti-social behavior that is in some ways similar to psychopaths' behavior, but they do not exhibit the same apparent deficits in moral judgment (Saver & Damasio, 1991; Roskies, 2003; Taber-Thomas et al., 2014). Emotions, then, may not be as essential to the proximate causes of moral judgment as experiences of color are to color judgments (May, 2018).

To get clear on whether emotion plays a more integral role, we must turn to experimental evidence that targets occurrent moral cognition. As noted earlier, some research suggests that conscious reasoning exerts little influence on moral judgment. People can readily offer reasons for their judgments, but in some cases the judgment is automatic and the reasons are only articulated after the fact. It's worth repeating that this does not show that reasoning is inert, since automatic moral judgments may be based on unconscious reasoning, perhaps from the very factors one uses to defend the judgments when pressed after the fact. However, in some cases it may be that emotions generate automatic moral judgments that aren't (also) grounded in reasoning.

In Haidt's studies of moral dumbfounding, some participants condemn actions that are apparently harmless but disgusting or emotionally evocative in other ways. Yet they maintain these judgments even when their verbal grounds for condemnation are impugned by the facts of the case as it's described. It is generally thought that affective states are a common source of automatic heuristics that generate an immediate and intuitive judgment in a number of domains, not only in ethics (Damasio, 1994; Greene, 2014; Railton, 2014). Thus it may be emotion more than unconscious reasoning that leads participants to judge that it is wrong to have sex with one's sibling, to eat the family pet after it has been killed, or to clean the toilet with the country's flag.

Some moral violations seem emotionally evocative, but are there any more decisive reasons to believe that emotions are at play in moral evaluation? Neurophysiological evidence promises to shed light on this issue, since neuroscientists have identified several areas of the brain that have been independently correlated with the expression and experience of emotion.
When participants are presented with sacrificial moral dilemmas that are personal rather than impersonal, neuroimaging shows activity in the vmPFC, along with evolutionarily older areas of the brain underlying emotion, like the amygdala and the insular cortex.

Still, this neuroscientific evidence doesn't guarantee that emotions are a cause of moral judgments. Even if we can reasonably draw "reverse inferences" from brain activation to mental process, it may be that emotions are not "upstream" of moral judgment. There are two other live possibilities. First, it may be that conscious or unconscious reasoning generates moral judgments and that these judgments then produce emotional responses "downstream." Second, it may be that some morally salient events elicit emotions, but that this emotional process runs in parallel and does not influence moral judgment (the two are elements of distinct "tributaries"). More direct experimental evidence is needed to distinguish these hypotheses. This evidence promises to reveal whether or not the automatic, unconscious information processing that underlies moral judgment is affectively valenced.11
7. Direct Evidence of Emotion's Role in Moral Judgment

Manipulation studies provide an attractive methodology for studying whether emotions are upstream of moral judgment, not just implicated in or correlated with moral development. In these studies, psychologists directly or indirectly induce emotion in participants and explore whether this changes their moral judgments relative to control groups whose emotional states have not been manipulated.

So far, manipulation studies have made most use of disgust, in part because it is often easier and ethically more acceptable to induce disgust in participants (rather than, say, anger). A range of studies seems to suggest that participants tend to make more severe negative moral judgments when they are asked to sit at a messy desk, exposed to a foul odor, or presented with a disgusting short film (Schnall et al., 2008), when they are primed to feel disgust by being given the opportunity to use hand soap (Helzer & Pizarro, 2011), and when they are hypnotized to feel disgust at the mention of certain neutral words (Wheatley & Haidt, 2005). A few manipulation studies have also induced anger by asking participants to listen to unpleasant music (Seidel & Prinz, 2013a, 2013b). These studies find, too, that inducing negative emotional states makes moral judgments slightly more severe.

However, some have recently argued that the effect of incidental disgust on moral judgments is slight, inconsistent, and prone to replication failure (e.g. Mallon & Nichols, 2010; May, 2014a; Huebner, 2015). A recent meta-analysis of manipulation studies employing disgust confirms this interpretation (Landy & Goodwin, 2015). These criticisms suggest that ordinary people are not as prone to simple emotional manipulation as researchers initially suggested. If someone makes you feel disgust by exposing you to a foul odor, you are not likely to judge that some entirely unrelated activity is morally more objectionable than you otherwise would have thought. This should be reassuring. Whether reasoning or emotion underlies moral judgment, the evidence does not support the cynical view that ordinary moral thought is suffused with irrationality (May, 2018).

Thus far, incidental emotions do not seem to exert a significant influence on moral judgment. However, there is better evidence that integral emotions are central to moral thought, where "integral" emotions are those elicited by the object of moral evaluation itself, rather than by incidental conditions like foul odors. We will focus on disgust, since it has been the subject of the manipulation studies reviewed here and since many philosophers are skeptical about disgust's value in moral thought (see, e.g., Nussbaum, 2004; Kelly, 2011).

One category that reliably elicits disgust involves so-called "purity" violations. For example, conservatives are especially likely to experience disgust in response to what they regard as sexually deviant behavior (Inbar et al., 2009). But liberals also regard some issues as matters of purity. Moral vegetarians are likely to experience disgust at the sight and smell of meat (Rozin et al., 1997). In these cases, however, one might wonder whether disgust remains incidental (Inbar & Pizarro, 2014). It's possible that the object of evaluation merely happens to be disgusting. Perhaps the properties that make killing and eating sentient creatures morally wrong are distinct from the properties that make it disgusting.

However, another body of evidence suggests that people respond with disgust to moral violations that are not otherwise disgusting (see Chapman and Anderson, 2013 for review). Participants are asked to recall a recent disgusting event and also to report which emotions they feel toward various moral violations. In the bulk of these studies, participants associate disgust primarily not with bodily fluids or revolting food but with moral violations, such as lying or taking advantage of an innocent person. The moral violations are generally instances of cheating, dishonesty, and exploitation (Kumar, 2017a).12

This evidence suggests that integral emotions elicited by those properties of the object of evaluation that explain its wrongness may play a role in moral judgment. You are disgusted by the hypocrisy of a politician, and this is why you judge him negatively. However, while the evidence that moral violations elicit disgust is robust, the evidence that disgust produces moral judgment is not decisive. One reason is that the foregoing interpretation turns on a philosophical rather than a scientific question about the nature of moral judgment. Are emotions concomitants of moral judgment? Or are they constitutive of it? Philosophers have recently argued that moral judgments consist in an emotional state, either wholly (Prinz, 2007) or in part (Kumar, 2016b). Empirical evaluation of sentimentalism turns in part on evaluating these philosophical theories about the nature of moral judgment.

It's commonly believed that sentimentalism enjoys a great deal of support from recent empirical research. However, while there is suggestive indirect evidence for sentimentalism, direct evidence is much harder to find and turns on other, controversial positions in ethical theory. Still, the balance of evidence suggests that both reasoning and emotions are implicated in moral judgment. Philosophers who wish to square their theories of moral thought with empirical evidence should therefore avoid extreme versions of rationalism and sentimentalism (see, e.g., Nichols, 2004; Craigie, 2011; Huebner, 2015; Kumar, 2016b). The result might resemble theories from the ancients, such as that of the Confucian philosopher Mencius (Morrow, 2009). There remains, however, the important project of discerning the relative roles played by emotion and reasoning in the development and deployment of moral cognition. Central to this project is understanding how reasoning and emotion interact in moral judgment.
8. Emotions Affect Reasoning

We've seen that reasoning, whether implicit or explicit, can directly affect moral cognition, apparently without being mediated by emotion. We've also encountered some evidence for the reverse: that emotion can affect moral judgment independently of reasoning. We'll now examine evidence that reasoning and emotion can interact to generate moral judgment.

Some of the best evidence that emotions affect reasoning, even outside the domain of ethics, is the unfortunate phenomenon of motivated reasoning. Wishful thinking is a familiar example. A devoted fan of a sports team with a long track record of losses reasons that the team will win this game because he wants it so badly to be true. A lover reasons that her partner hasn't cheated on her, despite compelling evidence to the contrary, because she can't bear to accept the truth. A long-term cigarette smoker discounts the evidence that smoking tobacco causes cancer because she'd otherwise have to give up her beloved habit.

Of course, sometimes reasoning is guided by a rather noble goal: to be accurate. Triggering accuracy motives typically improves one's reasoning, increasing the cognitive effort one puts into it, prompting the selection of better strategies, and reducing cognitive biases. Overwhelming evidence, though, indicates that humans are often prone to having their reasoning guided by a variety of desires, motives, and goals (Kunda, 1990), and sometimes emotions either constitute such motives or are intimately bound up with them.

While many experiments in the motivated reasoning literature do not specifically concern moral reasoning, some do. We already saw (in §4) that studies on moral hypocrisy suggest that when we take the self-interested—rather than fair—option, we often rationalize the behavior as somehow justifiable (e.g. Batson et al., 1999). This suggests that egoistic motives can guide one's moral reasoning so that the self-serving behavior doesn't seem so morally problematic.

When engaged in motivated reasoning, we don't necessarily believe whatever we want to believe. Rather, passions guide our reasoning in subtle ways toward the desired conclusion. Kunda writes that the evidence suggests that "people motivated to arrive at a particular conclusion attempt to be rational and to construct a justification of their desired conclusion that would persuade a dispassionate observer" (1990, 482–483). So when one's reasoning is affected by a goal, this appears to be tempered by independent reasoning that isn't motivated by that goal.

Moreover, emotions often seem to guide reasoning in less problematic ways. We already saw this (in §6) when discussing damage to the vmPFC in adulthood. When these patients develop so-called "acquired sociopathy," their gut reactions to the pros and cons of possible choices are impaired, especially regarding choices with social or moral import. Because they lack the appropriate "somatic markers," their decision making suffers, often leading to irrational, even immoral, decisions (Damasio, 1994). As we saw, it's unclear whether such decision-making deficits are driven by impaired moral judgment (Roskies, 2003). But many theorists draw on studies of those with vmPFC damage to argue for the general importance of emotions to prospective reasoning in domains that have little to do with morality.

So emotion seems to affect reasoning in both positive and negative ways. Damasio's work on somatic markers suggests that emotions aid reasoning, while some kinds of motivated reasoning suggest that emotions corrupt it. At the very least, emotions seem to affect one's moral reasoning by directing one's attention toward or away from certain forms of evidence. Future work may clarify the ways in which emotion negatively influences moral reasoning and the ways in which it positively influences it.
9. Reasoning Affects Emotions

While emotions can guide, and even corrupt, one's reasoning, sometimes the tables are turned: reasoning can directly alter emotional responses. In fact, we already saw evidence of this (in §3) with experiments suggesting that we can consciously overcome implicit biases or automatic responses to moral dilemmas, both of which arguably have affective components.
In addition, conscious reasoning can sometimes generate automatic responses (Saltzstein & Kasachkoff, 2004; Craigie, 2011). Disgust or outrage, for example, might be a consequence of reasoning to the conclusion that Colonel Mustard wrongfully killed innocent Professor Plum with an axe in the conservatory. Or consider learning a new skill, such as driving a car (Kennett & Fine, 2009, 85), which takes conscious, deliberate effort but is eventually automated and habitual. Though reasoning is a cognitive skill, not a motor ability, it is no different in this respect. It took effort to learn how to add and subtract numbers, but now you can do it without effort. Moral reasoning follows the same pattern. One can automatically judge an instance of stealing, embezzlement, sexual assault, or incest to be immoral, but some prior reasoning, whether conscious or unconscious, could have generated the judgment, which is now automatically deployed by searching one's bank of moral norms (compare Nichols, 2004). Moreover, we deliberately monitor our automatic responses and attempt to adjust them to best meet our needs. This might be fruitfully analyzed as a kind of education or attunement (Railton, 2014; Sauer, 2017).

Of course, these automatic moral responses might not always be emotional, but studies on moralization suggest that reasoning can directly affect emotions in particular. Consider vegetarians who eliminate meat from their diets for primarily moral reasons, not because they believe a meatless diet is healthier. Empirical evidence suggests that these "moral vegetarians" become disgusted by eating meat (Rozin et al., 1997), and not because they are antecedently prone to feeling disgust (Fessler et al., 2003). Rather, it seems their visceral responses changed in light of their reasoned moral belief.

Similarly, after decades of unearthing the power of the tobacco industry and the detrimental health effects of smoking, people in the United States and elsewhere now moralize this habit and business enterprise. They not only judge smoking unhealthy but condemn those who smoke or sell tobacco as morally dubious. Accordingly, Americans are now more disgusted by the habit than they were decades ago, and moral condemnation of smoking is more highly correlated with disgust toward it than with the belief that the practice is unhealthy (Rozin & Singh, 1999). Again we have a case in which it seems that reasoning generated a moral belief that in turn changed a range of emotional responses. Of course, belief in the depravity of smoking or selling tobacco may owe some of its character or extent to emotions, such as anger toward those in the tobacco industry who tried to hide the link between smoking and cancer for financial reasons. But the change couldn't have been due to disgust alone, since that emotional response became prominent only after moral condemnation arose from the spread of this information.
10. The Interplay of Reasoning and Emotion

We suspect that one of the most interesting and fruitful areas of future research on moral judgment will concern the interaction between reasoning and emotion. As we've seen, there is evidence that reasoning influences moral judgment, even if much of this reasoning is unconscious. There is also evidence that emotion influences moral judgment, even if much of this evidence is indirect. More recently, we've examined how reasoning affects emotion and vice versa. We suspect that there is also a rich interplay between the two. On this topic, a long history of philosophical reflection on moral thought has the potential to guide scientific research.
Consider the method of reflective equilibrium, famously popularized in ethics and political philosophy by John Rawls (1971) and others.13 Often, we find ourselves with a conflict between a moral principle and intuitions about concrete cases. For example, you may explicitly endorse a principle that permits free economic exchange between rational agents without exception. However, even when no coercion is present, you also feel outraged at those who would try to purchase slaves who "voluntarily" sell themselves or to charge inflated prices for food in communities recently devastated by a natural disaster. You come to realize that your principle permits various activities (voluntary slavery, price gouging) that are, intuitively, morally wrong. So, it seems, you must either abandon the principle as it stands or override your intuitions about the cases.

Philosophers engage in reflective equilibrium, but so do ordinary people, even if less frequently. We all grapple with resolving conflicts between our more general moral principles and our judgments about specific cases. One study shows that people will revise their belief in a general moral principle, or at least their credence in it, after considering a concrete counterexample (Horne et al., 2015). And this change was not temporary but persisted for hours. The philosophical tool, it seems, has its roots in ordinary moral thinking.

What sorts of psychological processes lead to greater reflective equilibrium? When intuition is guided by emotion, it seems that reasoning and emotion play off one another, in a way that's fitting for our dual-process minds (Campbell & Kumar, 2012). One feels emotionally that an activity is wrong. One consciously realizes that a moral principle one embraced in full generality permits that activity. One then feels pressure for consistency, and several routes to restoring consistency are possible. First, one may use reasoning to suppress one's feelings. Or, if moral feelings are particularly strong, they may lead one to abandon the principle. Yet another possibility is that one's convictions and feelings are equally strong, and the pain of inconsistency leads one to conscious reasoning about how to revise or amend the principle in a way that renders it compatible with one's feelings.

Two examples illustrate the interplay between reasoning and emotion in reflective equilibrium. Many people have a powerful emotional aversion to incest. However, an increasing number of people also have liberal attitudes toward sex. They believe that sexual intercourse between two consenting adults is always morally permissible (with an exception perhaps covering cases of sexual exploitation where there is a large imbalance of power). In our experience teaching Haidt's famous incest case, many students react initially with disgust and judge that it is morally wrong for two siblings to have consensual sex. However, after reflecting on their own liberal principles, some students, wisely or unwisely, eventually suppress their emotional response and suggest that incest in this case is (or at least might be) morally permissible. A similar reaction has been experimentally demonstrated in participants who read an evolutionary explanation of the aversion to incest that is meant to undermine the relevant intuitive moral judgment (Paxton et al., 2011).

Sometimes, by contrast, emotions win the battle in our efforts to achieve reflective equilibrium.
The recent revolution in attitudes toward homosexuality may be driven in part by empathy (Kumar, 2018). As many more gay people began living open and candid lives, straight people increasingly discovered that a friend or family member or colleague is gay. Since they were more able to empathize with these people, they were more likely to appreciate the harms they experienced. This has led a tide of people in America and similar countries to accept homosexuality and abandon principles inconsistent with this new attitude—e.g., principles that prohibit "unnatural" sexual behavior.

Richmond Campbell and Victor Kumar (2012) have mapped out the interplay between reasoning and emotion in a form of moral reasoning that is not captured by the traditional model of reflective equilibrium, what they call "consistency reasoning." In consistency reasoning, you discover a conflict not between a principle and a concrete case intuition but between intuitions. You discover, in short, that you are not treating like cases alike. For example, most people are likely to judge that cruelty toward a domestic animal is wrong. However, many of these people eat meat products from factory farms, which inflict a similar level of cruelty on livestock. What could make the difference? Why is one activity wrong and the other permissible? It strikes many people that one must either revise one's negative feelings about animal cruelty or revise one's feelings about factory-farmed meat.

Such consistency reasoning isn't fit only for academics (Kumar & Campbell, 2016; Campbell, 2017). The inconsistencies one recognizes needn't involve abstract principles familiar only to philosophers. Moreover, there is empirical evidence that people engage in consistency reasoning about familiar trolley cases (Petrinovich & O'Neill, 1996; Schwitzgebel & Cushman, 2012). If it's wrong to push a large man off a footbridge, then why is it permissible to sacrifice the one for the five by flipping a switch?

Campbell and Kumar argue that reasoning and emotion are engaged in a potentially recursive interplay in consistency reasoning. People have emotional reactions to particular cases. They consciously reason that there is an inconsistency. Sometimes they identify a morally irrelevant difference, and this discovery can then be fed back into their intuitions.

To see how this process plays out, consider Peter Singer's (1972) famous example of consistency reasoning (see Campbell & Kumar, 2012, 292–295 for further detail). Suppose you are walking by a shallow pond and you see a young child drowning. Intuitively, you feel anticipatory guilt at the prospect of doing nothing, even if saving the child would require ruining your new suit. Singer, however, asks us to consider the plight of starving children in the third world. Many people feel that donating money goes above and beyond the call of duty and that abstaining is not an occasion for guilt. But Singer argues that there is no morally relevant difference between the two cases. In both, one can save a life at a slight personal cost. If you are obligated to save drowning children, it seems you are also obligated to save starving children in the third world.

Some critics suggest that there is a morally relevant difference here: you are the only one who can save the drowning child, while many other people are in a position to save starving children. Singer counters: imagine that there are several other people standing around the pond doing nothing. He predicts, correctly, that you would still feel guilt at the prospect of not helping. What this illustrates is the recursive nature of the interplay between reasoning and emotion: reasoning about morally relevant differences, initially prompted by emotional conflict, can feed back into emotional evaluation.
Philosophical argumentation is often an attempt to regiment ordinary capacities for moral reasoning. It is therefore a resource for better understanding the interplay between reasoning and emotion among ordinary moral agents. Moral philosophy can guide research into how reasoning and emotion combine to engender moral rationality.
11. Conclusion

We draw two main conclusions. First, on a fair and plausible characterization of reasoning and emotion, both are integral to moral judgment. In particular, when our moral beliefs undergo changes over long periods of time, there is ample space for both reasoning and emotion to play an iterative role. Second, it's difficult to cleave reasoning from emotional processing. When the two affect moral judgment, especially across time, their interplay can make it artificial or fruitless to impose a division, even if a distinction can still be drawn between inference and valence in information processing.

To some degree, our conclusions militate against extreme characterizations of the rationalism-sentimentalism divide. However, the debate is best construed as a question about which psychological process is more fundamental or essential to distinctively moral cognition. The answer still affects both theoretical and practical problems, such as how to make artificial intelligence capable of moral judgment. At the moment, this more nuanced dispute is difficult to adjudicate, but it may be addressed by further research and theorizing.

Our conclusions also suggest limitations on some traditional threats to moral knowledge. Some skeptics, for example, contend that we can't possibly know right from wrong because moral judgment is a mere matter of noncognitive feelings. Other skeptics attempt to debunk commonsense ethical intuitions, which are allegedly based on brute emotional responses. Such skeptical arguments are in danger of oversimplifying the nature of moral cognition. The processes that generate one's moral beliefs are often automatic and difficult to identify or articulate, but this does not imply that they are unsophisticated. So we may have cause to be more optimistic about the possibility of moral knowledge, even among people who don't regularly reflect on moral dilemmas.14

Nevertheless, as we've highlighted, a mental process can involve reasoning or affective information processing without being reasonable or rational. Moral cognition is subject to the same sorts of biases that affect cognition generally. Our conclusions suggest that attaining and maintaining moral knowledge will require improving both reasoning and emotion. Indiscriminately making people more compassionate or disgust-sensitive, independent of reasoning, would be disastrous. Similarly, improving reasoning without regard to emotional response may lead to clever rationalizations of immorality. Ethical architects aiming to nudge people toward better moral thinking would be wise to consider the interplay between reason and emotion.
Notes

1. See Chapter 17 of this volume for a defense of the claim that we perceive injustice.
2. For further discussion of this position see Chapters 13 and 14 of this volume.
3. These questions are further explored in Chapters 16, 17, 18, and 19 of this volume.
4. For further discussion see Chapters 22, 23, and 25 of this volume.
5. See Chapter 4 of this volume for a discussion of complications.
6. Dual-process models of moral cognition are further discussed in Chapters 1, 2, 4, 6, 8, and 16 of this volume.
7. See Chapter 4 of this volume for a detailed discussion of Greene's experiments and what they imply about the neurobiology of moral judgment.
8. See too Chapters 6, 8, and 9 of this volume on the sophisticated information-processing structure of the relatively automatic sources of intuition and emotion.
9. See too Chapter 4 of this volume.
10. For more on the externalization of moral norms and the projection of moral value into the world, see Chapters 2, 13, and 14 of this volume.
11. See Chapter 4 of this volume for further details regarding the neurological underpinnings of our responses to moral dilemmas.
12. The role disgust plays in eliciting moral intuitions is further discussed in Chapter 9 of this volume.
13. For further discussion of reflective equilibrium, see Chapters 12, 19, 20, and 21 of this volume.
14. For further discussion of moral skepticism, see Chapters 13 and 14 of this volume.
References

Aharoni, E., Sinnott-Armstrong, W. and Kiehl, K. A. (2012). "Can Psychopathic Offenders Discern Moral Wrongs? A New Look at the Moral/Conventional Distinction," Journal of Abnormal Psychology, 121 (2), 484–497.
Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H. and Strongman, J. A. (1999). "Moral Hypocrisy: Appearing Moral to Oneself Without Being So," Journal of Personality and Social Psychology, 77 (3), 525–537.
Blair, R. J. R. (1995). "A Cognitive Developmental Approach to Morality: Investigating the Psychopath," Cognition, 57, 1–29.
———. (1997). "Moral Reasoning and the Child with Psychopathic Tendencies," Personality and Individual Differences, 22, 731–739.
Blair, R. J. R., Jones, L., Clark, F. and Smith, M. (1995). "Is the Psychopath 'Morally Insane'?" Personality and Individual Differences, 19 (5), 741–752.
Campbell, R. (2017). "Learning from Moral Inconsistency," Cognition, 167, 46–57.
Campbell, R. and Kumar, V. (2012). "Moral Reasoning on the Ground," Ethics, 122 (2), 273–312.
Chapman, H. A. and Anderson, A. K. (2013). "Things Rank and Gross in Nature," Psychological Bulletin, 139 (2), 300–327.
Craigie, J. (2011). "Thinking and Feeling: Moral Deliberation in a Dual-Process Framework," Philosophical Psychology, 24 (1), 53–71.
Crockett, M. J. (2013). "Models of Morality," Trends in Cognitive Sciences, 17 (8), 363–366.
Cushman, F. A., Young, L. L. and Hauser, M. D. (2006). "The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm," Psychological Science, 17 (12), 1082–1089.
Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Avon Books.
Dwyer, S. (2009). "Moral Dumbfounding and the Linguistic Analogy," Mind & Language, 24 (3), 274–296.
Feltz, A. and May, J. (2017). "The Means/Side-Effect Distinction in Moral Cognition: A Meta-Analysis," Cognition, 166, 314–327.
Fessler, D. M. T., Arguello, A. P., Mekdara, J. M. and Macias, R. (2003). "Disgust Sensitivity and Meat Consumption: A Test of an Emotivist Account of Moral Vegetarianism," Appetite, 41 (1), 31–41.
Gray, K., Young, L. and Waytz, A. (2012). "Mind Perception Is the Essence of Morality," Psychological Inquiry, 23 (2), 101–124.
Greene, J. D. (2008). "The Secret Joke of Kant's Soul," in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3. Cambridge, MA: MIT Press.
———. (2014). "Beyond Point-and-Shoot Morality," Ethics, 124 (4), 695–726.
Haidt, J. (2001). "The Emotional Dog and Its Rational Tail," Psychological Review, 108 (4), 814–834.
Haidt, J., Koller, S. H. and Dias, M. G. (1993). "Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?" Journal of Personality and Social Psychology, 65 (4), 613–628.
Hauser, M. D., Cushman, F. A., Young, L. L., Kang-Xing Jin, R. and Mikhail, J. (2007). "A Dissociation Between Moral Judgments and Justifications," Mind & Language, 22 (1), 1–21.
Helzer, E. G. and Pizarro, D. A. (2011). "Dirty Liberals! Reminders of Physical Cleanliness Influence Moral and Political Attitudes," Psychological Science, 22 (4), 517–522.
Horgan, T. and Timmons, M. (2007). "Morphological Rationalism and the Psychology of Moral Judgment," Ethical Theory and Moral Practice, 10 (3), 279–295.
Horne, Z., Powell, D. and Hummel, J. (2015). "A Single Counterexample Leads to Moral Belief Revision," Cognitive Science, 39 (8), 1950–1964.
Huebner, B. (2015). "Do Emotions Play a Constitutive Role in Moral Cognition?" Topoi, 34 (2), 427–440.
Inbar, Y. and Pizarro, D. A. (2014). "Pollution and Purity in Moral and Political Judgment," in J. C. Wright and H. Sarkissian (eds.), Advances in Experimental Moral Psychology. New York: Bloomsbury.
Inbar, Y., Pizarro, D. A. and Bloom, P. (2009). "Conservatives Are More Easily Disgusted Than Liberals," Cognition and Emotion, 23 (4), 714–725.
Kahane, G., Everett, J. A. C., Earp, B. D., Farias, M. and Savulescu, J. (2015). "Utilitarian Judgments in Sacrificial Moral Dilemmas Do Not Reflect Impartial Concern for the Greater Good," Cognition, 134 (C), 193–209.
Kahane, G., Wiech, K., Shackel, N., Farias, M., Savulescu, J. and Tracey, I. (2012). "The Neural Basis of Intuitive and Counterintuitive Moral Judgment," Social Cognitive and Affective Neuroscience, 7 (4), 393–402.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus, and Giroux.
Kauppinen, A. (2013). "A Humean Theory of Moral Intuition," Canadian Journal of Philosophy, 43 (3), 360–381.
Kelly, D. R. (2011). Yuck! The Nature and Moral Significance of Disgust. Cambridge, MA: MIT Press.
Kennett, J. and Fine, C. (2009). "Will the Real Moral Judgment Please Stand Up?" Ethical Theory and Moral Practice, 12 (1), 77–96.
Kohlberg, L. (1973). "The Claim to Moral Adequacy of a Highest Stage of Moral Judgment," Journal of Philosophy, 70 (18), 630–646.
Kumar, V. (2016a). "Psychopathy and Internalism," Canadian Journal of Philosophy, 46, 318–345.
———. (2016b). "The Empirical Identity of Moral Judgement," Philosophical Quarterly, 66 (265), 783–804.
———. (2017a). "Foul Behavior," Philosophers' Imprint, 17, 1–17.
———. (2017b). "Moral Vindications," Cognition, 167, 124–134.
———. (2018). "The Weight of Empathy." Unpublished manuscript.
Kumar, V. and Campbell, R. (2016). "Honor and Moral Revolution," Ethical Theory and Moral Practice, 19, 147–159.
———. (2018). Why We Are Moral: The Evolutionary Foundations of Moral Progress. Unpublished manuscript.
Kumar, V. and May, J. (forthcoming). "How to Debunk Moral Beliefs," in J. Suikkanen and A. Kauppinen (eds.), Methodology and Moral Philosophy. London: Routledge.
Kunda, Z. (1990). "The Case for Motivated Reasoning," Psychological Bulletin, 108 (3), 480–498.
Landy, J. F. and Goodwin, G. P. (2015). "Does Incidental Disgust Amplify Moral Judgment? A Meta-Analytic Review of Experimental Evidence," Perspectives on Psychological Science, 10 (4), 518–536.
Mallon, R. and Nichols, S. (2010). "Rules," in J. Doris et al. (eds.), The Moral Psychology Handbook. Oxford: Oxford University Press.
May, J. (2014a). "Does Disgust Influence Moral Judgment?" Australasian Journal of Philosophy, 92 (1), 125–141.
———. (2014b). "Moral Judgment and Deontology," Philosophy Compass, 9 (11), 745–755.
———. (2018). Regard for Reason in the Moral Mind. Oxford: Oxford University Press.
Mikhail, J. (2011). Elements of Moral Cognition. Cambridge: Cambridge University Press.
Morrow, D. (2009). "Moral Psychology and the 'Mencian Creature'," Philosophical Psychology, 22 (3), 281–304.
Nichols, S. (2004). Sentimental Rules. Oxford: Oxford University Press.
Nussbaum, M. C. (2004). Hiding from Humanity. Princeton: Princeton University Press.
Paxton, J. M., Ungar, L. and Greene, J. D. (2011). "Reflection and Reasoning in Moral Judgment," Cognitive Science, 36 (1), 163–177.
Petrinovich, L. and O'Neill, P. (1996). "Influence of Wording and Framing Effects on Moral Intuitions," Ethology and Sociobiology, 17 (3), 145–171.
Prinz, J. J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
Railton, P. (2014). "The Affective Dog and Its Rational Tale: Intuition and Attunement," Ethics, 124 (4), 813–859.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Roskies, A. L. (2003). "Are Ethical Judgments Intrinsically Motivational? Lessons From 'Acquired Sociopathy'," Philosophical Psychology, 16 (1), 51–66.
Royzman, E. B., Kim, K. and Leeman, R. F. (2015). "The Curious Tale of Julie and Mark: Unraveling the Moral Dumbfounding Effect," Judgment and Decision Making, 10 (4), 296–313.
Royzman, E. B., Leeman, R. F. and Baron, J. (2009). "Unsentimental Ethics: Towards a Content-Specific Account of the Moral-Conventional Distinction," Cognition, 112 (1), 159–174.
Rozin, P., Markwith, M. and Stoess, C. (1997). "Moralization and Becoming a Vegetarian: The Transformation of Preferences into Values and the Recruitment of Disgust," Psychological Science, 8 (2), 67–73.
Rozin, P. and Singh, L. (1999). "The Moralization of Cigarette Smoking in the United States," Journal of Consumer Psychology, 8 (3), 321–337.
Saltzstein, H. D. and Kasachkoff, T. (2004). "Haidt's Moral Intuitionist Theory: A Psychological and Philosophical Critique," Review of General Psychology, 8 (4), 273–282.
Sauer, H. (2017). Moral Judgments as Educated Intuitions. Cambridge, MA: MIT Press.
Saver, J. L. and Damasio, A. R. (1991). "Preserved Access and Processing of Social Knowledge in a Patient with Acquired Sociopathy Due to Ventromedial Frontal Damage," Neuropsychologia, 29 (12), 1241–1249.
Schnall, S., Haidt, J., Clore, G. L. and Jordan, A. H. (2008). "Disgust as Embodied Moral Judgment," Personality and Social Psychology Bulletin, 34 (8), 1096–1109.
Schwitzgebel, E. and Cushman, F. A. (2012). "Expertise in Moral Reasoning?" Mind & Language, 27 (2), 135–153.
Seidel, A. and Prinz, J. J. (2013a). "Mad and Glad: Musically Induced Emotions Have Divergent Impact on Morals," Motivation and Emotion, 37 (3), 629–637.
———. (2013b). "Sound Morality: Irritating and Icky Noises Amplify Judgments in Divergent Moral Domains," Cognition, 127 (1), 1–5.
Singer, P. (1972). "Famine, Affluence, and Morality," Philosophy & Public Affairs, 1 (3), 229–243.
Stewart, B. D. and Payne, B. K. (2008). "Bringing Automatic Stereotyping Under Control: Implementation Intentions as Efficient Means of Thought Control," Personality and Social Psychology Bulletin, 34 (10), 1332–1345.
Taber-Thomas, B. C., Asp, E. W., Koenigs, M., Sutterer, M., Anderson, S. W. and Tranel, D. (2014). "Arrested Development: Early Prefrontal Lesions Impair the Maturation of Moral Judgement," Brain, 137 (4), 1254–1261.
Wheatley, T. and Haidt, J. (2005). "Hypnotic Disgust Makes Moral Judgments More Severe," Psychological Science, 16 (10), 780–784.
Yoder, K. J. and Decety, J. (2014). "Spatiotemporal Neural Dynamics of Moral Judgment: A High-Density ERP Study," Neuropsychologia, 60, 39–45.
Young, L. L., Camprodon, J. A., Hauser, M. D., Pascual-Leone, A. and Saxe, R. (2010). "Disruption of the Right Temporoparietal Junction with Transcranial Magnetic Stimulation Reduces the Role of Beliefs in Moral Judgments," Proceedings of the National Academy of Sciences, 107 (15), 6753–6758.
Young, L. L., Cushman, F. A., Hauser, M. D. and Saxe, R. (2007). "The Neural Basis of the Interaction Between Theory of Mind and Moral Judgment," Proceedings of the National Academy of Sciences, 104 (20), 8235–8240.
Further Readings

For general discussions of the role of reason and emotion in moral cognition, as informed by the sciences, see: J. Greene, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them (New York: Penguin Classics, 2013); C. Helion and D. A. Pizarro, "Beyond Dual-Processes: The Interplay of Reason and Emotion in Moral Judgment," in J. Clausen and N. Levy (eds.), Handbook of Neuroethics (Dordrecht: Springer, 2014); H. L. Maibom, "What Experimental Evidence Shows Us About the Role of Emotions in Moral Judgement," Philosophy Compass, 5 (11), 999–1012, 2010; J. Nado, D. Kelly and S. Stich, "Moral Judgment," in J. Symons and P. Calvo (eds.), The Routledge Companion to Philosophy of Psychology (London: Routledge, 2009). For an older but useful and often overlooked discussion, see: D. A. Pizarro, "Nothing More Than Feelings?" Journal for the Theory of Social Behaviour, 30 (4), 355–375, 2000. A comprehensive resource on psychopaths in particular can be found in: A. L. Glenn and A. Raine, Psychopathy (New York: New York University Press, 2014). For a way into the neuroscientific evidence specifically, see: S. M. Liao, ed., Moral Brains: The Neuroscience of Morality (Oxford: Oxford University Press, 2016); J. Moll, R. Zahn, R. de Oliveira-Souza, F. Krueger and J. Grafman, "The Neural Basis of Human Moral Cognition," Nature Reviews Neuroscience, 6 (10), 799–809, 2005.
Related Chapters

Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 4 The Neurological Basis of Moral Psychology; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 10 Ancient and Medieval Moral Epistemology; Chapter 13 The Denial of Moral Knowledge; Chapter 14 Nihilism and the Epistemic Profile of Moral Judgment; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 17 Moral Perception; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 27 Teaching Virtue.
8 MORAL INTUITIONS AND HEURISTICS
Piotr M. Patrzyk
1. Introduction

What information do humans look for in order to infer that some act or person is morally good or bad? Why do humans differ in the way they perceive and judge morally relevant situations? One goal of the research conducted within moral psychology is to describe information-processing routines used by humans to form moral judgments. In this chapter, I consider one research strategy one can adopt in order to achieve this goal. Specifically, I propose approaching these problems through the lens of the fast-and-frugal heuristics framework (Gigerenzer & Gaissmaier, 2011; Gigerenzer et al., 1999). According to this framework, people make decisions by relying on a repertoire of simple decision strategies, called heuristics:

A heuristic is a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods. (Gigerenzer & Gaissmaier, 2011, 454)

Together, these strategies form a toolbox from which one can adaptively select different heuristics as a function of task domain. The fast-and-frugal heuristics framework provides a theory of cognitive processes involved in human decision making and is thus applicable to investigating information-processing patterns of human (moral) judgment. The framework has three goals (Gigerenzer, 2010; Gigerenzer et al., 2008):

1. Descriptive—investigating the adaptive toolbox, that is, the set of fast and frugal decision strategies—the heuristics1—humans use,
2. Prescriptive—investigating the ecological rationality of heuristics, that is, their performance in relevant environments,
3. Practical—aiding real-world decision making, for instance, by informing the design of decision environments that help people to reach their moral goals.
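To make the idea of a strategy that “ignores part of the information” concrete, the following sketch implements the general shape of a lexicographic one-reason heuristic in the style of take-the-best (Gigerenzer et al., 1999). The three building blocks discussed later in this chapter (a search rule, a stopping rule, and a decision rule) are marked in the comments. The moral content here, including the cue names, their ordering, and the tie-breaking default, is invented purely for illustration and is not a model proposed in the sources cited.

```python
# Sketch of a lexicographic one-reason heuristic (take-the-best style).
# Cues are inspected in a fixed order of validity; the first cue that
# discriminates between the options decides, and all remaining cues
# are ignored. Cue names and their ordering are hypothetical.

def one_reason_choice(option_a, option_b, cue_order):
    """Return 'a' or 'b', whichever the first discriminating cue favors."""
    for cue in cue_order:                 # search rule: most valid cue first
        a = option_a.get(cue, 0)
        b = option_b.get(cue, 0)
        if a != b:                        # stopping rule: stop at first discrimination
            return "a" if a > b else "b"  # decision rule: one reason decides
    return "a"                            # no cue discriminates: default/guess

# Hypothetical judgment: which of two partners is more trustworthy?
alice = {"kept_past_promises": 1, "apologized_after_harm": 1}
bob = {"kept_past_promises": 1, "apologized_after_harm": 0}
print(one_reason_choice(alice, bob,
                        ["kept_past_promises", "apologized_after_harm"]))  # "a"
```

Note that the procedure never weighs or sums the cues; its frugality comes from stopping search as soon as one reason suffices.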
Despite several theoretical calls to apply the framework to the study of moral cognition (Fleischhut & Gigerenzer, 2013; Gigerenzer, 2008a, 2010), this research program remains largely underexplored in moral psychology.2 Some researchers within moral psychology treat specifying the exact cognitive processes involved in moral decision making as a peculiarity inherent to the fast-and-frugal heuristics framework. Though they acknowledge its potential, these researchers do not consider it absolutely essential for beginning to build a theory of how humans make moral decisions (Sinnott-Armstrong et al., 2010). Here, I would like to consider whether specifying cognitive processes might indeed be an essential part of a fruitful investigation into human moral psychology. I will argue that consideration of models of information processing might inform areas that pose a challenge for the discipline, such as understanding cross-situational variability and individual differences in moral behavior. The tone of the current chapter is deliberately provocative. Novel theoretical perspectives usually arise from dissatisfaction with alternative proposals, so I begin by outlining the main points of disagreement between the fast-and-frugal heuristics tradition and its alternatives. I aim to explicate the philosophy and rationale of the heuristics program by contrasting it with other prominent research traditions. I then critically review the existing literature, highlight outstanding problems, and sketch future directions.
2. Alternative Research Approaches

In the current section, I review alternative research approaches in the study of moral cognition. My aim is to acknowledge the utility of these approaches while drawing attention to shortcomings critics have exposed. A summary of the problems I expose and improvements I prescribe can be found in Table 8.1.
3. Unbounded Rationality

Research in the domain of moral psychology often suffers from the normative flavor of the theories it develops. Very often, researchers try to answer questions about what should be the case with regard to moral judgment before answering questions about how we actually render judgment in the relevant domain. In other words, many early investigations into the descriptive question of how humans make decisions were shaped by normative theories about how they should proceed (see Haidt, 2001). For example, simplified deontic approaches to moral choice seem to assume that (typically) humans consciously reason about which kinds of action are correct, choose principles on this basis, and then, upon encountering particular situations, apply the relevant norms to reach a verdict. A different ethical theory—a simplified act consequentialism—which is usually seen as standing in opposition to deontology, implies that humans do not consider learned norms but directly compute context-dependent consequences. The ideal consequentialist agent generates a representation of the set of all the possible actions available to her at a given time, predicts the consequences of these prospective actions, and then chooses among the actions tied for the “best” final outcome, where “best” is defined as the maximization of the happiness or utility of the subjects the agent can affect. Applying normative requirements to the psychological (descriptive) domain, some theories sought to explain individual differences in human moral judgment by variance in
expertise in handling such complex reasoning tasks (e.g., Kohlberg, 1984). In the end, people differ in the content of the norms they learn and in their facility at predicting consequences. While these developmental theories provided some useful classifications of the types of normative reasoning used by people to rationalize or justify their decisions when challenged, they did not focus on the question of how these reasoning skills relate to agents’ initial judgment or behavior, as it is unrealistic to assume that people generally use the same reasoning skills to both reach a judgment and defend it from challenges (Krebs et al., 1997). A psychological theory of moral intuitions and decision making should first consider the computational feasibility of proposed mechanisms. A more realistic moral psychology would drop the assumption that the world in which humans make decisions can and should be defined completely. As ideal theorists acknowledge, it is almost never true that all the information relevant for a decision is available to an agent (see Savage, 1954). Models that assume these unrealistic conditions are sometimes called “rational demons,” as they envision inhumanly “rational” decision-making strategies (Gigerenzer, 2001; Gigerenzer et al., 1999). Rational demons come in two flavors: an outright unbounded rationality model assumes unconstrained human capacities for combining information. In contrast, an optimization under constraints model acknowledges the practical impossibility of unconstrained processing and argues for the necessity of determining the point at which the costs of further computation outweigh the benefits, leading to the problem of how such an optimal point can be identified (see also Table 8.1).
Table 8.1 Alternative research approaches and proposed improvements

Unbounded rationality
Description: Idealistic conception of how humans should combine information. The more information considered and the more effort spent, the more moral the decision is. Implicit in some formulations of deontology and consequentialism, as well as cognitive-developmental theories within psychology (see Haidt, 2001).
Cure: Consider the social environment in which decisions are made and define what information might be available for making decisions (Hertwig & Herzog, 2009).

Explaining away
Description: Circularly defined one-word explanations of moral judgment, such as intuition. Often presented in dichotomies with some other mysterious forces, such as cognitive control. Different mysterious forces are labeled systems and have numbers assigned to them (see Gigerenzer, 1998).
Cure: Unpack black boxes and propose psychologically plausible decision mechanisms. Consider how humans can combine available information (Marewski et al., 2010).

As-if modeling
Description: Assuming the existence of hidden utilities or prosocial preferences being maximized and finding instances of behaviors showing that moral judgment was rendered “as-if” a given decision rule was followed (see Berg & Gigerenzer, 2010).
Cure: Define the problem in a domain-specific way and have a clear conception of the selective pressures shaping the decision mechanism. From this, evaluate whether a given mechanism adaptively serves its purpose (Cosmides & Tooby, 2006; Gigerenzer, 2001).
But in the real world, people rarely (if ever) contemplate the point at which data gathering and computation grow too costly. Instead, it would seem that people base most (if not all) of their decisions on simpler domain-specific mechanisms that are able to operate even when much of the knowledge relevant to a decision is not available (Cosmides & Tooby, 1994; Gigerenzer, 2001; Simon, 1956). Once one considers their possible implementation, both utility-maximization and strict norm-adherence strategies are quickly dismissed as unrealistic (see also Wallach & Allen, 2009).3 What decision models are plausible, then? Gigerenzer et al. (2008) delineate several necessary criteria for a psychologically plausible theory of choice. These are:

1. tractability—proposed decision mechanisms can actually be used by humans,
2. robustness—decision mechanisms should be effective in a variety of situations,
3. frugality—real-world decision mechanisms need to operate effectively on a limited amount of information,
4. speed—most decision mechanisms should generate judgments and choices quickly given task demands.

The models of decision mechanisms described in Table 8.1 fail on all of these criteria. They are neither fast nor tractable, as proper treatment of complex moral dilemmas requires combining information in an inefficient way. They are not frugal, as they assume all information about the dilemma is simply given to the person making a judgment. They are also not robust, as they do not even consider adaptation to real-world problems, focusing instead on optimality defined by standards that are posited a priori.
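The tractability and frugality criteria can be made concrete with a toy contrast between a “rational demon” and a one-reason rule. Everything in this sketch (the actions, outcomes, probabilities, payoffs, and the cue) is invented for illustration; neither function is a model advanced by the authors discussed here.

```python
# Toy contrast: an unbounded expected-utility "demon" versus a frugal
# one-reason rule. All actions, outcomes, probabilities, payoffs, and
# cues below are invented for illustration.

def demon_choice(actions, outcomes, prob, utility):
    """Unbounded rationality: requires p(outcome | action) for every
    action-outcome pair plus a complete utility function, i.e., a table
    whose size grows with |actions| x |outcomes|."""
    def expected_utility(action):
        return sum(prob[(action, o)] * utility[o] for o in outcomes)
    return max(actions, key=expected_utility)

def frugal_choice(situation):
    """One-reason rule: consult a single cue and ignore everything else."""
    return "cooperate" if situation.get("partner_is_known") else "withhold"

actions = ["cooperate", "withhold"]
outcomes = ["reciprocated", "exploited", "nothing"]
prob = {("cooperate", "reciprocated"): 0.7, ("cooperate", "exploited"): 0.3,
        ("cooperate", "nothing"): 0.0, ("withhold", "reciprocated"): 0.0,
        ("withhold", "exploited"): 0.0, ("withhold", "nothing"): 1.0}
utility = {"reciprocated": 3.0, "exploited": -2.0, "nothing": 0.0}

print(demon_choice(actions, outcomes, prob, utility))  # "cooperate" (EU = 1.5)
print(frugal_choice({"partner_is_known": True}))       # "cooperate", one cue used
```

The demon’s verdict is only as good as a probability table that real agents almost never possess, whereas the frugal rule asks for a single piece of information; the substantive question, taken up below, is when such a rule performs well.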
4. Explaining Away

Contemporary moral psychology is no longer committed to the assumption of the existence of rational demons. Unfortunately, the literature in moral psychology remains grounded in an overly simplistic approach assuming that cognition operates through competing forces or numbered systems (see Keren & Schul, 2009; van Bavel et al., 2015). Gigerenzer (1998) provides an overview of interrelated strategies that are commonly used by psychologists to “explain phenomena away” instead of working on descriptively adequate theories (see also Table 8.1). These include:

1. one-word explanations—inventing ambiguous labels supposed to account for different phenomena,
2. redescription—applying circular reasoning to definitions of terms,
3. muddy dichotomies—inventing simplistic oppositions of different forces without considering relations between them,
4. data fitting—building models that account for different phenomena in hindsight.

Do these problems apply to contemporary moral psychology? Consider Haidt’s (2001) social intuitionist model, which was proposed as standing in opposition to the rationalist theories of moral judgment sketched in the previous subsection. Haidt’s model posits that humans perform moral reasoning in response to automatic intuitive judgments. Intuition
determines an answer to a decision problem. It is only when a decision is challenged that subjects start to ponder adequate grounds for their choice. But what is the source of these intuitive judgments in the first place? They are said to “appear in consciousness automatically and effortlessly as the result of moral intuitions” (Haidt, 2001, 818). Haidt’s account thus appeals to a “black-box” notion of intuition. Intuition receives data or stimuli and delivers verdicts or choices, but Haidt says nothing about how intuition operates on the data provided to it to deliver the observed range of verdicts.4 Subsequent elaborations of the dual-process model of moral cognition engage in further “hand waving” by accounting for differences in impartial moral judgment by positing cognitive control of intuitive responses (Greene, 2013; Greene & Haidt, 2002; Greene et al., 2004). According to this paradigm, confronting a morally significant situation evokes an automatic intuition, and later on, a functionally distinct cognitive mechanism induces the kind of reflection that can either confirm the intuition or deny it. Hence, differences in human judgment result from differing abilities to overcome or modulate intuitive responses. Openly subscribing to the heuristics-and-biases research tradition (see e.g., Kahneman, 2011), moral dual-process theorists argued that intuitive responses may work well in some contexts, but they continued to assume that intuition is not as reliable as deliberative processing (Greene, 2013). When intuitive responses produce good results, this is attributable to luck. Had the actor used a more deliberative method, the correct outcome would have been more likely.5 By invoking intuition as they do, the theorists we have identified neglect questions about how humans perceive moral dilemmas, in the sense of searching for and combining information (see Dubljević & Racine, 2014; Mikhail, 2008). Questions about information processing were tabled for future research, having been deemed inessential for theory construction (Greene, 2008). But it would seem that “black boxing” intuition was not a theoretically innocent maneuver. Instead, it rendered the relevant research program vulnerable to all of the problems Gigerenzer (1998) discusses:
• different phenomena were labeled as “intuitive” (one-word explanations),
• “intuitive” judgments were circularly defined as coming from a predisposition to make “intuitive” judgments (redescription),
• these “intuitive” phenomena were involved in battles against “cognitive control” (muddy dichotomies),
• “intuition” was invoked to fit theory to a limited set of moral dilemmas (data fitting).

This can be illustrated with an example. Moral behavior, according to common thinking, is possible for adults because we have a level of self-control not found in young children and (many) nonhuman animals. We can exercise self-control to refrain from acting on automatic antisocial impulses. Someone with an unusually strong moral sense might overcome all temptations to act badly, whereas someone with an underdeveloped moral sense—usually attributed to bad parenting—would indulge temptations toward antisocial behaviors (Gottfredson & Hirschi, 1990; D. Moore & Loewenstein, 2004; Muraven & Baumeister, 2000). Differing powers of self-control are used to explain variability across individuals—who are supposed to differ in the relevant capacities and predispositions—as well as inconsistency within individuals—as people are thought to possess different levels of self-control resources across situations, in an analogy with hunger or fatigue.6
Experimental tests of this dual-process theory have provided promising results confirming researchers’ relatively commonsense intuition. For instance, studies on cheating found that time pressure is associated with more immoral choices (Shalvi et al., 2012), suggesting that cheating is the default predisposition that must be overcome with cognitively effortful moral deliberation and control. But a problem emerged for this hypothesis: observations of subjects’ performance in a public goods game found that time pressure is associated with more virtuous behavior, which suggests the opposite pattern. Perhaps prosocial behavior is the default that must be overcome by cost-benefit deliberation and top-down control (Rand et al., 2012). These conflicting observations raised a significant theoretical problem: does conscious deliberation augment moral behavior by lending due consideration to valid rules (immoral intuition hypothesis), or does deliberation augment immorality by providing a role for the kind of self-interested calculation that would otherwise find no place (moral intuition hypothesis)? Researchers from the fast-and-frugal heuristics tradition would expect that, when confronted with such controversies, dual-process theorists would make their predictions more specific and propose computational process models of the varying roles of intuition, moral rules, and the calculation of self-interest, and that these competing models would be tested with further experiments. But as it turns out, no such thing has happened. Instead, dual-process theorists have proposed a unifying nonlinear, inverted-U-shaped dual-process account in which the divergent intuitions are accommodated. Simply put, when humans face a given situation they first want to behave immorally, then quickly change their mind and want to behave morally, but when they have even more time, they go back to their initial response and want to behave immorally again (Capraro & Cococcioni, 2016; Moore & Tenbrunsel, 2014). The way in which this debate has been resolved by dual-process theorists reveals profound methodological problems that should provoke serious reflection on the part of those researching human decision making. The central point of disagreement between the fast-and-frugal heuristics program and alternative research approaches is the question of which research methodology is better able to advance theories of decision making (Gigerenzer, 1996). Within the fast-and-frugal heuristics framework, high-level theories appealing to intuition-deliberation dichotomies are rejected because they lack a specific description of differences in information processing (see Keren & Schul, 2009; Kruglanski & Gigerenzer, 2011). In this sense, the dominant dual-process approach in moral psychology remains insufficiently serious.
5. As-if Modeling

Another problem in research on moral decision making stems from the commitment to the “as-if” research tradition (see Berg & Gigerenzer, 2010). Since it is rooted in behavioral economics, we can begin our critique by explicating the logic of this research paradigm. The behavioral economist typically:

1. determines the normative standard of what people should do,
2. finds an example of relatively widespread failure to fulfill this requirement, and
3. claims that subjects exercise judgment poorly in these contexts or that their choices actually maximize the satisfaction of relatively obscure preferences subjects would not attribute to themselves (Berg & Gigerenzer, 2010; Binmore, 2005).

In the moral domain, this strategy is commonly used to find evidence for so-called prosocial preferences in experimental economic games. The design of such studies looks as follows (e.g., Caporael et al., 1989): one first argues that the normative standard (step 1) is to blindly pursue short-term material self-interest (even if it might lead to long-term reputational losses). Having assumed this normative standard, one uses experimental games to find that humans often violate the standard (step 2). Theorists then infer that humans have preferences for fairness that they maximize (step 3). When researchers assume that subjects maximize hidden preferences, they are employing an overly convenient strategy for gathering evidence about human decisions. Instead of considering the possibility that humans employ multiple decision mechanisms beyond the determination of preference satisfaction, researchers assume a single principle or mechanism (utility maximization) and interpret decisions in its light, arguing that humans behave “as-if” they were reasoning in accord with the principle in question. To execute this strategy, researchers selectively alternate between proposals as to what is being maximized. There are various cognitive mechanisms that might explain why subjects behave “as-if” they are maximizing prosocial utility in various experimental games. Subjects might respond to what they take to be the well-being of others, or they might be blindly following learned moral rules that enjoin prosocial behavior, or they might care about their reputations and judge prosocial choice necessary for the maintenance of these reputations. Indeed, they might combine these concerns or act from none of them, choosing the prosocial outcome out of a misunderstanding of the choice situation (Burton-Chellew et al., 2016; Krebs, 1989). The “as-if” research philosophy allows researchers to pick among these possibilities and invoke whichever one suits them to interpret the evidence (see Hagen & Hammerstein, 2006). Such research does little to answer questions about how humans perceive dilemmas, what information they look for and in what order, and how they combine information to make decisions. The “as-if” approach will likely yield an inaccurate picture of human decision strategies, leading readers to conflate “as-if” proofs of consistency with claims about actual cognitive processes. An approach less liable to these faults would consider both the environment in which decisions are made and the structure of the decision mechanisms people bring to contexts of choice (Krasnow & Delton, 2016). We might begin by focusing on relatively simple decision mechanisms that respond to different situation types. In the case of the (moral) problem of cooperation with strangers, such an approach was taken by Delton et al. (2011), who argued that due to the existence of selective pressure to maintain long-term cooperative relationships, humans have evolved social heuristics that classify newly met people as potential future partners (see also Yamagishi et al., 2007). This decision strategy is adaptive in the sense of securing individual interests on an evolutionary time frame but also leads to errors when it misfires.
For instance, deviation is to be expected in those nontypical anonymous situations investigated experimentally by behavioral economists (e.g., Caporael et al., 1989). Behavioral economists pursuing the “as-if” methodology might attribute prosocial
decisions that thwart individual gain to subjects’ attempts to maximize the satisfaction of their prosocial preferences. But on the alternative approach described here, these results might be described as “errors,” as a form of decision making adapted for use within tightly knit tribes is applied in ecologically remote environments in which strangers interact without reputational consequences (Delton et al., 2011). It is commonly thought that heuristics-and-biases researchers (Kahneman, 2011) hunt judgmental errors, whereas fast-and-frugal heuristics researchers (Gigerenzer et al., 1999) think that these errors are not really so common and that human cognition should be perceived as generally accurate. Because of this, some theorists have argued that the difference between these two research paradigms is not really as profound as alleged (Samuels et al., 2002; Sinnott-Armstrong et al., 2010). But the discussion so far reveals that a central disagreement between those pursuing these two approaches lies in the methodological approach to decision making. Should we investigate human cognition by asking the whether question—are humans rational or irrational? Or should we begin by asking the when question—under what conditions are human decisions adaptive? I have argued that when researchers ask the whether question they are often led to use nonrepresentative, artificial stimuli and to evaluate data according to arbitrarily determined normative notions. In contrast, when researchers pose the when question, this leads them to investigate cognition in representative or natural settings and to evaluate performance in a less a priori way, by asking whether it is adaptive or instrumentally sound in those contexts (see Gigerenzer, 1996; Katsikopoulos, 2014). In the next section, I elaborate on the latter research strategy.
6. Adaptive Toolbox of Moral Heuristics

In this section I articulate strategies that might help address the methodological problems identified earlier. I organize this discussion around three principles that help define the idea that choice is determined by an adaptive toolbox (Gigerenzer, 2001): (1) domain specificity, (2) ecological rationality, and (3) psychological plausibility.
7. Domain Specificity

The first assumption of this approach is that mechanisms of choice and judgment are domain-specific (Cosmides & Tooby, 1987; Tooby & Cosmides, 1992). A domain is a subset of goals that humans seek, and it can be defined either as a task type (classification, estimation, etc.) or as an adaptive problem humans face, such as mate choice or alliance formation (Gigerenzer, 2001). The latter conceptualization is especially useful in thinking about moral judgment, which is hypothesized to be an adaptation that emerged because of its role in guiding human behavior in societal contexts in which individual interests conflict (Alexander, 1987). If we make these assumptions, we must begin by defining the problems humans solve in a domain-specific way (Kenrick et al., 2009; Tooby & Cosmides, 1992). Moreover, if we assume a roughly Darwinian framework, we can assume that the problems in question are more or less directly related to reproductive fitness. As a result, we can assume that our domain-specific problem-solving modules operate on information that is related to success as defined by fitness: e.g., cues of sexual interest, cues of submissiveness, and angry facial
expressions. From this perspective, rational rules such as maximization of expected utility (defined as monetary rewards) do not constitute a viable approach to modeling human behavior because they are domain-general and unboundedly flexible (Cosmides & Tooby, 1987; Gigerenzer, 2001).7 Once we assume a high level of modularity, we need some conceptualization of the goals of our differing decision mechanisms: ecologically realistic examples might include self-protection or mate acquisition (see Kenrick et al., 2010). We also need some conceptualization of the environment in which these mechanisms are deployed: e.g., what obstacles are most commonly encountered and what cues are available for mechanisms dedicated to surmounting these obstacles (Tooby & Cosmides, 1992). Given the multiple roles agents play in their social groups, understanding social decision making requires taking into consideration the intractability of the social world and the presence of competing goals (Hertwig & Herzog, 2009; Hertwig & Hoffrage, 2013). In the moral domain, it needs to be made clear what adaptive purpose (or purposes) a posited decision mechanism is supposed to serve. The adaptiveness of a decision mechanism crucially depends on the selective pressures an organism faces. We can assume that domain-specific decision mechanisms evolved because they enhanced the reproductive fitness of individuals (or their kin) in the environments in which they were selected (Cosmides & Tooby, 2006). Importantly, the adaptive purpose served by a given mechanism must not be conflated with the normative goals of someone whose mind incorporates that mechanism. Humans may employ moral cognition to guide their own behavior in the social world in order to avoid retaliation, to manipulate others, or to make impartial judgments (Asao & Buss, 2016). Depending on the goal to which morality is put, agents’ decision mechanisms are tuned to their underlying interests and past record of interactions with particular individuals (see also Tooby & Cosmides, 2010).8
8. Ecological Rationality

Another principle of the adaptive toolbox vision of human cognition is a focus on investigating the ecological rationality of decision mechanisms, that is, their adaptation to the physical and social structure of the environment. The question that needs to be investigated concerns the fit between a given decision mechanism and the environment in which it operates: a decision mechanism that performs well in one environment need not perform well in another (Todd & Gigerenzer, 2007, 2012). As we have seen, some traditional moral theories propose that the morality of decision strategies should be judged according to their consistency with a priori accepted rules (i.e., deontology), or that they should be judged according to the consequences for happiness, satisfaction, or utility that they produce over the long term (i.e., consequentialism). Interestingly, these conceptualizations correspond to two central ideas that have dominated general efforts to understand the mind (see Gigerenzer, 2008b): logic, which assumes a goal of truth-seeking, resembles norm-obedience-seeking deontology, and probability, with its goal of optimization under uncertainty, plays a prominent role in consequentialism. The fast-and-frugal heuristics framework puts forward a third approach, which focuses on the functional value of decision strategies in a context (Gigerenzer, 2008b). There is no
universal normative method that determines the “correct” response to a given task; in the sense relevant here, correctness is understood as adaptiveness in the relevant environment. These considerations are especially relevant in the moral domain. Many theorists argue that decisions that yield normatively desirable consequences often result from environments that evoke adaptive psychological programs rather than from a genuine concern for moral rules (Cosmides & Tooby, 2006; Delton et al., 2011). Investigating moral cognition should not concern intrinsic properties of individuals but the interplay between their adaptive decision mechanisms and the environment. That decision mechanisms operating in the moral domain are adaptive does not by itself determine whether they yield normatively desirable consequences. For instance, adaptive risk-taking strategies can be exploited in novel gambling environments to make people lose money (Bennis et al., 2012). Similarly, adaptive social heuristics can be exploited to make people cooperate even if it is against their interests to do so (Delton et al., 2011). The methodology in question assumes that moral decision mechanisms are adaptations that were ecologically rational in the conditions under which they evolved, but this assertion says nothing about their current adaptiveness or about the normative desirability of the decisions they produce.9 Obviously, the latter assessment depends on normative criteria that are independent of the research approach.
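The environment-dependence of a mechanism’s performance can be illustrated with a toy simulation in the spirit of, but far simpler than, Delton et al.’s (2011) analysis: the same “treat a stranger as a potential future partner” heuristic is scored in two environments that differ only in how often partners are re-encountered. The payoffs and repeat probabilities are invented; this is not the published model.

```python
# Toy illustration of ecological rationality: one cooperation heuristic,
# two environments. Payoff values and repeat probabilities are invented.

import random

def average_payoff(p_repeat, encounters=10000, seed=1):
    """Average payoff of always opening with cooperation, when a fraction
    p_repeat of partners will be met again and then reciprocate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(encounters):
        if rng.random() < p_repeat:
            total += 5.0   # a cooperative relationship forms
        else:
            total += -1.0  # a one-shot stranger pockets the favor
    return total / encounters

print(average_payoff(p_repeat=0.8))  # ~3.8: adaptive in a tightly knit group
print(average_payoff(p_repeat=0.0))  # -1.0: systematic "errors" in anonymous
                                     # one-shot settings
```

The mechanism is unchanged across the two runs; only the environment differs, which is exactly the point of evaluating fit rather than intrinsic rationality.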
9. Psychological Plausibility

Models of decision making developed in accordance with this methodology need to begin with a clear conception of the challenges they solve (i.e., domain specificity) and the structure of the environment in which the problem is solved (i.e., ecological rationality). To advance beyond the kind of “black boxing” indulged by those who pursue dual-process models, those pursuing the fast-and-frugal heuristics framework must postulate specific algorithmic descriptions of the information processing involved in meeting these ecological challenges (see also Marr, 1982). A guiding principle of the fast-and-frugal heuristics framework is that even though decision mechanisms are domain-specific and serve different goals, and thus the cues used to make decisions may vary considerably, there exists a common pressure across domains to minimize the cost of information search and develop efficient decision mechanisms10 (Todd et al., 2016). Therefore, a goal of the program is to uncover the building blocks in the design of psychologically plausible decision mechanisms—a “missing link” that bridges the vision of adaptive problems with the vision of decision making in the real world (Todd et al., 2016). In order to describe the mechanisms behind a given decision-making task, researchers need to identify the mechanism’s input. Moreover, to specify the algorithm computed by the mechanism from this input, theorists need to describe (1) the search rules that determine what information is taken into consideration, (2) the stopping rules that determine when decision makers stop gathering information, and finally (3) the decision rules that explain how the mechanism generates a decision from the information in its possession at stopping time. An ecologically rational decision mechanism is one that not only executes the final step (3) once the information is on the table but also successfully retrieves relevant information from the environment while ignoring
redundancies. It is an assumption of domain specificity that decision mechanisms are adapted to specific cue types, where these process inputs are determined both phylogenetically and ontogenetically (New et al., 2007). How can we rigorously investigate psychologically plausible decision mechanisms, given the theoretical approach adopted here? The following research principles might be useful (Marewski et al., 2010):

1. Decision models need to be precisely defined in terms of search, stopping, and decision rules,
2. Decision models should be tested comparatively,
3. Decision models need to be studied alongside models of strategy selection (i.e., individuals choosing different strategies for different tasks),
4. Decision models need to be evaluated according to how well they predict new data, as opposed to how well they fit it,
5. Decision models should be tested in the real world, as opposed to simplified artificial environments.

For the purposes of illustration, consider the study conducted by Tan et al. (2017), who investigated the decision of one human to forgive another for an indiscretion. The researchers assumed that in order to assess whether an interest-violating individual is an ally or a foe, humans combine domain-specific cues that discriminate between these types. To this end, they assumed that subjects are sensitive to cues of (1) the wrongdoer’s intention, (2) the justifiability of blaming that person, and (3) the presence of an apology. They also modeled the costs inherent to different decisions. These were in turn determined by the relationship’s value (i.e., the benefits of maintaining the relationship, if the person is an ally) and the risks of exploitation (i.e., the cost of maintaining the relationship and being exploited, if the person is not an ally). Tan et al. (2017) proposed two strategies that might be used to combine this information: a fast-and-frugal tree, that is, a noncompensatory strategy that sequentially considers cues in order to make a classification (see Luan et al., 2011; Martignon et al., 2008), and Franklin’s rule, a compensatory linear model that combines discretely valued cues. An advantage of such a modeling approach is that the underlying variables determining the decision criterion (i.e., internal regulatory variables; Delton & Robertson, 2016; Tooby & Cosmides, 2010) can be modeled in a psychologically plausible way as properties inherent to the heuristic’s structure (see also Tan, 2016). Tan et al. (2017) had their participants (1) report their subjective sense of the relative importance of the different cues they used when deciding whether to forgive; (2) recall a specific instance in which their interests were transgressed upon, specifying their decision with regard to the transgressor, the presence of cues, and their estimate of the wrongdoer’s potential impact on their well-being (i.e., relationship value and exploitation risk); and (3) respond to hypothetical dilemmas in which the cues of being an ally versus a foe were varied. From these data, they investigated which of the hypothesized decision mechanisms best accounted for the choices made, given reported sensitivities to cues. This was done both in fitting the mechanisms to decisions made in hypothetical scenarios and in predicting decisions in recalled conflicts. It was found that in predicting participants’ decisions, the models achieved an accuracy of approximately 70%, a
result significantly above the chance level. The study did not, however, find a difference between the accuracy of the fast-and-frugal tree and that of the linear classifier. Despite the study’s reliance on self-reports and on hypothetical scenarios that fail to elicit a psychologically compelling experience of the situation in which the decision task takes place (see Bauman et al., 2014; Teper et al., 2015), its debatable determination of relevant cues,11 its omission of some of Marewski et al.’s (2010) principles (there was no investigation of individual differences via strategy selection; see also Marewski & Link, 2014), and its failure to find significant differences in the performance of the heuristic versus the linear model on the chosen tasks, this study is a useful example of an attempt to meet the methodology described here.12
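The structural difference between the two models Tan et al. (2017) compared can be shown in a few lines. The sketch below is a stylized reconstruction, not the fitted models from the study: the cue order in the tree and the weights and threshold in the linear rule are invented here for illustration.

```python
# Stylized contrast between a noncompensatory fast-and-frugal tree and a
# compensatory linear (Franklin's rule) model of the forgiveness decision.
# Cue order, weights, and threshold are hypothetical, not fitted values.

def forgive_fft(intent_benign, blame_unjustified, apologized):
    """Fast-and-frugal tree: inspect cues sequentially; some cue values
    force an exit no matter what the remaining cues say."""
    if not intent_benign:
        return False          # the harm looked intentional: do not forgive
    if not blame_unjustified:
        return False          # blaming the person is justified: do not forgive
    return bool(apologized)   # the final cue decides

def forgive_franklin(intent_benign, blame_unjustified, apologized,
                     weights=(0.5, 0.3, 0.2), threshold=0.5):
    """Franklin's rule: a weighted sum of all cues; strong cues can
    compensate for weak ones."""
    score = (weights[0] * intent_benign
             + weights[1] * blame_unjustified
             + weights[2] * apologized)
    return score >= threshold

# An apology cannot rescue an intentional-looking transgression in the
# tree, but it can partially compensate in the linear rule:
case = dict(intent_benign=0, blame_unjustified=1, apologized=1)
print(forgive_fft(**case))       # False
print(forgive_franklin(**case))  # True (0.3 + 0.2 = 0.5 >= threshold)
```

Cases like this one are where the two models’ predictions can come apart; with the cue profiles actually sampled, Tan et al. found no accuracy difference between them.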
10. Conclusion

Moral psychology is a fascinating research area that has produced numerous challenging findings. Unfortunately, the majority of this work has been based on models of cognition that explain behavior in hindsight and that propose that people act “as-if” they were motivated Bayesians (Gino et al., 2016), “as-if” they temporarily suspend cognitive control during moral decisions (Batson et al., 1999), or “as-if” they engage in “ethical manoeuvring,” choosing norms favorable to them (Shalvi et al., 2011). Those who pursue the alternative fast-and-frugal heuristics framework do not deny that people often behave “as-if” they were doing these things but attempt to move the analysis one step forward and uncover the processes that are really operative when people make decisions and render judgments in these domains. Adoption of this approach promises to provide a more accurate description of human moral decision strategies and perhaps, too, a more informed set of prescriptions concerning how human moral behavior can be promoted by designing good decision environments (see also Cosmides & Tooby, 2006; Gigerenzer, 2010; Grüne-Yanoff & Hertwig, 2016).
Notes

1. By moral heuristic I refer to a decision strategy that is employed in challenging areas of the moral domain. I use the term in a descriptive sense. I have nothing to say about whether decision strategies are moral in a normative sense (i.e., whether they agree with some conception of what is good, justifiable, or desirable; see Bedke, this volume, Chapter 18).
2. There exist proposals within moral psychology that use the term “heuristics” (see Sunstein, 2005, 2008), but—as will become clear later in this chapter—these are methodologically and theoretically distinct from the fast-and-frugal heuristics approach referred to here (for a comparison, see Chow, 2015).
3. See too Chapter 9 of this volume for criticism of such models.
4. For further discussion of the model of moral cognition articulated by Haidt and colleagues, see Chapters 1, 2, 4, 5, 6, 7, and 16 of this volume.
5. Greene’s view of moral judgment is further discussed in Chapters 4, 6, and 7 of this volume.
6. Different versions of this theory explain differences in behavior by positing that individuals put different emphases on individual and group interests (e.g., Kluver et al., 2014) or that their moral identity is important to a differing extent (e.g., Aquino & Reed II, 2002).
7. See Chapter 9 of this volume for extended arguments for the massive modularity of moral judgment.
8. This point is elaborated in Chapter 9 of this volume.
9. The term “adaptive” is sometimes used in a different way to denote “social desirability,” especially where morally significant behaviors are concerned (see Frankenhuis & Del Giudice, 2012), and the term “heuristic” is sometimes associated with decision mechanisms that are morally (in a normative sense) inferior (Sunstein, 2008).
10. However, it needs to be noted that not all researchers in the fast-and-frugal heuristics tradition explicitly embrace the evolutionary approach as advocated in the current chapter.
11. Tan et al. note the problem of including blame as a cue (2017, 32–33). Its status is problematic as it is not an objective atomic property, such as the presence of an apology, but a post hoc rationalization of retributive sentiments (see e.g., Clark et al., 2014; Cushman, 2008).
12. See too the plethora of studies reported in Chapter 9 of this volume.
References

Alexander, R. D. (1987). The Biology of Moral Systems. Hawthorne, NY: Aldine de Gruyter.
Aquino, K. and Reed II, A. (2002). “The Self-Importance of Moral Identity,” Journal of Personality and Social Psychology, 83 (6), 1423–1440. doi:10.1037/0022-3514.83.6.1423.
Asao, K. and Buss, D. M. (2016). “The Tripartite Theory of Machiavellian Morality: Judgment, Influence, and Conscience as Distinct Moral Adaptations,” in T. K. Shackelford and R. D. Hansen (eds.), The Evolution of Morality. New York: Springer International Publishing, 3–25.
Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H. and Strongman, J. A. (1999). “Moral Hypocrisy: Appearing Moral to Oneself Without Being So,” Journal of Personality and Social Psychology, 77 (3), 525–537. doi:10.1037/0022-3514.77.3.525.
Bauman, C. W., McGraw, A. P., Bartels, D. M. and Warren, C. (2014). “Revisiting External Validity: Concerns About Trolley Problems and Other Sacrificial Dilemmas in Moral Psychology,” Social and Personality Psychology Compass, 8, 536–554. doi:10.1111/spc3.12131.
Bennis, W. M., Katsikopoulos, K. V., Goldstein, D. G., Dieckmann, A. and Berg, N. (2012). “Designed to Fit Minds: Institutions and Ecological Rationality,” in P. M. Todd, G. Gigerenzer and The ABC Research Group (eds.), Ecological Rationality: Intelligence in the World. Oxford: Oxford University Press, 409–428.
Berg, N. and Gigerenzer, G. (2010). “As-If Behavioral Economics: Neoclassical Economics in Disguise?” History of Economic Ideas, 18 (1), 133–165. www.jstor.org/stable/23723790.
Binmore, K. (2005). Natural Justice. New York: Oxford University Press.
Burton-Chellew, M. N., El Mouden, C. and West, S. A. (2016). “Conditional Cooperation and Confusion in Public-Goods Experiments,” Proceedings of the National Academy of Sciences, 113 (5), 1291–1296. doi:10.1073/pnas.1509740113.
Caporael, L. R., Dawes, R. M., Orbell, J. M. and van de Kragt, A. J. C. (1989). “Selfishness Examined: Cooperation in the Absence of Egoistic Incentives,” Behavioral and Brain Sciences, 12 (4), 683–699. doi:10.1017/s0140525x00025292.
Capraro, V. and Cococcioni, G. (2016). “Rethinking Spontaneous Giving: Extreme Time Pressure and Ego-Depletion Favor Self-Regarding Reactions,” Scientific Reports, 6, 27219. doi:10.1038/srep27219.
Chow, S. J. (2015). “Many Meanings of ‘Heuristic’,” The British Journal for the Philosophy of Science, 66 (4), 977–1016. doi:10.1093/bjps/axu028.
Clark, C. J., Luguri, J. B., Ditto, P. H., Knobe, J., Shariff, A. F. and Baumeister, R. F. (2014). “Free to Punish: A Motivated Account of Free Will Belief,” Journal of Personality and Social Psychology, 106 (4), 501–513. doi:10.1037/a0035880.
Cosmides, L. and Tooby, J. (1987). “From Evolution to Behavior: Evolutionary Psychology as the Missing Link,” in J. Dupre (ed.), The Latest on the Best: Essays on Evolution and Optimality. Cambridge, MA: MIT Press, 277–306.
———. (1994). “Beyond Intuition and Instinct Blindness: Toward an Evolutionarily Rigorous Cognitive Science,” Cognition, 50 (1), 41–77. doi:10.1016/0010-0277(94)90020-5.
———. (2006). “Evolutionary Psychology, Moral Heuristics, and the Law,” in G. Gigerenzer and C. Engel (eds.), Heuristics and the Law. Cambridge, MA: MIT Press, 181–212.
Cushman, F. (2008). “Crime and Punishment: Distinguishing the Roles of Causal and Intentional Analyses in Moral Judgment,” Cognition, 108 (2), 353–380. doi:10.1016/j.cognition.2008.03.006.
Delton, A. W., Krasnow, M. M., Cosmides, L. and Tooby, J. (2011). “Evolution of Direct Reciprocity Under Uncertainty Can Explain Human Generosity in One-Shot Encounters,” Proceedings of the National Academy of Sciences, 108 (32), 13335–13340. doi:10.1073/pnas.1102131108.
Delton, A. W. and Robertson, T. E. (2016). “How the Mind Makes Welfare Tradeoffs: Evolution, Computation, and Emotion,” Current Opinion in Psychology, 7, 12–16. doi:10.1016/j.copsyc.2015.06.006.
Dubljević, V. and Racine, E. (2014). “The ADC of Moral Judgment: Opening the Black Box of Moral Intuitions with Heuristics About Agents, Deeds, and Consequences,” AJOB Neuroscience, 5 (4), 3–20. doi:10.1080/21507740.2014.939381.
Fleischhut, N. and Gigerenzer, G. (2013). “Can Simple Heuristics Explain Moral Inconsistencies?” in R. Hertwig, U. Hoffrage and The ABC Research Group (eds.), Simple Heuristics in a Social World. Oxford: Oxford University Press, 459–485.
Frankenhuis, W. E. and Del Giudice, M. (2012). “When Do Adaptive Developmental Mechanisms Yield Maladaptive Outcomes?” Developmental Psychology, 48 (3), 628–642. doi:10.1037/a0025629.
Gigerenzer, G. (1996). “On Narrow Norms and Vague Heuristics: A Reply to Kahneman and Tversky,” Psychological Review, 103 (3), 592–596. doi:10.1037/0033-295X.103.3.592.
———. (1998). “Surrogates for Theories,” Theory and Psychology, 8 (2), 195–204. doi:10.1177/0959354398082006.
———. (2001). “The Adaptive Toolbox,” in G. Gigerenzer and R. Selten (eds.), Bounded Rationality: The Adaptive Toolbox. Cambridge, MA: MIT Press, 37–50.
———. (2008a). “Moral Intuition = Fast and Frugal Heuristics?” in W. Sinnott-Armstrong (ed.), Moral Psychology. Volume 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 1–26.
———. (2008b). “Why Heuristics Work,” Perspectives on Psychological Science, 3 (1), 20–29. doi:10.1111/j.1745-6916.2008.00058.x.
———. (2010). “Moral Satisficing: Rethinking Moral Behavior as Bounded Rationality,” Topics in Cognitive Science, 2 (3), 528–554. doi:10.1111/j.1756-8765.2010.01094.
Gigerenzer, G. and Gaissmaier, W. (2011). “Heuristic Decision Making,” Annual Review of Psychology, 62, 451–482. doi:10.1146/annurev-psych-120709-145346.
Gigerenzer, G., Hoffrage, U. and Goldstein, D. G. (2008). “Fast and Frugal Heuristics Are Plausible Models of Cognition: Reply to Dougherty, Franco-Watkins, and Thomas,” Psychological Review, 115 (1), 230–239. doi:10.1037/0033-295X.115.1.230.
Gigerenzer, G., Todd, P. and The ABC Research Group (eds.). (1999). Simple Heuristics That Make Us Smart. Oxford: Oxford University Press.
Gino, F., Norton, M. I. and Weber, R. A. (2016). “Motivated Bayesians: Feeling Moral While Acting Egoistically,” The Journal of Economic Perspectives, 30 (3), 189–212. doi:10.1257/jep.30.3.189.
Gottfredson, M. R. and Hirschi, T. (1990). A General Theory of Crime. Stanford, CA: Stanford University Press.
Greene, J. (2008). “Reply to Mikhail and Timmons,” in W. Sinnott-Armstrong (ed.), Moral Psychology Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, 105–117.
———. (2013). Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Penguin Classics.
Greene, J. and Haidt, J. (2002). “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences, 6 (12), 517–523. doi:10.1016/S1364-6613(02)02011-9.
Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M. and Cohen, J. D. (2004). “The Neural Bases of Cognitive Conflict and Control in Moral Judgment,” Neuron, 44 (2), 389–400. doi:10.1016/j.neuron.2004.09.027.
Grüne-Yanoff, T. and Hertwig, R. (2016). “Nudge Versus Boost: How Coherent Are Policy and Theory?” Minds and Machines, 26 (1), 149–183. doi:10.1007/s11023-015-9367-9.
Hagen, E. H. and Hammerstein, P. (2006). “Game Theory and Human Evolution: A Critique of Some Recent Interpretations of Experimental Games,” Theoretical Population Biology, 69 (3), 339–348. doi:10.1016/j.tpb.2005.09.005.
Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 108 (4), 814–834. doi:10.1037/0033-295X.108.4.814.
Hertwig, R. and Herzog, S. M. (2009). “Fast and Frugal Heuristics: Tools of Social Rationality,” Social Cognition, 27 (5), 661–698. doi:10.1521/soco.2009.27.5.661.
Hertwig, R. and Hoffrage, U. (2013). “Simple Heuristics: The Foundations of Adaptive Social Behavior,” in R. Hertwig, U. Hoffrage and The ABC Research Group (eds.), Simple Heuristics in a Social World. Oxford: Oxford University Press, 3–36.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Katsikopoulos, K. V. (2014). “Bounded Rationality: The Two Cultures,” Journal of Economic Methodology, 21 (4), 361–374. doi:10.1080/1350178X.2014.965908.
Kenrick, D. T., Griskevicius, V., Neuberg, S. L. and Schaller, M. (2010). “Renovating the Pyramid of Needs: Contemporary Extensions Built Upon Ancient Foundations,” Perspectives on Psychological Science, 5, 292–314. doi:10.1177/1745691610369469.
Kenrick, D. T., Griskevicius, V., Sundie, J. M., Li, N. P., Li, Y. J. and Neuberg, S. L. (2009). “Deep Rationality: The Evolutionary Economics of Decision Making,” Social Cognition, 27 (5), 764–785. doi:10.1521/soco.2009.27.5.764.
Keren, G. and Schul, Y. (2009). “Two Is Not Always Better Than One: A Critical Evaluation of Two-System Theories,” Perspectives on Psychological Science, 4 (6), 533–550.
Kluver, J., Frazier, R. and Haidt, J. (2014). “Behavioral Ethics for Homo Economicus, Homo Heuristicus, and Homo Duplex,” Organizational Behavior and Human Decision Processes, 123 (2), 150–158. doi:10.1016/j.obhdp.2013.12.004.
Kohlberg, L. (1984). Essays on Moral Development: Vol. 2. The Psychology of Moral Development. New York: Harper & Row.
Krasnow, M. M. and Delton, A. W. (2016). “Are Humans Too Generous and Too Punitive? Using Psychological Principles to Further Debates About Human Social Evolution,” Frontiers in Psychology, 7, 799. doi:10.3389/fpsyg.2016.00799.
Krebs, D. (1989). “Egoistic Incentives in Experimental Games,” Behavioral and Brain Sciences, 12 (4), 713. doi:10.1017/s0140525x00025449.
Krebs, D. L., Denton, K. and Wark, G. (1997). “The Forms and Functions of Real-Life Moral Decision-Making,” Journal of Moral Education, 26, 131–145. doi:10.1080/0305724970260202.
Kruglanski, A. W. and Gigerenzer, G. (2011). “Intuitive and Deliberate Judgments Are Based on Common Principles,” Psychological Review, 118 (1), 97–109. doi:10.1037/a0020762.
Luan, S., Schooler, L. J. and Gigerenzer, G. (2011). “A Signal-Detection Analysis of Fast-and-Frugal Trees,” Psychological Review, 118 (2), 316–338. doi:10.1037/a0022684.
Marewski, J. N. and Link, D. (2014). “Strategy Selection: An Introduction to the Modeling Challenge,” Wiley Interdisciplinary Reviews: Cognitive Science, 5 (1), 39–59. doi:10.1002/wcs.1265.
Marewski, J. N., Schooler, L. J. and Gigerenzer, G. (2010). “Five Principles for Studying People’s Use of Heuristics,” Acta Psychologica Sinica, 42 (1), 72–87. doi:10.3724/SP.J.1041.2010.00072.
Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco, CA: W. H. Freeman.
Martignon, L., Katsikopoulos, K. V. and Woike, J. K. (2008). “Categorization with Limited Resources: A Family of Simple Heuristics,” Journal of Mathematical Psychology, 52 (6), 352–361. doi:10.1016/j.jmp.2008.04.003.
Mikhail, J. (2008). “Moral Cognition and Computational Theory,” in W. Sinnott-Armstrong (ed.), Moral Psychology Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, 80–91.
Moore, C. and Tenbrunsel, A. E. (2014). “Just Think About It? Cognitive Complexity and Moral Choice,” Organizational Behavior and Human Decision Processes, 123 (2), 138–149. doi:10.1016/j.obhdp.2013.10.006.
Moore, D. A. and Loewenstein, G. (2004). “Self-Interest, Automaticity, and the Psychology of Conflict of Interest,” Social Justice Research, 17 (2), 189–202. doi:10.1023/B:SORE.0000027409.88372.b4.
Muraven, M. and Baumeister, R. F. (2000). “Self-Regulation and Depletion of Limited Resources: Does Self-Control Resemble a Muscle?” Psychological Bulletin, 126 (2), 247–259. doi:10.1037/0033-2909.126.2.247.
New, J., Cosmides, L. and Tooby, J. (2007). “Category-Specific Attention for Animals Reflects Ancestral Priorities, Not Expertise,” Proceedings of the National Academy of Sciences, 104 (42), 16598–16603. doi:10.1073/pnas.0703913104.
Rand, D. G., Greene, J. D. and Nowak, M. A. (2012). “Spontaneous Giving and Calculated Greed,” Nature, 489 (7416), 427–430. doi:10.1038/nature11467.
Samuels, R., Stich, S. and Bishop, M. (2002). “Ending the Rationality Wars: How to Make Disputes About Human Rationality Disappear,” in R. Elio (ed.), Common Sense, Reasoning and Rationality. New York: Oxford University Press, 236–268.
Savage, L. J. (1954). The Foundations of Statistics. New York: Wiley-Blackwell.
Shalvi, S., Eldar, O. and Bereby-Meyer, Y. (2012). “Honesty Requires Time (and Lack of Justifications),” Psychological Science, 23 (10), 1264–1270. doi:10.1177/0956797612443835.
Shalvi, S., Handgraaf, M. J. and De Dreu, C. K. (2011). “Ethical Manoeuvring: Why People Avoid Both Major and Minor Lies,” British Journal of Management, 22 (s1), S16–S27. doi:10.1111/j.1467-8551.2010.00709.x.
Simon, H. A. (1956). “Rational Choice and the Structure of the Environment,” Psychological Review, 63 (2), 129–138. doi:10.1037/h0042769.
Sinnott-Armstrong, W., Young, L. and Cushman, F. (2010). “Moral Intuitions,” in J. M. Doris and The Moral Psychology Research Group (eds.), The Moral Psychology Handbook. New York: Oxford University Press, 246–272.
Sunstein, C. R. (2005). “Moral Heuristics,” Behavioral and Brain Sciences, 28 (4), 531–542. doi:10.1017/S0140525X05000099.
———. (2008). “Fast, Frugal, and (Sometimes) Wrong,” in W. Sinnott-Armstrong (ed.), Moral Psychology Volume 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 27–30.
Tan, J. H. (2016). “Process Modeling in Social Decision Making,” Doctoral dissertation. www.diss.fu-berlin.de/diss/receive/FUDISS_thesis_000000103345 [Accessed February 17, 2017].
Tan, J. H., Luan, S. and Katsikopoulos, K. (2017). “A Signal-Detection Approach to Modeling Forgiveness Decisions,” Evolution and Human Behavior, 38 (1), 27–38. doi:10.1016/j.evolhumbehav.2016.06.004.
Teper, R., Zhong, C.-B. and Inzlicht, M. (2015). “How Emotions Shape Moral Behavior: Some Answers (and Questions) for the Field of Moral Psychology,” Social and Personality Psychology Compass, 9, 1–14. doi:10.1111/spc3.12154.
Todd, P. M. and Gigerenzer, G. (2007). “Environments That Make Us Smart: Ecological Rationality,” Current Directions in Psychological Science, 16 (3), 167–171. doi:10.1111/j.1467-8721.2007.00497.x.
———. (2012). “What Is Ecological Rationality?” in P. M. Todd, G. Gigerenzer and The ABC Research Group (eds.), Ecological Rationality: Intelligence in the World. Oxford: Oxford University Press, 3–30.
Todd, P. M., Hertwig, R. and Hoffrage, U. (2016). “Evolutionary Cognitive Psychology,” in D. M. Buss (ed.), The Handbook of Evolutionary Psychology (2nd ed., Vol. 2). Hoboken, NJ: Wiley-Blackwell, 885–903.
Tooby, J. and Cosmides, L. (1992). Ecological Rationality and the Multimodular Mind. Center for Evolutionary Psychology Technical Report #92–91. Santa Barbara, CA: University of California.
———. (2010). “Groups in Mind: The Coalitional Roots of War and Morality,” in H. Høgh-Olesen et al. (eds.), Human Morality and Sociality: Evolutionary and Comparative Perspectives. New York: Palgrave Macmillan, 191–234.
van Bavel, J. J., FeldmanHall, O. and Mende-Siedlecki, P. (2015). “The Neuroscience of Moral Cognition: From Dual Processes to Dynamic Systems,” Current Opinion in Psychology, 6, 167–172. doi:10.1016/j.copsyc.2015.08.009.
Wallach, W. and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford: Oxford University Press.
Yamagishi, T., Terai, S., Kiyonari, T., Mifune, N. and Kanazawa, S. (2007). “The Social Exchange Heuristic: Managing Errors in Social Exchange,” Rationality and Society, 19 (3), 259–291. doi:10.1177/1043463107080449.
Acknowledgements

I would like to thank Julian Marewski and Aaron Zimmerman for their helpful comments on earlier versions of this chapter.
Further Readings

For a general introduction to the fast-and-frugal heuristics framework, see G. Gigerenzer, P. Todd and The ABC Research Group, eds., Simple Heuristics That Make Us Smart (Oxford: Oxford University Press, 1999). For a discussion of the methods used in this research program, see J. N. Marewski, L. J. Schooler and G. Gigerenzer, “Five Principles for Studying People’s Use of Heuristics,” Acta Psychologica Sinica, 42 (1), 72–87, 2010. doi:10.3724/SP.J.1041.2010.00072. For an overview of research on heuristics used in social decision making, see R. Hertwig, U. Hoffrage and The ABC Research Group, eds., Simple Heuristics in a Social World (Oxford: Oxford University Press, 2013). For contributions primarily focused on moral decision making, see G. Gigerenzer, “Moral Satisficing: Rethinking Moral Behavior as Bounded Rationality,” Topics in Cognitive Science, 2 (3), 528–554, 2010. doi:10.1111/j.1756-8765.2010.01094, as well as G. Gigerenzer and C. Engel, eds., Heuristics and the Law (Cambridge, MA: MIT Press, 2006).
Related Chapters

Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 3 Normative Practices of Other Animals; Chapter 4 The Neurological Basis of Moral Psychology; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 9 The Evolution of Moral Cognition; Chapter 14 Nihilism and the Epistemic Profile of Moral Judgment; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 18 Moral Intuition; Chapter 28 Decision Making Under Moral Uncertainty.
9
THE EVOLUTION OF MORAL COGNITION
Leda Cosmides, Ricardo Andrés Guzmán, and John Tooby
1. Introduction

Moral concepts, judgments, sentiments, and emotions pervade human social life. We consider certain actions obligatory, permitted, or forbidden, recognize when someone is entitled to a resource, and evaluate character using morally tinged concepts such as cheater, free rider, cooperative, and trustworthy. Attitudes, actions, laws, and institutions can strike us as fair, unjust, praiseworthy, or punishable: moral judgments. Morally relevant sentiments color our experiences—empathy for another’s pain, sympathy for their loss, disgust at their transgressions—and our decisions are influenced by feelings of loyalty, altruism, warmth, and compassion. Full-blown moral emotions organize our reactions—anger toward displays of disrespect, guilt over harming those we care about, gratitude for those who sacrifice on our behalf, outrage at those who harm others with impunity. A newly reinvigorated field, moral psychology, is investigating the genesis and content of these concepts, judgments, sentiments, and emotions. This handbook reflects the field’s intellectual diversity: Moral psychology has attracted psychologists (cognitive, social, developmental), philosophers, neuroscientists, evolutionary biologists, primatologists, economists, sociologists, anthropologists, and political scientists. Issues fundamental to each researcher’s home field animate their questions. Investigators who started in philosophy might design experiments inspired by Kant, Mill, and Bentham to see when our moral judgments reflect deontic intuitions or deliberative reasoning about utilitarian consequences. Economists assume that decision-makers maximize their utility when making choices; when subjects in their experiments behave altruistically or punish free riders, they write utility functions that include “social preferences” to explain these choices. Evolutionary biologists model natural selection to understand which kinds of altruism it can favor. Anthropologists ask whether the content of morality varies capriciously across cultures or displays predictable patterns. Sociologists and political scientists see how trust and cooperation shape institutions and are, in turn, shaped by them. Developmentalists want to know whether infants have moral intuitions or begin life without them. Primatologists look for traces of human moral sentiments in our primate cousins to ascertain the phylogeny of morality. Social and cognitive psychologists argue about the respective roles played by emotion and reasoning in moral judgment. Cognitive neuroscientists address the
emotion/reasoning debate by seeing which parts of the brain are activated when people make moral judgments. Neurologists ask whether moral judgment changes when people suffer damage to neural circuits that underwrite empathy. All interesting questions. Here we illustrate how issues relevant to moral epistemology are studied in evolutionary psychology. As in the rest of the cognitive sciences, research in evolutionary psychology tests hypotheses about the architecture of the human mind: the information-processing systems that reliably develop in all neurotypical members of our species. It departs from traditional approaches by making use of an often overlooked fact: These cognitive systems evolved to solve problems of survival and reproduction faced by our hunter-gatherer ancestors. Theories of adaptive function, which specify these problems and what counts as a solution, are used to generate testable hypotheses about the design of these mechanisms. This research method has led to the discovery of many new, previously unknown features of attention, memory, reasoning, learning, emotion, decision making, and choice (e.g., Buss, 2015; Cosmides & Tooby, 2013; Lewis et al., 2017). And it has uncovered evidence of computational systems that are functionally specialized for regulating social interactions. Embedded in these evolved systems are mechanisms of inference, judgment, and choice that generate intuitions about how we ought to treat others and how others ought to treat us: moral intuitions. That makes research on their design of direct relevance to moral psychology. We are not claiming that all the intuitions, inferences, concepts, emotions, and judgments commonly thought of as “moral” are generated by one “moral module”—that is, by a single faculty of moral cognition that applies the same ethical principles to every domain of social life. The evidence accumulated so far—from evolutionary game theory, human behavioral ecology, paleoanthropology, studies of modern hunter-gatherers, and detailed research on cognitive processes—converges on a different view: What Darwin called the human moral sense arises from a number of different computational systems, each specialized for a different domain of social interaction. A single faculty of moral cognition is unlikely to exist because a single faculty of social cognition is unlikely to exist.
2. Why Would Selection Favor Multiple Systems Regulating Social Behavior?

Is all social behavior generated by a single cognitive system, a “faculty of social cognition”? The hypothesis that natural selection produced one system to handle functions as diverse as courting mates, helping kin, trading favors, and battling enemies is unlikely, for reasons we explain in this chapter. Ironically, a shorthand for talking about evolution and social behavior has contributed to the single-faculty view. In summarizing an evolutionary perspective, people occasionally say that organisms are “motivated to spread their genes.” This creates the false impression that organisms have a single motivation—to spread their genes—and a general cognitive system that figures out how to do this. The same impression—that the mind is a blank slate equipped with a single goal—is created when animals are described as “choosing” to behave in ways that are adaptive—that is, in ways that increase the number of offspring that they (and their close relatives) eventually raise to reproductive maturity. The mind does not—and cannot—work that way. It is impossible for a general-purpose cognitive system—one devoid of programs specialized for different social domains—to
compute which course of action available to you now will maximize the number of offspring you (or your relatives) produce in the distant future. The full argument is beyond the scope of this chapter, but can be found in Cosmides and Tooby (1987, 1994) and Tooby and Cosmides (1990a, 1992). Organisms are not “motivated to spread their genes”—although it may sometimes appear that way. It sows error and confusion to say (for example) that human mothers love and care for their children because they have a “selfish desire to spread their genes”—especially when discussing topics relevant to morality, such as altruism and selfishness. Maternal care does not exist in many species, but it does in primates: Primate mothers monitor their juvenile offspring, stay close to them, groom them, risk their own safety to protect them, and expend energy to feed them. Let’s call the cognitive system that motivates this suite of behaviors maternal love. The care this system generated had consequences for a female primate’s infants: It increased the probability that her offspring survived to reproductive age. Maternal love exists in our species because ancestral mothers who had this motivational system had more surviving children than those that did not, and those children inherited their mothers’ adaptations for maternal care. Over deep time in the hominin line, motivational systems causing maternal care replaced alternative designs that led to neglect. We are descended from ancestral mothers who reliably developed adaptations that caused them to love, rather than neglect, their children. To say mothers love their children because they “want to spread their genes” posits an intention that does not exist and confuses levels of causation. Evolutionary biologists always distinguish adaptations—which are properties of phenotypes—from the selection pressures that caused them to evolve.
Distinguishing Proximate and Ultimate Causes

An organism’s behavior is generated by cognitive adaptations: computational systems that were built by natural selection. The function of these evolved systems is to acquire information and use it to regulate behavior. Identifying these mechanisms and the information to which they are responding provides a causal explanation of the organism’s behavior in the here and now (what biologists call a proximate explanation). But the computational properties of these adaptations exist as a downstream consequence of the manner in which they regulated behavior in past environments. Identifying the selection pressures that shaped these properties over deep time, and why they engineered a computational system with that design rather than an alternative design, provides a causal explanation, too: an ultimate (or functional) explanation. The behavior produced by a mechanism has reproductive consequences: An animal with that mechanism might evade more predators, more accurately remember the location of fruiting trees, or choose more helpful cooperative partners than animals with a slightly different mechanism. Mutations can change the design of a mechanism, making it different from those found in other members of the species.1 In a population of sexually reproducing organisms, a design feature that promotes reproduction better than existing alternatives leaves more replicas of itself in the next generation; over many generations, its relative frequency in the population increases until (usually) it replaces the alternative design (see below). For this reason, evolutionary biologists expect animal behavior to be regulated by computational systems that “tracked fitness” ancestrally—systems equipped
with features that produced adaptive (reproduction-promoting) behavior in the environments that selected for their design.
Ancestral Domains of Social Interaction

With this in mind, let us now return to the original question. Would selection have favored a single faculty of social cognition over alternative designs that existed ancestrally? Would a single faculty have replaced—and subsumed the functions of—a set of functionally distinct cognitive adaptations, each specialized for regulating behavior in a different domain of social interaction? To address this question, we first need to consider what kinds of social interactions our ancestors routinely engaged in. The hunter-gatherer ancestors from whom we are descended engaged in many different types of social interaction. They hunted cooperatively, pooled risk by sharing food, formed long-term mating relationships, had short-term sexual liaisons, raised children, helped close kin, exchanged goods and favors, supported friends in disputes, competed for status, engaged in warfare, and weathered natural disasters together. Task analyses based on evolutionary game theory, human behavioral ecology, and what is known about ancestral environments indicate that what counted as adaptive (reproduction-promoting) behavior differed across these domains of social interaction and varied with the type of relationship (e.g., kin, mate, friend, rival). Consider the following examples.

• When foraging success is determined more by luck than by effort, pooling risk by sharing food widely in the band benefits the individuals involved (Kaplan et al., 2012). Forming a risk pool is not adaptive, however, when productivity is a function of effort rather than luck. Evolved intuitions about when one “ought” to share, how much, and with whom can be expected to differ accordingly.
• Inflicting harm can promote the reproduction of individuals and their family members when the target is a man from a rival group, but it is rarely adaptive when he is a bandmate (Boehm, 2001; Wrangham & Peterson, 1997). The ethnographic record suggests that moral sentiments track this difference: Killing outgroup rivals commonly elicits pride and praise (Chagnon, 1992; Macfarlan et al., 2014); killing an ingroup member commonly elicits shame, anger, and censure (Boehm, 2012).
• Group cooperation unravels if free riders are not punished (Fehr & Gächter, 2000; Krasnow et al., 2015; Masclet et al., 2003; Yamagishi, 1986). But cooperation between two individuals can be sustained without punishing cheaters, when the option to switch partners exists (André & Baumard, 2011; Debove et al., 2015).
• Fidelity requires different actions (or inaction) depending on whether one is courting a mate or a political ally (Buss et al., 1992; Tooby & Cosmides, 2010).
• Reciprocating favors is necessary to maintain cooperation between friends (Trivers, 1971), but close relatives need not reciprocate help to continue receiving it (Hamilton, 1964).

These are just a few examples in which selection pressures differ radically across domains of social interaction. Each implies different inferences about how others “ought” to be treated and how others “ought” to treat us. This means that an evolved system designed to produce adaptive social inferences in one of these ancestral domains would fail to produce adaptive
inferences in the other domains. To produce adaptive behavior across all of these ancestral domains, each domain would have to activate a different set of cognitive adaptations. The brain can be viewed as a set of evolved programs: computational systems that analyze situations and generate choices. Natural selection will not favor a single cognitive system regulating choices—moral or otherwise—when programs tailored for tracking fitness in one domain (e.g., cooperative hunting, followed by sharing) require features that fail to do so in others (e.g., courtship, with competition for exclusive access to mates). To generate choices that tracked fitness ancestrally, the human cognitive architecture would need to have a number of different cognitive systems regulating social behavior, each tailored for a different class of social interactions (Bugental, 2000; Cosmides & Tooby, 1987, 1992, 1994; Haidt, 2012).
Multiple Systems to Implement Multiple Functions

Because what counts as the (adaptively) wrong thing to do differed from domain to domain, it is reasonable to predict the evolution of multiple systems regulating social interaction. Indeed, there should be as many domain-specific cognitive adaptations as there were ancestral domains in which the definitions of (evolutionarily) successful behavioral outcomes are incommensurate (for argument, see Tooby et al., 2005). Because each of these systems evolved to regulate a different class of social interactions, each can be expected to have a different computational design—a different set of interlocking features, including domain-specialized concepts, inferences, motivational states, emotions, sentiments, and decision rules. When activated, these features should operate in concert, producing social intuitions—inferences, judgments, and choices—that would have promoted reproduction in the ancestral social contexts that selected for their design. The content of these social intuitions should vary across domains, however, depending on which adaptive specialization is activated. That will depend on cues in an individual’s environment. To be activated under the right circumstances, each domain-specialized system needs a front end designed to detect its target domain—a situation detector. Selection should favor situation detectors that use cues that were statistically associated with the target domain ancestrally. These cues can be very concrete (like the cry of a hungry infant, which triggers the flow of breast milk in a nursing mother) or quite abstract (like a string of foraging failures so long that it is unlikely to reflect bad luck). The perception that negative outcomes are due to bad luck should activate different sharing rules than the perception that these same failures are due to lack of effort on the part of those asking to share (see below). If we have cognitive adaptations with this design, then motivations to share—including intuitions about which distributions are “fair”—will shift in an orderly way with perceptions of luck versus effort, as the sketch below illustrates.
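To make this concrete, here is a deliberately toy sketch, in Python, of a cue-activated situation detector routing a food-sharing decision to one of two domain-specialized rules. It is our illustration, not a model from the literature; the cue, the threshold, and the sharing fractions are invented for exposition.

```python
# Toy sketch (ours, not the chapter's) of a cue-activated situation detector
# that routes a food-sharing decision to a domain-specialized rule.
# The cue, threshold, and sharing fractions are invented placeholders.

def detect_situation(luck_share_of_variance: float) -> str:
    """Classify the foraging domain from one ancestrally valid cue:
    how much of the variance in success is due to luck rather than effort."""
    return "luck_dominated" if luck_share_of_variance > 0.5 else "effort_dominated"

def amount_shared(situation: str, my_haul: float, band_size: int) -> float:
    """Apply whichever domain-specialized sharing rule the detector activated."""
    if situation == "luck_dominated":
        # Risk pooling: share widely across the band.
        return my_haul * (band_size - 1) / band_size
    # Effort-dominated returns: keep most of the haul.
    return 0.1 * my_haul

if __name__ == "__main__":
    for luck in (0.9, 0.2):  # a lucky-strike domain vs. an effort domain
        s = detect_situation(luck)
        print(s, round(amount_shared(s, my_haul=10.0, band_size=25), 2))
```

On this picture, intuitions about which distributions are “fair” shift because a different rule has been activated, not because one general principle is being reweighted.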
3. Multiple Evolved Systems and Moral Pluralism

The search for a single overarching moral principle or value is appealing, whether it is a principle of utility or Kant’s categorical imperative in its various formulations. But can a monist normative theory capture the complexity of human moral life? If social cognition is generated by multiple evolved systems, each with a different functional design, then it is unlikely that our moral intuitions can be systematized by a single principle or value.
Ideal utilitarianism and Kantian deontology were never advanced as descriptive theories of the mind, of course. But they have been proposed as guides to judgment and choice that humans should and therefore can use. Practically speaking, moral principles have to escape from philosophy into the larger community to improve the moral quality of human life. Studies of cultural transmission show that ideas that engage evolved inference systems spread more easily from mind to mind than ones that do not. Boyer’s (2001) analysis of which religious ideas become widespread, recurring across cultures and time, and which die on the vine illustrates this: Ideas that fail to engage our evolved intuitions fail to spread. If they survive at all, they become the esoterica of small communities of priests, monks, imams, rabbis, and other religious specialists. Esoteric debates among philosophers may give rise to moral rules and laws derived from a single, general moral principle, but these are unlikely to engage our evolved moral intuitions—they are more likely to collide with them instead (Cosmides & Tooby, 2008a, 2008b). That would limit their influence. We can, of course, cognitively reframe situations to activate alternative evolved systems in an effort to live up to the ideals articulated by a general moral principle. If that is the goal, the descriptive theories of moral cognition emerging from evolutionary psychology suggest which cues and frames will be most effective. But it may be easier for people to adopt and apply normative ideals and guides like those advanced by ethical intuitionists and moral sentimentalists, especially those who embrace pluralism (e.g., Audi, 2005; Gill & Nichols, 2008; Huemer, 2005; Ross, 1930). After all, a mind equipped with a set of cue-activated, domain-specialized systems regulating social interaction will generate moral inferences, judgments, sentiments, and intuitions that vary across social domains—creating pluralism of values and principles. These responses will also differ across time, situations, people, and cultures: Situation detectors respond to perceptions of local cues and facts, and these perceptions may differ depending on many factors, such as an individual’s past experiences, knowledge, access to family, sociocultural environment—even that individual’s current physiological state (e.g., hungry vs. sated—low blood glucose increases support for redistribution; Aarøe & Petersen, 2013). Moral intuitions will, therefore, vary accordingly. Some argue that variation in “commonsense convictions”—moral diversity—undercuts the normative proposals advanced by ethical intuitionists (e.g., Singer, 2005; Greene, 2008). That argument does not hold, however, if the variation is systematic. Whale fins and chimp arms look different but, when seen in the light of evolution, the homology of bone structure is clear; Earth and Neptune have different orbits, but both are explained by Newton’s universal law of gravitation. Diversity in the natural world resolves into patterns when the right conceptual framework is found. Moral diversity may also resolve into patterns when the architecture of our evolved computational systems is discovered, especially when this knowledge becomes integrated into theories of culture, institutions, and society (for examples, see Baumard & Boyer, 2013; Bloch & Sperber, 2002; Boyer, 2001, 2018; Boyer & Petersen, 2011; Cosmides & Tooby, 2006; Fiske, 1991; Henrich et al., 2012; Rai & Fiske, 2011). 
That an adaptation evolved because it produced a particular (fitness-enhancing) pattern of behavior does not make that behavior moral—obviously. But the kind of species we are is surely relevant to ethical questions, if only because “ought” (arguably) implies “can.” There is no point in arguing for the adoption of an ethical code if it violates evolved moral intuitions so profoundly that most humans will reject it.
For example, can human parents stop favoring their children over the children of strangers, as the most radical utilitarians say we must? And what would happen if they did? Let us assume for a moment that education, indoctrination, mindful meditation, or other cognitive technologies allow some parents to achieve true impartiality. What would this departure from an ancestrally typical social environment do to their children—mammals who evolved to expect a mother’s love, whose social and emotional development depends on signals that their parents value them more than strangers? Would the children suffer emotional pain with each impartial act? Would they develop attachment disorders, turning into adults who cannot form long-term bonds or sustain a family life? No one knows for sure, but these outcomes are not implausible given clinical research on social development (e.g., Goldberg et al., 2000). In the end, moral philosophers, politicians, and activists who argue in favor of particular rules, codes, and laws will have to decide what implications, if any, knowledge about human cognitive adaptations has for normative ethics, moral epistemology, and public policy. Our goal here is to explain some of the relevant selection pressures and point to research on the design of the mind that these theories of adaptive function have inspired.
4. Theories of Adaptive Function as Tools for Discovery

The lungs, the heart, the kidneys—every organ in the body has an evolved function, an adaptive problem it was designed2 by natural selection to solve. Natural selection is a causal process that retains and discards features from an organism’s design on the basis of how well they solve adaptive problems: cross-generationally enduring conditions that create reproductive opportunities or obstacles, such as the presence of predators, the need to share food, or the vulnerability of infants. Adaptive problems can be thought of as reproductive opportunities or obstacles in the following sense: If the organism had a property that interacted with these conditions in just the right way, then this property would have consequences that promote its reproduction relative to alternative properties. Over the long run, down chains of descent, natural selection creates suites of features that are functional in a specific sense: The elements are well-organized to cause their own reproduction in the environment in which the species evolved. A correct theory of an organ’s function explains its architecture down to the smallest detail and stimulates the discovery of new, previously unknown, features of its design. The lungs evolved for gas exchange, not (as previously thought) for cooling organs or mixing blood. This function explains the gross anatomy of the lungs (e.g., their similarity to bellows), identifies which features are byproducts (e.g., right and left sides have different shapes to accommodate the heart and liver, not for gas exchange per se), and generated hypotheses that led to the discovery of key functional properties. By searching for machinery well designed for solving problems of gas exchange, scientists found how the thinness and composition of alveolar membranes create a blood-air barrier, for example, and uncovered a computational system that regulates the rate and depth of breathing in response to changes in the partial pressure of O2 and CO2—information it extracts from arterial blood. These are design features, that is, properties selected for because they were well-engineered for solving that adaptive problem. The brain is also an organ. Its function is not gas exchange, detoxifying poisons, or breaking down sugars; the brain is composed of neurons arranged into circuits because these circuits perform computations. The brain is composed of information-processing
devices—programs—that extract information from the environment and use it to regulate behavior and physiology. The question is, what programs are to be found in this organ of computation? What are the species-typical programs that reliably develop in most members of our species? Theories of adaptive function are tools for discovering what programs exist and how they work. Each feature of each program that evolved to regulate behavior exists because the computations it generated promoted the survival and reproduction of our ancestors better than alternative computational features that arose during human evolutionary history. Natural selection is a hill-climbing process: Over time, it assembles computational systems that solve reproduction-affecting problems well, given the information available in the environments that selected for their design. For more than 99% of our species’ evolutionary history, our ancestors were foragers who made their living by gathering and hunting. To survive and reproduce, our ancestors had to solve many different, complex, adaptive problems, such as finding mates, protecting children, foraging efficiently, understanding speech, spotting predators, navigating, regulating body temperature, and attracting good cooperative partners.3 Moreover, these problems had to be solved using only information that was available in ancestral environments. Knowing this allows one to approach the study of the mind like an engineer. One starts by using theories about selection pressures and knowledge of ancestral environments to identify—and do a task analysis of—an adaptive information-processing problem. The task analysis reveals properties a program would have to have in order to solve that problem well; this suggests testable hypotheses about the design of programs that evolved to solve that problem. As in the rest of psychology, evolutionary psychologists conduct empirical research to find out whether systems with these computational properties exist in the brains of contemporary humans. Moral psychology can be illuminated by research guided by theories of adaptive function. To illustrate this approach, we present one case in detail, followed by a cook’s tour of research on cognitive adaptations for cooperation. The detailed case starts with the reproductive risks and opportunities that emerge for a species in which individuals interact frequently with their siblings.
5. Kin: Duties of Beneficence and Sexual Prohibitions

Clams never know their siblings. Their parents release millions of gametes into the sea, most of which are eaten. Only a few survive to adulthood, and these siblings are so dispersed that they are unlikely to ever meet, let alone interact. The ecology of many species causes siblings to disperse so widely that they never interact as adults, and siblings in species lacking parental care typically do not associate as juveniles either. Humans, however, lie at the opposite end of this spectrum. Hunter-gatherer children typically grow up in families with parents and siblings and live in bands that often include grandparents, uncles, aunts, and cousins. The uncles, aunts, and cousins are there because human siblings also associate as adults—like most people in traditional societies, adult hunter-gatherers are motivated to live with relatives nearby, if that is an option. Indeed, the hunter-gatherers from whom we are descended lived in small, semi-nomadic bands of 25–200 men, women, and children, most of them close relatives, extended family, and friends (Kelly, 1995).
That close genetic relatives frequently interacted ancestrally is an important fact about our species. Some of the best established models in evolutionary biology show that genetic relatedness is an important factor in the social evolution of such species (Hamilton, 1964; Williams & Williams, 1957). Genetic relatedness refers to the increased probability, compared to the population average, that two individuals will both carry the same randomly sampled gene, given information about common ancestors. The relatedness between two individuals ($i$ and $j$) is typically expressed as a probability, $r_{ij}$, called the degree of relatedness. For humans, this probability usually has an upper bound around ½ (for full siblings; for parent and offspring) and a lower bound of zero (with nonrelatives). The adaptive problems that arise for species who live with close genetic relatives are nonintuitive, biologically real, and have large fitness consequences. The most important ones involve mating and providing help.
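For readers who want them, the standard pedigree values that sit behind this definition (uncontroversial in the kin-selection literature, though not tabulated in the text) are:

\[
r_{\text{parent--offspring}} = r_{\text{full siblings}} = \tfrac{1}{2}, \qquad
r_{\text{half siblings}} = \tfrac{1}{4}, \qquad
r_{\text{first cousins}} = \tfrac{1}{8}, \qquad
r_{\text{nonrelatives}} \approx 0.
\]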
6. Degree of Relatedness and Inbreeding Depression: Selection Pressures

Animals are highly organized systems (hence “organisms”), whose functioning can easily be disordered by random changes. Mutations are random events, and they occur every generation. Many of them disrupt the functioning of our tightly engineered regulatory systems. A single mutation can, for example, prevent a gene from being transcribed (or from producing the right protein). Given that our chromosomes come in pairs (one from each parent), a mutation like this need not be a problem for the individual it appears in. If it is found on only one chromosome of the pair and is recessive, the other chromosome will produce the right protein and the individual may be healthy. But if the same mutation is found on both chromosomes, the necessary protein will not be produced by either. The inability of an organism to produce one of its proteins can impair its development or prove fatal. Such genes, called “deleterious recessives,” are not rare. They accumulate in populations precisely because they are not harmful when heterozygous—that is, when they are matched with an undamaged allele. Their harmful effects are expressed, however, when they are homozygous—that is, when the same impaired gene is supplied from both parents. Each human carries a large number of deleterious recessives, most of them unexpressed. When expressed, they range in harmfulness from mild impairment to lethality. A “lethal equivalent” is a set of genes whose aggregate effects, when homozygous, completely prevent the reproduction of the individual they are in (as when they kill the bearer before reproductive age). It is estimated that each of us has at least one to two lethal equivalents’ worth of deleterious recessives (Bittles & Neel, 1994; Charlesworth & Charlesworth, 1999). However, because mutations are random, the deleterious recessives found in one person are usually different from those found in another. These facts become socially important when natural selection evaluates the fitness consequences of mating with a nonrelative versus mating with a close genetic relative (for example, a parent or sibling). When humans reproduce, each parent places half of its genes into a gamete, which then meet and fuse to form the offspring. For parents who are genetically unrelated, the rate at which harmful recessives placed in the two gametes are likely to match and be expressed is a function of their frequency in the population. If (as is common) the frequency in the population of a given recessive is 1/1,000, then the frequency with which it will meet itself (be homozygous) in an offspring is only 1 in 1,000,000. In contrast,
if the two parents are close genetic relatives, then the rate at which deleterious recessives are rendered homozygous is far higher. The degree of relatedness between full siblings, and between parents and offspring, is ½. Therefore, each of the deleterious recessives one sibling inherited from her parents has a 50% chance of being in her brother. Each sibling has a further 50% chance of placing any given gene into a gamete, which means that for any given deleterious recessive found in one sibling, there is a 1/8 chance that a brother and sister will pass two copies to their joint offspring (a ½ chance both siblings have it times a ½ chance the sister places it in the egg times a ½ chance the brother places it in the sperm). Therefore, incest between full siblings renders one-eighth of the loci homozygous in the resulting offspring, leading to a fitness reduction of 25% in a species carrying two lethal equivalents (two lethal equivalents per individual × 1/8 expression in the offspring = 25%). This is a large selection pressure—the equivalent of killing one quarter of one’s children. Because inbreeding makes children more similar to their parents, it also defeats the primary function of sexual reproduction, which is to produce genetic diversity that protects offspring against pathogens that have adapted to the parents’ phenotype (Tooby, 1982). The decline in the fitness of offspring (in their viability and consequent reproductive rate) that results from matings between close genetic relatives is called inbreeding depression. Although incest is rare, there are studies of children produced by inbreeding versus outbreeding that allow researchers to estimate the magnitude of inbreeding depression in humans. For example, Seemanova (1971) was able to compare children fathered by first-degree relatives (brothers and fathers) to children of the same women who were fathered by unrelated men. The rate of death, severe mental handicap, and congenital disorders was 54% in the children of first-degree relatives, compared to 8.7% in the children born of nonincestuous matings (see also Adams & Neel, 1967). Both selection pressures—deleterious recessives and pathogen-driven selection for genetic diversity—have the same reproductive consequence: Individuals who avoid mating with close relatives will leave more descendants than those whose mating decisions are unaffected by relatedness. Thus natural selection will favor mutations that introduce motivational design features that cost-effectively reduce the probability of incest. In some primate species, this problem is solved by one sex (often males) leaving the natal group to join another troop. But for species like ours, in which close genetic relatives who are reproductively mature are commonly exposed to each other, an effective way of reducing incest is to make cues of genetic relatedness reduce sexual attraction. Incest is a major fitness error, and so the prospect of sex with a sibling or parent should elicit sexual disgust or revulsion—an avoidance motivation.
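The arithmetic earlier in this section can be written compactly; the display below merely restates the chapter’s own numbers in symbols.

\[
\Pr(\text{recessive expressed in offspring}) \;=\;
\underbrace{\tfrac{1}{2}}_{\text{sibling carries it}} \times
\underbrace{\tfrac{1}{2}}_{\text{sister's egg}} \times
\underbrace{\tfrac{1}{2}}_{\text{brother's sperm}} \;=\; \tfrac{1}{8},
\]
\[
\text{expected fitness reduction} \;=\; 2 \ \text{lethal equivalents} \times \tfrac{1}{8} \;=\; 25\%.
\]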
7. Kin Selection and Altruism

The theory of natural selection follows from replicator dynamics. Genes are a mechanism by which phenotypic features replicate themselves from parent to offspring. They can be thought of as particles of design: elements that can be transmitted from parent to offspring, and that, together with an environment, cause the organism to develop some design features and not others. Because design features are embodied in individual organisms, they can propagate themselves by solving problems that increase their bearer’s reproductive success (very roughly, the number of offspring that reach reproductive age produced by that
individual). In evolutionary models, costs and benefits are usually reckoned as the average effects of a design feature on an individual’s reproductive success. One way an organism can increase its reproductive success is by investing resources (e.g., metabolic energy, time) in ways that are likely to (i) produce more offspring in the future or (ii) improve the chances that existing offspring survive. The distinction between existing and future offspring does not matter for this analysis, so let’s create a unit—offspring equivalents—for discussing the effects of a design feature on an individual’s reproductive success. A gene, however, can cause its own spread in two ways. It can produce a design feature that increases the reproductive success of (i) the individual it is in or (ii) other individuals who are more likely to carry that same gene than a random member of the population—that is, close genetic relatives. That probability is given by $r$, the degree of relatedness. This insight has implications for the evolution of social behavior, which were formalized in W. D. Hamilton’s (1964) theory of kin selection. When kin live in close association with one another, there are many opportunities for individuals to help their kin—to give them food, alert them to dangers, protect them from aggression, tend their wounds, lend them tools, argue in support of their interests, and so on. Given these opportunities, an organism can invest a unit of its limited resources in ways that increase its own reproductive success or that of its genetic relatives. The decision to allocate a unit of resource to a relative instead of one’s own offspring has two net effects: It increases the relative’s reproductive success (by an amount, $B_{\mathrm{kin}}$, measured in offspring equivalents), and it prevents the helper from increasing its own reproductive success (an opportunity cost, $C_{\mathrm{self}}$, representing offspring equivalents forgone). Consider, then, the fate of three alternative designs for a motivational system regulating decisions to help kin. An individual with Design #1 invests all its resources in producing offspring of its own. When helping a genetic relative would decrease that individual’s own reproductive success—that is, when $C_{\mathrm{self}} > 0$—individuals with Design #1 decide not to help. Now imagine a population of individuals equipped with this design, living in an environment with a biologically plausible distribution of opportunities to help (the costs of providing help range from low to high, relative to the resulting benefits). In this population, a mutation emerges that causes the development of a different design. This new design motivates an individual to divide its resources between producing offspring of its own and helping its kin produce offspring. Under what conditions will this mutation spread?4 Consider first a mutation that produces Design #2, a motivational system that generates the decision to help kin whenever $B_{\mathrm{kin}} > C_{\mathrm{self}}$. Acts of help with these reproductive consequences increase the number of offspring produced by the kin member who received help, but that kin member may not have inherited the mutation that produced this design. For example, the probability that a full sibling inherited the same mutation—the one that produces Design #2—is only ½. When an individual with Design #2 allocates a resource to siblings, half of them do not have the mutation that produces this design; those that lack the mutation cannot pass it on to their offspring.
This has consequences for the number of copies of the mutation in the next generation. When an individual with Design #2 gives a resource to its sibling, the average increase in new copies of the mutation produced through the sibling who received help will be $\tfrac{1}{2}(0 + B_{\mathrm{sib}})$. But the number of new copies produced through the individual who provided that help will be lower than if the individual had kept the resource—a decrease of $C_{\mathrm{self}}$. (Technically the number of copies would be these values
($C_{\mathrm{self}}$ and $\tfrac{1}{2}B_{\mathrm{sib}}$) multiplied by the ½ chance a parent passes any given gene to its offspring, but this can be ignored because it is true for all parents—self and sibling both).5 For opportunities to help a sibling in which $B_{\mathrm{sib}} > C_{\mathrm{self}} > \tfrac{1}{2}B_{\mathrm{sib}}$, individuals with Design #2 will decide to help their sibling. This decision allocates their resources in a way that causes a net decrease in the number of copies of that design in the next generation. Because siblings are only half as likely to have that mutation as the helper (self), the increase in copies of the mutation that result from this decision will not be large enough to offset the decrease in copies that would have been produced if self had kept the resource: $C_{\mathrm{self}}$ is greater than $\tfrac{1}{2}B_{\mathrm{sib}}$.6 Given the same situation, individuals with Design #1 will not help their sibling: They will invest the resource in producing offspring of their own, who are twice as likely to have the gene for Design #1 as offspring produced by their sibling. When facing opportunities in this range, Design #1 produces more copies of itself than Design #2 does. However, when facing opportunities to help where $\tfrac{1}{2}B_{\mathrm{sib}} > C_{\mathrm{self}} > 0$, Design #2 produces more copies of itself than Design #1 does. Individuals with Design #1 allocate a resource to their own reproduction whenever $C_{\mathrm{self}} > 0$, no matter how large $B_{\mathrm{sib}}$ is—that is, no matter how many more offspring their sibling would produce by using that resource. They make no tradeoffs between their own reproductive success and that of their siblings. To see the consequences, let us consider situations in which keeping a unit of a resource allows an individual to produce one more offspring equivalent but giving it to a sibling would increase the sibling’s reproductive success by three offspring equivalents. This is a situation in which $\tfrac{1}{2}B_{\mathrm{sib}} > C_{\mathrm{self}} > 0$. Given payoffs in this range, individuals with Design #1 keep the resource, whereas individuals with Design #2 invest it in their sibling, who has a ½ chance of carrying the mutation for Design #2. In the next generation, there will be $(3 \times \tfrac{1}{2}) = 1.5$ copies of the mutation producing Design #2 for every copy of the gene for Design #1. When facing opportunities in this range, Design #2 produces more copies of itself than Design #1 does. There is, however, a design that has the advantages of Design #2 without its disadvantages. Consider a mutation producing a third design. Individuals with this design are motivated to divide resources between self and kin, but their decision system discounts the reproductive benefit to the close relative by the probability that this relative has inherited the same mutation—which is given by $r_{\mathrm{self,kin}}$. Individuals with this design help kin—they allocate a resource to kin rather than themselves—when the reproductive consequences of helping are such that $r_{\mathrm{self,kin}} \times B_{\mathrm{kin}} > C_{\mathrm{self}}$. This inequality is known as Hamilton’s rule. As before, let’s assume that keeping a unit of a resource allows an individual to produce one more offspring equivalent. For opportunities to help a sibling in which $\tfrac{1}{2}B_{\mathrm{sib}} > C_{\mathrm{self}} > 0$, individuals with Design #1 decide to invest in their own offspring—producing one offspring equivalent—but individuals with the Hamiltonian design decide to allocate that resource to their sibling, producing >2 offspring equivalents (the same decision made by individuals with Design #2, with the same consequences).
That decision translates into >1 copy of the Hamiltonian mutation in the next generation for every copy of the alternative allele, the gene that produces Design #1. This is the same relative advantage that the mutation producing Design #2 has over the gene for Design #1. But for opportunities to help a sibling in which $B_{\mathrm{sib}} > C_{\mathrm{self}} > \tfrac{1}{2}B_{\mathrm{sib}}$, individuals with the Hamiltonian design make the same decision as individuals with Design #1—they invest in their own offspring, producing one offspring equivalent. By contrast, individuals with
Design #2 decide to allocate that unit of resource to their sibling, thereby decreasing the number of copies of the mutation for Design #2 in the next generation. Individuals with Design #2 produce ½ offspring equivalent for every one produced by individuals with the Hamiltonian mutation. For reproductive payoffs in this range, the Hamiltonian mutation does as well as Design #1, and both produce more copies of their respective designs than Design #2 does. In a population composed of individuals with Design #1, Design #2, and the Hamiltonian design, the mutation producing the Hamiltonian design will eventually outcompete the other two designs. This mutation promotes its own reproduction better than the existing alternatives by causing individuals who have it to make efficient tradeoffs between increasing their own reproductive success and the reproductive success of kin (who have the same mutation with probability $r_{\mathrm{self,kin}}$).7 For situations in which $r_{\mathrm{self,kin}} B_{\mathrm{kin}} > C_{\mathrm{self}} > 0$, the Hamiltonian mutation produces more copies of itself than Design #1 produces, and does no worse than Design #2. For situations in which $B_{\mathrm{kin}} > C_{\mathrm{self}} > r_{\mathrm{self,kin}} B_{\mathrm{kin}}$, the Hamiltonian mutation produces more copies of itself than Design #2 produces, and does no worse than Design #1. For this reason, the relative frequency of a Hamiltonian mutation can be expected to increase in the population over many generations, until it replaces the other designs: It will become universal8 and species-typical.9 What about inflicting harm on kin when doing so would increase your own reproductive success? Small group living creates opportunities to benefit yourself at your sibling’s expense—taking food from your sibling, seducing your sibling’s mate, enhancing your reputation for formidability by publicly defeating your sibling in a fight, spreading gossip that makes you look like the better cooperative partner, and so on. The same Hamiltonian reasoning applies to motivations for not inflicting harm on kin when that would benefit oneself. A mutation for restraint will spread if it causes an organism to refrain from harming kin when the reproductive consequences are such that $C_{\mathrm{kin}} \times r_{\mathrm{self,kin}} > B_{\mathrm{self}}$. That is, the decrease in the reproductive success of the relative needs to be discounted by the probability that the relative inherited the same mutation for restraint. Reciprocation is usually necessary to select for adaptations for delivering benefits to nonrelatives (see below). But reciprocation is not necessary for selection to favor adaptations for helping kin. The motivational system need not trigger the inference that the sibling helped is obligated to reciprocate. For situations in which the payoffs satisfy Hamilton’s rule, individuals with adaptations shaped by kin selection will be motivated to help their kin without any conditions. Kin selection will favor adaptations that produce unconditional altruism toward kin: Close relatives will not need to reciprocate help to continue receiving it (Hamilton, 1964; Williams & Williams, 1957).
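The logic of the three designs can be condensed into a few lines of code. This is our sketch of the decision rules as described above, not the authors’ model; benefits and costs are in offspring equivalents, and the payoff numbers are the chapter’s own examples.

```python
# Sketch (ours) of the three kin-helping decision rules compared in this
# section, plus the restraint rule. r = 1/2 for full siblings.

R_SIB = 0.5  # degree of relatedness between full siblings

def design_1_helps(b_sib: float, c_self: float) -> bool:
    """Design #1: never sacrifice own reproduction for a sibling."""
    return c_self <= 0

def design_2_helps(b_sib: float, c_self: float) -> bool:
    """Design #2: help whenever the sibling's gain exceeds the helper's cost."""
    return b_sib > c_self

def hamiltonian_helps(b_sib: float, c_self: float, r: float = R_SIB) -> bool:
    """Hamilton's rule: help iff r * B_kin > C_self."""
    return r * b_sib > c_self

def refrains_from_harm(b_self: float, c_kin: float, r: float = R_SIB) -> bool:
    """Restraint: refrain from harming kin iff r * C_kin > B_self."""
    return r * c_kin > b_self

if __name__ == "__main__":
    # Range where (1/2)B_sib > C_self > 0, e.g., B_sib = 3, C_self = 1:
    print(design_1_helps(3, 1), design_2_helps(3, 1), hamiltonian_helps(3, 1))
    # -> False True True: Design #2 and the Hamiltonian design both help.
    # Range where B_sib > C_self > (1/2)B_sib, e.g., B_sib = 3, C_self = 2:
    print(design_1_helps(3, 2), design_2_helps(3, 2), hamiltonian_helps(3, 2))
    # -> False True False: only Design #2 helps, losing gene copies by doing so.
```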
8. Estimating and Representing Benefits and Costs: A Computational Requirement across Social Domains

Hamilton’s rule is not a computational mechanism; it is not an algorithm that makes decisions. It describes selection pressures that can be expected to shape cognitive adaptations operative when organisms do make decisions about how to divide resources between self and kin. What computational properties would adaptations like this require?
In Hamilton’s rule, $C_i$ and $B_i$ refer to the actual effects of an action on the reproductive success of individual $i$. But at the time an organism makes a decision, it does not—and cannot—know how that decision will affect its reproductive success in the future. This is true for every choice an organism makes: which food to eat, which mate to pursue, whether to freeze or flee when seen by a predator, whether to keep a resource or donate it to kin—all of them. Making tradeoffs between options requires computational machinery that estimates the value of one option relative to another. For example, foragers—people who hunt and gather for a living—search for and harvest some plants and prey, while ignoring others. Their choices are systematic: their decisions can be predicted by models that assume they are optimizing calories obtained for a given amount of effort (Smith & Winterhalder, 1992). Knowing four variables allows behavioral ecologists to explain 50% of the variance in their decisions about which resources to pursue. Two involve effort: search time (how long until first encounter with the resource) and handling time (the time from first encounter to when the resource is ready to eat). The other two involve nutritive value: the resource’s caloric density (calories/unit volume; e.g., avocado > cucumber) and typical volume (size of an animal or a resource patch). The success of these models implies the existence of psychological mechanisms that estimate effort and caloric value, plus mechanisms that use these values in realizing an organism’s decision as to which resources to pursue. To have produced adaptive behavior ancestrally, the values that these mechanisms compute would have to use information that reflected the average reproductive consequences of choices in our ancestral past. For example, our taste for fats and sugars evolved because these chemicals were correlated with the caloric value of food, and they were difficult to acquire ancestrally. Tastes for fats and sugars caused foraging decisions and food choices that were adaptive (reproduction-promoting) ancestrally. These tastes guide our food choices now too: that is why ice cream—a food high in fats and sugars—“tastes good.” But these preferences, which caused adaptive behavior in the past, may be maladaptive now—they can lead to diabetes and early death in advanced market economies where foods high in fat and sugar are not only abundant, but available with low search and handling time at supermarkets and fast food restaurants. To avoid confusion, it is important to distinguish reproductive costs and benefits in the past—that is, selection pressures—from costs and benefits as computed by an organism’s evolved computational systems. We don’t like ice cream more than oat bran because this preference promotes reproduction in the present; we like it now because design features causing preferences for fats and sugars promoted reproduction in the past. Humans, like other organisms, have computational systems that evolved to assign value to options we face. If the selection pressures described by Hamilton’s rule designed cognitive systems for deciding how to divide resources between self and kin, these systems would require input from other mechanisms, which estimate the costs and benefits of actions to self and others in a way that reflected average reproductive consequences in our ancestral past. Indeed, every theory of the evolution of social behavior assumes that mechanisms of this kind exist.
The values computed by these mechanisms serve as input to cognitive adaptations for making social decisions—especially ones that make decisions about how we ought to treat others and how others ought to treat us.
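As a hedged illustration of the kind of value computation the foraging results above imply, the four variables can be combined into an expected return rate (calories per hour of search plus handling). The names and numbers below are ours, and published prey-choice models are considerably richer.

```python
# Toy return-rate calculation in the spirit of optimal-foraging models
# (e.g., those surveyed in Smith & Winterhalder, 1992). All values invented.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    search_time_h: float     # expected hours until first encounter
    handling_time_h: float   # hours from encounter until ready to eat
    caloric_density: float   # kcal per liter
    typical_volume_l: float  # liters per harvested unit

    def return_rate(self) -> float:
        """Expected kcal per hour of total effort."""
        kcal = self.caloric_density * self.typical_volume_l
        return kcal / (self.search_time_h + self.handling_time_h)

options = [
    Resource("tubers", 0.5, 1.5, 700, 2.0),
    Resource("small game", 2.0, 1.0, 1500, 1.5),
]
# Pursue resources in order of estimated return rate.
for r in sorted(options, key=Resource.return_rate, reverse=True):
    print(f"{r.name}: {r.return_rate():.0f} kcal/h")
```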
The taste for fats and sugars is just one component of one value-computing system, and a very specialized component at that. Notice that evolved systems for computing food value cannot assign a fixed value to specific foods: The value computed for a given food should be higher when my blood sugar is low than when it is high, for example. When my blood sugar is high, and there are cues that my sister is hungry, my value-computing systems should estimate that the benefit she will derive from the venison I have will be higher than the cost to me of giving it to her. Nor can there be a fixed value for eating over other activities because adaptive behavior requires complicated tradeoffs. As one example: there are value-computing systems in women that prioritize sex over eating on days when conception is most likely (more specifically, on days when estrogen is high and progesterone low) and eating over sex on the days before menstruation (when estrogen is low and progesterone high; Roney & Simmons, 2017). This is not because women have a “motivation to spread their genes.” Sex is pleasurable—and libido fluctuates with these hormone profiles—because adaptations with these features promoted reproduction in ancestral environments. The design of value-computing systems is relevant to utilitarian theories of ethics, which assume that people can estimate the consequences of actions for the welfare of self and others. Surprisingly little is known, however, about how human minds estimate benefits and costs or how these are represented within and across domains. Input to systems that evolved for estimating the marginal benefit of keeping an additional unit of a resource versus giving it to another person should include many factors: the type of resource (food, time, energy, social capital, actions that carry a risk of death), each individual’s age (younger individuals have more of their reproductive career ahead of them), health, current reproductive state, current nutritional status, relationship (e.g., mate, child), resources already available to self and other, the size of a resource to be divided (what an economist might call income), and so on (e.g., Burnstein et al., 1994). Evolved systems should be able to calculate the costs and benefits of options presenting themselves on the fly, because these inputs are not fixed variables—they can change quickly. Whether the factors that serve as input to these calculations are morally justifiable is a question for moral epistemologists. For our purposes, we will assume that such systems exist and that they were designed to track fitness in ancestral environments. When we are discussing the design of adaptations that make social decisions, “costs” and “benefits” refer to the perceived values of resources or actions, i.e., the values as computed by the mind of the individual who is making a decision—not to the effects of these resources or actions on the lifetime reproductive success of the decision maker, its siblings, or anyone else.
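A small toy example (ours; the functional forms are invented) of how such perceived, state-dependent values could plug into a Hamiltonian decision: the same venison is worth less to a sated holder than to a hungry sibling, so giving it away can satisfy $r \times B > C$ even though the food itself is unchanged.

```python
# Toy state-dependent valuation feeding a Hamiltonian choice (our sketch).
# The linear hunger function and its constants are placeholders, not estimates.

def marginal_food_value(hunger: float) -> float:
    """Perceived benefit of one unit of food, rising with hunger in [0, 1]."""
    return 1.0 + 4.0 * hunger

def give_to_sibling(my_hunger: float, sib_hunger: float, r: float = 0.5) -> bool:
    """Apply Hamilton's rule to perceived values, not actual fitness effects."""
    return r * marginal_food_value(sib_hunger) > marginal_food_value(my_hunger)

print(give_to_sibling(my_hunger=0.0, sib_hunger=0.9))  # True: 0.5 * 4.6 > 1.0
```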
9. A Kin Detection System: Computational Requirements

These two adaptive problems—inbreeding avoidance and kin-directed altruism—both require that close kin are treated differently than unrelated individuals. That requires some means of distinguishing one’s close genetic relatives from people who are related distantly or not at all. A task analysis of this adaptive problem led to testable predictions about the presence and properties of a kin detection system: a neurocomputational system that is well engineered (given the structure of ancestral environments) for computing which individuals in one’s social environment are close genetic relatives (Lieberman et al., 2007).
For each familiar individual, $j$, the kin detection system should compute and update a continuous variable, the kinship index, $KI_j$. By hypothesis, $KI_j$ is an internal regulatory variable whose magnitude reflects the kin detection system’s pairwise estimate of the degree of relatedness between self and $j$. The kinship index should serve as input to at least two different motivational systems: one regulating feelings of sexual attraction and revulsion and another regulating altruistic impulses. When $KI_j$ is high, it should up-regulate motivations to provide aid to $j$ and down-regulate sexual attraction by activating disgust at the prospect of sex with $j$.
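Schematically, the hypothesis is that one regulatory variable is read by two otherwise independent motivational systems. A minimal sketch, with invented functional forms:

```python
# Minimal sketch (ours) of a single kinship index KI_j read by two systems.
# The functional forms and constants are placeholders, not estimates.

def altruism_motivation(ki_j: float) -> float:
    """Up-regulated by KI_j: the weight placed on j's welfare in tradeoffs."""
    return ki_j

def sexual_disgust(ki_j: float) -> float:
    """Also driven by KI_j: disgust at the prospect of sex with j."""
    return min(1.0, 2.0 * ki_j)  # saturates well below certain siblinghood
```

The empirical signature of such an architecture, tested below, is that both outputs should covary with the same input cues.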
Ancestrally Reliable Cues to Genetic Relatedness

Detecting genetic relatedness is a major adaptive problem but not an easy one to solve. Neither we nor our ancestors can see another person’s DNA directly and compare it to our own in order to determine genetic relatedness. Nor can the problem of detecting genetic relatives be solved by a domain-general learning mechanism that picks up local, transient cues to genetic relatedness: To identify which cues predict relatedness locally, the mechanism would need to already know the genetic relatedness of others—the very information it lacks and needs to find.10 Instead, the kin detection system must contain within its evolved design a specification of the core cues that it will use to determine relatedness—cues picked out over evolutionary time by natural selection because they reliably tracked genetic relatedness in the ancestral social world. This requires monitoring circuitry, which is designed to register cues that are relevant in computing relatedness. It also requires a computational unit, a kinship estimator, whose procedures were tuned by a history of selection to take these registered inputs and transform them into a kinship index.

So what cues does the monitoring circuitry register, and how does the kinship estimator transform these into a kinship index? For our hunter-gatherer ancestors, a reliable cue to relatedness is provided by the close association between mother and infant that begins with birth and is maintained by maternal attachment. Maternal perinatal association (MPA) provides an effective psychophysical foundation for the mutual kin detection of mother and child. It also provides a foundation for sibling detection. Among our ancestors, when an individual observed an infant in an enduring caretaking association with the observer’s mother, that infant was likely to be the observer’s sibling. To use this high-quality information, the kin detection system would need a monitoring subsystem specialized for registering MPA.

Although MPA allows older siblings to detect younger siblings, it cannot be used by younger siblings, who did not yet exist when their older siblings were born and nursed. This implies that the kin detection system’s psychophysical front end must monitor at least one additional cue to relatedness. The cumulative duration of coresidence between two children, summed over the full period of parental care until late adolescence, is a cue that could be used to predict genetic relatedness—an expansion and modification of an early ethological proposal about imprinting during early childhood (Shepher, 1983; Westermarck, 1891/1921; Wolf, 1995). Hunter-gatherer bands fission and fuse over time, as their members forage and visit other bands; this meant individuals frequently spent short periods of time with unrelated or distantly related persons. However, hunter-gatherer parents (especially mothers) maintained close association with their dependent children in order to care for them. Siblings, therefore, maintained a higher-than-average cumulative association with each other within the band structure. As association is summed over longer periods of time, it becomes a monotonically better cue to genetic relatedness. This invites the hypothesis that the kin detection system includes a subsystem for monitoring duration of coresidence between i (self) and j during i’s childhood, and that its output is particularly important for younger siblings to detect older siblings.
10. Does a Kin Detection System Regulate Sibling Altruism and Sexual Aversion?

To compute the kinship index, the kin detection system requires: (1) monitoring circuitry designed to register cues to relatedness (MPA, coresidence during childhood, possibly other cues) and (2) a computational device, the kinship estimator, whose procedures have been tuned by a history of selection to take these registered inputs and transform them into a kinship index—the regulatory variable that evolved to track genetic relatedness. If these cues are integrated into a single kinship index—that is, if the kinship index for each familiar individual is a real computational element of human psychology—then two distinct motivational systems should be regulated by the same pattern of input cues. For example, when i is younger than j, i’s kinship index toward j should be higher the longer they coresided during i’s childhood. As a result, i’s levels of altruism and sexual aversion toward j will be predicted by their duration of childhood coresidence.

Lieberman et al. (2007) tested these hypotheses about the computational architecture of human kin detection by quantitatively matching naturally generated individual variation in two predicted cues of genetic relatedness—maternal perinatal association and duration of coresidence during childhood—to individual variation in altruism directed toward a given sibling and opposition to incest with that sibling. When the MPA cue was absent (as it always is for younger siblings detecting older siblings), duration of childhood coresidence with a specific sibling predicted measures of altruism and sexual aversion toward that sibling, with similar effect sizes. When the MPA cue was present (which is possible only for older siblings detecting younger siblings), measures of altruism and sexual aversion toward the younger sibling were high, regardless of childhood coresidence.

The fact that two different motivational systems are regulated in parallel by the same cues to genetic relatedness implicates a single underlying computational variable—a kinship index—that is accessed by both motivational systems. Finally, the results imply that the architecture includes a kinship estimator, which integrates the cues to produce the kinship index. If the effects of the cues were additive, there could be a direct path from each cue to each motivational system. Instead, when both cues were available, the more reliable cue—maternal perinatal association—trumped coresidence duration. That these two cues interact in a non-compensatory way implies they are being integrated to form a variable, which then serves as input to the systems motivating altruism and sexual aversion.

This pattern of cue activation has since been replicated six times with measures of altruism, in samples drawn from the US (California, Hawaii), Argentina, Belgium, and a traditional Carib society practicing horticulture (Sznycer et al., 2016); effects of coresidence duration on altruism and sexual aversion were also tested and confirmed among unrelated adults who had been co-reared during childhood on a kibbutz in Israel, in communal children’s houses where groups of similar-aged peers slept, ate, and bathed (Lieberman & Lobel, 2012).

This entire computational system appears to operate nonconsciously and independently of conscious beliefs. When beliefs about genetic relatedness conflict with the cues this system uses (as they do when people have coresided with stepsiblings or unrelated peers), the motivational outputs (caring, sexual disgust) are shaped by the cues, not the beliefs. Coresidence duration predicts sexual aversion and altruism toward stepsiblings (Lieberman et al., 2007) and toward genetically unrelated people raised together on kibbutzim in Israel (Lieberman & Lobel, 2012).
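The non-compensatory integration of cues that these results imply can be summarized in a small sketch. The sketch below is ours, not a published model: the function name, the saturation value, and the linear fallback on coresidence are assumptions adopted for concreteness.

```python
# Sketch (ours, not the authors' model): a kinship estimator in which the
# MPA cue, when registered, trumps the coresidence cue (non-compensatory
# integration), and a single output variable feeds both motivational systems.

def kinship_index(mpa_observed: bool, coresidence_years: float,
                  years_of_parental_care: float = 18.0) -> float:
    """Return KI_j in [0, 1] for familiar individual j."""
    if mpa_observed:
        return 1.0  # maternal perinatal association: high-certainty sibling cue
    # Fallback: cumulative childhood coresidence, an assumed linear ramp.
    return min(coresidence_years / years_of_parental_care, 1.0)

ki = kinship_index(mpa_observed=False, coresidence_years=12.0)
aid_motivation = ki   # the same index up-regulates altruism toward j
sexual_disgust = ki   # and up-regulates aversion to sex with j
print(ki, aid_motivation, sexual_disgust)
```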
11. Moral Sentiments about Siblings

How the kinship index is computed creates systematic variation in the strength of moral intuitions across individuals. First, it regulates the strength of moral proscriptions against sibling incest. Second, it regulates how often people sacrifice to help their siblings and how willing they are to do so.

There is debate among evolutionary psychologists about why people are motivated to endorse moral prescriptions and prohibitions. The adaptive functions proposed include relationship regulation (Baumard et al., 2013; Cosmides & Tooby, 2006, 2008a; Fiske, 1991; Rai & Fiske, 2011), binding cooperative groups together via adherence to sacred values (Haidt, 2012), promoting within-group cooperative norms (Boehm, 2012; Boyd & Richerson, 2009), reducing the costs associated with taking sides in other people’s disputes by coordinating condemnation with other third parties (DeScioli & Kurzban, 2013), creating a local moral consensus favorable to realizing the individual’s preferences (Kurzban et al., 2010; Tooby & Cosmides, 2010), and mobilizing coalitions of individuals with similar interests to treat the enforcement of norms as a collective action (Tooby & Cosmides, 2010; see also Boyer, 2018).

Most of these theories converge in proposing a link between disgust and morality. The emotion of disgust is reliably elicited by cues correlated with the presence of pathogens (e.g., rotting corpses, vomit, mold, (someone else’s) bodily fluids) and by the prospect of sex with genetic relatives and other partners whose value as a potential mate is low (for review, see Tybur et al., 2013). Its evolved function is to motivate one to avoid actions or objects that would have imposed fitness costs ancestrally (and now). Moral prohibitions specify actions and objects to be avoided. A default heuristic to moralize actions that are felt to be against one’s interests would connect disgust to moral prohibitions, as would attempts to promote self-serving prohibitions by portraying actions as disgusting (Tooby & Cosmides, 2010; Tybur et al., 2013). Disgust and moral prohibitions both tag actions as wrong to do. Many empirical studies confirm this link: Actions that elicit disgust are often moralized, and actions that are judged morally wrong sometimes elicit disgust (for reviews, see Haidt, 2012; Lieberman & Patrick, 2018; Tybur et al., 2013).
Disgust, Morality, and Sibling Incest

Haidt and colleagues showed that the intuition that brother-sister incest is morally wrong is so strong that it persists even when the scenario described has removed all practical reasons for avoiding it (e.g., contraception was used, no one else will know, it was consensual; Haidt, 2012). This resistance to reasoning, which Haidt refers to as “moral dumbfounding,” supports a claim relevant to ethical intuitionists: that we directly apprehend (or seemingly apprehend) the wrongness of certain actions, in a process akin to a perceptual experience (Stratton-Lake, 2016).

The strength of these intuitions varies systematically, however, with factors that regulate the kinship index (Fessler & Navarrete, 2004; Lieberman et al., 2003, 2007). In the studies reviewed earlier, Lieberman et al. (2003, 2007) asked people to rank how morally wrong 19 acts were, where the list included consensual sex between siblings (third parties, not oneself). The pattern was the same as for disgust. When the MPA cue was absent, moral wrongness judgments tracked duration of coresidence with opposite-sex siblings; for subjects with younger opposite-sex siblings, they tracked the presence of the MPA cue. As the disgust-morality link predicts, this result is specific to coresidence with opposite-sex siblings: coresidence with same-sex siblings does not predict moral judgments about sibling incest at all. This result speaks against any counterexplanation that attributes harsher judgments to factors (such as having a traditional family structure) that are correlated with siblings having coresided for a long time (Lieberman et al., 2003).

What about people who are not biological siblings, yet raised together? Lieberman and Lobel (2012) had the same kibbutz-raised adults rate (1) disgust at the idea of sex with their opposite-sex peers, (2) how morally wrong it would be for kibbutz classmates to have sex, and (3) how morally wrong it would be for a brother and sister to have sex. Coresidence duration with opposite-sex peers did not predict judgments of how wrong sibling incest is. It predicted how morally wrong it would be for kibbutz classmates to have sex and how disgusting they would find sex with their opposite-sex peers. These disgust and wrongness ratings were strongly correlated, as expected. But a causal pathway from coresidence duration to disgust to morality was confirmed by mediation analyses. The correlation between coresidence duration and sexual disgust remained high when ratings of moral wrongness were controlled for statistically. But controlling for sexual disgust erased the link between coresidence duration and moral wrongness; sexual disgust fully mediated the relationship between the coresidence cue and moral wrongness judgments.

Some societies have explicit prohibitions (rules, norms, or laws) against incest with harsh punishments for transgressions, whereas other societies either lack explicit prohibitions or, if these exist, lack harsh punishments. Why does this cross-cultural variation exist, if an evolved mechanism causes most people to find the prospect of sex with siblings distasteful? In a comprehensive review of the ethnographic literature, Fox (1965/1984) showed that explicit prohibitions against incest are most common in societies where the sexes are segregated during childhood—a practice that results in opposite-sex siblings spending a lot of time apart. Explicit prohibitions are either absent, or accompanied by a relaxed attitude, in societies where opposite-sex siblings live in close association during childhood.

Research on the computational architecture of the kin detection system demonstrates that there is variation in moral intuitions about sex with siblings (and peers!). But this variation is systematic: when analyzed in light of this evolved system, the moral diversity resolves into patterns.
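The mediation logic in the kibbutz study can be illustrated with synthetic data. The sketch below is ours: the coefficients are invented, and a partial-correlation shortcut stands in for the regression-based mediation analyses actually reported.

```python
# Sketch (ours; coefficients invented): the mediation pattern described
# above, coresidence -> sexual disgust -> moral wrongness, in synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
coresidence = rng.normal(size=n)
disgust = 0.8 * coresidence + rng.normal(size=n)           # cue -> disgust
wrongness = 0.9 * disgust + rng.normal(scale=0.5, size=n)  # disgust -> judgment

def partial_corr(x, y, control):
    """Correlate x and y after regressing out the control variable."""
    rx = x - np.polyval(np.polyfit(control, x, 1), control)
    ry = y - np.polyval(np.polyfit(control, y, 1), control)
    return np.corrcoef(rx, ry)[0, 1]

print(np.corrcoef(coresidence, wrongness)[0, 1])      # sizable raw correlation
print(partial_corr(coresidence, wrongness, disgust))  # ~0: fully mediated
print(partial_corr(coresidence, disgust, wrongness))  # remains clearly positive
```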
Altruism and Duties of Beneficence toward Siblings

The ethnographic record supports the prediction that kin selection will create adaptations motivating unconditional altruism toward kin (Fiske, 1991). Altruism toward kin is widespread across cultures, and so is the ethic that kin ought to treat each other with generosity “without putting a price on what they give” and without demanding “strictly equivalent returns of one another” (Fortes, 1970, 237–238; see Fiske, 1991). Like the wrongness of incest, this obligation is directly apprehended (or seemingly apprehended): “kinship is felt to be inescapable, presupposed, and unproblematic. . . [it] inherently involves a fundamental moral and affective premise of amity, solidarity, concern, trust, prescriptive altruism manifested in generosity, loving, and freely sharing” (Fiske, 1991, 354).

Notice, however, that the architecture of the kin detection system creates systematic variation in altruism toward siblings within a culture. The same cues that regulate moral intuitions about incest—MPA and coresidence duration—regulate how often people sacrifice to help their siblings (as measured by favors done in the last month) and their willingness to incur large costs (such as donating a kidney), whether they are true biological siblings, stepsiblings, or unrelated children being raised together on a kibbutz.

But will moral intuitions about how much you should sacrifice to help a sibling be the same within a family? No. Trivers’ (1974) application of kin selection theory to family relationships predicts that different family members will have different intuitions about how much you should sacrifice to help your sibling—that is, different views about your duties of beneficence. Trivers’ insight was that kin selection will favor adaptations for social negotiation within the family. If so, then adaptations in each family member should be designed to weight their estimates of the costs to you and the benefits to your sibling by that family member’s kinship index toward each of you.

For ease of exposition, we will assume that each family member has a kin detection system that computed a kinship index that reflects r_self,j, the degree of relatedness between that individual and family member j. Let’s say you could take a costly action that benefits your sister, a full sibling. Let’s also assume that you, your sister, and your mother all agree on the magnitude of C_you and B_sister that will result from your helping (each of you has an evolved program that evaluates this action prospectively, generating estimates of these values; see §8, “Estimating and Representing Benefits and Costs”). All else equal, your adaptations will motivate you to help your sister when C_you < ½B_sister. Because adaptations in your sister will also have been shaped by kin selection, they will discount C_you by her kinship index, which reflects her degree of relatedness to you; the intuition produced by her adaptations will be that you ought to help her when ½C_you < B_sister (but not when ½C_you > B_sister; kin selection implies there will be limits on the costs she is willing to impose on you). The magnitude of your mother’s kinship index will be the same for both of you—her degree of relatedness to each of you is ½. So her kin-selected adaptations will generate the intuition that you should help your sister whenever ½B_sister > ½C_you; that is, your mother will encourage you to help when B_sister > C_you. Her opinion will be shared by every other member of the family, k, for whom r_k,you = r_k,sister: your father, your other full siblings, your grandparents, and their children (your uncles and aunts).
These kin-selected adaptations can be expected to include moral concepts, such as “ought” and “should”: the feelings and opinions they generate are about how you ought to treat your sister, how she deserves to be treated by you. When you act otherwise—in reality or when contemplating options prospectively—these adaptations can be expected to activate moral emotions. These emotions themselves have evolved functions, which are reflected in their computational design (Tooby & Cosmides, 2008; Tooby et al., 2008).
Research on the computational architecture of anger and shame provides examples (e.g., Sell, Sznycer, Al-Shawaf, et al., 2017; Sell et al., 2009; Sznycer et al., 2016). Prospectively, the moral emotions produce evaluations of alternative actions, used in social decision making (e.g., Sznycer et al., 2016). After the fact, they recalibrate variables used by social decision-making systems (e.g., correcting estimates of another person’s need—the relevant costs and benefits) and motivate relevant behaviors, such as bargaining for better treatment (Sell et al., 2017) or apologizing and withdrawing socially to avoid being further devalued by others (Sznycer et al., 2016). For example, you may feel guilt at the prospect of helping too little or, after the fact, in response to information from your mother or sister that you underestimated your sister’s need (you may experience regret when you realize you overestimated her need). Your sister and mother may grow angry when you help less than their adaptations calculated you should, motivating them to communicate this to you (argue), threaten to withdraw benefits from you, or otherwise incentivize you to treat your sister better in the future (Sell et al., 2017).

As a thought experiment, let’s assume that taking the action under review benefits your sister by 5 notional units (B_sister = 5). As long as C_you < 2.5 (i.e., < ½B_sister), you will be motivated to help, and your mother and sister will agree that you should. The same reasoning implies that the three of you will agree that you should not help your sister when C_you > 10 (siblings are selected to refrain from imposing too much harm on one another: for values over 10, ½C_you > 5 = B_sister). There will be a consensus about whether you ought to help your sister—moral connotation intended—when the cost to you is smaller than ½B_sister or larger than 2B_sister. The three of you will have different opinions, however, when the cost to you is between these values (in this example, when 2.5 < C_you < 10). Mom weighs your welfare and your sister’s equally, but your sister discounts costs to you by ½; so when 5 < C_you < 10, your sister will want your help, but your mother will think this is too much to ask, and you will too. But when 2.5 < C_you < 5, your mother will have the intuition that you ought to help your sister (because B_sister > C_you) and your sister will agree, but you will feel they are expecting too much from you. That is, your three brains will generate conflicting intuitions about what you ought to do—what your duties of beneficence are—when ½B_sister < C_you < 2B_sister. There is no single solution in this range that will generate moral agreement.

In fact, the same logic implies that your own moral intuitions will shift when the shoe is on the other foot—that is, when your sister has the option of helping you. The idea that she should help you will seem reasonable to you when C_sister < 2B_you, but she will disagree whenever C_sister > ½B_you. You will seem like a hypocrite: You will expect more from her than you were willing to give her in an equivalent situation (Kurzban, 2012).
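The thought experiment can be run mechanically. In this sketch (ours), each observer endorses helping when the benefit to the helped sibling, weighted by the observer's kinship index toward her, exceeds the similarly weighted cost to the helper (with self weighted at 1):

```python
# Sketch (ours): the chapter's thought experiment (B_sister = 5), run for
# each family member's kinship-weighted cost-benefit rule.

def endorses_helping(cost: float, benefit: float,
                     ki_toward_helper: float, ki_toward_helped: float) -> bool:
    """An observer endorses helping when the weighted benefit to the helped
    exceeds the weighted cost to the helper (self is weighted at 1)."""
    return ki_toward_helped * benefit > ki_toward_helper * cost

B = 5.0
for C in (2.0, 4.0, 7.0, 11.0):
    you    = endorses_helping(C, B, ki_toward_helper=1.0, ki_toward_helped=0.5)
    sister = endorses_helping(C, B, ki_toward_helper=0.5, ki_toward_helped=1.0)
    mother = endorses_helping(C, B, ki_toward_helper=0.5, ki_toward_helped=0.5)
    print(f"C={C}: you={you}, sister={sister}, mother={mother}")
# C=2.0: consensus to help; C=11.0: consensus not to help; C=4.0 and C=7.0
# fall in the conflict zone (2.5 < C < 10), where the three intuitions diverge.
```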
When you are a mother with two daughters, your intuitions about their duties of beneficence toward one another will change again: When C_i < B_j, you will feel that daughter i should help daughter j. Your own moral intuitions should change because different cognitive adaptations will be activated depending on your role in this family drama: when you are the sibling doing the helping, when you are the sibling being helped, and when you are the mother of two children.

What does this mean about moral intuitions regarding duties of beneficence toward full siblings? Moral consensus within the family should emerge for actions involving a wide range of costs to you and benefits to your sibling—especially when C_you < ½B_sib (you should help her) and when C_you > 2B_sib (don’t help her, it is too much to ask). For values in these ranges, the evolved psychologies of various family members can be expected to generate similar intuitions, feelings, or opinions about how you ought to treat your sibling—about what counts as the right thing to do. These mechanisms may also generate a moral consensus in a society—including all-things-considered judgments—when people contemplate situations of this kind prospectively, read literature, or try to decide who was wrong in a conflict between siblings.

When the costs and benefits of i helping sibling j fall between those values—that is, when everyone agrees that ½C_i < B_sib < 2C_i—moral conflict is likely. Dissension will arise within the family about i’s duties toward sibling j, and your own intuitions will vary depending on whether you are the parent, the helper, or the sibling who can be helped. The evolved psychologies of various family members can be expected to generate different intuitions, feelings, or opinions about how you ought to treat your sibling for values in this range. An outcome that is morally satisfying to one sibling will feel unfair to the other.

In this analysis, there is no impartial point of view. What counted as a reproduction-promoting “strategy” differed for ancestral mothers, self, and siblings, and the psychologies designed by those selection pressures can be expected to vary accordingly. The evolved psychology of mothers weights each sibling equally, not because her age and experience have made her an impartial judge but because equal weightings promoted the reproduction of mothers in our ancestral past. Her preferences were shaped by the same causal processes that produced unequal weightings in self and sibling.

If the goal of ethical theorists is to create normative theories of broad applicability, analyses like this can help. Across a wide variety of situations, different family members can be expected to have very similar moral intuitions about an individual’s duties of beneficence toward their siblings. The prospects for developing normative principles that capture the intuitions of everyone involved are promising for situations like these. But when actions are perceived as implying costs and benefits in the intermediate range just discussed, the moral intuitions of family members can be expected to differ, and there is no outcome that will feel fair to all involved. Although this variation is systematic, the search for a normative principle that captures the intuitions of everyone involved is likely to fail. Theorists could accept this, adopting a normative principle for such cases that captures some, but not all, moral intuitions. Or they could decide that these situations involve matters of personal taste—like preferences for chocolate versus peppermint ice cream—that fall outside the scope of moral judgment.
12. The Evolution of Cooperation: A Cook’s Tour

Biologists have been interested in the “problem of altruism” since the 1960s (Williams, 1966). They define the problem thus: How can selection favor an adaptation that causes an organism to systematically behave in ways that decrease the organism’s own reproductive success while increasing the reproductive success of another individual? Actions or structural features that have these consequences by design11 are defined as “altruistic” in biology. Kin selection is one of several solutions to biology’s “problem of altruism.” But kin selection cannot explain the evolution of adaptations causing altruism toward individuals who are not kin.
Adaptations causing unconditional altruism toward kin have evolved in many taxa. Altruism toward individuals who are not kin is less common zoologically, and requires adaptations that are different from those that generate altruism toward kin. In most models, altruism must be conditional to evolve. Even group selection models require the effects of altruism to fall differentially on ingroup members (e.g., Bowles & Gintis, 2013; Boyd & Richerson, 2009; McElreath & Boyd, 2006). The extent to which humans in all cultures cooperate with individuals who are not genetic relatives is among our most zoologically unusual features. What follows is a brief tour of adaptations for conditional cooperation, including social exchange in its various forms: reciprocal altruism, cooperation for mutual benefit, trade (Axelrod & Hamilton, 1981; Barclay, 2013, 2016; Baumard et al., 2013; Fiske, 1991; Noë & Hammerstein, 1995; Trivers, 1971); cooperation in groups, especially collective action (Boyd & Richerson, 2009; Tooby et al., 2006); deep engagement (banker’s paradox) relationships (Tooby & Cosmides, 1996); and risk-pooling cooperation, which can apply within cooperative dyads or groups (Kaplan & Hill, 1985; Kaplan et al., 2012). The adaptive problems that need to be solved for cooperation to evolve in these various forms are similar (but not identical); solving them requires computational systems with domain-specialized concepts, representational formats, reasoning systems, and moral sentiments. Many deontic concepts and implicit moral rules are embedded in these systems (Curry, 2015). They are also relevant to virtue ethics, providing a basis for understanding which kinds of characteristics are likely to be treated as virtues across cultures and time.
13. Evolutionary Game Theory and the Analysis of Social Behavior

Game theory is a tool for analyzing strategic social behavior—how agents might behave when they are interacting with others who can anticipate and respond to their behavior. Economists have used it to analyze how people respond to incentives present in a well-defined situation. These models typically assume rational actors who calculate the payoffs of alternative options (anticipating that other players will do likewise) and choose the option that maximizes the actor’s own payoff (but see Hoffman et al., 1998).

The social behavior of other people was as relentless a selection pressure as predators and efficient foraging. To specify these selection pressures more precisely, evolutionary biologists adopted game theory as an analytic tool, too (Maynard Smith, 1982). Evolutionary game theory requires no assumptions about deductive reasoning or economic rationality; indeed, it can be usefully applied to cooperation among bacteria or fighting in spiders. It is used to model interactions among agents endowed with well-defined decision rules that produce behavior that is contingent on features of the situation (especially the behavior of other agents). Although these decision rules are sometimes called “strategies” by evolutionary biologists, this is a term of art: no deliberation by bacteria (or humans) is implied (or ruled out) by this term. Whether the decision rules being analyzed are designed to regulate foraging, fighting, or cooperating, the immediate payoffs of these decisions, in food or resources, are translated by the modeler into the currency of offspring produced by the decision-making agent, and these offspring inherit their parents’ decision rule. In evolutionary game theory, a decision rule or strategy that garners higher payoffs leaves more copies of itself in the next generation than alternatives that garner lower payoffs. By analyzing the reproductive consequences of alternative decision rules, evolutionary biologists can determine which strategies natural selection is likely to favor and which are likely to be selected out (eliminated from the population).
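The bookkeeping just described can be written down directly. The following sketch is ours; holding payoffs fixed rather than computing them from interactions is a simplification, but the replicator step (a rule's share of the next generation is proportional to the payoff it earned) is the one the text describes:

```python
# Sketch (ours): a discrete replicator step. Payoffs are held fixed here
# (a simplification); in full models they are computed from interactions.

def next_generation_shares(shares, payoffs):
    """Each rule's share of the next generation is proportional to the
    payoff it earned: share_i' = share_i * payoff_i / mean payoff."""
    mean_payoff = sum(s * p for s, p in zip(shares, payoffs))
    return [s * p / mean_payoff for s, p in zip(shares, payoffs)]

shares = [0.5, 0.5]    # two decision rules, initially equally common
payoffs = [2.0, 3.0]   # rule 2 garners higher payoffs
for _ in range(20):
    shares = next_generation_shares(shares, payoffs)
print(shares)  # rule 2 nears fixation; rule 1 is being selected out
```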
14. The Evolution of Cooperation between Two Unrelated Individuals: Constraints from Game Theory

The evolution of adaptations for cooperation between unrelated individuals is tricky, even when only two individuals are involved and they can interact repeatedly. Yet social exchange—an umbrella term for two-party cooperation in its many forms—is a ubiquitous feature of every human society. In evolutionary game theory, it is often modeled as a repeated “prisoner’s dilemma” (PD) game, with two agents who are not genetic relatives.12 In each round of a PD game, an agent must decide whether to cooperate or defect—to provide a benefit of magnitude B to the other agent (at cost C to oneself) or refrain from doing so. In a PD game, B − C > 0 for both agents. In evolutionary game theory, the choice made by each agent is specified by a decision rule (a “strategy”), and different agents are equipped with different decision rules. When an agent reproduces, its offspring have the same design—the same decision rule—as the parent (with high probability; many models allow mutations, i.e., a small probability that an offspring has a different decision rule from its parent).

In a repeated PD, two agents play many rounds during a single generation. When it is time to reproduce, the benefits and costs each agent earned during these rounds—payoffs that can be thought of as calories acquired vs. expended, favors garnered vs. given, changes in status from winning vs. losing a fight—are translated into offspring in the next generation. The relative number of offspring each agent produces before it “dies” is proportional to the payoffs it earned during that generation: agents with designs that earned higher payoffs produce more offspring relative to agents with designs that earned lower payoffs.13 That is, the agents’ choices have consequences for their reproductive success (as in models of kin selection). This process is repeated for many generations, so the modeler can determine which decision rules—which strategies—increase in relative frequency and which are eliminated from the population.

Imagine a population of agents participating in a series of PD games. Each agent is equipped with one of two possible decision rules: always cooperate or always defect. Always cooperate causes unconditional cooperation: agents with this design incur cost C to provide their partner with benefit B, regardless of how their partner behaves in return. The other decision rule, always defect, accepts benefits from others but never provides them, so it never suffers cost C. When two unconditional cooperators interact, their payoff is positive, because B − C > 0. When two defectors interact, they get nothing—they are no better or worse off than if they had not interacted at all. But every time a cooperator interacts with a defector, the cooperator suffers a net loss of C (because it pays cost C with no compensating benefit) and the defector, who incurred no cost, earns B (the benefit provided by the cooperator).

Now imagine that these agents are randomly sorted into pairs for each new round and there are n rounds during a generation. Because assortment into pairs is random, the probability that an agent is paired with a cooperator is p, the proportion of cooperators in the population; the probability an agent is paired with a defector is (1 − p), the proportion of defectors in the population. The always defect rule never suffers a cost, but it earns B every time it is paired with an agent who always cooperates, which is n×p times; thus np×B is the total payoff earned by each defector that generation. In contrast, the always cooperate rule suffers cost C in every round, for a total cost of n×C. It earns B only from the np rounds in which it meets another cooperator, for a total benefit of npB. Hence, n(pB − C) is the total payoff earned by each cooperator that generation. These payoffs determine the relative number of offspring each agent produces in the next generation. Because offspring have the same design as their parents with high probability, these payoffs also determine the relative number of copies of a design in the next generation (mutations are random with respect to design). Because npB > npB − nC, the always defect design will leave more copies of itself in the next generation than the always cooperate design. As this continues over generations, unconditional cooperators will eventually disappear from the population, and only defectors will remain. In an environment where the only alternative is a design that always cooperates, always defect is an evolutionarily stable strategy (ESS), but always cooperate is not.

Although strategies that cause unconditional cooperation fail, models in evolutionary game theory show that decision rules that cause cooperation can evolve and be maintained in a population by natural selection if they implement a strategy for conditional cooperation—a strategy that not only recognizes and remembers (at least some of) its history of interaction with other agents, but uses that information to cooperate with other cooperators and defect on defectors. (One example of a strategy with these properties is tit-for-tat, a decision rule that cooperates on the first move, after which it does whatever its partner did on the previous move; Axelrod & Hamilton, 1981; Axelrod, 1984.) Conditional cooperators remember acts of cooperation and cooperate in response, so they provide benefits to one another, earning a payoff of (B − C) every time they interact. Because the cooperation of one elicits future cooperation from the other, these designs cooperate with one another repeatedly, and the positive payoffs they earn from these interactions accumulate over rounds. In this, they are like unconditional cooperators. The difference is that conditional cooperators limit their losses to defectors. The first time a conditional cooperator interacts with a particular defector, it suffers a one-time loss, C, and the defector earns a one-time benefit, B. But the next time these two individuals meet, the conditional cooperator defects and does not resume cooperation unless its partner responds by cooperating. As a result, designs that defect cannot continue to prosper at the expense of designs that cooperate conditionally.

Designs that cooperate conditionally harvest gains in trade from interacting repeatedly with one another; interactions between designs that defect do not produce these gains in trade. Because reproduction of a design is proportional to the payoffs it earns, designs that induce conditional cooperation produce more copies of themselves in the next generation than designs that induce defection. It is a prediction of this approach that over many generations a population that begins with both designs will gradually replace designs that always (or usually) defect with designs that cooperate conditionally. Defectors are often referred to as cheaters in two-party reciprocation or social exchange.
The results of evolutionary game theory suggest that cognitive adaptations for participating in social exchange can be favored and maintained by natural selection, but only if they implement some form of conditional cooperation. To do so, they require design features that detect and respond to cheaters so defined.
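Under the assumptions above (random pairing, n rounds per generation, a proportion p of cooperators), the payoff comparison can be checked directly. The numbers in this sketch (ours) are arbitrary:

```python
# Sketch (ours): per-generation payoffs in the randomly paired PD described
# above, for a population with proportion p of unconditional cooperators.

def payoff_always_defect(n: int, p: float, B: float, C: float) -> float:
    # Earns B in the n*p rounds spent paired with a cooperator; never pays C.
    return n * p * B

def payoff_always_cooperate(n: int, p: float, B: float, C: float) -> float:
    # Pays C every round; earns B only in the n*p rounds with a cooperator.
    return n * (p * B - C)

n, B, C = 100, 3.0, 1.0
for p in (0.9, 0.5, 0.1):
    assert payoff_always_defect(n, p, B, C) > payoff_always_cooperate(n, p, B, C)
# npB > n(pB - C) at every frequency p, so always defect out-reproduces
# always cooperate until unconditional cooperators disappear.
```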
Notice that evolutionary game theory analyzes which designs are favored by selection. In this context, “cheaters” are individuals who cheat in situations involving social exchange by virtue of their cognitive design: They are agents with decision rules that cause them to take benefits provided by another agent without providing what the other agent wanted. (For the purposes of this analysis, it does not matter whether the agent’s failure to reciprocate was caused by an intentional choice or by the calibration of a nonconscious (“subpersonal”) mechanism.) Not all failures to reciprocate indicate a cheater: conditional cooperators will sometimes fail to reciprocate because they suffered bad luck (e.g., their hunt failed; injury prevented them from foraging) or made a mistake. Models from evolutionary game theory show that withdrawing from cooperation with these individuals is an adaptive error (e.g., Panchanathan & Boyd, 2003; see below on generosity and free riders).
15. Detecting Cheaters and Reasoning about Social Exchange

Using constraints from game theory and knowledge about the behavioral ecology of hunter-gatherers, Cosmides and Tooby developed social contract theory: a task analysis specifying (1) the adaptive problems that arise in two-party reciprocation and (2) the properties a computational system would need to solve them (Cosmides, 1985; Cosmides & Tooby, 1989, 2008a). As in the discussion of adaptations for kin-directed altruism, we assume that the human cognitive architecture has adaptations for computing the value of resources, actions, and situations to self and other. In what follows, “benefit” and “cost” refer to these computed values—to mental representations generated by the reasoner. Research on reasoning about social exchange shows that procedures for detecting cheaters operate on abstract representations of costs and benefits (e.g., Cosmides & Tooby, 2008a, 2008c).14

The provision of benefits needs to be conditional for social exchange between unrelated individuals to evolve: you deliver a benefit to an agent conditional on that agent satisfying some requirement of yours (providing a direct benefit or creating conditions that benefit you). Whether the understanding is left implicit or agreed to explicitly, this contingency can be expressed as a social contract, a conditional rule that fits the following template: If you accept benefit B from me, then you must satisfy my requirement R. A cheater is an individual who has taken benefit B without satisfying requirement R and has done so by design, not by mistake or incapacity.

Understanding social exchange and detecting cheaters requires some form of conditional reasoning. The human cognitive architecture may well include subroutines that implement the inferences of first-order logic or a relatively domain-general deontic logic. But the inferences that these subroutines make will systematically fail to detect cheaters (for explanations, see Cosmides & Tooby, 2008a; Fiddick et al., 2000). Because these logics are content blind, they are insensitive to what P and Q refer to in “if P then Q.” Reasoning adaptively about social exchange requires rules of inference that are specialized for this domain of social interaction. These inference procedures need to be content sensitive—they need to operate on representational primitives such as benefit to agent 1, requirement of agent 2, obligation,15 entitlement, intention to violate, perspective of agent i. And they must include a subroutine that looks for cheaters, a specialized moral concept.

The empirical evidence shows that situations of social exchange do, in fact, activate the very specialized representations, inference rules, and violation detection procedures required to reason adaptively in this domain (for detailed reviews of the evidence, see Cosmides & Tooby, 2008a, 2008c, 2015). Reasoning about social exchange dissociates—both functionally and neurally—from reasoning about deontic rules so similar to social contracts that no other theory distinguishes between them. The neurocognitive system activated by situations involving social exchange generates inferences about how people “ought” to treat one another in interactions of this kind and triggers negative evaluations of cheaters. For this reason, it can be thought of as a moral reasoning system: a very specialized one.
16. Specialized Inferences: An Example

The inferences of this specialized moral reasoning system—the social contract algorithms—diverge from the inferences of classical first-order logic (Cosmides & Tooby, 2008a; Fiddick et al., 2000). In first-order logic, “if P then Q” does not imply “if Q then P.” But social contract algorithms license an inference like this when P and Q refer to the benefits and requirements of agents in social exchange. For example, when Ana says “Bea, if you babysit my son, I will give you a sack of avocados” and Bea agrees, that implies “If Bea accepts the avocados from Ana, she is obligated to babysit Ana’s son.” When Bea babysits, this triggers the inference that she is entitled to the avocados and Ana is obligated to provide them; when Ana provides the avocados to Bea, it triggers the inference that she is entitled to have Bea babysit and Bea is obligated to do so. These are moral inferences, which we spontaneously make in situations of social exchange.

A content-general deontic logic (a logic of obligation, entitlement, and prohibition) will not generate the adaptively correct pattern of inferences either. For situations involving social exchange, [1] “If you accept benefit B from agent X, then you are obligated to satisfy X’s requirement” implies [2] “If you satisfy agent X’s requirement, then you are entitled to the benefit B that X offered to provide” (and vice versa). But consider a slightly more general version of [2] that operates outside the domain of social exchange: “If you satisfy requirement R, then you are entitled to E.” This cannot imply “If you get E, then you are obligated to satisfy requirement R” without violating our moral intuitions. For example, “If you are an American citizen, you have a right to a jury trial” does not imply “If you get a jury trial, then you are obligated to be an American citizen” (Cosmides & Tooby, 2008a).
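These licensed inferences can be made explicit in a short sketch. The class and method names below are our invention, not a published formalism; the point is that accepting the benefit creates an obligation and satisfying the requirement creates an entitlement, an inference pattern that runs in both directions for social contracts:

```python
from dataclasses import dataclass

@dataclass
class SocialContract:
    """Template: 'If you accept benefit B from me, you must satisfy my
    requirement R.' (Names here are illustrative, not a published model.)"""
    offerer: str      # the agent offering the benefit
    acceptor: str     # the agent who may accept it
    benefit: str
    requirement: str

    def inferences(self, accepted_benefit: bool, met_requirement: bool) -> list:
        facts = []
        if accepted_benefit:   # accepting B creates an obligation
            facts.append(f"{self.acceptor} is obligated to {self.requirement}")
        if met_requirement:    # satisfying R creates an entitlement
            facts.append(f"{self.acceptor} is entitled to {self.benefit}")
            facts.append(f"{self.offerer} is obligated to provide {self.benefit}")
        return facts

deal = SocialContract("Ana", "Bea", "a sack of avocados", "babysit Ana's son")
print(deal.inferences(accepted_benefit=False, met_requirement=True))
# ['Bea is entitled to a sack of avocados',
#  'Ana is obligated to provide a sack of avocados']
```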
17. Cheater Detection

In first-order logic, a conditional of the form “if P then Q” is violated when P is true and Q is false, that is, by the co-occurrence of a true antecedent and a false consequent (P & not-Q). The classical rules of deductive inference (like modus ponens and modus tollens) operate on the antecedent and consequent of a conditional, no matter what they refer to; these procedures are blind to benefits, requirements, and agents with perspectives. Social contract algorithms employ a very specific concept of violation—cheating—that does not map onto the concept of violation used in first-order logic. Cheating is taking the benefit from an agent without satisfying that agent’s requirement. A cheater is someone who does this by design, not by mistake.

Consider again Ana’s offer, “If you babysit my son, I will give you a sack of avocados.” Which acts count as cheating depends on whether we adopt the perspective of Ana or Bea. Ana cheated if she accepted Bea’s help babysitting but did not give her avocados; Bea cheated if she accepted avocados from Ana but did not babysit. That’s not because social contracts are biconditional: if Bea does not babysit, Ana has not cheated if she decides to give Bea avocados anyway, nor has Bea cheated if she babysits but then decides she doesn’t need any more avocados.

Studies with the Wason selection task, a tool developed in cognitive psychology to study conditional reasoning, show that social contracts activate a cognitive mechanism that looks for cheaters, not for logical violations. In this task, subjects are presented with a conditional rule and then asked to look for cases that could violate it. If you ask whether Ana violated the rule, they investigate cases in which Bea babysat (P) and cases in which Ana did not give her avocados (not-Q). This response—P & not-Q—is the same response subjects would give if they were looking for logical violations, reasoning with rules of logic. But they are not. If you ask instead whether Bea violated the rule, they investigate cases in which Bea accepted avocados (Q) and cases in which she did not babysit (not-P). Q & not-P is logically incorrect, but it is the correct answer if you are looking to see if Bea cheated (Gigerenzer & Hug, 1992). The same pattern of adaptively correct but logically incorrect answers can be elicited by having Ana express her offer like this: “If I give you avocados, then you must babysit my son.” Subjects asked whether Ana violated the rule still investigate occasions when Bea babysat (Q) and occasions when Ana gave her nothing (not-P). In formal logic, a true consequent (Q) with a false antecedent (not-P) does not violate a conditional rule, but these cases are the right ones to investigate if you want to know if Ana cheated (Cosmides, 1985, 1989).

The most telling results come from experiments that present the same social contract rule but vary information about those who are in a position to violate it (Cosmides et al., 2010). Social contract theory predicts an adaptive specialization that looks for cheaters, not innocent mistakes. Cues relevant to this distinction regulate when subjects detect cases that violate the rule. First, intentional violations activate cheater detection, but innocent mistakes do not. Second, violation detection is up-regulated when potential violators would get the benefit regulated by the rule and down-regulated when they would not. Third, cheater detection is down-regulated when the situation makes cheating difficult—when violations are unlikely, the search for them is unlikely to reveal those with a disposition to cheat. Parametric studies show that each of these cues independently contributes to violation detection (Cosmides et al., 2010; for discussion see Cosmides & Tooby, 2015). This provides three converging lines of evidence that the mechanism activated by conditionals expressing a social contract is not designed to look for general rule violators, or deontic rule violators, or violators of social contracts, or even cases in which someone has been cheated. This mechanism does not look for violators of social exchange rules in cases of mistake—not even in cases when someone has accidentally benefited by violating a social contract.
The mechanism activated has a narrow focus: It looks for violations of social contracts when this is likely to lead to detecting cheaters—defined as individuals who take a benefit that was conditionally offered by an agent while intentionally not meeting that agent’s requirement. Its computational design fits the adaptive function—detecting designs that cheat—like a key fits a lock: It is a cheater detection mechanism.
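The narrow focus described here can be stated in a few lines. This sketch is ours; it simply encodes the definition the experiments converge on (benefit taken, requirement unmet, by design rather than by mistake):

```python
# Sketch (ours): the definition of cheating that the experimental results
# converge on. Detection keys on benefits and intentions, not logical form.

def cheated(took_benefit: bool, met_requirement: bool, intentional: bool) -> bool:
    """True only when benefit B was taken without satisfying requirement R,
    by design rather than by mistake or incapacity."""
    return took_benefit and not met_requirement and intentional

print(cheated(True, False, True))    # True: a cheater
print(cheated(True, False, False))   # False: an innocent mistake
print(cheated(False, True, True))    # False: no benefit was taken
```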
18. Partner Choice versus Partner Control

Once you detect cheaters, then what? Broadly speaking, there are two possibilities: You can choose to cooperate with a different partner, or you can try to reform the partner who cheated so she cooperates with you more in the future. Evolutionary biologists refer to these options as partner choice versus partner control (e.g., Barclay, 2013; Noë & Hammerstein, 1994, 1995; Schino & Aureli, 2017). Switching partners can be the less costly strategy if alternative cooperative partners are available to you (assuming the costs of finding and establishing a new cooperative relationship are low). If they are not, the best strategy may be to reform the partner you have. The two main bargaining tools available to incentivize better behavior are to (1) inflict harm (i.e., punish) or (2) withdraw cooperation until the partner starts cooperating again—the tactic employed by tit-for-tat strategies (TFT). Both partner control tools are costly: the first risks injury, the second entails forgone opportunities to forge a profitable cooperative relationship with someone else. Threatening to use these tactics is less costly than deploying them, but when the partner herself can switch, it risks losing a partner who provides net benefits despite subtle cheating (under-reciprocating).

When the repeated prisoner’s dilemma was first used to model the evolution of cooperation, partners were paired with one another randomly (Trivers, 1971; Axelrod & Hamilton, 1981). In that environment, partner choice is not an option. Strategies for conditional cooperation arising from these models employ partner control tactics (e.g., tit-for-tat). Lately, inspired by Noë and Hammerstein’s (1994, 1995) early papers distinguishing partner control from partner choice models of the evolution of cooperation, there has been a florescence of new research on cooperative partner choice in “biological markets.” Both kinds of model have implications for moral psychology, and can shed light on morality-relevant puzzles arising from results in behavioral economics.
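A toy expected-cost comparison (ours, with invented parameters) captures the tradeoff the text describes between finding a new partner and reforming the old one:

```python
# Toy sketch (ours, invented parameters): expected costs of the two responses
# to a detected cheater.

def cost_of_switching(search_cost: float, setup_cost: float) -> float:
    # Partner choice: find and establish a relationship with someone new.
    return search_cost + setup_cost

def cost_of_reform(rounds_withheld: int, gain_per_round: float,
                   injury_risk: float, injury_cost: float) -> float:
    # Partner control: withdrawing cooperation forgoes gains in trade;
    # punishing risks injury.
    return rounds_withheld * gain_per_round + injury_risk * injury_cost

# In a thick "biological market," alternatives are cheap and switching wins:
print(cost_of_switching(1.0, 2.0) < cost_of_reform(4, 2.0, 0.1, 10.0))  # True
```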
19. Puzzles from Behavioral Economics

Cooperation can be studied in the laboratory by having people interact in games in which the monetary payoffs for different choices are carefully controlled—dictator games, prisoner’s dilemma style games, bargaining games (e.g., the ultimatum game), trust/investment games, public goods games, and others. When behavioral economists used these methods to test predictions of game theory, they found that people in small groups do not act as if they are maximizing immediate monetary payoffs (e.g., Hoffman et al., 1998; Smith, 2003). In a one-shot interaction with anonymous others, Homo economicus models predict no generosity, no cooperation, no trust, and no punishment. Yet people give more, cooperate more, trust more, and punish defections more than these models predict, even when the experimenter tells them that the interaction is one-shot and anonymous. Why? According to both economic and evolutionary game theory, repeated interactions are necessary for behaviors like this to evolve.

To some, this “excess altruism” is evidence that the psychology of cooperation was shaped by group selection rather than selection operating on individuals. According to these models, groups that included individuals with psychological designs that led them to suffer costs to punish defectors would maintain higher levels of within-group cooperation and, therefore, outcompete groups without such individuals. Although individuals with designs that punish defectors will have lower fitness than members of their group who cooperate without punishing, this “strong reciprocity” design spreads because groups with these individuals replace groups that lack them (Bowles & Gintis, 2013; Boyd et al., 2003; Gintis, 2000; Gintis et al., 2003).

But are these behaviors really excess altruism—that is, beyond what can be explained by selection on individuals for direct reciprocity? Selection does not occur in a vacuum: The physical and social ecology of a species shape the design of its adaptations, and our hunter-gatherer ancestors lived in small, interdependent bands that had many encounters with individuals from neighboring bands. Adaptations for direct reciprocity evolved to regulate cooperation in an ancestral world in which most interactions were repeated. The high prior probability that any given interaction will be repeated should be reflected in their design. In fact, models of this social ecology show that meeting an individual once is a good cue that you will meet again (Krasnow et al., 2013). This has been called the “Big Mistake” hypothesis by advocates of group selection—who characterize this position as saying that our adaptations are “mistaking” one-shot interactions for repeated ones (e.g., Henrich & Henrich, 2007, 91). Critics of the Big Mistake hypothesis argue that one-shot interactions were common enough in the lives of ancestral hunter-gatherers to select against cooperation in these situations. On this basis, they argue that the Big Mistake hypothesis is mistaken. But is it? Partner control and partner choice models provide evidence that it is not: cooperating in one-shot interactions, the supposed “Big Mistake,” is no design error at all.
20. Partner Control and the Evolution of Generosity

Agent-based simulations are widely used to study the evolution of cooperation by partner control. In most cases, the behavioral strategies are particulate—they do not have internal cognitive components that can evolve—and the simulation environment has either one-shot or repeated interactions, but not both. But what happens if these strategies have components that can evolve, and the social environment includes both one-shot and repeated interactions, as in real life? It turns out that generosity in one-shot interactions evolves easily when natural selection shapes decision systems for regulating two-person reciprocity (exchange) under conditions of uncertainty (Delton et al., 2011).

In real life, you never know with certainty that you will interact with a person once and only once (until the moment before you die). Categorizing an interaction as one-shot or repeated is always a judgment made under uncertainty, based on probabilistic cues (e.g., am I far from home? Does she speak with my accent? Did he marry into my band?). In deciding whether to initiate a cooperative relationship, a contingent cooperator must use these cues to make tradeoffs between two different kinds of errors: (i) false positives, in which a one-shot interaction is mistakenly categorized as a repeated interaction, and (ii) misses, in which a repeated interaction is mistakenly categorized as one-shot. A miss is a missed opportunity to harvest gains in trade from a long string of mutually beneficial interactions. In a population of contingent cooperators, the cost of a miss is usually much higher than the cost of a false positive.

To see this, consider agents who defect on a new partner when they believe the interaction is one-shot, but play TFT when they believe they will repeatedly interact with the new partner. Let’s assume repeated interactions last for only five rounds on average, and a round produces very modest gains in trade: b = 3, c = 1, so the payoff for mutual cooperation is (b − c) = 2 and the cost of cooperating with a defector is c = 1. The cost of a false positive error is c = 1: the payoff for an agent who cooperates, (wrongly) assuming this will be a repeated interaction, with a partner who defects. But notice that the cost of a miss is 10 times greater (5 rounds × (b − c)): When the agent defects, (wrongly) assuming it is a one-shot interaction, its new partner defects in return, inaugurating a chain of reciprocal defections. Even this is an underestimate: Given that humans have relationships that span decades, an average of five rounds for repeated interactions is low. When the average number of rounds for repeated interactions is 10 (still low), the opportunity cost of a miss is the failure to harvest a payoff of 20 (10 × (b − c))—in this case, the cost of a miss is 20 times larger than the c = 1 cost of a false positive. When misses are more costly than false positives, it can be better to have fewer missed opportunities at the price of more false positives—cases in which agents cooperate in one-shot interactions.
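The asymmetry between the two error costs can be checked with the chapter's own numbers (b = 3, c = 1); the sketch below is ours:

```python
# Sketch (ours): expected costs of the two error types under b = 3, c = 1.
b, c = 3.0, 1.0

def false_positive_cost() -> float:
    # Cooperate once with a one-shot partner who defects: lose c.
    return c

def miss_cost(avg_rounds: int) -> float:
    # Defect on a would-be repeat partner: forgo (b - c) per round of
    # the mutually cooperative relationship that never gets started.
    return avg_rounds * (b - c)

print(false_positive_cost())  # 1.0
print(miss_cost(5))           # 10.0: ten times the false-positive cost
print(miss_cost(10))          # 20.0: twenty times
```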
Using agent-based simulations, Delton et al. (2011) show that under a wide range of conditions, individual-level selection favors computational designs that decide to cooperate with new partners, even in a world where most of the interactions are one-shot. Across simulations, the proportion of interactions that are one-shot varied from 10% to 90%. (Even the lowest base rate of 10% probably overestimates the percent of one-shot partners experienced by hunter-gatherers, who lived in small interdependent bands and whose extra-band encounters were primarily with people from neighboring bands.) Each new partner comes with a number—a cue summary—that serves as a hint to whether an agent’s interaction with that partner will be one-shot or repeated. The cue summaries are never perfect predictors: they are drawn from one of two normal distributions (one-shot vs. repeated) that overlap by either 13%, 32%, or 62%. Differences in how discriminable the cue summaries are accounted for 50%, responders had an incentive to become proposers; when they were keeping 55% of proposers took money from the recipient (List, 2007). Why is the same amount of money, in each case provided by the experimenter, distributed differently when it is a windfall rather than earned? Is the key variable effort expended or something else?

In a classic study of Ache foragers in Paraguay, Kaplan and Hill (1985) found that the same individuals in the same culture applied different sharing rules to meat and honey than to gathered plant foods. Meat and honey were shared widely in the band—they were communally shared (Fiske, 1991), according to a rule approximating Marx’s claim that hunter-gatherers share “from each according to their ability, to each according to their need.” This was not true, however, for most of the gathered foods. These were shared within the family or with specific reciprocity partners. Effort was required to acquire all of these resources; foraging risk was the variable that explained which sharing rules were used for each resource.

Hunting is a high risk-high payoff activity. Behavioral ecologists studying tribal societies find that hunters come back empty-handed on more than half of their hunting trips (Kaplan et al., 2012). These reversals of fortune apply across skill levels: effort is not sufficient to ensure hunting success. When hunters do succeed in killing an animal, there is often more meat than one family can consume.
How money is acquired matters in laboratory games too: in one set of experiments, responders had an incentive to become proposers, and 55% of proposers took money from the recipient when taking was an option (List, 2007). Why is the same amount of money, in each case provided by the experimenter, distributed differently when it is a windfall rather than earned? Is the key variable effort expended or something else?

In a classic study of Ache foragers in Paraguay, Kaplan and Hill (1985) found that the same individuals in the same culture applied different sharing rules to meat and honey than to gathered plant foods. Meat and honey were shared widely in the band—they were communally shared (Fiske, 1991), according to a rule approximating Marx’s claim that hunter-gatherers share “from each according to their ability, to each according to their need.” This was not true, however, for most of the gathered foods. These were shared within the family or with specific reciprocity partners. Effort was required to acquire all of these resources; foraging risk was the variable that explained which sharing rules were used for each resource.

Hunting is a high risk-high payoff activity. Behavioral ecologists studying tribal societies find that hunters come back empty-handed on more than half of their hunting trips (Kaplan et al., 2012). These reversals of fortune apply across skill levels: effort is not sufficient to ensure hunting success. When hunters do succeed in killing an animal, there is often more meat than one family can consume. Keeping this extra meat for future consumption is not practical, because of decay and the energetic costs of transport for semi-nomadic people. So hunter-gatherers store this extra food in the form of social obligations (Cashdan, 1982). They buffer high variance in foraging success by pooling their risk (Cashdan, 1982; Kaplan & Hill, 1985). My family eats today, even though my hunt failed and yours succeeded, because you share your catch with me; tomorrow, when you fail and I succeed, your family still eats because I share with you. Honey is shared widely for the same reason: The payoff is large, but there is high variance due to luck in finding and acquiring it.

Gathered plant foods are different. Their caloric density is usually lower than for meat and honey, there is little variance in gathering success, and what variance exists is largely due to effort expended, not luck. Under these circumstances, risk-pooling offers no advantages. These low risk-low payoff foods are the ones shared within the family or with specific reciprocity partners (Kaplan & Hill, 1985).
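The logic of risk pooling is easy to verify numerically. In this minimal sketch (our illustration; the 40% success rate is an assumption, not Ache data), pooling leaves average intake unchanged while sharply cutting the number of days a family goes without food:

```python
import random

random.seed(2)

# Toy risk-pooling model (illustrative parameters, not Ache data).
P_SUCCESS, DAYS = 0.4, 100_000   # each hunter succeeds on 40% of days

solo_hungry = pooled_hungry = 0
for _ in range(DAYS):
    mine = random.random() < P_SUCCESS
    yours = random.random() < P_SUCCESS
    if not mine:
        solo_hungry += 1          # no sharing: my family eats nothing today
    if not mine and not yours:
        pooled_hungry += 1        # sharing: hungry only if we BOTH fail
    # Pooling does not change mean intake (we split the combined catch);
    # it only smooths the variance across days.

print(f"hungry days, solo:   {solo_hungry / DAYS:.0%}")    # ~60%
print(f"hungry days, pooled: {pooled_hungry / DAYS:.0%}")  # ~36%
```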
Evoked culture or cultural transmission?

This pattern—band-level sharing for high risk-high payoff foods, reciprocal sharing for low risk-low payoff foods—is typical for hunter-gatherers. But why? Is it because they have inherited packages of cultural norms that gradually accumulated over time because they worked well in this ecology? Or does this cultural pattern exist because our minds have (at least) two different evolved programs, each equipped with different sharing rules? In this view, cues of high variance activate different sharing rules than cues of low variance. When variance is high, this triggers an evolved program that generates the intuition that the lucky should share with the unlucky; when variance is low, the evolved programs activated generate the intuition that you have no obligation to share outside the family, except, perhaps, with specific social exchange partners. This second possibility is what Tooby and Cosmides (1992) call “evoked” culture: the cultural pattern is evoked by the situation—that is, it emerges because the mind is designed to activate different programs in response to cues of different ancestral situations.

An evoked culture explanation predicts that cues of high versus low variance will activate different sharing rules in humans everywhere, not just among hunter-gatherers. Explanations that invoke the accumulation of norms by success-biased cultural transmission do not predict this cue-activated pattern (e.g., Henrich, 2015; Richerson & Boyd, 2006). When history and ecology differ across cultures, success-biased cultural transmission should create different packages of norms, each appropriate to the local culture. In advanced market economies, we forage at grocery stores where variance due to luck is low, we live in family units rather than bands, and when we buy food, most of it is shared within the family. We are WEIRD people, who should have different sharing norms than hunter-gatherers. Cultural evolutionists have argued that WEIRD people are unusual compared to the rest of the species (Henrich et al., 2010) and that our sharing norms are very different from those found in small-scale societies (Henrich et al., 2005).

Yet WEIRD people respond to cues of high versus low variance just like hunter-gatherers do. For example, holding effort expended constant, Japanese and American college students were more willing to share money acquired via a high variance process than a low variance one; moreover, the effect of high variance was independent of individual differences in the students’ ideologies about the just distribution of resources (Kameda et al., 2002). An ingenious test of the evoked culture prediction was conducted by Kaplan et al. (2012). By creating a foraging game in a virtual world, in which each (anonymous) subject has an avatar with a “food pot,” Kaplan and colleagues showed that WEIRD students from southern California immediately detect which of two foraging patches has high versus low variance, and respond like hunter-gatherers. When they successfully gather food, they can choose to deposit the calories in their own pot or in pots of other avatars (calories in your pot determine earnings). When subjects foraged and caught food on the low variance patch, they did not share it—they usually put all the calories in their own pot. But when they foraged on the high variance patch, lucky subjects shared with unlucky ones by putting calories from their catch into the pots of other avatars.
Experiencing the high variance patch elicited more sharing from the first round of the game.

Notes

6. … But every time a resource or unit of energy expended on help is almost as valuable to self as to the sibling, this design will help. To the extent that you and your sibling are living in the same environment and have similar needs, there will be many cases like this. In each of these cases, your helping will cause a net decrease in copies of the mutation in the gene pool.

7. A common misunderstanding of Hamilton’s rule is that individuals are designed to help full siblings because they “share half their genes” (known as the “fraction of genome fallacy”; Dawkins, 1979). In Hamilton’s rule, r_self,kin does not refer to the fraction of the entire genome shared by self and kin. It is the probability that self and a given kin member share a given mutation (one producing a Hamiltonian design for helping), regardless of how many other genes they may share in common. Although it is true that full siblings in a diploid species share half of their (nuclear) genes on average, with some sharing more and others less, that fact is irrelevant to the spread of a Hamiltonian mutation. The fraction of genome fallacy has led to incorrect inferences: e.g., kin selection does not imply that individuals will be more inclined to help people who share a larger fraction of their genome by virtue of ethnicity or any other factor. That is, kin selection cannot explain ethnocentrism or any other population-based social preference.

8. Random mutations are always occurring at low levels. By “universal,” biologists mean that the design develops in everyone, except for the minute number of cases in which a mutation disrupts its development. Population genetic models suggest that disorders occurring at a rate of 1 in 1,000 most likely result from random mutations rather than being side effects of adaptations that are currently under selection.

9. When every member of the population has the Hamiltonian mutation, why don’t organisms start indiscriminately helping non-kin (who, at this point, have the same design)? Remember that the Hamiltonian mutation codes for a motivation to help kin when r_self,kin × B_kin > C_self; it does not code for indiscriminate helping. A new mutation could arise that suppresses or alters the Hamiltonian one, producing individuals who help others regardless of kinship whenever B_other > C_self—Design #4. But the same logic applies: Given opportunities to help where B_sib > C_self > ½ B_sib, Design #4 helps siblings, thereby reducing its own replication relative to the Hamiltonian design, for the same reason that Design #2 does. When Design #4, which helps when B_other > C_self, helps non-kin—who have the Hamiltonian design—it reduces its own replication relative to the Hamiltonian design. (The bookkeeping behind this argument is sketched in code at the end of these notes.)
10. Being told by others is not a solution: this just pushes the problem one step back. The teller would have to know who is a genetic relative, how close a relative they are, and that it matters whether an individual is related by genes, marriage, or affinity. Even worse, misrepresenting this information is sometimes in the interest of the teller (e.g., mothers should want their children—whether full or half-sibs—to be more altruistic toward one another than they should want to be); see Trivers (1974) on parent-offspring conflict and the section below it on family dynamics. Other problems arise because kin terms are often used metaphorically to convey a close relationship (e.g., using “my brother!” when greeting a close friend) or to foster such relationships (e.g., a mother encouraging her child to address a close family friend as “Aunt Ellie,” or referring to a stepsibling as “your sister”).

11. When a lion eats a zebra, the zebra has increased the reproductive success of the lion and decreased its own reproductive success. But this effect was not by design (and, therefore, is not considered altruism in biology). Zebras have adaptations designed for escape, not for running into a lion’s mouth. Note that a design feature can be altruistic in the biological sense without involving intentions, knowledge, or even behavior. There are, for example, trees that respond to their leaves being eaten by releasing volatile chemicals that are sensed by neighboring trees; on sensing these chemicals, the neighboring trees produce more toxins that are distasteful to leaf-eating herbivores. Producing volatiles and releasing them is an altruistic design feature.

12. Kin also engage in social exchange—indeed, it is expected for resources or actions whose costs and benefits fall outside the window in which kin-selected adaptations would motivate unconditional (i.e., non-compensatory) helping.

13. The rate of conversion from payoffs (calories, status, favors) to offspring is determined by the modeler in evolutionary game theory and by nature in natural selection. The psychology of individual organisms does not convert payoffs in calories, status, favors, and so on into cognitively represented estimates of their effects on future reproduction. (It may sometimes look that way, however, because evolved mechanisms for estimating the value of calories, status, favors, and so on can be expected to respond to ancestrally reliable cues of health, ovulatory status, kinship, caloric value, and other factors that affected reproduction ancestrally (see §8, “Estimating and Representing Benefits and Costs”). But notice the importance of cues (compared to effects on future reproduction): people enjoy artificial sweeteners knowing they have no caloric benefit; they enjoy sex while using contraception; they preferentially help stepsiblings with whom they were raised and are disgusted at the prospect of sex with them; and so on.) Very little is known about the psychology by which individual organisms estimate, represent, and compare payoffs within domains (e.g., alternative foods, or the value of a unit of food to self vs. other) and across domains (e.g., allocating time to eating versus romantic opportunities; see Roney & Simmons, 2017). Are calories represented by a specialized currency, different from that used for romantic opportunities, with pairwise “exchange rates” between domain-specialized currencies?
Are payoffs in different domains translated into an internal lingua franca, a domain-general currency representing “satisfaction” or “utility”? These are interesting and important empirical questions that are unanswered at this time.

14. This may not be true for other species that engage in reciprocal behavior. Vampire bats, who transfer meals of foraged blood to one another, could have mechanisms specialized for representing volume of blood transferred; baboons could have mechanisms specialized for computing time spent grooming one another. Humans, by contrast, are capable of exchanging an open-ended set of tools, favors, and resources. For this reason, it was a prior prediction of the task analysis that algorithms for reasoning about social exchange in humans would extract an abstract representation of benefits and costs from concrete situations describing social exchange, and that procedures for detecting cheaters would operate on those representations (e.g., Cosmides, 1985; Cosmides & Tooby, 1989).

15. By concepts such as obligation and entitlement, we are not referring to the content of an obligation—the particular actions that people feel they are obligated or entitled to do vary hugely across cultures and history. Obligation and entitlement in the sense meant are concepts defined by their relationship to one another, to other inferences in the social exchange system, and to moral
emotions. For example, when agent 1 is entitled to receive X from agent 2, that implies that agent 2 is obligated to deliver X (research on social exchange shows that people spontaneously make this inference). It might also mean that if agent 1 takes X from agent 2, agent 2 will not punish agent 1 in response (a toy encoding of these inferential links appears at the end of these notes). The precise meaning of these evolved concepts is an empirical question; proposals about what they mean can be found in Cosmides (1985) and Cosmides and Tooby (1989, 2008a). Note that the concept of obligation used by cognitive adaptations for social exchange may not map onto colloquial concepts of obligation. Although the word “ought” is used in both circumstances, there are reasons (both theoretical and empirical) to expect the meaning of “ought” deployed by social contract algorithms to be different from the meaning of “ought” employed by a reasoning system specialized for interpreting and reasoning about precautionary rules (ones saying that a person “ought” to take a specific precaution when facing a particular hazard; Cosmides & Tooby, 2008b; Fiddick et al., 2000).

16. If two targets have been sorted into separate mental categories—male and female, for example—subjects will be more likely to make a within-category error than a between-category error (e.g., they will be more likely to misattribute something done by a woman to another woman than to a man). This pattern will emerge whether the subject is aware of classifying the targets or not.

17. For example, given that a man had failed to find food, subjects were just as likely to mistakenly attribute that event to a man who lost food as to one of the other men who failed to find food. That is, subjects were as likely to make a between-category error as a within-category error.
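The replication logic in notes 6–9 and the inferential structure in note 15 can both be made concrete. First, the bookkeeping behind Hamilton's rule: in the minimal sketch below (our stylized accounting, in which benefits and costs are assumed to convert directly into expected gene copies; this is not the authors' model), a helping design changes the expected number of copies of its own gene by r × B − C, so the Hamiltonian design declines exactly those cases in which Design #4 undermines itself:

```python
# Stylized bookkeeping for notes 6-9 (our illustration; benefits and
# costs are assumed to convert directly into expected gene copies).

def copy_change(r, benefit, cost):
    # Helping delivers `benefit` to a relative who carries the helper's
    # mutation with probability r, at a cost `cost` to the helper.
    return r * benefit - cost

def hamiltonian_helps(r, benefit, cost):
    return r * benefit > cost       # help only when copies increase

def design4_helps(r, benefit, cost):
    return benefit > cost           # help whenever B_other > C_self

# A case from note 9: B_sib > C_self > 1/2 B_sib, with r = 1/2 for full sibs.
r, B, C = 0.5, 3.0, 2.0
print(design4_helps(r, B, C))       # True: Design #4 helps here
print(hamiltonian_helps(r, B, C))   # False: the Hamiltonian design declines
print(copy_change(r, B, C))         # -0.5: helping here is selected against
```

Second, the relational concepts in note 15. The toy encoding below is our illustration; the predicate names are hypothetical, and nothing here is a claim about how the evolved system actually represents these concepts. It captures the inference people make spontaneously, namely that agent 1's entitlement to receive X from agent 2 implies agent 2's obligation to deliver X to agent 1:

```python
# Toy encoding of the inferential links in note 15 (hypothetical
# representation; the predicate names are ours).
from dataclasses import dataclass

@dataclass(frozen=True)
class Entitlement:
    holder: str      # agent entitled to receive...
    item: str        # ...this item...
    provider: str    # ...from this agent

def implied_obligations(entitlements):
    # Note 15: entitled(1, X, 2) implies obligated(2, X, 1).
    return {(e.provider, e.item, e.holder) for e in entitlements}

def punishment_due(taker, item, owner, entitlements):
    # Further candidate inference: taking what one is entitled to
    # does not license punishment by the provider.
    return Entitlement(taker, item, owner) not in entitlements

ents = {Entitlement("agent1", "X", "agent2")}
print(implied_obligations(ents))                      # {('agent2', 'X', 'agent1')}
print(punishment_due("agent1", "X", "agent2", ents))  # False: no punishment due
```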
References

Aarøe, L. and Petersen, M. B. (2013). “Hunger Games: Fluctuations in Blood Glucose Levels Influence Support for Social Welfare,” Psychological Science, 24 (12), 2550–2556.
———. (2014). “Crowding Out Culture: Scandinavians and Americans Agree on Social Welfare in the Face of Deservingness Cues,” Journal of Politics, 76 (3), 684–697.
Adams, M. and Neel, J. (1967). “Children of Incest,” Pediatrics, 40, 55–62.
André, J-B. and Baumard, N. (2011). “The Evolution of Fairness in a Biological Market,” Evolution, 65, 1447–1456.
Aristotle. (2005/350 BCE). Nicomachean Ethics, trans. W. D. Ross. Digireads.com.
Audi, R. (2005). The Good in the Right: A Theory of Intuition and Intrinsic Value. Princeton: Princeton University Press.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Axelrod, R. and Hamilton, W. (1981). “The Evolution of Cooperation,” Science, 211, 1390–1396.
Barclay, P. (2013). “Strategies for Cooperation in Biological Markets, Especially for Humans,” Evolution & Human Behavior, 34 (3), 164–175.
———. (2015). “Reputation,” in D. Buss (ed.), Handbook of Evolutionary Psychology (2nd ed.). Hoboken, NJ: John Wiley & Sons.
———. (2016). “Biological Markets and the Effects of Partner Choice on Cooperation and Friendship,” Current Opinion in Psychology, 7, 33–38.
Barclay, P. and Willer, R. (2007). “Partner Choice Creates Competitive Altruism in Humans,” Proceedings of the Royal Society, London B, 274, 749–753.
Baumard, N., André, J-B. and Sperber, D. (2013). “A Mutualistic Approach to Morality: The Evolution of Fairness by Partner Choice,” Behavioral & Brain Sciences, 36, 59–78.
Baumard, N. and Boyer, P. (2013). “Explaining Moral Religions,” Trends in Cognitive Science, 17 (6), 272–280.
Baumard, N., Mascaro, O. and Chevallier, C. (2012). “Preschoolers Are Able to Take Merit into Account When Distributing Goods,” Developmental Psychology, 48 (2), 492–498.
Bittles, A. and Neel, J. (1994). “The Costs of Human Inbreeding and Their Implications for Variation at the DNA Level,” Nature Genetics, 8, 117–121.
Bliege Bird, R. and Power, E. (2015). “Prosocial Signaling and Cooperation Among Martu Hunters,” Evolution and Human Behavior, 36 (5), 389–397.
Bloch, M. and Sperber, D. (2002). “Kinship and Evolved Psychological Dispositions,” Current Anthropology, 43 (5), 723–748.
Boehm, C. (2001). Hierarchy in the Forest: The Evolution of Egalitarian Behavior. Cambridge, MA: Harvard University Press.
———. (2012). Moral Origins: The Evolution of Virtue, Altruism, and Shame. New York: Basic Books.
Bowles, S. and Gintis, H. (2013). A Cooperative Species: Human Reciprocity and Its Evolution. Princeton: Princeton University Press.
Boyd, R., Gintis, H., Bowles, S. and Richerson, P. (2003). “The Evolution of Altruistic Punishment,” Proceedings of the National Academy of Sciences of the United States of America, 100, 3531–3535.
Boyd, R. and Richerson, P. (2009). “Culture and the Evolution of Human Cooperation,” Philosophical Transactions of the Royal Society, London, Biological Sciences, 364 (1533), 3281–3288.
Boyer, P. (2001). Religion Explained: The Evolutionary Origins of Religious Thought. New York: Basic Books.
———. (2018). Minds Make Societies: How Cognition Explains the World Humans Create. New Haven, CT: Yale University Press.
Boyer, P. and Petersen, M. (2011). “The Naturalness of (Many) Social Institutions,” Journal of Institutional Economics, 8 (1), 1–25.
Bugental, D. B. (2000). “Acquisition of the Algorithms of Social Life: A Domain-Based Approach,” Psychological Bulletin, 126, 187–219.
Burnstein, E., Crandall, C. and Kitayama, S. (1994). “Some Neo-Darwinian Decision Rules for Altruism: Weighting Cues for Inclusive Fitness as a Function of the Biological Importance of the Decision,” Journal of Personality and Social Psychology, 67, 773–789.
Buss, D. (ed.). (2015). Handbook of Evolutionary Psychology (2nd ed., Vols. 1 and 2). Hoboken, NJ: John Wiley & Sons.
Buss, D., Larsen, R., Westen, D. and Semmelroth, J. (1992). “Sex Differences in Jealousy: Evolution, Physiology, and Psychology,” Psychological Science, 3 (4), 251–255.
Cashdan, E. (1982). “Egalitarianism Among Hunters and Gatherers,” American Anthropologist, 84, 116–120.
Chagnon, N. (1988). “Life Histories, Blood Revenge, and Warfare in a Tribal Population,” Science, 239, 985–992.
———. (1992). Yanomamö—The Last Days of Eden. New York: Harcourt, Brace, Jovanovich.
Charlesworth, B. and Charlesworth, D. (1999). “The Genetic Basis of Inbreeding Depression,” Genetics Research, 74, 329–340.
Choi, J. and Bowles, S. (2007). “The Coevolution of Parochial Altruism and War,” Science, 318, 636–640.
Cosmides, L. (1985). “Deduction or Darwinian Algorithms? An Explanation of the ‘Elusive’ Content Effect on the Wason Selection Task,” Doctoral dissertation, Department of Psychology, Harvard University, University Microfilms #86-02206.
———. (1989). “The Logic of Social Exchange: Has Natural Selection Shaped How Humans Reason? Studies with the Wason Selection Task,” Cognition, 31, 187–276.
Cosmides, L. and Tooby, J. (1987). “From Evolution to Behavior: Evolutionary Psychology as the Missing Link,” in J. Dupre (ed.), The Latest on the Best: Essays on Evolution and Optimality. Cambridge, MA: MIT Press.
———. (1989). “Evolutionary Psychology and the Generation of Culture, Part II. Case Study: A Computational Theory of Social Exchange,” Ethology & Sociobiology, 10, 51–97.
———. (1992). “Cognitive Adaptations for Social Exchange,” in J. Barkow, L. Cosmides and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.
———. (1994). “Origins of Domain-Specificity: The Evolution of Functional Organization,” in L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain-Specificity in Cognition and Culture. New York: Cambridge University Press.
———. (2006). “Evolutionary Psychology, Moral Heuristics, and the Law,” in G. Gigerenzer and C. Engel (eds.), Heuristics and the Law (Dahlem Workshop Report 94). Cambridge, MA: MIT Press.
———. (2008a). “Can a General Deontic Logic Capture the Facts of Human Moral Reasoning? How the Mind Interprets Social Exchange Rules and Detects Cheaters,” in W. Sinnott-Armstrong (ed.), Moral Psychology. Cambridge, MA: MIT Press, 53–119.
———. (2008b). “Can Evolutionary Psychology Assist Logicians? A Reply to Mallon,” in W. Sinnott-Armstrong (ed.), Moral Psychology. Cambridge, MA: MIT Press, 131–136.
———. (2008c). “When Falsification Strikes: A Reply to Fodor,” in W. Sinnott-Armstrong (ed.), Moral Psychology. Cambridge, MA: MIT Press, 143–164.
———. (2013). “Evolutionary Psychology: New Perspectives on Cognition and Motivation,” Annual Review of Psychology, 64, 201–229.
———. (2015). “Adaptations for Reasoning About Social Exchange,” in D. Buss (ed.), The Handbook of Evolutionary Psychology (2nd ed.), Volume 2: Integrations. Hoboken, NJ: John Wiley & Sons, 625–668.
Cosmides, L., Barrett, H. C. and Tooby, J. (2010). “Adaptive Specializations, Social Exchange, and the Evolution of Human Intelligence,” Proceedings of the National Academy of Sciences USA, 107, 9007–9014.
Curry, O. (2015). “Morality as Cooperation: A Problem-Centred Approach,” in T. Shackelford and R. Hansen (eds.), The Evolution of Morality. New York: Springer, 27–51.
Dawkins, R. (1979). “Twelve Misunderstandings of Kin Selection,” Ethology, 51 (92), 184–200.
Debove, S., André, J-B. and Baumard, N. (2015). “Partner Choice Creates Fairness in Humans,” Proceedings of the Royal Society B, 282, 392–399.
Debove, S., Baumard, N. and André, J-B. (2015). “Evolution of Equal Division Among Unequal Partners,” Evolution, 69, 561–569.
Delton, A. W. and Cimino, A. (2010). “Exploring the Evolved Concept of Newcomer: Experimental Tests of a Cognitive Model,” Evolutionary Psychology, 8 (2), 317–335.
Delton, A. W., Cosmides, L., Guemo, M., Robertson, T. E. and Tooby, J. (2012). “The Psychosemantics of Free Riding: Dissecting the Architecture of a Moral Concept,” Journal of Personality and Social Psychology, 102 (6), 1252–1270.
Delton, A. W., Krasnow, M. M., Cosmides, L. and Tooby, J. (2011). “Evolution of Direct Reciprocity Under Uncertainty Can Explain Human Generosity in One-Shot Encounters,” Proceedings of the National Academy of Sciences, 108, 13335–13340.
Delton, A. W., Nemirow, J., Robertson, T. E., Cimino, A. and Cosmides, L. (2013). “Merely Opting Out of a Public Good Is Moralized: An Error Management Approach to Cooperation,” Journal of Personality and Social Psychology, 105 (4), 621–638.
DeScioli, P. and Kurzban, R. (2013). “A Solution to the Mysteries of Morality,” Psychological Bulletin, 139 (2), 477–496.
Eisenbruch, A., Grillot, R., Maestripieri, D. and Roney, J. (2016). “Evidence of Partner Choice Heuristics in a One-Shot Bargaining Game,” Evolution and Human Behavior, 37, 429–439.
Everett, J., Pizarro, D. and Crockett, M. (2016). “Inference of Trustworthiness from Intuitive Moral Judgments,” Journal of Experimental Psychology: General, 145 (6), 772–787.
Fehl, K., van der Post, D. and Semmann, D. (2011). “Co-Evolution of Behaviour and Social Network Structure Promotes Cooperation,” Ecology Letters, 14, 546–551.
Fehr, E. and Gächter, S. (2000). “Cooperation and Punishment in Public Goods Experiments,” American Economic Review, 90, 980–994.
Fehrler, S. and Przepiorka, W. (2013). “Charitable Giving as a Signal of Trustworthiness: Disentangling the Signaling Benefits of Altruistic Acts,” Evolution and Human Behavior, 34, 139–145.
Fessler, D. and Navarrete, C. (2004). “Third-Party Attitudes Toward Sibling Incest: Evidence for Westermarck’s Hypotheses,” Evolution and Human Behavior, 25, 277–294.
Fiddick, L., Cosmides, L. and Tooby, J. (2000). “No Interpretation Without Representation: The Role of Domain-Specific Representations and Inferences in the Wason Selection Task,” Cognition, 77, 1–79.
Fiske, A. (1991). Structures of Social Life: The Four Elementary Forms of Human Relationships: Communal Sharing, Authority Ranking, Equality Matching, Market Pricing. New York: Free Press.
Fortes, M. (1970). Kinship and the Social Order: The Legacy of Lewis Henry Morgan (Morgan Lectures 1963). Oxford: Taylor & Francis.
Fox, R. (1965/1984). The Red Lamp of Incest. Notre Dame, IN: The University of Notre Dame Press.
Gigerenzer, G. and Hug, K. (1992). “Domain-Specific Reasoning: Social Contracts, Cheating, and Perspective Change,” Cognition, 43 (2), 121–171.
Gill, M. and Nichols, S. (2008). “Sentimentalist Pluralism: Moral Psychology and Philosophical Ethics,” Philosophical Issues, 18, 143–163.
Gintis, H. (2000). “Strong Reciprocity and Human Sociality,” Journal of Theoretical Biology, 206, 169–179.
Gintis, H., Bowles, S., Boyd, R. and Fehr, E. (2003). “Explaining Altruistic Behavior in Humans,” Evolution and Human Behavior, 24, 153–172.
Goldberg, S., Muir, R. and Kerr, J. (eds.). (2000). Attachment Theory: Social, Developmental, and Clinical Perspectives. London: The Analytic Press.
Greene, J. (2008). “The Secret Joke of Kant’s Soul,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, 35–79.
Güth, W. and Kocher, M. G. (2014). “More Than Thirty Years of Ultimatum Bargaining Experiments: Motives, Variations, and a Survey of the Recent Literature,” Journal of Economic Behavior & Organization, 108, 396–409.
Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Vintage.
Hamann, K., Bender, J. and Tomasello, M. (2014). “Meritocratic Sharing Is Based on Collaboration in 3-Year-Olds,” Developmental Psychology, 50 (1), 121–128.
Hamilton, W. (1964). “The Genetical Evolution of Social Behavior,” Journal of Theoretical Biology, 7, 1–16.
Henrich, J. (2015). The Secret of Our Success. Princeton: Princeton University Press.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M., Barr, A., Ensminger, J., Smith Henrich, N., Hill, K., Gil-White, F., Gurven, M., Marlowe, F. W., Patton, J. Q. and Tracer, D. (2005). “Economic Man in Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies,” Behavioral and Brain Sciences, 28 (6), 795–855.
Henrich, J., Boyd, R. and Richerson, P. (2012). “The Puzzle of Monogamous Marriage,” Philosophical Transactions of the Royal Society B: Biological Sciences, 367 (1589), 657–669.
Henrich, J., Ensminger, J., McElreath, R., Barr, A., Barrett, C., Bolyanatz, A., Cardenas, J., Gurven, M., Gwako, E., Henrich, N., Lesorogol, C., Marlowe, F., Tracer, D. and Ziker, J. (2010). “Markets, Religion, Community Size, and the Evolution of Fairness and Punishment,” Science, 327, 1480–1484.
Henrich, J., Heine, S. and Norenzayan, A. (2010). “The Weirdest People in the World?” Behavioral and Brain Sciences, 33 (2/3), 1–75.
Henrich, J. and Henrich, N. (2007). Why Humans Cooperate: A Cultural and Evolutionary Explanation. New York: Oxford University Press.
Higham, J. (2014). “How Does Honest Costly Signaling Work?” Behavioral Ecology, 25, 8–11.
Hoffman, E., McCabe, K. and Smith, V. (1998). “Behavioral Foundations of Reciprocity: Experimental Economics and Evolutionary Psychology,” Economic Inquiry, 36, 335–352.
Huemer, M. (2005). Ethical Intuitionism. New York: Palgrave Macmillan.
Hursthouse, R. and Pettigrove, G. (2016). “Virtue Ethics,” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/ethics-virtue/
Kameda, T., Takezawa, M., Tindale, R. and Smith, C. (2002). “Social Sharing and Risk Reduction: Exploring a Computational Algorithm for the Psychology of Windfall Gains,” Evolution and Human Behavior, 23, 11–33.
Kanngiesser, P. and Warneken, F. (2012). “Young Children Consider Merit When Sharing Resources with Others,” PLoS One, 7, e43979.
Kaplan, H. and Hill, K. (1985). “Food Sharing Among Ache Foragers: Tests of Explanatory Hypotheses,” Current Anthropology, 26 (2), 223–246.
Kaplan, H., Schniter, E., Smith, V. and Wilson, B. (2012). “Risk and the Evolution of Human Exchange,” Proceedings of the Royal Society, B, 279 (1740), 2930–2935.
Keeley, L. (1996). War Before Civilization: The Myth of the Peaceful Savage. New York: Oxford University Press.
Kelly, R. (1995). The Foraging Spectrum: Diversity in Hunter-Gatherer Lifeways. Washington, DC: Smithsonian Institution Press.
Kraft-Todd, G., Yoeli, E., Bhanot, S. and Rand, D. (2015). “Promoting Cooperation in the Field,” Current Opinion in Behavioral Sciences, 3, 96–101.
Krasnow, M. M., Cosmides, L., Pedersen, E. and Tooby, J. (2012). “What Are Punishment and Reputation for?” PLoS One, 7 (9), e45662.
Krasnow, M. M., Delton, A. W., Cosmides, L. and Tooby, J. (2015). “Group Cooperation Without Group Selection: Modest Punishment Can Recruit Much Cooperation,” PLoS One, 10 (4), e0124561.
Krasnow, M. M., Delton, A. W., Tooby, J. and Cosmides, L. (2013). “Meeting Now Suggests We Will Meet Again: Implications for Debates on the Evolution of Cooperation,” Nature Scientific Reports, 3, 1747. doi:10.1038/srep01747.
Kurzban, R. (2012). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton: Princeton University Press.
Kurzban, R., Dukes, A. and Weeden, J. (2010). “Sex, Drugs, and Moral Goals: Reproductive Strategies and Views About Recreational Drugs,” Proceedings of the Royal Society of London, Series B: Biological Sciences, 277, 3501–3508.
Kurzban, R., Tooby, J. and Cosmides, L. (2001). “Can Race Be Erased? Coalitional Computation and Social Categorization,” Proceedings of the National Academy of Sciences USA, 98 (26), 15387–15392.
Leimgruber, K., Shaw, A., Santos, L. and Olson, K. (2012). “Young Children Are More Generous When Others Are Aware of Their Actions,” PLoS One, 7 (10), e48292.
Lewis, D., Al-Shawaf, L., Conroy-Beam, D., Asao, K. and Buss, D. (2017). “Evolutionary Psychology: A How-to Guide,” American Psychologist, 72 (4), 353–373.
Lieberman, D. and Lobel, T. (2012). “Kinship on the Kibbutz: Coresidence Duration Predicts Altruism, Personal Sexual Aversions and Moral Attitudes Among Communally Reared Peers,” Evolution and Human Behavior, 33, 26–34.
Lieberman, D. and Patrick, C. (2018). Objection: Disgust, Morality and the Law. New York: Oxford University Press.
Lieberman, D., Tooby, J. and Cosmides, L. (2003). “Does Morality Have a Biological Basis? An Empirical Test of the Factors Governing Moral Sentiments Relating to Incest,” Proceedings of the Royal Society London (Biological Sciences), 270 (1517), 819–826.
———. (2007). “The Architecture of Human Kin Detection,” Nature, 445, 727–731.
Liénard, P., Chevallier, C., Mascaro, O., Kiurad, P. and Baumard, N. (2013). “Early Understanding of Merit in Turkana Children,” Journal of Cognition and Culture, 13, 57–66.
Lim, J. (2010). “Welfare Tradeoff Ratios and Emotions: Psychological Foundations of Human Reciprocity,” Doctoral dissertation, Department of Anthropology, University of California, Santa Barbara. UMI Number: 3505288.
List, J. (2007). “On the Interpretation of Giving in Dictator Games,” Journal of Political Economy, 115 (3), 482–493.
Macfarlan, S., Walker, R., Flinn, M. and Chagnon, N. (2014). “Lethal Coalitionary Aggression and Long-Term Alliance Formation Among Yanomamö Men,” Proceedings of the National Academy of Sciences USA, 111 (47), 16662–16669.
Mackie, D., Smith, E. and Ray, D. (2008). “Intergroup Emotions and Intergroup Relations,” Personality and Social Psychology Compass, 2, 1866–1880.
Masclet, D., Noussair, C., Tucker, S. and Villeval, M. C. (2003). “Monetary and Nonmonetary Punishment in the Voluntary Contributions Mechanism,” American Economic Review, 93, 366–380.
Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge: Cambridge University Press.
McCloskey, D. (2006). The Bourgeois Virtues. Chicago: University of Chicago Press.
McDonald, M., Navarrete, C. and van Vugt, M. (2012). “Evolution and the Psychology of Intergroup Conflict: The Male Warrior Hypothesis,” Philosophical Transactions of the Royal Society, B, 367, 670–679.
McElreath, R. and Boyd, R. (2006). Mathematical Models of Social Evolution: A Guide for the Perplexed. Chicago: University of Chicago Press.
McNamara, J., Barta, Z., Fromhage, L. and Houston, A. (2008). “The Coevolution of Choosiness and Cooperation,” Nature, 451, 189–192.
Noë, R. and Hammerstein, P. (1994). “Biological Markets: Supply and Demand Determine the Effect of Partner Choice on Cooperation, Mutualism, and Mating,” Behavioral Ecology and Sociobiology, 35, 1–11.
———. (1995). “Biological Markets,” Trends in Ecology & Evolution, 10, 336–339.
Olson, M. (1965). The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.
Oxoby, R. and Spraggon, J. (2008). “Mine and Yours: Property Rights in Dictator Games,” Journal of Economic Behavior & Organization, 65 (3–4), 703–713.
Panchanathan, K. and Boyd, R. (2003). “A Tale of Two Defectors: The Importance of Standing for Evolution of Indirect Reciprocity,” Journal of Theoretical Biology, 224, 115–126.
Petersen, M. (2012). “Social Welfare as Small-Scale Help: Evolutionary Psychology and the Deservingness Heuristic,” American Journal of Political Science, 56 (1), 1–16.
Pietraszewski, D., Cosmides, L. and Tooby, J. (2014). “The Content of Our Cooperation, Not the Color of Our Skin: Alliance Detection Regulates Categorization by Coalition and Race, but Not Sex,” PLoS One, 9 (2), e88534.
Pinker, S. (2011). The Better Angels of Our Nature: Why Violence Has Declined. New York: Penguin Classics.
Price, M., Cosmides, L. and Tooby, J. (2002). “Punitive Sentiment as an Anti-Free Rider Psychological Device,” Evolution and Human Behavior, 23, 203–231.
Rai, T. and Fiske, A. (2011). “Moral Psychology Is Relationship Regulation: Moral Motives for Unity, Hierarchy, Equality, and Proportionality,” Psychological Review, 118 (1), 57–75.
Raihani, N. and Barclay, P. (2016). “Exploring the Trade-off Between Quality and Fairness in Human Partner Choice,” Royal Society Open Science, 3, 160510–160516.
Rand, D., Arbesman, S. and Christakis, N. (2011). “Dynamic Social Networks Promote Cooperation in Experiments with Humans,” Proceedings of the National Academy of Sciences USA, 108, 19193–19198.
Richerson, P. and Boyd, R. (2006). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Roney, J. and Simmons, Z. (2017). “Ovarian Hormone Fluctuations Predict Within-Cycle Shifts in Women’s Food Intake,” Hormones & Behavior, 90, 8–14.
Ross, W. D. (1930). The Right and the Good. Oxford: Oxford University Press.
Runes, D. (1983). Dictionary of Philosophy. Philosophical Library, 338.
Schino, G. and Aureli, F. (2017). “Reciprocity in Group Living Animals: Partner Control Versus Partner Choice,” Biological Reviews, 92, 665–672.
Seemanová, E. (1971). “A Study of Children of Incestuous Matings,” Human Heredity, 21 (2), 108–128.
Sell, A., Sznycer, D., Al-Shawaf, L., Lim, J., Krauss, A., Feldman, A., Rascanu, R., Sugiyama, L., Cosmides, L. and Tooby, J. (2017). “The Grammar of Anger: Mapping the Computational Architecture of a Recalibrational Emotion,” Cognition, 168, 110–128.
Sell, A., Tooby, J. and Cosmides, L. (2009). “Formidability and the Logic of Human Anger,” Proceedings of the National Academy of Sciences, 106 (35), 15073–15078.
Shaw, A., Montinari, N., Piovesan, M., Olson, K., Gino, F. and Norton, M. (2014). “Children Develop a Veil of Fairness,” Journal of Experimental Psychology: General, 143, 363–375.
Shaw, A. and Olson, K. (2012). “Children Discard a Resource to Avoid Inequity,” Journal of Experimental Psychology: General, 141 (2), 382–395.
Shepher, J. (1983). Incest: A Biosocial View. New York: Academic Press.
Sidanius, J. and Pratto, F. (1999). Social Dominance: An Intergroup Theory of Hierarchy and Oppression. New York: Cambridge University Press.
Singer, P. (2005). “Ethics and Intuitions,” Journal of Ethics, 9 (3–4), 331–352.
Smith, E. and Winterhalder, B. (1992). Evolutionary Ecology and Human Behavior. New York: Walter de Gruyter.
Smith, V. (2003). “Constructivist and Ecological Rationality in Economics” (Nobel Prize Lecture, December 8, 2002), American Economic Review, 93 (3), 465–508.
Stratton-Lake, P. (2016). “Intuitionism in Ethics,” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu/archives/win2016/entries/intuitionism-ethics/
Swanton, C. (2003). Virtue Ethics: A Pluralistic View. Oxford: Oxford University Press.
Sylwester, K. and Roberts, G. (2013). “Reputation-Based Partner Choice Is an Effective Alternative to Indirect Reciprocity in Solving Social Dilemmas,” Evolution and Human Behavior, 34, 201–206.
Sznycer, D., De Smet, D., Billingsley, J. and Lieberman, D. (2016). “Coresidence Duration and Cues of Maternal Investment Regulate Sibling Altruism Across Cultures,” Journal of Personality and Social Psychology, 111 (2), 159–177.
Sznycer, D., Tooby, J., Cosmides, L., Porat, R., Shalvi, S. and Halperin, E. (2016). “Shame Closely Tracks the Threat of Devaluation by Others, Even Across Cultures,” Proceedings of the National Academy of Sciences USA, 113 (10), 2625–2630.
Tooby, J. (1982). “Pathogens, Polymorphism, and the Evolution of Sex,” Journal of Theoretical Biology, 97, 557–576.
Tooby, J. and Cosmides, L. (1988). “The Evolution of War and Its Cognitive Foundations,” Institute for Evolutionary Studies Technical Report #88–81.
———. (1990a). “The Past Explains the Present: Emotional Adaptations and the Structure of Ancestral Environments,” Ethology and Sociobiology, 11, 375–424. doi:10.1016/0162-3095(90)90017-Z.
———. (1990b). “On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation,” Journal of Personality, 58, 17–67.
———. (1992). “The Psychological Foundations of Culture,” in J. Barkow, L. Cosmides and J. Tooby (eds.), The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press, 19–136.
———. (1996). “Friendship and the Banker’s Paradox: Other Pathways to the Evolution of Adaptations for Altruism,” in W. G. Runciman, J. Maynard Smith and R. I. M. Dunbar (eds.), Evolution of Social Behaviour Patterns in Primates and Man: Proceedings of the British Academy, 88, 119–143.
———. (2008). “The Evolutionary Psychology of the Emotions and Their Relationship to Internal Regulatory Variables,” in M. Lewis, J. Haviland-Jones and L. Feldman Barrett (eds.), Handbook of Emotions (3rd ed.). New York: Guilford Press.
———. (2010). “Groups in Mind: Coalitional Psychology and the Roots of War and Morality,” in Henrik Høgh-Olesen (ed.), Human Morality and Sociality: Evolutionary and Comparative Perspectives. London, UK: Palgrave Macmillan, 191–234.
Tooby, J., Cosmides, L. and Barrett, H. C. (2005). “Resolving the Debate on Innate Ideas: Learnability Constraints and the Evolved Interpenetration of Motivational and Conceptual Functions,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind: Structure and Content. New York: Oxford University Press.
Tooby, J., Cosmides, L. and Price, M. (2006). “Cognitive Adaptations for n-Person Exchange: The Evolutionary Roots of Organizational Behavior,” Managerial and Decision Economics, 27, 103–129.
Tooby, J., Cosmides, L., Sell, A., Lieberman, D. and Sznycer, D. (2008). “Internal Regulatory Variables and the Design of Human Motivation: A Computational and Evolutionary Approach,” in Andrew J. Elliot (ed.), Handbook of Approach and Avoidance Motivation. Mahwah, NJ: Lawrence Erlbaum Associates, 251–271.
Trivers, R. (1971). “The Evolution of Reciprocal Altruism,” The Quarterly Review of Biology, 46, 35–57.
———. (1974). “Parent-Offspring Conflict,” American Zoologist, 14 (1), 249–264.
Tybur, J., Lieberman, D., Kurzban, R. and DeScioli, P. (2013). “Disgust: Evolved Function and Structure,” Psychological Review, 120 (1), 65–84.
Van Vugt, M., De Cremer, D. and Janssen, D. (2007). “Gender Differences in Cooperation and Competition: The Male-Warrior Hypothesis,” Psychological Science, 18, 19–23.
von Rueden, C. (2014). “The Roots and Fruits of Social Status in Small-Scale Human Societies,” in J. Cheng, J. Tracy and C. Anderson (eds.), The Psychology of Social Status. New York: Springer, 179–200.
Westermarck, E. (1891/1921). The History of Human Marriage (5th ed.). London: Palgrave Macmillan.
Williams, G. C. (1966). Adaptation and Natural Selection: A Critique of Some Current Evolutionary Thought. Princeton: Princeton University Press.
Williams, G. C. and Williams, D. (1957). “Natural Selection of Individually Harmful Social Adaptations Among Sibs with Special Reference to Social Insects,” Evolution, 11, 32–39.
Wolf, A. (1995). Sexual Attraction and Childhood Association: A Chinese Brief for Edward Westermarck. Redwood City, CA: Stanford University Press.
Wrangham, R. (2019, in press). The Goodness Paradox: The Strange Relationship Between Virtue and Violence in Human Evolution. New York, NY: Pantheon.
Wrangham, R. and Peterson, D. (1997). Demonic Males: Apes and the Origins of Human Violence. New York: Houghton-Mifflin.
Wrangham, R., Wilson, M. and Muller, M. (2006). “Comparative Rates of Violence in Chimpanzees and Humans,” Primates, 47, 14–26.
Yamagishi, T. (1986). “The Provision of a Sanctioning System as a Public Good,” Journal of Personality and Social Psychology, 51, 110–116.
Further Readings

For conceptual foundations of evolutionary psychology, see John Tooby and Leda Cosmides, “The Psychological Foundations of Culture,” in Jerome Barkow, Leda Cosmides and John Tooby (eds.), The Adapted Mind (New York: Oxford University Press, 1992), and Part I, Volume 1 of David Buss (ed.), The Handbook of Evolutionary Psychology (2nd ed.) (Hoboken, NJ: John Wiley & Sons, 2016). Volumes 1 and 2 of this handbook present current research on many topics, including ones relevant to moral epistemology. For how fairness in cooperation can evolve via partner choice, see Nicolas Baumard, Jean-Baptiste André and Dan Sperber, “A Mutualistic Approach to Morality: The Evolution of Fairness by Partner Choice,” Behavioral & Brain Sciences, 36, 59–78, 2013. For an examination of evidence (including counterhypotheses) for a reasoning system specialized for social exchange and detecting cheaters, see Leda Cosmides and John Tooby, “Can a General Deontic Logic Capture the Facts of Human Moral Reasoning? How the Mind Interprets Social Exchange Rules and Detects Cheaters,” in Walter Sinnott-Armstrong (ed.), Moral Psychology (Cambridge, MA: MIT Press, 2008). Jonathan Haidt (2012) discusses multiple moral domains and the role of evolved moral intuitions versus reasoning in moral judgment in The Righteous Mind: Why Good People Are Divided by Politics and Religion (New York: Pantheon). For the link between disgust and morality, see Debra Lieberman and Carlton Patrick’s Objection: Disgust, Morality and the Law (New York: Oxford University Press, 2018). On moral norms and impartiality arising from coalitional psychology and alliance formation, see John Tooby and Leda Cosmides, “Groups in Mind: Coalitional Psychology and the Roots of War and Morality,” in Henrik Høgh-Olesen (ed.), Human Morality and Sociality: Evolutionary and Comparative Perspectives (London, UK: Palgrave Macmillan, 2010). For how evolutionary psychology relates to cultural transmission, including morality, religion, and social institutions, see Pascal Boyer’s Minds Make Societies: How Cognition Explains the World Humans Create (New Haven, CT: Yale University Press, 2018).
Related Chapters

Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies? Chapter 3 Normative Practices of Other Animals; Chapter 4 The Neurological Basis of Moral Psychology; Chapter 5 Moral Development in
Humans; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 12 Contemporary Moral Epistemology; Chapter 13 The Denial of Moral Knowledge; Chapter 14 Nihilism and the Epistemic Profile of Moral Judgment; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 22 Moral Knowledge as Know-How; Chapter 23 Group Moral Knowledge; Chapter 28 Decision Making Under Moral Uncertainty; Chapter 29 Public Policy and Philosophical Accounts of Desert.
SECTION II
Normative Theory
Normative theory in moral epistemology addresses questions about the nature, possibility and extent of moral knowledge.1 Among the most basic are the following:
• What are the conditions that must be met in order for an individual to know a moral claim?2
• Is such knowledge possible? If not, what are the sources of such moral skepticism?
• Since knowledge of moral claims requires that those claims be true, is there a single body of universally true moral claims—claims whose truth or correctness is not dependent on the perspectives of individuals or groups? Perhaps, instead, the correctness of a set of moral claims is relative to the perspectives of individuals or groups.
• If moral knowledge is possible, how is it possible? Is intuition a reliable source of moral knowledge? Is there such a thing as moral perception, and if so, is it a reliable source? Is the process of coming to know a moral claim a matter of reasoning?
• If either intuition or perception (or a combination of the two) is a reliable source of moral knowledge, do these provide a non-inferential foundation for inferring other moral claims? Perhaps, instead, if moral knowledge is possible, all such knowledge is inferential, forming a coherent body of knowledge without some moral knowledge being foundational.

These moral epistemological questions are often cast as questions about the nature, possibility and extent of personal moral knowledge—whether and how an individual can acquire moral knowledge, and its structure. Though individuals are influenced by religious moralities and political philosophies, personal moral beliefs are often conceptualized as pre-theoretical in nature. However, moral epistemology also addresses questions about moral theory acceptance and its relevance for everyday moral thought and action. Normative moral theories typically have a metaphysical component insofar as they attempt to set forth moral principles that “explain” the fundamental nature of right and wrong, good and bad, virtue and vice. But these same theories often contain a more practical component
insofar as the principles they posit are supposed to guide moral thinking and action. Here, the basic question is:
• How, if at all, can moral theories and the principles featured in them be known?
A question that connects the first battery of questions about personal moral knowledge with the one about moral theory is:
• How, if at all, can the principles of a normative moral theory figure usefully in everyday moral thought and action?
Finally, work in moral epistemology also includes reflection on the very enterprise of normative moral epistemology, and more broadly on the enterprise of metaethics. Here, the question is:
• What are the guiding goals, methods and data that should guide metaethical theorizing, including theorizing in normative moral epistemology?
We might single out this question as meta-metaethical, because it raises a higher-order question about the epistemology of metaethical theorizing.

All of these questions are addressed in the chapters that comprise this part of the handbook, which begins with three chapters covering the history of moral epistemology. In the West, it is standard to divide the history of philosophy into the periods of ancient, medieval, modern and contemporary philosophy. Mathias Perkams’ chapter, “Ancient and Medieval Moral Epistemology,” as the title indicates, covers the first two periods. The particular focus of Perkams’ chapter is the development of the role of reason in moral choice and action in philosophical theorizing about how to live. Although the chapter covers the thought of many ancient and medieval figures, Aristotle’s Nicomachean Ethics and, in particular, Aristotle’s conception of practical reason played an especially influential role in the history of moral epistemology during these periods. For Aristotle, all human beings by nature have as an end their own happiness (eudaimonia), the successful pursuit of which requires development of various virtues and the extinction of various vices. According to Perkams, it is part of Aristotle’s moral epistemology that correct moral principles are not discovered by a virtuous person but rather “these principles are either implied in the concept of happiness . . . or they are taught in a given society.” It is because of this that a virtuous person is properly viewed as a measure of right and wrong action.

The availability of the Nicomachean Ethics in the thirteenth century influenced Albert the Great and his pupil Thomas Aquinas in developing moral epistemologies characteristic of natural law theorizing about moral knowledge and right conduct. According to Aquinas’ view (which traces back to late antiquity in the work of the Stoics), universal moral rules are knowable by every individual through the use of reason. This view represents a rationalist moral epistemology. Perkams’ account of this rich history concludes with the rise of voluntarism in this same century, according to which valid moral rules, rather than being necessary and discoverable by reason, are simply those contingent rules that God happens to prescribe for human beings; they are contingent because God might have chosen completely different rules, which would have made alternative rules valid, regardless of their content. In challenging such
voluntarism, Gregory of Rimini (among the last great scholastic theologians of the Middle Ages) defended rationalist epistemology, writing that there are moral rules that would be valid “even if God—which is impossible—did not exist.” This line is quoted 300 years later by Hugo Grotius who, according to Kenneth R. Westphal, author of the next chapter, inaugurated modern moral philosophy.

Westphal’s chapter, “Modern Moral Philosophy,” covers the period of moral epistemology beginning in the seventeenth century on into the nineteenth century. As Westphal explains, various tumultuous changes in the cultural, intellectual, religious, and political environment in the early seventeenth century were accompanied by turmoil in moral philosophy as attempts were made to identify and justify fundamental moral laws or principles. During this period, morality was no longer considered by many moral philosophers as “consisting primarily in obedience to authority, whether of custom, tradition, governors, clergy or the Almighty.” Hence, there was a perceived need to “ground” moral laws or principles in something other than mere authority. Also, during this period, moral philosophy was conceived as including both ethics and justice (which many now consider a topic in political and legal philosophy), lending relevance to constitutional and civil law in theorizing about the identification and justification of moral laws.

As Westphal presents this history, the prospect of identifying and justifying moral laws confronted a number of problems, including the Pyrrhonian dilemma of the criterion of truth. In order to carry through with the project of identifying and justifying a substantive set of moral laws, one apparently needs some criterion for establishing the truth of moral laws. But then, for any proposed criterion, questions can be raised about its truth, and so it appears as if one is either confronted with a never-ending regress of criteria for selecting criteria, or, if one appeals to moral laws or other substantive moral claims in attempting to establish a criterion, one is confronted with the problem of vicious circularity. However, the prospect of carrying out the project was not hopeless, as Westphal sees it. He argues that David Hume inaugurated what may be called ‘natural law constructivism.’ Very roughly, the idea, as it emerged in Hume, is to appeal to objective facts about human agency and circumstances of action that are basic to the human condition as a nonarbitrary basis for identifying and justifying moral laws. Justifiable moral laws are thus ‘constructed’ from these elements. Aided by the works of Rousseau, Kant, and Hegel, natural law constructivists can arguably “ground” a determinate set of moral laws and do so in a way that avoids the Pyrrhonian dilemma as well as other problems and challenges explained in Westphal’s chapter.

Rationalism and constructivism in moral epistemology also figure importantly in moral philosophy beginning with Henry Sidgwick in the late nineteenth century. In his contribution to the volume, “Contemporary Moral Epistemology,” Robert Shaver, following the lead of John Rawls, considers the contrasting epistemological positions represented by rational intuitionism and constructivism—views that purport to avoid epistemological moral skepticism. A representative defense of rational intuitionism is to be found in the conception of self-evident moral principles Sidgwick articulates in The Methods of Ethics.
Sidgwick set forth four conditions that must be met to establish a moral proposition as self-evident—conditions that guard against its erroneous acceptance. (1) A proposition’s terms must be “clear and precise.” (2) The self-evidence of the principle must be ascertainable by “careful reflection.” (3) The proposition must be consistent with any other moral propositions taken to be self-evident. Finally, (4) there must be agreement over the proposition
among those “competent to judge” its truth. As Shaver points out, this fourth condition is often thought to raise a skeptical worry, since one might suspect that any candidate self-evident moral proposition will have “competent” detractors.

While Sidgwick’s moral epistemology features self-evident moral truths, constructivists equate the truth of a moral principle with its genesis in an appropriate procedure of moral justification. The constructivist does not conceptualize moral inquiry as an attempt to discover objective moral facts. These procedures might be said to “define” what justice and morality are, though not all constructivists view these definitions as self-evident in Sidgwick’s sense. Rawls’s Kantian constructivism proceeds from a conception of persons as free, equal, rational and reasonable and envisions an original position from which such persons deliberate about moral principles. For this reason, it might be said to presuppose various moral conceptions: i.e., conceptions of freedom, equality and reasonableness that are not defined by the procedure they are used to articulate. But Sharon Street’s more minimal version of constructivism begins simply with agents who value things, regardless of whether they are free, equal and reasonable. Street attempts to define a procedure with these minimal inputs whose output would constitute a body of “correct” moral beliefs. Shaver describes how Sidgwick, Rawls and Street all attempt to avoid moral skepticism, and briefly evaluates their prospects for doing so.

The topic of moral skepticism is the focus of Richard Joyce’s “The Denial of Moral Knowledge.” Joyce considers various types of challenge one might raise against having propositional moral knowledge. According to the standard justified true belief (JTB) analysis of knowledge, in order for an individual to have knowledge of some moral proposition P, (1) she must believe P, (2) P must be true, and (3) she must be justified in her belief. (Joyce puts aside questions about the sufficiency of the JTB analysis.) Challenging any one of these three components of moral knowledge leads to a form of moral skepticism.

According to moral error theory (the topic of the following chapter), moral beliefs (and assertions that express them) purport to attribute objective moral properties to items of moral evaluation; however, such properties are never instantiated, and so affirmative moral beliefs and assertions are never true. If moral beliefs are not true, then having moral knowledge is ruled out, given the traditional analysis of knowledge. Another route to moral skepticism is noncognitivism, according to which when someone asserts a sentence or endorses a sentence-like thought with moral content, such as “Lying is wrong,” the psychological state she is in is not really one of belief. If some form of noncognitivism is true, then the belief component of the JTB analysis of knowledge will not be met—we will not have any moral beliefs properly so called—and this would undermine the possibility of moral knowledge just as surely as would the absence of moral truth. A third route to moral skepticism involves arguing that the justification condition for our moral beliefs is never met. This line of argument typically proceeds by reflecting on distorting influences that would render a belief unjustified and then arguing that there is reason to think those influences are inevitably present when people form their moral beliefs.
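The three routes can be summarized in a single schema. The notation below is our gloss, not Joyce's own formalism: writing the JTB analysis as a conjunction makes plain that denying any one conjunct blocks moral knowledge.

```latex
% Our schematic gloss of the JTB analysis (not Joyce's notation):
% a subject S knows a moral proposition p just in case
K(S,p) \iff \underbrace{B(S,p)}_{\text{belief}}
       \land \underbrace{T(p)}_{\text{truth}}
       \land \underbrace{J(S,p)}_{\text{justification}}
```

Noncognitivism denies B(S,p), error theory denies T(p) for affirmative moral propositions, and the third route denies J(S,p).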
Joyce explores all of these paths to skepticism about moral knowledge and describes how theorists might try to avoid them.

In Chapter 14, “Nihilism and the Epistemic Profile of Moral Judgment,” Jonas Olson explores the first path to moral skepticism: moral nihilism, which Olson defines as the denial of moral facts or truths. One species of moral nihilism, and the topic of Olson’s chapter, is the moral error theory. According to error theorists, our moral judgments express genuine beliefs about purported moral facts, and when we assertively utter moral claims, we purport
to refer to these moral facts. Thus, for the moral nihilist, when someone utters the sentence, “Torture is wrong,” intending to sincerely express her corresponding belief, she purports to be referring to the moral fact that torture is wrong. Since, according to the nihilist, there are no such facts, moral beliefs are systematically mistaken and moral utterances are systematically untrue. In order to defend this metaethical position, error theorists face four central tasks. First, they must argue for their negative ontological thesis. Olson does this by claiming that moral beliefs (and utterances) purport to be about irreducibly normative facts that do not exist. But Olson is not advocating the abandonment of moral thought and language. So, his second task is to argue in favor of retaining moral thought and language, despite its systematic erroneousness, on account of the practical advantages it affords. But if moral thinking and speaking have these advantages, why suppose they embed systematic error in the first place? Olson’s third task is to address this worry. Of course, there are metaethical theories such as expressivism (or noncognitivism) that deny that ordinary moral thought and language purport to be about moral facts. And so a fourth task is to defend the claim that moral judgments are beliefs about putative moral facts and that moral utterances purport to refer to such facts.

Some theorists define “moral universalism” as the view that there is a single true or most justified morality. Moral relativism is opposed to moral universalism: it denies that there is a single true morality, allowing that two contradictory moral claims can both be correct or valid. But as David Wong points out in “Relativism and Pluralism in Moral Epistemology,” one should not suppose that moral relativism is committed to the extreme view that any morality is as true or justified as any other—that when it comes to morality “anything is permissible.” According to a modest form of moral relativism, which Wong defends, there can be more than one true or most justified morality, but some moralities are neither true nor maximally justified, so it is not the case that anything is (or might be) permissible. In addressing the dispute between moral universalists and modest moral relativists, Wong advocates an explanatory approach that focuses on the similarities and differences in moral beliefs and practices within and across groups and asks which of the two competing views offers the better explanation of such similarities and differences. Such an approach requires that one identify a set of existing moral frameworks or systems and determine the similarities and differences between them. This is in part a sociological project which requires that we identify various “lived” moralities and the role they play in the lives of those who embrace them. Wong thus adopts a naturalist approach to morality that “brings to bear the relevant human sciences to understanding what sort of thing a morality is and how it originated (without necessarily attempting to reduce that understanding to science).”

Most theorists agree that one of the primary functions of morality is fostering and regulating interpersonal cooperation, and that a related, but somewhat distinct, function is that of intrapersonal coherence and motivation. According to Wong, the values associated with a morality’s cooperative function favor building and maintaining interpersonal relationships, while its other functions (coherence and motivation) favor independence or individualism.
In support of his modest moral relativism, Wong points out that there is a variety of ways a morality might serve these two functions equally well, generating a plurality of distinct moralities which are equally good. Wong concludes the chapter by addressing two questions moral relativists must answer. First, who holds the moralities in question? Groups? Individuals? Or something else? Second, if we adopt moral relativism, what should we say about the meaning and logic of moral statements? Does a moral statement of the form “X
is wrong” made from the perspective of one morality differ in propositional content from the statement that “X is not wrong” made from the perspective of another morality? Or can there be true moral contradictions?

To motivate a rejection of moral skepticism, theorists must describe the possible sources of moral knowledge. As Matthew S. Bedke explains in Chapter 18, “Moral Intuition,” it is standard practice in moral theorizing to rely on noninferential moral judgments or experiences: i.e., moral intuitions. Is this a defensible practice? In Bedke’s hands, this question resolves into three more specific inquiries. First, what are moral intuitions? That is, how might we identify these states of mind for the purpose of inquiring into their epistemic status? Second, do various objections to the epistemic credibility of our moral intuitions provide us with reason to stop using these states of mind to articulate and defend those moral principles and claims we endorse? Finally, supposing that we may reasonably rely on moral intuitions in our moral thinking, what is the most plausible vindication of this practice? What conception of moral intuitions best supports our continued reliance upon them? Bedke notes that there is a growing consensus among philosophers that intuitions are “seeming states.” An example of a visual intuition is its seeming to one that there is a pool of water in the road ahead. An example of a moral seeming is its seeming to one that in the footbridge trolley scenario, it would be wrong to push the heavy guy onto the track to stop an out-of-control trolley that would otherwise run over five innocent people. A moral intuition, then, is a state of mind in which some moral proposition (the content of the state) seems true. The epistemic role that such intuitions play in moral theorizing, according to standard practice, is captured in what Bedke calls “the stepping stone principle”: For all propositions P, one’s belief that P, when based on its seeming to one that P, enjoys some degree of defeasible justification. Moral intuitions are thus one important input to moral theorizing, according to the standard view. After considering various qualifications and challenges to the epistemic status of moral intuitions, Bedke proceeds to consider theories about moral intuitions that purport to explain their a-theoretical features. Such views include appeals to self-evidence, intellectual perception and conceptual competence, and to dual-process views of cognitive processing.

One traditional source of human knowledge about the world is perception in its varying modalities. However, though the idea of moral perception has a storied history, the concept has not figured in much contemporary theorizing about moral justification and knowledge. This is because most writers in ethics have assumed that any moral knowledge there is must be general in content and so nonperceptual in nature. For instance, in coming to know that a particular brutal beating is wrong on the basis of seeing the event, one must see the beating but infer that it is wrong on the grounds that brutality is wrong in general. The thought, then, is that one does not perceive the wrongness of the beating but instead infers its wrongness from background moral knowledge or belief. In his contribution, “Moral Perception,” Robert Audi challenges this picture of moral knowledge by developing a conception of moral perception.
On Audi’s view, moral perception does not require some special mental faculty dedicated to perceiving moral properties. Rather, the view is that one can visually, auditorily, or tactilely perceive various moral properties. Audi grants that moral properties are not seen in precisely the same way that the colors and shapes of physical objects are perceived. However, he argues that a range of moral properties are nevertheless perceptible by those who have the concepts needed to discern them.
Moral properties are “consequential” in the sense that they are necessarily “grounded in” the “base properties” we denote when we do our best to describe the facts of the case under evaluation. One perceives a moral property by perceiving one or more of these descriptive base properties. Perhaps, for example, you perceive the wrongness of the beating described above by perceiving its brutality. Or, if “brutality” is too evaluative to “ground” the act’s wrongness, perhaps you perceive its wrongness by perceiving the extreme nature of the harm the beating inflicts. In developing this view, Audi explains how, in moral perception, one’s moral sensibility is phenomenally integrated with ordinary perception of the properties upon which the moral property is consequential, to yield what he refers to as an “integration theory of moral perception.” Audi argues further that moral perception can serve as a noninferential basis for coming to justifiably believe and perhaps know moral propositions about the content of one’s moral-perceptual experiences.

Do people typically form their moral beliefs on the basis of conscious reasoning? This descriptive question is the subject of “Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgment,” by Christian B. Miller. According to Miller, traditional rationalism (TR) is the view that moral judgments are typically formed on the basis of conscious moral reasoning in which one or more moral principles and associated reasons are operative. Psychologist Lawrence Kohlberg is associated with TR, which arguably dominated psychological theorizing about moral judgments in the 1960s and 1970s. Since that time, this view has been challenged by social intuitionism (SI), defended most prominently by Jonathan Haidt, according to which moral judgments are not typically the result of conscious reasoning, nor do moral principles play a significant role in their formation. Furthermore, on one understanding of SI, moral judgments are not formed on the basis of reasons at all. Rather, according to SI, moral judgment formation is typically a product of a positive or negative feeling that arises spontaneously in subjects upon witnessing or contemplating some morally significant event, which then causes subjects to form a moral judgment. Of course, when individuals are asked why they make the particular moral judgments they do, they often refer to reasons and associated principles in an attempt to explain their judgments. But, according to SI, this is largely a matter of people “confabulating” explanatory reasons, as the reasons we invoke to defend a judgment needn’t figure among those that led us to render it in the first place. Thus, SI is opposed to TR in all of its essential claims about moral judgment formation. A more recent version of rationalism, morphological rationalism (MR), defended by Terry Horgan and Mark Timmons, agrees with SI that people’s moral judgments are often spontaneous rather than the result of a process of conscious reasoning. However, MR is a version of rationalism because it claims that people’s spontaneous moral judgments are typically the result of principles and reasons that operate subconsciously. And because they are, people’s efforts to explain their judgments by citing reasons and principles operative in their formation are typically not confabulations.
Miller explains these three views, describes how problems with traditional rationalism opened the way for social intuitionism, and finally shows how morphological rationalism attempts to offer a middle way that accommodates what is plausible about TR and SI without being subject to the problems these views encounter. However, as Miller explains in closing, MR, because it is a newcomer and remains relatively underdeveloped, must overcome various challenges if it is to provide a satisfactory account of moral judgment formation.
Foundationalism and coherentism are competing accounts of the structure of justification and knowledge. If, for brevity, we consider just the issue of justification, then with regard to moral thought and discourse, coherentism is the view that the justification of a moral belief derives entirely from its coherence with other beliefs. One way to put this is to say that on a coherentist conception of the justification of moral beliefs, all justification is inferential. Foundationalism, by contrast, holds that some justified moral beliefs are noninferential, or “immediate”; they are adequately justified apart from their relation to other beliefs, even if coherence with other beliefs adds to their justification. These epistemically basic moral beliefs serve as the foundation for nonbasic moral beliefs that derive their justification from the foundation.

In his “Foundationalism and Coherentism in Moral Epistemology,” Noah Lemos explains these views and the challenges each faces. He argues that coherentist views must explain the concept of coherence and must formulate a version of the view that does not meet with counterexamples—tasks that have proven particularly troublesome. A central challenge to foundationalism in ethics is to provide a plausible account of epistemically basic beliefs. How do they receive adequate justification (adequate for providing an epistemic basis for nonbasic beliefs), if not from other beliefs? Foundationalists have argued that some mathematical and logical beliefs are justified (or known) noninferentially on the basis of, say, understanding alone, and that introspection of one’s pain, pleasure or other “occurrent” psychological state can justify one’s introspective judgments about that state without inferential support from other beliefs. Memory and perception are also often taken to be sources of epistemically basic beliefs. But what about moral beliefs? One kind of foundationalism in ethics, advocated by W. D. Ross, takes principles of prima facie duty to be noninferentially justified for those who believe them on the basis of understanding them. Other versions of moral foundationalism hold that particular moral beliefs—beliefs about the moral or evaluative character of particular concrete actions—can be noninferentially justified. Recall, for instance, Robert Audi’s defense of moral perception described earlier. And other accounts of the epistemic status of basic moral beliefs are possible. Lemos’s chapter concludes by considering possible objections to the foundationalist notion of basic beliefs, objections that he thinks the foundationalist can answer.

In “Moral Theory and its Role in Everyday Moral Thought and Action,” Brad Hooker begins by clarifying the various concepts featured in the chapter’s title. Inspired by J. S. Mill, Hooker argues that ‘morality’ may plausibly be defined in terms of such reactive attitudes as guilt, resentment, and indignation, which serve to distinguish morality from extramoral action-guiding principles such as club rules, principles of etiquette, and laws. Hooker likewise clarifies the notion of everyday moral thought and action before addressing the main question of the chapter, namely, how, if at all, a moral theory can figure usefully in everyday moral thought and action.
Historically important moral theories, including versions of consequentialism, contractualism, virtue ethics and Rossian-style pluralism, all feature moral principles that purport to explain what “makes” an action obligatory, permissible or wrong. The question, then, is whether and how these purportedly explanatory or metaphysical principles (and the concepts they employ) can figure in everyday moral thought leading to action. As Hooker explains, often enough ordinary moral thought about what to do does not rely on an explicit, conscious appeal to moral principles, even though, as a result of one’s
moral education, one’s moral decisions might be principled. In such cases, one is able, more or less effortlessly, to make morally appropriate choices without much thought. (See Chapter 16 for a discussion of the role that moral principles can play in everyday moral thought and action without being brought to mind on the occasion of thinking and acting.) However, there are everyday contexts of moral decision making in which one needs to carefully consider the alternative courses of action among which one must choose. Moral dilemmas are important examples of this phenomenon. Sometimes one must choose between minimizing the violations of the rights of one group of people and providing a large benefit to some other group. Such cases (and others that Hooker explains) complicate everyday moral thought and action, and it is here that one must engage in some form of moral theorizing in deciding what to think and do. In such contexts, one might appeal to moral principles featured in normative moral theories, principles that employ such familiar concepts as rights, promises and harms; for example, “people should keep their promises.” However, even if the principles of moral theory can (and perhaps should) enter into everyday moral thought and action, there remains the question of which moral principles can be reliably applied in everyday contexts. Hooker concludes by reflecting on this issue.

In the final chapter of this part of the handbook, “Methods, Goals, and Data in Moral Philosophy,” John Bengson, Terence Cuneo and Russ Shafer-Landau reflect on the methods employed by metaethical theorists. The authors distinguish the inputs to metaethical theories, their outputs, and the methods and method-related goals pursued by those in the discipline. Inputs are the data to be taken account of in such theorizing, while the outputs are metaethical theories. Methods are instructions or criteria for constructing and evaluating theories in light of the goals one attempts to achieve in constructing a theory. The authors mention true belief, knowledge, and understanding as suitable theoretical goals of metaethical theorizing, but they hold that understanding—“the sort of epistemic achievement that provides comprehensive and systematic illumination” of the target subject matter—is the proper ultimate goal of theorizing. With regard to this multi-part picture of metaethical inquiry, worries have been raised, particularly against the very idea of data. As conceived by the authors, the data proper to metaethics have four principal features: they are (1) starting points for theorizing, (2) inquiry-constraining, (3) to be collected, and (4) neutral in the sense that they represent “common currency” initially admissible by theorists of different persuasions. One traditional worry about data so understood is their alleged theory-ladenness—a challenge to the neutrality of data and hence to the inputs of any area of theorizing, including metaethical theorizing. The authors try to answer this challenge to metaethical data, as well as the parallel worry about the method-ladenness of data, and proceed to consider four conceptions of data that bear on the role of data in theorizing—pragmatic, metaphysical, psycho-linguistic and epistemic—arguing that all but the last fail in one way or another to accommodate the four characteristics of data. Roughly, according to the epistemic conception, data are what theorists are in a good epistemic position to accept as genuine features of the domain under consideration.
Finally, and in light of their epistemic conception of data, the authors illustrate this conception by identifying four metaethical data that possess the four principal features data are supposed to possess.
M.T.
Notes
1. Because it is widely thought that knowing a proposition entails justifiably believing it, the following questions can also be raised about the justification of moral claims.
2. This question, and the remaining ones, concern so-called propositional moral knowledge—knowing that a moral proposition or claim is true. Moral know-how is discussed in Chapter 22.
10
ANCIENT AND MEDIEVAL MORAL EPISTEMOLOGY
Matthias Perkams
1. Introduction
The topics in contemporary moral epistemology have been crucial themes of philosophical inquiry since the very beginning of philosophical ethics. Heraclitus (ca. 500 BC) declares: “All men have a share in self-knowledge and sound thinking”, and “Sound thinking is the greatest virtue and wisdom: to speak the truth and to act on the basis of an understanding of the nature of things”.1 However, it was Socrates (ca. 400 BC) who took the questions of how cognition relates to actions, what its role is in bringing them about, what sound cognition is, and how it may be acquired to be the very core of philosophy. Not only did he inaugurate a still ongoing debate about the importance of right reason for acting well and about how it can be acquired, he also gave philosophy itself the aim of being the art of teaching one how to live a good life by imparting knowledge about what a good life consists in. As a consequence, practically all ancient and medieval philosophical thinkers agreed in principle with the thesis that good human behavior is defined by following right reason.
The ancient thinkers, especially of the Classical, Hellenistic, and Imperial periods (up to AD 200), focused on the questions of how right thinking can bring about good actions, its relation to the virtues, and how this right thinking can be taught; in addressing these issues, they developed the nontrivial notion of a free will.2 The medieval period also addressed these questions, while introducing new ideas. Crucial was their assumption of a faculty called “the will”, which has been understood from Augustine onwards as a reason-based ability to spontaneously bring about good or bad actions. The medieval thinkers, who distinguished the will from both reason itself and the nonrational faculties of the soul, theorized how reason and will interact in order to bring about certain actions. Other new developments in medieval ethics concerned (1) the so-called natural law, which was often understood as an innate knowledge of universal moral propositions shared by any rational being that also grounds the universality of practical reason, and (2) a more refined discussion of the evaluation of single actions. All these debates were inspired by the Christian doctrine of sinning, which implies that even a virtuous agent may commit a mortal sin and thereby destroy her virtuous character.
Of the many ancient and medieval positions on these issues, the ethics of Aristotle, as presented mainly in his Nicomachean Ethics (= N.E.), is of particular importance. This theory found its most developed and convincing form in the ethical ideas of Thomas Aquinas, who was himself a committed Aristotelian but read the N.E. in the light of more recent developments, and it remains important to the present day.
2. Moral Epistemology in Ancient Philosophy
A. Eudaimonism
A general feature of ancient and medieval theories of practical reason is eudaimonism. According to this doctrine, which found its most famous formulation in Aristotle (who, however, elaborated upon (Socratic and) Platonic ideas), human beings strive to attain eudaimonia. This Greek term, which can be rendered as happiness, means a state in which a human being has reached all the approvable aims of her life in such a way that she cannot be better off, and so nothing can be added to increase her well-being. Thus Aristotle defines happiness as “complete and self-sufficient, in that it is the end of human acting” (N.E. 1.7.1097b20f.).3 In his famous “function argument”, he argues for two further widespread presuppositions of ancient and medieval ethics, that is, (1) that happiness can be reached by an activity in accordance with reason and (2) that such an activity depends upon acquiring virtues (N.E. 1.7.1097b22–98b8).4 These two theses have the following three important consequences for the shape of ancient and medieval ethics:
1. Their acceptance does not imply, in contrast especially to Kant, a clear-cut distinction between acting from moral as opposed to nonmoral reasons: It is a sufficient motivation for a good action that it is performed because the agent understands it as forwarding her own happiness.
2. The basic criterion for distinguishing morally good from bad actions is whether or not one is committed to a sound idea of which end truly results in happiness for its possessor. Whereas there are strong disagreements about what this end is—whether, for example, it is virtue (Plato, Aristotle, the Stoics), pleasure (Epicurus), or union with God (Neoplatonic and Christian thinkers)—virtually all ancient and medieval philosophers agree that this end cannot be reached without having acquired the relevant virtues.
3. If everybody performs her actions from the wish to become good, there is no need to explain why somebody is motivated to perform good actions: someone’s belief that a certain action will contribute to her own happiness is a sufficient motivation for so acting. What has to be explained are bad actions. In antiquity, their explanation always involves a cognitive error, either about what true happiness consists in or about what has to be done in order to reach it. Many medieval authors, to the contrary, thought that bad actions can be explained by spontaneous decisions of the will against the sound proposals of reason.
B. Socrates and Plato
The connection between bad actions and a cognitive failure is unequivocally stated in the first fundamental account of the function of reason in human action. This position may be found in the Platonic Protagoras,5 but arguably it reproduces a position of the historical Socrates.6 The Platonic Socrates in the Protagoras wishes to demonstrate that knowledge (epistēmē) is not a slave of the passions but rather “something fine and such as to rule man” (Prot. 352c). In order to refute the thesis “that a man often does bad things though he knows that they are bad and could refrain from doing them” (Prot. 355a), he argues that “the good”, which any human being pursues, is in fact “pleasure” (hēdonē). If this is right, Socrates continues, we are not faced with a conflict between knowledge and pleasure (as most people seem to think); rather, we have to use knowledge in order to find out which actions will bring us most pleasure: If we know that greater pleasure will result from a more remote good, we will pursue the good leading to this greater pleasure instead of an action procuring us a smaller pleasure that is immediately at hand. Thus our striving for pleasure always bids us follow a rational judgment about what brings us the most pleasure; if we fail to act in a way that maximizes our pleasure, this must be due to a sort of ignorance (amathia) (Prot. 357de).
This so-called “Socratic intellectualism” does not fit some of our intuitions about virtuous behavior. First, it lacks a principled distinction between morally good or bad actions on the one hand and pleasurable and nonpleasurable events on the other. Second, the consequence that our bad acts always result from ignorance is not easily reconcilable with the common sense assumption that we sometimes consciously do bad things. In fact, the Protagoras account lacks a clear distinction between knowledge and opinion, which leaves the claim that wrong actions are caused by ignorance ambiguous as to the nature of the ignorance. Third, the presupposition that our striving for pleasure is our sole motive for action does not fit with the fact that we are interested in more than one good. Thus the Socratic account can be read as a forceful argument for ascribing to moral reason a crucial role in bringing about actions, but it also provoked many further discussions.
A different description of how human actions are brought about can be found in Plato himself, who in the Politeia contests the claim that reason is the only power which causes actions. He states that we often experience inner conflicts and explains them by tensions between reason on the one hand and two emotional states on the other; he calls them desire (epithymia) and thymos, that is, courage or anger. Any of these three faculties can conflict with the other two.7 However, in a virtuous person they would be united in a relation of justice, according to which each faculty would perform its task to the right degree (Rep. 436a-443e). Thus, Socrates’s intellectualist but unified account of human behavior is replaced by a theory that distinguishes between three parts of the soul.8 Whereas Plato can explain why conflicts in human beings arise and how they can be resolved, he leaves unanswered these questions: What is the element guaranteeing the unity of the soul? If there is no such element, how can any conflict among the three powers lead to action at all? Is there a single source of decision making within the soul?
Or is the soul only a field of conflict between different parts?
C. Aristotle
Such considerations pave the way for Aristotle’s practical philosophy, which is important, among many other things, for introducing the concept of “practical reason” (N.E. 6.2.1139a26f.). Aristotle’s account of this faculty is closely connected with at least three topics relevant for describing human action: first, a theory of the soul’s faculties; second, a differentiated theory of the virtues; and third, the factors that figure in sound practical reasoning.
For purposes of ethical theorizing, Aristotle distinguishes two parts of the human soul, a rational part and a nonrational part. The nonrational part, which Aristotle calls “the striving desire” (orexis),9 is capable of understanding and performing reason’s commands, such that with its help one aims for eudaimonia by performing certain actions (N.E. 1.13.1102b13–1103a3). Within the rational soul Aristotle distinguishes two further parts: the “scientific” part, concerned with unchanging objects, and the “deliberative” part, concerned with changeable things (N.E. 6.1.1139a3–15). The deliberative part directs one’s striving toward performing right or wrong actions, which depends on “choice” (prohairesis). However, the starting point for deliberation about how to act correctly is the goal of eudaimonia, which is what desire strives to achieve, such that its being properly directed is an essential precondition for being able to judge particular courses of action well (N.E. 6.2.1139a29–b4). Consequently, Aristotle calls choice “either desiderative reason or ratiocinative desire” (N.E. 1139b4f.), which emphasizes the close connection between the rational and the nonrational factors in determining human actions. For Aristotle, the parts of the soul are neither identical, as Socrates has it, nor acting against each other, as Plato says in the Politeia.
This structure of the soul is reflected in Aristotle’s theory of virtues. Basically, the virtues can be divided into virtues of character, the so-called ethical virtues, by which desire is directed to the right goals, and the dianoetic virtues, which improve the rational faculty itself (N.E. 1.13.1103a3–10). The most important dianoetic virtue, which is crucial, too, for moral epistemology, is “phronēsis”, usually rendered as prudence. This form of intelligence is not concerned with detecting unchanging beings but with “praxis”, that is, human action in the observable world (N.E. 6.5.1140a24–b5). Aristotle defines praxis, in contradistinction to any productive activity, by its having as its end the well-being of the agent herself, which is equivalent to the virtuous activity featured in the function argument (N.E. 6.5.1140b4–7). Thus, prudence can never be used in a wrong way, whereas, in the case of artistic production, it is a sign of mastering an art that the intellectual virtue engaged in productive activities can be used in a wrong way (N.E. 6.5.1140b20–22). The ends whose realization results from prudence are fixed by the ethical virtues (N.E. 6.13.1144a7–9), such that, thanks to them, prudence cannot be distracted from the right end by nonrational passions (cf. N.E. 6.5.1140b11–20).
These ethical virtues are not natural faculties; rather, they have to be acquired by habituation, that is, by acting repeatedly in a virtuous way (N.E. 2.1.1103a23–b6). This is also important for becoming prudent, because prudence, which is related to sense perception, relies for its judgments upon experience
(N.E. 6.8.1142a10–30); thus, it too presupposes repeated right actions that are adequate to the other virtues. However, this necessity of habituation seems to involve a circularity, for it would seem as if a virtuous action can only be performed by someone who has acquired the virtue in question and is thus acting prudently. In turn, it would seem that to acquire a virtue and act prudently, one must first
perform virtuous actions leading to acquisition of the virtue (N.E. 2.4.1105a17–b2). This circle can be avoided by noting that the actions that lead to the acquisition of the virtues are not themselves virtuous but only actions of a similar type, which do not stem from prudential judgment but from the advice of other people (cf. N.E. 1.4.1095b4–13; 2.4.1105b5–18). There is a difference between someone giving alms based on a prudential judgment and giving alms only because it has been explained to her that this would be a just action. Thus, a good education, which is also the principal task of the lawgiver (N.E. 2.1.1103b2–6; 10.9.1179b29–80a28), is essential for becoming competent in judging practical matters. Consequently, educational questions are at the core of Aristotle’s Politics, as they are in Plato’s dialogue Nomoi (The Laws).
If the chief competence of a prudent and virtuous person is the ability to act well in particular situations, then Aristotelian prudence is, generally speaking, not a competence for discovering right moral principles. Rather, these principles are either implied in the concept of happiness (as Aristotle demonstrates in the N.E.) or they are taught in a given society. However, in some passages, Aristotle points toward a wider role for prudence: the truly prudent and virtuous person is, for him, the measure of what is right and wrong (N.E. 3.4.1113a29–33), and there is, in Aristotle’s words, an “architectonical” prudence that makes one competent to give laws (N.E. 6.7f.1141b21–26). Thus, we can ascribe to truly prudent men the competence to act on universal rules and to implant them into their societies. The background of this assumption may be explained (as we find in some modern interpretations) by the fact that prudence necessarily specifies the ends implied in the virtues themselves, such that concrete actions can be performed and convenient regulations may be found.10 Still, successful prudential reflection on universal norms will not be the task of everybody but presupposes virtues acquired only by the few.
The importance of single situations and their special place in Aristotle’s thought surfaces also in his doctrine of the so-called practical syllogism: Practical reasoning takes its start from a universal premise (“it is good to give alms”) and explains the case in question in this way (“to do this or that would be giving alms”), with the result that somebody acts in the proposed way (Met. 7.7.1032b6–21)11—or does not, if the action has been judged bad. In such reasoning, the second premise has to be a particular evaluation of the situation at hand (N.E. 7.3.1147a2–4). Thus, a practical syllogism is not simply a subsumption of a case under a given description; it includes a thorough evaluation of the situation that appreciates all the relevant aspects in light of which the agent is determined to act.
This comes out very clearly in Aristotle’s account of “uncontrolled action” (akrasia), which is intended to explain how someone can act contrary to her own better judgment—an unresolved problem in Socratic intellectualism (N.E. 7.2.1145b22–29). Indeed, Aristotle offers more than one explanation for such failures: Either someone has right knowledge but does not use it, or her particular second premise does not influence her actions, or she has the knowledge in the same way as a mad person or a person dominated by deep nonrational passions (N.E. 7.2.1146b31–1147a17).
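In schematic form (a summary gloss rather than Aristotle’s own formulation): from the universal premise (“it is good to give alms”) and the particular premise (“this act would be giving alms”), the conclusion is the action itself; akratic failure can accordingly be located in a failure to use the universal premise, in the particular premise’s failing to engage action, or in the knowledge being possessed only in the impaired way just described.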
Whereas some of these descriptions seem to allow for the possibility that someone is vanquished by her passions, a failure of reason itself seems to be a crucial element in all these accounts. In the end, it remains open whether an imperfect rational judgment caused the passions to overwhelm one’s choice or whether the strength of the passions was by itself enough to override practical reason. Probably Aristotle thinks that both a lack of ethical virtue and a lack of prudence
can be sufficient to cause bad actions, which are not really willed as such.12 In any case, a virtuous agent fitted with the virtue of prudence has to be distinguished from the uncontrolled person (N.E. 7.2.1146a4–15). Aristotle seems to assume that such an agent never acts wrongly, though he admits that very bad external conditions may in particular cases deprive even very good people of their virtuous habits (N.E. 1.10.1100b4–01a21).
All in all, Aristotle offers a remarkably rich and complex analysis of the role of knowledge in human actions. An important advantage of his conception is its ability to explain moral knowledge as an integral part of human life, on the basis of which one deliberates about how to realize one’s personal wishes. Furthermore, Aristotle’s view makes plausible the claim that the ability to judge such matters competently presupposes having acquired a good character, which includes that one’s personal wishes correspond to an ideal of a good life. However, some important questions remain open: What is the crucial factor in making a moral decision: one’s rational judgment or a striving desire? Are there any sources and criteria for practical reason apart from the experience of the prudent person? Indeed, Aristotle seems to suggest that there are universal moral rules—especially universal prohibitions (N.E. 2.6.1107a8–17)—but he does not state clearly in what way they are reason-based. Thus it remains an open possibility that valid moral norms are always a product of certain societies, such that societies cannot be criticized effectively by judgments of practical reason.
D. Developments in Later Antiquity
These questions are further discussed in later ancient theories, which lead to some remarkable developments. The universal character of moral norms is clearly reflected by the Stoics, who see human reason as an expression of a “natural law”, which rules not only the whole world but also the individual human being. Thus reason itself prescribes that “the good has to be done and the bad has to be omitted”, commanding those acts by which human beings will strive successfully for well-being (Stoicorum Veterum Fragmenta 3, 314).
Regarding action theory, the Stoics return to a form of intellectualism, comparable to the Socratic view, but distinguish two forms of thought: on the one hand true and false “impressions” that arise in us without being totally under our control, and on the other hand “assent”, which is the ability to decide which impressions to accept and which not (where “acceptance” means that we allow those impressions to influence our whole behavior).13 Among the later Stoics, the question of what the ruling faculty of a human being is becomes more prominent. Especially Epictetus (ca. 50–138) uses the Aristotelian term “choice” (prohairesis) to describe a faculty by which we are able to decide, independently of any outward compulsion, which impulses to take up and which to reject (Dissertiones 1.17.21–29).14
This line is pursued by ancient Christian thinkers, among whom especially Origen (ca. 185–254) accepts Stoic intellectualism with slight changes.15 However, it is Augustine (354–430) who calls this ruling faculty “the will” (voluntas) and distinguishes it from both reason and the nonrational passions (De Libero Arbitrio 1.71–76).16 Due to his vast influence, this terminology is accepted throughout the Latin-writing world—that is, through Western Europe. However, the meaning of the term and the description of the will in Augustine include many features that are not easily reconcilable. On the one hand, at least
the young Augustine understands the will as a striving for the good, that is, for happiness (De Libero Arbitrio 1.76). On the other hand, he stresses the complexity of the human will by saying that there can be one or more different “wills”—that is, tendencies of willing—in one human being. In his autobiographical Confessions, he describes how his own will, which is ready to give his life a new direction, is still “bound by the chains” of his old willing, which can only be overcome slowly (Conf. 8.19–24). Thus Augustine subscribes to the words of Saint Paul, “The good which I want to do, I fail to do; but what I do is the wrong which is against my will” (Rom. 7:19), and in his later years accepts a theory of predestination, according to which human beings are not capable of acting well by their own powers (De Diversis Quaestionibus ad Simplicianum 1.1). However, he sticks to the position that the bad will, and nothing else, was the cause of the fall of the apostate angels (Civ. 12.6f) and also that human beings sometimes act badly, without any recognizable reason, only because of their willing badly (Conf. 2.8–18). Thus, the will becomes the core faculty of choice, and the role of reason seems to be reduced to being just a normative “voice” admonishing man to be good but lacking a real influence upon human acting, which is solely dominated by a will tending towards evil.
3. Moral Epistemology in Medieval Philosophy
A. Basic Concepts of Medieval Ethics: Reason, Will, and Conscience
Thus the medieval thinkers, familiar with the Stoic and the Augustinian positions, were faced with two tendencies: on the one hand, the view that reason is an important factor in human actions and the measure of what is good or bad; on the other hand, the view that which actions a person performs depends upon that person’s will. They were furthermore convinced that good actions conform to the will of God, who imposed the moral rules upon human behavior and who is competent to observe and judge any human action. It was therefore natural enough for medieval thinkers to see human laws as an expression of the moral rules given by God.
Two consequences imposed themselves, which were formulated by Peter Abelard at the beginning of the twelfth century, the time in which medieval academic discussion began. First, it is “reason which must dominate me as a law”, and it is reason which is abandoned by the sinner who acts, according to Saint Paul, against what he himself wants (Commentaria in Epistolam Pauli ad Romanos (ed.) Buytaert, 209). Consequently, the medieval thinkers take up the Stoic idea that reason is a universal “natural law”, which prescribes universal rules for what has to be done and what has to be avoided.17 This universal moral law is located in a special part of the soul, which, from around 1170 onwards, is called “synderesis”. Already before that time medieval thinkers discussed whether this is a rational faculty or a kind of natural will for the good;18 later on, for example in Meister Eckhart (Deutsche Werke 2. 211.1–3), this “spark of the soul” is defined, in Neoplatonic terms, as the center of man’s inner life.19 One of the most elaborated accounts of natural law was developed by Thomas Aquinas: He distinguishes not only between the first, unchangeable rules of reason (“do not harm anybody”) and their immediate consequences (“do not steal” or “do not kill”), but adds a theory of human laws, which have to be developed by the lawgiver to comply with
the rules of natural law, in order to establish an order which is suitable for the human good (Summa theologiae [= STh] 1–2.95.2).
Second, any human being has, due to her reason, the competence to judge her own actions. An individual’s reason can be identified with “conscience”—a word which is especially widespread in Latin literature from antiquity onwards—in roughly the following way: Conscience is an individual application of the universal norms that are known to any human being by the natural law of reason, such that it can judge beforehand which action would be good to perform and which not, and afterward whether one has acted well (STh 1.79.13). From such considerations thinkers like Peter Abelard and Thomas Aquinas infer that it is a greater sin to act against one’s own conscience than to perform an action that is universally judged to be bad. Thus, Abelard excused the persecutors of Christ from having acted badly (Ethica (ed.) Luscombe, 55–57); a century later, Aquinas states that believing in Christ can only be good if it is in line with the believer’s conscience (STh 1–2.19.5). Abelard even held that the judgment of the personal conscience during a human life can preempt a later judgment of God.20
However, this does not mean that any judgment of conscience is sufficient to guarantee that an action is good. This is partly due to the fact that most medieval thinkers, unlike Kant, assumed that one’s conscience may be erroneous, which can only be excused if the error is not due to negligence or to a failure to examine one’s situation (STh 1–2.19.6). Furthermore, conscience is competent to judge not only outward actions but also the inward intentions of the agent, which are crucial for the value of an action: It is always bad to do good things with a bad aim; for example, giving alms in order to be admired by other people (STh 1–2.20.1). Less clear are cases of doing an apparently bad action with a good intention, for example, stealing something in order to help the poor.21 In order to do justice to such cases, the medieval thinkers developed very complex analyses of human action, which pay attention to different outward and inward aspects of actions.22 In early modern times, an interesting discussion about the criteria of conscience comes up: Whereas Aquinas and his successors still hold that a good conscience can be established only by avoiding all actions that are not judged without doubt by reason to be good, from the sixteenth century onwards the so-called “probabilists” argue that any plausible judgment of practical reason can be sufficient for acting in conformity with one’s conscience.23
Already in the twelfth century, the action theories linked with the concepts of “reason” and “will” differ considerably: Anselm of Canterbury (1033–1109) and Abelard (1079–1142) stress the crucial role of reason for man and assume a form of “will” that is rational and often conflicts with a nonrational willing. For Anselm, freedom is the capacity of the will to act in accord with actions proposed by reason (De concordia 6). Abelard distinguishes two meanings of the word “willing”, that is, either “approving or disapproving something” or “liking or disliking something”. In the second sense, to will something may mean simply to desire it, whereas in the first sense a rational judgment is included, which can be obeyed or disobeyed by the will (Commentaria in Epistolam Pauli ad Romanos (ed.) Buytaert, 206–208). Later on, Abelard calls such an act of disobedience “consent” (Ethica (ed.)
Luscombe, 17).24 By contrast, his adversary Bernard of Clairvaux sees reason only as a “slave”, which merely serves the will according to its wishes (De Gratia et Libero Arbitrio I.2.3). The rapid influence of this view can be seen from the teaching of Abelard’s successor in Paris,
Robert of Melun (ca. 1100–1167), who seems to have been the first to develop a position often called “voluntarist”, insofar as according to him only the will, as responsive to preceding rational judgment, is responsible for our good or bad acts.25 Nearly all medieval thinkers agree that the will has to be formed by acquiring virtues; however, contrary to their ancient predecessors, they stick to the Christian conviction that even a virtuous agent can still commit a mortal sin.26
An unusual idea of Abelard’s is his abandonment of eudaimonism in favor of a distinction between moral and nonmoral motives. Liberal persons who “follow intrinsic goodness rather than utility” are the model for anybody who wants to act out of love of God and neighbor. This core motive, which any good person has to fulfill if she is to act well, has to be understood as saying that we have to love God because he is good in himself and not because he can grant us a happy life. If we fail to do so, we will be “traders, even if in spiritual matters” (Commentaria in Epistolam Pauli ad Romanos (ed.) Buytaert, 201–205). However, this position remains an exception in medieval debates, because practically all thinkers assume with Augustine that for an action to be good it is sufficient that it pursues a true good.
B. The Reception of Aristotle in the Thirteenth Century: Albert the Great and Thomas Aquinas
This tendency is strengthened in the thirteenth century, when Aristotle’s Nicomachean Ethics became available to Latin readers. Albert the Great and his pupil Thomas Aquinas integrate most of its ideas into their own ethical systems, partly reinterpreting Aristotelian concepts by linking them with the medieval ethical terminology.27 Thus, thanks to the great interest in ethics in the Latin world, the N.E., which had, compared with other works of Aristotle, found only a relatively sparse reception in late ancient and Arab philosophy, becomes the most important textbook of ethics and is recognized as one of the most important ethical texts of all time.28
One salient point where the N.E. gains influence is the interaction between reason and will. In order to reformulate the old idea that free decision (liberum arbitrium) or choice is produced jointly by reason and will, Albert stresses that the will is not only a faculty to bring about an action but is also directed toward a goal (Super Ethica 3.4.178). Aquinas elaborates this idea, employing a distinction in terms of the Aristotelian causes: The will is both the efficient cause of an action, that is, that which brings the action about, and its final cause, because it is active by being directed to some end. Reason, on the other hand, is the action’s formal cause, that is, that which constitutes the action as an action of some particular type (De Malo 6). Again, reason and will are inseparable from each other, because any act of the will is shaped by a rational judgment. However, it is the will’s goal-directedness that sets in motion the process of reflection.
Thus, as in Aristotle, human action is explained by the fact that all people strive for happiness. In Aquinas’s eyes, this does not contradict the fact that different human beings pursue very different ends, as he establishes by a distinctive account of practical reason. For the domain of reason or of morality—concepts that for him are quasi-synonymous (STh 1–2.17.4)—is not fixed in the same way as nature is. Whereas every natural object belongs
to a certain species, every human action has so many features that reason has to decide what the crucial aspect of any prospective action is. It has to take into account not only the basic description of the action (its “object”) but also certain accidental features, the so-called circumstances, including the intention of the agent. Thus a theft from a holy place can be seen as a sacrilege, not as a theft, if reason judges the religious aspect to be crucial (STh 1–2.18.9). In this sense a universal ethical law has to be applied by reason not in a deductive way—subsuming a single action under a universal genus—but by “finding” or “inventing” a sufficiently concrete rule that is fitting to judge the case in question. This is also the way in which the natural law of reason has to be concretized by the human legislator in order to fit a certain society, such that each concrete state is ruled according to suitable laws (STh 1–2.94.4).29 If these laws are no longer fitting, they should be changed, provided this can be done without significant damage (STh 1–2.97.1–2).
Individual practical reason is also crucial for Aquinas’s account of free action. For him, human actions are free because reason is never definitively fixed upon one good, and thus the will is never determined to pursue only one course of action. Only a good that included all perfections in itself would leave reason without the possibility of opting for another good. But there is no such good in human life in which the perfection of God can be grasped completely. Consequently, reason can always focus upon a good other than the one now in view and direct the will in another direction (STh 1–2.10.1; De Malo 6).30
Thus, the mid-thirteenth century sees a renewal of Aristotle’s ethics that includes a more refined treatment of practical reason. The interplay between the Stoic idea of universal moral rules known to every human being by the law of nature and the Aristotelian emphasis on individual, context-sensitive judgments about concrete situations leads to groundbreaking insights. For example, Aquinas can explain why the individual conscience is not necessarily obliged by state laws, because it is competent to judge those laws by its own moral standards (STh 1–2.96.4), and why different people can legitimately have different and even conflicting views on the same situation (STh 1–2.19.10). However, given its strong Aristotelian shape, Aquinas’s account of freedom invited criticism for having failed to describe truly free decisions of human agents in the Augustinian sense.
C. Voluntarism
This problem becomes the main interest of a whole series of authors from around 1260 onward who engage critically with Aquinas and other Aristotelians. One of the first of these so-called voluntarists31 is Henry of Ghent (ca. 1240–1293).32 According to him, if free decision is explained by a rational preference, one is forced to assume that the will automatically follows a rational judgment once it has linked this or that course of action with the goal intended by the agent. For in such a theory any action of an agent has to be seen as a step toward the final goal of happiness (Quod.1.q.16; p. 100f.). However, in order to explain freedom (libertas), one has to allow that the agent can choose among various courses of action proposed by reason, even if all proposed actions are equal in every respect, or if the action that is chosen seems worse from a rational point of view (Quod.1.q.16.p. 104.109f.). Now, such a choice cannot be explained by reason alone, because reason, due to its syllogistic structure, always results in conclusions that purport to follow necessarily from certain premises; consequently the crucial choice that determines what is to be done must be brought about
by the will (Quod.1.q.16.p. 107f.). Henry and other authors extend these ideas toward a comprehensive theory not only of human but also of divine action.
In a similar way, John Duns Scotus (ca. 1265–1308) distinguishes sharply between so-called rational causes, which are able to bring about courses of action that differ in kind, and “natural causes”, which can, under the same conditions, only produce effects of the same kind. For Scotus, this distinction between freedom and nature is such a basic feature within the genus of causes that it cannot be explained by any other concepts (Quaestiones super Metaphysicam Aristotelis 9.15 nr. 22–24; Opera philosophica 4 p. 681f.). In his view, even reason itself, because it draws necessary conclusions from accepted premises, is not a “rational” cause; rather, only the will fulfills the required criteria, because it can freely choose any course of action proposed by reason as a possibility (Quaestiones super Metaphysicam Aristotelis 9.15 nr. 36–41; ibid., 684–686).
Scotus applies this structure not only to the human will but also to the creative activity of God. His argument runs as follows: If there is contingency in the world—and there can be no doubt that there is, because otherwise there would be neither free nor evil acts—it cannot be due to the activity of a cause that necessarily brings about its effects, for such a cause can only produce effects in a necessary way. Thus, one has to assume that God creates the world by a voluntary act, such that he is not to be thought of as a necessary and natural cause (Reportatio Parisiensis examinata d. 39–40; q. 1–2; nn. 26–44; (ed. trans.) Soeder, 76–88). Of course, for a Christian theologian this cannot mean that God has no prescience of the actions he produces, because in that case there could be no divine providence. Scotus concludes that the omniscient divine intellect and the omnipotent divine will bring about the world together: The divine intellect determines, by necessary deduction from different premises, innumerable possibilities for how the world could be created; from these infinite possibilities, the divine will elects one on the basis of a contingent decision within divine eternity and creates one concrete world, which he could have created otherwise (ibid. d. 38, q. 1, nn. 35–43; (ed. trans.) Soeder, 48–54).
This ingenious theory has important consequences for the status of moral rules: For Scotus, most of them are themselves contingent: They are valid only because God has chosen them, out of many possibilities, to be valid. Thus, he restricts the absolutely necessary content of the “natural law” to one rule, namely that we have to love God and are not allowed to hate him (Ordinatio l. 3 d. 37 nn. 16–28; Editio Vaticana 10, pp. 279–284).33 His younger contemporary William of Ockham (ca. 1285–1347), though, questions even the necessary status of this rule and affirms, if somewhat reluctantly, that God might have commanded that it would be good to hate him. According to his theory, moral rules are not good because they are prescribed by reason; rather, reason prescribes them as good because God, who could have chosen otherwise, wants them to be valid rules (In sententias 2 q. 15; Opera theologica 5, p. 352f.). Ockham considers them to be valid “because of the ordered power of God” (potentia dei ordinata), whereas “because of the absolute power of God” (potentia dei absoluta) there is no necessity to issue one rule or another (Quodlibet 6 q.1; Opera theologica 9, p. 585f.).
By this idea, Ockham can also explain why God is free to dispense someone from certain rules, such that actions violating these rules are good (In Sententias 1 q. 47; Opera theologica 4, p. 685), as, for example, when he allowed the people of Israel to take the golden vessels of the Egyptians (Ex. 12, 35f.).34
The voluntarist theories, which are in a certain sense consistent theoretical formulations of certain Christian convictions, thus show a remarkable potential to question the naturalness and self-evident validity of many traditional rules of human society. It is therefore not surprising that they find fierce resistance among other theologians, the so-called intellectualists, who still see the intellect not only as the normative but also as the moving force behind any human action. In order to refute the normative implications of the voluntarist theories, Gregory of Rimini (In Sententias 2 d. 34–37 a. 2; vol. 6, p. 235) explicitly formulates the axiom that certain moral rules would be valid for reason in any case, “even if God—which is impossible—were not to exist” (etsi deus non daretur). By this sentence, which was to be quoted some 300 years later by Hugo Grotius at the beginning of his famous treatise On What Is Right in War and Peace (Prolegomena, ch. 11), Gregory inaugurated the rationalist moral theories of early modern times, which were prompted partly by the wish to refute voluntarist irrationalism—and, as an important part of this, to found morality upon reason alone.
Notes
1. Heraclitus, B112 and B116 Diels-Kranz, transl. Graham.
2. Cf. the sketch in Frede, 2011, 1–18.
3. Quotations from the N.E. are taken from the translation of R. Crisp, slightly adapted if necessary. For a useful interpretation of N.E. 1097a25–b20 see Pakaluk, 2005, 67–74.
4. For understanding the argument one also has to take into account parallel passages in Aristotle’s Eudemian Ethics; cf. Müller, 2003.
5. Plato, Protagoras 352a–358b. Quotes are from the translation of C.C.W. Taylor.
6. Cf. Aristotle, N.E. VII 2, 1145b22–27.
7. See Chapter 7 of this volume, “Moral Reasoning and Emotion”.
8. Two different explanations of the relationship between the Protagoras and the Politeia account, which are examples of the complicated scholarly debate on these texts, can be found in Brickhouse and Smith, 2007 and Shields, 2007.
9. Most English translations render orexis with “desire”, which is, however, misleading, because Aristotle distinguishes between a nonrational “desire” (epithymia) and the more general notion of orexis.
10. Cf. Reeve, 2013, 14f. and Russell, 2014, 204–206 as examples of the very different modern interpretations.
11. Cf. the overview of Aristotle’s examples of practical syllogisms in Santas, 1969, 163–169; see also Reeve, 2013, 6–17.
12. As is correctly suggested by the differentiated analysis of Müller, 2009, 110–130.
13. Cf. Frede, 2011, 37–48.
14. Cf. Hofmeister-Pich, 2010; Frede, 2011, 66–88.
15. Frede, 2011, 101–124, esp. on Origen, De Principiis 3.1–4.
16. On Augustine’s theory of will cf. for example Müller, 2009, 301–366; Frede, 2011, 153–174 (who possibly underrates the originality of Augustine).
17. See, e.g., Marenbon, 1997, 267–272.
18. Perkams, 2011.
19. Kern, 2013, 237–244.
20. In the so-called Cambridge Commentary on the Pauline Epistles, a report of Abelard’s lectures. Latin text and German translation in Perkams, 2001, 253.
21. Cf., e.g., Jensen, 2007.
22. Especially Aquinas in STh 1–2.18–21; cf. Pilsner, 2006.
23. Cf. Deman, 1936.
24. Cf. Marenbon, 1997, 258–264.
25. Cf. the texts and analyses in Perkams, 2012.
26. For example Abelard, Ethica p. 5 Luscombe. On Aquinas cf. Kent, 2013, 108f.
27. The Aristotelian character of Aquinas’s ethics is explained in great detail in Rhonheimer, 1994.
28. Cf. on this Perkams, 2014, 11–23; on the reception of the N.E. in the thirteenth century cf. Celano, 2012.
29. This interpretation has been explained in Perkams, 2013 and 2018; alternative proposals are, e.g., MacDonald, 1998, 322–328; Pasnau, 2002, 221–227.
30. Again this is the interpretation I have proposed in Perkams, 2013, 85–89.
31. Cf. Kent, 1995, 94–149.
32. A comprehensive overview of Henry’s action theory can be found in Teske, 2011.
33. On this doctrine cf. Möhle, 2002.
34. On Ockham’s ethics cf. e.g. McGrade, 1999.
Editions of Ancient and Medieval Sources
Albert the Great. (1968/72). Alberti Magni Super Ethica = Alberti Magni Opera Omnia, vol. 14, 1–2, ed. by W. Kübel. Monasterii Westfalorum (Münster in Westfalen): Aschendorff.
Anselm of Canterbury. (1994). Anselm von Canterbury, Freiheitsschriften, Latein-Deutsch, trans. by H. Verweyen. Freiburg im Breisgau: Herder.
Aristotle. (1995). Aristotelis Metaphysica, ed. by W. Jaeger. Oxford: Clarendon.
———. (2011). Nicomachean Ethics, ed. and trans. by R. Crisp. Cambridge: Cambridge University Press.
Augustine. (1969ff.). Aurelii Augustini Opera. Turnhout: Brepols.
Bernard of Clairvaux. (1963). Sancti Bernardi Opera, vol. 3, ed. by J. Leclercq. Romae (Rome): Editiones Cistercienses.
Epictetus. (1916). Epicteti Dissertationes, ed. by H. Schenkl. Lipsiae (Leipzig): Teubner.
Gregory of Rimini. (1981ff.). Gregorii Ariminensis Lectura super primum et secundum sententiarum, vols. 1–7, ed. by A. D. Trapp and V. Marcolino. Berlin/New York: De Gruyter.
Henry of Ghent. (1979ff.). Henrici de Gandavo Opera omnia, ed. by R. Macken et al. Leuven: University Press/Leiden: E. J. Brill.
John Duns Scotus. (1997ff.). Ioannis Duns Scoti Opera philosophica, vols. 1–5, ed. by R. Andrews et al. St. Bonaventure, NY: Franciscan Institute and St. Bonaventure University.
John Duns Scotus. (2005). Johannes Duns Scotus, Pariser Vorlesungen über Wissen und Kontingenz. Reportatio Parisiensis examinata I 38–44, ed. and trans. by J. R. Söder. Freiburg im Breisgau et al.: Herder.
Meister Eckhart. (1958ff.). Meister Eckhart, Die deutschen Werke, ed. by J. Quint. Stuttgart: W. Kohlhammer Verlag.
Origen. (1976). Origenes, Vier Bücher von den Prinzipien (De Principiis), ed. by H. Görgemanns and H. Karpp. Darmstadt: Wissenschaftliche Buchgesellschaft.
Peter Abelard. (1969). Commentaria in Epistolam Pauli ad Romanos = Petri Abaelardi Opera theologica 1, ed. by E. M. Buytaert. Turnhout: Brepols.
Peter Abelard. (2002). Peter Abelard’s Ethics, ed. with introd. and trans. by D. E. Luscombe. Oxford: Clarendon Press.
Plato. (2003). Platonis Respublica, ed. by S. R. Slings. Oxford: Clarendon.
———. (1995). Protagoras, trans. with notes by C. C. W. Taylor. Oxford: Clarendon Press.
Stoici. (1903ff.). Stoicorum veterum fragmenta, ed. by H. von Arnim. Lipsiae (Leipzig): Teubner.
Thomas Aquinas. (1888ff.). Sancti Thomae Aquinatis Doctoris Angelici Opera Omnia Iussu Leonis XIII. P.M. edita, cura et studio fratrum praedicatorum. Romae (Rome): Editio Leonina.
William of Ockham. (1967ff.). Guillelmi de Ockham opera philosophica et theologica, ed. by R. Wood et al. St. Bonaventure, NY: Franciscan Institute and St. Bonaventure University.
Modern Literature
Brickhouse, Th. and Smith, N. (2007). “Socrates on Akrasia, Knowledge, and the Power of Appearance,” in Ch. Bobonich and P. Destree (eds.), Akrasia in Greek Philosophy: From Socrates to Plotinus. Leiden, Boston, MA: Brill, 1–17.
Celano, A. (2012). “The Relation of Prudence and Synderesis to Happiness in the Medieval Commentaries on Aristotle’s Ethics,” in J. Miller (ed.), The Reception of Aristotle’s Ethics. Cambridge: Cambridge University Press, 125–154.
Deman, Th. (1936). “Probabilisme,” in Dictionnaire de théologie catholique, 13 (1). Paris: Librairie Letouzey et Ané, 417–619.
Frede, M. (2011). A Free Will: Origins of the Notion in Ancient Thought. Berkeley, Los Angeles, London: University of California Press.
Graham, D. W. (ed. and trans.) (2010). The Texts of Early Greek Philosophy: The Complete Fragments and Selected Testimonies of the Major Presocratics. Cambridge: Cambridge University Press.
Grotius, H. (1939). De iure belli ac pacis libri tres, ed. by B. J. A. de Kanter. Lugduni Batavorum (Leiden): E. J. Brill.
Hofmeister-Pich, R. (2010). “Προαίρεσις und Freiheit bei Epiktet: Ein Beitrag zur philosophischen Geschichte des Willensbegriffs,” in J. Müller and R. Hofmeister Pich (eds.), Wille und Handlung in der Philosophie der Kaiserzeit und Spätantike. Berlin, New York: De Gruyter, 95–127.
Jensen, S. J. (2007). “When Evil Actions Become Good,” Nova et Vetera (English Edition), 5, 747–764.
Kent, B. (1995). Virtues of the Will: The Transformation of Ethics in the Late Thirteenth Century. Washington, DC: The Catholic University of America Press.
Kent, B. (2013). “Losable Virtue: Aquinas on Character and Will,” in T. Hoffmann, J. Müller and M. Perkams (eds.), Aquinas and the Nicomachean Ethics. Cambridge: Cambridge University Press, 72–90.
Kern, U. (2013). “Eckhart’s Anthropology,” in J. M. Hackett (ed.), A Companion to Meister Eckhart. Leiden, Boston: Brill, 237–251.
MacDonald, S. (1998). “Aquinas’s Libertarian Account of Free Choice,” Revue internationale de philosophie, 52, 309–328.
Marenbon, J. (1997). The Philosophy of Peter Abelard. Cambridge: Cambridge University Press.
McGrade, A. S. (1999). “Natural Law and Moral Omnipotence,” in P. V. Spade (ed.), The Cambridge Companion to Ockham. Cambridge: Cambridge University Press, 273–301.
Möhle, H. (2002). “Scotus’s Theory of Natural Law,” in Th. Williams (ed.), The Cambridge Companion to Duns Scotus. Cambridge: Cambridge University Press, 312–331.
Müller, J. (2003). “Ergon und eudaimonia: Plädoyer für eine unifizierende Interpretation der ergon-Argumente in den aristotelischen Ethiken,” Zeitschrift für philosophische Forschung, 57, 514–542.
Müller, J. (2009). Willensschwäche in Antike und Mittelalter: Eine Problemgeschichte von Sokrates bis Johannes Duns Scotus. Leuven: Leuven University Press.
Pakaluk, M. (2005). Aristotle’s Nicomachean Ethics: An Introduction. Cambridge: Cambridge University Press.
Pasnau, R. (2002). Thomas Aquinas on Human Nature: A Philosophical Study of Summa Theologiae Ia. Cambridge: Cambridge University Press.
Perkams, M. (2001). Liebe als Zentralbegriff der Ethik nach Peter Abaelard. Münster: Aschendorff.
Perkams, M. (2011). “Die Entwicklung des Synderesis-Konzepts aus der Exegese von Röm 7 durch Anselm von Laon, Peter Abaelard und Robert von Melun,” in G. Mensching (ed.), Radix totius libertatis. Zum Verhältnis von Willen und Vernunft in der mittelalterlichen Philosophie. Würzburg: Königshausen und Neumann, 1–43.
Perkams, M. (2012). “Bernhard von Clairvaux, Robert von Melun und die Anfänge des mittelalterlichen Voluntarismus,” Vivarium, 50, 1–32.
Perkams, M. (2013). “Aquinas on Choice, Will, and Voluntary Action,” in T. Hoffmann, J. Müller and M. Perkams (eds.), Aquinas and the Nicomachean Ethics. Cambridge: Cambridge University Press, 72–90.
Perkams, M. (ed., trans., and intro.) (2014). Thomas von Aquin: Sententia Libri Ethicorum I et X. Kommentar zur Nikomachischen Ethik, Buch I und X. Freiburg, Basel, Wien: Herder.
Perkams, M. (2018). “Practical Reason and Normativity,” in J. P. Hause (ed.), Aquinas’s Summa Theologiae: A Critical Guide. Cambridge: Cambridge University Press, 150–169.
Pilsner, J. (2006). The Specification of Human Actions in St Thomas Aquinas. Oxford: Oxford University Press.
Reeve, Ch. (2013). Aristotle on Practical Wisdom: Nicomachean Ethics VI, trans. with an Introduction, Analysis, and Commentary. Cambridge, MA, London: Harvard University Press.
Rhonheimer, M. (1994). Praktische Vernunft und Vernünftigkeit der Praxis. Handlungstheorie bei Thomas von Aquin in ihrer Entstehung aus dem Problemkontext der aristotelischen Ethik. Berlin: Akademie Verlag.
Russell, D. C. (2014). “Phronesis and the Virtues (NE vi 12–13),” in R. Polansky (ed.), The Cambridge Companion to Aristotle’s Nicomachean Ethics. Cambridge: Cambridge University Press, 203–220.
Santas, G. (1969). “Aristotle on Practical Inference, the Explanation of Action, and Acrasia,” Phronesis, 14, 162–189.
Shields, Ch. (2007). “Unified Agency and Akrasia in Plato’s Republic,” in Ch. Bobonich and P. Destree (eds.), Akrasia in Greek Philosophy: From Socrates to Plotinus. Leiden, Boston: Brill, 61–86.
Teske, R. (2011). “Henry of Ghent on Freedom of the Human Will,” in G. A. Wilson (ed.), A Companion to Henry of Ghent. Leiden, Boston: Brill, 315–335.
Further Readings
Ch. Bobonich and P. Destree, eds., Akrasia in Greek Philosophy: From Socrates to Plotinus (Leiden, Boston: Brill, 2007) is a useful survey of different ancient approaches to the problem of weakness of will.
M. Frede, A Free Will: Origins of the Notion in Ancient Thought (Berkeley, Los Angeles, London: University of California Press, 2011) is a masterful overview of the development of ancient action theory.
T. Hoffmann, J. Müller and M. Perkams, eds., Aquinas and the Nicomachean Ethics (Cambridge: Cambridge University Press, 2013) contains 14 contributions discussing the relationship between Aquinas and Aristotle, several of them concerning moral epistemology.
B. Kent, Virtues of the Will: The Transformation of Ethics in the Late Thirteenth Century (Washington, DC: The Catholic University of America Press, 1995) is the best general account of voluntarism, which also touches on other medieval thinkers.
J. Müller, Willensschwäche in Antike und Mittelalter: Eine Problemgeschichte von Sokrates bis Johannes Duns Scotus (Leuven: Leuven University Press, 2009) is the best available book on the different treatments of weakness of will between Socrates and Duns Scotus.
Related Topics
Chapter 7 Moral Reasoning and Emotion; Chapter 11 Modern Moral Epistemology; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 27 Teaching Virtue.
11
MODERN MORAL EPISTEMOLOGY
Kenneth R. Westphal
1. Introduction
Philosophical taxonomies vary significantly; on the European continent, ‘modern’ philosophy is commonly regarded as beginning in the seventeenth century (ce) and continuing to the present—despite declarations that our age is ‘post-modern’ (or even ‘post-truth’). In Anglophone circles, ‘modern’ philosophy is commonly regarded as a seventeenth- and eighteenth-century affair, followed by a century of philosophical excesses, ‘philosophy’ being reborn early in the twentieth century. This chapter considers key issues and views in moral epistemology in the seventeenth through the nineteenth centuries. This period set much of the current agenda in moral epistemology, though it also innovated in ways that merit recovery.
Anglophone philosophers largely credit Descartes’s Meditations (1641) with inaugurating modern philosophy. Yet Jean Barbeyrac (in 1708) and Thomas Reid (in 1788) lauded Hugo de Groot, or Grotius (1625), for inaugurating modern moral philosophy, centrally because he argued against skeptics and relativists that natural law morality would obligate us independently of the Almighty, thus breaking with Thomistic syntheses of pagan philosophy and Christian theology. This Grotius did in his major work, De jure belli ac pacis (Of Justice in War and Peace, Prol. §11; cf. Schneewind, 1991).
For a host of cultural, intellectual, religious and political reasons, innovation in moral philosophy was urgently needed. Culturally, the expansion of international commerce inevitably provoked issues about cultural relativity. Intellectually, Sextus Empiricus’ great compilations of Pyrrhonian skepticism were translated into Latin, edited and published by Henri Estienne (or: Stephanus, also editor of Plato’s dialogues), first his Outlines of Pyrrhonism (Paris, 1562), then a scholarly edition of all extant works (1569); this edition appeared again in Geneva (1621), with a bilingual critical edition by Fabricius (Greek and Latin; Leipzig, 1718). Pyrrhonian skeptical concerns were propagated by Montaigne’s Essays (1580, vols. 1, 2; 1588, vol. 3), so influential that they established the very genre of the essay. Regarding religion and politics, what should have been evident from—if not before—those sanguineous events Christian militants called their anti-Muslim ‘Crusades’ (1096–1099, 1147–1149, 1189–1192, 1202–1261), European Christians brought home to themselves in their Thirty Years’ War (1618–1648). These violent lessons were augmented by the English civil wars (1642–1651), three Anglo-Dutch wars (1652–1654, 1665–1667, 1672–1674) and the English revolution (1688–1689). Born upon the arrival of the Spanish Armada (5 April 1588), Hobbes quipped in his autobiography: “fear and I were born twins together.”
In this period, moral philosophy too was in turmoil, with a profusion of attempts to identify and to justify fundamental moral laws or principles and to explain how we can recognize them and behave accordingly. One major shift in moral outlook during the seventeenth century is to regard human beings as equally and sufficiently morally competent, unless proven otherwise; thus no longer regarding morality as consisting primarily in obedience to authority, whether of custom, tradition, governors, clergy or the Almighty (Schneewind, 1998). Any cleft between causal laws of nature and normative laws of morality was not yet manifest; nature was regarded as divinely ordered, so that natural laws pertain to human governance in ways comparable to other natural regularities. Human cooperation was recognized to be necessary both to the common good and to individuals’ good, though lack of foresight, understanding or due consideration for others, i.e., excessive self-interest or ignorance, poses huge obstacles to achieving sufficient, proper forms of social cooperation. As Newtonian physics became established, major issues appeared to loom about whether or how human agency can be free, responsible and thus morally imputable. Hence moral philosophy addressed a broad agenda of issues regarding agency, freedom, motivation, perception, sensibility, understanding, authority and legitimacy.
From the Ancient Greeks up to the very end of the nineteenth century, including Sidgwick (1883, 1891, 1898, 1903), moral philosophy was regarded as the proper genus comprising two coordinate species: ethics and justice. Considerations of constitutional and civil law were central to moral philosophy throughout this period. This host of practical issues bears upon moral epistemology, and conversely. This concise review of modern moral epistemology focuses on several main issues and alternatives regarding the identification and justification of basic moral norms or principles; for more comprehensive treatments, see Further Reading.
2. Natural Law and Moral Teleology
One recurrent epistemological as well as moral issue in ‘teleological’ moral theories concerns the relation of orthos logos—recta ratio, right reason: reason(s) that indicate(s) what is morally right to do, and why so—to the final end(s) of human action: whether an action (or a reason for action) is right because it contributes to the final end(s) of human action, or whether an action contributes to the final end(s) of human action because that action accords with right reason (or is done because it so accords). Aristotle, the Stoics and Leibniz held the latter view, though many teleological natural law theories held the former (e.g., Wolff, 1769; Baumgarten, 1763; cf. Ahrens, 1850, 1870; Muirhead, 1897, 1932).1 Both options raise acute issues about whether or how we can correctly identify right reason(s) and justify our so identifying them, in contrast to merely apparent, morally negligent or vicious reasons.
Though modern moral philosophers and natural lawyers were theists, including Hobbes though excepting Hume, the Thirty Years’ War made plain that moral principles must be identified and justified independently of sectarian theology. Various authors attempted to revamp Classical notions of jus gentium—natural moral law holding for all humanity; an incipiently cosmopolitan aspiration. Natural law theories derive from Ancient Greek and Classical Roman sources, e.g., Plato (esp. Laws 903) and Cicero (de Leg. 42–44); Julia Annas puts the points relevant here succinctly:

We recognize natural law . . . by reflecting on human reason recognizing its role in the cosmos. We come to realize that law has an objective basis in nature, not just in the force of existing human laws. Having a share in natural law unites all rational beings in a community in which they are related to one another by natural justice. So justice, a proper attitude to ourselves and to others in relation to ourselves, has a natural basis. And when we articulate what is involved in having this proper attitude to ourselves and to others, we can see that this is the basis of all the virtues. . . . nature . . . has given us all shared conceptions (intelligentiae communes) which are latent and unarticulated, but which everyone can develop until we achieve clear and distinct knowledge—assuming . . . that we are not corrupted by pleasure, or misled by specious divergences of opinion.
(Annas, 2013, 214)

Many stout volumes of modern natural law and natural theology (e.g., Wolff, 1769; Paley, 1802) are devoted to detailing in extenso the divine order of nature, and of human life and society within it, in grandiloquent, comprehensive synopsis, apparently so as to induce within the reader wonder, admiration and inspiration at this order, re-presented by the wise author of the treatise in hand, who comprehensively details divine providence within nature and society. However majestic such synopses may be, their epistemological problems are evident, well before the same literary technique is used to very different ends by the conservative traditionalist, Burke, in his Vindication of Natural Society (1757) and his highly imaginative Reflections on the Revolution in France (1790). Natural law theories span the range from radical reformist, even revolutionary, to conservative, depending upon the author’s view of the most ‘natural,’ hence proper, order of things: individual liberty or conservative stability (Stanlis, 1955, 1958; Neumann, 1957; Haakonssen, 2002). With characteristic relish Nietzsche skewered the key problem with traditional natural law, exemplified by Stoic views:

“According to nature” you want to live? . . . In truth, the matter is altogether different: while you pretend rapturously to read the canon of your law in nature, you want . . . to impose your morality, your ideal, . . . you demand that [nature] should be nature “according to the Stoa,” and you would like all existence to exist only after your own image—as an immense eternal glorification and generalization of Stoicism.
(Nietzsche, 1886, §9)

One epistemological problem is that (putative) normative natural laws cannot be specified unambiguously, much less sufficiently, by appeal to empirical evidence open to public scrutiny. Though long-term self-interest may largely coincide with the requirements of justice, when issues of welfare and advantage arise, so do controversies about which ‘facts’ are relevant, how they are relevant and what they justify. Addressing these questions provokes issues about relevant criteria of justification and whether or how those criteria can be identified and justified.
3. Identifying and Justifying Basic Norms: Problems and Prospects
Though Grotius is one of the few to mention Sextus Empiricus (1625, 1.12, 5.7)—and at the outset, Carneades (1625, Prol. §§5, 17, 18; 12.9.1)—thanks to Montaigne’s Essays if not to Sextus Empiricus’s works, modern moral philosophers sought to address, if implicitly, the Pyrrhonian dilemma of the criterion:

in order to decide the dispute which has arisen about the criterion [of truth], we must possess an accepted criterion by which we shall be able to judge the dispute; and in order to possess an accepted criterion, the dispute about the criterion must first be decided. And when the argument thus reduces itself to a form of circular reasoning the discovery of the criterion becomes impracticable, since we do not allow [those who claim to know] to adopt a criterion by assumption, while if they offer to judge the criterion by a criterion we force them to a regress ad infinitum. And furthermore, since demonstration requires a demonstrated criterion, while the criterion requires an approved demonstration, they are forced into circular reasoning.
(PH 2.20, cf. 1.116–117)

Stated regarding criteria of truth, this dilemma holds equally of criteria of justification; the Pyrrhonian dilemma is more severe than Chisholm’s “Problem of the Criterion” and Williams’ “Agrippan Trilemma,” which are often mistaken today for the original (Westphal, 2018, §§60, 84). This dilemma directly undermines two standard approaches to justification: foundationalism and coherentism. Foundationalist justification starts with first principles or basic facts and seeks to justify other important claims on their basis by deduction or other derivation (‘basing’) relations. Coherentism rejects the foundationalist distinction between basic and derived claims or principles and seeks to justify principles or specific claims through their maximally comprehensive and informative integration within a coherent system of claims, a whole view—however extensive or specific it may be (see further Chapter 19).
One key problem foundationalists must address is why their preferred first principles or basic claims are plausible, true or justified. Citing ‘values’ as foundations for identifying and justifying a moral theory cannot assist our determining which ‘values’ are the appropriate such foundations, just as Sextus’s dilemma indicates. Proponents of natural law, intuitionism, strong forms of moral particularism and commonsense moral theories often claim that basic foundational moral claims are self-evident. Prichard (1912) claims that basic moral judgments, when properly conceived and considered, are as self-evident as elementary geometrical proofs. That itself is mistaken: geometry provides proofs by reductio ad absurdum and disjunctive syllogism; no such proof or reductio is afforded by basic moral intuitions (Westphal, 2016a, §41). Others have claimed that basic moral truths are manifestly evident to human reason, as in jus gentium; e.g., Locke proclaimed that the nongovernmental state of nature

has a law of nature to govern it, which obliges everyone: and reason, which is that law, teaches all mankind, who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions.
(Locke, ST §6)
Locke equally proclaimed a natural right to punish transgressions of the law of nature (ST §§7, 12), and correctly distinguished punishment from revenge (ST §8). Yet no party to a dispute can be entrusted to judge or to act impartially (ST §§124–128); unjust use of force against anyone’s person amounts to war, according to Locke (ST §19). Hence within the nongovernmental “state of nature” we cannot be entitled to prosecute putative violations of the law of nature; Locke’s own claims about the law of nature are insufficiently evident, within his own terms (Westphal, 2016a, §45).
Thomas Reid likewise held that “There must therefore be in morals, as in all other sciences, first or self-evident principles, on which all moral reasoning is grounded, and on which it ultimately rests” (1788, 3.6), yet he acknowledged that genuine basic principles must be distinguished from counterfeit (1788, 5.1), which requires sufficient education, “ripeness of understanding” and lack of prejudice (1785, 6.4, 6.8). Reid’s appeals to self-evidence and to commonsense morality invoke reliabilist and evidentiary (testimonial) considerations underwritten, he believed, by the Almighty:

In a matter of common sense, every man is no less a competent judge than a mathematician is in a mathematical demonstration; and there must be a great presumption that the judgment of mankind, in such a matter, is the natural issue of those faculties which God hath given them. . . . Who can doubt whether men have universally believed, that there is a right and a wrong in human conduct; some things that merit blame, and others that are entitled to approbation? The universality of these opinions, and of many such that might be named, is sufficiently evident, from the whole tenor of human conduct, as far as our acquaintance reaches, and from the history of all ages and nations of which we have any records.
(Reid, 1785, 6.4)

Appealing to the uniformity of human nature and sentiments rather than to divine providence, Hume (1758) similarly claimed the universal approbation of, e.g., friendship. One problem with such appeals is that much of the claimed uniformity is more apparent than real; e.g., friendships amongst the morally vicious (bigots, racists, sexists) strongly tend to reinforce those friends’ shared vices. The presumed ‘universal’ rights of man and citizen declared by the French Republic (1789) conspicuously omitted women, whose equal rights also as citizens were proclaimed by Olympe de Gouges (Paris, 1791). With equal clarity and conviction Jeremy Bentham (1795) proclaimed: “Natural Rights is simple nonsense: natural and imprescriptible rights, rhetorical nonsense—nonsense upon stilts” (WJB, 2:501A). Naturally enough, to Sextus Empiricus it appeared that any such “bare assertion counterbalances a bare assertion” (AL 1.315; cf. 2.464).
‘Self-evidence’ suffers the same epistemological difficulties noted earlier regarding traditional and modern natural law theories. In regard to any cognitively or morally significant claim, no account of self-evidence succeeds in distinguishing adequately or reliably, in principle or in practice, between: (a) someone’s being utterly convinced that s/he has grasped a significant truth, and on that sole basis believing that claim to be true; in contrast to (b) grasping a significant truth, and on that sole basis being convinced that one has indeed grasped that significant truth. Self-evidence reiterates manifold Cartesian circularities in each alleged instance.2
Even when strong, durable consensus may prevail, commonsense morality may be insufficiently enlightened or virtuous: J. S. Mill (1850, 1869) argued vigorously against the altogether predominant commonsense morality that condoned or supported sexism, racism and slavery. Commonsense consensus is thus insufficient to identify or to justify basic moral principles. For similar reasons, neither coherence nor reflective equilibrium suffices (Griffin, 1996, 124–125; Daniels, 1996, 333–352): Vicious people or (e.g.) morally negligent contractarians or intuitionists can hold entirely consistent, coherent, integrated moral views that are nevertheless morally inadequate, negligent, irresponsible or willfully vicious.
Utilitarianism offers a different strategy for identifying and justifying fundamental moral principles. Jeremy Bentham’s (1786–1830) extensive writings in jurisprudence won him acclaim as “legislator to the world” and as “the Newton of legislation”3; his unqualified hedonist account of value directly answers the vital question: Who or what falls within the moral domain, directly or indirectly, and who or what does not? In this regard J. S. Mill followed suit, including within the scope of moral concern not only “all mankind” but “so far as the nature of things admits, . . . the whole [of] sentient creation” (1861, CW 10:214). Though Bentham (1823, 4, 5) sketched a procedure for calculating and comparing the expected utilities of various acts—whether by individuals, groups or governments—neither hedons (units of pleasure) nor dolors (units of pain) have yielded to quantification. Acknowledging that actual calculations cannot be made, Bentham (1789, 4.6) suggested that his procedure “may . . . always be kept in view” and approximated. Waiving issues of quantification, three difficulties stand out (they and others are discussed elsewhere in this handbook): First, Bentham’s procedure may select between total quantities of net pleasure, yet for that reason his procedure says nothing about whether alternative net distributions of pleasure(s) across a group, nation, region or the globe are more or less utile, moral or just than others. Second, Bentham’s calculations appear largely to accord with commonsense notions of just distribution only if sentient creatures all experience diminishing marginal utility, so that obtaining more of any one good provides ever smaller increments of utility. The presence of some few who instead experience increasing marginal utility (‘utility monsters’) throws off the putative utilitarian calculations; discounting or disregarding such utility monsters requires appeal to nonutilitarian reasons or standards. Third, to many it appears entirely evident that, if someone enjoys committing unjust or morally vicious acts, that happiness neither does nor can count at all in favor of so acting. Understandably, J. S. Mill rescinded hedonism (1861, 2.4; CW 10:211) but unleashed a flood of questions about whether or how ‘utility’ can be used to specify which acts are obligatory, permissible or forbidden. Critics often suspect that utilitarians’ claims about which acts are most advisable because they are (putatively) most utile instead reflect prior convictions that those acts are said to be most utile because they are advisable; similar concerns arise in the putative weighing of the net moral worth of various consequences, now that usefulness rather than happiness or pleasure has assumed center stage in contemporary consequentialist moral theories.
This suspicion permutes the question Socrates put to Euthyphro, whether the pious is pious because the gods so love it or whether the gods so love it because it is pious (Euthyphro, 10d). Another such permutation runs through Hume’s views on moral sentiments: whether such sentiments merely respond to (and so indicate) a vicious or virtuous act or character, or whether an act or character is virtuous or vicious because we so respond to it with moral sentiments. Hume’s nominalism and explanatory aims drive his ethical theory toward the latter, more radical view; he takes great care to try to explain how any distinctively moral valence comes to characterize the relevant pleasures or displeasures.4 Reid (1788, 5.5, 5.7) advocates the former, less radical view, that our moral sentiments respond to moral characteristics of acts or persons, which characteristics or status our sentiments do not constitute. Settling this disagreement by philosophical means does not appear promising, unless other standards, independent of sentiments, are established to specify which moral sentiments are (sufficiently) appropriate to which acts or observations; which feelings are apt is not specified reliably by mere feelings of aptness.
Social contract theories offer still different strategies for identifying and justifying basic moral norms. Socrates (as reported in Plato’s Crito) sketched an implicit yet fundamental agreement by which he was obligated to obey the laws of Athens. Though it was well understood that few if any actual societies were founded by contract, social contract analyses of political obligation gained prominence in the modern period, especially in the writings of Hobbes, Locke and Rousseau. Such analyses raise another version of the Euthyphro question: whether a particular social contract analysis illustrates, illuminates or ratifies the (agreement-independent) grounds and principles of moral or political obligation, or instead whether that analysis or persons’ agreement to its terms constitutes those grounds or their normative validity. The latter, much more (analytically) radical view has been explored by contemporary moral philosophers (esp. Gauthier, 1997). Advocates of such radical views often claim that their contractual agreements bootstrap the very grounds and principles of moral (or political) obligation into existence, or at least into validity. Exactly how such bootstrapping is possible is often neglected. A credible account is required, because de facto agreement does not suffice for de jure agreement (and so for normative justification of the terms of that agreement) unless several agreement-independent conditions are met, including all the conditions of fair bargaining, sufficient information and sufficient independent resources, so that one concludes such an agreement as a free agent with sufficient resources and understanding to be free not to conclude that agreement. Because we homo sapiens are so profoundly and manifoldly limited and interdependent creatures, that kind of free contractual agency is illusory (Westphal, 2016a, §§29–34). A further critical question about any social contract analysis of moral, social or political obligation is whether the counterfactual contractual situation is so designed as to achieve the theorist’s antecedently desired results: What issues are kept off the contractualist agenda can be as morally important as any issues expressly stated in it (cf. Pateman, 1988); note too that Adam Smith did not hold the laissez faire views ascribed to him by neo-conservatives (Lieberman, 2006; Long, 2006; Rothschild & Sen, 2006).
That we are such manifoldly interdependent beings was widely acknowledged in the modern period. It was common to seek to identify and justify the grounds and principles of moral obligation by appeal to psychological or anthropological premises.
Here Hume’s cardinal distinction between facts and norms (‘is’ and ‘ought’) is crucial, for in connection with our psychology or anthropology Hume’s distinction highlights issues of ‘psychologism’—an approach championed by Fries (1824), Beneke (1833) and Lipps (1893), prominent at the turn of the twentieth century across Europe and the Americas, though rejected by Kant (KdrV A86–87/B118–119) and sharply criticized by Frege (1884) and Husserl (1900): That we do think, feel, respond, judge, act or affiliate in various psychologically or anthropologically characteristic ways does not suffice to show that we should, nor that we are morally obligated, so to behave—or to behave in the ways issuing from such processes. Hume (T 3.1.1.27) is right that empirical facts alone—whether social, historical, geographical, anthropological or psychological—do not suffice to identify or to justify normative principles or conclusions about what anyone morally may or ought (not) do.
The Euthyphro question and the Pyrrhonian dilemma of the criterion thus raise fundamental challenges to the prospects and strategies for justifying moral views in both ethics and justice, typical of modern moral philosophy and its contemporary descendants. The Euthyphro question underscores the desideratum of robust moral objectivity; the dilemma underscores the host of problems confronting our identifying or justifying sufficiently robust, omni-lateral, objective moral principles. Though sound moral theory won’t persuade the obtuse or the vicious to change their ways, sound moral theory is required to identify who genuinely is obtuse or vicious and what forms of prevention or redress are justifiable.
Two further responses to these epistemological difficulties deserve note. One is to insist that morals is much more a matter of habituation, enculturation and sensibility than knowledge, so that the best we can do is also what we ought to do: to acquiesce in our own cultural tradition or society and behave accordingly, revising our social practices only piecemeal whenever such revisions can no longer be avoided. This kind of view may be called ‘reform conservatism’ (Epstein, 1966, 13); this pre-modern view was advocated prominently, zealously and very imaginatively by Burke (1790).
Alternatively, following Rawls and also Hume’s sentiment-based ethics, many recent moral theories seek to ‘construct’ the basic principles of morals by identifying and analyzing the significance of some preferred range of subjective factors, such as basic moral intuitions, various ‘response-dependent’ concepts, (putatively) apt feelings, manifest preferences or ‘validity claims’ (Habermas’ Geltungsansprüche). These are considered elsewhere in this handbook; two points deserve note here. First, such irrealist or anti-realist moral constructions join Hume’s ethical theory in requiring pervasive uniformity in the human species of whatever factors are preferred by a specific moral construction. Such uniformity is historically and geographically fictitious. Moreover, this basis for moral theory tends to fail when we most need a credible moral theory: whenever issues of moral difference, disagreement or conflict arise (e.g., moral relativism, cultural chauvinism). Second, such constructivist theories strongly tend to regard moral justification as the task of justifying an action, claim or principle to some person(s) based upon his, her or their antecedent considerations, where these considerations are provided by or based upon the theory’s preferred domain of basic factors; i.e., such theories presume justificatory internalism, the view that all relevant justificatory considerations must be such that a person is aware of them or can become aware of them by simple reflection. Consequently, such theories provide little or no justification to those persons who happen genuinely to lack those purportedly relevant antecedent basic factors, or to those who reject or disavow them, or (finally) to ascetic adepts who have so conditioned themselves to lack or to be unmoved by them.
These are precisely Kant’s grounds for seeking to analyze moral obligation and its justificatory grounds independently of human motives, affects or desires (Gr. §1, KprV 5:71–89). Kant’s aim to decouple issues of obligation from contents of our putative psychological states is corroborated by how Hume’s contrast between ‘is’ and ‘ought’ bears upon one of his key questions, “Why utility pleases” (EPM §5): Hume disregards the question whether utility ought to please, and ought to please morally.
This review of issues central to modern moral epistemology indicates how the current agenda of debates between contractarianism, Kantianism, consequentialism, intuitionism and constructivism was set in the past and why it has remained vexed. Moral epistemology must solve or dissolve the Pyrrhonian dilemma of the criterion, cogently answer the Euthyphro question, avoid appeal to controversial substantive commitments—including the sorts of subjective factors taken to be basic by most contemporary moral constructivisms—and preserve individual moral liberty. An unexpected theoretical alliance reveals that modern moral epistemology achieved all this.
4. Natural Law Constructivism
The social contract tradition—including Hobbes’s—is a branch of the natural law tradition. Hume’s key insight is that the core content of a natural law theory can be identified independently of issues about moral realism (i.e., mind-independent moral facts to which true moral judgments correspond). Hume recognized that even if the most fundamental rules of justice are artificial, it does not follow that they are optional or arbitrary: The basic artifices of justice are altogether necessary to our very finite species of embodied, interdependent agency (T 3.2.1.19). Why this is so is revealed by the two most important points Hobbes established in his analysis of the nongovernmental state of nature:
1. Unlimited individual freedom of action is impossible due to consequent total mutual interference. Hence the fundamental moral question is not whether individual freedom of action may or must be limited but rather: What are the proper, justifiable limits of individual freedom of action?
2. Complete though innocent, non-malicious ignorance of what belongs to whom suffices to generate the total mutual interference characterized in the nongovernmental state of nature as the war of all on all. Consequently, justice must fundamentally be public justice, to remedy such ignorance and thus to substitute social coordination for chronic mutual interference.
The key to providing objectivity within a constructivist moral theory is not to appeal to subjective states of the kinds prominent in contemporary forms of moral constructivism (mentioned earlier) but instead to appeal to objective facts about our human form of finite, embodied rational agency and to circumstances of action basic to the human condition. Hume’s theory of justice focuses upon physiological and geographical facts about the vital needs of human beings, our limited capacities to act, the relative scarcity of materials required for us to meet our vital needs and our ineluctable mutual interdependence. The principles Hume constructs (i.e., identifies) on their basis merit the designation “Laws of Nature” because for human beings they are utterly indispensable and so are non-optional. Hume’s strategy breaks the deadlock in moral theory between moral realists and non- or anti-realists by showing that their debate about ontology is irrelevant to identifying basic, objective, universal moral principles.
Hume’s most basic social coordination problems stem directly from Hobbes: Under conditions of relative scarcity of external goods, the easy transfer of goods from one person to another, the limited benevolence typical of human nature, our natural ignorance of who rightly possesses what and our mutual interdependence due to human frailties, we require a system of rightful possession to stabilize the distribution and use of goods and thereby to avoid chronic, fatally incapacitating mutual interference.5 The minimum effective and feasible solution to this social coordination problem is to establish, in principle and in practice, this principle: Respect rights to possessions! This is Hume’s first Principle of Justice. Hume’s three principles of justice are “that of the stability of possession, of its transference by consent, and of the performance of promises” (T 3.2.6.1, cf. 3.2.11.2). His construction of these basic rules of justice shows that they count for us human beings as “laws of nature” because without them human social life—and thus all of human life—is impossible. Remarkably, Hume constructs his entire account of justice and argues for its fundamental utility without appeal to sentiments, moral or otherwise. In particular, Hume’s exemplary case by which he argues for rule rather than act utilitarianism—that justice rightly requires returning a lost fortune even to a miser or to a seditious bigot (T 3.2.2.22)—neither involves nor requires appeal to anyone’s sentiments or to any agent’s character. I call the approach Hume thus inaugurated “natural law constructivism.”
Hume’s basic rules of justice omit personal safety, security and collectively permissible distributions of social benefits and burdens. Rousseau (1762) addressed these issues by adopting, adapting and augmenting Hume’s constructivist method (Westphal, 2013). Rousseau’s conditio sine qua non for just collective distributions of wealth is that no one is permitted to have any kind or extent of wealth, power or privilege that enables him or her to command unilaterally the actions of anyone else (Neuhouser, 2000, 64–78). That kind of dependence upon the personal will of others Rousseau (CS 1.6.1, 1.8.2) rules out as an unjust infringement of everyone’s ‘original’ right to be free to act solely upon his or her own will. More clearly than Hume, Rousseau emphasized that principles of justice and the institutions and practices they inform are mandatory for us in conditions of population density that generate mutual interference. Rousseau’s insistence that social institutions be such that no one can unilaterally command the will (and so the action) of another is required for moral freedom, which requires obeying only self-legislated laws. This may be called Rousseau’s “independence requirement.”
Rousseau’s proclamation of and plea for moral autonomy may be bracing, yet analyzing and justifying moral autonomy as the correct account of human freedom and as the conditio sine qua non of moral justification is Kant’s key Critical contribution. Kant’s universalization tests (using the universal law formula of the ‘categorical imperative’) determine whether performing a proposed act would treat any other person only as a means and not at the same time also as a free rational agent. The key point of Kant’s method for identifying and justifying moral duties and proscriptions is to show that sufficient justifying grounds to commit a proscribed act cannot be provided to all affected parties. Conversely, sufficient justifying grounds for omitting positive moral obligations cannot be provided to all affected parties.
By contrast, morally legitimate actions are such that sufficient justifying reasons for so acting can be given to all affected parties, also on the occasion of one’s own act (O’Neill, 2000a, 200; Westphal, 2016a, §§26–28). When sufficient justifying reasons for acting on a principle can be given to all those who will be affected by this action, these parties are able to follow consistently the very same principle (for the very same reasons) in thought or action on the same occasion as one proposes to act on the identified maxim. This is a modal issue of capacity and ability, not a psychological claim about what someone can(not) bring him- or herself to believe or to do, nor an issue of de facto agreement or acceptance. This possibility of adopting a maxim expressing a principle of, and grounds for, action thus differs fundamentally from ‘accepting’ one, in the senses of ‘believe,’ ‘endorse’ or ‘agree to.’ Kant’s tests rule out any maxim that cannot possibly be adopted by others on the same occasion on which one proposes to act on that maxim. The universality involved in Kant’s tests includes the agent’s own action and extends (counterfactually) to all agents acting the same way at that time and over time. What we can or cannot adopt as a maxim is constrained by the form of behavior or its guiding principle (maxim), by basic facts about our finite form of embodied rational agency, by basic features of our worldly context of action and most centrally by whether the maxim of the proposed action cannot be adopted (in the indicated sense) by others because that action either evades, deceives, overrides or overpowers their rational agency.
Kant’s ‘contradiction in conception’ test directly rules out maxims and acts of coercion, deception, fraud and exploitation. In principle, such maxims preclude offering to relevant others—most obviously to victims—reasons sufficient to justify their following those maxims, their (putatively) justifying reasons or the courses of action they guide in thought or action, especially as the agent acts on his or her maxim (O’Neill, 1989, 81–125). This is signaled by the lack of the very possibility of consent, which serves as a criterion of illegitimacy. Obviating the very possibility of consent on anyone’s part obviates the very possibility of offering sufficient justifying reasons for one’s action to all affected parties. Any act which obviates others’ possibility of acting upon sufficient justifying reasons cannot itself be justified and so is morally proscribed (cf. Westphal, 2018, §§2, 3).
Because any maxim’s (or any course of action’s) passing his universalization tests requires that sufficient justifying reasons for that maxim or action can be given to all affected parties for acting on that maxim on that very occasion, such that they too can think, judge and act upon those same grounds, evidence and principles on that and on any relevant such occasion, Kant’s universalization tests embody at their core equal respect for all persons as free rational agents, as agents who can determine what to think and to do by rationally assessing the reasons which justify that principle or act (as obligatory, permissible or prohibited). Thus Kant’s universalization tests require no appeal to any independent premise regarding the incommensurable value or ‘dignity’ of rational nature or the moral law. Ruling out maxims that fail to pass his universalization tests establishes the minimum necessary conditions for resolving fundamental problems of conflict and social coordination that generated the central concern of modern natural law theories with establishing normative standards to govern public life, despite deep disagreements amongst various groups about the substance of a good or pious life. These principles hold both domestically and internationally; they also concern ethnic and other intergroup relations. These principles are neutral regarding theology or secularism; their point is to establish minimum sufficient conditions for just and peaceful relations amongst groups or peoples who may disagree about such often contentious, divisive issues (O’Neill, 2000b, 2003, 2004).
Kant’s constructivism identifies and justifies key norms to which we are committed, whether or not we recognize this, by our rational requirements to act in justified ways and by the limits of our very finite and interdependent form of human agency and our worldly context of action. According to Kant, there is no public use of reason without this critical, constructivist principle of justification, which uniquely avoids presupposing any particular authority, whether ideological, religious, socio-historical or personal.
Ulpian, the third-century (ce) Syrian natural lawyer, has been celebrated as a pioneer in human rights (Honoré, 2002). Kant agrees: his sole innate right to freedom (MdS 6:237–238) includes constitutively innate equality, self-mastery and the presumption of innocence. These rights obligate us to Kant’s versions of Ulpian’s three core duties: to live honestly, to treat no one unjustly and to render unto each what is properly his or her own—which latter Kant reconceives as the duty to participate in a legitimate jurisdiction that secures each person’s proper due (MdS 6:236–237). Although Kant’s text may appear simply to assert these rights and duties, they are justified by his Critique of reason (in all three Critiques) as necessary to rational judgment and justification in all nonformal domains. (Exactly how and how well Kant justifies these views about rational justification cannot be detailed here; instead, see Westphal, 2016a, 2018.)
Some important aspects of Kant’s justification of his account of rational justification can be indicated, as they were further explicated and augmented by Hegel, who used them to further develop the same natural law constructivism in moral philosophy.6 If we focus solely upon propositions and their inferential relations, the Pyrrhonian dilemma of the criterion is insoluble. However, rational judgment not only assesses relevant evidence and its purported implications (conclusions, judgments); rational judgment requires and involves assessing one’s own judging, to determine whether the considerations one presently brings together into a candidate judgment are integrated as they ought best to be integrated to form a cogent, informed, informative and well-justified judgment about the matter under consideration (KdrV B219, A261–263/B317–319). Hegel (1807) realized that the Pyrrhonian dilemma of the criterion can be solved by carefully explicating the possibilities of self-critical assessment and of constructive mutual assessment; this is the ultimate significance of his account of mutual recognition (Westphal, 2018, §§60–64, 83–91). Hegel’s explication of these critical and self-critical capacities is crucial to identifying and developing virtuous forms of justificatory circularity and distinguishing these from vicious circularities.7
Methodologically and substantively, natural law constructivism is neutral with regard to moral realism, because it is strictly independent of it; yet by this very fact it is consistent with the content of a traditional natural law theory. Physiological, psychological and geographical facts regarding our capacities as agents and our context of action are required to identify and to justify basic moral principles; these are morally relevant facts—they are not themselves moral facts. Accordingly, natural law constructivism shows that the distinction between moral realism and moral constructivism, typically regarded as exclusive and exhaustive, is a false dichotomy.
Because natural law constructivism identifies and justifies core moral principles without appeal to subjective states (per above), it provides an important form of justificatory externalism: the grounds for identifying and justifying these core moral principles are not limited to factors of which individuals may (not) be aware. The physical, physiological, anthropological and sociological facts relevant to natural law constructivism set the benchmark for assessing what individuals do (not) adequately acknowledge morally; what individuals may be willing to acknowledge does not set the benchmark for assessing moral relevance. Kant argued that rational principles—including moral principles—guide judgment; they do not univocally determine (fully specify) judgments (KdrV A132–135/B171–174, KprV 5:67–71). Kant also held that using the a priori principle of the categorical imperative (or its juridical counterpart, the universal law of justice) to identify our duties also requires a
specifically ‘practical anthropology,’ a catalogue of basic anthropological facts about our finite form of embodied semi-rational agency (Gr 4:412, TL §45; cf. Anth. 7:330–333). Taken together, these provide conditiones sine quibus non to distinguish morally permissible, obligatory and prohibited forms of action. In this way, Kant’s universalization tests specify which logoi can possibly be orthoi—which reasons can possibly be right, or morally upright, because they accord with and respect the independence requirement (Westphal, 2016b). Only those actions, principles, reasons or practices can be (up)right for which we can address sufficient justifying reasons and analysis to all persons, such that they can consider, assess and judge those reasons and analyses to be sufficiently justifying, so that they too can guide their thought and action on that basis, on all relevant occasions, including the occasion on which one proposes so to act. This is Kant’s publicity requirement, that no one’s rational capacities be either evaded (e.g., by deception) or over-powered; these partially constitute the sole innate right to freedom. To act only in those ways which can be so justified is to treat everyone—including oneself—as a rational agent who is a moral person. So thinking and doing honors the independence requirement embedded in Kant’s sole innate right to freedom. Kant’s ‘doctrine of virtue,’ including his account of moral education, contributes to how we can learn, know and understand why and how acting and conducting ourselves with moral integrity is constitutive of our moral freedom, our autonomy, which includes our rational freedom from subjection to our own affects (cf. Rousseau, CS 1.8.3; pace Hume, T 2.3.3.4).8

To develop Kant’s account of moral autonomy and his basic deontic classifications into an action-guiding doctrine of duties—not merely a set of a priori principles of morals, but a code of moral conduct by which to act and to live—requires, Hegel realized, re-incorporating Kant’s critical principles of morals into the moral sciences of political economy, jurisprudence and constitutional law. To do so, Hegel brilliantly drew upon Montesquieu’s Spirit of the Laws (1749) to show how legitimate law is justified only by how and how well it functions within a republic’s political, civil and economic institutions and activities, so as to facilitate, promote and protect free, voluntary, effective and responsible action by all.9 This Hegel does in his Philosophical Outlines of Justice (1821), which is the most robustly republican moral philosophy on record, in which Rousseau’s and Kant’s independence requirement is institutionalized and constitutionally protected as the civil right due each and every adult citizen to non-domination (Westphal, 2017, 2016–17). Central to Montesquieu’s and to Hegel’s republican philosophies of law is that freedom does not consist merely in the silence of the law (to do whatever is not illegal). Instead, most law is literally a vast artifice constituting enabling conditions without which an enormous range of daily activities, including nearly all forms of economic activity, would not be possible (Jhering, 1897).
Additionally, because law is so fundamental to the structure and functioning of the society into which people are born and within which they grow, mature, develop their skills, talents, work, leisure and most often their own families, each person develops his or her own agency, personality, understanding, character, aspirations and achievements in ways made possible by the legal, economic and social structures and practices of his or her society. Hence the cardinal importance of legitimate law, both constitutional and statute, and the assessment and (when justified) revision, repeal or replacement of statute law as people develop their social, political or economic activities in ways that further enhance their legitimate exercises of freedom. Hegel’s ardently republican system of
political representation is designed to enable and to promote such political and legal culture, in part by ensuring political representation to all civil, economic and confessional sectors of the republic. There has been a revival of interest in republicanism, in part due to dissatisfactions with how contemporary forms of liberalism appear to condone, if not promote, democracy of the few. Such distinctions are genuine, not merely terminological. Classical Roman republics were ruled only by free male citizens. Against the historical school of jurisprudence, Hegel argued that Roman law lacked an adequate account of, indeed an adequate basis for, justice, precisely because it countenanced slavery (Rph §§2R, 3R; Enz. §433Z), whereas Kant identified the sole and sufficient basis of justice, which is our rational human freedom (Rph §§4, 133, 135R; Enz. §502R). As for liberalism, whilst his chapter on justice (Utilitarianism, ch. 5) omits this requirement, J. S. Mill required that no one be subjected to the will of another (1861, CW 10:216). Despite his tolerance, if not advocacy, of some forms of benevolent paternalism (CW 21:118–119; 1859b) and his grave misjudgment of the opium wars (OL, ch. 5; CW 18:293), elsewhere Mill argues in detail that such subjugation is unjust (1850, 1869).

This review of modern moral epistemology suggests that much contemporary moral epistemology adheres to an early modern taxonomy of philosophical options (empiricism, rationalism, intuitionism, sentimentalism, conventionalism, constructivism or skepticism), to the neglect of a much more cogent and fruitful form of natural law constructivism developed by the apparently unlikely alliance of Hume, Rousseau, Kant and Hegel. When their texts and views are reconsidered with regard to the proper genus of moral philosophy, embracing both justice and ethics, considered in connection with the natural law tradition, jurisprudence and political economy—to which Hume’s writings on economics gave rise—their methodological and substantive alliance is far less surprising and highlights problems generated by faulty philosophical taxonomies.10 As for the much debated links between justifying reasons and motivation to behave ourselves, these are formed by proper education, both formal and informal; to moral philosophy also belongs philosophy of education, as all moral philosophers up through Mill (1867) recognized (Kant TL §§49–53, GS 9; Hegel Rph §§173–180, 1853; Jaeger, 1944–1947; Green, 1999; Curren, 2000; Westphal, 2016c).
Further Readings

Henry Sidgwick’s classic, Outlines of the History of Ethics (1886), should be considered together with his equally important The Principles of Political Economy (1883), The Elements of Politics (1891) and his posthumous The Development of European Polity (1903; all: London, Macmillan). Taken together, these works throw the epistemological issues into relief, as does J. B. Schneewind’s outstanding study of Sidgwick’s comprehensive moral philosophy, Sidgwick’s Ethics and Victorian Moral Philosophy (Oxford: Clarendon Press, 1977). Schneewind’s other contributions to the history of moral philosophy attend copiously and sensitively to its epistemological problems and views; see his “Natural Law, Skepticism, and Methods of Ethics,” Journal of the History of Ideas 52 (2), 289–308, 1991; The Invention of Autonomy (Cambridge: Cambridge University Press, 1998); and his compendium, Moral Philosophy from Montaigne to Kant (Cambridge: Cambridge University Press, 2003). For a concise introduction to Bentham’s moral-epistemological concerns, see Philip Schofield, “Jeremy Bentham: Legislator of the World,” Current Legal Problems 51 (1) (1998), 115–147. The natural law (and hence epistemological) context of Hobbes’ Leviathan is brilliantly explicated by Bernd Ludwig, Die Wiederentdeckung des
Epikureischen Naturrechts. Zu Thomas Hobbes’ philosophischer Entwicklung von De Cive zum Leviathan im Pariser Exil 1640–1651 (Frankfurt am Main: Klostermann, 1998), a study far ahead of the recent Anglophone reappraisals of Hobbes. Excellent contributions on modern moral philosophers, with good attention to their epistemological concerns and views, are provided in John Skorupski, ed., The Routledge Companion to Ethics (London: Routledge, 2010). ‘Natural Law Constructivism’ is examined and defended in detail in the author’s How Hume and Kant Reconstruct Natural Law: Justifying Strict Objectivity without Debating Moral Realism (Oxford: Clarendon Press, 2016). A bibliography of Anglophone works in history and philosophy of law is available from the author’s webpage on Academia.edu.
Related Chapters

Chapter 7 Moral Reasoning and Emotion; Chapter 10 Ancient and Medieval Moral Epistemology; Chapter 12 Contemporary Moral Epistemology; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 20 Methods, Goals and Data in Moral Theorizing; Chapter 24 Moral Epistemology and Liberation Movements.
Notes

1. On Leibniz see Johns (2013); for more detailed discussion of this contrast, see Westphal (2016b).
2. Five distinct, vicious justificatory circles vitiate Descartes’ Meditations (Westphal, 1989, 18–34).
3. See Beneke’s Vorrede to his translation of Bentham (1830b), iii–xxv; Gervinus (1859), 3–40; Schofield (1998).
4. On Hume’s two accounts of moral sentiments see Westphal (2016a), §§13.3, 14.1, 15.1, 16.
5. Relative scarcity of goods: T 3.2.2, ¶¶7, 16, 18; their easy transfer: T 3.2.2, ¶¶7, 16; our limited generosity: T 3.2.2.16, 3.2.5.8, 3.3.3.24; natural ignorance of possession: T 3.2.2.11, 3.4.2.2, 3.2.6.3–4; limited powers and consequent mutual interdependence: T 3.2.2.2–3.
6. The important relations between Kant’s and Hegel’s views are not metaphysical; they are instead methodological, epistemological and semantic: Hegel was the first to realize that Kant’s Critical accounts of judgment, of rational justification and of conceptual explication—in all three Critiques and in his Critical principles of natural science and of morals—stand independently of and indeed are more successful without transcendental idealism or any such view (Westphal, 2018). Hegel’s infamous objection to empty formalism (Rph §135R) criticizes the dozens of pseudo-Kantian natural law theories published between Kant’s Groundwork (1785) and his own theory of justice (Rechtslehre, MdS Part 1; 1797); Westphal (2016–17), §9.1.
7. The relevant justificatory circularity is not a form of coherentism; why not is complex; key points are examined in Westphal (2017), the full account in Westphal (2018). On virtuous epistemic circularity, see Alston (1989), 319–349.
8. How to respond to those who fail to act morally raises a host of further issues, but these can only be identified by specifying who does fail to act as morality requires, what failing(s) they exhibit, how chronically, and what best can be done to redress past or present failings and to preclude their recurrence. The core aims of education, informal or formal, at whatever level, are moral aims (Green, 1999; Westphal, 2016c). Neglecting this fact is morally negligent.
9. Montesquieu’s institutional theory of law highlights the “correspondences” and mutual complementarity of social institutions and activities; his language may derive from Malebranche (Riley, 2009, §8.1); his innovation is that these contemporaneous intra-social correspondences and forms of complementarity suffice to account for the content, function and legitimacy of sound constitutional and statute law (Miraglia, 1912, 18). Hegel (Rph §3R) recognized Montesquieu’s singular achievement, and made it the juridical cornerstone of his own robust republicanism.
10. C. D. Broad (1930, 165, 206–208) introduced the contrast between ‘deontological’ and ‘teleological’ ethical theories to mark idealized poles of a continuum along which to array specific ethical
theories for comparative assessment. He acknowledged no actual theories of either extreme type; only later simplifications reduced Broad’s comparative spectrum to a specious dichotomy.
References

Because the chronology of moral-epistemological issues and views is theoretically significant in this period, works are cited by their original (or occasionally, contemporaneous) date of publication, even when collected editions of an author’s works are also cited. Where possible, individual works are cited by the divisions indicated in their respective tables of contents, according to book, part, chapter, section (§) or occasionally paragraph (¶) numbers, rather than by page; such references hold for any reliable edition or translation. Arabic numerals have been used consistently; an author’s sub-divisions are indicated by (as it were) decimals, e.g., Hume’s Treatise, Book 1, Part 3, §1, last paragraph (¶) is indicated thus: (T 1.3.1.27).

Ahrens, Heinrich. (1850). Die Philosophie des Rechts und des Staates, 1. Teil: Die Rechtsphilosophie, oder das Naturrecht, auf philosophisch-anthropologischer Grundlage. Wien, C. Gerold & Sohn; (4th rev. ed.): 1852. (Only volume published; superseded by next item).
———. (1870). Naturrecht oder Philosophie des Rechts und des Staates, auf dem Grunde des ethischen Zusammenhanges von Recht und Cultur (6th rev. ed.), 2 vols. Wien, C. Gerold’s Sohn (1st ed., French: Paris 1838; 1st German ed: 1846).
Alston, William P. (1989). Epistemic Justification: Essays in the Theory of Knowledge. Ithaca, NY: Cornell University Press.
Annas, Julia. (2013). “Plato’s Laws and Cicero’s de Legibus,” in M. Schofield (ed.), Aristotle, Plato and Pythagoreanism in the First Century BC: New Directions for Philosophy. Cambridge: Cambridge University Press, 206–224.
Baumgarten, Alexander Gottlieb. (1763). Ethica Philosophica (3rd ed.). Halle im Magdeburgischen: Hemmerde.
Beneke, Friedrich Eduard. (1833). Die Philosophie in ihrem Verhältnis zur Erfahrung, zur Spekulation, und zum Leben. Berlin: E.S. Mittler.
Bentham, Jeremy. (1786). “Principles of the Civil Code,” WJB, 1, 297–364.
———. (1789). Introduction to the Principles of Morals and Legislation. Oxford: The Clarendon Press; rev. ed. 1823, rpt. 1907; WJB 1, 1–154.
———. (1795). “Anarchical Fallacies: Being an Examination of the Declarations of Rights Issued During the French Revolution,” WJB, 2, 491–534.
———. (1802). Principes de législation et Traités de législation, civile et pénale, 3 vols. Paris: Bossange, Masson & Besson.
———. (1830a) Constitutional Code: For the Use of All Nations and All Governments Professing Liberal Opinions, vol. 1. London: R. Heward; rpt. WJB 9.
———. (1830b). F. E. Beneke, ed. and trans., Grundsätze der Civil- und Criminal-Gesetzgebung, 2 vols. Berlin: Amelang.
———. (1838–1843) J. Bowring, ed. The Works of Jeremy Bentham, 11 vols. Edinburgh: W. Tait; London: Simpkin, Marshall & Co.; New York: Russell & Russell; cited as ‘WJB’.
Broad, C. D. (1930). Five Types of Ethical Theory. London: Routledge & Kegan Paul.
Burke, Edmund. (1757). A Vindication of Natural Society: Or, A View of the Miseries and Evils Arising to Mankind from Every Species of Artificial Society (2nd ed.). London: R. & J. Dodsley.
———. (1790). Reflections on the Revolution in France. London: R. & J. Dodsley.
Curren, Randall. (2000). Aristotle on the Necessity of Public Education. Lanham, MD: Rowman & Littlefield.
Daniels, Norman. (1996). Justice and Justification: Reflective Equilibrium in Theory and Practice. Cambridge: Cambridge University Press.
Epstein, Klaus. (1966). The Genesis of German Conservatism. Princeton: Princeton University Press.
Fries, Jakob Friedrich. (1824). System der Metaphysik. Heidelberg: Winter.
Frege, Gottlob. (1884). Die Grundlagen der Arithmetik. Breslau: Koebner.
———. (1950). J.L. Austin, trans., The Foundations of Arithmetic. Oxford: Wiley-Blackwell.
Gauthier, David. (1997). “Political Contractarianism,” Journal of Political Philosophy, 5 (2), 132–148.
Gervinus, G. G. (1859). Geschichte des neunzehnten Jahrhunderts seit den Wiener Verträgen, vol. 4. Leipzig: Engelmann.
Gouges, Marie Olympe de. (1791). “Déclaration des droits de la femme et de la citoyenne,” Paris, (n.p.). www.philo5.com/Mes%20lectures/GougesOlympeDe-DeclarationDroitsFemme.htm
Green, Thomas F. (1999). Voices: The Educational Formation of Conscience. Notre Dame, IN: University of Notre Dame Press.
Griffin, James. (1996). Value Judgment: Improving our Ethical Beliefs. Oxford: Clarendon Press.
Grote, Hugo (Grotius; Grotii) (1625). De jure belli ac pacis. Paris: N. Buon.
———. (1738). The Rights of War and Peace, J. Morrice, trans. (rev. ed.): R. Tuck, 2005; Indianapolis, IN: Library of Liberty.
Haakonssen, Knud. (2002). “The Moral Conservatism of Natural Rights,” in I. Hunter and D. Saunders (eds.), Natural Law and Civil Sovereignty: Moral Right and State Authority in Early Modern Political Thought. London: Palgrave Macmillan, 27–42.
Hegel, G. W. F. (1821). Grundlinien der Philosophie des Rechts oder Naturrecht und Staatswissenschaft im Grundrisse. Berlin, Nicolai; cited as ‘Rph’, rpt. in: GW 14; cited by main sections (§), or by Hegel’s published Remarks: (§nR); tr. in: idem. (2008).
———. (1830). Enzyklopädie der philosophischen Wissenschaften, 3 vols. (3rd ed.). Heidelberg, Oßwald; cited as ‘Enz.’.
———. (1853–1854) G. Thaulow, ed., Hegel’s Ansichten über Erziehung und Unterricht, 3 vols. Kiel: Akademische Buchhandlung.
Hegel, G. W. F. (1986–2016) Gesammelte Werke, 31 vols. Deutsche Forschungsgemeinschaft, with the Hegel-Kommission der Rheinisch-Westfälischen Akademie der Wissenschaften and the Hegel-Archiv der Ruhr-Universität Bochum. Hamburg, Meiner; cited as ‘GW’.
———. (2008). Outlines of the Philosophy of Right, trans. T. M. Knox, ed. S. Houlgate. Oxford: Oxford University Press.
———. (2009). K. Worm, ed., Hegels Werk im Kontext (5th Release). Berlin: InfoSoftWare.
Honoré, Tony. (2002). Ulpian: Pioneer of Human Rights (2nd ed.). Oxford: Oxford University Press.
Hume, David. (1739–1740). A Treatise of Human Nature. London, A. Millar; cited as ‘T’.
———. (1748). Philosophical Essays Concerning Human Understanding. London, A. Millar. (Original title of Hume’s first Enquiry); cited as ‘En’.
———. (1751). An Enquiry Concerning the Principles of Morals. London, A. Millar; cited as ‘EPM’.
———. (1758). “Of the Standard of Taste,” in idem., Essays and Treatises on Several Subjects (rev. ed.). London: Millar, 134–146.
———. (2001). D. F. Norton and M. J. Norton (eds.), A Treatise of Human Nature. Oxford: Oxford University Press.
Husserl, Edmund. (1900–1901). Logische Untersuchungen, 2 vols. Halle: Niemeyer.
Jaeger, Werner. (1944–1947). G. Highet, trans. Paideia: The Ideals of Greek Culture, 3 vols. Oxford: Oxford University Press.
Jhering, Rudolph von. (1897). O. Lenel, ed. Die Jurisprudenz des täglichen Lebens (11th ed.). Jena: Fischer.
———. (1904). H. Goudy, ed. and trans. Law in Daily Life: A Collection of Legal Questions Connected with the Ordinary Events of Everyday Life. Oxford: Clarendon Press. (Translation of previous item.)
Johns, Christopher. (2013). The Science of Right in Leibniz’s Moral and Political Philosophy. London: Bloomsbury.
Kant, Immanuel. (1902). Königlich Preussische (now Deutsche) Akademie der Wissenschaften. Kants Gesammelte Schriften, 29 vols. Berlin, G. Reimer (now De Gruyter); cited as ‘GS’.
———. (1995–2016). P. Guyer and A. Wood, eds. in chief, The Cambridge Edition of the Works of Immanuel Kant in Translation. Cambridge: Cambridge University Press. (Margins provide pagination of GS.)
———. (1996). M. Gregor and A. Wood, eds. M. Gregor, trans. Practical Philosophy. Cambridge: Cambridge University Press.
———. (2009). K. Worm and S. Boeck, eds. Kant im Kontext III (2nd ed.). (Release 6). Berlin: InfoSoftWare.
Gr Grundlegung der Metaphysik der Sitten (1785); Groundwork of the Metaphysics of Morals.
KdrV Critik der reinen Vernunft (1781: ‘A’; rev. 2nd ed. 1787: ‘B’); Critique of Pure Reason.
MdS Metaphysik der Sitten (1797–98); Metaphysics of Morals – in two Parts:
RL Metaphysische Anfangsgründe der Rechtslehre; Metaphysical First Principles of the Doctrine of Justice.
TL Metaphysische Anfangsgründe der Tugendlehre; Metaphysical First Principles of the Doctrine of Virtue.
Lieberman, David. (2006). “Adam Smith on Justice, Rights, and Law,” in K. Haakonssen (ed.), The Cambridge Companion to Adam Smith. Cambridge: Cambridge University Press, 214–245.
Lipps, Theodor. (1893). Grundzüge der Logik. Hamburg and Leipzig: Voss.
Locke, John. (1690). “Second Treatise of Government,” in Anon (ed.), Two Treatises of Government. London: Awnsham Churchill; cited as ‘ST’.
Long, Douglas. (2006). “Adam Smith’s Politics,” in K. Haakonssen (ed.), The Cambridge Companion to Adam Smith. Cambridge: Cambridge University Press, 288–318.
Mill, John Stuart. (1963–1991) J. M. Robson, ed.-in-chief, Collected Works of John Stuart Mill, 33 vols. Toronto: University of Toronto Press; London: Routledge & Kegan Paul; cited as ‘CW’.
———. (1850). “The Negro Question,” CW, 21, 87–95.
———. (1859a). “On Liberty,” CW, 18, 213–310; cited as ‘OL’.
———. (1859b). “A Few Words on Non-Intervention,” CW, 21, 111–124.
———. (1861). “Utilitarianism,” CW, 10, 205–259.
———. (1867). “Inaugural Address, Delivered to the University of St. Andrews, 1. Feb. 1867,” London, Longmans, Green, Reader & Dyer; rpt. in: CW, 21, 215–257.
———. (1869). “The Subjection of Women,” CW, 21, 261–340.
Miraglia, Luigi. (1912). Comparative Legal Philosophy Applied to Legal Institutions. Boston, MA: Boston Book Company; rpt. New York: Palgrave Macmillan, 1921.
Montesquieu, Charles Louis de Secondat, baron de. (1749) [Anon.] De l’Esprit des Lois, ou du rapport que les lois doivent avoir avec la constitution de chaque gouvernement, moeurs, climat, religion, commerce, etc.; à quoi l’auteur a ajouté des recherches sur les lois romaines touchant les successions, sur les lois françaises et sur les lois féodales, 2 vols. Genève, Barillot & fils.
———. (1989). Cohler, A. M., Miller, B. C. and Stone, H. trans. The Spirit of the Laws. Cambridge: Cambridge University Press.
Muirhead, John H. (1897). The Elements of Ethics (2nd rev. ed.). London: Murray.
———. (1932). Rule and End in Morals. Oxford: Oxford University Press.
Neuhouser, Frederick. (2000). The Foundations of Hegel’s Social Theory. Cambridge, MA: Harvard University Press.
Neumann, Franz. (1957). “Types of Natural Law,” in: idem., The Democratic and the Authoritarian State: Essays in Political and Legal Theory. New York: Free Press, 69–95.
Nietzsche, Friedrich. (1886). Jenseits von Gut und Böse: Vorspiel einer Philosophie der Zukunft. Leipzig: Naumann. (Translation in idem. 1966).
———. (1966). W. Kaufmann, ed. and trans. Beyond Good and Evil: Prelude to a Philosophy of the Future. New York: Vintage. (Translation of previous item).
O’Neill, Onora. (1989). Constructions of Reason. Cambridge: Cambridge University Press.
———. (2000a). “Kant and the Social Contract Tradition,” in F. Duchesneau, G. Lafrance and C. Piché (eds.), Kant Actuel: Hommage à Pierre Laberge. Montréal: Bellarmin, 185–300.
———. (2000b). Bounds of Justice. Cambridge: Cambridge University Press.
———. (2003). “Autonomy: The Emperor’s New Clothes,” Proceedings of the Aristotelian Society, Supplementary Volume 77 (1), 1–21.
———. (2004). “Self-Legislation, Autonomy and the Form of Law,” in H. Nagl-Docekal and R. Langthaler (eds.), Recht, Geschichte, Religion: Die Bedeutung Kants für die Gegenwart, Sonderband der Deutschen Zeitschrift für Philosophie. Berlin: Akademie Verlag, 13–26.
Paley, William. (1802). Natural Theology. London: Faulder; rpt. in: idem., The Works of William Paley, D.D. Philadelphia: Woodward, 1831, 387–486.
Pateman, Carole. (1988). The Sexual Contract. Oxford: Wiley-Blackwell; Cambridge: Polity Press; Stanford, CA: Stanford University Press.
Prichard, H. A. (1912). “Does Moral Philosophy Rest on a Mistake?” Mind, 21 (81), 21–37.
Reid, Thomas. (1785). Essays on the Intellectual Powers of Man. In: idem. (1880), 1, 215–508; (2002), vol. 3.
———. (1788). Essays on the Active Powers of Man. In: idem. (1880), 2, 511–679; (2010), vol. 7.
———. (1880). Hamilton, W., ed. The Works of Thomas Reid, D.D., 2 vols. (8th ed.). Edinburgh: MacLachlan & Stewart; London: Longman, Green, Roberts (Numbered consecutively throughout).
———. (1995–2017). K. Haakonssen, gen. ed. The Edinburgh Edition of Thomas Reid, 10 vols. Edinburgh: Edinburgh University Press.
Riley, Patrick. (2009). The Philosophers’ Philosophy of Law from the 17th Century to Our Days. Vol. 10 of: E. Pattaro, ed.-in-chief, A Treatise of Legal Philosophy and General Jurisprudence, 13 vols. Dordrecht: Springer, 2005–2016.
Rothschild, Emma and Sen, Amartya. (2006). “Adam Smith’s Economics,” in K. Haakonssen (ed.), The Cambridge Companion to Adam Smith. Cambridge: Cambridge University Press, 319–394.
Rousseau, Jean-Jacques. (1762). Du contrat social. Amsterdam: M.M. Rey; cited as ‘CS’.
———. (1964). “Du Contract Social,” in B. Gagnebin and M. Raymond (eds.), with F. Bouchardy et al., Oeuvres Complètes. Paris: Gallimard (Pléiade), 3, 347–470.
———. (1994). “The Social Contract,” in Social Contract, Discourse on the Virtue Most Necessary for a Hero, Political Fragments, and Geneva Manuscript, J. R. Bush, R. D. Masters and C. Kelly, trans., in R. D. Masters (ed.), Collected Writings of Rousseau. Hanover, NH: Dartmouth College Press, 1994, 4, 129–224.
Sextus Empiricus. (1933a) Opera/Works, 4 vols. Greek, with English trans. Rev. R. G. Bury. Cambridge, MA: Harvard University Press; Loeb Library (Cited by Book.¶ numbers.)
———. (1933b) Outlines of Pyrrhonian Skepticism. In: Works 1; cited as ‘PH’.
———. (1933c) Against the Logicians, 2 books. In: Works 2; cited as ‘AL’.
Sidgwick, Henry. (1883). The Principles of Political Economy. London: Macmillan; rpt. 1887, 1901.
———. (1886). Outlines of the History of Ethics. London: Macmillan.
———. (1891). The Elements of Politics. London: Macmillan; rpt. 1897, 1908, 1919.
———. (1898). Practical Ethics: A Collection of Addresses and Essays. London: Swan Sonnenschein; rpt. 1909.
———. (1903). E. M. Sidgwick, ed. The Development of European Polity. London: Macmillan.
Schneewind, J. B. (1991). “Natural Law, Skepticism, and Methods of Ethics,” Journal of the History of Ideas, 52 (2), 289–308.
———. (1998). The Invention of Autonomy. Cambridge: Cambridge University Press.
———. (2003). Moral Philosophy from Montaigne to Kant. Cambridge: Cambridge University Press.
Schofield, Philip. (1998). “Jeremy Bentham: Legislator of the World,” Current Legal Problems, 51 (1), 115–147.
Skorupski, John, ed. (2010). The Routledge Companion to Ethics. London: Routledge.
Stanlis, Peter J. (1955). “Edmund Burke and the Natural Law,” University of Detroit Law Journal, 33, 150–190.
———. (1958). Edmund Burke and the Natural Law. Ann Arbor, MI: University of Michigan Press; rpt. New Brunswick, London: Transaction Publishers, 2003.
Westphal, Kenneth R. (1989). Hegel’s Epistemological Realism. Dordrecht: Kluwer.
———. (2013). “Natural Law, Social Contract and Moral Objectivity: Rousseau’s Natural Law Constructivism,” Jurisprudence, 4 (1), 48–75; doi:10.5235/20403313.4.1.48.
———. (2016a) How Hume and Kant Reconstruct Natural Law: Justifying Strict Objectivity Without Debating Moral Realism. Oxford: Clarendon Press.
———. (2016b). “Kant, Aristotle and our Fidelity to Reason,” in: S. Baiasu and R. Demirey, guest eds., “The ethical and the juridical in Kant,” special issue of Studi Kantiani, 29, 83–102.
———. (2016c). “Back to the 3 R’s: Rights, Responsibilities and Reasoning,” SATS—Northern European Journal of Philosophy, 17 (1), 21–60. doi:10.1515/sats-2016-0008.
———. (2016–17). “Hegel, Natural Law and Moral Constructivism,” The Owl of Minerva, (1–2), 1–44. doi:10.5840/owl201752719. www.pdcnet.org/owl/onlinefirst
———. (2017). “Hegel’s Justification of the Human Right to Non-Domination,” Filozofija i Društvo/Philosophy and Society (Beograd), 28 (3), 579–611. http://journal.instifdt.bg.ac.rs/index.php?journal=fid&page=index
———. (2018) Grounds of Pragmatic Realism: Hegel’s Internal Critique and Transformation of Kant’s Critical Philosophy. Leiden: Brill; series: Critical Studies in German Idealism, ed. Paul Cobben.
Wolff, Christian Freyherr von. (1769). Grundsätze des Natur- und Völckerrechts (2nd rev. ed.). Halle im Magdeburgischen: Renger.
12 CONTEMPORARY MORAL EPISTEMOLOGY
Robert Shaver
1. Introduction

In 1980, in “Kantian Constructivism in Moral Theory,” Rawls contrasted “Kantian constructivism” with “rational intuitionism.” He took Sidgwick’s Methods, “the outstanding achievement in modern moral theory,” as the representative of rational intuitionism (Rawls, 1999a, 341). In doing so, he suggested a way of thinking of the history—as a battle between Kant’s constructivism and the intuitionism of Clarke, Price, Sidgwick, Moore, and Ross—and inspired those who followed him, such as Korsgaard and Street, to develop the constructivist case. This has been called “the standard story of the history of modern ethics” (Stern, 2012a, 8).1 I start with Sidgwick’s moral epistemology—one shared, though less explicitly stated, by Rashdall, Moore, Ross, Prichard, Carritt, Broad, and Ewing.2 I then consider two objections: the worry that disagreement should reduce him (and intuitionists in general) to skepticism, and the Rawlsian, and perhaps Kantian, worry that Sidgwick’s view makes us “heteronomous.”3 I close by considering the claim, pressed by Street, that only constructivism can avoid a skepticism based on the explanation of our moral beliefs.
2. Sidgwick

Sidgwick gives four conditions, “the complete fulfilment of which would establish a significant proposition, apparently self-evident, in the highest degree of certainty attainable” (Sidgwick, 1981, 338). The conditions are “modes of excluding error,” “based on experience of the ways in which the human mind has actually been convinced of error, and been led to discard it.” We detect error when there is

conflict between a judgment first formed and the view of this judgment taken by the same mind on subsequent reconsideration; conflict between two different judgments formed by the same mind under different conditions; and finally, conflict between the judgments of different minds. (Sidgwick, 1905, 466)
1. “The terms of the proposition must be clear and precise” (Sidgwick, 1981, 338). As Sidgwick makes clear in his application of this condition to commonsense morality, he interprets this as requiring verdicts on all of the cases relevant to the proposition. For example, “one ought to keep one’s promises” is not clear and precise if it does not give guidance in hard cases, such as promises whose keeping would be extremely harmful to the promisee or promiser. Sidgwick suggests that clarity and precision are important not just because we want guidance but also because they help one avoid the first sort of conflict noted earlier, between views of a judgment by one mind at different times.4 This conflict arises in part because the judgment is not clear and precise—once it is made so, this conflict is less likely.

2. “The self-evidence of the proposition must be ascertained by careful reflection” (Sidgwick, 1981, 339). The worry this condition addresses is that we often “confound intuitions . . . with mere impressions or impulses, which to careful observation do not present themselves as claiming to be dictates of Reason” or with “mere opinions, to which the familiarity that comes from frequent hearing and repetition often gives a false appearance of self-evidence which attentive reflection disperses” (Sidgwick, 1981, 339). Sidgwick’s examples concern “strong sentiments” and customary rules that one cannot “define for himself” but rather require “some authority external to the individual” as their “final arbiter” (such as rules of honor or etiquette) (Sidgwick, 1981, 340). Sidgwick again suggests that this condition is important because it helps one avoid conflict between views of a judgment by one mind at different times (Sidgwick, 1905, 466). Conflicts occur in part because I evaluate the judgment at one of the times on the basis of something other than a mere understanding of it. It is worth noting that Sidgwick does not make too much of self-evidence. He rarely suggests that self-evident propositions are needed to stop regresses; his main criticism of commonsense morality is that when made clear and precise, it fails one of the consistency tests; and he does not claim to deduce our duties from what he finds self-evident.

3. “The propositions accepted as self-evident must be mutually consistent” (Sidgwick, 1981, 341).5

4. [T]he denial by another of a proposition that I have affirmed has a tendency to impair my confidence in its validity. . . . [T]he absence of . . . disagreement must remain an indispensable negative condition of the certainty of our beliefs. For if I find any of my judgments . . . in direct conflict with a judgment of some other mind, there must be error somewhere: and if I have no more reason to suspect error in the other mind than in my own, reflective comparison between the two judgments necessarily reduces me temporarily to a state of neutrality. And though the total result in my mind is not exactly suspense of judgment, but an alternation and conflict between positive affirmation by one act of thought and the neutrality that is the result of another, it is obviously something very different from scientific certitude. (Sidgwick, 1981, 341–342)
It is obvious that in any such conflict [between intuitions] there must be error on one side or the other, or on both. The natural man will often decide unhesitatingly that the error is on the other side. But it is manifest that a philosophic mind cannot do this, unless it can prove independently that the conflicting intuitor has an inferior faculty of envisaging truth in general or this kind of truth; one who cannot do this must reasonably submit to a loss of confidence in any intuition of his own that thus is found to conflict with another’s. (Sidgwick, 1905, 464)

The fourth condition is often cited—sometimes to raise a skeptical worry, sometimes to recruit Sidgwick to the “conciliationist” position on peer disagreement.

Peers: Sidgwick writes of those I have no more reason to suspect of error than myself. Elsewhere he writes of those “competent to judge” and “experts” (Sidgwick, 1905, 465, 466).6 I assume that the categories of “those competent to judge” and “expert” are to be understood in terms of reasons to suspect error. If, say, someone “competent to judge” is someone I have reason to suspect more likely to err than myself, it is not clear why, if I am faced with the disagreement of someone competent to judge in this sense, I should respond with a loss of confidence.7 Sidgwick may depart here from the usage of those who require that peers share the same evidence and equal ability to interpret the evidence. But these similarities seem important because they are grounds for thinking that someone is one I have no more reason to suspect of error than myself. (The similarities are also not necessary to generate a skeptical threat. You and I might not share evidence but still be equally reliable, since neither of us has better evidence than the other.) Sidgwick’s usage is similar to that of those who require that peers are those one predicts will be equally reliable.8 Sidgwick’s account leaves it open that the number of people who disagree with me can matter. If “I found myself alone contra mundum, I should think it was more probable that I was wrong than that the world was” (Sidgwick, 1879, 109).

Conciliation: Sidgwick is usually read as holding the conciliatory position on peer disagreement. According to conciliationists, in cases of peer disagreement, the correct response is to suspend judgment or give equal credence to the conflicting beliefs.9 But Sidgwick does not make either claim. The correct response is a loss of confidence and certainty, or a belief that we have not reduced the risk of error to as low a level as possible. He does add that he alternates between neutrality and affirmation. But (a) this seems only meant to support the conclusion that certainty is lost, (b) it is not the same as claiming that neutrality is the correct response, and (c) it does not appear outside the Methods. More importantly, in the Methods Sidgwick has no need to claim that suspension of judgment or equal credence is the correct response. He is stating conditions for highest certainty. He is doing so to argue that commonsense morality does worse on these conditions than his axioms do. He has no need to say any more about the correct response to disagreement than that it requires a loss of the highest certainty. Although Sidgwick does not argue for suspending judgment, one might think that he is committed to it (McGrath, 2008, 91). I consider this later.
Independence: Like conciliationists, Sidgwick requires that one must “prove independently that the conflicting intuitor has an inferior faculty of envisaging truth in general or this kind of truth.” Presumably, he is ruling out the following reasoning, where I believe p and
you believe not-p: “the mere fact that you believe not-p is sufficient to show that you are not my epistemic peer.” Anyone who proposes a condition like Sidgwick’s should rule out this reasoning. If one does not, the disagreement condition could do little work, since most cases of disagreement would show that you are not my peer.10

Sometimes the worry motivating independence is that if I were allowed to dismiss you as a peer simply in virtue of your disagreement with me, I would beg the question (Vavova, 2014a, 312; Christensen, 2007, 198, 2009, 758, 2011, 2, 18). Say I believe p. You object: “I am as reliable as you. I believe not-p. Therefore you should suspend judgment about p.” I reply: “Since p is true, you are clearly not as reliable as me. Therefore I do not need to suspend judgment.” This is question-begging because my defense of believing p depends on assuming p.

Note what these motivations for independence do not rule out. We disagree. I see your reasoning for not-p. You see my reasoning for p. I beg the question if I count your belief that not-p as showing that you are not my peer. But I do not beg the question if I dismiss you as a peer by raising an objection to your reasoning, provided (i) I do not take your reasoning to be faulty simply because it did not result in my belief—say my criticism is that you make a factual or logical error in arriving at not-p, and I would find this an error whether or not I believed p (see Christensen, 2007, 198); (ii) I have no evidence that you disagree with my objection (and so my objection, unlike p, need not be set aside on the ground that it would be question-begging to defend my objection by assuming that it is correct).

Skepticism: Crisp worries that the fourth condition should lead to skepticism (Crisp, 2011, 2015, 111). But it is often true that both conditions for not begging the question are met. Regarding (ii), Sidgwick writes that “where the conflicting beliefs are not contemporaneous, it is usually not clear that the earlier thinker would have maintained his conviction if confronted with the arguments of the later” (Sidgwick, 1905, 464). Where I cannot communicate with the author, I often lack evidence about whether the author would agree with my assessment of her reasoning. If these conditions are often met, conciliation and independence will not lead to suspension of judgment as often as it seemed they might. If I have an objection to you that meets both conditions, I need not suspend judgment. There is reason to think Sidgwick agrees. He writes “I have spoken of the history of thought as revealing discrepancy between the intuitions of one age and those of a subsequent generation.” He then makes the aforementioned point about non-contemporaneous beliefs. He goes on to note that the “history of thought, however, I need hardly say, affords abundant instances of similar conflict among contemporaries” (Sidgwick, 1905, 463–464). His view seems to be that disagreement is benign among non-contemporaries but possibly worrisome among contemporaries. But the reason he gives for thinking disagreement is benign among non-contemporaries seems applicable to many cases of disagreement among contemporaries—we are often unclear about whether the person we disagree with would maintain her position when confronted with our arguments.11

Onus: When Sidgwick writes “unless [I] can prove independently that the conflicting intuitor has an inferior faculty,” he claims that unless I can show that a disputant is inferior, I lose certainty.
The same seems true for the Methods. When he writes “If I have no more reason to suspect error in the other mind than in my own,” he suggests that, to avoid a loss of confidence, I must have some positive reason for thinking a disputant is more likely to err than I am. The onus is on me; if I do not discharge the burden of establishing your epistemic
inferiority, I should think of you as my peer. Distinguish this very demanding view from a less demanding one: I lose certainty only if I have reason to think that you are my peer. Here a case must be made that you are my peer; without such a case, your disagreement does not lower certainty. The less demanding view seems better. If we disagree, but I know nothing about you or your reasons, it seems wrong for me to lose confidence; I need some reason to think you must be taken seriously.12 I think Sidgwick should hold the less demanding view. But he may think that holding the more demanding view is not worrisome, at least for the disagreements he has in mind. If, as suggested earlier, it is fairly easy to dismiss you as a peer, it is not so costly to hold that the default position is that you are my peer.

Sidgwick, then, might avoid skepticism in two ways. He might think that disagreement justifies merely decreased confidence rather than suspension of judgment. Or he might think that peer disagreement justifies suspension but that peers are uncommon.

Although I take this to be Sidgwick’s moral epistemology, there is a complication. In the chapter entitled “The Proof of Utilitarianism,” Sidgwick writes “What primarily concerns us is not how [utilitarianism’s] principle is to be proved to those who do not accept it.”

But if Utilitarianism is to be proved to a man who already holds some other moral principles . . . it would seem that the process must be one which establishes a conclusion actually superior in validity to the premises from which it starts. . . . At the same time, if the other principles are not throughout taken as valid, the so-called proof does not seem to be addressed to the [opponent] at all.

To resolve this problem, he argues,

What is needed is a line of argument which on the one hand allows the validity, to a certain extent, of the maxims already accepted, and on the other hand shows them to be not absolutely valid, but needing to be controlled and completed by some more comprehensive principle. (Sidgwick, 1981, 418–420)

The complication is how this (call it “proof”) fits with the epistemology sketched earlier.13 Sidgwick seems to think of “proof” here as “proof to” someone, and “proof to” someone as starting with premises that person accepts. Running through the conditions is not a proof in this sense.14 That is (in part) why “The Proof of Utilitarianism” does not run through the conditions. There are distinct processes, both of which Sidgwick can endorse. (Whether a proof is merely ad hominem depends on the status of the opponent’s premises. If I do not take them to be true, it is merely ad hominem. If I take them to be true, or the opponent is a peer, proof is more than ad hominem: a failure to give a successful proof suggests an inconsistency in my beliefs or a case of peer disagreement.15)
3. Rawls

Here is a brief sketch of Rawls’s Kantian constructivism.

Say I consider us as free, equal, rational, and reasonable (a “conception of the person,” a “moral ideal” (Rawls, 1999a, 352)). We are free in that each can make claims about the
correct principles just in virtue of having interests (as, for example, slaves cannot) (Rawls, 1999a, 309, 330–331). We can also alter our conception of what is good (Rawls, 1999a, 309, 331). We are equal in that we are equally capable of understanding and following principles of justice and equally worthy of having a say in the process of choosing principles (Rawls, 1999a, 309, 333). We are rational in that we efficiently pursue our interests (Rawls, 1999a, 316). We are reasonable in that we think “all who cooperate must benefit . . . in some appropriate fashion as judged by a suitable benchmark of comparison”; there must be “fair terms of cooperation” (Rawls, 1999a, 316). Rawls takes these presuppositions (along with others) to justify thinking of our choice of principles of justice as occurring from his famous original position. (For example, thinking of us “solely as free and equal” is to justify the veil of ignorance, since if I knew my bargaining position, I would be thinking of myself as “also affected by social fortune and natural accident” (Rawls, 1999a, 310).) The original position lets us see what policy follows from these presuppositions. If a principle is chosen from the original position, it is “reasonable for us” (Rawls, 1999a, 340, 355, 356).17

For the Kantian constructivist, in contrast to Sidgwick, “there are no . . . moral facts to which the principles adopted could approximate” (Rawls, 1999a, 350). There are “no such facts apart from the procedure of construction” (Rawls, 1999a, 354, 307, 347, 351, 353). Rawls’s objection to Sidgwick is that if there are moral facts apart from the procedure of construction, we are heteronomous.

It suffices for heteronomy that . . . principles obtain in virtue of relations among objects the nature of which is not affected or determined by the conception of the person. Kant’s idea of autonomy requires that there exist no such order of given objects determining the first principles of right and justice among free and equal moral persons. (Rawls, 1999a, 345; also 512, 1996, 99–100, 2000, 228–230, 235–237)

Hence choice from the original position is an instance of “pure procedural justice,” in which “there exists no independent criterion of justice; what is just is defined by the outcome of the procedure itself.” This “explain[s] how the parties . . . are also autonomous. . . . [I]n their deliberations the parties are not required to apply, nor are they bound by, any antecedently given principles of right and justice” (Rawls, 1999a, 311). The “autonomy of the parties is expressed by their being at liberty to agree to any conception of justice available to them as prompted by their rational assessment of which alternative is most likely to advance their interests” (Rawls, 1999a, 312). It “belongs to the parties’ rational autonomy that there are no given antecedent principles external to their point of view to which they are bound. . . . Nor do the parties recognize certain intrinsic values as known by rational intuition” (Rawls, 1999a, 334–335; also 338).

I take the argument to be as follows:

1. According to intuitionism, there are procedure-independent moral facts.
2. If there are procedure-independent moral facts, we are heteronomous.
3. So if intuitionism is true, we are heteronomous.
4. We are not heteronomous.
5. Therefore intuitionism is false.
One might defend Sidgwick by rejecting (2): Kantian heteronomy involves acting out of inclination rather than reason; procedure-independent moral facts do not exclude acting out of reason (Stern, 2012a, 19–26, 2012b, 123–125; Irwin, 2009, 164–170).18 This would show that Kant would not criticize Sidgwick as heteronomous. But it does not show that Rawls should not. For Rawls seems to take (2) to be analytic. This is part of what he (though perhaps not Kant) means by “heteronomous.”19 The problem is that the argument is now question-begging. If our being heteronomous involves the presence of procedure-independent moral facts, (4) just asserts that there are no such facts (see Shafer-Landau, 2003, 44).

Here are three further worries.

a. Perhaps Rawls’s thought is that there is something unattractive about being constrained by such facts. Heteronomy is supposed to be bad. But that seems the wrong sort of objection to an epistemology. That intuitionism yields an unflattering picture of us does not show that it is wrong. It is also unclear whether there is anything unattractive about construction-independent facts of justice or morality (Irwin, 2009, 171; Baldwin, 2013, 208). Heteronomy might be bad if, say, it involves my acting on desires I wish I lacked, but that is unconnected to Rawlsian heteronomy.

b. It is odd to think that I am heteronomous when my deliberation is independent of the specified “conception of the person,” given that this contains various moral claims, such as that everyone should benefit fairly from cooperation. It seems that egoists, and perhaps utilitarians, cannot be autonomous, whatever epistemology they endorse.

c. Rawls thinks there is something unattractive (or false) about thinking I can be bound by a principle that might conflict with what a “rational assessment of which alternative is most likely to advance [my] interests” dictates. For example, no utilitarian claim I arrive at on intuitionist grounds can override my assessment of what is prudent. But to someone who does not share Rawls’s convictions about freedom, equality, and reasonableness, these convictions and the procedure they justify are “given” and “independent” in just the way Rawls thinks utilitarianism would be given and independent (see Baldwin, 2013, 208). (Consider a utilitarian who does not think that everyone who cooperates must benefit or that cooperation must be on fair terms (see Scheffler, 1994, 9–10; Wenar, 1995, 39 n11), or an ordinary person who finds some uncompensated sacrifices reasonable.) Rawlsian constraints on a construction of basic principles of justice constrain my rational assessment of my interest. Perhaps Rawls’s thought is that his constraints are ones I have already accepted (otherwise I am not one of the people Rawls is addressing). But by hypothesis I have also agreed to utilitarianism, in that I have been convinced by arguments for it. Utilitarianism and Rawls’s constraints differ in that Rawls assumes agreement on the latter and not the former. But that difference does not show that there is anything objectionable in my taking myself to be bound by the conclusions of arguments I take to be convincing.

There is, however, another side to Rawls’s criticism of Sidgwick. For Sidgwick, “justification . . . is to be conceived as an epistemological problem” rather than as a “practical problem” (Rawls, 1999a, 341). For Kantian constructivism,

the search for reasonable grounds for reaching agreement rooted in our conception of ourselves . . . replaces the search for moral truth. . . . The task is to articulate a public
conception of justice that all can live with who regard their person and their relation to society in a certain way. . . . What justifies a conception of justice is not its being true to an order antecedent to and given to us, but its congruence with our deeper understanding of ourselves. (Rawls, 1999a, 306–307)

What matters is that “citizens equally conscientious and sharing roughly the same beliefs find that, by affirming the framework of deliberation set up . . . they are normally led to a sufficient convergence of opinion” (Rawls, 1999a, 347; also 349, 350). The aim is “a workable conception of justice designed to achieve a sufficient convergence of opinion,” a “practicable basis of public justification,” “workable public agreement” (Rawls, 1999a, 349, 355, 358). The point (developed in later papers and Political Liberalism) is that one can address the problem of choosing principles for people to live together by arguing that, given how they think of themselves, they should adopt certain principles. Whether the principles are true (or certain) is irrelevant. Whether the thoughts people have about themselves are correct is also irrelevant—Rawls gives no defense, for example, of the belief that everyone can make claims about the correct principles just in virtue of having interests, or that we are equal. We simply “find on examination that [we] hold these ideals” (Rawls, 1999a, 354).

This answers a common criticism of Rawls—that he must be an intuitionist, since he presupposes the truth of various normative judgments attached to thinking of us as free, equal, rational, and reasonable (such as the judgment that everyone cooperating must benefit fairly, or that anyone with interests has a say) (e.g., Larmore, 2008, 83–86; Baldwin, 2013, 206–207). He presupposes only that we think of ourselves in this way. There is no oddity when Rawls claims that his “conceptions of society and person as ideas of reason are not, certainly, constructed any more than the principles of practical reason are constructed” (Rawls, 1996, 108; also 1999a, 513–514, 2000, 239–240). He is not conceding that he is an intuitionist about these conceptions.20

This picture is reinforced by Rawls’s comments on justification. He holds a view of justification much like Sidgwick’s account of “proof.” “Since justification is addressed to others, it proceeds from what is, or can be, held in common; and so we begin from shared fundamental ideas implicit in the public political culture” (Rawls, 1996, 100). “[J]ustification is argument addressed to those who disagree with us, or to ourselves when we are of two minds. It presumes a clash of views. . . . [P]roceed[ing] from some consensus . . . is the nature of justification” (Rawls, 1999b, 508–509; also 1999a, 394, 426–427, 594, 2001, 27).

The question, then, is why this procedure should “replace” Sidgwick’s epistemology. On a less narrow account of justification, I can say that some premises justify a conclusion, whether or not there is disagreement about premises or conclusion. And there is an issue about whether those I disagree with are justified, even if there is no argument I can offer to show that, given the beliefs we have in common, they should change their views (see Irwin, 2009, 960–961). Rawls notes that on his view “justification is not regarded simply as valid argument from listed premises, even should these premises be true” (Rawls, 1999a, 394). He does not say what is wrong with regarding justification in this way.
But given that he goes on to stress that his aim is “practical, and not . . . epistemological,” presumably his point is that a justification in the sense of a valid argument from true premises might fail to get agreement
(Rawls, 1999a, 394; also 2001, 27). Some will (falsely) deny the premises or the inferences. Hence Rawls responds to disagreement by trying to find a consensus, rather than by discounting many of those who disagree as not peers.21

If, however, disagreement is the worry, the same worry goes for justification in the sense of proof. I may have a valid argument that starts from premises you accept, but you might (falsely) deny its validity. Indeed, that is a likely outcome for the proof Rawls offers, which runs from the shared view of freedom, equality, etc., to controversial conclusions such as the difference principle, a primary goods metric, and a restriction to offering “public reasons.” This is what is exciting about Rawls—he (at least officially) starts with “widely accepted but weak premises” and derives conclusions about matters on which “we have much less assurance” (Rawls, 1999b, 16, 18).22 But it is also what makes persisting disagreement likely. Rawls’s explanations of persisting disagreement—for example, difficulties in weighing both evidence and normative considerations and vagueness in concepts—apply to the derivation of his principles (Rawls, 1996, 56–57, 2001, 35–36).

Rawls does not, then, have a good argument for replacing Sidgwick’s epistemology. Sidgwick and Rawls just have (partly) different projects. The Rawls of Political Liberalism might agree. He has no objection to intuitionism as a “comprehensive doctrine” (Rawls, 1996, 95). He would object only if intuitionists bring publicly to bear, on controversial questions, arguments that he would forbid, given that he allows only “public reasons”—reasons that we reasonably expect everyone to reasonably endorse—to carry weight there. But it is no part of intuitionism to favor (or disfavor) bringing publicly to bear arguments that Rawls would rule out. Doing so is a particular action, to be judged by whatever principles the intuitionist thinks are true.23 Nor would intuitionists, when stating their epistemology, give reasons that would make for a disagreement that threatens stability—this is disagreement with arcane positions such as Kantian constructivism and expressivism, not, say, between Protestants and Catholics.

Near the end of “Kantian Constructivism,” Rawls writes that

for all I have said it is still open to the rational intuitionist to reply that I have not shown that rational intuitionism is false. . . . It has been my intention to describe constructivism by contrast and not to defend it, much less to argue that rational intuitionism is mistaken. In any case, Kantian constructivism . . . aims to establish only that the rational intuitionist notion of objectivity is unnecessary for objectivity. (Rawls, 1999a, 356)

No one takes this seriously. But if one does, there is a good explanation for the weakness of the (apparent) arguments against intuitionism. There were never intended to be any.
4. Street

After Rawls, some give different versions of the heteronomy objection.24 But the argument for constructivism most widely discussed by contemporary analytic philosophers owes nothing to heteronomy. Street argues as follows.

1. Say I know nothing about the connection between properties A and B.
2. Therefore I have reason to think it unlikely that what causes A also causes B.
3. My moral beliefs are caused by a process (evolution via natural mechanisms of selection) that retains beliefs conducive to reproductive fitness.25
4. Moral beliefs are not conducive to fitness in virtue of being true. (For example, believing that pain is bad is conducive to survival and reproduction whether or not pain really is bad.)
5. Therefore I know nothing about the connection between enhancing fitness and being a true moral claim.
6. Therefore I have reason to think it unlikely that my moral beliefs are true (Street, 2006).26

This is a version of Sidgwick’s worry, raised by his second condition, that we think some moral claims true for reasons unconnected with their truth.27 To avoid this skeptical result, Street holds that morally correct beliefs are a function of our “valuings,” regardless of the cause of the valuings. This is Rawls’s constructivism—without the restrictions to the particular ideals Rawls holds, to the particular function he favors, and to a practical political problem. It is motivated not by a concern with autonomy but rather by the desire to fit moral beliefs into a naturalistic (evolutionary) picture. (However, by not assuming any particular valuings, it avoids some of the objections to the heteronomy argument given earlier.) For Street, since the initial valuings have no specific content, the function must rely entirely on what is entailed by valuing. Street argues that valuing x entails valuing the means to x. She tries to avoid the charge of assuming the truth of some moral claims by holding that valuing the means to one’s ends is not a normative principle but rather part of what it is to take something as an end (Street, 2008a, 227–230, 2010, 366–367, 372–374).

Here is a worry. Street’s constructivism yields counterintuitive results. As she notes, Caligula may well, given his valuings, have most reason to torture others (Street, 2010, 371).28 Accepting this requires a very good argument for thinking, with Street, that the alternative to her constructivism is skepticism. (Other versions of constructivism, inspired by Kant, are less counterintuitive. They argue that it is constitutive of valuing, or willing, that Caligula subject himself to the universal law test, or value others as ends (e.g., Korsgaard, 1996, 1997, 2009). But these arguments are obviously very controversial (e.g., Enoch, 2006; Street, 2012); and even were they not, the position is still motivated in part by thinking that the alternative is skepticism.)

One might object to Street’s skeptical argument that (5) does not follow from (4). Promoting reproductive fitness and being a true moral claim might be connected even if moral beliefs do not promote reproductive fitness in virtue of being true. It might be that many true moral claims happen to be such that accepting them promotes an organism’s fitness (e.g., pain is bad) (Enoch, 2010; Skarsaune, 2011). Street can reply that the only way to know that acceptance of a given moral claim promotes one’s fitness is to assume the truth of some moral claims—which would be question-begging in a defense of moral claims (Street, 2008b, 214–216).

But here there is a problem. Street does not want her argument to justify skepticism about perceptual or arithmetic beliefs. One might worry that it does: presumably these beliefs are retained over the “deep” stretches of time featured in evolutionary theory only to the extent that they promote the fitness of their bearers. Street thinks the relevant version of (4) is false: we know that (say) arithmetic beliefs promote fitness in virtue of being true (Street, 2006, 160–161 n35). We imagine some arithmetic belief we take to be false, like 3 – 2 = 0, and argue that holding this belief would compromise fitness: a hunter-gatherer might count three tigers going into a cave, count two going out, and conclude that it is safe to go in since there are no tigers left. But this defense of arithmetic knowledge assumes some arithmetic knowledge (3 – 2 ≠ 0). If arithmetic knowledge can be defended in this way, moral knowledge can be defended by assuming the truth of some moral claims: e.g., the claim that pain is bad.29

Street does reply to a version of the objection (Street, 2008b, 216–217). She grants that she must assume the falsity of (say) 3 – 2 = 0. But she notes that there is a difference between assuming that 3 – 2 ≠ 0 and that pain is bad. Believing that 3 – 2 ≠ 0 is conducive to fitness in virtue of being true. Believing that pain is bad is not conducive to fitness in virtue of being true. Street concludes that there is no explanatory connection between the truth of “pain is bad” and the fitness gained from believing that pain is bad. But this seems too quick: if I know that A and B are correlated, I have an explanation of why what causes A causes B, even if there is no further explanatory connection. Of course, if I must put aside my belief in the correlation, I cannot say this. But similarly, if I must put aside my belief that 3 – 2 ≠ 0, I cannot make out the explanatory connection Street wants.

Street could settle for a general skeptical argument and a general constructivism. But it is a distinctive feature of recent constructivism to think that the normative is special—it must be constructed and relativized to the initial valuings from which the construction proceeds, either to avoid heteronomy or for “practical” purposes, or to avoid skepticism, whereas the nonnormative need not be. Sidgwick agrees that the normative is special in a sense. He thinks normative claims are not made true by correspondence between the claim and any object (Sidgwick, 1905, 440, 1902, 246–247). But he does not take this to have epistemic implications: the same conditions apply to normative and nonnormative claims, and there is no need to tie correctness or truth to special relativized procedures.30
Notes

1. As Stern, 2012a, 8 n3 notes, this understanding of the history, and of Kant, was suggested before Rawls by Olafson, 1967, 38–47. For a history with the same protagonists, see Korsgaard, 2008. I set aside some earlier takes on the disagreement between Sidgwick and Rawls, centering on reflective equilibrium and self-evidence (e.g., Singer, 1974). “Reflective equilibrium” is now understood such that almost everyone uses it, including those who favor self-evidence (e.g., Rawls, 1999a, 289, 1996, 95–96, Scanlon, 2003, 149–151), and Sidgwick includes coherence considerations. For one discussion, see de Lazari-Radek & Singer, 2014, ch. 4.
2. For evidence of the sharing, see Hurka, 2014, 115–116.
3. “Perhaps Kantian” because many reject both the constructivist reading of Kant and a reading of Kant on which he levels the heteronomy charge against rational intuitionism. For the former rejection, see, for example, Stern, 2012a, ch. 1, which is a good guide to the literature. I leave aside the issue of how Kant is best interpreted.
4. “The first danger we meet by a serious effort to obtain clearness, distinctness, precision in our concepts, and definite subjective self-evidence in our judgment” (Sidgwick, 1905, 466).
5. Sidgwick here restricts the consistency test to propositions accepted as self-evident. Elsewhere he sometimes does not mention the restriction (for discussion, see Shaver, 1999, 65, Skelton, 2010, 510 and Hurka, 2014, 114).
6. Earlier, Sidgwick puts the condition in terms of “all other minds that have been led to contemplate” the proposition (Sidgwick, 1879, 109).
7. For a similar argument, see Elga, 2007, 499 n21 and Dougherty, 2013, 222.
8. See, for example, Elga, 2007, 484, McGrath, 2008, 91, Vavova, 2014a, 307, and Dougherty, 2013, 222–223.
9. Wedgwood, 2010, 223–224, Hills, 2010, 152, Crisp, 2011, 151–152, 159, and Crisp, 2015, 111 read Sidgwick as requiring suspension of judgment. Hurka, 2014, 114 takes Sidgwick’s point to concern reduction of confidence, though he also takes Sidgwick to share Elga’s equal weight view. McGrath, 2008, 91 takes Sidgwick to be making a point about certainty though thinks his point also justifies holding that knowledge is lost. Vavova, 2014a, 327 n1 lists Sidgwick as a conciliationist. Setiya, 2012, 12 says that Sidgwick’s position is “nuanced” but that “it is in the vicinity of a simpler view, that the correct response to disagreement with an ‘epistemic peer’ is to become agnostic.”
10. The condition could still do work in one case. Say I think you are slightly more reliable than me. I then take your claim that not-p to decrease your reliability to my level. Given that we are now peers, I conclude that I should be less certain that p.
11. For a similar argument, though with a different notion of peerhood, see King, 2012, 254–257, 261–262; also Audi, 2008, 489. For peerhood as requiring knowing all the same arguments, see, for example, McGrath, 2008, 91–92, 102, 103; Kelly, 2010, 111–112; van Inwagen, 2010, 19–26.
12. For another argument for the less demanding view, see Vavova, 2014a, 316–318, Christensen, 2009, 760, 2011, 14–15.
13. See especially Phillips, 2011, 68–84, which is also a good guide to the earlier literature.
14. This is close to the views of Crisp, 2013, 16, 2015, 211–212, and Skelton, 2010, 500. Phillips asks why Sidgwick ignores his four conditions when considering the proof of utilitarianism, given that (a) the conditions should help resolve disagreements; (b) elsewhere Sidgwick seems to offer the conditions to resolve disagreements (Sidgwick, 1879, 106–107; Phillips, 2011, 71, 73, 88 n23). Regarding (a), perhaps the conditions do not resolve disagreements in the way Sidgwick specifies, by starting with premises accepted by one’s opponent. Regarding (b), perhaps in “Establishment” Sidgwick is not offering the conditions to resolve disagreements but rather to “establish” first principles. The paper opens with a discussion of proof in the context of “conflict of opinion.” The conditions are then introduced as a “quite different process by which a similar result may be possibly reached.” The “similar result” is that “we establish the true first principles;” it may not be the resolution of conflict, at least in the sense that proofs resolve conflict (Sidgwick, 1879, 106, 107).
15. For the ad hominem reading, see Singer, 1974 and Skelton, 2010. The proofs Sidgwick is interested in do not seem merely ad hominem. In “Establishment,” a successful proof is “a way of establishing ethical principles” (Sidgwick, 1879, 107). And if the attempted proof of utilitarianism to the egoist were merely ad hominem, it is not clear why Sidgwick is so upset by its failure.
16. I focus mainly on it, rather than his later “political constructivism,” since it is the former that inspires constructivists in ethics and clearly opposes intuitionism. Street describes it as the “locus classicus for a statement of constructivism in ethics” (Street, 2010, 381 n1).
17. For a parallel explanation of how Kant’s categorical imperative procedure is based on this conception of the person, see Rawls, 1999a, 514–515, 2000, 239–241.
18. Sidgwick’s view seems to be that when I believe some moral claim, the belief causes me to desire to act in accordance with it (Shaver, 2006). Rawls notes this possibility (Rawls, 1996, 92, 1999a, 346, 2000, 229, 235–236).
19. Say “pure practical reason is not . . . its own sovereign authority as the supreme maker of law. . . . Heteronomy means precisely this lack of sovereign authority” (Rawls, 2000, 227).
20. Lloyd, 1994 argues that we can give “shallow” arguments for some of these normative judgments—arguments that do not depend on any “comprehensive doctrine” such as utilitarianism. Her example is that it would be arbitrary to make citizenship depend on race. She argues that although Rawls does not give these arguments, he does not rule them out, and if they work, he can (and should) claim that his principles are true. But the “shallow” appeal to arbitrariness is exactly the argument Sidgwick gives for his axioms (and in his attempted proof to the egoist). Lloyd does, then, make Rawls into an intuitionist, in the sense that his arguments would be no different from those offered by intuitionists.
21. Sidgwick does as well, to some extent: the axioms he takes to do best on his conditions are much weaker than (for example) egoism or utilitarianism. For discussion, see Shaver, 2014; Hurka, 2014, 158–165, and Crisp, 2015, 119–125.
22. For one presentation of the worry that constructivists either start with weak premises and hence derive little that is specific or start with strong premises and hence do not address many, see Timmons, 2003, 400–401. Rawls sometimes scales back his goal to agreement on “constitutional essentials” (e.g., 2001, 28, 1996, xlix, 228–230).
23. Rawls himself distinguishes between intuitionism and particular moral judgments made by intuitionists at 1996, 113. Sidgwick, though a utilitarian, counsels the Ethical Societies to abide by something very like a public reasons constraint (see Skelton, 2006, 22–26).
24. For a (devastating) critical survey, see Stern, 2012b.
25. See Chapters 8 and 9 of this volume.
26. There are many reconstructions of Street’s argument. I have found Schafer, 2010 most helpful. For an argument much like Street’s, see Joyce, 2006, ch. 6.
27. For Sidgwick and evolutionary debunking, see Lillehammer, 2010 and de Lazari-Radek & Singer, 2014, ch. 7. As they note, Sidgwick claims that “no psychogonical theory has ever been put forward professing to discredit [his axioms] by showing that the causes which produced them were such as had a tendency to make them false” (Sidgwick, 1981, 383; also 1879, 111). Sidgwick does not say whether he denies evolution as a cause or denies that evolution is a cause that makes one’s beliefs likely to be false.
28. For an attempt to mitigate the counterintuitiveness of this result, see Street, 2009.
29. For versions of this objection, see Schafer, 2010, 486–488, Shafer-Landau, 2011, 12–13, 23, and Vavova, 2014b, 90–93 (though Vavova is perhaps better read as denying the inference from (1) to (2)).
30. Thanks to Joyce Jenkins for many helpful comments and discussion.
References

Audi, R. (2008). “Intuition, Inference, and Rational Disagreement in Ethics,” Ethical Theory and Moral Practice, 11, 475–492.
Baldwin, T. (2013). “Constructive Complaints,” in C. Bagnoli (ed.), Constructivism in Ethics. Cambridge: Cambridge University Press.
Christensen, D. (2007). “Epistemology of Disagreement: The Good News,” Philosophical Review, 116, 187–217.
———. (2009). “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy Compass, 4, 756–767.
———. (2011). “Disagreement, Question-Begging, and Epistemic Self-Criticism,” Philosophers’ Imprint, 11 (6).
Crisp, R. (2011). “Reasonable Disagreement: Sidgwick’s Principle and Audi’s Intuitionism,” in J. Hernandez (ed.), The New Intuitionism. London: Continuum.
———. (2013). “Metaphysics, Epistemology, Utilitarianism, Intuitionism, and Egoism: A Response to Phillips on Sidgwick,” Revue d’études benthamiennes, 12. etudes-benthamiennes.revues.org/671
———. (2015). The Cosmos of Duty: Henry Sidgwick’s Methods of Ethics. Oxford: Clarendon.
de Lazari-Radek, K. and Singer, P. (2014). The Point of View of the Universe: Sidgwick and Contemporary Ethics. Oxford: Oxford University Press.
Dougherty, T. (2013). “Dealing with Disagreement from the First-Person Perspective: A Probabilist Proposal,” in D. Machuca (ed.), Disagreement and Skepticism. New York: Routledge.
Elga, A. (2007). “Reflection and Disagreement,” Nous, 41, 478–502.
Enoch, D. (2006). “Agency, Shmagency: Why Normativity Won’t Come from What Is Constitutive of Action,” Philosophical Review, 115, 169–198.
———. (2010). “The Epistemological Challenge to Metanormative Realism: How Best to Understand It, and How to Cope with It,” Philosophical Studies, 148, 413–438.
Hills, A. (2010). The Beloved Self. Oxford: Oxford University Press.
Hurka, T. (2014). British Ethical Theorists from Sidgwick to Ewing. Oxford: Oxford University Press.
Irwin, T. (2009). The Development of Ethics, vol. 3. Oxford: Oxford University Press.
Joyce, R. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
Kelly, T. (2010). “Peer Disagreement and Higher-Order Evidence,” in R. Feldman and T. Warfield (eds.), Disagreement. Oxford: Oxford University Press.
King, N. (2012). “Disagreement: What’s the Problem? Or a Good Peer Is Hard to Find,” Philosophy and Phenomenological Research, 85, 249–272.
Korsgaard, C. (1996). The Sources of Normativity. Cambridge: Cambridge University Press.
———. (1997). “The Normativity of Instrumental Reason,” in G. Cullity and B. Gaut (eds.), Ethics and Practical Reason. Oxford: Clarendon Press.
———. (2008). “Realism and Constructivism in Twentieth-Century Moral Philosophy,” in C. Korsgaard (ed.), The Constitution of Agency. Oxford: Clarendon Press.
———. (2009). Self-Constitution: Agency, Identity, and Integrity. Oxford: Clarendon Press.
Larmore, C. (2008). The Autonomy of Morality. Cambridge: Cambridge University Press.
Lillehammer, H. (2010). “Methods of Ethics and the Descent of Man: Darwin and Sidgwick on Ethics and Evolution,” Biology and Philosophy, 25, 361–378.
Lloyd, S. (1994). “Relativizing Rawls,” Chicago-Kent Law Review, 69, 737–762.
McGrath, S. (2008). “Moral Disagreement and Moral Expertise,” Oxford Studies in Metaethics, 3, 87–108.
Olafson, F. (1967). Principles and Persons: An Ethical Interpretation of Existentialism. Baltimore: Johns Hopkins Press.
Phillips, D. (2011). Sidgwickian Ethics. New York: Oxford University Press.
Rawls, J. (1996). Political Liberalism. New York: Columbia University Press.
———. (1999a). Collected Papers. Cambridge, MA: Harvard University Press.
———. (1999b). A Theory of Justice (revised ed.). Cambridge, MA: Harvard University Press. [First edition, 1971].
———. (2000). Lectures on the History of Moral Philosophy. Cambridge, MA: Harvard University Press.
———. (2001). Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
Scanlon, T. (2003). “Rawls on Justification,” in S. Freeman (ed.), The Cambridge Companion to Rawls. Cambridge: Cambridge University Press.
Schafer, K. (2010). “Evolution and Normative Scepticism,” Australasian Journal of Philosophy, 88, 471–488.
Scheffler, S. (1994). “The Appeal of Political Liberalism,” Ethics, 105, 4–22.
Setiya, K. (2012). Knowing Right from Wrong. Oxford: Oxford University Press.
Shafer-Landau, R. (2003). Moral Realism: A Defense. Oxford: Clarendon Press.
———. (2011). “Evolutionary Debunking, Moral Realism and Moral Knowledge,” Journal of Ethics and Social Philosophy, 7 (2012), 1–37.
Shaver, R. (1999). Rational Egoism. Cambridge: Cambridge University Press.
———. (2006). “Sidgwick on Moral Motivation,” Philosophers’ Imprint, 6 (1).
———. (2014). “Sidgwick’s Axioms and Consequentialism,” Philosophical Review, 123, 173–204.
Sidgwick, H. (1879). “The Establishment of Ethical First Principles,” Mind, 4, 106–111.
———. (1902). Philosophy, Its Scope and Relations. London: Palgrave Macmillan.
———. (1905). Lectures on the Philosophy of Kant and Other Philosophical Lectures and Essays. London: Palgrave Macmillan.
———. (1981). The Methods of Ethics (7th ed.). Indianapolis, IN: Hackett Publishing [First Published 1907].
Singer, P. (1974). “Sidgwick and Reflective Equilibrium,” Monist, 57, 490–517.
Skarsaune, K. (2011). “Darwin and Moral Realism: Survival of the Iffiest,” Philosophical Studies, 152, 229–243.
Skelton, A. (2006). “Henry Sidgwick’s Practical Ethics: A Defense,” Utilitas, 18, 199–217.
———. (2010). “Henry Sidgwick’s Moral Epistemology,” Journal of the History of Philosophy, 48, 491–519.
Stern, R. (2012a). Understanding Moral Obligation. Cambridge: Cambridge University Press.
———. (2012b). “Constructivism and the Argument from Autonomy,” in J. Lenman and Y. Shemmer (eds.), Constructivism in Practical Philosophy. Oxford: Oxford University Press.
Street, S. (2006). “A Darwinian Dilemma for Realist Theories of Value,” Philosophical Studies, 127, 109–166.
———. (2008a). “Constructivism About Reasons,” Oxford Studies in Metaethics, 3, 207–245.
———. (2008b). “Reply to Copp: Naturalism, Normativity, and the Varieties of Realism Worth Worrying About,” Philosophical Issues, 18, 207–228.
———. (2009). “In Defense of Future Tuesday Indifference: Ideally Coherent Eccentrics and the Contingency of What Matters,” Philosophical Issues, 19, 273–298.
———. (2010). “What Is Constructivism in Ethics and Metaethics,” Philosophy Compass, 5, 363–384.
———. (2012). “Coming to Terms with Contingency: Humean Constructivism About Practical Reason,” in J. Lenman and Y. Shemmer (eds.), Constructivism in Practical Philosophy. Oxford: Oxford University Press.
Timmons, M. (2003). “The Limits of Moral Constructivism,” Ratio, 16, 391–423.
van Inwagen, P. (2010). “We’re Right. They’re Wrong,” in R. Feldman and T. Warfield (eds.), Disagreement. Oxford: Oxford University Press.
Vavova, K. (2014a). “Moral Disagreement and Moral Skepticism,” Philosophical Perspectives, 28, 302–333.
———. (2014b). “Debunking Evolutionary Debunking,” Oxford Studies in Metaethics, 9, 76–101.
Wedgwood, R. (2010). “The Moral Evil Demons,” in R. Feldman and T. Warfield (eds.), Disagreement. Oxford: Oxford University Press.
Wenar, L. (1995). “Political Liberalism: An Internal Critique,” Ethics, 106, 32–62.
Further Readings

For influential statements of intuitionism after Sidgwick, see W. D. Ross, The Right and the Good (Oxford: Clarendon Press, 1930), ch. 2 and Michael Huemer, Ethical Intuitionism (New York: Palgrave Macmillan, 2005). For good discussions of the Sidgwick-to-Ewing school, see Thomas Hurka, ed., Underivative Duty: British Moral Philosophers from Sidgwick to Ewing (Oxford: Oxford University Press, 2011) and Philip Stratton-Lake, ed., Ethical Intuitionism: Re-Evaluations (Oxford: Oxford University Press, 2002). For convincing worries about Rawls’s attitude to truth and a proposal for what Rawls should say, see Joshua Cohen, “Truth and Public Reason,” Philosophy and Public Affairs, 37, 2–42, 2009. For an account of some motivations for and worries about constructivism, see David Enoch, “Can There Be a Global, Interesting, Coherent Constructivism About Practical Reason?” Philosophical Explorations, 12, 319–339, 2011.
Related Chapters

Chapter 1 The Quest for the Boundaries of Morality, Chapter 2 The Normative Sense: What is Universal? What Varies, Chapter 3 Normative Practices of Other Animals, Chapter 9 The Evolution of Moral Cognition, Chapter 11 Contemporary Moral Epistemology, Chapter 13 The Denial of Moral Knowledge, Chapter 21 Methods, Goals and Data in Moral Theorizing.
13
THE DENIAL OF MORAL KNOWLEDGE
Richard Joyce
1. Introduction

Suppose we make the assumption that to know that p is to have a true and justified belief that p. (We all know, of course, that there are problems with this JTB assumption, but for the purposes of this chapter they will be put aside.) On this assumption, there are three ways of denying that a person knows that p: one can deny that p is true, or deny that the person believes p, or deny that the person’s belief that p is justified. The moral skeptic denies that anyone has moral knowledge, and thus there are three forms of moral skepticism: (i) the error theorist denies that any moral judgments are true; (ii) the noncognitivist denies that moral judgments are beliefs; and (iii) the justification skeptic denies that any moral beliefs are justified. This chapter will discuss these three skeptical options in turn, but will not argue for or against any of them. First, some preliminary points will be made.

The skeptical positions outlined concern non-negative first-order moral judgments.

1. Stealing is morally wrong.

is an example of such a judgment, but the following are not:

2. It is not the case that stealing is morally wrong.
3. John believes that stealing is wrong.
4. You are not justified in judging that stealing is wrong.

The error theorist thinks that (1) is untrue, but may hold that (2), (3), and (4) are true. Likewise, mutatis mutandis, the noncognitivist and the justification skeptic.

What it takes for something to count as a “moral judgment” in the first place is a question that different theorists will answer in different ways.1 Perhaps moral judgments are those that involve the employment of any of a range of certain concepts (wrongness, obligation, evil, etc.), or ontological commitment to any of a range of certain properties (wrongness, obligation, evil, etc.), or the expression of any of a range of certain attitudes (disapproval, approbation, the desire to punish, etc.). And various answers are available for how these different kinds of lists might be drawn up. In the case of properties, for example, one might say that moral properties are those that (putatively) place certain authoritative practical demands upon agents.

Moral skepticism is not a form of moral anti-realism. If we accept the traditional view that moral realism is the position that moral judgments are beliefs that are sometimes true and whose truth-value is an objective matter (under some to-be-specified understanding of objectivity), then skepticism and moral realism may come apart. In short: realism says nothing about justification, and skepticism says nothing about objectivity. One may be a moral realist while maintaining that moral judgments lack justification (thus also being a moral skeptic). Alternatively, one may deny moral skepticism while maintaining that moral truths are entirely subjective (thus also being a moral anti-realist).

Moral skepticism is not a form of moral eliminativism. If we take eliminativism to be the position that we should stop engaging in moral discourse, then skepticism and eliminativism may come apart. Even if moral judgments are all false or all unjustified, the question of whether we should entirely abolish moral discourse remains a live one. A certain kind of moral fictionalist, for example, maintains that we should retain moral discourse even while believing that the moral error theoretic position is true (see Joyce, 2001, 2017). Alternatively, one may deny moral skepticism but still hold that moral discourse is, pragmatically speaking, a damaging practice that is best eliminated. (Such a view forces us to treat the notion of justification with care. In denying moral skepticism one must hold that moral judgments may be epistemically justified; but this is consistent with the eliminativist contention that employing moral discourse is instrumentally unjustified.)
2. Error Theory

The error theorist denies the existence of moral knowledge by denying the truth criterion of the JTB analysis. The error theorist affirms the belief criterion—holding that moral judgments are beliefs (when we consider them as mental entities) and are assertions (when we consider them as speech-acts)—and may remain silent regarding the justification criterion (i.e., the error theorist may or may not hold that our moral judgments are unjustified). The error theorist usually argues for this denial via a two-step argument: (i) that making a moral judgment ontologically commits the speaker to the instantiation of certain properties but (ii) that these properties are not actually instantiated.

Perhaps the simplest way of understanding this view is via analogy with a more familiar form of error theory: atheism. The atheist maintains that when people engage sincerely in theistic discourse they commit themselves to the existence of certain entities (gods, divine providence, post-mortem paradise, etc.) but the world simply doesn’t contain these entities; hence, the atheist thinks, theistic discourse suffers from a systematic failure to state truths. (And it is worth noting in passing that the atheist may or may not hold that these theistic beliefs are epistemically justified.) Even more simply: If I assert “The book is blue” then I am claiming that the book instantiates the property of blueness; thus if the book is not blue I have said something false. The difference in the case of moral error theory is that the error theorist argues that the moral properties that are ascribed to things in the world—wrongness, obligatoriness, evil, etc.—are never instantiated. A weak form of error theory will hold that this lack of instantiation is merely a contingent affair; a stronger form will hold that these properties cannot possibly be instantiated. (And it is worth noting in passing that there are analogous weak and strong versions of atheism.)

Let’s think a little more, in very general terms, about why someone might endorse atheism. First, such a person does need to have some kind of conceptual grasp of the entities whose existence is being denied. She thinks of God, say, as an entity that is (inter alia) omnipotent, omniscient, and the creator of the universe. Obviously, if a theist were to respond “No, God doesn’t have those characteristics at all,” then the debate would break down; in order for the theist and atheist to be in disagreement, they must first reach a threshold of conceptual agreement. Having sufficiently grasped the relevant concepts, the atheist then denies that the world contains anything answering to them. Perhaps she looks around and concludes that there is no phenomenon whose explanation requires the existence of God—there are better explanatory hypotheses across the board—and she couples this with some principle of parsimony that permits (or demands) disbelief in anything so explanatorily unnecessary. Perhaps she reflects on the many different world religions and their varying conceptions of God (even while agreeing on the core divine attributes), and the natural question of who’s got it right and who’s got it wrong (and how this epistemic discrepancy would be explained) arouses in her the suspicion that none of them have got it right, because there’s nothing to get right. Or, alternatively, perhaps she just finds the characteristics ascribed to God outlandish and utterly far-fetched, and she is committed to a worldview that banishes such weirdness. (Perhaps she even finds one or more of the characteristics contradictory, in which case only the stronger version of atheism is available.) Needless to say, the atheist is well aware that billions of people across the world and throughout history have believed in God, but this doesn’t particularly move her—she probably has a picture of humans as epistemically vulnerable in this respect.

The picture just sketched of the atheist’s likely reasoning broadly matches that of the moral error theorist. First, the moral error theorist needs to have some conceptual grasp of the entities whose existence is being denied: properties like moral wrongness, obligatoriness, and evil. She may then maintain that there is no phenomenon whose explanation requires the existence of these properties—there are better explanatory hypotheses across the board—and she couples this with some principle of parsimony that permits (or demands) disbelief in anything so explanatorily unnecessary. Perhaps she reflects on the many different moral systems in the world, and the natural question of who’s got it right and who’s got it wrong (and how this epistemic discrepancy would be explained) arouses in her the suspicion that none of them have got it right, because there’s nothing to get right. Or, alternatively, perhaps she just finds the characteristics ascribed to moral properties outlandish and utterly far-fetched, and she is committed to a worldview that banishes such weirdness. (Perhaps she even finds one or more of the characteristics contradictory, in which case only the stronger version of error theory is available.)
The error theorist will be well aware that billions of people across the world and throughout history have believed in morality, but this doesn’t particularly move her—she probably has a picture of humans as epistemically vulnerable in this respect.

Most of the aforementioned argumentative moves can be found in the work of John Mackie, who coined the term “error theory” (1977). First, Mackie has to establish the conceptual step of the argument. He maintains that our moral conceptual framework commits us to the existence of “objective values” and “objective prescriptions”: actions that must be done, whether we like it or not—where this categoricity is not a human construction but rather something that supposedly transcends human institutions. According to Mackie, we think that murdering innocent people is wrong (for example) not because it undermines the perpetrator’s interests (i.e., we do not think of the prohibition as a hypothetical imperative) and not because some human institution has explicitly or implicitly decreed that it is wrong (i.e., we do not think of the prohibition as an institutional norm)—rather, we think of murdering innocent people as “wrong in itself” (1977, 34), that the action has “not-to-be-doneness somehow built into it” (1977, 40). Mackie maintains that any system of norms that didn’t support these features would lack the peculiar but crucial kind of authority with which we imbue our moral prescriptions. It is not going too far, he asserts, “to say that this assumption [that there are objective values] has been incorporated in the basic, conventional, meanings of moral terms” (1977, 35).

Having established (to his own satisfaction) this conceptual step, Mackie is well placed to execute some of the arguments that constitute the substantive step of the error theorist’s case: that these properties are not actually instantiated. His argument from relativity starts with the observation of variation and disagreement among moral systems and asks which explanatory hypothesis is more plausible: (i) that cultures disagree because they vary in their epistemic ability to discover objective moral facts or (ii) that cultures disagree because they are, essentially, inventing the moral facts. Mackie thinks the latter hypothesis is preferable. His argument from queerness states that properties with such unusual practical authority would be simply too bizarre for us to countenance their existence—a skepticism that is supplemented by the question of by what means we would have epistemic access to such properties. (“A special sort of intuition” is quickly dismissed by Mackie as “a lame answer” (1977, 39).)

Mackie is aware of but unmoved by the fact that billions of people have believed in morality. He explains the widespread error by speculating that humans have a tendency to “objectify” their subjective concerns and values (invoking Hume’s metaphor of the human mind’s “great propensity to spread itself on external objects”). If it is true that humans have a tendency to project their feelings onto their experience of the world, resulting in their erroneously judging certain actions to be required or prohibited with an objective authority, then billions of people believing in morality is exactly what the error theory would predict.2

Opposition to Mackie’s arguments (or error theoretic arguments with a similar structure) can take several forms. Probably the most vulnerable premise of the skeptical argument is the conceptual step, which maintains that moral properties must have a special kind of objective authority. One might respond that even if it is true that many people have thought moral properties to have this kind of authority, it is nevertheless not necessary that they do.
Analogously, at one time everyone thought that the Earth is the center of the universe, but it didn’t follow that it was part of the very concept Earth that it has this feature, such that the discovery that the Earth is not at the center could be discounted on a priori grounds.3

It is worth noticing that it is not just the precise content of Mackie’s conceptual step that is potentially problematic here. The error theorist might alight on a different puzzling feature of moral properties: not that they involve objective authority (as Mackie thought), but, say, that they appear to imply the existence of a kind of pure free will. (And then the substantive step of the error theorist’s argument would aim to establish that the world just doesn’t contain the requisite kind of free will.) But the opponent can reply as before: granted that many people have thought that moral properties involve this odd kind of pure free will, it is nevertheless not necessary that they do. This problem generalizes and reiterates simply because there is no consensus on how we should distinguish between “X is widely (/universally) believed about Y” and “X is an essential part of the concept Y.” Whatever problematic characteristic of moral properties the error theorist offers, it would appear that it is always possible to doubt that it is an essential characteristic, and at this point the error theorist and her opponent will not know where to turn to settle their dispute.

A typical example of this kind of opponent is the moral naturalist who identifies moral rightness (say) with something along the lines of whatever maximizes happiness (just to choose an extremely simple version). Naturalistic properties of this kind do not appear to have the kind of objective practical authority the existence of which Mackie doubts: there seems nothing inherently irrational about someone who doesn’t care whether happiness is maximized, and so it would appear that a person might have no reason to care about moral rightness (ditto, mutatis mutandis, for any naturalistic property). The naturalist accepts this, but holds that such authority is an unnecessary extravagance that Mackie has tried to foist on moral ontology—in fact (the naturalist asserts) moral discourse was never committed to anything so grand (see, e.g., Railton, 1986; Brink, 1989; Copp, 2010).

A different kind of opponent will concede that Mackie has gotten the conceptual step right—yes, it is an essential characteristic of moral properties that they have objective authority—but resist the substantive step. Such an opponent would argue that there are properties with this objective authority instantiated in the actual world. Certain forms of moral rationalism—according to which moral facts are identified with what is practically rational or irrational to do—are instances of this kind of opposition. If the rationalist holds a non-instrumentalist account of rationality (that is, denies that rational imperatives are hypothetical) and, moreover, maintains that rational norms exist and are not the product of human construction, then he or she can reject Mackie’s substantive step (see, e.g., Kant, [1785] 1985; Nagel, 1970; Gewirth, 1978). Certain forms of moral nonnaturalism might also oppose Mackie by accepting the conceptual step but rejecting the substantive step. It is, after all, Mackie’s commitment to a certain naturalistic worldview that powers his argument from queerness; if, therefore, one lacks that standing commitment to place a constraint on ontology—as the nonnaturalist does—then one might countenance the existence of properties with objective authority (see, e.g., Moore, 1903; Enoch, 2011). (As mentioned before, even if the error theorist puts forward a different puzzling feature of moral properties—e.g., that they imply the existence of pure free will—opposing arguments with a matching format will be possible.)
3. Noncognitivism

The noncognitivist denies the existence of moral knowledge by denying the belief criterion of the JTB analysis. Although we speak naturally of “moral beliefs” and say things like “Mary believes that telling lies is wrong,” such talk is, according to the noncognitivist, at best a kind of shorthand that doesn’t withstand scrutiny as a literal description. Rather, the kind of mental state associated with making a moral judgment is something other than belief, and, correspondingly, the kind of speech-act associated with publicly declaring a moral judgment is something other than assertion (the expression of belief). One can interpret noncognitivism as an attempt to avoid the error theory. Sympathizing, perhaps, with worries that moral properties would be ontologically very odd indeed, so much so that asserting their instantiation would involve speakers in falsehood, the noncognitivist instead proclaims that when we make a moral judgment we simply aren’t making reference to moral properties: moral judgments cannot be false because they were never the kind of thing designed to have truth-value in the first place.

Classic noncognitivism appeared in the 1930s, espoused by such thinkers as Rudolf Carnap (1935), A. J. Ayer (1936), and Charles Stevenson (1937). More contemporary noncognitivism emerged in the 1980s, especially through the work of Simon Blackburn (1984, 1993) and Allan Gibbard (1992, 2003). One of the main differences between the two is in their differing attitudes toward commonsense folk intuitions. The early noncognitivists were more willing to accuse ordinary speakers of confused thinking about their own moral discourse; later noncognitivists offer a more conciliatory tone that attempts to accommodate as much of folk discourse as possible. So, for example, the statement with which I began this section—that noncognitivists maintain that when people say things like “Mary believes that telling lies is wrong” they are saying something false—is something that early thinkers like Ayer were willing to accept but that later noncognitivists will probably feel uncomfortable about and attempt to deny. (And the same thing goes for statements like “Mary knows that it would be wrong for her to break that promise,” “Mary asserted that breaking promises is wrong,” and “The sentence ‘Promise-breaking is wrong’ is true.” Ayer would likely accuse anyone who asserted any of these claims of simply making a blunder, whereas Blackburn would try to rescue appearances.)

At this point it would be natural to wonder whether later “noncognitivists” should even count as noncognitivists, given their desire to deny what appears to be the central tenet of the theory. And this is indeed a significant issue, with several commentators claiming that the noncognitivist can take attempts at accommodation too far (see Dreier, 2004; Cuneo, 2008). (It is worth noting that the contemporary metaethicist whose name is most closely associated with noncognitivism—Blackburn—has steadfastly eschewed the label.) This is not a debate to be adjudicated here; suffice it to say that if a so-called noncognitivist aims to accommodate moral beliefs—actual literal beliefs—then this position will no longer count as a denial of the belief criterion of the JTB analysis and therefore will not be of concern to this chapter. We are concerned here only with the belief-denying forms of noncognitivism.

Strictly speaking, noncognitivism is an entirely negative doctrine: it denies that moral judgments are beliefs and thus denies the existence of moral knowledge. But there are a variety of positive theses offered by noncognitivists, too. Carnap states that moral judgments function as commands. Ayer claims that moral judgments function to express our feelings of (dis)approval while also attempting to arouse similar feelings in others. Stevenson has a mixed view, according to which moral judgments both state our feelings on the matter (a cognitive element) and enjoin others to share those feelings (a noncognitive element).
Blackburn argues that moral judgments express our conative attitudes. And Gibbard maintains that moral judgments function to evince our allegiance to a given normative framework.

The attractions of noncognitivism are several. One has already been mentioned: that it sidesteps thorny puzzles about the ontology of moral properties and the nature of our epistemic access to them. According to the noncognitivist, there simply aren’t any such properties with which we could be in epistemic contact. Another apparent advantage of noncognitivism is that it can accommodate the close connection between moral judgment and motivation. If the mental state a person is in when he makes a moral judgment is a conative state, then it is no mystery that he should have some (though defeasible) motivation to act in accordance with that judgment. Noncognitivism accounts easily for the phenomenon of moral diversity: different individuals and different cultures approve and disapprove of different things, so moral variation is to be expected. Noncognitivism also accounts for the apparent heatedness and intractability of moral disagreement: being conative, moral judgments are both emotional and potentially impervious to rational debate and empirical evidence.

There are also several problems for noncognitivism. One has already been mentioned: that it would appear to deny the existence of certain phenomena that are commonly spoken of: moral beliefs, moral truths, moral knowledge, moral assertions. In this the noncognitivist doesn’t appear to be worse off than any other kind of moral skeptic. All, ex hypothesi, deny moral knowledge—and while it may be true that the error theorist doesn’t deny the existence of moral beliefs, whatever positive mark for “intuitiveness” the error theorist thereby gains, it is more than compensated for, most would think, by the accusation of massive falsehood.

The most famous problem for the noncognitivist is the embedding problem (a.k.a. the Frege-Geach problem). Suppose that when a speaker declares “Stealing is morally wrong” she is not ascribing a property to stealing—indeed, she is not asserting anything truth-evaluable at all—but rather saying something akin to “Boo to stealing!” or “Don’t steal.” That may be all very well, but what are we going to say about her embedding this judgment in logically complex contexts, such as “If stealing is wrong, then it is wrong to encourage your brother to steal”? And what are we going to say about how these two utterances—the freestanding and the embedded forms—combine in order to entail the conclusion “Therefore it is wrong to encourage your brother to steal”? The obvious first pass will look something like this:

1. Boo to stealing.
2. If boo to stealing, then boo to encouraging your brother to steal.
3. Therefore, boo to encouraging your brother to steal.

But if (1) is not even truth-evaluable, then what sense can we make of the conditional (2) being truth-evaluable (which is to say, how can we make sense of its really being a conditional?), and how can we understand the argument as valid, since validity is defined as a truth-preserving relation?

One strategy for the noncognitivist’s reply is to earn the right to speak of the components of such arguments as truth-evaluable, after all (this is what Blackburn refers to as “fast track quasi-realism” (1993, 184–186)). If this program also involves earning the right to speak of the components of such arguments as items to be believed, as it presumably will, then (as stated earlier) it will no longer count as the form of moral skepticism presently under discussion and is thus beyond the purview of this chapter. Another strategy for the noncognitivist would be simply to bite the bullet and declare that we’re doing something incoherent when we embed moral judgments in logically complex contexts. However, ascribing such widespread incoherence to a linguistic population will usually be thought of as a very unattractive option. (Note that it is qualitatively different to the widespread mistake that the error theorist attributes to speakers. The error theorist thinks we’re all mistaken about the nature of the world—a possibility that we know, when surveying the history of human thinking, has frequently turned out to be the case. The noncognitivist, by contrast, thinks that we’re all mistaken about the nature of our own linguistic practices—a possibility that is considerably less familiar.)

The more interesting strategy for the noncognitivist’s reply is to deny that the components of the argument are truth-evaluable (deny that they are items of belief) while maintaining that, in the moral context, relations akin to, but distinct from, logical connectives and validity are in play. So, for example, while an entirely nonmoral sentence like “If the book is blue, then the book is colored” uses a conditional connective (understood however we would ordinarily understand it), one that includes a moral claim, such as “If it is wrong to steal, then it is wrong to encourage your brother to steal,” can be interpreted as using something that is like a conditional, but different in that it need not connect truth-evaluable relata but can connect, say, expressions of conative attitudes. The enterprise then becomes to define this quasi-conditional by reference to its relation to other quasi-connectives (such as quasi-negation, etc.) and its contribution to quasi-validity. What the noncognitivist seeks to develop, in other words, is a logic of attitudes (see Blackburn, 1984, 193–196; Schroeder, 2010).

There are further challenges for the noncognitivist that might be mentioned, but I shall instead end this section by discussing what bearing empirical evidence may have on the debate between the cognitivist and the noncognitivist. Suppose for the sake of argument that we could build a machine that reliably detects the occurrent conative state of approval (say): when the machine is aimed at someone in that state, a little green light turns on. Suppose we observe a great many conversations involving moral topics (and a bunch of control conversations involving nothing moral), and we witness the green light go on whenever someone makes a moral judgment (and in conversations involving nothing moral, the light never goes on). It’s important to recognize that this would not count as empirical evidence in favor of noncognitivism. The data would not support the conclusion that the state of approval is the moral judgment—that state might be something that reliably accompanies the moral judgment (e.g., that causes it or is caused by it). And the data would not support the conclusion that the utterance “X is morally good” functions to express that state of approval. Indeed, even if that utterance did function to express that mental state, we should hardly predict a constant correlation between the two. By comparison, we know that the act of apologizing functions to express regret (for that is part of how we define the speech-act of apology), but we don’t therefore expect that every apology will be accompanied by occurrent regret (some apologies are insincere, for instance, and some are sincere but presumably too hurried or habitual for emotional arousal), nor that every episode of regret will be accompanied by an apology.
If we want to know what kind of mental state a kind of utterance functions to express, the type of empirical evidence to be consulted is not neuroscientific but socio-linguistic. What determines whether a kind of speech-act S expresses a kind of mental state M (in the sense of “expresses” in which we’re interested) are the conventions surrounding S, accepted by speaker and audience. The kind of “acceptance” here is more like knowledge-how than knowledge-that—the conventions governing our speech-acts are not always transparent to competent speakers—therefore the best guide to understanding the conventions regulating speech-acts is to scrutinize their use in an array of real-life settings. In other words, the noncognitivist who maintains that moral judgments express some kind of conative state rather than belief needs to locate socio-linguistic evidence that this is how moral judgments function.

Mark Kalderon (2005) presents such an argument in the course of advocating a view he calls “moral fictionalism.” When one believes something, Kalderon claims, then upon encountering an epistemic peer who firmly disagrees, one has a “lax obligation” to examine one’s reasons for believing as one does. Kalderon calls this “noncomplacency.” However, the norms surrounding morality, he argues, permit complacency: we feel no embarrassment in steadfastly maintaining our moral views in the face of disagreement from epistemic peers. Were this argument to succeed, there would be grounds for doubting that moral discourse is belief-expressing.

But evidence can be mustered in the other direction, too. David Enoch (2011) argues that cases of ordinary conflict of interpersonal preferences (e.g., you want us to play tennis and I want us to go to a movie) are governed by an impartiality norm: it is wrong to stand one’s ground—rather, some kind of egalitarian compromise ought to be sought (e.g., tossing a coin). Cases of straightforward factual disagreement, by contrast, lack this characteristic. And the norms surrounding moral disagreement, Enoch goes on to argue, appear much more like those surrounding factual disagreement. If you and I disagree on the moral status of some kind of action—you think it’s permissible and I think it’s wicked—then impartial solutions like tossing a coin or alternating doing it your way and then doing it my way, etc., seem inappropriate. Enoch’s conclusion is that moral judgment appears to be governed by norms indicating that it is not a matter of personal preference, which counts against many forms of noncognitivism.

I have sketched Kalderon’s and Enoch’s views in the briefest terms possible (for more detailed and critical commentary, see Joyce, 2011 and Joyce, 2013). And it will be noticed that the two views are not really opposed to each other, for it is entirely possible that moral judgments are neither expressions of belief nor mere reflections of personal preference. The intention here is not to support the evidence in either direction but rather to provide an illustration of the kind of empirical investigation of socio-linguistic norms that might lead to progress in the debate between the cognitivist and the noncognitivist.4
4. Justification Skepticism

The justification skeptic denies the existence of moral knowledge by denying the justification criterion of the JTB analysis. As mentioned earlier, this form of skepticism is compatible with moral realism. According to this kind of skeptic, moral judgments are/express beliefs, and the beliefs may or may not be true, but we lack justification for holding these beliefs. Everyone accepts that people sometimes lack justification for their moral judgments, so this form of skepticism can be seen as something familiar writ large. One should distinguish a weak version of the view—according to which everyone as a matter of fact lacks justification for their moral beliefs (but where justification could in principle be instated)—from a strong version—according to which we are permanently stuck lacking justification.
The fact that there are many competing theories of what it takes for a belief (or believer) to have justification makes it challenging to sum up or assess this view succinctly. I shall proceed by reflecting on some familiar folk opinions about what might render a moral judgment unjustified and then sketch how some of these might be “writ large” to undermine moral knowledge in general.

Suppose someone, Sally, judges that an available action, A, is morally good. We will assume that cognitivism is true, so Sally’s judgment is a belief. What kind of factors might ordinarily lead us to doubt that Sally’s belief is justified? First, we might think that Sally is insufficiently impartial in making this judgment—perhaps we see that a self-serving bias has distorted her thinking on the matter. We might think that other factors have distorted her thinking, too, such as her being overly emotional, or upset, or being drunk, or being hypnotized, and so forth. We might doubt the epistemic status of Sally’s belief if we note that other people in the same situation (armed with the same evidence and powers of reflection as Sally) have confidently judged A-type actions to be morally abhorrent. If we notice that Sally’s judgment about A appears to be inconsistent with some of her other beliefs, then we might doubt that her moral judgment is justified. Perhaps we know that Sally thinks that A is good simply because she’s parroting what she’s been told on the matter without sufficiently reflecting on it herself. Perhaps the process that has led Sally to this belief—including the process by which the belief gained purchase in the community from which Sally picked up the belief—is an unreliable one, in the sense that the process doesn’t appear to be sensitive to whether A actually is morally good. (Perhaps the process would have led to Sally’s believing that A is good even if it is false that A is good, or perhaps we cannot see how the process even could track facts about moral goodness.)

This is not an exhaustive list, and I’m not claiming that any of these factors alone would be sufficient for us immediately to declare Sally’s moral judgment to lack justification, but it seems fair to say that everyday opinion allows that these factors all may bear on whether, and to what extent, Sally’s belief that A is morally good is justified. If someone thought that moral judgments in general manifest one or more of these justification-undermining features, then this may be ground for endorsing justification skepticism.

Walter Sinnott-Armstrong (2006, ch. 9) argues that moral judgments exhibit several of these features. First (Sinnott-Armstrong argues), which moral beliefs one holds is likely to affect one in a variety of material ways—sometimes very substantively—so we are all, to a greater or lesser extent, potentially biased in our moral judgments. Second, while influences like drunkenness and hypnosis don’t play a widespread role in human moral judgments, emotions certainly do. There is a body of evidence from experimental and social psychology suggesting that manipulating a subject’s emotions will often result in altering his/her moral attitudes (see Moretti & di Pellegrino, 2010; Horberg et al., 2011; Olatunji et al., 2016). There is evidence from neuroscience revealing the critical role that emotional arousal plays in moral judgment (see Young & Koenigs, 2007; Decety & Wheatley, 2015; Liao, 2016).
The idea that emotions influence moral judgment is of course entirely mundane, but empirical investigations may reveal the influence to be far more pervasive than commonly realized. The social intuitionist view of moral judgment as an emotional knee-jerk reaction, with moral reasoning serving as post hoc rationalization (see Haidt, 2001), would very much cast the epistemic credentials of moral beliefs into doubt.5
Third, moral disagreement among equally well-informed and reflective individuals/cultures appears to be a common phenomenon. In an earlier section I mentioned Mackie's argument that the ubiquity of moral disagreement might support the view that moral facts do not exist at all, but here we are countenancing something different: we are not being offered skepticism as an explanatory hypothesis of moral disagreement (as in Mackie's argument); rather, we are wondering whether the existence of epistemic peers who disagree with you over X should undermine the justification of your beliefs about X. There is a very long tradition, stretching back to the ancient skeptics, of thinking that it should: references to widespread disagreement on all matters were central weapons in the classical skeptics' arsenal. But we are not here interested in those radical global forms of skepticism that deny all knowledge; we are interested in moral skepticism in particular—a skepticism that might be maintained within a broader nonskeptical worldview—raising the question of whether there is something special about moral disagreement in particular. It seems that there might be. After all, finding people who disagree with us about moral matters is much easier than finding people who genuinely disagree about whether this table is bigger than this chair. I interact on a daily basis with people who I know disagree with me about important moral matters. Whole cultures have thought that slavery is morally acceptable, to say nothing of human sacrifice, violence toward women, killing foreigners, and so on. It might be responded that individuals and cultures have held some really wacky nonmoral beliefs, too (e.g., that leeches are a cure-all for disease). However, there remains room for arguing that moral disagreements still seem different insofar as they seem particularly resistant to resolution through rational debate or consultation of evidence (see Doris & Plakias, 2008; Adams, 2013). This may be significant, since my justification for believing that p isn't undermined by encountering someone who disagrees with me if the disagreement could be resolved through reasoned discussion and examination of evidence. It is only if the disagreement is intractable in the face of such discussion that the presence of an epistemic peer who believes that not-p might show my belief to lack justification.

Fourth and finally, significant doubts can be raised about the origin of human moral beliefs. The crucial question is: are the processes that lead to the formation of moral beliefs sensitive to the moral facts? The focus of this question is adjustable: it might pertain to how individuals come to internalize moral norms (a question of developmental psychology), or to how cultures come to adopt their moral norms (a question of cultural anthropology), or to how humans came to have the capacity to wield moral norms in the first place (a question of evolutionary psychology).6 Suppose that the best empirically supported answers to all these questions turn out to be hypotheses that the moral error theorist can happily endorse. For example, perhaps evolutionary psychology will reveal that the reason our ancestors developed a brain with the capacity to categorize their social world in terms like right, good, evil (etc.) was not that it allowed them to track a special class of useful facts but that it afforded them the advantage of strengthening social cohesion (perhaps by bolstering the motivation in favor of certain cooperative activities).
This, clearly, is something to which an error theorist need have no objection. (This is the basis of the "evolutionary debunking argument"; see Joyce, 2006, 2016.) But if all moral beliefs are best explained by a hypothesis that a moral error theorist may endorse, then the process that has given rise to moral beliefs can hardly be claimed to be a reliable one. And if moral beliefs are the product of an unreliable process—a process by which beliefs are not connected to the relevant facts—then moral beliefs lack justification.
It is possible that none of the aforementioned grounds for questioning the justification of moral beliefs is alone sufficient but that together (or in combination with further considerations not mentioned here) they may entail a skeptical conclusion. It is worth noting, though, that the kind of justification skepticism that would be established would be the weak kind. The considerations raised at best show that our moral judgments lack justification; they do not appear to demonstrate that our moral judgments are unjustifiable. In other words, these arguments, if successful, would show that we lack moral knowledge, not that moral knowledge couldn't possibly be forthcoming.

An argument for the stronger skeptical conclusion is a version of the regress argument—well known to ancient skeptics like Sextus Empiricus and Agrippa, but here aimed at moral knowledge in particular. According to the traditional regress argument, any belief must be justified by other beliefs, but in order for a belief to provide justification it must itself be justified, which leads to an infinite regress. The ancients used this argument to generate a global skepticism, but we are interested in the possibility that the regress argument might succeed in showing that moral knowledge is impossible but fail to show that ordinary knowledge about the world is impossible. Here the argument can be sketched only very briefly.

The foundationalist attempts to defeat the regress argument by identifying a privileged class of beliefs whose justification does not depend on other beliefs. Candidates for these basic beliefs include beliefs about our own experiences and perceptual beliefs about the world. Let us suppose for the sake of argument that some such version of foundationalism is plausible, and so the regress argument for global skepticism is defeated. Now consider moral beliefs. Are they basic foundational beliefs or are they justified by other beliefs? While there are certainly attempts in the philosophical literature to argue that moral beliefs may be foundational (e.g., Ross, 1930, 1939; Tolhurst, 1990), these views have not won widespread acceptance (recall Mackie's snub of moral intuitionism as "a lame answer"); let us suppose then, again for the sake of argument, that these arguments fail. It is important at this point to recognize that the success of foundationalism generally (regarding, say, perceptual beliefs) is entirely compatible with the abject failure of moral foundationalism. If this were the case, then the only way left by which moral beliefs might be justified is by inference from nonmoral beliefs. But this avenue also immediately runs into severe generic problems, for it is difficult to see how a body of entirely nonmoral beliefs (or propositions, if you prefer) could even possibly nontrivially entail a moral belief (or proposition). Again: while there are certainly attempts in the philosophical literature to argue that moral conclusions may be validly derived from nonmoral premises (e.g., Prior, 1960; Searle, 1964; Zimmerman, 2010), these views have not won widespread acceptance. Obviously, no attempt has been made here to make the several steps of this case at all plausible; my intention is just to outline the structure of an argument that leads to the strong skeptical conclusion that justification for our moral beliefs is absent and will remain so.
5. Conclusion

This chapter has revealed the many ways that a person may deny the existence of moral knowledge. The skeptical view has been organized into three major metaethical positions, and we have seen that there are often different versions of these positions and numerous argumentative paths leading to each. For the sake of simplicity, knowledge has been assumed to be amenable to the JTB analysis, but, clearly, if this presupposition were relaxed, then the field of moral skeptical possibilities would be proportionally more complicated.
Further Readings

For general introductory discussion of moral epistemology, see Aaron Zimmerman's Moral Epistemology (London: Routledge, 2010) and papers in Walter Sinnott-Armstrong and Mark Timmons' collection Moral Knowledge? New Readings in Moral Epistemology (Oxford: Oxford University Press, 1996). Karen Jones' "Moral Epistemology," in F. Jackson and M. Smith (eds.), The Oxford Handbook of Contemporary Philosophy (Oxford: Oxford University Press, 2005) provides a diagnosis of contemporary moral epistemology, and a slightly older but useful paper that does the same is Robert Audi's "Moral Knowledge and Ethical Pluralism," in J. Greco and E. Sosa (eds.), The Blackwell Guide to Epistemology (Oxford: Blackwell, 1999). John Mackie's book Ethics: Inventing Right and Wrong (Harmondsworth: Penguin, 1977) presents the classic view of moral error theory, and Russ Shafer-Landau's book Moral Realism: A Defense (Oxford: Oxford University Press, 2003) provides a counterpoint by arguing for moral realism. Both these views, and other metaethical positions besides (including noncognitivism), are described and assessed in Alex Miller's An Introduction to Contemporary Metaethics (Cambridge, England: Polity Press, 2013). Regarding the question of whether moral judgments can be justified, Walter Sinnott-Armstrong's book Moral Skepticisms (Oxford: Oxford University Press, 2006) surveys the options, ultimately arguing for a kind of moderate skepticism. The contrasting view, that moral judgments can be justified (within a contextualist framework), is advocated by Mark Timmons in Morality Without Foundations (Oxford: Oxford University Press, 1998).
Related Chapters

Chapter 9 The Evolution of Moral Cognition, Chapter 12 Contemporary Moral Epistemology, Chapter 14 Nihilism and the Epistemic Profile of Moral Judgment, Chapter 19 Foundationalism and Coherentism in Moral Epistemology, Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action, Chapter 21 Methods, Goals and Data in Moral Theorizing, Chapter 30 Religion and Moral Knowledge.
Notes

1. See Chapters 1, 2, 3, and 7 ("Moral Reasoning and Emotion") of this volume.
2. See Chapter 2 of this volume for the suggestion that the "externalization" of certain norms is relatively universal among human social groups.
3. See Stich's discussion of Frankena's normative proposals for our conceptualization of moral judgment in Chapter 1 of this volume.
4. See Chapters 1, 2, 4, and 9 of this volume for a summary of empirical investigations of roughly this sort.
5. See Chapters 4 and 7 of this volume for extensive discussion of the role emotions play in moral reasoning and judgment.
6. For an in-depth discussion of developmental human moral psychology, see Chapter 5 of this volume. For the evolution of moral psychology, see Chapters 3, 8, and 9 of this volume.
References

Adams, Z. (2013). "The Fragility of Moral Disagreement," in D. Machuca (ed.), Disagreement and Skepticism. New York: Routledge, 109–130.
Ayer, A. J. (1936). Language, Truth and Logic. London: Victor Gollancz.
Blackburn, S. (1984). Spreading the Word. Oxford: Oxford University Press.
———. (1993). Essays in Quasi-Realism. Oxford: Oxford University Press.
Brink, D. (1989). Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Carnap, R. (1935). Philosophy and Logical Syntax. London: Kegan Paul, Trench, Trubner & Co.
Copp, D. (2010). "Normativity, Deliberation, and Queerness," in R. Joyce and S. Kirchin (eds.), A World Without Values. Dordrecht: Springer, 141–165.
Cuneo, T. (2008). "Moral Realism, Quasi Realism, and Skepticism," in J. Greco (ed.), The Oxford Handbook of Skepticism. Oxford: Oxford University Press, 176–199.
Decety, J. and Wheatley, T. (eds.). (2015). The Moral Brain: A Multidisciplinary Perspective. Cambridge, MA: MIT Press.
Doris, J. and Plakias, A. (2008). "How to Argue About Disagreement: Evaluative Diversity and Moral Realism," in W. Sinnott-Armstrong (ed.), Moral Psychology Vol. 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 303–331.
Dreier, J. (2004). "Meta-Ethics and the Problem of Creeping Minimalism," Philosophical Perspectives, 18, 23–44.
Enoch, D. (2011). Taking Morality Seriously. Oxford: Oxford University Press.
Gewirth, A. (1978). Reason and Morality. Chicago: University of Chicago Press.
Gibbard, A. (1992). Wise Choices, Apt Feelings. Cambridge, MA: Harvard University Press.
———. (2003). Thinking How to Live. Cambridge, MA: Harvard University Press.
Haidt, J. (2001). "The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment," Psychological Review, 108, 814–834.
Horberg, E., Oveis, C. and Keltner, D. (2011). "Emotions as Moral Amplifiers," Emotion Review, 3, 237–244.
Joyce, R. (2001). The Myth of Morality. Cambridge: Cambridge University Press.
———. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
———. (2011). "Review Essay: Mark Kalderon's Moral Fictionalism," Philosophy and Phenomenological Research, 85, 161–173.
———. (2013). "Review of David Enoch's Taking Morality Seriously," Ethics, 123, 365–369.
———. (2016). "Reply: Confessions of a Modest Debunker," in U. Leibowitz and N. Sinclair (eds.), Explanation in Mathematics and Ethics. Oxford: Oxford University Press, 124–145.
———. (2017). "Fictionalism in Metaethics," in D. Plunkett and T. McPherson (eds.), Routledge Handbook of Metaethics. New York: Routledge, 72–86.
Kalderon, M. (2005). Moral Fictionalism. Oxford: Oxford University Press.
Kant, I. [1785] (1985). Groundwork of the Metaphysics of Morals, trans. H. J. Paton. London: Hutchinson.
Liao, M. (ed.). (2016). Moral Brains. Oxford: Oxford University Press.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Harmondsworth: Penguin.
Moore, G. E. (1903). Principia Ethica. Cambridge: Cambridge University Press.
Moretti, L. and di Pellegrino, G. (2010). "Disgust Selectively Modulates Reciprocal Fairness in Economic Interactions," Emotion, 10, 169–180.
Nagel, T. (1970). The Possibility of Altruism. Princeton: Princeton University Press.
Olatunji, B., Puncochar, B. D. and Cox, R. (2016). "Effects of Experienced Disgust on Morally Relevant Judgments," PLoS One, 11 (8), e0160357. doi:10.1371/journal.pone.0160357.
Prior, A. N. (1960). "The Autonomy of Ethics," Australasian Journal of Philosophy, 38, 199–206.
Railton, P. (1986). "Moral Realism," Philosophical Review, 95, 163–207.
Ross, W. D. (1930). The Right and the Good. Oxford: Oxford University Press.
———. (1939). The Foundations of Ethics. Oxford: Oxford University Press.
Schroeder, M. (2010). Noncognitivism in Ethics. New York: Routledge.
Searle, J. R. (1964). "How to Derive 'Ought' from 'Is'," Philosophical Review, 73, 43–58.
Sinnott-Armstrong, W. (2006). Moral Skepticisms. Oxford: Oxford University Press.
Stevenson, C. L. (1937). "The Emotive Meaning of Ethical Terms," Mind, 46, 14–31.
Timmons, M. (1998). Morality Without Foundations. New York: Oxford University Press.
Tolhurst, W. (1990). "On the Epistemic Value of Moral Experience," Southern Journal of Philosophy, 29 (suppl), 67–87.
Young, L. and Koenigs, M. (2007). "Investigating Emotion in Moral Cognition: A Review of Evidence from Functional Neuroimaging and Neuropsychology," British Medical Bulletin, 84, 69–79.
Zimmerman, A. (2010). Moral Epistemology. New York: Routledge.
14 NIHILISM AND THE EPISTEMIC PROFILE OF MORAL JUDGMENT
Jonas Olson
1. Moral Nihilism and Moral Error Theory

Moral nihilism is the view that there are no moral facts or moral truths. Thus conceived, moral nihilism is the ontological component of moral error theory, a view that also makes claims about the psychology and language of ordinary moral thought and discourse. Moral nihilists need not be moral error theorists, since they need not accept the claims about moral psychology and language that moral error theory makes. I shall, however, focus on moral error theory since it is the best-known and most comprehensive metaethical theory that has moral nihilism as a component.

The version of moral error theory I shall consider maintains that moral judgments are beliefs about moral facts, e.g., the fact that torture is wrong, and that utterances of moral sentences, e.g., "Torture is wrong," are assertions that purport to refer to moral facts. It also maintains that there are no moral facts. As a consequence, moral beliefs are systematically mistaken and moral assertions are uniformly untrue.1

I shall not discuss in any great detail the many arguments for moral error theory that have been offered in the literature.2 Suffice it to say here that arguments for moral error theory normally maintain that ordinary moral thought and discourse involve certain nonnegotiable but untenable ontological commitments. These commitments are nonnegotiable in that they are essential to or distinctive of moral thought and discourse, and they are untenable in that there are no facts of the kind that moral thought and discourse purport to be about.3

In order to explain why we ordinary speakers tend to think and talk as if there are moral facts although there are none, moral error theorists often advocate projectivist accounts of moral judgment and belief, according to which we mistake affective attitudes (such as approval and disapproval) for perceptions of mind-independent moral properties and facts (Hume, 1998, App 1; Mackie, 1977). Moral error theorists also often invoke debunking explanations of moral judgment and belief, according to which they originate and evolve because of their social and evolutionary advantageousness (Joyce, 2006). I shall come back to such projectivist and debunking explanations later, and in §3 I shall say a bit more about the argument for moral error theory that I find most promising.

My main aim in this paper is to discuss some consequences of endorsing moral error theory, or believing to some degree that moral error theory is true. In §2, I consider the implications for ordinary moral thought and discourse and the epistemological consequences for moral theorizing. We shall see that many moral error theorists have argued that moral thought and discourse are too useful to be jettisoned, although moral beliefs and assertions are never true. I shall describe my account of how they are best preserved, what I call "moral conservationism." I shall also discuss how moral error theorists can pursue moral theorizing. In §3, I consider and respond to a recent challenge to moral error theory, due to Matt Bedke (2014). Bedke argues that insofar as moral error theorists are right that moral thought and discourse can be successfully preserved in the ways I and others have suggested, this would in fact count against moral error theory, since it would be evidence that ordinary moral thought and discourse do not in fact involve the untenable ontological commitments attributed to them; moral judgments would thus not have the epistemic profile that moral error theory alleges. Moral error theorists would thus end up biting their own tails. I shall argue that the challenge can be met and that there is evidence that moral error theory is in fact correct about the epistemic profile of moral judgments.
2. Moral Error Theory, Moral Conservationism, and Moral Theory

If one holds that moral thought involves systematic error and that moral claims cannot be true, it might seem that the obvious reaction is to abandon moral thought and talk. That is what some error theorists have recommended (Hinckfuss, 1987; Garner, 2007). Others have thought that moral thought and talk are too useful, both inter- and intrapersonally, to abandon. J. L. Mackie maintained that "[w]e need morality to regulate interpersonal relations, to control some of the ways in which people behave toward one another, often in opposition to contrary inclinations" (Mackie, 1977, 43), and Richard Joyce argues that "[m]oralized thinking and talking [function] as a bulwark against weakness of will [and] as an interpersonal commitment device" (Joyce, 2006, 208). Not only are moral thought and talk helpful in coordinating collective behavior and useful as antidotes to recurring temptations to lie, cheat, and steal; giving them up would also likely be costly and difficult, as Daniel Nolan, Greg Restall, and Caroline West point out: "Giving up moral talk would force large-scale changes to the way we talk, think, and feel that would be extremely difficult to make" (Nolan et al., 2005, 307).

In the light of such considerations, some philosophers have suggested that we take a fictionalist stance to moral thought and talk, i.e., roughly the kind of stance we take to stories about hobbits and Olympian gods, and in playing games with children (Joyce, 2001, Ch. 8; Nolan et al., 2005). That is, we are to act, think, and speak as if we believe that there are moral truths, but we are only to pretend to have moral beliefs and pretend to make moral assertions. The hope and promise of moral fictionalism is that pretense moral belief and assertion will have the same, or roughly the same, beneficial pay-offs as genuine moral belief and assertion have.

However, the fictionalist practice seems to require considerable cognitive self-surveillance: in order to avoid the alleged costs of believing and asserting what is not true, one is only to entertain moral propositions without believing them and to utter moral sentences without assertoric force. This arguably requires occasionally reminding oneself that morality is (mere) fiction. Such cognitive self-surveillance is likely to reduce the potency of moral thought and talk to function as a bulwark against weakness of will and as an interpersonal commitment device, for it seems that in order for moral thought and talk to function well in these regards one needs to take morality seriously and be genuinely committed to it, which arguably requires that one suppress the thought of morality as fiction. Fictionalism thus faces a stability problem: on the one hand, one is to engage and immerse in moral thought and discourse so as to reap the benefits of entertaining moral thoughts and uttering moral sentences; on the other hand, one is at the same time to exercise cognitive self-surveillance so as not to slip into believing and asserting moral propositions.

In previous work I have criticized fictionalism along these lines and defended a conservationist policy, according to which we continue to believe and assert moral propositions as far as we can (Olson, 2011; 2014, ch. 9). Conservationism allows us to reap the benefits of moral thought and talk and requires no cognitive self-surveillance of the kind fictionalism requires. (In a joint forthcoming paper, Björn Eriksson and I supplement moral conservationism with some recommendations for when and how to moralize, yielding a position we call "moral negotiationism.")

An obvious question is whether it is at all possible for a moral error theorist to hold moral beliefs. It has recently been argued that it is not (Suikkanen, 2013). However, it seems to me exaggerated to deny that we can sometimes have an occurrent belief that p and a disposition to believe that not-p in different contexts. It seems not implausible that peer pressure, emotional engagement, and the like may give rise to beliefs that one rejects "in the cool hour," e.g., in the seminar room. Beliefs are typically formed on the basis of how things appear, and many things will appear morally wrong, even to moral error theorists. For example, setting a cat on fire certainly appears wrong, and an error theorist who witnesses such an event may well form the belief that the action is wrong, at least if she temporarily suppresses her belief that there are no moral truths, possibly as an effect of emotional engagement. Such considerations may suffice as a possibility proof that conservationism is a position that is psychologically available to nihilists. It is also notable that some self-avowed moral error theorists have reported that they occasionally do have moral beliefs and make moral claims, as a result of not getting out of the deeply ingrained habit of moralizing (Pigden, 2007, 445). Since my main aim here is not to defend moral conservationism, I shall not consider objections or further elaborate its merits (see Olson, 2011; 2014, Ch. 9; Eriksson & Olson, forthcoming). In the next section, we shall consider an argument that the conservationist reactions and recommendations of moral error theorists tell us something important about the nature of moral judgments, which spells trouble for moral error theory.

Before we get there, however, I want to consider the use, if any, that moral error theorists may make of moral theorizing. We have seen that moral thought and talk are useful, according to moral error theorists; too useful, in fact, to be abandoned. But how are moral error theorists to assess and compare different moral beliefs and claims epistemically? After all, we know that no moral belief or claim is true, according to moral error theory. To systematically justify and assess the plausibility of moral beliefs is normally thought to be the job of first-order moral theorizing, or normative ethics.
But how can moral error theorists engage in that kind of enterprise since there is, according to them, no truth to be found in it? According to one prominent critic of moral error theory, "[i]f there are no truths within morality, [. . .] then the enterprise of normative ethics is philosophically bankrupt [and] loses its point" (Shafer-Landau, 2005, 107).
But note first that if lack of moral truth renders normative ethics philosophically bankrupt, then it is not only error theorists who should deem normative ethics pointless but also those noncognitivists who maintain that moral judgments are incapable of being true or false. More importantly, for our purposes, having the potential to deliver truths need not be the only way for a philosophical enterprise to have a point. Recall Mackie's point that "[w]e need morality to regulate interpersonal relations, to control some of the ways in which people behave toward one another, often in opposition to contrary inclinations." In order to fulfill that function, morality needs to comprise a set of mutually supportive, interpersonally recognizable and acceptable, as well as practically applicable, principles. In other words, while there are no truths in normative ethics to be discovered, a system of normative ethics needs to be invented for practical purposes.

Mackie cited with approval John Rawls's method in normative ethics (Mackie, 1977, 105). Rawls described his theory of justice as "a theory of the moral sentiments (to recall an eighteenth century title) setting out the principles governing our moral powers, or, more specifically, our sense of justice" (Rawls, 1999, 44). The important point here is that the method is not one of discovering mind-independent moral facts but of systematizing our moral sentiments by bringing them into reflective equilibrium. Since the emphasis is on moral sentiments, the point and purpose of such an enterprise does not stand or fall by the attainability of moral truth. The ultimate goal would be to formulate a set of principles or system of normative ethics that is practically applicable. To that end, the system would need to have a high degree of intuitive plausibility and general acceptability (i.e., accordance with shared sentiments), rendering it generally applicable. In order to be practically applicable and useful in cases of conflict or of inter- and intrapersonal clashes of interests and inclinations, the system would also need to be theoretically satisfactory, and to that end it would need to score highly on the criteria of simplicity, comprehensiveness, and coherence. Applicability, intuitive plausibility, general acceptability, simplicity, comprehensiveness, and coherence are standard criteria of adequacy for normative theories, and they seem perfectly available to moral error theorists pursuing normative ethics. The one standard criterion of adequacy that is not available to moral error theorists is of course that of truth. There are thus significant parts of standard moral epistemology that moral error theorists can take over, although there is according to moral error theory no such thing as moral knowledge.

Mackie took Rawlsian normative ethics to be "a legitimate form of inquiry" (1977, 105). He contrasted it with Henry Sidgwick's method, in which moral intuitions are not ultimately sentiments but purported insights of reason into mind-independent and necessary moral facts that have authority over all agents, regardless of their desires and of social conventions (Sidgwick, 1981). It is easy to see why Mackie thought that Sidgwick was mistaken to view morality as something to be discovered rather than invented. But in the light of what we have said about moral conservationism about moral thought and discourse and about the use of normative ethics, it is not easy to see why error theorists like Mackie need be dismissive of Sidgwick's method.
Since our moral sentiments are at least partly results of complex historical contingencies and various evolutionary and cultural pressures, there seems to be no guarantee that they can be systematized into an orderly reflective equilibrium. As I have already suggested, moral error theorists who pursue normative ethics will strive to meet criteria of comprehensiveness and coherence and eventually to arrive at a satisfactory reflective equilibrium. This might seem a more feasible goal if we view some of our moral intuitions as insights into necessary moral truths and if we view and present normative theories as theories about such truths and what follows from and coheres with them. Just as moral belief and assertion are useful in coordinating collective behavior and in deliberating about what to do, belief in moral realism may be a useful template in normative theorizing. If so, the Sidgwickian method may facilitate normative theorizing.
3. Bedke's Challenge: The Epistemic Profile of Moral Judgment

It may be, however, that the sketched possibilities for moral error theorists to preserve moral thought and talk as well as normative theorizing, and their alleged success in these endeavors, are a burden rather than an asset. Recall that many moral error theorists report that they continue, at least occasionally, to have moral beliefs and that some defend conservationism, i.e., the view that recommends that one continue to hold moral beliefs and make moral claims, although they are believed to be false. Matt Bedke (2014) argues that these reactions and recommendations on the part of error theorists can be used to mount a challenge to moral error theory itself. Bedke's challenge relies on a principle he calls "weak dispositionalism" (WD) about belief. It says the following:

(WD) If a type of mental state "systematically fails to have the thetic direction, this is very good evidence that the mental state is not belief." (Bedke, 2014, 190)

(Bedke speaks about judgments, but I find it more apt to speak about mental states in this context.) For a mental state to have the thetic direction of fit is for it to have the mind-to-world direction of fit, i.e., for it to have the function of representing the world. Mental states with the thetic direction of fit tend to be sensitive to perceived evidence. If a person who is in a mental state with content p perceives what she takes to be strong evidence that not-p, that token mental state—if it is of a type that has the thetic direction of fit—tends to go out of existence. Suppose, for example, that a person who believes that there is no humanly induced global warming is presented with considerations that she herself takes to be strong evidence that humanly induced global warming is in fact going on. If that does not eliminate her belief that there is no humanly induced global warming or does not even decrease her degree of belief, this is good evidence that her mental state with that content is in fact not belief. It would rather seem to be of a type that lacks the thetic direction of fit, such as desire, hope, or wishful thinking, and that is hence not sensitive to perceived evidence in the way belief typically is.

Now, according to (WD), error theorists who claim that there is strong evidence that there are no moral truths should find that their moral beliefs tend to go out of existence, and conservationists who recommend preserving moral belief should acknowledge that it is very difficult, if not impossible, to make good on that recommendation. But as we have seen, that is not what error theorists find and acknowledge; on the contrary, they report that they continue to hold moral beliefs, and they predict that it would be difficult to let go of moral beliefs. Applying (WD) in this context suggests that moral judgments are after all not beliefs with the kind of content that error theorists take to involve ontologically untenable commitments. Error theorists' reactions, recommendations, and predictions thus put pressure on moral error theory's psychological claim. Bedke takes this to indicate that moral error theory is mistaken about the epistemic profile of moral judgments. This leaves two possibilities: either moral judgments are desire-like states, or they are beliefs with content that does not involve ontologically untenable commitments. In either case, moral error theory is wrong about the epistemic profile of moral judgment. (A possible third alternative is that of hermeneutic fictionalism, i.e., the view that ordinary moral thought and discourse is fictionalist. However, my hunch, which is also Bedke's (personal communication), is that any form of hermeneutic fictionalism would have to be either noncognitivist—in which case it would face the problems discussed in §4—or cognitivist—in which case it would face the kinds of questions concerning content discussed in §5.)

Error theorists need not deny that (WD) is a plausible thesis concerning the type of mental state we count as belief. They may agree that it is a plausible generalization about beliefs that, ceteris paribus, a belief that p tends to disappear in the light of evidence that not-p. But of course ceteris is often not paribus, and there are in fact several things error theorists can say in response to Bedke's challenge.

First, they may make use of the aforementioned point that humans often form beliefs more or less spontaneously on the basis of how things appear. Beliefs formed in that manner may conflict with less spontaneously formed beliefs that are based on more careful reflection. Consider optical illusions, such as the checker shadow illusion, in which two squares that are in fact of the same color appear to normal viewers to be of different colors (the one darker than the other). Even viewers who are familiar with that kind of illusion may in unreflective moments not immediately identify what they see as an instance of the illusion and form the belief that one square is darker than the other, since that is how things appear. Similarly, moral error theorists need not deny that many situations are such that there appear to be moral facts. In such situations, moral error theorists may have at least fairly momentary and unreflective moral beliefs. Consider, for example, the famous and aforementioned example of a bunch of kids setting a cat on fire. That act certainly appears morally wrong. In spite of her metaethical commitments, an error theorist who witnesses such an event may well react by spontaneously forming a belief with the content that's wrong. Once she reminds herself about the error theoretic arguments against moral facts, her belief that the kids' action is wrong may disappear. But it need not disappear completely.

A second point error theorists can make in response to Bedke is that belief comes in degrees. Believing to a high degree that moral error theory is true is compatible with believing to some degree that there are moral facts, e.g., that setting a cat on fire is morally wrong. As a piece of autobiography, I can report that I take moral error theory to be the most plausible metaethical theory of which I am aware. But I am not fully convinced that the theory is correct. My credence in some form or other of moral realism is not zero. In this way, moral error theorists may well have some degree of belief that certain actions are wrong and others right. It seems that one can also boost one's degree of belief in some proposition by attending only or mostly to evidence that supports the truth of that proposition.
For example, in thinking about whether the proof version of one's book contains spelling errors, one might attend to the fact that many careful people such as colleagues and copyeditors have read it without finding any. On this basis one might form the belief that the book contains no spelling errors. On the other hand, one might attend to the fact that it is highly unusual for a book to contain no spelling errors, and this may make one less certain that the book contains no spelling errors, and it may even make one convinced that the book contains some spelling error. Similarly, a moral error theorist can boost her degree of belief that there are moral facts by attending to scenarios like the one in which some kids set a cat on fire or by considering some of the real-life atrocities which are all too common in the history of the twentieth century. Correlatively, one can suppress one's belief that there are no moral facts by deliberately not attending to evidence supporting moral error theory. Suppressing one's belief that there are no moral facts need not be a directly intentional act. One can do so more indirectly by engaging in moral argument and political debate with one's peers, most of whom are likely not to be moral error theorists. In such arguments and debates, questions concerning the existence of moral facts tend to be in the presuppositional background rather than in the argumentative foreground. There are also unintentional processes that are noteworthy in this regard, such as subconscious compartmentalization of beliefs and sentiments, which helps to avoid cognitive dissonance. An instance of this general phenomenon may be moral error theorists' tendency to suppress evidence for moral error theory when witnessing atrocities or when engaging in moral argument and political debate. In these ways, moral error theorists may come to have more permanent and stable, and not just brief or momentary, moral beliefs.

It deserves to be mentioned that moral error theory is not the only philosophical view whose implications are hard to square with ordinary day-to-day thinking. Consider nihilism about ordinary physical objects. Philosophers who accept such a view may well in their daily lives believe things like the keys are in the pocket or dinner is in the oven, although such beliefs are not true, according to the philosophical theory they accept. Part of the explanation may once again be that it appears to them that there are things like keys, food, and ovens, and that they have some degree of belief that there are such things, although in their reflective and philosophical moments, their highest credence is in nihilism about such things (cf. Bedke, 2014, 195–197).

There is a third point that moral error theorists can offer as an explanation of why moral beliefs are especially "epistemically stubborn," or resistant to counterevidence, such as arguments for moral error theory. This is the point that many moral beliefs are results of projections of affective attitudes of approval and disapproval. Let us say that for a subject to project an attitude of approval or disapproval onto an action, person, or object is for the subject to experience that attitude as a perception of a property of the action, person, or object that is independent of the attitudes of the subject. Such experiences may easily cause us to mistake affective attitudes for perceptions of properties that are independent of our attitudes (recall Hume's metaphor about gilding and staining the world with colors borrowed from internal sentiment, Hume, 1998, 163) and to form the mistaken belief that there are moral properties and facts. This kind of projection of attitudes, or gilding and staining, is likely to take place even if we have been convinced by error theoretic arguments to the effect that there are no moral facts.
I mentioned already in the beginning that moral error theorists often appeal to projectivist accounts in order to explain why we tend to speak and think as if there are moral facts even though there are none. Projectivism is also useful in responding to Bedke's challenge, since it helps explain, in a way that is congenial with error theory, why it often appears that there are moral facts, although there are none. (An explanation of this appearance that is simple but clearly not congenial with moral error theory is that there are moral facts.)

To recap, I have made three points in response to Bedke's challenge: that beliefs are often based on appearances and that we tend subconsciously to compartmentalize beliefs and sentiments in different contexts; that belief comes in degrees; and that moral beliefs are results of projections of affective attitudes of approval and disapproval. The third point connects to the first. These three points offer explanations of why moral beliefs need not go out of existence in subjects confronted with what they take to be strong or conclusive evidence that there are no moral facts and no true moral beliefs.

However, one may argue that Bedke's challenge can be restored, at least in part. For recall that the challenge stated that moral error theorists' reactions and recommendations concerning moral belief are very good evidence either that moral judgments are beliefs that do not involve the ontologically untenable commitments error theory alleges or that moral judgments are not beliefs at all. According to our third point, moral beliefs are results of projections of attitudes of approval and disapproval, but why not go for the view that moral judgments simply are attitudes of approval and disapproval? That seemingly simpler view would confirm what Bedke's challenge seeks to establish, namely that moral error theory is mistaken about the epistemic profile of moral judgment. To address that worry, we shall in the next section consider features of moral judgments that strongly suggest that they are beliefs.
4. Are Moral Judgments Beliefs?

An indisputable feature of moral judgments is that they vary in degrees of certitude, just as nonmoral judgments do. Just as one can be more or less certain that there is humanly induced global warming, one can be more or less certain that male circumcision is morally wrong. If moral judgments are beliefs, it is no more problematic to account for degrees of moral certitude than to account for degrees of nonmoral certitude. But if moral judgments are noncognitive states, like approval and disapproval, how are we to make sense of moral certitude? For example, what is it to be certain that female circumcision is morally wrong and less than certain that male circumcision is wrong? One natural thought is that degrees of moral certitude can be accounted for in terms of intensity of approval and disapproval, so that one who is certain that female circumcision is wrong and less than certain that male circumcision is wrong disapproves more strongly of female than of male circumcision.

But there is also another dimension of moral judgment, what is often called moral importance. For example, it is a common thought that it is more morally important to save lives than to keep promises. This feature is not difficult for cognitivism to accommodate; to judge that saving lives is more morally important than keeping promises is simply to believe that saving lives is more morally important than keeping promises. But how can noncognitivism accommodate moral importance? Again, the natural thought is to invoke intensity of approval and disapproval: to judge that saving lives is more morally important than keeping promises is to approve more strongly of saving lives than of keeping promises (or to disapprove more strongly of omitting to save lives than of breaking promises). But the problem is that certitude and importance are two separate dimensions of moral judgment that can vary independently. (The problem was stated in Smith, 2002, and further developed in Bykvist & Olson, 2009 and 2012. The problems for noncognitivism discussed in this section are all developed in greater detail in Bykvist & Olson, 2009.) For example, one can be highly certain that keeping promises is not of great moral importance (that breaking promises is a minor wrong), and one can be moderately certain that abortion is of great moral importance (e.g., that it is seriously wrong).

Noncognitivists may propose that moral certitude be accommodated in terms of second-order approvals. For example, to be certain that male circumcision is wrong could be to approve strongly of disapproving of male circumcision. A problem with this proposal is that one can approve of disapproving of male circumcision for reasons that are unrelated to one's moral certitude; perhaps one simply wants to share or oppose the majority view in one's community, or perhaps an evil demon has threatened to inflict severe punishment unless one approves of disapproving of male circumcision, etc.

Here are two further belief-like features of moral judgment that are difficult for noncognitivism to accommodate. First, what is it to be certain that some action is wrong? As we have seen, for cognitivism to accommodate moral certitude is no more problematic than to accommodate nonmoral certitude; to be certain that, e.g., male circumcision is wrong is to believe fully, or to believe to a degree close to 1, that male circumcision is wrong. But there is no corresponding intuitive notion of approving or disapproving fully, or to a degree close to 1. In other words, there is no intuitive top level of approval and disapproval, whereas there is one for belief. Second, it makes sense to compare degrees of nonmoral and moral certitude. We may say and think things like I am more certain that 2 + 2 = 4 than that utilitarianism is true. For cognitivism, such comparisons are no more problematic than comparisons of degrees of nonmoral certitude. But according to noncognitivism, comparisons of degrees of moral and nonmoral certitude are what we might call "cross-attitudinal" comparisons of degrees of belief and degrees of approval or disapproval. It is difficult to make sense of cross-attitudinal comparisons. What does it mean to say that one's degree of belief that 2 + 2 = 4 is higher than one's approval (of approval) of actions that maximize happiness?

These considerations strongly suggest that moral judgments are beliefs and that moral error theory is at least to that extent correct about the epistemic profile of moral judgments. But might moral beliefs not have the kind of content that involves ontologically untenable commitments?
5. Is Moral Error Theory Right about the Content of Moral Beliefs?

According to Bedke, moral error theory takes ordinary moral judgments to be about nonnatural facts. Moral error theory also denies that there are any nonnatural facts and holds that, as a consequence, ordinary moral beliefs are never true. The version of moral error theory that I favor, however, takes ordinary moral judgments to be about irreducibly normative facts. Whether such facts would be natural or nonnatural is a question error theorists can leave open. The important point is that, according to error theory, moral judgments are beliefs about irreducibly normative facts, but since there are no irreducibly normative facts, moral judgments are uniformly untrue.

But what is irreducible normativity? In responding to this question, it is useful to begin by saying what it is not and to contrast it with adjacent but different notions. Following Derek Parfit, we can contrast normativity in the "rule-implying" sense with normativity in the "reason-implying" sense (2011, 308–310, 326–327). Examples of normative facts of the former kind are facts about what is legal or illegal, grammatical or ungrammatical, and about what accords with rules of etiquette or chess. There is no metaphysical mystery about how there can be such facts, for facts about the law and grammar and about rules of etiquette or chess are all facts about human conventions. It might of course be difficult to say exactly how and why certain conventions originate and evolve, but such difficulties invite no metaphysical mysteries (Mackie, 1977, 25–27; Joyce, 2001, 34–37; Olson, 2014, 118–126). Moreover, for any fact that is normative in the rule-implying sense, e.g., that it is a rule of etiquette that one not eat peas with a spoon, we can always sensibly ask whether we have reason to—or whether we ought to or are required to—comply with that rule. Those are instances of what Christine Korsgaard and others have called "the normative question" (Korsgaard, 1996; Broome, 2007). Facts that are normative in the reason-implying sense are irreducibly normative, and they are very different. Such facts do not reduce to—and are not wholly constituted by—facts about human conventions or about agents' motivational states or desires. In the words of the eighteenth-century moral rationalist Richard Price, irreducibly normative facts "have a real obligatory power antecedently to all positive laws, and independently of all will" (Price, 1948, 105).

Are moral facts irreducibly normative (are they "reason-implying," in Parfit's terms), or do they reduce to facts about social conventions and agents' motivational states or desires (are they merely "rule-implying," in Parfit's terms)? We can begin to answer this question by asking whether moral judgments invite the normative question. It was noted earlier that for any fact that is normative in the rule-implying sense, we can sensibly ask whether we have reason to—or whether we ought to, or are required to—comply with that particular rule. Similarly, for any fact to the effect that some agent has a certain desire, we can always sensibly ask whether the agent and others have reason to—or whether they ought to or are required to—act so as to promote the fulfillment of that desire. Now consider moral facts and moral judgments. Many moral realists and moral error theorists agree that ordinary moral thinking is committed to the following conditional: If one judges, that is believes or knows, that one ought morally to eat less meat, then one cannot sensibly ask whether there is also reason for one to eat less meat. That question will already have been answered by the moral judgment. (One could of course ask different questions, such as whether eating less meat would be comme-il-faut, or whether there is anything that motivates one to eat less meat, or whether doing so would be conducive to the fulfillment of one's desires.) This explains why it would be odd, and perhaps conceptually confused, to accept that one ought morally to eat less meat but go on to deny that one has reason to do so because social conventions do not recommend or require that one eat less meat, or because one is not motivated to do so, or because one lacks the relevant desire. In other words, moral judgments do not invite the normative question. According to many moral realists and moral error theorists, this is because moral facts are or would have to be irreducibly normative and because ordinary moral judgments are judgments about irreducibly normative facts.
Error theorists of course part company with realists about irreducible normativity in thinking that there are no irreducibly normative facts. Their reason for thinking that there are no irreducibly normative facts is that such facts would be ontologically queer and that we can give plausible debunking explanations of why we tend to believe that there are such facts. But such arguments are not our topic here.
Some philosophers have argued that moral facts are not in fact irreducibly normative and that ordinary moral judgments, properly understood, are not in fact judgments about such facts (Foot, 1972; Finlay, 2008; for responses, see, e.g., Joyce, 2011; Olson, 2014). Proponents of such views face the difficult task of explaining away the appearance that facts and judgments about conventions and about agents' desires invite the normative question, whereas moral facts and judgments do not. The most straightforward explanation of this appearance is that moral facts are or would have to be irreducibly normative and that moral judgments are judgments about such facts. If that is also the correct explanation, error theory is after all at least roughly correct about the epistemic profile of moral judgments.
Acknowledgments

Earlier versions of this chapter were presented at the third annual ENN (European Network of Normativity) meeting at Humboldt University, Berlin, November 2015, and at a seminar at Uppsala University. I thank the participants for helpful discussions. Special thanks to Matt Bedke and Folke Tersman for written comments. A generous grant from Riksbankens Jubileumsfond is gratefully acknowledged (Grant no. 1432305).
Notes

1. For a discussion of other versions of moral error theory, see Olson (2014, ch. 1) and Chapter 13 of this volume.
2. For a critical exposition of some of the most influential arguments, see Olson (2014, chapters 5–6).
3. See Chapters 1 and 2 of this volume for discussion of the attempt to define moral thought and language as such.
References

Bedke, M. (2014). "A Menagerie of Duties? Normative Judgments are not Beliefs About Non-Natural Properties," American Philosophical Quarterly, 51, 189–201.
Broome, J. (2007). "Is Rationality Normative?" Disputatio, 23, 161–178.
Bykvist, K. and Olson, J. (2009). "Expressivism and Moral Certitude," Philosophical Quarterly, 59, 202–215.
———. (2012). "Against the Being for Account of Normative Certitude," Journal of Ethics and Social Philosophy, 6 (June). www.jesp.org.
Eriksson, B. and Olson, J. (forthcoming). "Moral Practice After Nihilism," to appear in a volume ed. R. Garner.
Finlay, S. (2008). "The Error in the Error Theory," Australasian Journal of Philosophy, 86, 347–369.
Foot, P. (1972). "Morality as a System of Hypothetical Imperatives," Philosophical Review, 81, 305–316.
Garner, R. T. (2007). "Abolishing Morality," Ethical Theory and Moral Practice, 10, 499–513.
Hinckfuss, I. (1987). The Moral Society: Its Structure and Effects. Department of Philosophy, Research School of Social Sciences, Australian National University.
Hume, D. [1751] (1998). An Enquiry Concerning the Principles of Morals, ed. T. Beauchamp. Oxford: Oxford University Press.
Joyce, R. (2001). The Myth of Morality. Cambridge: Cambridge University Press.
———. (2006). The Evolution of Morality. Cambridge, MA: MIT Press.
———. (2011). "The Error in 'The Error in the Error Theory'," Australasian Journal of Philosophy, 89, 519–534.
Korsgaard, C. M. (1996). The Sources of Normativity. Cambridge: Cambridge University Press.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Harmondsworth: Penguin.
Nolan, D., Restall, G. and West, C. (2005). "Moral Fictionalism Versus the Rest," Australasian Journal of Philosophy, 83, 307–330.
Olson, J. (2011). "Getting Real about Moral Fictionalism," in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, vol. 6. Oxford: Oxford University Press, 181–204.
———. (2014). Moral Error Theory: History, Critique, Defence. Oxford: Oxford University Press.
Parfit, D. (2011). On What Matters, Vol. 2. Oxford: Oxford University Press.
Pigden, C. (2007). "Nihilism, Nietzsche, and the Doppelganger Problem," Ethical Theory and Moral Practice, 10, 441–456.
Price, R. [1758/1787] (1948). A Review of the Principal Questions in Morals, ed. D. D. Raphael. Oxford: Clarendon Press.
Rawls, J. [1971] (1999). A Theory of Justice (rev. ed.). Cambridge, MA: Harvard University Press.
Shafer-Landau, R. (2005). "Error Theory and the Possibility of Normative Ethics," Philosophical Issues, 15, 107–120.
Sidgwick, H. [1907] (1981). The Methods of Ethics (7th ed.), ed. J. Rawls. Indianapolis, IN: Hackett Publishing.
Smith, M. (2002). "Evaluation, Uncertitude, and Motivation," Ethical Theory and Moral Practice, 5, 305–320.
Suikkanen, J. (2013). "Moral Error Theory and the Belief Problem," in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, vol. 8. Oxford: Oxford University Press, 168–194.
Further Readings T. Cuneo, The Normative Web (Oxford: Oxford University Press, 2007) contains a critique of moral and epistemic nihilism. M. Evans and N. Shah, “Mental Agency and Metaethics,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, vol. 7 (Oxford: Oxford University Press, 2012), 80–109, argues that traditional forms of anti-realism in metaethics, including nihilism, cannot accommodate central features of mental agency. S. Husi, “Why Reasons Skepticism Is Not Self-Defeating,” European Journal of Philosophy, 21 (2013), 424–449, defends nihilism against the charge of being self-defeating. C. West, “Business as Usual? The Error Theory, Internalism, and the Function of Morality,” in R. Joyce and S. Kirchin (eds.), A World Without Values: Essays on John Mackie’s Error Theory (Dordrecht: Springer, 2010) discusses several options for how to think about morality if nihilism is endorsed.
Related Topics Chapter 1 The Quest for the Boundaries of Morality; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 13 The Denial of Moral Knowledge; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; and Chapter 28 Decision Making under Moral Uncertainty.
15 RELATIVISM AND PLURALISM IN MORAL EPISTEMOLOGY David B. Wong
1. Introduction Is there a single true morality? Is there a most justified morality if moralities are not the sort of thing that is truth apt? One way to address this central question in moral epistemology and metaphysics is to ask what answer is part of the best explanation of similarities and differences in moral beliefs and practices. The explanatory approach requires a determination of what similarities and differences in fact exist, but it also requires a broader explanatory framework that specifies what a morality is and what roles it plays in human life. This chapter will argue that the nature and extent of fundamental differences in moral belief and practice are best explained by a theoretical framework attributing to morality a function of facilitating and structuring social cooperation, and that there is more than one true or most justified morality because there is a plurality of ways to effectively fulfill that function. The case to be made cuts against two implicit assumptions that are almost always made when the issue of relativism is posed: first, that the kind of relativism at issue is some version of “everything is permitted”; and second, that the only viable alternative is universalism, i.e., that there is a single true or most justified morality. This chapter presents an alternative to radical forms of relativism and to universalism alike: a moderate form of relativism, or strong form of pluralism, on which more than one morality is true or most justified but not all moralities are.
2. Explaining Moral Similarities and Differences The mere fact that people agree or disagree in their moral beliefs and practices does not weigh in favor of relativism. People have disagreed about the shape of the earth and whether it was at the center of the universe, but they did not typically take such disagreement to mean that there is no fact to disagree about or that there were facts for some people but not for others. In his influential argument against universalism, J. L. Mackie makes just this point, but then goes on to claim that moral disagreements reflect people's adherence to and participation in different ways of life. The causal connection seems to be mainly that way round: it is that people approve of monogamy
because they participate in a monogamous way of life rather than that they participate in a monogamous way of life because they approve of monogamy. (Mackie, 1977, 36) Whereas disagreements over the physical structure of the universe are best explained by appeal to lack of definitive evidence and speculation from the available evidence, moral disagreements are best explained as originating from differences in the ways people have chosen to live rather than from their attempts to track independently existing moral facts or properties that make their ways of life good or right. One problem with Mackie's argument, however, is that some people have rejected monogamous and nonmonogamous ways of life despite having participated in them. Perhaps most people come to their beliefs about monogamy because they are influenced by their culture, but the same could hold for the way most come to their beliefs about the nature of the physical world. Mackie's claim about the best explanation requires actual investigation into how people come to their beliefs about the rightness or wrongness of monogamy to see what kinds of specific reasons they offer for or against monogamy and, perhaps more importantly, to see what the best reasons are that could be given for each position, given that in any serious disagreement that concerns many people over time, many sorts of reasons, good, bad, and indifferent, will be given for the various positions. If all the reasons, including the best ones that have been given and that one could think of, seem unsatisfyingly circular and shallow, one might suspect that Mackie's proffered explanation is true, but the case needs to be made from a normative perspective that seeks to evaluate the reasons for and against each form of marriage. Those who defend universalism suggest that moral disagreements may hinge on nonmoral factual disagreements that are difficult to resolve because decisive evidence is not available or because one's preexisting position on the disagreements biases interpretation of what is in fact decisive evidence against one's position. Other relevant epistemic explanations of disagreement are failures to reason carefully enough or to imagine what it would be like to occupy someone else's position (see Brink, 1989; Moody-Adams, 1997; Enoch, 2009; Shafer-Landau, 1994, 1995) and religion, presumably false religion (Brink, 1989; Sturgeon, 1994). As is the case with Mackie's argument, there is comparatively little discussion of whether an explanation actually best fits a particular kind of moral disagreement. Instead these explanations get used as stock, all-purpose explanations that in the abstract could apply equally well to both sides of a disagreement. What sorts of moral disagreement might raise the most serious challenge to universalism? We might narrow the field by focusing on disagreements that do not hinge on nonmoral factual matters and that resist explanation by appeal to bias or other kinds of epistemic limitation. Consider Richard Brandt's study (1954) of the permissive attitude, compared to the attitudes of at least some Americans, that members of the Hopi tribe had toward inflicting pain and suffering on animals. Based on his interviews with tribal elders, Brandt concluded that there was no nonmoral factual disagreement (such as differing beliefs on the sentience of animals) that accounted for this difference in moral attitude.
Some have drawn on cases of moral differences presented in the empirical social science literature that they argue do not yield to universalist explanations of moral error or ignorance. One such case concerns the “culture of honor” manifested by a tendency to respond
with violence to perceived insult, and that appears to exist in greater degree in the southern United States than in the north (Cohen et al., 1996). The hypothesized origin of this culture is the herding economy of the Scots-Irish who settled in the southern United States. The deterrent to having one's livelihood rustled was quick and violent response to insult, but since herding days are largely over in the South, that aspect of the culture has become “functionally autonomous.” That which in the past served an external purpose has become something valued for its own sake. The actual causal origin of honor culture is in dispute (see Demetriou, 2014, for a summary of the case for alternative explanations), but the challenge it raises for universalism may not depend on which genealogy is the most plausible. It is how honor currently functions as a moral value that matters, and some who argue for relativism in this case claim that there is no disagreement on nonmoral factual information or epistemic bias or limitation that explains the disagreement (Doris & Plakias, 2008). However, more needs to be known as to what kind of value honor is. Aside from brief characterizations of honor as something men were expected to defend and assertions that subscribing to this value depends on no identifiable factual misinformation or ignorance, there is little indication of how it functions as a value or its relation to other ethical values that its adherents might have. Honor, especially construed as an ethical value, would have to go broader and deeper than a hair-trigger temper in service of reputation for toughness. To have honor might be something like being entitled to respect, either because one has successfully met a standard or because one has a status one shares with others (Appiah, 2010). It may be something like being entitled to prestige based on success in competition that is designed to bring out the best in all contestants (Demetriou, 2014). There undoubtedly are different realizations of the value of honor, but if it functions as an ethical value, one wants to know what sort of structure it has and how it relates to other ethical values that its adherents also subscribe to. This makes a difference as to how a particular realization of honor might be criticized. Once honor is placed in a larger normative context, one can realize that people from many different cultures have some version of it, not just the American South. One might then go on to criticize the allegedly Southern culture of honor by arguing that one can assert one's entitlement to respect in the face of rude and insulting behavior through nonviolent means that are at least as dignified as engaging in fisticuffs. Furthermore, such means are less likely to conflict with other ethical values one might have, such as the imperative not to endanger one's physical well-being for the sake of those one cares for and whose welfare one supports. This is not to deny the value of empirical studies of values for moral philosophy. Such studies can serve as a check on what has become a very specialized academic discipline with tendencies toward rarefied forms of provincialism. However, one's case for relativism or universalism must sooner or later involve the exercise of one's normative reasoning: not exclusively from the normative viewpoint one initially favors, but from an attempt to evaluate all normative viewpoints in contention.
The more a thorough examination of these viewpoints fails to validate a defusing explanation of the type that universalists favor, the more seriously the possibility of relativism has to be taken.1 Some of the disagreements that appear not to hinge on nonmoral factual disagreement and that arguably resist explaining away by reference to epistemic bias or limitation are certain disagreements arising from the plurality of values that can come into conflict, such as the duties arising from relationship and membership in a community that can conflict with
rights to personal autonomy or the value of acting for the greatest good of the many even when that would require violating the rights of the few. Such conflicts involve values that are arguably basic, irreducible to other values, or not ultimately reducible to just one. Such conflicts may be an especially significant source of challenge to universalism, given perennial philosophical disagreements as to whether all values somehow reduce to just one kind such as social utility or individual rights, and if not, as to whether there is some universally correct ranking of these values or some universally correct way of adjudicating conflicts between them in particular contexts. Types of morality can be distinguished by what values they give greater emphasis to. One type is a morality centered on the good of relationship and community; another is centered on autonomy and rights (Wong, 1984, 2006). Neither type of morality needs to exclude the values associated with the other type. The difference is often a difference in the priority assigned to one or the other set of values in case of conflict. Many Americans hold a morality centered on autonomy and rights, but they also value relationships. And among those who place central moral value on relationships and community, there is concern for individuals when their interests conflict with those of others in the group or the common group interest. This last point is not commonly recognized among proponents of rights-centered moralities, which often leads them to abruptly dismiss the other type of morality. Consider a common stereotype that relationship-oriented moralities subordinate the individual to the group. The stereotype may hold that “collectivist” moralities may have once served a good social function, perhaps under conditions that necessitated close cooperation between members of a group, but that development and new forms of technology have made such intensive forms of interdependence unnecessary. Consider some possible defusing explanations that might be applied to this stereotype by those inclined to make autonomy central to their conception of the single true morality: those who remain wedded to relationship-centered ways of life may not have had experience of other more liberating and self-directed ways. It may be the only way of life they know. Or they may benefit from being at or near the top of hierarchies that are endorsed under these moralities, and self-interest biased their moral beliefs toward these moralities. Now, however, the enlightened are in a position to clearly recognize the inherent dignity and worth of the individual and that such worth grounds rights that require that the individual's opportunities and liberties be protected against the demands of others and groups. Some people have subscribed to versions of relationship-oriented moralities that approximate this stereotype. But both empirical and philosophical analysis points to a different way to understand such moralities. Some empirical studies of relationship-centered moralities have stressed not that the individual is subordinated to the group but that the person is conceived as an “interdependent self” (Markus & Kitayama, 1991) or a “sociocentric self” (Shweder & Bourne, 1982), in contrast to an “independent self” or an “egocentric self.” Under the interdependent or sociocentric conception, one's identity as a person and one's characteristic behavior and attitudes are understood as responses to particular people in particular contexts.
The kind of person one is depends on whom one is with. By contrast, the “independent” or “egocentric” self is conceived as a free-standing individual with an identity that is detachable from the particular people one is interacting with. The interdependent conception has normative implications for what constitutes the good of the individual and how that good relates to the good of others. In accordance with
the way one's identity can depend on those with whom one stands in relationship, one's good as an individual depends on the goods of others. One has a compelling personal interest in their welfare and in one's relationship with them. Sustainable and morally viable relationships depend on the individuals' gaining satisfaction and fulfillment from being in those relationships. Rather than the individual's good being subordinated to others, the individual's good overlaps with the good of others. For example, the good of each member of a family may include or overlap with the good of other members of the family. When one family member flourishes, so do the others. These normative themes are contained in Confucian ethics, which places relationships of mutual care and respect at the core of human fulfillment. To realize oneself is to be a self in relationship, but it is recognized that the interests of individuals, even in the best of relationships, can conflict. The ideal is to balance and reconcile the conflicting interests and to do so in the light of the interdependence of individuals and of the goods they strive to realize. Sometimes an individual's interests will have to yield to those of others. A partial compensation to that person is that a central part of their good lies in being, for instance, a member of the family. On the other hand, the good of the family cannot be achieved without consideration of an individual's important interests. If those interests are urgent and weighty, they must become important interests of the family and can sometimes have priority in case of conflict. At other times, differences have to be split in compromise. Sometimes yielding to others should be compensated by one's having priority at other times. In sum, one mutually adjusts conflicts in light of the interdependence of one's own good with that of others. A story from 5A2 of the Mencius illustrates these points. It is about Shun, a legendary sage-king exemplary for his ability to get people to work together and for his filial piety. When Shun wanted to get married, he knew that if he were to ask his parents' permission to marry, he would be denied. He decided to marry anyway without telling them. This is ordinarily an extremely unfilial act. But Mencius defends what Shun did, saying that if Shun had let his parents deny him the most important of human relationships, it would have embittered him toward his parents. What is Mencius's reasoning? Shun's good as an individual depends on both his desired marriage relationship and his relationship to his parents. Were he to conform to his parents' wishes, he would not only deny himself the first relationship but also adversely affect the second. For the sake of both relationships he must assert his own good, which in the end is not separate from the good of his parents. Consider now the way that a morality placing a higher value on the rights of individuals might approach the problem. It might hold that Shun had every right to marry whomever he wants to marry. If the relationship to his parents matters a great deal to him, he might choose not to marry, but there is no moral requirement to do so and no requirement for him to try to work things out with his parents. A relationship-oriented morality such as Confucianism does not differ from one that emphasizes rights of the individual because it fails to recognize that important interests of the individual can conflict.
It differs in its approach to dealing with such conflicts and in the weight it accords to the relationship as a constituent of each party's well-being. This does not mean that the interests of each individual are reduced to the health of the relationship or that it can be acceptable to continually frustrate such interests for the sake of the relationship. Once one investigates how a tradition that tilts in favor of relationship deals with major challenges and to the extent one can do the same for a tradition that tilts in favor of
autonomy, one not only sees that the issue between them is more subtle than initially construed, but one might also see how ways of life realizing each type of tradition might offer fulfilling human lives. One might begin to doubt that there is some single ideal balancing point that combines the strengths of the traditions and avoids the moral downsides that each is prone to have. Arriving at such a conclusion might produce an experience that could be called “moral ambivalence” (Wong, 2006), by which one becomes uncertain as to whether there is a singular truth as to how to balance or prioritize values that are shared across different moral traditions. The experience of moral ambivalence presents a challenge for universalists to explain. The stock explanations mentioned earlier might be invoked, but they need to be taken beyond the hand-waving stage. Another possible explanation congenial to at least some realists who assert the existence of a moral reality independent of the attitudes of any inquirer is that aspects of this reality are not accessible to human beings, even under epistemically ideal circumstances (McGrath, 2010). While coherent, such a view leaves it unclear where the line falls between the humanly unknowable and the knowable, the unresolvable and the resolvable: does it fall between those disagreements that seem to hinge only on disagreement over nonmoral factual questions and those disagreements that seem to involve basic differences over moral values? If so, that leaves a huge domain of unknowability. If that domain is smaller, how is the boundary drawn? Relativists do not have to claim that every thoughtful person has had the experience of moral ambivalence. They only have to claim that some thoughtful and informed people who have made the effort to understand the normative basis of conflicting moral viewpoints in some depth have had the experience. Relativists can challenge those who see a single correct answer that resolves conflicts between basic values to present their reasons and to say whether their reasons ultimately rest on something other than brute moral intuition, from which other thoughtful people dissent. Those who take moral ambivalence seriously can offer their theories about the relativity of moral truth as a candidate for best explanation of that experience. That explanation will typically involve a naturalistic approach to morality.
3. Naturalistic Approaches to Morality Naturalistic approaches bring the relevant human sciences to bear on understanding what sort of thing a morality is and how it originated (without necessarily attempting to reduce that understanding to science).2 One of the most significant contemporary developments in the relevant human sciences is a new understanding of how a part of the human biological inheritance prepares us for cooperating with each other. We share with the great apes behaviors expressing empathy for others and reciprocation of the good they do for us (Flack & De Waal, 2000; de Waal et al., 2008), suggesting a shared evolutionary history resulting in genetically based dispositions for these behaviors.3 The science of human biological evolution has recently generated hypotheses as to how human beings could have evolved other-concerned and reciprocating motivations. These range from hypotheses as to why people might be disposed to act for the sake of their kin at cost to themselves (Hamilton, 1964), to why humans tend to reciprocate cooperation with cooperation (Trivers, 1971), to why they might engage in personally costly acts for the sake of nonrelated others (Sober & Wilson, 1998; Gintis, 2000).4
It is highly unlikely that morality is fully embedded in the human genome. The theory of the coevolution of genes and culture might help fill in the gap. While there might be biologically based psychological motivations for cooperation, there is no guarantee that they will operate with each other harmoniously (e.g., kin-favoring motivations might come into conflict with altruism and reciprocity toward nonkin and strangers), much less with powerful drives to preserve and benefit the self. In the Pleistocene era the crucial biological evolution of human social motivations overlapped with the appearance of culture, so it is plausible to hypothesize, as Boyd and Richerson (2005) have, that some biologically based motivations, such as the disposition to follow the majority or to imitate the most successful members of one's group, adapted human beings to guide themselves through culture. Moral norms and values fostered more coherence within the diverse array of human motivations, a coherence that helped to realize the functions of morality. Indeed, genes and culture might have interacted during the course of human evolution, whereby biologically based forms of altruism might have further evolved in cultural niches that require demanding forms of cooperation, such that the further evolved forms of altruism might have made possible even more demanding forms of cooperation that foster further biological evolution of the biologically based forms of altruism, and so on, creating a ratcheting effect (Richerson et al., 2003). One of the primary functions of morality on a naturalistic approach would be fostering and regulating social cooperation (Wong, 1984, 2006). Another distinct but related function is fostering and regulating intrapersonal coherence of motivation. The two functions are related because interpersonal coordination requires a degree of intrapersonal ordering of potentially conflicting motivations within the concerned individuals. It is no accident that moral ideals often centrally feature relationship with others: an evolutionary adaptation that inclines human beings to cooperation is the disposition to engage in constructive relationships with others and to find them deeply satisfying for their own sake (Tomasello, 2014). However, the intrapersonal function is distinct because moral ideals of a worthwhile human life or excellence of character often require more or other than whatever serves interpersonal coordination.5 The path to relativism from a conception of morality with such functions is fairly clear, if not assured. There is clearly a variety of ways to foster and structure human cooperation. While some ways might clearly be worse than others and fail to perform their function, it would be surprising if there were a single correct way. The values of individual rights, of individual autonomy, of promoting an aggregated good summed over individuals, and of relationship and community each present different approaches to knitting together individuals into cooperative activity and to specifying a thick conception of cooperation that makes clearer to participants what each is expected to do and what to expect of others. It is quite possible that any morality that fulfills its functions must contain a mixture of values that lean toward the individual and those that lean toward relationship, but the question is whether there is a single correct way to combine and to balance those values.
Consider that people have conflicting but seemingly ground-level intuitions as to what is right in cases where the rights of one or a few individuals are sacrificed to prevent a great deal of harm to many individuals. Or consider that some might supply the answer to conflicts such as those that Shun faced on the basis of relatively clearly defined rights that individuals have to live their own lives free from the demands of even the closest relationships, while others might
think it right to balance the demands of individuals' interests and those of the people with whom they are in close relationship on the basis of the interdependence of their goods. Does the insistence that there is a single right answer ultimately come down to the assertion of a brute moral intuition? If so, we might wonder what is gained, and the mutual understanding that might be lost, by this insistence.
4. A Pluralism of True or Most Justified Moralities as a Form of Relativism Relativism as defined in this chapter—the view that there is no single true or most justified morality—leaves room for the further view that not all moralities are true or belong to the most justified category. The typical debate over moral relativism pushes its definition pretty close to the extreme view that all moralities are true or equally justified. This leaves room for those on the other side to acknowledge the possibility of pluralism without calling it such and still present their position as a form of universalism. For example, some self-named universalists grant that some moral problems may lack a truth-apt resolution due to vagueness in moral concepts (Shafer-Landau, 1994, 1995), ties in the ranking of moral values that come into conflict (Brink, 1989), and noncomparability of such values (Shafer-Landau, 1994, 1995). These authors do not indicate precisely how far they are willing to go in this sort of qualification, but clearly, the more they acknowledge that the phenomenon is widespread, the more one is entitled to wonder whether universalism is being defined in a very permissive fashion in order to co-opt its rival. It is time to recognize the vast middle ground between relativism and universalism as these are stereotypically conceived, to not worry about what label to give to the middle ground, and to start discussing whether we all have good reason to occupy it. The argument for a kind of pluralism consistent with the naturalistic approach outlined in this chapter is that morality's functions, plus the nature of the beings it governs, constrain the content of its norms. Different moralities must share some general features if they are to perform their functions of coordinating beings with particular kinds of motivations. For example, it is arguable that all moralities adequately serving the function of fostering social cooperation must contain a norm of reciprocity—a norm of returning good for good received. Such a norm helps relieve the psychological burden of contributing to social cooperation when it comes into conflict with self-interest. It takes some of the burden off other-regarding concern as motivation for cooperation. A norm of reciprocity plays a crucial role in fostering coherence among the diverse motivations in the human psychic economy. It is further arguable that justifications for subordinating people's interests must not rely on falsehoods such as the natural inferiority of racial or ethnic groups or the natural incapacities of women. This follows from the way that morality came to be conceptualized as a distinctive way of fostering social cooperation, one that can be contrasted with gaining cooperation through coercion or deception. For something to be a moral requirement, it must be possible for the people whose cooperation is desired to see why (under epistemically favorable conditions) they should comply without being forced or deceived into doing so. Finally, the ubiquity of moral disagreement, even within relatively cohesive communities of shared values, makes it arguable that social cooperation would come under impossible
pressure if it always depended on strict agreement on what values are to govern cooperation or on how to interpret or prioritize them in case they come into conflict. All true or most justified moralities, if they are to sustain cooperation, must recognize a value of accommodation—a willingness to maintain constructive relationship with others with whom one is in serious and even intractable disagreement (Wong, 2006, chs. 2 and 9). The presence of accommodation as a value does not mean that it must take precedence over other values. Sometimes the values that bring one into conflict with others are judged to be too important to be sacrificed for the sake of maintaining a relationship with those who disagree.
5. The Messy and Dynamic Nature of Moralities Here is one of the most challenging questions that arise for relativists who hold that there is more than one true or most justified morality: who holds these moralities? Some relativists hold that identifiable, discrete groups hold them. Gilbert Harman, for example, holds that a morality is constituted by those who implicitly agree with each other to act in certain ways on the condition that others similarly intend (Harman, 1975). J. David Velleman holds that a morality consists of “doables,” which are socially constructed action-types that spell out what can be intelligibly done within a community, and also of reasons, which have force only for the members of the community within which they are socially constructed (Velleman, 2013). Both these views seem to attribute moralities to bounded communities who share norms, values, and reasons.6 Michele Moody-Adams (1997) points out the difficulties of finding such bounded communities. As she points out, there will be some in almost any American community who display the very permissive attitudes toward treatment of animals that Brandt attributed to the Hopi. Though East Asian societies have comparatively more people who subscribe to moralities with a greater focus on relationship and community, they also have members who subscribe to strongly rights-oriented moralities. Even in the most cohesive of moral communities, some will dissent in fundamental ways from the prevailing consensus. Communities of people who regularly cooperate with each other rarely agree on all their values, norms, and reasons, even if they share many of these. Moral communities are not moral islands. A more realistic picture of moral agreement and disagreement will take seriously trade, conquest, and the simple thirst to explore and to know the novel as powerful forces that impel movement across whatever islands might have formed in the first place. Inevitable cross-pollination is both a force for more connectedness and a degree of mutual understanding between communities and a force for diversity of moral belief and practice within communities. This problem for conceiving morality as belonging to distinct groups who are bounded by agreement among those within and by disagreement with those without might motivate adoption of a more individualistic version of relativism: morality, properly speaking, belongs to the individual. James Dreier (1990) and Jesse Prinz (2007) have defended a version of what Dreier labels “speaker relativism,” according to which the content of moral terms is determined by the individual speaker's moral commitments (or in Dreier's language “motivating attitudes”). Given such an analysis, however, it becomes problematic to explain why people treat a speaker's moral pronouncements as typically having implications for what they have moral reason to approve and to do. Morality seems intrinsically to have a
normative interpersonal dimension, which follows from the naturalistic conception of its cooperation-fostering function (Wong, 2008). There is an alternative to regarding morality as the possession of a group or of the individual. Morality plausibly arises from customs and practices that emerge and evolve implicitly (most of the time) among people who belong to more than one community, and people need not share all moral values, norms, and reasons in order to cooperate with each other. They belong to communities that can dynamically expand to take in others. As cooperation grows more complex not just through incorporating a greater number of individuals but also through increasing differentiation of social roles, people with differing temperaments and value orientations come to fill those different roles, and their value orientations may become more elaborate and differentiated (consider the roles of priest, soldier, tradesperson, farmer, craftsperson, artist, and teacher in a society). Their differing strengths and perspectives enrich cooperation but complicate the task of coordination, which illustrates why the value of accommodation is necessary in a morality. Natural languages have no clear and unqualified boundaries: they have dialects and even the idiolects of individual speakers, who inherit a shared language but develop somewhat idiosyncratic meanings for parts of it, and they incorporate words and concepts from other natural languages. It seems truer to the way that varying meanings get assigned to moral terms such as “good person” and “right action” to conceive moralities and their associated moral languages as having, in an important sense, a social origin that is diverse and dynamic in nature. That is, just as we learn natural languages from diverse sources around us, we may expect to learn our moral languages—the meanings of terms such as “good person” and “right action”—from those who raise us but also from those who school us, our peers, and other social influences. Though there might very well be some measure of agreement among these different sources, it will not be surprising if one sees diversity, even of the fundamental kind that does not depend on difference in nonmoral factual belief, so that different people in our social fields of influence mean somewhat different things when they use moral terms (Wong, 2011; Wong, 2014). This last point provides a clue to answering another perennial question for relativists: do they really want to say that a statement such as “Abortion is wrong” is true for a right-to-life advocate X but false for a pro-choice advocate Y? The typical relativist move is to make the meaning of moral terms indexical: when a speaker X makes a moral statement about the rightness of an action, for instance, it is supposedly part of the meaning of “right” as applied to action that standards requiring the action are invoked and that they are standards of the speaker, or the speaker's community. This makes the “true for X but not for Y” possibility perfectly coherent, but the indexical analyses have the liabilities of the associated theories that moral communities are uniform and bounded or that they consist of single individuals. Furthermore, unlike genuine indexicals such as “I” or “here,” there is no general knowledge possessed by all competent moral language users that moral terms are used as indexicals containing implicit references to the standards of individuals or communities.
On the contrary, when people give, accept, or dispute reasons for the moral positions they or others take, they do not typically think of them as reasons limited in scope to a particular community. Such a conclusion goes along with the multiplicity of communities to which people can belong, and the complex relations of moral similarity and difference they can have with other members of their various
communities. If one is to delimit the scope of applicability of one's moral positions, it cannot be as simple as saying they apply to one's community, because one belongs to multiple communities that often have significant moral differences with each other and internally among their own members. Another relativist analysis of moral language has been called “invariant relativism” (Kölbel, 2015). Here X and Y are taken to be contending over the same proposition, contrary to the indexical relativist, but there is conceived to be a parameter for the truth-value of the belief in the proposition that allows it to vary according to who is doing the believing. The parameter is the values of the believer. So it would be correct for X to believe that abortion should be prohibited but incorrect for Y to believe that. The challenge this type of analysis has to meet, however, is to explain how it is that belief in the same proposition is consistent with variability in truth-value. A third alternative is that there is a difference in propositional content because of variations in the meanings of moral terms such as “right” and “ought” and in the truth conditions of statements containing these terms (Wong, 1984, 2006, 2011). It will not be evident to any competent user of the moral language that such variation exists, given that moralities, especially when they begin to approach adequacy in performing their functions, do overlap considerably in the values they contain. Moreover, there is an expressivist dimension to the meaning of moral terms as they are typically used. They are used to prescribe actions and attitudes, and this dimension of meaning stays constant across variation in other dimensions of meaning that produce truth conditional content. This expressivist dimension is not a substitute for the truth conditional content of moral terms. Indeed, if the truth conditional content is fundamentally shaped by what is necessary for promoting and regulating social cooperation, the expressivist dimension is made possible by the truth conditional content. It is because we are talking about what we should be doing in order to work together that we are prescribing to each other. Moreover, because there is no fixed boundary as to whom we might be working with, the prescriptions expressed by moral statements can potentially extend to everyone (Wong, 2008). Given this analysis of the meaning and logic of moral statements, one may thus come to moral relativism on the basis of discovering that moral disagreements run deep enough to result in divergence in meaning and truth conditions. This is consistent with concluding that not all moral disagreements run as deep, and that on many issues, people overlap enough in the meanings they assign to moral terms that any disagreement they have is actually a conflict over the truth of a moral statement, univocally construed. A study by Goodwin and Darley (2012) seems to reveal that many people have come to this sort of nuanced conclusion: that some, but not all, moral disagreements may lack a single correct answer.
Notes 1. See Chapter 2 of this volume for further reflection on moral universals. 2. See Section I of this volume for reviews of various scientific investigations directly relevant to the emergence and evolution of morals and similar norms. 3. See Chapter 3 of this volume for detailed discussion of the norms of other animals. 4. See Chapter 9 of this volume for a detailed discussion of kin selection, reciprocal altruism, and other natural mechanisms of biological evolution. 5. See Chapter 9 of this volume for a detailed discussion of contemporary models of the evolution of cooperation both within the family and between unrelated individuals. 6. See Chapter 23 of this volume for the attribution of moral beliefs and knowledge to institutions and groups of people.
References
Appiah, K. A. (2010). The Honor Code: How Moral Revolutions Happen. New York: W. W. Norton.
Boyd, R. T. and Richerson, P. J. (2005). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Brandt, R. (1954). Hopi Ethics. Chicago: University of Chicago Press.
Brink, D. (1989). Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Cohen, D., Nisbett, R., Schwarz, N. and Bowdle, B. (1996). “Insult, Aggression, and the Southern Culture of Honor: An ‘Experimental Ethnography’,” Journal of Personality and Social Psychology, 70 (5), 945–960.
Demetriou, D. (2014). “What Should Ethical Realists Say About Honor Cultures?” Ethical Theory and Moral Practice, 17 (5), 893–911.
De Waal, F., Leimgruber, K. and Greenberg, A. (2008). “Giving Is Self-Rewarding for Monkeys,” Proceedings of the National Academy of Sciences, 105 (36), 13685–13689.
Doris, J. and Plakias, A. (2008). “How to Argue About Disagreement: Evaluative Diversity and Moral Realism,” in W. Sinnott-Armstrong (ed.), Moral Psychology, vol. 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 303–331.
Dreier, J. (1990). “Internalism and Speaker Relativism,” Ethics, 101 (1), 6–26.
Enoch, D. (2009). “How Is Moral Disagreement a Problem for Realism?” Journal of Ethics, 13 (1), 15–50.
Flack, J. and de Waal, F. (2000). “‘Any animal whatever’: Darwinian Building Blocks of Morality in Monkeys and Apes,” Journal of Consciousness Studies, 7 (1–2), 1–29.
Gintis, H. (2000). Game Theory Evolving. Princeton: Princeton University Press.
Goodwin, G. and Darley, J. M. (2012). “Why Are Some Moral Beliefs Perceived to Be More Objective Than Others?” Journal of Experimental Social Psychology, 48, 250–256.
Hamilton, W. (1964). “The Genetical Evolution of Social Behavior,” Journal of Theoretical Biology, 7, 1–16.
Harman, G. (1975). “Moral Relativism Defended,” Philosophical Review, 84 (1), 3–22.
Kölbel, M. (2015). “Moral Relativism,” Routledge Encyclopedia of Philosophy. Online: www.rep.routledge.com/articles/thematic/moral-relativism/v-2
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. London: Penguin Books.
Markus, H. and Kitayama, S. (1991). “Culture and the Self: Implications for Cognition, Emotion, and Motivation,” Psychological Review, 98 (2), 224–253.
McGrath, S. (2010). “Moral Realism Without Convergence,” Philosophical Topics, 38 (2), 59–90.
Moody-Adams, M. (1997). Fieldwork in Familiar Places: Morality, Culture, & Philosophy. Cambridge, MA: Harvard University Press.
Prinz, J. J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
Richerson, P. J., Boyd, R. T. and Henrich, J. (2003). “Cultural Evolution of Human Cooperation,” in P. Hammerstein (ed.), Genetic and Cultural Evolution of Cooperation: A Dahlem Conference Workshop. Cambridge, MA: MIT Press, 373–404.
Shafer-Landau, R. (1994). “Ethical Disagreement, Ethical Objectivism and Moral Indeterminacy,” Philosophy and Phenomenological Research, 54 (2), 331–344.
———. (1995). “Vagueness, Borderline Cases and Moral Realism,” American Philosophical Quarterly, 32 (1), 83–96.
Shweder, R. and Bourne, E. (1982). “Does the Concept of the Person Vary?” in A. J. Marsella and G. M. White (eds.), Cultural Conceptions of Mental Health and Therapy. Dordrecht: D. Reidel, 97–137.
Sober, E. and Wilson, D. (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press.
Sturgeon, N. (1994). “Moral Disagreement and Moral Relativism,” Social Philosophy and Policy, 11 (1), 80–105.
Tomasello, M. (2014). “The Ultra-Social Animal,” European Journal of Social Psychology, 44, 187–194.
Trivers, R. (1971). “The Evolution of Reciprocal Altruism,” Quarterly Review of Biology, 46 (1), 35–57.
Velleman, J. D. (2013). Foundations for Moral Relativism. Cambridge, UK: Open Book Publishers.
Wong, D. B. (1984). Moral Relativity. Berkeley, CA: University of California Press.
———. (2006). Natural Moralities: A Defense of Pluralistic Relativism. New York: Oxford University Press.
———. (2008). “Constructing Normative Objectivity in Ethics,” Social Philosophy and Policy, 25 (1), 237–266.
———. (2011). “Relativist Explanations of Group and Interpersonal Disagreement,” in S. Hales (ed.), A Companion to Relativism. Hoboken, NJ: Wiley-Blackwell, 411–430.
———. (2014). “Response to Hansen,” in Y. Xiao and Y. Huang (eds.), Moral Relativism and Chinese Philosophy: David Wong and His Critics. Albany, NY: State University of New York Press, 215–240.
Further Readings P. Boghossian, Fear of Knowledge: Against Relativism and Constructivism (New York: Oxford University Press, 2006) is a critique of some forms of relativism, including some forms of moral relativism. M. Fricker, “Styles of Moral Relativism: A Critical Family Tree,” in R. Crisp (ed.), The Oxford Handbook of the History of Ethics (Oxford: Oxford University Press, 2013, 793–817) provides a survey and assessment of different branches of the “family tree” of moral relativism. S. D. Hales, ed., A Companion to Relativism (Hoboken, NJ: Wiley-Blackwell, 2011) is an anthology of essays that covers relativism in philosophy of language, epistemology, ethics, logic, and metaphysics. M. Krausz, ed., Relativism: A Contemporary Anthology (New York: Columbia University Press, 2010) is an anthology that covers relativism in relation to facts and conceptual schemes, realism and objectivity, universalism and foundationalism, solidarity and rationality, pluralism and moral relativism, and feminism and poststructuralism. Though R. Rorty rejects the label of relativism, in his Contingency, Irony, and Solidarity (Cambridge: Cambridge University Press, 1989) he is commonly regarded as defending a version of the view, including one that applies to morality. T. M. Scanlon, “Fear of Relativism,” in R. Hursthouse, G. Lawrence and W. Quinn (eds.), Virtues and Reasons (Oxford: Clarendon Press, 1998, 219–246) discusses why moral relativism has been an object of fear in philosophy.
Related Chapters Chapter 1 The Quest for the Boundaries of Morality; Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 11 Modern Moral Epistemology; Chapter 16 Rationalism and Intuitions: Assessing Three Views about the Psychology of Moral Judgment; Chapter 22 Moral Knowledge as Know-How; Chapter 23 Group Moral Knowledge.
16 RATIONALISM AND INTUITIONISM—ASSESSING THREE VIEWS ABOUT THE PSYCHOLOGY OF MORAL JUDGMENTS Christian B. Miller 1. Introduction One of the liveliest areas in moral psychology in recent years has been research on the extent to which conscious reasoning leads to the formation of moral judgments. The goal of this chapter is to review and briefly assess three of the leading positions today on this topic, each of which has significant implications for moral epistemology. Two quick comments before we begin. First, the primary focus of this discussion is on descriptive issues about how people actually form their moral judgments. Hence little tends to be said in this literature about normative questions concerning the ways people should go about forming them. Second and closely related, the primary body of literature that will be consulted in discussing the three views is research in psychology. This includes neuroscientific and behavioral experiments as well as the models that psychologists have constructed on the basis of those experiments. We shall proceed as follows. Part 2 offers some background on what the central issues are and how the terminology will be understood. The remaining parts then take up each of the three central positions: traditional rationalism, social intuitionism, and morphological rationalism. My goal will not be to advance my own preferred view but rather to try to provide a fair summary and assessment of each of the leading ones.
2. Background Let's begin with an example. Suppose I read the following story in the newspaper: A local man has admitted to punishing his 8-year-old son by holding him underwater until he started to turn blue in the face. The man, 40-year-old John Smith, says that he needed to punish his son to teach him a lesson for not finishing all the food on his plate at dinner. The boy is being treated at a nearby hospital for asphyxiation, and his condition is critical at the present time.
[Figure 16.1 The three stages pertaining to my moral judgment about John Smith: Stage One, the direct causal antecedents of the moral judgment; Stage Two, the moral judgment about John Smith; Stage Three, the first-person explanation for the moral judgment.]
Upon reading this, I come to form the moral judgment: “What John Smith did was morally wrong.” I bet you do, too. Later that day, my wife picks up the paper and reads the story. While knowing the answer already, she still says: “Wow, did you read the story about John Smith? What do you think about what he did?” Not surprisingly, I say, “What John Smith did was morally wrong.” Knowing that I study ethics, she follows this up with: “Of course, but I’m curious how you came to that conclusion.” After a moment’s thought, I answer: “Because the father was being cruel to his son.” Other things someone might have said instead include: “Because the son didn’t deserve to be treated that way,” “Because the father violated his son’s dignity,” “Because what the father did was against God’s will,” or “Because I wouldn’t want to live in a world where every father did that to his child.” Figure 16.1 illustrates how we might break down what was going on in my mind into three stages. Let me briefly comment on each of these stages.
Stage One As far as the views considered in this chapter are concerned, the main question pertaining to Stage One is the following:
Question 1: Is conscious moral reasoning typically involved in the formation of a moral judgment?
Note that the question is about what “typically” happens. No one would deny that there are cases where moral judgments are formed spontaneously without any prior conscious reasoning. The debate, as we will see, is about whether this is the typical way that moral judgments are formed or whether it is the exception.1 Moral “reasoning” can be understood very broadly here as involving a process of weighing considerations prior to a decision about what to think or what to do. An example would be evaluating arguments on both sides of the death penalty debate before coming to a conclusion about whether it is morally wrong for a particular person to be executed. Of course other cases of reasoning need not involve this much abstract reflection. They could be a matter of, say, thinking about what Jesus or Gandhi would do, or what your mother always told you to do, and then on the basis of that answer coming to form a moral judgment. Two other closely related questions are typically brought up in this literature, although they are usually not carefully distinguished. The first one is the following:
Question 2: Are moral principles typically involved in the formation of a moral judgment?
“Moral principles” can also be understood broadly here as principles connecting some nonmoral facts, states of affairs, or reasons, with some moral evaluation like goodness, rightness, or virtue.2 Examples of principles people might hold include:
Abortion is wrong.
If it would cause a lot of pain, then don't do it.
That's a bad thing to do if children in Africa could have used it instead.
I morally have to keep my promises.
Back to Question 2. It is asking whether we typically form moral judgments on the basis of one or more moral principles, regardless of how plausible or sophisticated or even coherent those principles might be. Again, these are descriptive issues, which are distinct from a normative discussion of the plausibility of the principles themselves or whether we should even be invoking principles in forming moral judgments. The third question is this one:
Question 3: Are moral judgments typically formed on the basis of moral reasons?
As used in this chapter, “moral reasons” will refer to morally relevant considerations that count in favor of a particular action or type of action. Specifically, the reasons here are subjective or motivating reasons—they are good moral reasons by the agent's own lights. But they need not be normative reasons, which really are good moral reasons, period. Back to my example, suppose I formed my judgment about the wrongness of Smith's disciplining his son on the basis of that act's being cruel. Then my judgment would be based on a moral reason, clearly, and in this case it is a good moral reason, too. Suppose instead, though, that I formed the judgment because I actually thought Smith should have punished his son more aggressively. His action was wrong, then, because in my view the punishment was too lenient. In that version of the case, I clearly don't have a good normative reason for forming my moral judgment (even if the judgment by itself is true—Smith's action really was wrong!). But—in this imaginary scenario—I think that I do have a good reason. So it turns out that the judgment is based on motivating reasons (by my lights), but not on normative reasons. Now it might seem excessive to distinguish these three questions. After all, if we answer any one of them affirmatively, wouldn't we have to do the same for the other two? But this is not something we should simply assume from the start. In fact, as we will see later on, morphological rationalism can only exist as an intelligible position if the answers to these three questions come apart.
Stage Two We can be neutral on the nature of moral judgments, and in particular whether cognitivism, noncognitivism, or some hybrid position is correct. For our purposes we can simply understand moral judgments as the agent's determination of the moral status of some object of evaluation. The moral status could involve axiology (goodness, badness, etc.), deontology (rightness, wrongness, obligatory, etc.), or character (virtuous, honest, cruel, etc.). The objects of evaluation could range over people (“Stalin was cruel”), actions (“What John
Smith did was morally wrong”), and outcomes (“The consequences of his decision were awful”), among other things.
Stage Three When my wife asked me why I judged John Smith's action to be wrong, I replied, “Because the father was being cruel.” Suppose I was not trying to deceive her and really did take this to be the primary reason for why I formed my judgment. But was it after all? In other words, the fourth and final question that is central in this literature is the following:
Question 4: When people give sincere explanations for why they formed their moral judgments, are those explanations typically accurate?
The “sincere” piece is important, since obviously if a person is out to deceive others about these matters, then the resulting “explanation” will not typically be accurate. But suppose I sincerely say, “Because the father was being cruel.” That could have been what primarily led me to form my judgment about Smith. But then again, it might not have been. It might have been something rather different, such as a feeling of disgust at what he did, which I didn't realize was actually causally at work in Stage One. As a result, if it was a feeling of disgust that was primarily responsible for the judgment, then I am guilty of post hoc confabulation. I have made up a story (perhaps without even realizing that this is what I am doing) for why I judged Smith this way by appealing to Smith's being cruel. Even though the story has little basis in reality, I believe it to be true. And even though it is a good story—in the sense that Smith really was cruel and that fact can make his action wrong—it is still a fictional story, since it does not accurately describe how I came to make this judgment. As we will see, there is sharp disagreement among researchers working in this area about the prevalence of post hoc confabulation in our moral psychology. But enough background for now. Let's get to the views themselves.
3. Traditional Rationalism

With the demise of behaviorism, a rationalist approach to the psychology of moral judgments became prominent in the 1960s and ’70s.3 Given the preliminary work we have already done, we can state the basic parameters of this approach straight away.

Concerning Stage One, traditional rationalism (TR) offers a unified answer to all three questions: Typically moral judgments are formed on the basis of conscious moral reasoning, they are formed on the basis of one or more moral principles, and they are formed on the basis of one or more moral reasons. In other words, the answer is “yes” to all three questions.

What about after the moral judgment is formed in Stage Three? Here traditional rationalists answer “yes” to the fourth question as well. When people give sincere explanations for why they formed their moral judgments, those explanations typically are accurate. So in my example, it is likely that “Because the father was being cruel” really was why I thought Smith’s disciplinary action was wrong. If this was a typical case, then according to TR its cruelty was something I consciously considered when evaluating his behavior, even if it was just a quick thought. The cruelty of his action is the primary reason for my judgment, and I might have been using a principle like, “If a parent disciplines a child far out of proportion to what the child has done, then the parent is acting cruelly, and so wrongly.” Principle, reason, conscious reasoning, and veridical explanation.

As far as moral epistemology is concerned, TR has a relatively straightforward story to tell about the justification of moral judgments. Presumably their justification will be parasitic on the agent’s justification for adopting the relevant moral principles, together with the justificatory status of the (conscious) reasoning involved in arriving at the conclusions. Similarly, the default assumption will be that our post-judgment explanations for our judgments will be justified as well, although of course that justification is defeasible.

The figure most often associated with traditional rationalism is Lawrence Kohlberg.4 It was Kohlberg who was primarily responsible for the prominence of rationalist views in psychology after the fall of behaviorism. The details of his well-known six-stage view need not concern us here. At the general level, though, Kohlberg wrote that the “stages were defined in terms of free responses to ten hypothetical moral dilemmas,”5 indicating his trust in the general accuracy of the justifications participants offered for their moral judgments. Furthermore, according to Kohlberg those judgments are typically the product of moral principles: “it is no more surprising to find that cognitive moral principles determine choice of conflicting social actions than it is to find that cognitive scientific principles determine choice of conflicting actions on physical objects.”6 Whether psychologists are right in interpreting Kohlberg as a traditional rationalist—in the sense outlined with Questions 1 through 4 here—is a topic that would require careful study of his voluminous writings.

Our interest here, though, is in the views themselves rather than in who held them. And TR has a lot going for it. It preserves widely held views about the central role of reasoning, reasons, and principles to moral agency. It avoids skeptical conclusions about the justification for our moral judgments and our first-person explanations for them. In denying the pervasiveness of post hoc confabulation, it also resonates well with our phenomenological experience of offering accounts of why we judge things the way we do, morally speaking. After all, it doesn’t seem to us that we are making up a story to fit our judgment but rather are telling a true story about what preceded that judgment.

Despite these advantages, TR is almost universally rejected today by psychologists and philosophers alike.7 Here are three commonly raised problems for the view, the first two of which apply to TR’s account of Stage One and the last to its account of Stage Three:
First Objection: Bad Fit with Our Experience. While TR might capture our experience well at Stage Three, it does a noticeably poor job at Stage One. For much of the time, we do not undergo conscious deliberation, reflection, or the like prior to forming a moral judgment. Rather we experience the judgment as immediately arising within us. This is true most obviously when we have little time to act. But it is true in more mundane cases as well, such as the John Smith example. I read the account of what he did. And I concluded that he did something wrong. No conscious reasoning seemed to be involved in the middle.

Second Objection: Bad Fit with the Neuroscientific Evidence. The psychologists Antonio Damasio, Joshua Greene, and others have conducted studies in the neuroscience of moral judgment formation that are frequently taken to show the central role of affective and emotional responses. One implication of their work is commonly taken to be that, “When emotion is removed from decision making, people do not become hyperlogical and hyperethical; they become unable to feel the rightness and wrongness of simple decisions and judgments.”8 This is allegedly trouble for TR since TR downplays the role of emotion by focusing on conscious reasoning and moral principles.9

Third Objection: Bad Fit with the Dumbfounding Evidence. According to TR, Stage Three explanations for the formation of moral judgments, when offered by the person making those judgments, will typically be accurate. But in a series of well-known studies, the psychologist Jonathan Haidt found that participants struggled mightily when presented with various scenarios and asked to justify their moral judgments about them. Here is the best known of his cases:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?10

As Haidt and Fredrik Bjorklund summarize the findings of this research, “Very quick judgment was followed by a search for supporting reasons only; when these reasons were stripped away by the experimenter, few subjects changed their minds, even though many confessed that they could not explain the reasons for their decisions.”11 This is the phenomenon that came to be called “moral dumbfounding,” and it is incompatible with TR’s account of Stage Three reasoning.

These problems, among others, inspired a search for another approach to understanding the psychology of moral judgments. Eventually social intuitionism emerged as a popular alternative.
4. Social Intuitionism

The 1990s saw an explosion of work on automaticity and dual-process approaches in psychology. Given the problems confronting traditional rationalism, and given this work in other areas besides moral psychology, the stage was set for the emergence of what came to be called social intuitionism (SI). Haidt himself was the leading exponent of the view, and it received its canonical statement in his 2001 paper, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.”12

Again we can use our four questions to help outline the view. Recall that the first question was about whether conscious moral reasoning is typically involved in the formation of moral judgments. SI has a clear answer—“no.”13 Whatever reasoning is involved in typical cases, it will be doing its work subconsciously. To clarify a bit more, Haidt and Bjorklund describe a conscious process as one that is “intentional, effortful, and controllable and that the reasoner is aware that it is going on,” and reasoning as having “steps, at least two of which are performed consciously.”14 Moral judgment formation is not like that, at least standardly.

What does cause moral judgments, then? Haidt and his colleagues are clear here too—moral judgments are typically caused by moral intuitions.15 This terminology can be misleading, since Haidt does not mean by “moral intuitions” what philosophers working in metaethics typically mean. Rather they are defined as:

the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling about the character or actions of a person, without any conscious awareness of having gone through steps of searching, weighing evidence, or inferring a conclusion.16

To return to our example, my judgment about John Smith could have been caused by a moral disgust intuition prompted by reading about him nearly drowning his own son, an intuition which in turn spontaneously and subconsciously brought about the moral judgment. That process of going from intuition to judgment, at any rate, is supposed to be the typical causal scenario according to SI.

How about our second question concerning whether moral principles are typically involved in Stage One? Here SI is clear again—“no.” Indeed, I might even have moral principles about the immorality of child abuse that would have produced the same judgment if they had played a causal role. But normally they will be bypassed by my intuitions, which in this example took me straight to the judgment about Smith’s action.

The third question concerns the contribution of moral reasons. Here it is a bit less clear what SI has to say, as I am not aware of any passages in which Haidt and company address this issue directly. But a natural reading of their work suggests that the answer will be “no” as well. If one assumes that forming moral judgments on the basis of moral reasons requires conscious reasoning, and if conscious reasoning is rare prior to judgment formation, then so too will the causal work of moral reasons be rare.17 This is how several commentators have also interpreted SI. For example, Jesse Prinz writes that, “If this interpretation of Haidt’s findings is right, normal adults have values that are not maintained by a network of carefully thought-out reasons. They are implemented by gut feelings.”18 Hanno Sauer has also claimed that SI requires conscious reasoning in order for an agent’s moral judgment to be based on reasons.19

To summarize the picture of Stage One according to SI, people typically form moral judgments on the basis of moral intuitions and not on the basis of conscious reasoning, principles, or reasons. Hence we have a view that is diametrically opposed to traditional rationalism.

That opposition extends to Stage Three, where the question was about the accuracy of our sincere first-person explanations for our moral judgments. SI is happy to admit that people at Stage Three often do engage in conscious moral reasoning.20 Unfortunately, though, according to SI, what they come up with in explaining their particular moral judgments is usually not accurate. In particular, SI endorses the following:

Post-Judgment Confabulation: The conscious reasoning and the appeals to various reasons and principles after the formation of a moral judgment are typically confabulatory on the part of the individual since there were in fact no causally effective reasons and principles in the first place.21

Hence Haidt and Bjorklund write that, “moral reasoning is an effortful process (as opposed to an automatic process) usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment.”22

This captures the heart of social intuitionism as I understand the view. To be sure, there are several other significant components to SI, but this is enough for our purposes here.23 Let’s briefly turn then to some of its epistemological implications, which are in stark contrast to TR’s. If moral judgments are typically formed primarily on the basis of intuitions, which are not reasons- and principle-responsive, that suggests these moral judgments are not going to be justified in the cases at issue. Similarly, at Stage Three our explanations for why we form our moral judgments will involve post hoc confabulation, presumably rendering them not justified either. If justification is necessary for knowledge, then moral knowledge at Stage Two and at Stage Three will be rare. Or this is at least one way of spelling out the potential epistemological implications of SI.24

Despite these disturbing implications, there is much to be said in favor of SI. To begin, it avoids the three problems that were raised for traditional rationalism. Indeed, those are not intended to be just objections to a rival position but are meant to serve as independent sources of positive evidence for SI. Thus SI claims to capture our phenomenological experience of immediately forming moral judgments in many cases without going through any prior conscious reasoning. The neuroscientific evidence is supposed to demonstrate the centrality of emotions to the process of moral judgment formation, and that is easily captured by SI in the form of moral intuitions. And finally, the evidence Haidt cites in favor of moral dumbfounding naturally feeds into a story about the pervasive role of post hoc confabulation.25

But social intuitionism has had no shortage of critics too. Here I highlight only two of the more serious objections that have arisen. They both pertain to SI’s account of Stage One.26
First Objection: Overreaching27

As Haidt initially formulated SI, it appears to apply to all of the moral judgments a person makes. But if so, then according to the objection, SI badly overreaches. This point has been made especially forcefully by the psychologist Darcia Narvaez.28 She calls attention to the important role of other factors at Stage One besides intuitions, such as the agent’s goals and values as well as the frequency of conscious reasoning.

To take an example, as I drive home I might see a homeless person asking for money at the street corner and ask myself what I should do. No answer immediately comes to mind. I might reflect on how a few dollars could help him avoid being hungry tonight. But then I am reminded that there are plenty of shelters in this area offering free food and a clean bed. Perhaps I should put the money to work having a bigger impact with a famine relief organization in Africa. Eventually, though, I come to decide that the right thing to do is to help him out. Intuitions, goals, values, and conscious deliberation all seemed to play a role in arriving at this judgment. As Narvaez writes, often “[i]nstead of intuition’s dominating the process, intuition danced with conscious reasoning, taking turns doing the leading.”29
In their response to Narvaez, Haidt and Bjorklund make a surprising concession. They in effect revise SI so that it pertains to a much narrower class of moral judgments, namely moral judgments a person makes about someone else (her character, her actions, and the like). With respect to first person moral judgments, or judgments about my own character, actions, and the like, Haidt and Bjorklund concede Narvaez’s objection and acknowledge that SI is not a plausible view for such cases.30

This is a major concession. So many of the moral judgments we make are about ourselves. Should I give money to the homeless person? Is it important to keep my promise? Would it really be so bad for me to bend the truth a little bit? But now not only does SI become narrower in scope by not pertaining to those judgments; we also need to find another account of the psychology of our first person moral judgments.

But matters are worse than this. For the first person versus third person distinction does not do the work that Haidt and Bjorklund need it to do. On the one hand, there are many cases of first person moral judgments that do not involve conscious reasoning and that seem to fit well with the SI account. Haidt and Bjorklund themselves even offer an example of someone spontaneously deciding to jump into a river to save a drowning person.31 But consider, on the other hand, that there are many cases of third person moral judgments that do involve conscious reasoning, including the “private, internal, conscious weighing of options and consequences.”32 My example of someone deliberating about the morality of the death penalty, or more specifically about the morality of a particular government official ordering the death penalty for a convicted murderer, is a case in point.33

At this point Haidt and Bjorklund might be wise to abandon the first person/third person distinction and acknowledge that the story about our Stage One moral psychology is messier than they first envisioned. Moral intuitions play a role, but so do goals, principles, values, conscious reasoning, and the like. The extent to which each of them does so will vary from person to person and situation to situation.
Second Objection: Results from Affect Psychology

In recent decades, an emerging picture of affect in the psychology literature paints emotions and feelings as much more sophisticated information-processing systems than the “quick gut-feelings”34 of SI, which are not reasons-responsive. Peter Railton has provided a thorough review of this literature. As he notes, there is now support for the idea that affective states “could actually constitute appropriate representations of value . . . an element of practical knowledge that could guide action ‘in the right way’ to make it responsive to reasons . . . [these] well-attuned affective states can be fitting responses to value.”35 This affect research has the potential to challenge not only SI’s response to Question 3 about reasons and moral judgments but also its skeptical epistemological implications and the extent to which we are guilty of post hoc confabulation.36

These are both serious problems for social intuitionism. And combined with the difficulties we saw earlier for traditional rationalism, it would be welcome news if a plausible third option emerged. Morphological rationalism purports to be that option.
5. Morphological Rationalism

But is there really any room for a third option? Initially it might seem that TR and SI are not only mutually exclusive but also exhaustive options. However, by carefully distinguishing between the different questions that might arise at Stage One, we can see a third way forward. Terry Horgan and Mark Timmons developed such a position, which they called “morphological rationalism” (MR), first in a 2007 paper and more recently in a forthcoming book.37

Let’s spell out the details of MR by taking each of our four questions in turn. For the first question about the extent of conscious reasoning, MR sides with social intuitionism in accepting that typically we form moral judgments spontaneously on the basis of subconscious processing. But, strikingly, it sides with traditional rationalism on the second and third questions. In other words, MR holds that in typical cases, our moral judgments are formed on the basis of one or more moral principles that the agent holds, and they are formed for moral reasons. These are claims that, as we saw, SI, at least as formulated here, cannot accept.38

How is MR able to deny a role for conscious reasoning in typical cases while also affirming a role for moral principles and moral reasons? By claiming that principles and reasons are doing their causal work subconsciously. This is a possibility that Haidt and company seem to have overlooked, or at least do not address in detail.39 As Horgan and Timmons note, social intuitionists seem to just assume the following:

Assumption: “Unless conscious moral reasoning is part of the process leading to moral judgment, subjects’ reason-giving efforts are not a matter of citing considerations that really did play a causal role in the generation of the judgments in question; rather, reason-giving in these cases is a matter of confabulation.”40

Hanno Sauer formulates this assumption more rigorously as follows (with respect to moral reasons in particular, although the same could be said about moral principles):

Causality-Requirement: A (moral) judgment that p made by subject S counts as being based on reasons only if conscious consideration of the set of reasons {q, r, s, . . ., n} that justify p causes S to hold that p.41

But Horgan and Timmons argue that these claims are false. Phenomenology does not mirror causality in this area; even though we might experience our moral judgments as typically formed immediately and spontaneously, the underlying causal processes could be highly complex and involve both principles and reasons. Hence Horgan and Timmons answer no to the first question and yes to the second and third questions—thereby, they claim, preserving the most plausible features of both TR and SI.

There is an additional wrinkle to their view. One might think that what they are claiming is that moral principles are being tokened subconsciously in representational states like beliefs, and that unbeknownst to the agent, they are playing a role in an inferential causal process leading to the moral judgment in question. But that is decidedly not their view. As Horgan and Timmons clearly emphasize, “On the view we propose . . . moral principles do not need to be occurrently ‘represented’ (either consciously or unconsciously) by the system in order to be playing the relevant causal role.”42 Instead,

One may think of this sort of possession as a matter of know how—a skill that is or has become part of the individual’s repertoire for negotiating her social world. When a principle or norm is possessed morphologically, one can say that its manner of operation is procedural—in virtue of possessing the principle in this manner, an individual is disposed to form moral judgments that non-accidentally conform to the principle.43

So while moral principles do play a causal role, in contrast to social intuitionism, it is a causal role that typically does not require the relevant principles to be consciously or unconsciously represented.

So far we have focused on Stage One. What does MR have to say about Stage Three? Here it sides again with traditional rationalism. Horgan and Timmons acknowledge that post hoc confabulation surely does happen in some cases, but they consider it to be the exception rather than the rule. One of their main reasons for this is what they call the “nonjarringness of reason-giving experience,” where “in sincerely giving reasons for one’s all-in moral judgments, one typically experiences the giving of reasons as fitting smoothly with the experiences in which those judgments were formed, and as helping to make sense of those experiences.”44 When I appeal to John Smith’s cruelty in explaining why I judged his punishment to be morally wrong, this explanation feels to me to fit smoothly with my experience of forming the judgment in the first place. This experience is something that needs to be explained, and Horgan and Timmons claim that it is better explained by people typically being accurate in their post-judgment explanations than by there being massive confabulation, as SI insists.45

The epistemological implications of MR will be similar to those of traditional rationalism. So long as one’s view of justification does not require a prior conscious process of reasoning, then given the role—even if it is only a “procedural” role—of reasons and principles in forming moral judgments, those judgments can turn out to be justified. And similarly Stage Three explanations for one’s moral judgments can be justified as well. There is not enough evidence of widespread dumbfounding, according to Horgan and Timmons, for it to serve as a defeater for that justification, at least generally speaking.46 These epistemological implications serve as one of the advantages of MR—it avoids at least certain forms of skepticism about the justification of both moral judgments and our later explanations for them. MR thereby can support what Horgan and Timmons call the maxim of default competence-based explanation:

All else equal, a theoretical explanation of a pervasive, population-wide, psychological phenomenon will be more adequate to the extent that (1) it explains the phenomenon as the product of cognitive competence rather than as a performance error, and (2) it avoids ascribing some deep-seated, population-wide, error-tendency to the cognitive architecture that subserves competence itself (e.g., an architecturally grounded tendency to erroneously conflate post-hoc confabulation with articulation of the actual reasons behind one’s moral judgments).47
Furthermore, like TR, morphological rationalism has the advantage of fitting smoothly with the phenomenology of our moral experience at Stage Three. But unlike TR, it also has the advantage of capturing the experience of forming moral judgments spontaneously without prior conscious reasoning. In addition, it can accept the important role of affect and emotion in moral judgment formation. Indeed, given the recent research in affect psychology that was briefly mentioned in the previous section, subconscious affective states may even be one of the main sources of moral reasons for those judgments. Or at the very least, they can accompany moral principles and reasons in giving rise to judgments.

So as Horgan and Timmons see things, MR can deliver the best of TR and SI, while at the same time avoiding their main difficulties. Unfortunately there has not been a great deal of work evaluating the view at this point in time.48 So while other objections will surely be raised once their book appears in print, let me raise two preliminary concerns here for Horgan and Timmons.49
First Objection: Connecting Stage Three and Stage One

MR purports to explain why the considerations people cite for why they formed their moral judgments were typically the actual considerations that were causally operative. But—at least in cases where there was no conscious reasoning in Stage One—it is not immediately clear how that explanation would go, and as far as I am aware Horgan and Timmons have yet to provide a detailed account. What makes this challenging for MR is that, as we saw, in cases of unconscious reasoning, morphologically held principles are not being tokened in conscious or unconscious representational states. So on what basis will the agent be able to bring them to mind, in a reliable way, after having formed the judgment? Indeed, if anything there might be support here for the kind of post hoc confabulation account we saw with SI.

Note that this concern does not call into question the morphological functioning of moral principles in Stage One. It only calls into question how the agent, by the time she gets to Stage Three, will be able to have spontaneous and reliable epistemic access to whichever principle it was that indeed functioned causally. The “spontaneous” bit is important. On Horgan and Timmons’s view, the agent is not, at Stage Three, typically inferring what the principle was that informed her judgment. Rather, as reflected in the relevant phenomenology, the agent spontaneously believes that it was some particular principle or other, even though it functioned entirely subconsciously and even though it was not tokened in any subconscious representational states.

To see the concern from a different direction, we already noted that Horgan and Timmons link the morphological functioning of moral principles to know-how and to the work of a proceduralized skill.50 Again, those might be important connections to make when explicating the account of Stage One processing. But they don’t seem to help, one might think, with respect to Stage Three. For know-how and skills are not always easy to explain after the fact. To take a simple example, I know how to tie a tie, but I would have to give it a lot of thought if I were going to try to verbally explain the process step by step. Indeed, without a tie in front of me, I am not even sure I could do it! Or consider a master clay sculptor, or an elite race car driver, or a trained marksman. They might be the best in the world at what they do but at the same time be able to offer little insight to others about how they executed their tasks or what they were thinking at the time.

To sum up, Horgan and Timmons need to do more to fill in the details about Stage Three processing so as to avoid the implication that their account actually supports post hoc confabulation.51
Second Objection: Skepticism Revisited

One of the strengths of MR is supposed to be its acknowledgment of the rarity of conscious reasoning prior to moral judgment formation, while also avoiding any skeptical conclusions that this acknowledgment might be thought to support at Stages Two and Three. But skeptical dangers might still be lurking.

Let us return to the example of John Smith and his severe punishment of his child. I spontaneously judge his behavior to be morally wrong. I report after the fact that I made this judgment because he was being cruel to his son (and, implicitly, because cruel acts are wrong). This explanation appeals to motivating reasons and, presumably, good normative reasons as well. But leave aside for the moment whether this really was the basis for my moral judgment, and let’s ask the following question: during conscious reflection at Stage Three, how am I supposed to be able to rule in or out various competing hypotheses, each of which could explain why I made the judgment in question? One hypothesis involves reasons having to do with cruelty. Another one, though, might involve a spontaneous intuition against Smith’s behavior, where this intuition is familiar from the discussion of SI and is nonrational. Yet another hypothesis might involve wanting to make the same moral judgments that other people expect me to make. Still other possibilities are no doubt imaginable. Of course, one can always choose the most morally admirable hypothesis from the list, but its being admirable doesn’t make it the most plausible causal hypothesis in someone’s mind.

The conclusion for MR lurking in this neighborhood is a form of skepticism that I have formulated elsewhere as follows:

Skepticism about Our Reasons for Forming Moral Judgment: Given the subconscious causal role of moral principles and other mental phenomena capable of leading to the formation of moral judgments, in any given instance where a moral judgment is formed spontaneously and immediately without conscious deliberation, the agent in question has no reasonable basis upon which to discern what her actual reasons were for forming the judgment.52

To take a different example involving my own behavior instead of somebody else’s, suppose I agree to help a stranger carry a heavy box of books. On the picture developed by MR, I am not in an adequate epistemic position to decide whether my helping judgment arose from a morphologically functioning moral principle, from a subconscious desire to alleviate recent feelings of guilt or embarrassment, from a subconscious desire to make the other person indebted to me, or from a variety of other possibilities. I am left without a clear way to adjudicate between them.53
Again, none of this is meant to call into question the story MR has told about Stage One. There could be plenty of cases where our moral principles do generate a moral judgment in the way MR has suggested. The concern is not with Stage One but Stage Three. And it is not with the objective psychological facts about what is happening in a person’s head but rather with his epistemic justification in forming beliefs about those psychological facts. If I say that I thought it was a good idea to help carry the boxes because I saw someone in serious need of help and it is a good thing to help people in serious need, then that might indeed have been what my subconscious processing was like (although not in the form of token representational states with that content). But even so, I would have gotten these facts correct only by accident.54 I was not epistemically reliable about these matters. In other situations, I might invoke the same causal story, but instead guilt relief was really involved, and I messed up because I couldn’t discern this from the first person perspective. The practical lesson becomes that, until I can devise a way of discerning better, I should refrain from offering any explanations about the immediate causal origins of my moral judgments. And I am not unique here. Most of us will be in a similar position. This is the skeptical worry about Stage Three.

Note that this skeptical worry is compatible with continuing to maintain the spontaneously formed judgment itself. But here, too, concerns might start to arise. For if I should refrain from explaining why I formed the judgment, does that undermine the justification I have for holding it (unless at this point I proceed to start consciously searching for reasons for accepting the judgment)? Suddenly, skeptical worries might arise at Stage Two as well.

To be sure, none of this is any kind of skepticism about whether moral judgments can be true or false, about whether there are objective moral facts, or about whether there are normative reasons for action capable of objectively justifying our moral judgments. But it is a kind of skepticism about both the justification for those moral judgments and about the justification for our subsequent explanatory beliefs about their causal origins. Much more would need to be said to develop this concern in detail, and no doubt Horgan and Timmons will have plenty to say in reply as they continue to develop the details of morphological rationalism.
6. Conclusion

We have seen three of the leading positions on the psychology of moral judgment formation and explanation. There is also room for additional positions to be developed, such as a close competitor to morphological rationalism in which moral principles are tokened in occurrent mental states, which are typically operative subconsciously. Research in this area is still in its infancy, and there are exciting opportunities for philosophers and psychologists to work together to better understand how our minds typically work when it comes to forming moral judgments.55
Further Readings

For a helpful overview of Kohlberg’s moral psychology, see Daniel Lapsley, Moral Psychology (Boulder, CO: Westview Press, 1996). For more recent work by Haidt building on social intuitionism, see J. Graham, J. Haidt and B. Nosek, “Liberals and Conservatives Use Different Sets of Moral Foundations,” Journal of Personality and Social Psychology, 96, 1029–1046, 2009. Horgan and Timmons’s book, Illuminating Reasons: An Essay in Moral Phenomenology, will become the definitive statement of morphological rationalism once it appears in print.
Related Chapters

Chapter 4 The Neurological Basis of Moral Psychology; Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 9 The Evolution of Moral Cognition; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and Its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals and Data in Moral Theorizing.
Notes

1. For more on what ‘typically’ involves in this literature, see Sauer, 2011, 709, 713. See too Chapter 7 of this volume for further discussion of the role of reasoning in moral judgment.
2. For relevant discussion, see Horgan & Timmons, 2007, 283. This will need some more refinement in a longer discussion of moral principles, since “Cruelty is wrong” is a moral principle, but the concept of cruelty is a moral concept.
3. See Chapters 1 and 5 for further discussion of the rationalist project in empirical moral psychology.
4. See, e.g., Haidt, 2001, 814, 816, Greene & Haidt, 2002, 517, Horgan & Timmons, 2007, 280, Haidt & Bjorklund, 2008a, 183–185, and Sauer, 2011, 710. Others frequently linked to rationalism include Descartes (Haidt, 2001, 815 and Haidt & Bjorklund, 2008a, 183), Piaget (Haidt, 2001, 814, Horgan & Timmons, 2007, 280, Haidt & Bjorklund, 2008a, 183, and Sauer, 2011, 710), and Turiel (Haidt, 2001, 814, 816 and Haidt & Bjorklund, 2008a, 184).
5. Kohlberg, 1969, 375.
6. Ibid., 397.
7. For critical discussion, see Haidt, 2001 and Haidt & Bjorklund, 2008a.
8. Haidt & Bjorklund, 2008a, 198.
9. For more, see Haidt, 2001, 823–825, Greene & Haidt, 2002, and Haidt & Bjorklund, 2008a, 199–201. See also the helpful discussion of the role of affect in moral judgment formation in Sinnott-Armstrong, Young, and Cushman, 2010 and Chapter 4 of this volume.
10. Haidt, 2001, 814.
11. Haidt & Bjorklund, 2008a, 198. For the studies, see Haidt et al., 1993 and Haidt et al., 2000. For additional relevant work, see Nisbett & Wilson, 1977 and Uhlmann et al., 2009.
12. Haidt, 2001. See also Greene & Haidt, 2002; Haidt, 2003; Haidt & Joseph, 2004, and Haidt & Bjorklund, 2008a, 2008b.
13. Haidt, 2001, 818–820 and Haidt & Bjorklund, 2008a, 189.
14. Haidt & Bjorklund, 2008a, 189.
15. Haidt, 2001, 817 and Haidt & Bjorklund, 2008a, 188.
16. Haidt & Bjorklund, 2008a, 188. This is a modification of Haidt’s original definition in Haidt, 2001, 818. Similarly Joshua Greene and Haidt write, “These feelings are best thought of as affect-laden intuitions, as they appear suddenly and effortlessly in consciousness, with an affective valence (good or bad), but without any feeling of having gone through steps of searching, weighing evidence, or inferring a conclusion” (2002, 517).
17. But see Chapter 7 of this volume for complications.
18. See Prinz, 2007, 32. Earlier he endorsed the interpretation of Haidt’s dumbfounding research, according to which “subjects have no reasons for their moral judgments. They simply have a gut reaction that consensual incest and laboratory cannibalism are wrong, and a few post hoc rationalizations, which play no important role in driving those reactions” (31, emphasis his). For a complication, see page 32.
19. Sauer, 2011, 712, 716–717. See also Horgan & Timmons, 2007, 284.
20. Haidt & Bjorklund, 2008a, 189.
21. This is the formulation I used in Miller, 2016, 30.
22. Haidt & Bjorklund, 2008a, 189. See also Haidt, 2001, 814, 817, 820–823 and Haidt & Bjorklund, 2008a, 189, 2008b, 249.
23. For instance, Haidt has tried to categorize the intuitions people typically have into five sets of basic intuitions—harm/care, fairness/reciprocity, authority/respect, purity/sanctity, and in-group/out-group. He has also offered an evolutionary account for why those particular kinds of intuitions would have likely emerged over time (Haidt & Joseph, 2004 and Haidt & Bjorklund, 2008a, 203–204). In addition, the presentation of SI thus far neglects the strong emphasis Haidt has placed on the social dimensions of moral psychology. He writes that during moral debates with our peers, “reasoned persuasion works not by providing logically compelling arguments but by triggering new affectively valenced intuitions in the listener” (2001, 819). He also emphasizes the influence of group norms and the ways in which our moral intuitions are influenced simply by the fact that other people have made the moral judgments that they have (Ibid.). For more on the social dimensions to SI, see Haidt, 2001, 818–819 and Haidt & Bjorklund, 2008a, 182, 190–193.
24. For relevant discussion, see Jacobson, 2008, 224–227 and Sauer, 2011, 716–717.
25. For more on this and other evidence offered in favor of SI, see Haidt, 2001, 819–825, Greene & Haidt, 2002, and Haidt & Bjorklund, 2008a, 196–201.
26. For additional critical discussion of SI, see Pizarro & Bloom, 2003; Sneddon, 2007; Jacobson, 2008; Sauer, 2011, and Railton, 2014.
27. The discussion of this objection draws on Miller, 2016, 33–35.
28. Narvaez, 2008. For an earlier version of this concern, see Pizarro & Bloom, 2003, 195.
29. Narvaez, 2008, 235.
30. Haidt & Bjorklund, 2008b, 242–244, 249.
31. Ibid., 244.
32. Ibid., 242.
33. For further development of this objection, see Miller, 2016, 33–35.
34. Haidt & Joseph, 2004, 57. See also Haidt, 2001, 817.
35. Railton, 2014, 840–841.
36. For confabulation in particular, see Railton, 2014, 847–850. For further discussion of this research, see Chapters 6 and 7 of this volume.
37. See Horgan & Timmons, 2007, forthcoming.
38. See Horgan & Timmons, 2007.
39. The closest Haidt comes to discussing a view like MR, as far as I can tell, is in Haidt & Bjorklund, 2008a, 212–213. See Chapter 7 of this volume for further discussion of the role of subconscious reasoning in moral judgment.
40. Horgan & Timmons, 2007, 282.
41. Sauer, 2011, 717.
42. Horgan & Timmons, 2007, 280.
43. Ibid., 286, emphasis theirs.
44. Ibid., 291.
45. Ibid., 293–294.
46. Ibid., 293.
47. Ibid., 289.
48. Miller, 2016 is one of the only detailed discussions at the present time.
49. These concerns are drawn from Miller, 2016.
50. Horgan & Timmons, 2007, 286.
51. For one potentially promising route, see Sauer, 2011, 717–718, 2012.
52. See Miller, 2016, 41.
53. John Doris has discussed related issues (2002, 139). See also Brink, 2013, 142 and the relevant studies and discussion in Uhlmann et al., 2009, especially at 489.
54. For a similar idea, see Nisbett & Wilson, 1977, 233.
55. I am very grateful to the editors for inviting me to write this chapter. Work on this paper was supported by a grant from the Templeton Religion Trust. The opinions expressed here are those of the author and do not necessarily reflect the views of the Templeton Religion Trust.
References

Brink, David. (2013). “Situationism, Responsibility, and Fair Opportunity,” Social Philosophy and Policy, 30, 121–149.
Doris, John. (2002). Lack of Character: Personality and Moral Behavior. Cambridge: Cambridge University Press.
Greene, J. and Haidt, J. (2002). “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences, 6, 517–523.
Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 108, 814–834.
———. (2003). “The Emotional Dog Does Learn New Tricks: A Reply to Pizarro and Bloom (2003),” Psychological Review, 110, 197–198.
Haidt, J. and Bjorklund, F. (2008a). “Social Intuitionists Answer Six Questions About Moral Psychology,” in Walter Sinnott-Armstrong (ed.), Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity, Vol. 2. Cambridge, MA: MIT Press, 181–217.
———. (2008b). “Social Intuitionists Reason, in Conversation,” in Walter Sinnott-Armstrong (ed.), Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity, Vol. 2. Cambridge, MA: MIT Press, 241–254.
Haidt, J. and Joseph, C. (2004). “Intuitive Ethics: How Innately Prepared Intuitions Generate Culturally Variable Virtues,” Daedalus, 133, 55–66.
Haidt, J., Koller, S. and Dias, M. (1993). “Affect, Culture, and Morality, or Is It Wrong to Eat Your Dog?” Journal of Personality and Social Psychology, 65, 613–628.
Haidt, J. and Murphy, S. (2000). “Moral Dumbfounding: When Intuition Finds No Reason,” Unpublished manuscript, University of Virginia.
Horgan, Terry and Timmons, Mark. (2007). “Morphological Rationalism and the Psychology of Moral Judgment,” Ethical Theory and Moral Practice, 10, 279–295.
———. (forthcoming). Illuminating Reasons: An Essay in Moral Phenomenology.
Jacobson, Daniel. (2008). “Does Social Intuitionism Flatter Morality or Challenge It?” in Walter Sinnott-Armstrong (ed.), Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity, Vol. 2. Cambridge, MA: MIT Press, 219–232.
Kohlberg, L. (1969). “Stage and Sequence: The Cognitive-Developmental Approach to Socialization,” in David Goslin (ed.), Handbook of Socialization Theory and Research. Chicago: Rand McNally and Company, 347–480.
Miller, Christian. (2016). “Assessing Two Competing Approaches to the Psychology of Moral Judgments,” Philosophical Explorations, 19, 28–47.
Narvaez, D. (2008). “The Social Intuitionist Model: Some Counter-Intuitions,” in Walter Sinnott-Armstrong (ed.), Moral Psychology: The Cognitive Science of Morality: Intuition and Diversity, Vol. 2. Cambridge, MA: MIT Press, 233–240.
Nisbett, R. and Wilson, T. (1977). “Telling More Than We Can Know: Verbal Reports on Mental Processes,” Psychological Review, 84, 231–259.
Pizarro, D. and Bloom, P. (2003). “The Intelligence of the Moral Intuitions: Comment on Haidt (2001),” Psychological Review, 110, 193–196.
Prinz, Jesse. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
Railton, Peter. (2014). “The Affective Dog and Its Rational Tale: Intuition and Attunement,” Ethics, 124, 813–859.
Sauer, Hanno. (2011). “Social Intuitionism and the Psychology of Moral Reasoning,” Philosophy Compass, 6, 708–721.
———. (2012). “Educated Intuitions: Automaticity and Rationality in Moral Judgment,” Philosophical Explorations, 15, 255–275.
Sinnott-Armstrong, Walter, Young, Liane and Cushman, Fiery. (2010). “Moral Intuitions,” in John M. Doris and The Moral Psychology Research Group (eds.), The Moral Psychology Handbook. Oxford: Oxford University Press, 246–272.
Sneddon, Andrew. (2007). “A Social Model of Moral Dumbfounding: Implications for Studying Moral Reasoning and Moral Judgment,” Philosophical Psychology, 20, 731–748.
Uhlmann, E., Pizarro, D., Tannenbaum, D. and Ditto, P. (2009). “The Motivated Use of Moral Principles,” Judgment and Decision Making, 4, 476–491.
17
MORAL PERCEPTION
Robert Audi
Perception is crucial for human knowledge, and there are good reasons to take it to extend to the moral realm. But at least until recently most major writers in ethics have considered any moral knowledge we possess to be non-perceptual, even if dependent on perceptual knowledge. This chapter outlines a view of perception in general and, in that light, sketches an account of moral perception.1
1. Moral Perception in Everyday Life

Moral philosophers have not doubted that we can perceive—say, see, or hear—phenomena that are moral in nature, such as a bombing of noncombatants or a fatal stabbing. But seeing such things can be a mere perception of a moral phenomenon. That need not be moral perception. It may be simply perception of a deed that has moral properties—something possible for a dog. Seeing a deed that has a moral property—for example the property of being wrong—does not entail seeing its wrongness. We can hear a lie, as where, just after we see A receive change for a hundred-dollar bill, A tells a friend, B (who needs a small loan), that A has no cash. Can we, however, also morally perceive the wrong that such a lie implies?
The Perceptible and the Observable

Suppose genuine moral perception is possible. If so, we may take literally discourse that represents moral properties as perceptible. One might object that we do not strictly speaking see moral properties but only non-moral properties or non-moral events that evidence their presence: it is, one might think, because we perceive certain non-moral properties that we tend to ascribe moral properties—and to say that we see, for instance, wrongdoing. Philosophers should ask, then, what relations hold between the two sorts of properties.2

To begin with, we should set aside certain unwarranted but tempting assumptions. Above all, we should not expect moral perception to be exactly like physical perception, at least exactly like perceiving everyday visible objects seen in normal light (I take vision as paradigmatic for perception, as is common). Three points are essential here.
First, moral properties are not easily conceived as observable: no sensory representation is possible for them, as with shape and movement, though sensory representations, especially of actions, may be integrated with phenomenal elements, including certain moral emotions, that are distinctive of moral experience.

Second, even the perceptible properties on which the possession of certain moral properties is based may not be strictly speaking observable, at least in this elementary way. You can see one person do a wrong to another by, for example, seeing the first slashing the tires of the other’s car. The slashing is uncontroversially observable: an “observable fact.” But what we may be properly said to perceive here may be not just a matter of what we see; it may also reflect what we already know, such as that the slasher does not own the car.

Third, we may grant that even though you can visually observe the basis of the wrongdoing, your seeing the wrongdoing depends on your understanding, to at least some degree, the normative significance of the destruction of someone else’s property. Moral perception presupposes both non-moral perception and a certain background understanding of interpersonal relations that, even if quite unsophisticated, enables the moral character of what is perceived to affect the perceiver’s sensibility.
The Analogy between Perception and Action

We can learn much from the analogy between perception and action. Conceived in terms of what might be called success conditions, action and perception have different “directions of fit” to the world. Action succeeds (at least in an agent-relative sense appropriate to intentional action) when it changes the world to fit the relevant aim(s) of the agent; perception succeeds when it represents to the perceiver a change in, or state of, a perceived object and does so in a way that fits—i.e., in some sense correctly represents—the world. These are rough formulations; but they are a good starting point, and theories of action and perception should enable us to refine them in illuminating ways.3

A second aspect of the analogy between perception and action is particularly important for understanding moral perception. Just as we do not do anything at all without doing something basically, i.e., other than by doing something else, and, in that way, do the basic deed “at will,” we do not perceive anything at all other than by perceiving something basically, say by simply seeing its colors and shapes, as with visual perception of a tree. Now consider an action, say greeting you. I cannot do this without, for instance, raising my hand. I greet you by raising my hand. That is a basic act for me: I do not do it by doing anything else. Someone might be able to move the relevant muscles at will; I cannot: I can move them only by moving my hand. There is a difference between a movement I make as an action and a movement in my body necessary for the action. Similarly, I see a tree by seeing its colored foliage and its shape, but I do not see these by seeing anything else. Granted, I cannot see them unless light is conveyed to my visual system, but that set of events is not my basic perception. Moreover, neither the visual system’s reception of the light nor my seeing the colors and shapes is a kind of doing conceived as a volitional phenomenon, much as neither my raising my arm nor my muscle movements underlying that action are perceptual phenomena. The structural parallels between action and perception do not undermine the ontological differences between them.

We can now see how basic perceptions reveal the perceptible, something we can be perceptually aware of by (say) seeing. Some perceptible entities, however, are not perceived basically but only by perceiving something else—in the sense of something distinct from them even if intimately connected with them in the way that raising a hand can be intimately connected with greeting. We can see this point more clearly by considering whether the kind of perceptibility in question is a matter of being, for us, observable, where the object is constituted roughly by what is, for us, perceivable basically. The relativity view here is that a given species or subspecies tends to have a characteristic basic level of perception; it is not that the concept of perception requires positing, for all perceiving species, an absolutely basic level. For any perceiver and any time, there is a perceptually basic level for that perceiver at that time; but it does not follow, and is apparently not true, that there is some “ultimate” perceptual level basic for every perceiver at every time.

Now consider injustice as a major moral phenomenon. Is it ever observable, in the most basic sense, a sense that goes with perceptual properties, roughly the kind basic for us? Is seeing injustice, for example, observational in the sense corresponding to the perceptual properties of color and shape? Or is such moral perception equivalent to seeing—in a distinctive way that is at least not narrowly observational—a set of base properties for injustice, such as a patently unequal distribution of needed food to starving children, where these properties are seen in a way that makes it obvious, upon seeing them, that an injustice is done? The second alternative points in the right direction. The remainder of this section will clarify the distinctive way in which moral perception may be a case of seeing.

In asking about the relation between a moral perception of injustice and seeing the relevant base properties, i.e., the properties on which injustice is consequential, I assume something widely held: that actions and other bearers of moral properties do not have those properties brutely but on the basis of (consequentially on) having “descriptive” properties. Consequential properties may also be called grounded or resultant, terms that also indicate that a thing possesses the “higher-level” properties because it possesses the base properties. Acts are not simply wrong, in the way there can be simply hand-raisings (though in certain underlying ways even such basic acts are not simple). It is essential to the wrongness of a wrong act that it be wrong on the basis of being a lie, or because it is a promise-breaking, or as a stabbing, and so forth. Similarly, a person is not simply good but good on the basis of, or because of, or as having, good governing motives together with beliefs appropriate to guide one toward constructive ends.4

If, however, we see moral properties on the basis of our seeing non-moral properties, philosophers will ask whether one ever really sees a moral phenomenon, such as an injustice. Consider seeing a babysitter eat the last piece of chocolate cake and later accuse a child of eating it. Do we not, given what we know, see and hear wrongdoing in the accusation? We do, but the moral perception this illustrates is not the elementary kind of perception illustrated by seeing the consumption of the cake. Does moral perception, however, differ in kind from every sort of non-moral perception?
One might think that the phenomenal elements in perception properly so called must be sensory in the representational way that characterizes paradigms of seeing and some of the exercises of the other senses among the five ordinary senses. But why should we expect perceiving injustice, which is not a basic perception for us and has a normative, non-sensory, moral phenomenon as object, to be just like perceptions of color, shape, flavor, or sound, which are physical or in any case sensory, non-normative and, in typical cases, basic for us? Why should there not be, for instance, a phenomenal sense of injustice that is not “pictorial” in the way exemplified by visual impressions of trees or paintings? Here we might consider non-visual perception. Where a moral perception is auditory, as with hearing a lie, or tactual, as with feeling one’s face slapped, we are not tempted to expect it to be pictorial, at least in the way visual experience of many kinds of things may be taken to be.

One might still think that genuine perceptual experience must be cartographic, having content that provides a “mapping” from phenomenal properties, such as a tactual impression of a shape one can feel in darkness, to physical properties causing the impression, such as cubicity. From sensations of touch one can “map” the shape and size of a cube felt in the dark. But wrongdoing and, on the positive side, justice do not admit of mapping, even when they can be seen in a mappable distribution of boxes, as where a supply of food from the UN is placed symmetrically on the ground for equal distribution to needy families waiting for help. What we see must be perceptible; but even if perceptible properties, say being wrong or unjust,5 must be seen by seeing perceptual properties (often considered observable) such as bodily movements, not all perceptible properties are perceptual. Granted, the senses yield the base by which we see certain perceptible properties. Still, the latter need not be on the same level as the perceptual properties pictured or mapped by the senses. To make perceptibility clearer, we must explore the sense in which moral perception is representational.
2. The Representational Character of Moral Perception
Given what we have seen so far, we should distinguish two kinds of demands one might make on a theory of moral perception. One demand requires the theory to provide a phenomenal—and especially, cartographic—representation of, say, injustice. The second, more plausible demand centers on a phenomenal representation constituted by a (richer) perceptual response to injustice. The sense of injustice, then, a kind of impression, as based on and as phenomenally integrated with a suitable ordinary perception of the properties on which injustice is consequential—grounded, in a main use of that term—might serve as the experiential element in moral perception. Call this an integration theory of moral perception.
Sensing Physically versus Sensing Morally
An important constituent in this phenomenal integration is the perceiver's felt sense of connection between two kinds of thing: the impression of, say, injustice or (on the positive side) beneficence and, on the other hand, the properties that ground the moral phenomena. This felt sense of connection is at least akin to what some have called the sense of fittingness. It normally produces, moreover (in morally sensitive adults), a disposition to attribute the moral property to the action (or other phenomenon in question) on the basis of the property or set of properties (of that action) on which the moral property is grounded. Suppose I see injustice in a distribution, say, a larger box of food given to a family smaller than the other families standing in line for the distribution of one per family. My sense of injustice normally yields a disposition to believe that distribution to be wrong because it is, say, giving more to one family in the same needy position as the others. My awareness of injustice, however, if perceptual, is non-inferential.6 It is not based on any premise but is a direct response to what I see. The directness is, of course, epistemic and not causal—philosophical analysis places no restrictions on what causal processes may occur in the brain. Any kind of perception is, on my view, experiential in having some appearance in consciousness. Moral perception in some way embodies a phenomenal sense—which may, but need not, be in some way emotional—of the moral character of the act. This sense may, for instance, be felt disapproval, or even a kind of revulsion, as where we see a man deliberately spill hot tea on his wife's hand. The sense need not be highly specific; it may, for instance, be a felt unfittingness between the deed and the context, as where we see male and female children treated unequally in a distribution of medicines for patients with the same infection. Similarly, but on the positive, approbative side, a felt fittingness may play a positive phenomenal role in moral perception. Think of the sense of moral rebalancing if one sees the unequal distribution of medicine rectified by an observant nurse. The equality of treatment befits the equality of need.

In each instance of moral perception, the moral sense of wrongness, of injustice, or, in the positive case, of welcome rebalancing is essentially connected to perception of non-moral properties on which the moral properties are grounded. In cases like these, we might be said to sense morally, rather as someone who hears a melody in a howling wind blowing through open drainpipes might be said to sense musically. This is not because moral properties (or comparable aesthetic ones such as lyricality) are sensory—they are not—nor because there is a special moral faculty dedicated to the ethical realm. The reason is instead a kind of perceptual experience that manifests moral sensibility and appropriately incorporates a response to the properties that ground a moral property that we sense.7 Perceptibility through our moral sensibility is wider than, though it depends on, perceptuality at the level of observable properties accessible to the five senses. Consider the vivid description we find in the parable of the Good Samaritan:

A priest happened to be going down the same road, and when he saw the [injured] man, he passed by on the other side. So too, a Levite . . . passed by on the other side. A Samaritan . . . when he saw him, he took pity on him. He . . . bandaged his wounds, pouring on oil and wine. Then he put the man on his own donkey, brought him to an inn and took care of him. The next day he took out two denarii and gave them to the innkeeper. "Look after him," he said.
(Luke 10:34–37)

The wounded man is a pitiful sight. We are to see the priest and Levite as either lacking moral perception or, if not, responding instead to contrary motivation, whereas the Samaritan has a strong sense of moral obligation: what he ought to do. Granted, pity alone could yield the action, but the continuation of the story suggests perception of the kind manifesting a sensitivity to the obligation of beneficence. Phenomenologically, seeing the wounded man as wronged or seeing what one ought to do, or both, may have experiential elements blended with pity. Just as the sense of harmony in music or of gracefulness in dance depends on both one's aesthetic sensibility and what is directly perceived, moral perceptions depend on both one's moral sensibility and what one perceives. Moral perception achieves an integration of elements that come from the constitution of one's sensibility with elements perceived on the occasion of its stimulation.8
The Multi-Level Character of Perception
One way to view the theory of perception I have outlined is to consider it layered. We accommodate moral perception by distinguishing between perceptual representations of an ordinary sensory kind that are low level and perceptual representations of a richer, higher-level kind that are based partly on ordinary sensory representations. Can this layered, multi-level theory of perception, however, explain how moral perception can have a causal character? To see how, in a familiar kind of non-moral case, consider recognizing a person on a plane. The property of being, say, Karen does not cause my recognizing her; the causal work is done mainly by the colors and shapes that identify her to me. The theory of moral perception presented here is neutral regarding the possibility that moral properties themselves are causal. It does, however, construe seeing certain subsets of base properties for them, say for injustice, as—at least given appropriate understanding of their connection to moral properties—a kind of perception of a moral property; and this kind includes, as elements, such ordinary perceptions as seeing a violent seizure of a woman's purse and hearing loud catcalls aimed at preventing a speech. Depending on our psychological constitution, we may indeed be unable to witness these things without a phenomenal sense of wrongdoing integrated with our perceptual representation of the wrongmaking facts.9 For many people, certain perceptible wrongs perpetrated in their presence are morally salient and unignorable.

It is one thing to hold that there are genuine moral perceptions and another to take them to ground knowledge or justification regarding the moral phenomenon perceived. I defend both views but do not take the epistemic power of moral perception to depend, in a way it might seem to, on the perceiver's possessing a priori knowledge.
Moral Perception as a Basis for Moral Knowledge
We have seen the difference between a moral perception of wrongdoing and a perception merely of an act that is wrong. We can also see that moral perception does not entail the formation of moral belief or moral judgment. Still, although moral perception is not belief-entailing, if we understand moral phenomena and see certain base properties that suffice for injustice, we sometimes perceptually know and are perceptually justified in believing that, for instance, one person is doing an injustice to another. We are thus justified in seeing the deed as an injustice. When we have such perceptual knowledge or such perceptual warrant, we are often properly describable as seeing that the first is doing an injustice to the second and, indeed, as knowing this. This point does not imply that seeing an injustice is intrinsically conceptual, even for someone who has the relevant concepts. But seeing that an injustice is done is conceptual. By contrast, merely seeing a deed that constitutes an injustice is possible for a dog or a prelingual child lacking moral concepts. Once the child acquires moral concepts, of course, the same physical perception might immediately yield a moral conceptualization of the act or indeed moral knowledge thereof. Moreover, even before developing moral concepts, the child may be disturbed at seeing an injustice in the kind of act in question, say giving medicine to a fevered shivering male but not to his female sibling suffering from the same condition. It is certainly possible that children have perceptions of disparity that, together with the sense of its unfittingness, reflect a discriminative sensitivity to differential treatment of persons—especially when it is, in Aristotelian terms, dissimilar treatment of similars. These perceptions put such children in a good position to develop the concept of injustice.
If this picture is correct, moral perception may precede moral concept formation and indeed may lie on a normal developmental route to it. Consider a different example, showing developmental elements in adults. We might see a man we view as domineering shake the hand of another, smaller man and notice a hard squeeze, with the result of redness in the other’s hand. It might not seem to us until later that we have witnessed an intimidation, though we could have been more alert and seen at the time that the former was wrongfully intimidating the latter. Such moral perceptual seemings may or may not be partly emotional, as where indignation is an element in them. One way to explain such phenomena is to say that initially, one does not see the squeezing of the hand as domineering. If we take seeing as to be essential for moral perception, it is essential to distinguish at least three cases. First, one may see the act (or other thing) as having a property, where this is ascriptive and not conceptual: roughly taking the thing to have the property in a way that reflects the information that it has that property but does not require conceptualizing that property as such (if at all). Perhaps seeing an approaching dog as dangerous can be like this for a toddler; it yields perceptually guided avoidance behavior but does not depend on conceptualizing danger as implying possible harm. Second, there is conceptual seeing as; this would be illustrated by viewing the hand-squeezing under a description such as “intimidating” (though no verbalization is required). Third, seeing as may be doxastic (belief-entailing), as where I say, to someone who took the hand squeezing to be intimidation, that I saw it as (believed it to be) intended to express enthusiasm. Doxastic seeing as is of course not truth-entailing, and even seeing an actual, inexcusable wrong is compatible with mistakenly seeing it as, say, justified self-defense. If moral perception entails seeing as at all, then in the simplest cases it requires only ascriptive seeing as and neither conceptual nor doxastic seeing as. Perhaps one way to describe sensing morally is to call it a special case of ascriptive seeing as. To recapitulate some of what has been said, perception is a kind of experiential information-bearing relation between the object perceived (which may be an action or other event) and the perceiver. I have not offered a full analysis of this perceptual relation but have indicated how, even if moral properties are not themselves causal, they can be perceptible. We perceive them by perceiving properties that ground them, which, in turn, may or may not be perceived in the basic way in which we perceive some properties: directly—other than by perceiving still others. But the dependence of moral perception on non-moral perception does not imply an inferential dependence of all moral belief or moral judgment on non-moral belief or non-moral judgment (counterpart points also apply in the aesthetic domain). Indeed, although perceiving moral properties, as where we see an injustice, commonly evokes belief, it need not. When it does, it may do so in a way that grounds that belief in perception of the properties of (say) the unjust act in virtue of which it is unjust. This kind of grounding explains how a moral belief arising in perception can constitute perceptual knowledge and can do so on grounds that are publicly accessible and, though not a guarantee of ethical agreement, a basis for it.
3. Phenomenological Elements in Moral Perception
The phenomenology of perception poses challenges for even the simplest cases of moral perception. One concern is representationality. I have stressed that the sense in which a moral perception represents, say, wrongdoing, is not cartographic. But "represent" can still mislead. Consider this worry:

What we are trying to achieve here is a conception of a state that is genuinely perceptual, but has a moral content. The phenomenal properties of outrage [say, outrage upon viewing a brutal stabbing], even when added to a perception of the base properties, don't seem to generate a content of that sort.10

A crucial issue here is what counts as "content." In one sense, the perception represents the wrongdoing by virtue of representing the properties on which the wrong is grounded: their presence (on my view) a priori entails, by a kind of constitutive relation, the wrongdoing. But suppose content must be propositional. Then, on the natural assumption that one is acquainted with the content of one's own perception, some may take this propositional view of content to imply that the perceiver must believe or at least conceptualize that content. But to demand that moral content take this form is unreasonable: one can have a moral perception yet fail to believe or otherwise conceptualize a proposition that is the (or a) content appropriate to what is perceived, e.g., that an act like the discriminatory delivery of food to children is unjust.
The Presentational Aspect of Perception
The idea I have proposed to account for the representative element in moral perception is not the view that, in moral perception, a proposition is believed or even conceptualized by the perceiver. Rather, (morally) perceiving an injustice yields an experiential sense of it that is integrated with—not merely added to—perception of the base properties for this injustice. The integration may or may not involve emotion, but it must go beyond the phenomenology of merely perceiving the moral phenomenon or of that merely conjoined with a moral belief concerning that phenomenon. A moral perception has its own phenomenology. It is not "neutral" for the perceiver. As I have stressed, a moral perception is not merely a perception of a moral phenomenon such as injustice.

As to the question of how my account reflects the presentational element in perception, I have answered this concern in part by noting that representation need be neither cartographic nor doxastic nor even conceptual. We need not deny that having moral concepts might be needed for the discriminative phenomenal responses crucial in moral perception or indeed for the moral sensibility required for having moral perceptions. But even if, as I leave open, a measure of moral conceptuality is needed to be a moral perceiver, it does not follow that moral conceptualization is needed for every instance of moral perception. A necessary condition for achieving an ability need not be manifested on every occasion of its exercise.
Perception of Emotion as an Analogous Case
Perceptions of emotions in others are a good analogy to moral perception. Compare moral perception with seeing an angry outburst that warrants saying "He's furious!" Is the anger not really perceived because it is seen through perceiving constitutive manifestations of it, such as redness of countenance, screaming, and puffing? Granted, these can be mimicked by a good actor; but a well-made manikin may similarly mimic a living clothes model in a static pose. We should not conclude that living clothes models are never seen, or never seen directly. Why, then, may some injustices not be as perceptible as anger? It is true that whereas anger is seen by its manifestations, moral wrongs are seen by seeing their grounds. But why should moral perception be conceived as limited to responses to effects rather than to causes or grounds? More broadly, why should perception not be possible as a phenomenologically realized, often rich response to a variety of other reliable indicators or determinants of the perceived phenomenon?

Let me explain. Suppose we think of perception as—in part—a kind of reception and processing of information that reaches one by a causal path from an information source to the mind, where the processing, as distinct from its resulting perceptual product in the mind, need not imply events in consciousness.11 This conception certainly comports well with the role perception plays in providing everyday empirical knowledge of the natural world. On this conception, it should not matter whether the information impinging on the senses is determined by what is perceived, such as a flash of light, or, instead, by determiners or evidences of that. We can know a thing either by its effects that mark it or by its causes that guarantee it. Perceptual knowledge, like much other non-inferential knowledge, is latitudinarian regarding the variety of routes by which the truth of its object is guaranteed.
4. Perception and Inference
I have taken the perception of emotion to illustrate how perception is possible when its object is perceived not by directly seeing it but by perceiving properties reliably related to it. Such cases also bear on the objection that moral perception is at least tacitly inferential. A common basis for this objection is that moral perception depends on "background beliefs." Imagine a context in which someone receives news of a setback due to someone else's surprising incompetence. Then recall the example of seeing an angry outburst, a possible response to such news. Some moral phenomena, such as injustices, can be as perceptible as such anger. More broadly, why should moral perception not be possible as a non-inferential response to a variety of reliable indicators or determinants of the perceived moral phenomenon? The "function" of perception, one might plausibly suppose, is to enable us to navigate the world safely and skillfully.12 Fulfilling that function leaves open many ways that information—including the emotional and the moral—needed or helpful in such navigation can reach the mind and guide the agent.

I have two further points here. First, granting that background beliefs may be essential for possessing at least certain of the moral concepts that may be needed to have certain moral perceptions, it does not follow that either a moral perception or a belief it elicits need be causally grounded in or—especially, justificationally based on—such beliefs. I doubt both claims. Second, I grant that our beliefs can affect what experiences we have, but this is consistent with my first point. It is also essential to see here that our beliefs, especially perceptual beliefs, need not arise from inference just because we have premises for them among our beliefs. When we do infer a proposition or engage in reasoning that leads to our inferring something from one or more premises, the inference takes us mentally along a path from what is represented by one or more psychological elements to what is represented by another such element.13
It is true that we can traverse such a path without noticing it, but the mind also has its shortcuts. What can be reached by climbing the ladder of inference may also be accessible by a single jump from firm ground. The territory may be familiar; our destination may be in plain view; and, through the power of the imagination or some other informationally sensitive faculty, we can sometimes go directly to places we would ordinarily have to reach by many steps. Perception is often like imagination in this, and, without bypassing consciousness entirely, it can take us from information acquired directly by vision to a belief that might, under studied conditions—or less favorable conditions—also have been reached by inference. One source, then, of a tendency to posit inferences underlying the formation of perceptual belief is assimilating information processing that does not require inference to propositional processing that does. Another source of the tendency to posit inferences in perception is the resistance to foundationalism of one kind or another. On any plausible conception of a foundation, an inferential belief is not foundational, whereas perceptions and perceptual beliefs, not being based on some other belief, may be. On a plausible moderate foundationalism in the theory of perception, in every perception there are some elements basic (so in a sense “foundational”) on the occasion; but the view does not imply that there are some elements basic in every perception.
5. Moral Perception, Realism, and Rationalism
Given that perception is factive, it would be at best implausible to hold that one can see, for instance, A's wronging B, if there is no wrongdoing. Moral realism, and with it the possibility of moral knowledge, is implicit in a theory of moral perception. Realism, however, need not be naturalistic, and I do not presuppose naturalism. My view is that moral properties are not natural (roughly, "descriptive") properties; but if they should be, my theory of moral perception is easier, not harder, to defend. For if moral properties are natural, I doubt that it need even be argued that they have explanatory power or that moral perception is in part causally constituted. It is difficult to think of any natural properties of spatiotemporal entities, and especially of actions, that even appear to lack causal power. To reiterate part of my view of moral perception, I have argued that the "process" of morally perceiving something is causal in the way perception must be, but the causal work (insofar as it is done by properties) is apparently done by the lower-level properties on which moral properties are grounded, not by moral properties themselves.

One might object to countenancing even the reality of non-causal properties—or at least non-causal properties that do not characterize abstract entities—on the ground that there are none among natural properties. It may well seem that it is only normative properties, for instance moral, aesthetic, and epistemic properties such as being justified, that are supposed to be real, non-natural, and non-causal. I doubt that this objection is decisive, even if true. Is it true? Consider shape, which is a natural property. A thing has shape not brutely but on the basis of such causal properties as being spherical, which affects, for instance, its movement tendencies; yet shape itself does not seem causal. If it is, that is on the basis of its grounding properties; but if a property can be causal only on the basis of the causal power of its grounding properties, this presumably holds for moral properties as well.
It is also true that my overall ethical theory incorporates a moderate rationalism, and I have appealed to the a priori and necessary connection between the grounds of moral propositions and those propositions themselves to explain the reliability of the process connecting, say, wrongdoing with the perception of it. But I would stress that such a high level of reliability is not required for perception. Note that anger does not entail the occurrence of the behavioral manifestations by which we know someone else is angry; this does not prevent there being a reliable enough connection to make possible perceptual knowledge of anger. I deny, then, that "the epistemic credentials of moral phenomenal responses [their ability to evidence, e.g., wrongdoing] are derivative of subject's grasping ostensibly synthetic a priori entailments between moral properties and their non-moral grounds."14 It is the determination relation underlying this entailment that moral perception must appropriately respond to; the modality of the relation is not crucial for the response. Suppose for the sake of argument that there is only an empirical and contingent connection between moral grounding properties and the moral properties they ground. Why should this undermine my view that moral perception is non-inferential?15 Should we consider knowledge of anger inferential in the kinds of cases I have noted, in which the occasion on which perception occurs makes anger expectable and the person observed explodes with words and gestures appropriate to the occasion, say, a tipsy guest's carelessly breaking a valuable platter? No, but the relation is empirical and contingent. Similarly, we recognize platters, vases, and even trees by properties such as color and shape that do not a priori entail their presence. We need not posit inference here and may simply grant that some perceptions occur on the basis of other, relatively elementary constitutive perceptions.
6. Conclusion
Moral perception is an element in much human experience. It is possible for any normal person but, like aesthetic experience, occurs less in some people than in others, even when they have highly similar perceptions of morally significant phenomena. It is not inferential but facilitates inference; it is non-doxastic but creates, in those with sufficient understanding, dispositions to believe moral propositions that it justifies; and it is not necessarily biased by the perceiver's beliefs even if it is also not immune to influence by these and other elements in their psychology. It may or may not yield emotion or be caused by it; it may or may not yield intuition or judgment or be caused by them; and it may or may not motivate action or be caused by that. It often yields moral knowledge and thereby grounds an element of objectivity in ethics. It is not the only route to moral knowledge, but it is a route that different people can traverse in the search for mutual understanding and the hope of agreement on the moral questions central for coexistence.16
Notes
1. In doing this I draw on the theory of moral perception presented in my Moral Perception (Princeton: Princeton University Press, 2013) and later refined in "Moral Perception Defended," Argumenta 1, 1 (2015). Both works consider subtleties and problems I cannot address here, as well as alternative views and citations not brought into this chapter.
2. As suggested in the text, I assume that there are moral properties. If my position on moral perception is plausible, that in itself provides reason to favor cognitivism in ethics. For one thing, perceptual beliefs are paradigms of cognition.
3. A recent example of theorizing that employs parallels between perception and action is Ernest Sosa, Judgment and Agency (Oxford: Oxford University Press, 2015).
4. That moral properties are consequential is a view articulated in G. E. Moore's Principia Ethica (Cambridge: Cambridge University Press, 1903) and W. D. Ross's The Right and the Good (Oxford: Oxford University Press, 1930), esp. ch. 2. I develop it further in ch. 2 of The Good in the Right: A Theory of Intuition and Intrinsic Value (Princeton: Princeton University Press, 2004). I here presuppose that certain properties, such as, on the negative side, killing and, on the positive side, promising are a priori grounds of moral properties, but my theory of the nature of moral properties does not require a particular list of such grounds.
5. Perceptibility here is relative to circumstances: the perceptibility (for us) of wrongness does not entail that every kind of wrongness is perceptible (say, plagiarism); but the same holds for heat, which is perceptible (for us) only within a certain range.
6. More is said later to explain why many moral attributions can be non-inferential.
7. This is not to say that "Moral perception is a form of pattern recognition," as Max Harris Siegel does in describing my view in his informative review of Moral Perception, Ratio (New Series) LVII (2014), 238–243, p. 239. Some moral perceptions may be pattern recognitions, but not all are—even if each has some pattern—since the grounding relations essential for moral perceptions need not yield a familiar pattern. But pattern recognition, e.g. with faces, is a case wherein perception may require information processing yet does not entail inference.
8. Here one might recall the element of felt demand cited by Maurice Mandelbaum in The Phenomenology of Moral Experience (Glencoe, IL: The Free Press, 1955). See, e.g., pp. 48–49, which speak of situations of acute human need as "extorting" action from us.
9. For related work developing a partial phenomenology of moral perception see Terry Horgan and Mark Timmons, "What Does Moral Phenomenology Tell Us about Moral Objectivity?" Social Philosophy and Policy (2008), 267–300. They also explore phenomenological aspects of fittingness.
10. See Jonathan Dancy, "Moral Perception and Moral Knowledge," Proceedings of the Aristotelian Society Supplementary Volume LXXXIV (2010), 99–118 (a detailed response to my preceding paper of the same title, 79–98), p. 102.
11. For discussion of the sense in which perception is information processing, Fred Dretske's Knowledge and the Flow of Information (Cambridge, MA: MIT Press, 1981) is a good source. Processing information is more than its mere reception.
12. For a view of perception that has some similarities to mine but is more "practically" oriented and provides a conception of the navigation metaphor, see John Bengson, "Practical Perception," Philosophical Issues, 26, Knowledge and Mind, 2016, doi: 10.1111/phis.12081. He conceives perception as "fundamentally practical" in the sense that it renders perceivers "poised for action."
13. This metaphorical statement does not entail that inference (in the process sense) is propositional and roughly equivalent to "reasoning": a kind of mental tokening of an argument. A detailed statement of my broadly propositional view of inference is provided in chs. 5 and 7–8 of Practical Reasoning and Ethical Decision (London and NY: Routledge, 2006). Some philosophers and psychologists use "inference" more broadly. See, e.g., Mitchell Green, "Perceiving Emotions," Proceedings of the Aristotelian Society Supplementary Volume LXXXIV (2010), 45–62:

The inferences I speak of here will not in general consist of the derivation of one proposition from a set of others. Rather . . . they will more commonly take the form of a positioning of an object in egocentric space, an attribution of absolute and relative trajectories, and so forth.
(p. 49)

On this view, inferences need not be drawn, or figure in consciousness as reasoning does, or be valid or invalid, or voluntary, if indeed they constitute doings at all. I am not arguing that perception cannot involve inference if the term is used in a technical sense with the suggested breadth.
14. Robert Cowan, review of Moral Perception, Mind 123, 492 (2014), p. 1169.
15. This is suggested by Cowan (p. 1169).
16. For valuable comments and stimulating discussion of the issues since the publication of Moral Perception, I want to thank Carla Bagnoli, Daniel Crowe, Terence Cuneo, Scott Hagaman, David Killoren, Justin McBrayer, Sabine Roeser, Mark Timmons, Dennis Whitcomb, and Pekka Väyrynen.
Further Readings
See Robert Audi's Moral Perception (Princeton: Princeton University Press, 2013) for an extended defense of the view featured in this chapter. Other views that combine moral realism with an epistemology grounded in moral perception include Justin McBrayer's "Moral Perception and the Causal Objection," Ratio 23, 291–307, 2010, and "A Limited Defense of Moral Perception," Philosophical Studies 149, 305–320, 2010, and Terence Cuneo's "Reidian Moral Perception," Canadian Journal of Philosophy 33, 229–258, 2003. Sarah McGrath, in "Moral Knowledge by Perception," Philosophical Perspectives 18, 209–228, 2004, argues for the claim that "if we have moral knowledge, we have some of it in the same way we have knowledge of our immediate environment: by perception." So-called sentimentalist accounts of moral perception are defended by Mark Johnston, "The Authority of Affect," Philosophy and Phenomenological Research 63, 181–214, 2001, and Sabine Döring, "Seeing What to Do: Affective Perception and Rational Motivation," Dialectica 61, 363–394, 2007. Terry Horgan and Mark Timmons, "Sentimentalist Moral-Perceptual Experience and Realist Pretensions: A Phenomenological Inquiry," in R. Debes and K. Stueber (eds.), Ethical Sentimentalism (Cambridge: Cambridge University Press, 2017), 86–105, examine sentimentalist conceptions of moral perception that attempt to ground a presumptive case for moral realism based on appeal to the phenomenology of moral perception.
Related Chapters
Chapter 2 The Normative Sense: What is Universal? What Varies?; Chapter 3 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 18 Moral Intuition; Chapter 19 Foundationalism and Coherentism in Moral Epistemology; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How.
18
MORAL INTUITION
Matthew S. Bedke
When developing moral theories, the standard practice is to make use of intuitions. Is this best practice? Here we scrutinize exactly how theorists rely on intuitions when attempting to improve their moral outlooks, we consider the epistemic credentials of this practice, and we explore some influential theories of intuitions that have the potential either to underwrite the standard practice or to bankrupt it.
1. Relying on Intuitions
To get us started, consider the following cases. In Doctor 1 a doctor has only one dose of a life-saving drug. There are six patients who need it. One patient needs the whole dose to survive. The other five each need 1/5 of the dose to survive. Question: Is it morally permissible to save five patients by giving them each 1/5 of the dose, leaving the one patient who needs the whole dose to die? Most people say that, at least intuitively, doing so is permissible.

In Doctor 2 five patients will die in the near future, for they need various organ and tissue transplants that are simply unavailable . . . unless a doctor kills a lone patient who has just come in for a check-up and who happens to be an exact match for all five transplants (Harman, 1977, 3–4). If the doctor lets the lone patient live, the five others will die. If the doctor saves the five he does so only by killing the lone patient. Question: Is it morally permissible to save the five by killing the one? Most people say that, at least intuitively, doing so is impermissible.

These are similar cases. One can either save five or one. That seems like a very important, morally relevant fact. Yet our intuitive responses about the permissibility of these actions differ. And it is fair to say that part of the standard practice in moral philosophy is to take the intuitive difference seriously. Though it is hard to say exactly what 'taking intuitions seriously' comes to, at least it involves a defeasible inclination to believe that which we find intuitive. That is, most believe that it is permissible to let the patient die in Doctor 1 and that it is impermissible to kill the lone patient in Doctor 2.

Importantly, the standard practice is not to blindly follow intuitions wherever they lead. In the doctor cases theorists also want to make sure there is a good explanation for why it is not always permissible to save the greater number. So they take intuitive verdicts like these and feed them into a more general form of inquiry where we consider things that cry out for explanation—e.g., why is it sometimes but not always OK to save the greater number?—and we try to put them in reflective equilibrium with plausible explanatory principles. Philippa Foot (1967), for instance, initially wondered whether the difference is well explained by the doctrine of double effect (DDE), which says that intending harm is worse than merely foreseeing it as a bad side effect. It looks like the action in Doctor 2 involves intending the death of the lone patient, whereas the action in Doctor 1 does not involve intending the death of the untreated patient but merely foreseeing that death as a side effect of saving five others. If so, the DDE says that the action in Doctor 2 is thereby worse than the action in Doctor 1. And that can help explain the different intuitive verdicts for the cases. By contrast, the theory that it is always permissible to save the greater number simply does not capture the intuitive verdict for Doctor 2.

Foot didn't stop there. For part of the standard method is to creatively craft additional cases that generate intuitions that can test our theories. So Foot crafted what we can call Doctor 3: A doctor can save five patients by manufacturing a drug for them, but in doing so she will release a chemical byproduct into the room of another patient, killing her. Many people report that, at least intuitively, it is impermissible to manufacture the drug. But arguably the DDE would classify this as an unintended but foreseen harm, which leaves its impermissibility a puzzle. So Foot came up with the following alternative to the DDE to try to explain all the intuitions: Our moral duties to avoid harming others are more weighty than our duties to aid them. I will not rehearse how this might help to explain the pattern of intuitive verdicts on the three doctor cases, nor will I trace the proliferation of cases for testing this bit of theory, including Judy Thomson's famous trolley cases (Thomson, 1985). I just want us to have a sense of the standard practice.

To get a fair picture of it I should hasten to emphasize that the practice not only considers intuitions about cases. Moral philosophers also take into account intuitions about more general propositions, including the intuitive plausibility of the theories that Foot considered but also the intuition that one ought to bring about the best consequences, that two actions cannot have different moral qualities unless there is some nonmoral difference between the two, etc. There are then disputes about how much relative weight to give the intuitions on cases versus those on more general propositions. Furthermore, we want our moral theories to be elegant, explanatorily powerful, and systematic, and we want them to comport with what we know about the rest of the world, including what we know from the cognitive sciences. For these reasons it is highly likely that the best moral theories will not respect all intuitive verdicts. At least some intuitive verdicts will be explained away as erroneous in a theory that is otherwise elegant, explanatorily powerful, systematic, and a good fit with the other things we know.
2. What Are Intuitions, and Why Rely on Them?
Granting that all this is part of standard practice, it is natural to wonder: Why is it standard practice? If I am trying to figure out how to live my life, or how to improve my moral outlook, how are moral intuitions relevant? Why take their deliverances seriously?
To begin to answer these questions it is useful to distinguish a-theoretical characterizations of intuitions from the deeper theory of what they are. An a-theoretical characterization is going to try to say enough about what intuitions are to identify them for further study. This is kind of like characterizing physical phenomena in a way that invites further scientific study, as we might identify the substance of water in terms of its surface features—clear, potable, freezes at a certain temperature, etc.—in a way that enables further investigation into the stuff. In the case of water, we have discovered that it comprises H2O molecules, and this discovered chemical composition helps to explain its a-theoretical surface features. The hope is that we might do something similar for intuitions.
3. A-Theoretical Characterizations and Their Epistemology
A-theoretically, then, there is growing consensus that 'intuition' is often used to refer to a special kind of seeming state that can be characterized in terms of its phenomenology and cognitive role. Seeming states just are states of mind whereby certain content seems true (cf. Bealer, 2000, 3; Bengson, 2015b; Huemer, 2005, Ch. 5; Chudnoff, 2011a; Cullison, 2010). Standard examples of seeming states come from perception—it can perceptually seem that a tree is green and leafy or that there is water on the road ahead—and memory—it memorially seems that I locked the door when I left the house. These states arguably have a certain phenomenological quality that is aptly described by saying that certain propositions seem true. The thought is that intuitions like those in the doctor cases are also states whereby some content seems true, albeit intuitively rather than, say, perceptually or memorially.

There is also growing consensus that these seeming states are not mere beliefs. Consider: There are things we believe or we are inclined to believe, based on good evidence, but do not find intuitive. I believe I was born on March 18, and I am inclined to believe that my son will speak French fluently, but these things are not intuitive. Further, belief in p, for many p, can be prompted by good evidence that p. But an intuition that p, for many p, is not prompted by any old good evidence that p; having an intuition takes the right kind of case or proposition. Last, intuitions often cause us to believe their contents, though it is possible to have an intuition that p without believing that p. You might become convinced that it is always permissible to save the greater number and so disbelieve your intuition in Doctor 2. Similarly, we can also distinguish intuitions from hunches, guesses, blind impulses, vague sentiments, preferences for action, or current familiar opinion (Sidgwick, 1907, 211–212).1

Turning to epistemic matters, it turns out that our a-theoretical characterization of intuitions fits nicely with a plausible epistemic principle. I will put it like this: For all p, your belief that p based on it seeming to you that p enjoys some defeasible justification (cf. Chudnoff, 2011b; Huemer, 2006, 148; Pryor, 2000). So if it seems to you that it is impermissible to kill the patient in Doctor 2, and on this basis you believe that the action is impermissible, your belief is to some degree defeasibly justified.

If all this is on the right track, the a-theoretical characterization of intuitions underwrites the standard practice. Suppose we have a bunch of moral intuitions—an intuition that P, an intuition that Q, . . . and so on. On this basis we believe, and have some defeasible justification for believing, that P, that Q, . . . and so on. We can then consider whether P, Q, . . . and so on, reveal an interesting pattern. Perhaps the pattern that emerges is one predicted by the DDE, or by the principle that duties not to harm are more stringent than duties to aid, or by some other principle altogether. More generally, we consider what plausible theory best explains any pattern that emerges. In this way, intuitions are a stepping-stone to justified beliefs, and inference to the best explanation, or some wider process of reflective equilibrium, operates on the contents of those justified beliefs. The outcome of the reflective equilibrium might be to ultimately reject P, Q, etc. But the main idea here is that intuitions might legitimately feed into a respectable form of inquiry that takes intuitive verdicts seriously, at least on the front end.

Seen in this light, moral theory that relies on moral intuitions is just one domain among many where some things seem true, one thereby has some justification for believing that which seems true, and theory helps to best explain all that is true. Intuitions thereby help to adjudicate which moral theories are best in the same way that perceptual seemings help to adjudicate which scientific theories are best (see, e.g., Chisholm, 1982, 18; Pollock & Cruz, 1999, 195; Feldman, 2003, 77; Pryor, 2000, 519). In both cases we take into consideration not only what seems to be true of particular cases but also the plausibility of general, explanatory propositions and the theoretical virtues of elegance, power, and systematicity.
4. Intuition and Explanation
Before we turn to deeper theories of intuitions and the concerns that surface there, the a-theoretical approach itself is the target of a number of objections. In the next several sections I will address some of the most common ones. In the process I hope to clarify the a-theoretical approach, its epistemology, and how all this relates to standard practice.

Let me phrase the first objection rhetorically: It is claimed that moral theory explains our intuitive verdicts, but how so? Suppose that I have the intuitions reported earlier for Doctor 1, Doctor 2, and Doctor 3. How is a bit of moral theory suited to explain why I am in these psychological states? As Peter Singer says,

[a] normative ethical theory . . . is not trying to explain our common moral intuitions. It might reject all of them, and still be superior to other normative theories that better matched our moral judgments. For a normative moral theory is not an attempt to answer the question "Why do we think as we do about moral questions?" . . . A normative moral theory is an attempt to answer the question "What ought we to do?"
(2005, 345)

In reply, the mistake here is to think that moral theory is explaining facts about intuitions—who has which intuitions for which cases, how intuitions can be manipulated, etc. Those who defend the standard practice, however, would no doubt want to say that moral theory is trying to explain the contents of the intuitions, or what I have called the intuitive verdicts. We want to explain why it is impermissible to kill in Doctor 2 but permissible to fail to treat in Doctor 1, even when the numbers of living and dead match up. And moral theory is suited to explain that.

Compare: If we make some scientific observations about perturbations of a heavenly body's orbit, we don't just want to explain why we made the observations, which presumably must appeal to our psychology and not just the laws that govern the heavenly bodies. We also want to explain the contents of the observations (that which perceptually seemed true), viz., why the heavenly body was THERE, then THERE, then THERE, etc. We can leave psychology at the doorstep to explain the latter. So critics need to focus on the stepping-stone principle articulated earlier: For all p, your belief that p based on it seeming to you that p (including it intuitively seeming to you that p) thereby has some defeasible justification.
5. Non-Inferential Justification?
Before we get to those criticisms, another clarification is in order. It is commonplace to characterize intuitive justification as non-inferential justification. I am not certain what to make of this claim, but two points come to mind. First, the task is not to legitimate or debunk a move (possibly an inference) from the belief that one has an intuition that P to the belief that P. The task is to legitimate or debunk a move (possibly an inference) from the intuition that P to the belief that P. Second, when we make this second move we base belief on intuition. It is not clear what this basing relation is, though it is no doubt partly causal. But whether or not it is an inference is of dubious importance. The key claim is that it is a movement of mind that can generate justified beliefs. We need to hear more about what it takes to be an inference and why it matters whether or not a movement of mind is an inference before we care about whether intuitive justification is non-inferential.
6. Epistemic Internalism?
Another concern about the stepping-stone principle comes from epistemic externalist quarters. As articulated, the principle allows intuitions to justify beliefs without any reliable connection to the moral facts. To see this, imagine that we are being massively deceived by an evil demon (unbeknownst to us) and none of our seeming states are veridical. Still, according to the stepping-stone principle we are justified in believing that things are as they seem to be. A more externalist epistemology would pay more attention to the right connections with the external facts. For example, we might not want a belief to be justified unless it was produced by a reliable belief-forming process, in which case we want beliefs based on intuitions to be part of a reliable belief-forming process (Markie, 2005, 356–357).2 The epistemic internalism-externalism debate marks one of the major fault lines in epistemology more generally. It would be too much to discuss it in any detail here. But if some good connection with the facts is your primary concern, you will be most interested in the deeper theories of intuitions presented below, and whether those theories draw the right connections with the facts.
7. Need We Calibrate First?
Whether intuitions have a good connection with the facts is one thing. Whether we already have some evidence of this connection (or lack of it) is another. Robert Cummins would prefer a prior showing of reliability or, as he puts it, a calibration of our intuitions (Cummins, 1998, 117) before we justifiably rely upon them. The biggest concern here is that we might not have any way of verifying the truth of intuitive verdicts that is independent of intuitions themselves. For key intuitions, like whether killing the lone patient in Doctor 2 is impermissible, or whether pain is intrinsically bad, there is a real concern that we cannot step outside our intuitions, take a peek at the facts, and note which intuitions are roughly right and which are not before we decide which ones to rely upon (cf. Ross, 1930, 40–41; Brandt, 1979, ch. 1). Cummins could reply that this is so much the worse for justifiably relying on intuitions. Then again, perhaps the desire for independent calibration is over-demanding. For we lack the means to independently calibrate not only intuitions but any seeming states, including perceptual ones. So Cummins' admonition starts to look less like a prudential cautionary note and more like a recipe for full-blown skepticism (Sosa, 1998). Perhaps all we can ever do is calibrate our judgments via reflective equilibrium. That is, we can make sure intuitive verdicts are not inconsistent with each other and with the other evidence we have. But that would be to carry out the standard practice, not to suspend it until we calibrate.
8. Already Shown Unreliable?
Some have argued that we already have good reason to believe that moral intuitions are in fact unreliable. Sinnott-Armstrong (2006), for example, argues that the following considerations should lead one to mistrust intuitively justified judgments without some non-intuitive corroboration of their contents: (i) which moral view one adopts affects one's self-interests, so one is likely going to be biased when adopting a moral view; (ii) there is often moral disagreement without reason to think one party is more likely to be right than the other; (iii) emotions influence one's moral view, and emotions cloud judgment; (iv) moral views are likely to be formed in circumstances conducive to illusion as shown by heuristics and framing effects; and (v) moral beliefs might be the product of morally suspicious sources, e.g., if they are the result of the influences of the rich and powerful. His point is that, at least cumulatively, these influences require us to gather some confirming evidence for the contents of our intuitions or to show that some intuitions are not influenced by these factors.

Similarly, Michael Huemer (2008) has argued that ethical intuitions conflict, that they have been influenced by one's culture, that they have been influenced by evolutionary pressures, that they sometimes support theories that promote one's self-interests, that they sometimes line up with strong emotion, and that the abstract ones are prone to an overgeneralization bias. On these grounds he is inclined to place more trust in formal intuitions, like the intuition that if a is better than b and b is better than c, then a is better than c (the transitivity of value), for he thinks such intuitions are less susceptible to distorting influences.
9. Is Disagreement a Problem?
Let me give individual attention to a few items on these lists. First, disagreement. It is hard to assess the extent to which intuitions are shared among individuals. No doubt there is a lot of agreement. But suppose for the sake of argument that there is some domain of disagreement in intuition and intuition-based belief. Sinnott-Armstrong concludes that relying on intuitions is akin to relying on the reading of a thermometer drawn at random from a box of 100 thermometers, some portion of which are known to be unreliable (Sinnott-Armstrong, 2007, 201).
I think this mischaracterizes the epistemic role of both seeming states and testimony. To see why, consider the following case of disagreement in perceptual seemings. Suppose you are presented with the following image:
[Figure: a vertical line x shown alongside three comparison lines a, b, and c, of which b is the same length as x.]
Which line of a, b, and c is the same length as x? B, of course, or so it seems (perceptually). But now imagine that several other people report that it looks to them that a or c is the same length as x.3 What is the justified response on your part? Perhaps you would take another look at the lines or measure things out. But what if you cannot do so? Should you then treat your perceptual seeming as randomly selected from a set of seemings, some portion of which are unreliable? Should you suspend judgment on which line is the same length as x? I think not. The way things look to you plays a different epistemic role than reports about the way things look to others. Perhaps you would reduce your confidence that b is the same length as x, but you would be justified if you maintain the belief. So here we have a clear case where disagreement in how things seem does not epistemically obligate you to treat your seeming as randomly selected from the pool of disagreeing intuitions. Arguably, so it goes with disagreement in moral intuition. At least, anyone who would make an exception for ethical intuitions in this regard has some explaining to do.4
10. Experimental Manipulation?
But do we otherwise know that moral intuitions are formed under circumstances conducive to illusion? There are some relevant studies to consider. One set of studies shows the effects of word choice when describing cases that elicit intuitions. In Petrinovich and O'Neill (1996) subjects were given trolley-type cases, where one must choose between saving/killing the greater number and the lesser number. If given an option couched in terms of 'kill' wording (e.g., 'to throw the switch that will result in the death of the one innocent person'), subjects on average disagree slightly with the action, whereas if given an option in terms of 'save' wording (e.g., 'to throw the switch that will result in the five innocent people on the main track being saved'), subjects on average slightly agree with the action.

That same paper reported other studies that showed ordering effects. Subjects were asked to indicate their approval of actions (described in terms of saving) in three trolley-type questions given in sequence. One set of subjects got the questions in one sequence, and another set of subjects got the same questions in reverse sequence. On average, subjects more strongly approved of an action when it appeared first in the sequence than when it appeared last, and the degree of approval for the middle case varied depending on which of the other two trolley-type cases came first.

From effects like these, Sinnott-Armstrong argues that it is reasonable to "assign a large probability of error to moral intuitions in general" (Sinnott-Armstrong, 2008, 99). For him, any given moral intuition must be shown to be immune from such effects before it can be justified.5 But it is not clear that this is the conclusion to draw. For one, it is noteworthy that the order effects did not show subjects changing from approval to disapproval of an action based on the order in which it was presented. The effect was a shift in the degree of approval for the action. So it might be misleading to talk of unreliability in this context. Second, Petrinovich and O'Neill (1996) report a set of trolley-type cases that did not show order effects at all (discussed in Sinnott-Armstrong, 2008, 61). Third, even if we grant that there are framing effects in the lab, it is not clear that such effects should encourage us all to suspend judgment on all intuitively justified beliefs. For one, we do not have enough evidence to generalize from the cases where framing effects have been shown to say that all intuitions are subject to these effects. For another, we are aware of these effects in a number of other domains, but we do not conclude that all judgments in those domains are unreliable. So though the framing and ordering effects are interesting and we should take them into consideration when improving our moral outlooks, it is not clear that they support the conclusion that moral intuitions are unreliable.
11. Distorting Influences?

There is still a general concern that our moral judgments have been influenced by things that have nothing to do with the moral facts. It is unclear, for example, whether evolutionary and cultural forces would push us toward the formation of accurate moral judgments or not (see Street's Darwinian Dilemma in her 2006). While concerns of this sort need not target intuitions per se, they can be directed at intuitions and whether they are good indicators of the moral facts. And the concerns can be developed in a couple of ways. First, we might worry that some of these historical forces have made our intuitions sensitive to (arguably) morally irrelevant considerations, such as whether someone in need is a relative of ours or not. I think there is growing recognition, however, that whether some consideration counts as morally relevant is itself a moral question. It is not clear how to decide the matter without engaging in the standard practice, which makes use of intuitions. Once we have decided which considerations are morally relevant we can turn to the etiology of intuitions and dismiss some of them as sensitive to morally irrelevant facts. But this is to enlist the method of relying on intuitions rather than discard it.
Second, we might simply wonder how likely it is that the forces responsible for our moral intuitions and subsequent judgments help those intuitions and judgments reflect the moral facts (whatever they are). If it seems unlikely that intuition would be a good guide, perhaps this counts as a defeater for intuitively based justification. I myself have pressed arguments along these lines, and I tend to think they have most force if we think the moral facts we are trying to represent correctly are nonnatural (or sui generis) facts (Bedke, 2009, 2014). But these issues remain hotly contested.
12. Theories of Moral Intuitions

So far we have worked with an a-theoretical characterization of intuitions as seeming states with certain surface features that we would like to explain. Those surface features include: (i) a certain phenomenological quality aptly characterized as a phenomenology of seeming true intuitively, a phenomenology similar to but distinct from other ways of seeming true (e.g., perceptually), and (ii) a characteristic role in cognition whereby intuitions are prompted only by certain cases or propositions (not just any case or any good evidence will generate an intuition), where (iii) the phenomenology and cognitive role help to distinguish intuitions from beliefs, inclinations to believe, hunches, guesses, other kinds of seeming states (e.g., perceptual ones), etc., and where (iv) the standard practice is to rely on them when attempting to improve one's moral outlook. Further, (v) intuitions a-theoretically conceived fall under a plausible epistemic principle—the stepping-stone principle. I now turn to deeper theories of these states that have the potential to explain the surface features and that could help us decide whether intuitions indeed have the epistemic credentials needed to underwrite the standard practice.
13. Self-Evidence

On one historically influential view intuitions have something to do with self-evident propositions. According to Robert Audi, a self-evident proposition is “a truth” such that “an adequate understanding of it is sufficient both for being justified in believing it and for knowing it if one believes it on the basis of that understanding” (Audi, 1998, 20, 1999, 206, 2004, 49; see also Audi, 2008, 478). Similarly, Russ Shafer-Landau says “A proposition p is self-evident = df. p is such that adequately understanding and attentively considering just p is sufficient to justify believing that p” (Shafer-Landau, 2003, 247). As for candidate self-evident moral propositions, many philosophers have focused on certain general moral principles. W. D. Ross, for example, thought that there were seven self-evident moral duties, such as the duty to keep one's promises. But others have suggested that verdicts on cases can be self-evident (Clarke, 1706, 226 (academic pagination); Prichard, 1912, 28). According to self-evidence theory, then, (some) intuitions have their distinctive surface features because they are states whereby one understands a self-evident proposition. Other states, such as standard beliefs, inclinations, hunches, perceptual seemings, etc., are not understandings of self-evident propositions, so it makes sense that intuitions would differ from them in phenomenology and cognitive role.
This sounds like the sort of theory that would support the epistemic credentials of intuitions and underwrite the standard practice. But in fact the theory simply stipulates that intuitions have epistemic credentials. After all, we are told that a self-evident proposition is a truth such that understanding it can justify believing it.6 So the view does not provide independent grounds for thinking that intuitions play a certain epistemic role. Maybe it could. If we had some non-epistemic characterization of what understanding a self-evident proposition amounts to, we might be able to use that bit of theory to independently evaluate the epistemic credentials of intuitions. The major concern here is whether we have clear enough ideas of understanding and self-evidence that are not merely high-minded ways of recapitulating the a-theoretical characterization of intuitions and the epistemic role they would need to play to underwrite the standard practice. My own view is that we must await theoretical advances on this front before self-evidence theory can bear explanatory weight.7
14. Intellectual Perception

Another theory says that (some) intuitions are the deliverances of intellectual perception. The point here is to draw a parallel with perception of the physical world. Just as we might visually see/apprehend that some object is spherical, we might intellectually see/apprehend that anything that is yellow is colored. We might intellectually see that any belief that is luckily formed in a certain way is not knowledge. And, coming to a moral case, we might intellectually see that saving five patients by killing one in Doctor 2 is impermissible. Proponents of intellectual perception can focus on an intellectual ability to grasp properties/universals and relations among them (Huemer, 2005, 126). They can then explain that intuitions have their distinctive surface features because they involve a distinctive sort of relation to abstract properties and their relations—the relation of intellectual grasping. Moreover, this theory looks like it will vindicate the epistemic credentials of intuitions and underwrite the standard practice. The main concern is this: Is the idea of grasping abstracta sufficiently illuminating to bear much explanatory weight? Arguably not. And to the extent we try to clarify the explanatory resources here there is a danger that they are objectionable in their own right. For it can look like we are positing strange mental powers and relations to a Platonic realm, which strikes some philosophers as mysterian. Of course, not everyone thinks these explanatory resources are objectionable or mysterious (Chudnoff, 2013; Bengson, 2015a; see also Audi, 2010; Cowan, 2015; Dancy, 2010; McBrayer, 2010; and Väyrynen, 2008).
15. Conceptual Competence

Another theory of intuitions begins with the idea of conceptual competence. The rough idea is that conceptual competence brings with it some implicit knowledge of the criteria for falling under the concept. These criteria are special in that they demarcate the very boundaries of the concept, or the very boundaries of what the concept is about. So when these criteria are present in a case this is a very special sort of evidence that the concept applies. Last, it is possible to craft cases and propositions in such a way that they directly engage one's conceptual competence; that is, certain cases and propositions enable one to
recognize the presence (or determinate absence) of the conceptual criteria for concept application. For some views sympathetic to this line of thought see, e.g., Goldman (2007, 4, 2013, passim), Graham and Horgan (1994), and Ludwig (2007). One theory, then, is that some case-based intuitions just are states of categorizing with a concept based solely on conceptual competence, and some intuitions concerning more general propositions are recognitions of conceptual connections based solely on one's conceptual competence. This helps to explain why intuitions have distinctive surface features, including a special phenomenological quality and cognitive role. Of course, the details about what conceptual competence amounts to and the psychology and metaphysics behind it can be filled in in a number of ways. In fact, some would articulate conceptual competence in terms of a capacity to grasp abstracta and relations among them.8 But to make this theory distinct from the others already canvassed let us think of competence as an ability to apply psychologically encoded criteria and an ability to draw connections between concepts based on their psychological roles. This theory is also poised to vindicate the epistemic credentials of intuitions and to underwrite the standard practice. If intuitions are based on conceptual competence, then presumably they are by and large accurate, and so we would need some reason to question the defeasible justification of beliefs based on them, and we would need some reason to question the broader practice of treating intuitive verdicts as data to be explained by our best theorizing. The main concern is whether conceptual competencies of this sort exist, that is, whether concept possession really is criterial. Even if this concern is met, there is a more specific problem when we apply this theory to moral intuitions. For it seems like many moral intuitions are not driven by conceptual competence alone. In Doctor 2, for example, it seems overblown to say that someone who does not share the intuition of impermissibility either lacks our concept of impermissibility, fails to consider the case attentively, or makes some conceptual mistake in applying the concept to the case. They might be mistaken in their judgment, but it does not seem to be a conceptual mistake. Based on these and other considerations, I have elsewhere suggested that moral intuitions might be a special kind of conceptually grounded intuition, one where conceptual competence identifies potentially speaker-variant criteria (and attitude-involving criteria) for falling under moral concepts (Bedke, 2008, 2010). Further consideration of this position would take us too far afield. Suffice it to say that any theory of moral intuitions that grounds them in the exercise of conceptual competence is going to have to say something to explain why variations in certain moral intuitions do not appear to betray a failure of conceptual competence.
16. Outputs of System 1 Processing

The last theory of intuitions I will consider comes out of the psychological literature. It starts with the idea that cognition has two reasoning processes, system 1 and system 2. System 1 processing is characterized as implicit, fast, automatic, low effort, and modular (think of gut reactions based on stereotypes and heuristics), while system 2 processing is explicit, slow, deliberative, effortful, and general purpose (think of deductive reasoning with many premises). Drawing on this independently supported distinction in cognitive science, some
psychologists working on moral cognition have suggested that (some) intuitions are affective responses from system 1 processing (Haidt, 2001; Haidt & Bjorklund, 2008).9 When it comes to explaining the surface features of moral intuitions, the primary concern with the system 1–system 2 approach is that it lumps intuitions in with all sorts of other judgments that do not have the same surface features. Hunches, guesses, blind impulses, vague sentiments, preferences for action, popular opinions, etc., are likely the outputs of system 1, but we can pre-theoretically distinguish these states from intuitions. It looks like we need a finer-grained theory to explain why intuitions have a different phenomenology and cognitive role than these other states. Further, if this theoretical approach can be made to work, it is not a blanket vindication of the epistemic credentials of intuitions. One of the main concerns from this literature is that some system 1 judgments will be sensitive to morally irrelevant factors. We might be able to explain, for example, why we have evolved a knee-jerk reaction to actively killing someone but no similar knee-jerk reaction to letting someone die. But it is unclear what that evolutionary explanation has to do with the moral facts. Similarly, there is going to be some good scientific explanation for why we recoil at the thought of eating our deceased loved ones, and this might lead us to judge it wrong via system 1 processing, but this reaction and this judgment arguably have nothing to do with the moral status of the action and have everything to do with avoiding diseases in some ancestral environments. To press the concern we need some idea of what counts as a morally irrelevant factor. And, as we have already seen, to discern what is morally relevant we need to rely on our intuitions. So this (partial) theory of intuitions combines with information from cognitive science and feeds into the standard method in moral inquiry. The result might well be that some intuitions are debunked (because they are sensitive to morally irrelevant factors), while others are vindicated.
17. Conclusion

Clearly, the use and study of moral intuitions is a very rich field. There seems to be growing consensus on the a-theoretical matters, and the remaining points of contention are well known. The future of the field is sure to remain extremely engaging for philosophers and cognitive scientists alike.
Notes
1. We could use the term ‘intuition’ to label a different sort of state. But I think there is a type of seeming state that we often refer to with ‘intuition,’ and with those states lie the interesting epistemic and theoretical issues.
2. See Shafer-Landau (2003, chs. 11–12) for a view that mixes some elements of internalist and externalist epistemologies.
3. For a relevant experiment, see Asch (1951). For a clip go to www.youtube.com/watch?v=TYIh4MkcfJA.
4. For related discussions about taking others to be epistemic peers, see Audi (2008, 490) and Enoch (2010, 979–981).
5. Note that the aforementioned studies ask participants to register their approval of action rather than to classify actions under moral categories (e.g., impermissible, permissible). So it is not clear that the studies focused on moral intuitions.
6. Both Audi and Shafer-Landau say that self-evident justification does not entail indefeasibility, and one can adequately understand a self-evident proposition and yet fail to believe it (see, e.g., Audi, 2004, Ch. 2, and Shafer-Landau, 2003, Ch. 11).
7. It looks like Audi would characterize self-evidence in terms of conceptual containment and the application of concepts (Audi, 2008, 479). In my view, this transforms the theory into a version of the conceptual competence theory, to be discussed later.
8. The classical position is represented in Russell (2008 [1912], chs. IX–XI). See also Huemer (2005, 124–126), and Cuneo and Shafer-Landau (2014).
9. See too Chapters 4, 6, 7 and 8 of this volume. Based on fMRI scans of brain activation, Joshua Greene (2008) has argued that consequentialist judgments are largely the products of system 2 while deontologist judgments are largely the products of system 1. See Chapter 4 of this volume for further discussion.
References
Asch, S. E. (1951). “Effects of Group Pressure Upon the Modification and Distortion of Judgment,” in H. Guetzkow (ed.), Groups, Leadership and Men. Pittsburgh, PA: Carnegie Press, 177–190.
Audi, R. (1998). “Moderate Intuitionism and the Epistemology of Moral Judgment,” Ethical Theory & Moral Practice, 1 (1), 15–44.
———. (1999). “Self-Evidence,” Philosophical Perspectives, 13, 205–228.
———. (2004). The Good in the Right: A Theory of Intuition and Intrinsic Value. Princeton: Princeton University Press.
———. (2008). “Intuition, Inference, and Rational Disagreement in Ethics,” Ethical Theory and Moral Practice, 11 (5), 475–492.
———. (2010). “Moral Perception and Moral Knowledge,” Aristotelian Society Supplementary Volume, 84 (1), 79–97.
Bealer, G. (2000). “A Theory of the a Priori,” Pacific Philosophical Quarterly, 81 (1), 1–30.
Bedke, M. S. (2008). “Ethical Intuitions: What They Are, What They Are Not, and How They Justify,” American Philosophical Quarterly, 45 (3), 253–269.
———. (2009). “Intuitive Non-Naturalism Meets Cosmic Coincidence,” Pacific Philosophical Quarterly, 90 (2), 188–209.
———. (2010). “Intuitional Epistemology in Ethics,” Philosophy Compass, 5 (12), 1069–1083.
———. (2014). “No Coincidence?” Oxford Studies in Metaethics, 9, 102–125.
———. (2016). “Intuitions, Meaning, and Normativity: Why Intuition Theory Supports a Non-Descriptivist Metaethic,” Philosophy and Phenomenological Research, 93 (1), 144–177.
Bengson, J. (2015a). “Grasping the Third Realm,” Oxford Studies in Epistemology, 5, 1–38.
———. (2015b). “The Intellectual Given,” Mind, 124 (495), 707–760.
Brandt, R. B. (1979). A Theory of the Good and the Right. Oxford: Clarendon Press.
Chisholm, R. (1982). The Foundations of Knowing. Minneapolis: University of Minnesota Press.
Chudnoff, E. (2011a). “What Intuitions Are Like,” Philosophy and Phenomenological Research, 82 (3), 625–654.
———. (2011b). “The Nature of Intuitive Justification,” Philosophical Studies, 153 (2), 313–333.
———. (2013). “Intuitive Knowledge,” Philosophical Studies, 162 (2), 359–378.
Clarke, S. (1706). “A Discourse of Natural Religion,” reprinted portion in D. D. Raphael (ed.), British Moralists 1650–1800. Indianapolis, IN: Hackett Publishing, 224–261.
Cowan, R. (2015). “Perceptual Intuitionism,” Philosophy and Phenomenological Research, 90 (1), 164–193.
Cullison, A. (2010). “What Are Seemings?” Ratio, 23 (3), 260–274.
Cummins, R. (1998). “Reflections on Reflective Equilibrium,” in M. DePaul and W. Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry. Lanham, MD: Rowman & Littlefield, 113–128.
Cuneo, T. and Shafer-Landau, R. (2014). “The Moral Fixed Points: New Directions for Moral Nonnaturalism,” Philosophical Studies, 171 (3), 399–443.
Dancy, J. (2010). “Moral Perception,” Aristotelian Society Supplementary Volume, 84 (1), 99–117.
Enoch, D. (2010). “Not Just a Truthometer: Taking Oneself Seriously (But Not Too Seriously) in Cases of Peer Disagreement,” Mind, 119 (476), 953–997.
Feldman, R. (2003). Epistemology. Englewood Cliffs, NJ: Prentice-Hall.
Foot, P. (1967). “The Problem of Abortion and the Doctrine of the Double Effect,” Oxford Review, 5, 5–15.
Goldman, A. (2007). “Philosophical Intuitions: Their Target, Their Source, and Their Epistemic Status,” Grazer Philosophische Studien: Internationale Zeitschrift für Analytische Philosophie, 74 (1), 1–26.
———. (2013). “Philosophical Naturalism and Intuitional Methodology,” in A. Casullo and J. C. Thurow (eds.), The a Priori in Philosophy. Oxford: Oxford University Press, 11–44.
Graham, G. and Horgan, T. (1994). “Southern Fundamentalism and the End of Philosophy,” Philosophical Issues, 5, 219–247.
Greene, J. D. (2008). “The Secret Joke of Kant’s Soul,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA: MIT Press, 35–80.
Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review, 108 (4), 814–834.
Haidt, J. and Bjorklund, F. (2008). “Social Intuitionists Answer Six Questions About Moral Psychology,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 181–217.
Harman, G. (1977). The Nature of Morality: An Introduction to Ethics. Oxford: Oxford University Press.
Huemer, M. (2005). Ethical Intuitionism. Houndmills, Basingstoke, Hampshire, New York: Palgrave Macmillan.
———. (2006). “Phenomenal Conservatism and the Internalist Intuition,” American Philosophical Quarterly, 43 (2), 147–158.
———. (2008). “Revisionary Intuitionism,” Social Philosophy & Policy, 25 (1), 369–392.
Ludwig, K. (2007). “The Epistemology of Thought Experiments: First Person Versus Third Person Approaches,” Midwest Studies in Philosophy, 31 (1), 128–159.
Markie, P. (2005). “The Mystery of Direct Perceptual Justification,” Philosophical Studies, 126 (3), 347–373.
McBrayer, J. (2010). “A Limited Defense of Moral Perception,” Philosophical Studies, 149 (3), 305–320.
Petrinovich, L. and O’Neill, P. (1996). “Influence of Wording and Framing Effects on Moral Intuitions,” Ethology and Sociobiology, 17 (3), 145–171.
Pollock, J. L. and Cruz, J. (1999). Contemporary Theories of Knowledge (Studies in Epistemology and Cognitive Theory) (2nd ed.). Lanham, MD: Rowman & Littlefield.
Prichard, H. A. (1912). “Does Moral Philosophy Rest on a Mistake?” Mind, 21 (81), 21–37.
Pryor, J. (2000). “The Skeptic and the Dogmatist,” Noûs, 34 (4), 517–549.
Ross, W. D. (1930). The Right and the Good. Oxford: Clarendon Press.
Russell, B. [1912] (2008). The Problems of Philosophy. Rockville: Arc Manor LLC.
Shafer-Landau, R. (2003). Moral Realism: A Defence. Oxford: Clarendon Press.
Sidgwick, H. (1907). The Methods of Ethics (7th ed.). Indianapolis, IN: Hackett Publishing.
Singer, P. (2005). “Ethics and Intuitions,” The Journal of Ethics, 9 (3–4), 331–352.
Sinnott-Armstrong, W. (2006). “Moral Intuitionism Meets Empirical Psychology,” in T. Horgan and M. Timmons (eds.), Metaethics After Moore. New York: Oxford University Press, 339–366.
———. (2007). Moral Skepticism. New York: Oxford University Press.
———. (2008). “Framing Moral Intuitions,” in W. Sinnott-Armstrong (ed.), Moral Psychology, Vol 2: The Cognitive Science of Morality: Intuition and Diversity. Cambridge, MA: MIT Press, 47–76.
Sosa, E. (1998). “Minimal Intuition,” in M. DePaul and W. Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry. Lanham, MD: Rowman & Littlefield, 257–270.
Street, S. (2006). “A Darwinian Dilemma for Realist Theories of Value,” Philosophical Studies, 127, 109–166.
Thomson, J. J. (1985). “The Trolley Problem,” The Yale Law Journal, 94 (6), 1395–1415.
Väyrynen, P. (2008). “Some Good and Bad News for Ethical Intuitionism,” Philosophical Quarterly, 58 (232), 489–511.
Further Readings
W. D. Ross, The Right and the Good (Oxford: Clarendon Press, 1930). (A classic defense of ethical intuitionism, which combines an intuitional epistemology with various other theses.)
Michael Huemer, Ethical Intuitionism (New York: Palgrave Macmillan, 2005). (Contains an influential modern defense of ethical intuitionism.)
Robert Audi, The Good in the Right: A Theory of Intuition and Intrinsic Value (Princeton: Princeton University Press, 2004). (Combines an intuitional epistemology with some elements of Rossian deontology and Kantian moral theory.)
Philip Stratton-Lake, ed., Ethical Intuitionism: Re-Evaluations (Oxford: Oxford University Press, 2002). (A nice collection of modern essays.)
Elijah Chudnoff, Intuition (Oxford: Oxford University Press, 2013). (Defends a theory of intuitions generally, not just ethical intuitions, with special attention to their phenomenology.)
Related Chapters
Chapter 6 Moral Learning; Chapter 7 Moral Reasoning and Emotion; Chapter 8 Moral Intuitions and Heuristics; Chapter 11 Modern Moral Epistemology; Chapter 12 Contemporary Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgments; Chapter 17 Moral Perception; and Chapter 21 Methods, Goals, and Data in Moral Theorizing.
19
FOUNDATIONALISM AND COHERENTISM IN MORAL EPISTEMOLOGY
Noah Lemos
Let us begin with coherence theories of justification. What makes a body of beliefs coherent? What makes one body of beliefs more coherent than another? No single answer enjoys widespread support. Perhaps it is best to hold that different factors contribute to, or detract from, the coherence of a body of beliefs. Just as there are several factors that contribute to, or detract from, the goodness of an essay, e.g., cogent arguments or poor grammar, so we may think that coherence depends on a variety of factors. But which factors are these? At least three factors are often cited: (i) logical inconsistency, (ii) explanatory connections, and (iii) inconsistency with norms about belief formation. Let us consider briefly each of these. One factor that detracts from the coherence of a set of beliefs is inconsistency. If my beliefs are inconsistent, then this detracts from the degree of coherence my beliefs enjoy. If I believe that both p and not-p, then this detracts from the coherence of my beliefs. Sometimes a person can believe several propositions without realizing that they are inconsistent. Even if he is unaware of the inconsistency, the fact that his beliefs are inconsistent lowers the degree of coherence his beliefs enjoy. It seems clear, however, that there is a difference between the coherence of a body of beliefs and its logical consistency. Two sets of consistent beliefs can differ greatly in their coherence. Consider, for example, the doctor's beliefs that (a) Adam has red spots on his skin, (b) Adam has a fever, (c) Adam feels itchy, and (d) Adam has the measles. Now consider the following set of unrelated beliefs: (a') Madonna is a singer, (b') Paris is in France, (c') Seven is a prime number, and (d') Snow is white. Both sets of beliefs are consistent, but intuitively the former exhibits much greater coherence than the latter. Indeed, it seems that the latter enjoys little or no coherence whatever. In any case, since each set is consistent and yet the two differ in their coherence, coherence and consistency are not the same thing. Some philosophers hold that explanatory connections contribute to the coherence of beliefs. Consider the doctor's beliefs about Adam. His beliefs exhibit several explanatory connections. Adam's having the measles explains his having red spots, a fever, and his feeling itchy. The explanatory connections between these beliefs increase the coherence of the doctor's beliefs. In general, then, when one's beliefs exhibit explanatory relationships, the coherence of one's beliefs is increased.
Sometimes we form beliefs that conflict with the principles we accept about how we ought to form beliefs. Almost all of us hold that some ways of forming beliefs, e.g., wishful thinking and hasty generalization, are not very reliable and should be avoided. Suppose, for example, I believe that (i) beliefs based on wishful thinking are not likely to be true and should be avoided. Now suppose I form some belief, B, on the basis of wishful thinking. Let B be the belief that the sun will shine tomorrow. Suppose that I become aware on further reflection that (ii) B is formed on the basis of wishful thinking. B conflicts with my other beliefs in (i) and (ii); that is, B conflicts with my beliefs about how I should form beliefs. Some philosophers hold that this sort of conflict lowers the coherence of one's beliefs. It is plausible to think that these three factors are relevant to the coherence of a body of beliefs. Perhaps there are others. It is hard to say what makes a body of beliefs coherent. Still, it is important to note that this is not a problem for the coherentist alone. Foundationalists who take coherence to be relevant to the justification of belief will also want a clearer account of what makes beliefs coherent. Let's assume, however, that we understand in a rough way what makes a body of beliefs coherent. How might we formulate a version of coherentism? One attempt would be the following:

C1: S's belief that p is justified if and only if S's total body of beliefs is coherent and includes the belief that p.

The basic idea behind C1 is that if one's total body of beliefs is coherent, then any belief belonging to it is justified. Unfortunately, C1 won't do. Suppose that S's total body of beliefs is coherent and S's belief that p is included in his total body of beliefs. According to C1, S's belief that p is justified. But of course, all of S's beliefs are included in his total body of beliefs. Consequently, C1 implies that if S's belief that p is justified, then all of S's beliefs turn out to be justified. But that seems an undesirable consequence of C1. C1 implies that if I have some justified beliefs, then all of my beliefs are justified. But that is surely false. It seems likely that some of our beliefs are justified and others aren't. Surely some are unjustified because they are based on wishful thinking, prejudice, or insufficient evidence, while others are well justified. C1 implies that it is "all or nothing" and, thus, it is not sufficiently discriminating. In his discussion of coherentism, Richard Feldman considers the following simple version of a coherence theory of justification:

C2: S is justified in believing that p if and only if the coherence value of S's system of beliefs would be greater if it included a belief in p than it would be if it did not include that belief. (Feldman, 2003, 65)

The basic idea behind C2 is simple and straightforward. Suppose S believes p. We can compare the coherence value of S's total body of beliefs with the coherence value his total body of beliefs would have if we removed his belief that p. If removing the belief that p lowers the coherence value of his system of beliefs, then S is justified in believing that p. If S does not already believe that p, then we can compare the coherence value of his actual system of
beliefs with the coherence value of the system that includes the belief that p. If the system with the added belief has a higher coherence value, then the belief that p is justified. Let's consider the doctor's belief that Adam has the measles in light of C2. In this case, it seems that C2 yields the right verdict. The doctor believes that Adam has the measles. If that belief were removed from his total body of beliefs, then it seems this would detract from the overall coherence of his beliefs. He would have no explanation for Adam's red spots, fever, and itchiness. The coherence of the doctor's beliefs is enhanced by his belief that Adam has the measles and so, according to C2, this belief is justified. Still, as Feldman notes, there are problems with C2. One problem can be illustrated by the following example. Suppose that the night has turned quite cold and I believe on the basis of wishful thinking that my son took his coat when he left. Let's assume that this belief is unjustified for me. If I have this belief, I might well have other beliefs that are closely connected with it. I might believe, for example, that his coat is not hanging in the closet and that he now has his coat. But now imagine that we remove from my body of beliefs the belief that he took his coat when he left. The problem is that the resulting set of beliefs would seem to be less coherent. My belief that he took his coat with him explains my other beliefs that his coat is not hanging in the closet and that he has his coat. If I simply remove my belief that he took his coat when he left, then I have no explanation for these other things I believe. Consequently, this would make my beliefs less coherent, and, according to C2, this implies that my belief that he took his coat with him is justified after all. The problem seems clear. Sometimes removing one unjustified belief from a system of beliefs can lower the coherence of the system. C2 gives us the wrong result in such a case. As Feldman notes, "The fact that any belief, even one that is not justified, can still have logical connections to many other beliefs poses a hard problem for coherentists. It is not clear how to revise coherentism to avoid this problem" (Feldman, 2003, 66). A second problem for C2 arises in the following way. Suppose I form on the basis of wishful thinking the belief that (i) my son took his coat. Suppose I reflect on this belief and come to believe that (ii) my belief that my son took his coat is based on wishful thinking. Suppose further I believe that (iii) beliefs based on wishful thinking are not likely to be true and should be avoided. Here it seems that my beliefs in (i), (ii), and (iii) conflict with one another. Setting aside the objection raised earlier, it seems that if my belief in (i) is eliminated, then the conflict among my beliefs is removed and the coherence value of my beliefs goes up. According to C2, this implies that my belief in (i) is unjustified. But note that the conflict can also be eliminated by eliminating my belief in either (ii) or (iii). Suppose, for example, my belief in (ii) is eliminated. If my belief in (ii) is eliminated, then again I eliminate the conflict and the coherence value of my beliefs goes up. But, according to C2, this implies that my belief in (ii) is also unjustified. C2 implies that my belief in (i) is unjustified and that my belief in (ii) is unjustified. But this seems mistaken.
It seems false that both of these beliefs are unjustified, and, in particular, it seems false that my belief in (ii) is unjustified. Perhaps C2 can be modified to avoid this problem, but it is not clear how it should be amended. In sum, it is difficult to say exactly what contributes to the coherence of a body of beliefs or to formulate a clear, simple, and plausible version of the coherence theory. Still, as we have noted, many foundationalists also believe that coherence is relevant to the
justification of belief. Consequently, the problem of explaining what makes a body of beliefs coherent is not a problem for the coherentist alone. Moreover, the failure to find a simple, clear version of the theory hardly shows that the theory is false. Still, the sort of problems noted for C2 are problems that any satisfactory version of the coherence theory must solve. Let us turn to consider the justification of moral beliefs. Coherence theories hold that our moral beliefs, like other beliefs, are justified solely in terms of their coherence with our other beliefs.1 One problem for coherence theories of justification concerns beliefs based on wishful thinking, prejudice, or bias. One way to illustrate the problem involves the example of wishful thinking considered earlier. Suppose that I believe on the basis of wishful thinking (i) that my son has taken his coat with him. Such a belief is unjustified even though it coheres with my other beliefs. Suppose that I also believe (ii) that I ought to take his coat to him if and only if he has forgotten it. Now, imagine that I infer from these two beliefs (iii) that it is false that I ought to take his coat to him. Since I am not justified in believing that my son has taken his coat, it seems that my belief that it is false that I ought to take him his coat is also unjustified. But my beliefs (i), (ii), and (iii) are all part of my coherent body of beliefs. If this is so, then it seems false that being part of a coherent body of beliefs is sufficient to justify one's moral beliefs. The problem may also be illustrated by the following two cases:

The Stingy Tipper. Joe has had dinner in a fine restaurant. He forms the belief that the service was slow and not very good and that it is right for him to leave a very small tip. Joe's belief about the quality of the service is based on his stingy desire to leave a small tip. But his belief that the service was slow and that it is right to leave a small tip coheres with the rest of his beliefs. He does not believe that his beliefs are based on his being stingy. Still, Joe's belief that it is right to leave a very small tip is unjustified.

The Biased Philosopher. John has to decide between two job candidates, A and B. Both candidates have good teaching records and several fine publications. John believes that candidate A is the better philosopher and will be a better teacher. He believes that it is right for him to vote for A. John believes this because he is biased against the race of candidate B. John does not believe that he harbors such a bias. His belief that it is right for him to vote for A coheres with the rest of his beliefs. Still, John's belief that it is right for him to do so is unjustified.

In both of these cases, it seems that the subject's moral beliefs are unjustified because they are based on the stingy desire to save money or racial bias. Just as beliefs that are based on wishful thinking or hasty generalizations are unjustified, so, too, it seems are beliefs based on stinginess and prejudice. Of course, the coherentist might argue that in these cases, the subjects' moral beliefs conflict with their general beliefs about how beliefs should be formed. Suppose we grant that the subjects accept the norms that beliefs based on desires to save money or racial bias are not likely to be true. If the subjects believed that their beliefs were formed on the basis of stinginess or bias, then holding their moral beliefs would not cohere with their other
beliefs. But in our cases, the subjects believe no such thing. Joe does not believe that his beliefs about the quality of service and the proper tip are based on his stinginess. John does not believe that his belief about the quality of the candidates is based on prejudice. Their particular moral beliefs cohere with their other beliefs. It might also be true that if one were to point out to the subjects that the service was not really slow or that there is no reason to believe that A is a better philosopher, one could change their minds. Perhaps one could get the subjects to change their beliefs and their new beliefs would be justified. But that is irrelevant to the present objection, since we assume that at the time they form their beliefs those beliefs are unjustified for them even though they cohere with the subjects' other beliefs. Another problem for coherence theories concerns deeply held moral principles that seem unjustified. Suppose Iago is a committed ethical egoist. Suppose he believes that it is morally right for a person, S, to perform an act A if and only if S's performing A maximizes S's welfare. Iago believes that it is morally right for him to do an act if and only if it maximizes his welfare. He believes that the fact that an act would cause harm to others is not in itself a moral reason to refrain from that action, though it might be indirectly a reason to avoid the action if causing harm to others would fail to maximize his own welfare. He is aware that many others do not share his moral views, but he believes that they are mistaken. He thinks that social, cultural, and religious pressures of various kinds have led people to hold mistaken moral beliefs. Now imagine that Iago can perform either act A or B. Both acts have good results for him. In fact, each act maximizes his welfare. Act B, however, has terrible consequences for other people and benefits no one else. Action A has no ill effects on anyone else. Iago believes that it is morally permissible for him to perform either action. He believes that there is no significant moral difference between them. Suppose Iago's moral beliefs are coherent. Even if they are coherent, many of his moral beliefs are not justified. His belief that there is no moral difference between acts A and B is not justified. His belief that there is no direct moral reason to refrain from harming others is not justified. Such beliefs are simply unreasonable. If this is right, then the coherence theory of justification is mistaken. Some philosophers have argued that ethical egoism is self-contradictory. G. E. Moore, for example, makes such an argument in Principia Ethica (Moore, 1903, 96–102). If Moore is right and ethical egoism is self-contradictory, then this would surely affect the degree of coherence that Iago's beliefs enjoy. Perhaps it would make the degree of coherence had by his moral beliefs so low that they would not be justified. If this were so, then the coherentist could hold that our objection fails since it assumes that Iago's beliefs are coherent. Unfortunately, Moore's argument that ethical egoism is self-contradictory does not seem sound.2 If the preceding remarks are correct, then it seems that one could have a coherent set of moral beliefs, such as those held by an ethical egoist, where many of those moral beliefs are not justified. The beliefs of the ethical egoist are not unique in this regard.
Suppose someone held the view that an act is morally right just in case it maximizes the welfare of his family, his clan, his country, or his race. Suppose he believed that the fact that an action of his would seriously harm someone outside his family, clan, country, or race would not be a direct moral reason to avoid the action. Suppose he believed that it was permissible to cause terrible harm to others outside his group in order to maximize the welfare of those in
his group when he could just as easily maximize the welfare of his group without harming anyone. Such moral beliefs are unjustified even if they do cohere with the subject's other moral beliefs. Let us turn to consider foundationalist accounts of justified moral beliefs. Foundationalist theories of epistemic justification hold two main theses: first, that there are basic or immediately justified beliefs; second, that nonbasically justified beliefs depend for their justification on one or more basically justified beliefs. A basically justified belief is one that has some degree of justification that is not derived from some other belief. The existence of basically justified beliefs is a main source of disagreement between foundationalist and coherentist theories of justification. One of the historically most important arguments in favor of basic beliefs is the regress argument.3 To set up the regress argument, suppose we grant that some beliefs are justified by other beliefs. Suppose we assume, for example, that B1 is not a basically justified belief. Let us assume that B1 is justified by B2 and B2 is justified by B3. Let us say that in such a case B1 is justified by an evidential chain that has B2 and B3 as members. Now consider the following argument:

1. All evidential chains must either (i) terminate in a belief that is not justified, (ii) be infinitely long, (iii) be circular, or (iv) terminate in a justified basic belief.
2. Options (i)–(iii) are unsatisfactory.
3. Therefore, there must be justified basic beliefs.

The regress argument is an argument by elimination. It holds that the only acceptable evidential chains terminate in basically justified beliefs. Option (i) does not seem acceptable. If the evidential chain terminates in a belief, Bn, that is not justified, then it does not seem that it can confer justification on another belief. If one is not justified in believing Bn, how can it justify any other belief? Option (ii) also does not seem satisfactory, at least not for creatures like us, who do not have an infinite number of beliefs. Option (iii) also does not seem acceptable. According to option (iii), B1 could be justified by B2 and B2 by B3 and B3 by B1. But how can a belief confer justification on itself even if it is through a series of intermediate steps? Since options (i)–(iii) seem unsatisfactory, proponents of the regress argument conclude that evidential chains must terminate in basically justified beliefs. Coherentists reject the regress argument for basic beliefs. One reason they reject it is that they believe that premise (1) of the argument is false. They hold that premise (1) does not include all the options available. Justification, they claim, does not depend on linear evidential chains but is instead a matter of the overall coherence of one's beliefs. Instead of thinking of justification in terms of evidential chains, we should think of justification in terms of overall or holistic coherence. Whether the regress argument is sound is a matter of debate. Still, it seems plausible that some beliefs are basically justified. Traditionally, examples of justified basic beliefs include two kinds: beliefs about simple logical or mathematical truths and introspective beliefs about our own mental states. Consider the propositions that all squares are squares and that if something is red and round, then it is round. One's justification for believing such propositions does not seem to be based on one's believing some other proposition or on inferring
them from some other propositions. We don't need an argument for these propositions in order to be justified in accepting them. Such propositions are immediately justified for us. One simply considers them and "sees" intellectually that they are true. Many of our introspective beliefs about our own mental states also seem to be immediately or basically justified beliefs. Consider, for example, my belief that Paris is the capital of France. If I consider whether I have this belief, I need not infer that I do from my other beliefs. I simply consider whether I believe that Paris is the capital of France, and I find that I do. Such a belief seems to be immediately justified. Some beliefs about my own sensations and perceptual experiences would seem to be justified basic beliefs. My beliefs that I am having a sensation of red or that I am in pain are plausibly thought to be immediately justified beliefs. My belief that I am in pain is not based on or inferred from some other belief of mine. It does not seem to depend upon some other beliefs for its justification. Similarly, beliefs about one's own perceptual experiences seem to be immediately justified. My beliefs that I seem to hear a bell or that I seem to see a dog seem to be immediately justified. I do not need to infer from some other belief that I am having such experiences. In addition to simple a priori and introspective beliefs, some foundationalists include beliefs based on memory and perception. For example, my belief that I had eggs for breakfast is a non-inferential belief that seems to be immediately justified. I do not infer the belief that I had eggs for breakfast from my other beliefs. The same is true of my perceptual belief that there is a book on the desk. That perceptual belief is not inferred from my other beliefs and it seems to be basically justified. Many contemporary foundationalists accept some form of moderate foundationalism that does not require that basic beliefs be certain or infallible and does not require that beliefs outside the foundation be deduced from basic beliefs in order to be justified. According to moderate foundationalism, basic beliefs can have varying degrees of justification, ranging from certainty to mere probability. It also allows that beliefs outside the foundation can be justified by means other than deductive inference. It allows, for example, that non-foundational beliefs can be justified via induction and inference to the best explanation. Moreover, moderate foundationalism does not rule out the possibility that a basic belief might have support from other beliefs. It does not rule out, for example, the possibility that a basic belief might have its degree of justification increased by the fact that it coheres with other beliefs. It simply holds that a justified basic belief must have some degree of justification that is not dependent on the support it gets from other beliefs. Finally, moderate foundationalism allows that basic beliefs can cease to be justified for a person if he acquires counterevidence. It allows that some basic beliefs are defeasible, that they can be defeated by additional evidence. Foundationalists differ over what justifies basic beliefs. Some foundationalists hold that our basic beliefs are justified by non-doxastic experiences such as sensations and perceptual experience.
So, for example, my belief that I am in pain is justified by the fact that I have the sensation of pain and my belief that I seem to see a dog is justified by my perceptual experience of seeming to see a dog. Other foundationalists hold that our basic beliefs are justified by being the product of a reliable cognitive process or intellectual virtue. They would hold, for example, that my belief that I had eggs for breakfast or that there is a desk before me would be justified by being the product of memory or perception.
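The two options just mentioned can be stated a bit more explicitly. What follows is a minimal schematic rendering, not from the original text; the symbols $B$, $e$, $\pi$, and the threshold $\theta$ are illustrative devices.

On the experientialist option, a basic belief $B$ is justified in virtue of a non-doxastic experience $e$ that $B$ fits, not in virtue of any further belief. On the reliabilist option, $B$ is justified in virtue of the reliability of the process $\pi$ that produced it, roughly:

$$J(B) \iff \frac{\text{true outputs of } \pi}{\text{total outputs of } \pi} \ge \theta,$$

where $\theta$ is some suitably high truth ratio. On either rendering, what confers justification is not another belief, which is what makes the justified belief basic.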
Is it plausible to think that some moral or evaluative beliefs are basically or immediately justified? Many philosophers have thought so. W. D. Ross, for example, writes:

That an act qua fulfilling a promise, or qua effecting a just distribution of good . . . is prima facie right, is self-evident; not in the sense that it is evident from the beginning of our lives, or as soon as we attend to the proposition for the first time, but in the sense that when we have reached sufficient mental maturity and have given sufficient attention to the proposition it is evident without any need for proof, or of evidence beyond itself. It is evident just as a mathematical axiom, or the validity of a form of inference, is evident. . . . In our confidence that these propositions are true there is involved the same confidence in our reason that is involved in mathematics. . . . In both cases we are dealing with propositions that cannot be proved, but just as certainly need no proof. (Ross, 1930, 29–30)

In holding that such prima facie moral principles are self-evident and not dependent for their justification on proof, Ross holds that they are basically or immediately justified. Bertrand Russell expressed a similar view:

Perhaps the most important example of non-logical a priori knowledge is knowledge as to ethical value. I am not speaking of judgments as to what is useful or as to what is virtuous, for such judgments do require empirical premises; I am speaking of judgments as to the intrinsic desirability of things. . . . We judge, for example, that happiness is more desirable than misery, knowledge than ignorance, goodwill than hatred, and so on. Such judgments must be immediate and a priori. . . . In the present connexion, it is only important to realize that knowledge as to what is intrinsically of value is a priori in the same sense in which logic is a priori, namely in the sense that the truth of such knowledge can neither be proved nor disproved by experience. (Russell, 1912, 75–76)

Russell, like Ross, holds that we have some ethical beliefs that are immediately or basically justified. Like Ross, he suggests that some ethical beliefs enjoy basic a priori justification. One unfortunate consequence of Ross's comparing prima facie moral principles to mathematical axioms and logical principles is that it suggests that the former enjoy the same high level of epistemic privilege, such as certainty and indefeasibility, that the latter are often taken to enjoy. To many, such a suggestion seems implausible. For however reasonable Ross's prima facie principles or propositions about what is intrinsically good might be, very few, if any, enjoy the certainty and indefeasibility of simple mathematical and logical propositions such as 2 + 2 = 4 and if p, then p. Whether or not Ross intended to suggest that the prima facie moral principles he endorsed were certain and indefeasible, the moderate foundationalist need not take this view. He can agree with Ross that we do have some moral and ethical beliefs that are basically justified, but he need not hold that they are certain or indefeasible. Moreover, it is open to the moderate foundationalist to agree with Ross and Russell that some ethical beliefs enjoy basic a priori justification.
One might object, "Propositions such as 2 + 2 = 4 and if p, then p enjoy immediate a priori justification. There is no disagreement about them. But there is disagreement about the sorts of ethical propositions for which Ross and Russell claim basic a priori justification. Therefore, they do not have basic a priori justification." The argument assumes that if a proposition is immediately justified a priori for one person then there will be no disagreement about it. But there is no good reason to accept this assumption, especially if we concede that what has basic a priori justification can be defeasible and less than certain. It is entirely possible for one person to have basic a priori justification in believing p, and for others who consider p not to be justified in believing it. This is so simply because those who do not believe that p might have (or merely think they have) defeating evidence for p. We need not assume that disagreement implies that neither party is justified in her attitude toward p. Moreover, if we accept moderate foundationalism concerning the a priori, we need not hold that when people disagree over basically justified beliefs there is nothing more to say or that rational discussion has come to an end. By coming to see the implications of his views, one party may find it more reasonable to give up his initial belief. Even if we agree that we do have some immediately justified ethical beliefs about prima facie principles or what is intrinsically good or bad, what about our particular ethical beliefs, e.g., beliefs about whether some particular action is right or wrong or some particular state of affairs is intrinsically good or bad? Can our particular ethical beliefs be immediately justified? It seems clear that many of our particular ethical beliefs are not immediately justified. Particular ethical beliefs that depend for their justification on deliberation or reasoning are not immediately justified insofar as they depend for their justification on other beliefs. Often our particular ethical judgments depend on our beliefs about the future consequences of our actions or about what people deserve or how we and others have acted in the past. When a particular ethical belief depends for its justification on inference from other beliefs it is inferentially justified and not immediately justified. Some particular ethical beliefs, however, do not seem to be the result of deliberation or reasoning.4 Still, it is not clear that they are immediately justified. Ross, for example, offers the following example:

I am walking along the street, and I see a blind man at a loss to get across the street through the stream of traffic. I probably do not ask myself what I ought to do, but more or less instinctively take him by the arm and pilot him across the street. (Ross, 1939, 168)

Ross suggests that his belief that this is the right thing to do is not the result of any inference, reasoning, or deliberation. His belief that this is the right thing to do is not based on inference. Still, Ross says, "Now it is clear that it is in virtue of my thinking the act to have some other character that I think I ought to do it" (Ross, 1939, 168). Again, he writes, "it is only by knowing or thinking my act to have a particular character, out of the many that it in fact has, that I know or think it to be right" (Ross, 1939, 168). Ross's view is that some of our particular ethical beliefs are justified in virtue of our knowing or being justified in believing something about the character of the actions.
It is in virtue of knowing or being justified in believing something about the nonevaluative nature of the act that I am justified
in believing that the act has the evaluative feature that it has. In such cases, one's particular ethical belief would be non-inferential, but it would not be immediately justified insofar as it depends on one's belief that it has certain nonevaluative features. Still, it seems that some evaluative beliefs are both non-inferential and immediately justified. Consider, for example, someone driving in the mountains who comes suddenly on a spectacular vista and forms the belief, "That's beautiful!" Here the aesthetically evaluative judgment would seem to depend on his non-doxastic perceptual states and not on the subject's beliefs, and even less upon some belief in a general aesthetic principle. His aesthetic belief is not the product of any inference. The same seems true of some evaluative beliefs such as when one savors the sensation of slipping into a warm bath and thinks, "That's good" or experiences a sudden pain in one's tooth and thinks, "That's bad." Again, such particular evaluative beliefs are not the product of an inference. One plausible view is that such evaluative judgments are justified by one's non-doxastic experiences. Now consider the case of some particular moral judgments that do not seem to involve any inference or reasoning. Suppose one sees some boys setting a cat afire. One forms the belief that their action is wicked or wrong. Or imagine that one sees an old lady fall on the sidewalk and forms the belief, "I ought to help her." Again, these particular moral judgments seem non-inferential and not the product of reasoning or deliberation. What might justify such beliefs? One view is that they are justified by one's non-doxastic perceptual experiences to the effect that the boys are setting the cat afire or that the old lady has fallen. In this sort of situation, the foundationalist need not deny that one is also justified in believing that the cat is in pain or that the old lady has fallen. The foundationalist might simply hold that one's moral beliefs do not depend on these other beliefs. Indeed, he might hold that the non-doxastic perceptual experiences that justify these other beliefs also justify the belief that their act is wrong or that he ought to help the old lady. Foundationalists of this sort might hold that such particular moral beliefs are epistemically basic and that the non-doxastic perceptual states do "double duty," justifying both the particular ethical belief and some nonethical beliefs.5 Alternatively, some foundationalists might hold that what justifies such non-inferential moral beliefs are one's non-doxastic perceptual experiences along with some reliable cognitive process or epistemic virtue. The perceptual experiences are "inputs" and the moral beliefs are an "output" of a reliable cognitive process or epistemic virtue that in the circumstances makes one good at forming true moral judgments. Again, on this view one's moral beliefs depend, not on the subject's other beliefs, but on his perceptual states and his reliable belief-forming processes. The view that some particular moral and evaluative judgments are immediately justified has some plausibility. Still, the view remains controversial. One objection to the view may be put this way: "Suppose one were asked why he believed he should help the fallen woman. He would cite his beliefs, viz., that she had fallen and that one ought to help people who have fallen.
In justifying one’s beliefs one appeals to one’s other beliefs; therefore, one’s belief is not immediately justified.” This objection, however, overlooks the distinction between justifying a belief and a belief’s being justified. The former is an activity that one engages in, often when prompted by someone else. The latter is a property of a belief. One typically has a great many beliefs that are justified even though one has not attempted to justify them.
While it is true that in justifying a belief one appeals to one’s beliefs, it does not follow that what confers justification on a belief, what makes it justified, are one’s other beliefs. A second objection is suggested by the following passage from William Alston concerning what is required for a particular epistemic belief to be justified:

In taking a belief to be justified, we are evaluating it in a certain way. And, like any evaluative property, epistemic justification is a supervenient property, the application of which is based on more fundamental properties. . . . Hence, in order for me to be justified in believing that S’s belief that p is justified, I must be justified in certain other beliefs, viz. that S’s belief that p possesses a certain property, Q, and Q renders its possessor justified. (Another way of formulating this last belief is: a belief that there is a valid epistemic principle to the effect that any belief that has Q is justified.) (Alston, 1976, 170)6

Alston’s remarks concern epistemic justification, but they are intended to apply to all evaluative, supervenient properties, including ethically evaluative properties such as right and wrong. Following this line of thought, one’s being justified in believing that a particular action, a, is wrong would depend on one’s being justified in believing both (i) that act a has some property Q that makes it wrong and (ii) that whatever has Q is wrong. If this view is correct, then particular moral judgments cannot be immediately justified. Alston suggests that his view is supported by the supervenient nature of evaluative properties, but it is not clear that this is so. From the fact that one sort of property, E, supervenes on another sort of property, Q, it does not follow that justified belief that something has E requires justified belief that it has Q. It might be, for example, that my mental states supervene on my brain states, and they in turn supervene on some even more basic physical states. Yet in order to be justified in believing that I have some mental state, I need not have any justified beliefs about my brain states or about the brain states on which my mental states supervene. My belief that I have some mental state can be immediately justified, even if my mental properties supervene on my physical properties. The supervenient nature of ethical properties does not imply that attributions of them cannot be immediately justified.
Notes
1. For defenses of the coherence theory in moral epistemology see Brink (1989) and Sayre-McCord (1995).
2. See Fred Feldman’s criticism of Moore’s argument (Feldman, 1978, 90–92).
3. Aristotle seems to have accepted a version of the argument. See his Posterior Analytics, Book I, chapters 1 and 2.
4. See Chapters 4, 6 and 7 of this volume for further discussion.
5. For a defense of the view that moral beliefs can be justified by non-doxastic states, see Michael Huemer (2005), Robert Audi (2013), and Noah Lemos (2002).
6. Richard Hare expresses a similar view in Hare (1952, 111). He writes, if we knew all of the descriptive properties which a particular strawberry had . . . and if we knew also the meaning of the word “good”, then what else should we require to know, in order to be able to tell whether a strawberry was a good one? We should require to know,
what are the criteria in virtue of which a strawberry is to be called a good one, or what is the standard of goodness for strawberries. We should be required to be given the major premise.
References
Alston, William. (1976). “Two Types of Foundationalism,” The Journal of Philosophy, 73.
Audi, Robert. (2013). Moral Perception. Princeton: Princeton University Press.
Brink, David. (1989). Moral Realism and the Foundations of Ethics. New York: Cambridge University Press.
Feldman, Fred. (1978). Introductory Ethics. Englewood Cliffs, NJ: Prentice-Hall.
Feldman, Richard. (2003). Epistemology. Upper Saddle River, NJ: Prentice-Hall.
Hare, Richard. (1952). The Language of Morals. Oxford: Oxford University Press.
Huemer, Michael. (2005). Ethical Intuitionism. New York: Palgrave Macmillan.
Lemos, Noah. (2002). “Epistemology and Ethics,” in Paul Moser (ed.), The Oxford Handbook of Epistemology. Oxford: Oxford University Press.
Moore, G. E. (1903). Principia Ethica. Cambridge: Cambridge University Press.
Ross, W. D. (1930). The Right and the Good. Oxford: Oxford University Press.
———. (1939). The Foundations of Ethics. Oxford: Oxford University Press.
Russell, Bertrand. (1912). The Problems of Philosophy. Oxford: Oxford University Press.
Sayre-McCord, Geoffrey. (1995). “Coherentist Epistemology and Moral Theory,” in Walter Sinnott-Armstrong and Mark Timmons (eds.), Moral Knowledge? New Readings in Moral Epistemology. New York: Oxford University Press.
Further Readings
Robert Audi, Moral Perception (Princeton: Princeton University Press, 2013) provides a defense of the view that some moral beliefs can be justified perceptually. David Brink, Moral Realism and the Foundations of Ethics (New York: Cambridge University Press, 1989) defends moral realism and a coherentist moral epistemology, whereas Michael Huemer, Ethical Intuitionism (New York: Palgrave Macmillan, 2005) defends moral realism and an intuitionist, foundationalist moral epistemology. W. D. Ross, The Right and the Good (Oxford: Oxford University Press, 1930) is a classic defense of intuitionism. W. D. Ross, The Foundations of Ethics (Oxford: Oxford University Press, 1939) explores intuitionism further.
Related Chapters
Chapter 11 Modern Moral Epistemology; Chapter 13 The Denial of Moral Knowledge; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgments; Chapter 17 Moral Perception; Chapter 18 Moral Intuition; Chapter 21 Methods, Goals and Data in Moral Theorizing.
20
MORAL THEORY AND ITS ROLE IN EVERYDAY MORAL THOUGHT AND ACTION
Brad Hooker
1. The Meaning of “Moral” in “Moral Thought” and “Moral Theory”
To clarify what we mean both by “moral theory” and by “everyday moral thought and action,” we will need to identify what “moral” means in them. There are perennial problems with defining “moral.”1 We want to identify a subject area with the term, but not in a way that begs questions against this or that account of the area. Let me identify what I think is the most promising line of thought, a problem with this line of thought, and then a fix for the problem.
The promising line of thought holds that what distinguishes morality from other action-guiding requirements coming from club rules, etiquette, law, and self-interest is that morality is necessarily connected with certain reactive attitudes such as guilt, resentment, and indignation (Mill, 1861, ch. 5, paragraph 14; Hart, 1961, ch. 8; Sprigge, 1964; Gibbard, 1990, 41–48; Copp, 1995, 25–26, 88–96; Wallace, 1994; Darwall, 2006). Negative judgments are warranted when bad decisions are made, even if those decisions are self-regarding and nonmoral, but hostility to people is warranted only when they make bad moral decisions without excuse. Indeed, someone’s having done something morally wrong without excuse is not only a necessary but also a sufficient condition for that person’s having warranted guilt feelings and for other people’s having warranted reactions of indignation or resentment toward the perpetrator. In contrast, indignation, resentment, and guilt are not necessarily appropriate when someone does something that merely goes against club rules, etiquette, law, or self-interest.
Defining morality in terms of its connection with appropriate reactive attitudes such as guilt, resentment, and indignation runs the risk of ruling out views that are recognizably theories of morality. There are recognizably moral theories that deny that feelings of guilt, resentment, and indignation are appropriate whenever someone has behaved morally wrongly without excuse. For example, there is the theory that, since guilt, resentment, and indignation are negative and destructive, these reactive attitudes should be eschewed completely. This theory might allow condemnation of morally wrong actions; what the theory rejects is directing hostility at the agents of those actions. So this theory denies that hostile reactive attitudes in response to unexcused wrong action are appropriate.
An obvious response to this line of thought is that the prospect of being the targets of feelings of guilt, resentment, or indignation can deter people from morally wrong action. Of course, being the target of feelings of guilt, resentment, or indignation can be very unpleasant. But that is why the prospect of being the targets of such feelings can deter people from doing wrong. And, we might think, the benefits to society of the deterrence easily outweigh the harms such feelings bring. Building on that idea, some philosophers entirely instrumentalize such feelings. These philosophers propose that these feelings be structured and targeted in whatever way will do the most good. This instrumentalist line of thought, however, raises the question of whether cases can arise in which greater benefits would result from decoupling moral wrongness from feelings of guilt, resentment, and indignation. Take a case in which an agent does something morally wrong near the end of her life, without excuse. Arguably, her having guilt feelings, her victim’s feeling resentment, and other people’s feeling indignation would produce no net benefit in this case, given the imminent death of the perpetrator. Someone who thinks that feelings of guilt, resentment, and indignation wouldn’t be appropriate in such a case because they would “serve no useful purpose” does not accept the necessary connection I have proposed between moral wrongness and appropriate negative reactive attitudes.
Instrumentalism about negative reactive attitudes does not hold that they are always inappropriate. Instrumentalism allows that having such attitudes will very often do some good, e.g., by punishing and thus deterring immoral behavior. So instrumentalism hardly dismisses such attitudes. Here we need to attend to a complication. The most prominent instrumentalists about reactive attitudes have been act-consequentialists (Sidgwick, 1907; Smart, 1973; Parfit, 1984; de Lazari-Radek & Singer, 2014, 320–321, 331–335; Singer & de Lazari-Radek, 2016, 198–200). Maximizing act-consequentialists believe that what makes an act right is that it produces at least as much aggregate value as any alternative act would and that what makes an act morally wrong is that this act fails to produce as much aggregate value as some alternative act would have produced. Most act-consequentialists hold that guilt, resentment, or indignation are appropriate only when and only because occurrences of these negative attitudes result in the greatest net good—whether or not these attitudes are in response to wrong acts. This kind of act-consequentialism denies that appropriate guilt, resentment, or indignation must go hand in hand with judgments of wrongness.
We were considering the definition of morality in terms of its connection with appropriate reactive attitudes such as guilt, resentment, and indignation. On this definition, moral wrongness is at least necessary and often sufficient for appropriate guilt, resentment, or indignation. We can now see that one objection to this definition is that prominent act-consequentialists deny that moral wrongness is at least necessary and often sufficient for appropriate guilt, resentment, or indignation. But even these act-consequentialists admit negative reactive attitudes are very important.
Whereas other theories embrace the idea that moral wrongness is a necessary and often sufficient condition for appropriate guilt, resentment, or indignation, act-consequentialism of this kind construes these reactive attitudes not as ways of recognizing immoral behavior but as ways of deterring it.
2. What Is Everyday Moral Thought and Action?
Everyday moral thought and action is a subset of everyday thought and action. Everyday thought and action are thought and action that are routine, run of the mill, and unexceptional. The “everyday” should not be taken literally. Something can be an instance of everyday thought and action without there being instances of the same thought or action in literally every day. But, for something to be appropriately characterized as “everyday,” it must be at least fairly frequent.
I have characterized “everyday” in a statistical way, not in an evaluative way. I do not mean to suggest that all of everyday thought is good or even permissible. There are very widespread and frequent ways of thinking and acting that are morally wrong. Discriminating against people because of their race, religion, origin, gender, or sexual orientation is an obvious example.
Of course, if “everyday” is meant statistically, then questions about relativity arise. If Jill works in a facility for the terminally ill, dealing with people who are about to die might be routine for Jill. If Jack works in a maternity ward with a very low mortality rate, Jack probably is not regularly dealing with people who are about to die. Such variation can also occur within a single life, e.g., because of a change of jobs, having children, or taking care of an ill partner. Despite such variations, our focus in this chapter is on shared circumstances, not on differences in the circumstances different people face. So, by “everyday thought and action,” I mean thought and action that are routine and unexceptional for very many people in circumstances that are routine and unexceptional for those people.
In the previous section, I defined morality in terms of its connection with appropriate reactive attitudes such as guilt, resentment, or indignation. I will now employ that definition to distinguish everyday moral thought and action from other kinds of everyday thought and action. If everyday moral thought and action go wrong without excuse, it is appropriate for the agent to feel guilt and for others to feel indignation or resentment toward the agent. If other kinds of everyday thought and action go wrong, criticizing the agent might well be appropriate (or even mandatory from teachers, coaches, or supervisors) but guilt, indignation, and resentment would be inappropriate.
The reason the title of this chapter refers to thought and action is that this chapter focuses on thought meant to lead to decision and action. Sometimes moral thought consists in morally evaluating historical decisions or imaginary scenarios when the people engaged in the evaluating do not need to decide what to do. Such moral evaluation is immensely important even where there is no prospect of its resulting in action. But this chapter’s focus is on moral thought leading to a decision to act or to refrain from acting.
Let me discuss some examples of everyday moral thought about what to do. In one example, you and another strong candidate are being interviewed over a few days for a job each of you very much wants. You could get away with surreptitiously spreading lies about the other candidate. Such lies would damage his chances of being given the job you want. But it never even occurs to you to spread lies about him.
Normally, at the time of decision, everyday moral thought doesn’t even consider actions of various prohibited kinds, such as physically harming an innocent person, stealing, and spreading false information about someone. Such actions are morally out of the question, at least in everyday circumstances. Hence, everyday moral thought would be wasting its time considering them.
Let me switch to a different example. Suppose you are beside an old woman who is using a walking cane. She is not someone you know. You notice that she is about to have an accident by stepping backwards into a two-foot pothole. You can prevent the fall by immediately warning her. Or you could quietly pick her up and move her out of danger. Or you could stay still and quiet—i.e., do nothing. Or you could push her into the hole.
What would go through your mind before you decide what to do? You do not even consider pushing the woman backwards into the hole, since this is an act of a prohibited kind—physically harming her. You definitely do consider doing nothing, but you assume doing nothing would fail to prevent her accident. You also definitely do consider warning the woman in order to prevent the accident, and you assume that giving her a verbal warning would not upset her. Despite an aversion to embracing strangers, you might also consider picking her up and moving her rather than trying to warn her. But if you even consider this possibility, you immediately suspect that picking her up would upset her (since in effect this course of action would start out with her being grabbed by a stranger, and so she might start out thinking she was being assaulted). Picking her up also runs the risk of injuring your back. Focusing on the differences among the available alternatives, your everyday moral thought plumps for giving the warning.
Relevant differences among available courses of action might be in the extent to which they would benefit others or in the extent to which they would respect or violate others’ rights, or in other ways. And available alternatives might of course differ in more than one relevant way. For example, two alternatives might differ both in their expected consequences and in the extent to which they would violate others’ rights. Often, two morally relevant differences between alternatives converge in favoring the same action. An example would be one in which the available act that would benefit others most is also the only available alternative that would avoid violating others’ rights. When all the morally relevant differences between alternatives converge in their favoring the same action, everyday moral thought can proceed without hesitation to select the alternative unanimously favored.
Sometimes, even when one difference between available alternatives favors one action and another difference between alternatives favors an alternative action, everyday moral thought can move quickly. This is what happens when one difference between alternatives is obviously much less important than another. An example would be a case in which the act that would benefit others most would produce only a little larger benefit, but this act would be much worse in terms of violating rights.
3. The Relation of Everyday Moral Thought to Moral Principles
At the point of decision about what to do, everyday moral thought’s focus on what seem to be the morally relevant differences between available alternative courses of action does not need to invoke principles. If you are choosing between only two alternative available acts, A and B, and there is only one relevant difference between A and B, and that difference clearly favors B, then you probably proceed immediately to do B without reference to any principle. If you have spare time or are asked for a justification, you might take the time to formulate the principle: “When choosing between only two alternative available acts A and B and there is only one relevant difference between A and B and that
difference clearly favors B, then proceed immediately to do B.” Once such a principle is in your repertoire, you might refer to it when making decisions. But, when dealing with straightforward and obvious cases, everyday moral thought does not usually slow down to formulate or invoke principles. Everyday moral thought needs to be highly efficient, producing a more or less constant flow of decisions. In order for agents not to be stymied into indecision and inaction, everyday moral thought needs usually to be quick and at least fairly automatic.
That everyday moral thought does not usually formulate or invoke principles in easy cases does not mean that there is no place for them in the psychology of the agent engaged in everyday moral thought about easy cases. Principles might serve as unrehearsed presuppositions of decisions. What went through the agent’s mind at the time of action might have been of the form “Since alternative act A has more of property Z than any other act I could do instead, I’ll choose A.” But presumably the agent must have been sensitive to actions with this property Z, or the agent would not have taken that property to be pivotal in this case. And that sensitivity could be expressed as a principle of the form “that an act has more of property Z than do alternative acts is a reason for choosing this act.”2
Agents do have standing sensitivities to the presence or absence of various properties of possible actions and thus to the differences in the degree to which alternative actions have these properties. Some of these sensitivities are moral ones, e.g., concerning the rights of others, the effects on others’ welfare, and so on. Having such sensitivities amounts to having dispositions to be averse to or attracted to acts that have the properties. At the point of making a moral decision, an agent engaged in everyday moral thought probably wouldn’t mentally note the principles corresponding to the relevant dispositions. Nevertheless, everyday moral thought might be structured by dispositions to which there are corresponding principles even if these principles do not usually appear in the conscious thoughts of agents while they are engaged in everyday moral decision making.
Normally, everyday moral thought runs smoothly, but there are familiar holdups. For instance, there are plenty of cases where time is needed to ascertain relevant empirical facts. Indeed, preliminary investigation often reveals that relevant empirical facts cannot be found out or cannot be found out without disproportionate costs in terms of time, energy, or invasion of privacy. When relevant empirical facts are not ascertainable or not ascertainable without undue costs, decisions have to be reached despite ignorance of some relevant empirical facts. There might nevertheless be available information about probabilities of this or that empirical fact. Where probabilistic information is available, everyday moral thought takes it into consideration, even if doing so complicates and slows down moral thinking. In some cases, however, even information about probabilities isn’t available. Does everyday moral thought have a “coping mechanism” for cases in which no probabilistic information is available? Tentatively, I suggest that risk aversion is wise.3
Another problem that can hold up everyday moral thinking is that a pause is needed to think about whether an evaluative concept applies. Would it be unkind to tell your brother the truth that his new grey beard makes him look ten years older?
Would it be disloyal to correct a friend’s moderate exaggeration about her achievements? Some cases where there is a question about whether an evaluative concept can be correctly applied can be helped along by formulating and assessing different possible principles. A thought that might spring to mind in the present case is that it cannot be unkind to
someone to do something to or for that person where this maximally benefits that person and is done for the sake of maximally benefiting that person. This thought is in effect a principled restriction on what can count as unkindness.
The example about disloyalty is more complicated. One might think that, if the degree of moral badness in a friend’s moderate exaggeration of his or her achievements were low, then correcting the exaggeration would be disloyal. One might also think that, if the degree of moral badness were high, then correcting the exaggeration would not be disloyal. To think these things is to accept a compound principle. If one accepts this principle, then the next question is how much moral badness is in someone’s moderate exaggeration of his or her achievements. One possible answer is that the dissemination of moderately incorrect information, as long as the incorrect information is fairly harmless, is morally bad but to a low degree. Another possible answer is that the dissemination of moderately incorrect information is not morally bad as long as the person isn’t aware that the information is incorrect. Yet another possible answer is that dissemination of incorrect information is very bad even if the person disseminating the information is unaware it is incorrect and even if the incorrect information is fairly harmless. Deciding which of these or other possible answers is correct is not relevant for the purposes of this chapter. What is relevant here is that these answers are different principles.
4. Problems and Questions that Lead to Moral Theorizing
Even more importantly, trying to decide which of these principles is correct will push one into thinking about whether the moral badness in exaggeration resides simply in the false belief that can result, or in the intention to mislead, or in whatever harm is caused, or in some combination of these factors. Such reflection definitely counts as moral theorizing.
Consider now cases in which one difference between alternative available acts provides a moral reason for making one choice and at least one other difference between alternative available acts provides a not obviously weaker moral reason for making a different choice. Although sometimes decisions in such conflict cases must be made immediately, ideally decisions in conflict cases are instead made after careful, unhurried reflection. This kind of case is sufficiently important to warrant an extended example.
Suppose that, as a condition of taking the job you were offered, you had to sign an agreement not to undermine the authority of your boss. Now, however, you’ve come to see that your boss always tries to shift onto others the blame for frequent mistakes he has made and always tries to take the credit for good ideas that actually came from others. So on one side of the conflict is the idea that honesty requires you not to lie. On the other side of the conflict is the point that, if asked a direct question about your boss’s willingness to admit his mistakes and to give credit to others, there is no way for you not to undermine his authority unless you lie, since you cannot stay silent without in effect enabling others to infer the truth.
Now, if your boss had committed egregious moral wrongs, would your promise extend to covering up those wrongs? A promise that includes a commitment to cover up someone’s egregious moral wrongs cannot be morally binding. Morally binding promises are nearly always restricted. For example, your boss’s promise to give you a raise next year is
restricted by prohibitions on his stealing money or committing fraud in order to get the extra money to pay you. Likewise, your promise not to undermine your boss’s authority has limits, such as that you are not required to hide serious moral wrongs committed by your boss.
The reasoning just described seems to inch beyond everyday moral thought. It requires thinking about imaginary (though possible) situations. Furthermore, figuring out which such “thought experiments” are relevant requires some sophistication. And if we are explaining why certain thought experiments are relevant and others are not, we are likely to be engaging in theorizing, to at least some degree.
Perhaps the most common moral thought experiment is the one posed by the “role-reversal” question, “How would you feel if what you are considering doing to others were done to you?” One way you might reply to the role-reversal question would be, “Well, I wouldn’t like such a thing to be done to me, but lots of acts are morally permissible despite the fact that people affected by those acts don’t like them. For example, although Astrid seriously considered buying Johan’s house, she buys a house on the other side of town instead. Johan doesn’t like Astrid’s decision. But Johan’s dislike of Astrid’s decision is perfectly compatible with Astrid’s decision being morally permissible.” The implication is that the fact that someone does not like being on the receiving end of some act might not be a conclusive reason not to do that act. Reasoning from thought experiments about imaginary cases to conclusions about what does or does not make acts wrong is an instance of theorizing.
A different reply you could give to the role-reversal question would be, “Well, while I wouldn’t like such a thing’s being done to me, I couldn’t reasonably resent it, since I believe that the person doing such an act would be acting within his or her rights and thus I believe this person couldn’t be wronging me.” This reply distinguishes between what you don’t like and what you reasonably resent and implies that you cannot reasonably resent someone’s treatment of you unless you think that person wronged you. Here again we find theorizing rather than everyday moral thought.
Even more clearly than the role-reversal question, “Why?” questions pushed far enough lead to moral theory. I suggested that everyday moral thought focuses on what seem to be the morally relevant differences among available alternative acts. Now imagine children or others ask, “Why do these differences matter?” Answers to such questions typically take the shape of principles. For example, in response to the question why the difference between paying a debt and not paying it is pivotal, you answer that morality requires people to keep their promises and in particular their promises to repay loans. If asked why the difference between preventing an accident without upsetting anyone and not preventing an accident without upsetting anyone is pivotal, you might reply that people should prevent other people from having accidents (at least where this can be done with minimal effort and risk and without upsetting anyone). If asked why you didn’t consider lying about your rival, you might reply that it is morally wrong to lie, and especially to lie about other people when this is detrimental to them. To invoke such principles seems to me to step beyond everyday moral thought, but this step does not in itself invoke or refer to any moral theory.
However, suppose that you are pressed further by being asked why promises should be kept or accidents prevented
or malicious lies avoided. Such fairly abstract and general questions are not everyday ones (unless you are around children). Thinking about how you might answer such questions steps considerably beyond everyday moral thought. I cannot see how you can answer such questions without either confessing that you don’t know or referring to some moral theory.
5. Moral Theory
A moral theory is constituted by an ultimate answer to the “why?” question about moral requirements and prohibitions. For example, contractualism is the moral theory that moral requirements are determined by principles that no one could reasonably reject (Scanlon, 1998; Nagel, 1991; Parfit, 2011, Vol. 1). Rule-consequentialism is the moral theory that moral requirements are determined by rules the internalization of which by a high percentage of everyone would have unsurpassed expected value (Urmson, 1953; Brandt, 1967, 1979; Hooker, 2000). Act-consequentialism is the theory that the ultimate moral requirement is to maximize aggregate value. And virtue ethics holds that moral requirements mirror the characteristic dispositions of a virtuous agent (Hursthouse, 1999).
Each of these theories proposes that there is a one-sentence, informative answer to the question of what ultimately grounds and explains moral requirements. Admittedly, each of these theories employs concepts that need explication and clarification. On what can reasonable rejection be based? In terms of what values are consequences to be assessed? What features are necessary and sufficient for someone to be virtuous? Each of the rival one-sentence, informative answers to the “why?” question will need many pages, if not chapters, of explication and clarification. Even with all this explication and clarification in hand, agents would not be able to apply any one of these theories without the exercise of judgment. Moral theories are often accused of being simplistic and mechanistic, as if they could be applied without the exercise of judgment. But the best versions of moral theories admit there are areas of complexity and subtlety and uncertainty and perhaps even indeterminacy. Definitely, no moral theory will be able to be applied comprehensively without some help from judgment (Carritt, 1930, 114; Rawls, 1971, 40; Shafer-Landau, 1997, 601; Scanlon, 1998, 199, 225, 246, 299; Blackburn, 1998, 44; Crisp, 2000, 29–34).
Those who are pluralists at the level of basic principles deny that there is a one-sentence, monistic, informative answer to the question of what ultimately grounds moral requirements. These foundational pluralists (sometimes called Ross-style pluralists) typically maintain that there is a moral duty to do good for others in general, another to avoid harming the innocent in various ways, another to be honest, another to be loyal (exhibit some degree of partiality) to family and friends, and so on. Foundational pluralists hold that what one is morally required to do in any particular situation is a function of the interplay of these duties. Since these foundational pluralists believe that there is no strict hierarchy in the duties, these pluralists have to turn to judgment as a means of determining when one duty outweighs others in cases of conflict (Ross, 1930, 1939; Nagel, 1979; McNaughton, 1996; Stratton-Lake, 2012; Hurka, 2014, chs. 6–8).4
The attraction of foundational pluralism is that the best form of it is likely to be unbeatably intuitively attractive in terms of what it ends up ruling right and wrong. The main
objection to foundational pluralism is that, if there is some other theory that is equally intuitively attractive in terms of what it ends up ruling right and wrong but then goes on to supply an intuitively attractive single-principle foundation for the rest of morality, then this other theory seems more coherent than foundational pluralism. After all, theory A is more coherent than theory B if both theories are maximally consistent and comprehensive and theory A is more connected than theory B is (Sayre-McCord, 1986, 1996).
6. The Relation of Moral Theory to Everyday Moral Thought
Bernard Williams was one of the most influential critics of moral theory. Here is one of his most often quoted characterizations of moral theory:

Ethical theories are philosophical undertakings and commit themselves to the view that philosophy can determine, either positively or negatively, how we should think in ethics—in the negative case, to the effect that we cannot really think much at all in ethics. (Williams, 1985, 74)

The quotation glides smoothly from talk of ethical theories to talk about how we should think in ethics. However, the ideas constituting an ethical theory need not be prominent within the everyday moral thought that the theory recommends. I will illustrate with reference to different ethical theories, starting with the ethical theory that “philosophy can determine . . . that we cannot really think much at all in ethics.” Here Williams was probably referring to some form of error theory, nihilism, or skepticism (Mackie, 1977; Kalderon, 2005; Olson, 2014; Streumer, 2017).5
Error theory is the view that all judgments affirming the existence of this or that moral requirement or moral prohibition are in error, since moral requirements and prohibitions are merely myths. Must error theorists thus think that everyday moral thought should be abandoned? Well, some thinkers have provocatively contended that the world would be a better place if moral thought were jettisoned completely. However, a far more common opinion among error theorists is that moral thought is very useful and should be retained, even if in reality it is based on metaphysical error and therefore merely fictional. The most prominent form of error theory is one that (a) denies that any ethical requirements or prohibitions really exist but (b) agrees that there is a compelling pragmatic argument in favor of retaining thoughts about what is morally required, prohibited, etc. Here we have an illustration of the difference between a thesis about which basic ethical beliefs are correct and a thesis about how ethical thought should be conducted.
There are other examples of this bifurcation. The case most often discussed is act-consequentialism. Again, act-consequentialism is the theory that what makes an act morally required is that it produces more aggregate value than any alternative act would. On first considering act-consequentialism, one might think that, given its account of what makes acts morally required, act-consequentialism also tells agents to focus their everyday moral thought on calculating how much aggregate value alternative possible acts would produce. However, a few minutes of further reflection should show that act-consequentialism does not tell agents to conduct their everyday moral thought by thinking about
which act would produce the greatest aggregate good. Trying to do such calculation on a case-by-case basis is likely to be highly counterproductive, as I will now explain.
Very often, agents don’t have the information needed to calculate the consequences of alternative available actions. When agents don’t have the information needed, they cannot do the calculations. Even when agents could obtain this information, obtaining it would take time, energy, and attention. Even when, armed with the information, agents are better able to choose the act with the best consequences, the costs of obtaining the information might outweigh the extra value produced by an optimal decision as compared with an immediate decision. Moreover, even when agents possess enough information to calculate the consequences, there is the danger of miscalculation. Because of all these points, the consequences might be better on the whole if we don’t try to calculate consequences on a case-by-case basis but instead have a more immediate and less calculating procedure for making everyday moral decisions.
Whether this alternative procedure is likely to have better consequences than act-consequentialist case-by-case calculation depends on what this other procedure is. A procedure of always doing what is least expected is not likely to result in much aggregate good. What would result in more aggregate good than act-consequentialist case-by-case calculation is a procedure consisting of routinely keeping our promises and telling the truth, being loyal to our friends and family, ignoring opportunities to steal, never even considering physical aggression as a means of getting what we want, and choosing among remaining alternatives on the basis of benefits to ourselves or others. So, even those who subscribe to act-consequentialism as a theory about which acts are morally right admit that everyday moral thought should follow the procedure just described, instead of case-by-case calculation of consequences. Act-consequentialists typically hold that our moral thinking should take an act-consequentialist form only when everyday moral thought either runs into difficulty in a particular case or needs systematic reform.
Because act-consequentialism accepts that everyday moral thought should be channeled by rules, act-consequentialism and rule-consequentialism are often conflated. However, these are two different theories, with different accounts of what distinguishes between morally required and morally forbidden acts. Act-consequentialism is the fundamental principle that an act is morally required if and only if and because it will result in more value than any alternative act would. Rule-consequentialism is a different fundamental principle, that an act is morally required if and only if and because it is required by the code of rules whose internalization by a high percentage of everyone has unsurpassed expected value. So, although act-consequentialism and rule-consequentialism can agree about how everyday moral thought should be conducted, they disagree at the level of fundamental principle. Furthermore, there is a difference in how their fundamental principles relate to some of the concepts employed in everyday moral thought.
Before I explain this difference, I need to distinguish between traditional act-consequentialism and newer forms of act-consequentialism. Traditional act-consequentialism took the values that acts are to promote to be welfare (= well-being = utility) or welfare mixed with equality.
This kind of act-consequentialism accorded no value or disvalue to acts of any kind, apart from their consequences. Newer forms of act-consequentialism allow that some kinds of act can have value or disvalue apart from their consequences, and thus that the value of a scenario can be partly determined by how many acts of these kinds are
contained in the scenario (Portmore, 2011). These newer forms of act-consequentialism are less distinct from non-consequentialist theories. Because of the diminished contrast between these newer forms of act-consequentialism and non-consequentialist theories, in what follows I leave out of consideration these newer forms of act-consequentialism.
According to traditional act-consequentialism, the concepts employed in everyday moral thought have a role only in the appropriate decision procedure for everyday moral thought—and not in explaining what makes acts required, permissible, or wrong. For example, traditional act-consequentialists are happy that everyday moral thought takes the difference between acts that break promises and acts that don’t break promises to be an important, often pivotal, difference. But traditional act-consequentialism also holds that the fact that an act involves breaking a promise does not really count morally against the act. According to traditional act-consequentialism, the full explanation of why an act of promise breaking was wrong is that this act failed to maximize value. What is doing the explaining, according to act-consequentialism, does not mention a rule against promise breaking. Similar examples could be given involving stealing, lying, and many other kinds of act.
In contrast, rule-consequentialism insists that concepts appearing in everyday moral thought may well have a role in explaining what makes acts required, permissible, or wrong. Rule-consequentialists accept that everyday moral thought should steer people away from breaking their promises. According to rule-consequentialism, the full explanation of why an act of promise breaking was wrong is that a rule against promise breaking is one whose internalization by a high percentage of everyone has unsurpassed expected value. So what is doing the explaining mentions a rule against promise breaking. Again, similar examples could be given involving stealing, lying, physically harming the innocent, etc.
In this respect, contractualism seems to me to be very similar to rule-consequentialism. The fundamental test that possible rules have to pass in order to be selected by contractualism is different from the test that rule-consequentialism uses to select rules. That is the essential difference between contractualism and rule-consequentialism. Nevertheless, the two theories might end up selecting very similar rules. And these rules will appear both in the everyday moral thought prescribed by the theories and in the different theories’ full explanation of why this or that act is wrong. To illustrate, contractualism holds that the full explanation of why an act of (e.g.) stealing was wrong is that a rule against stealing is a rule that no one could reasonably reject. What is doing the explaining mentions a rule against stealing.
The most plausible forms of virtue ethics might be like contractualism and rule-consequentialism in this regard. Virtue ethics insists that reference to the virtues is a necessary part of any full explanation of the wrongness of an act. For example, stealing is wrong because it is something an honest person would characteristically avoid, and honesty is a trait that any virtuous person would have because such a trait is good for the individual, the species, and the social group (Hursthouse, 1999, 198–201).
Foundational pluralism is like contractualism and rule-consequentialism in suggesting that the concepts appropriately appearing in everyday moral thought would also appear in a full explanation of why this or that act is wrong. For example, everyday moral thought distinguishes between acts that physically harm others and acts that do not and tells us to avoid physically harming innocent people. Likewise, everyday moral thought distinguishes between promise keeping and promise breaking, between stealing and not stealing, between doing good for others and not, etc. Foundational pluralism goes on to say that these very
same distinctions appear in the foundational principles of morality and thus in full explanations of what makes acts required, permitted, or forbidden.
Again, the main difference between foundational pluralism and contractualism, rule-consequentialism, traditional act-consequentialism, and virtue ethics is that foundational pluralism denies there is an informative unifying principle underlying the rest of morality, while the other theories insist that there is an informative unifying principle, though they disagree about what it is. In this respect, foundational pluralism is unlike its rival theories. But more important to the relation of moral theory to everyday moral thought is the contrast between traditional act-consequentialism and its rival theories. Traditional act-consequentialism insists that what makes acts required or morally prohibited does not involve most of the concepts in terms of which everyday moral thought should be conducted. In that way, traditional act-consequentialism is more conceptually revisionary than are rule-consequentialism, contractualism, foundational pluralism, and virtue ethics. For the accounts of what makes acts morally required or morally forbidden that rule-consequentialism, contractualism, foundational pluralism, and virtue ethics provide employ most of the concepts in terms of which everyday moral thought should be conducted.
7. Conclusion
In this chapter, I have juxtaposed the fairly quick and automatic thinking and decision making that constitutes everyday moral thought and action with the slower, more complicated, and more reflective thinking that steps beyond everyday moral thought. At the point of decision about what to do, everyday moral thought focuses on what seem to be the morally relevant differences among available alternative acts.
I catalogued various difficulties that can slow down everyday moral thought. One of these was the need for greater empirical information. Slowing down everyday moral thought in the face of this need is not likely to lead to thinking about moral principles. In contrast, moral principles might well come into play when we try to tease out whether a moral concept applies. Likewise, moral principles might well come into play in thinking about cases in which one difference between alternative available acts provides a moral reason for making one choice and at least one other difference between alternative available acts provides a not obviously weaker moral reason for making a different choice.
Moral principles and moral theorizing even more obviously come into play when thought experiments are conducted on imaginary cases. Sometimes, imaginary cases do seem pivotally relevant to determining whether a possible way of responding to a very real case is morally permissible or not. A very common example of imaginary cases figuring in moral decision making is the role-reversal thought experiment in which we contemplate how we would react if others did to us what we are considering doing to them. Reflecting on this kind of imaginary case is very likely to involve at least some moral theorizing.
Even when everyday moral thought does not run into difficult complications and is not challenged by role-reversal thought experiments, it can be challenged by the “why?” question. Pressed far enough, this question can be answered only by “I don’t know” or by pointing to whatever ultimately makes acts morally required, permissible, or prohibited. Views about what ultimately makes acts morally required, permissible, or prohibited are moral theories.
In the final section of the main body of this chapter, I outlined the leading moral theories. The theories differ at the level of fundamental principle but largely agree about how everyday moral thought should be conducted. It is a mistake to assume that the content of a fundamental principle must be mirrored in the procedure for everyday moral thought. Nevertheless, the differences between acts that everyday moral thought takes to be salient involve concepts that feature in the rules or virtues that rule-consequentialism, contractualism, foundational pluralism, and virtue ethics endorse as determining moral requirements and prohibitions. In this respect, traditional act-consequentialism is unlike those other moral theories.
Notes
1. See Chapters 1 and 2 of this volume for further discussion.
2. For further discussion of the respects in which ordinary moral thought is automatic and noninferential, see Chapters 4, 6, 7, 8, 9, 17 and 19 of this volume.
3. See Chapter 28 of this volume for further discussion of moral decision making under uncertainty, focusing on uncertainty as to the moral facts of the case.
4. See Chapters 9 and 15 of this volume for broadly “empirical” arguments for pluralism.
5. On these topics, see Chapters 13 and 14 of this volume.
References
Blackburn, S. (1998). Ruling Passions. Oxford: Clarendon Press.
Brandt, R. (1967). “Some Merits of One Form of Rule-utilitarianism,” University of Colorado Studies in Philosophy, 39–65.
———. (1979). A Theory of the Good and the Right. Oxford: Clarendon Press.
Carritt, E. F. (1930). A Theory of Morals. London: Oxford University Press.
Copp, D. (1995). Morality, Normativity, and Society. New York: Oxford University Press.
Crisp, R. (2000). “Particularizing Particularism,” in B. Hooker and M. Little (eds.), Moral Particularism. Oxford: Clarendon Press.
Darwall, S. (2006). The Second-Person Standpoint. Cambridge, MA: Harvard University Press.
de Lazari-Radek, K. and Singer, P. (2014). The Point of View of the Universe: Sidgwick and Contemporary Ethics. Oxford: Oxford University Press.
Gibbard, A. (1990). Wise Choices, Apt Feelings. Cambridge, MA: Harvard University Press.
Hart, H. L. A. (1961). The Concept of Law. Oxford: Clarendon Press.
Hooker, B. (2000). Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. Oxford: Clarendon Press.
Hurka, T. (2014). British Ethical Theorists from Sidgwick to Ewing. Oxford: Oxford University Press.
Hursthouse, R. (1999). On Virtue Ethics. Oxford: Clarendon Press.
Kalderon, M. (2005). Moral Fictionalism. Oxford: Clarendon Press.
Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. Harmondsworth: Penguin Classics.
McNaughton, D. (1996). “An Unconnected Heap of Duties?” Philosophical Quarterly, 46, 433–447.
Mill, J. S. (1861). Utilitarianism. Frequently reprinted, e.g., in R. Crisp (ed.), Utilitarianism. Oxford: Oxford University Press, 1998.
Nagel, T. (1979). “The Fragmentation of Value,” in his Mortal Questions. Cambridge: Cambridge University Press.
———. (1991). Equality or Partiality? New York: Oxford University Press.
Olson, J. (2014). Moral Error Theory: History, Critique, Defence. Oxford: Oxford University Press.
Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon Press.
———. (2011). On What Matters. Oxford: Oxford University Press.
Portmore, D. (2011). Commonsense Consequentialism. New York: Oxford University Press.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Ross, W. D. (1930). The Right and the Good. Oxford: Clarendon Press.
———. (1939). Foundations of Ethics. Oxford: Clarendon Press.
Sayre-McCord, G. (1986). “Coherence and Models for Moral Theorizing,” Pacific Philosophical Quarterly, 18, 170–190.
———. (1996). “Coherentist Epistemology and Moral Theory,” in W. Sinnott-Armstrong and M. Timmons (eds.), Moral Knowledge? New York: Oxford University Press.
Scanlon, T. M. (1998). What We Owe to Each Other. Cambridge, MA: Harvard University Press.
Shafer-Landau, R. (1997). “Moral Rules,” Ethics, 107, 584–611.
Sidgwick, H. (1907). Methods of Ethics (7th ed.). London: Palgrave Macmillan.
Singer, P. and de Lazari-Radek, K. (2016). “Doing Our Best for Hedonistic Utilitarianism,” Ethica & Politica, 18, 187–207.
Smart, J. J. C. (1973). “Outline of a System of Utilitarian Ethics,” in J. J. C. Smart and Bernard Williams (eds.), Utilitarianism: For and Against. Cambridge: Cambridge University Press.
Sprigge, T. L. S. (1964). “Definition of a Moral Judgement,” Philosophy, 39, 301–322.
Stratton-Lake, P. (2012). “Recalcitrant Pluralism,” in B. Hooker (ed.), Developing Deontology. Oxford: Blackwell Publishing.
Streumer, B. (2017). Unbelievable Errors: An Error Theory About All Normative Judgements. Oxford: Oxford University Press.
Urmson, J. O. (1953). “The Interpretation of the Moral Philosophy of J. S. Mill,” Philosophical Quarterly, 3, 33–39.
Wallace, R. J. (1994). Responsibility and the Moral Sentiments. Cambridge, MA: Harvard University Press.
Williams, B. (1985). Ethics and the Limits of Philosophy. Cambridge, MA: Harvard University Press.
Further Readings
James Griffin’s What Can Philosophy Contribute to Ethics? (Oxford: Oxford University Press, 2015) juxtaposes the unrealistic “Cartesian ambition” of producing a highly systematic moral theory with the insights gained from acknowledging various practicalities, including the limits of human motivation and knowledge and the role of conventional social policies. Brad Hooker’s “Theory vs Anti-Theory in Ethics,” in Ulrika Heuer and Gerald Lang (eds.), Luck, Value, and Commitment: Themes from the Moral Philosophy of Bernard Williams (Oxford: Oxford University Press, 2012) argues that some objections to ethical theory take to be unjustified what is not unjustified and other objections attribute to ethical theory commitments that ethical theory need not make. Mark Timmons, in his Moral Theory (2nd ed.) (New York: Rowman & Littlefield, 2013), identifies the theoretical ambition of moral theory as discovering the features that make actions morally required or morally wrong, identifies its practical ambition as leading informed agents to morally right decisions and actions, and notes the potential conflict between these ambitions. An especially comprehensive and careful discussion of the arguments for and against moral theory can be found in chapters 5, 7, and 8 of Robert Louden’s Morality and Moral Theory (Oxford: Oxford University Press, 1992).
Related Chapters
Chapter 11 Modern Moral Epistemology; Chapter 12 Contemporary Moral Epistemology; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgments; Chapter 17 Moral Perception; Chapter 18 Moral Intuition; Chapter 19 Foundationalism and Coherentism in Moral Epistemology; Chapter 21 Methods, Goals and Data in Moral Theorizing; Chapter 22 Moral Knowledge as Know-How.
SECTION III
Applications
The final section of this Handbook, “Applications,” is an investigation of what we learn when moral epistemology goes “street level”; that is, when we examine relatively specific problems and the epistemic issues they raise. Going street level does not mean going top-down and applying an antecedently established theory of, for example, moral justification to specific problems. Instead, the approach in this section is topic-centered and asks what moral epistemology can learn from thinking carefully about a number of specific topics related to moral knowledge and belief that arise in the course of real life. Topics covered range from those with a long philosophical history, such as the relation between religion and moral knowledge, to emerging topics such as how to reason when faced with moral uncertainty and whether group moral knowledge is possible. Nonetheless, a strong theme emerges and works its way through the first six entries of this section: the role of the social in moral epistemology.
The section begins with Jennifer Cole Wright’s exploration of the ways in which social practices can embed, protect, and transmit moral knowledge. Her chapter is structured by two core questions: What does it mean to say that moral knowledge is “embedded” in our social practices? And what is the nature of this knowledge? The claim that social practices can embed moral knowledge is stronger than the claim that such practices can be shaped by, or reflect, moral knowledge. It amounts to claiming that practices can be repositories of moral knowledge and that we can gain knowledge by engaging in those practices, such as the practice of expressing respect through proper greetings. Wright responds to two objections to this view: when the moral beginner engages in practices that have moral import by copying others, that does not yet mean that the beginner either has moral knowledge or is engaging in the practice for the right reasons. At a minimum, to have moral knowledge, they must also grasp the normative point of the practice rather than merely seeing it as a social regularity or just as something that we do. Moreover, the learner’s engagement with the values embedded in the practice must have the right kind of motivational structure: they must value being respectful for its own sake and not merely from fear of social ostracism or being the “odd one out.” Wright argues that feedback loops in which others respond positively to our attempts at being, say, respectful, enable us to experience the value of being respectful. In this way, engagement in social practices comes
In addition, our social practices of behaving in certain ways are embedded in further practices of identifying, labeling, discussing, and correcting that behavior. Wright explores the tension that can emerge when individual agents or groups of agents have moral knowledge that sits uneasily with the knowledge embedded in practices. This tension can give rise to practical dilemmas, such as whether to correct someone who is using an antiquated and sexist norm of politeness to express what is, on their part, genuine respect. If social practices can embed and transmit moral knowledge, what kind of knowledge is it: knowledge that or knowledge how? Wright reviews rival accounts of this distinction’s utility and proper shape. Anti-intellectualists take the view that knowledge how, rather than consisting in attitudes toward propositions, involves a set of abilities and dispositions to behave in certain ways. Intellectualists reply that the anti-intellectualist view targets a false picture of what is involved in having propositional knowledge. Wright argues that we can sidestep much of this debate by recognizing that the value of moral knowledge consists in possessing the ability to act well and that doing so requires the capacities and dispositions that are the focus of the anti-intellectualist. The chapter concludes with reflections on the “dark side” of social practices: if social practices can embed and transmit moral knowledge, then they can also embed and transmit moral ignorance and function to legitimize oppressive social structures. The next chapter, “Group Moral Knowledge,” by Deborah Tollefsen and Christopher Lucibella, continues the exploration of the social dimensions of moral epistemology. It argues that we can attribute moral knowledge to groups and that it can be legitimate to defer to the moral expertise of appropriately constituted groups. The first step to establishing that groups can have moral knowledge is to show that groups can have moral beliefs that are not simply reducible to the beliefs held by the individuals who are members of that group. Drawing on the literature on group agency, Tollefsen and Lucibella argue that the beliefs attributable to a group can diverge from the beliefs attributable to members of the group, and so reductionism fails. Moreover, even in cases where each member of the group, considered as an individual, holds a belief, it may be incorrect to attribute that belief to the group because the proposition in question plays no role in the group’s deliberations or collective action. A group needs some aggregation procedure for moving from the beliefs of members of the group to beliefs that are attributable to the group, as well as a method for ensuring consistency and coherence among beliefs at the group level. Some aggregation procedures can result in inconsistency at the level of group belief even though each member of the group has a consistent belief set. (To take a familiar illustration from the judgment-aggregation literature: if one member accepts p and q, a second accepts p but rejects q, and a third rejects p but accepts q, then proposition-by-proposition majority voting commits the group to p, to q, and to the rejection of their conjunction, even though each member’s own beliefs are consistent.) Having established that groups can have moral beliefs that aren’t reducible to the beliefs of their members, the second step to establishing that groups can have moral knowledge is to show that these beliefs can be justified.
Internalist theories of justification, which require that knowers have access to the reasons that support their beliefs, pose puzzles when applied to the case of group belief because the reasons that support a group belief might not be available to all or even to any single one of its members: e.g., in working out which policy to advocate, a group might defer to the expertise of different members regarding questions that fall within their different areas of expertise. Process reliabilism, according to which knowledge is true belief produced by a reliable process, looks like a more promising approach and makes the question of whether groups can have moral knowledge a matter of whether their belief-forming processes, processes that can be distributed across individuals, are reliable.
Whether they are reliable depends on how the group is composed and how it goes about forming its beliefs. Drawing on models of objectivity from feminist work in the philosophy of science, Tollefsen and Lucibella argue that moral knowledge is more likely to be had by diverse groups of inquirers, which include and take seriously the points of view of nondominant groups. Such groups will be better able to synthesize multiple perspectives, eliminate bias, and so increase objectivity. Because of this, groups may be more reliable sources of moral knowledge than individuals; hence, if it is ever permissible to accept testimony about moral matters, then it may be wise to look to groups to provide it. Not all groups that are potential sources of moral knowledge need be formally constituted groups, with relatively clear decision-making procedures, on the model of ethics committees, say. Moral knowledge might also be attributable to more loosely structured collectives such as political movements and moral communities. The relationship between liberation movements and moral knowledge, and the question of whether there is something problematic about acquiring moral beliefs through testimony, are taken up in the two chapters that follow. Moral philosophers sometimes lay claim to expertise in moral judgment. However, there is an emerging body of literature from and about liberation movements, including slavery abolition, anti-racist activism, animal liberation, and feminism, that suggests, following Marx, that moral knowledge is more likely to emerge from engaging in practice designed to change the world than from armchair philosophical reflection on the world. Lauren Woomer takes up the question of the relationship between liberation movements and moral knowledge in her chapter “Moral Epistemology and Liberation Movements.” Woomer argues that liberation movements both generate and disseminate moral knowledge and also cultivate agents who are able to be receptive to that knowledge. Woomer draws on Kristie Dotson’s notion of an epistemological framework, extending it to the context of moral epistemology. An epistemological framework is the lens through which we view the world and which shapes our practices of inquiry. It comprises: (i) experiences, which are a function of the agent’s social location and her habits of attention; (ii) a set of collective epistemic resources, including conceptual frameworks, background assumptions, and standards and techniques for argument and investigation; and (iii) a set of background values, norms, and expectations—called an instituted social imaginary—that shape and reinforce both habits of attention and the epistemic tools used in inquiry. Woomer explores the ways in which liberation movements challenge dominant moral epistemological frameworks and provide a social context in which new epistemological tools can be developed. They do this through cultivating the moral imaginations of their participants, enabling options that might have seemed impossible, unintelligible, or even morally wrong to come to seem both possible and morally desirable. Movements create the conceptual resources to frame and evaluate alternatives to dominant moral positions, generate the narratives, images, and associations required to make sense of these alternatives, and, sometimes, model alternative modes of moral engagement within the movement.
Woomer uses examples from the emerging movement for the abolition of prisons and other forms of state-based punishment to illustrate how this can work. Choosing an example whose goal remains controversial makes salient how radical the imaginative transformations nurtured within liberation movements, and the moral changes for which they advocate, can be against the backdrop of dominant moral epistemological frameworks.
When a liberation movement succeeds, or partly succeeds, its ethical insights are fully or partly absorbed into mainstream moral frameworks. Knowledge generated within liberation movements benefits those outside the movement not only by providing conceptual resources, arguments, and data in favor of the goals of the movement but also through making injustices salient, contesting mainstream responses, and holding those with power to account, including through direct action. Alison Hills’s chapter, “Moral Expertise,” takes up the problem of whether there can be divisions of moral epistemic labor and hence whether moral communities should be structured by relations of deference to the testimony of experts, whether experts about morality in general or experts about some particular domain of morality. Divisions of epistemic labor are recognized as important in non-moral matters. They allow us to know what we, as finite creatures with limited cognitive resources, could otherwise not know: we can ask an expert and acquire knowledge through trust in their testimony. Some have the intuition that there is an asymmetry in the legitimacy of trust in testimony regarding nonmoral matters and moral matters. Hills surveys the arguments in favor of this asymmetry. The arguments divide into two kinds: epistemic and moral. The epistemic problems concern getting a match between a would-be recipient of testimony and a reliable informant. From the recipient’s side, this is the credentials problem; from the testifier’s side, it is the problem of credibility. The problem of how to trust only the trustworthy is common to the testimonial transfer of knowledge in any domain, but there are reasons to think it especially acute when it comes to moral knowledge, where there is neither a recognized credentialing process nor a way of independently checking a putative expert’s verdict, and where moral expertise—at least on some matters—is most likely to be held by members of socially subordinate groups who typically suffer credibility deficits. Moral objections to acquiring moral knowledge through testimony include the problems of autonomy, of authenticity, of integrity, and of moral worth. Of these, Hills claims, the problem of moral worth is the most serious objection. For an action to have moral worth it must be done in response to the right reasons. Hills argues that those who trust the moral testimony of others are not themselves responsive to the reasons that make their act right; they are instead relying on the ability of the one trusted to respond to moral reasons. This means that they fall short of ideal moral agency. The upshot of this argument is not that no one should ever trust moral testimony. We care about doing the right thing, and if we are not able to form a reliable judgment about the matter at hand, perhaps because we are too close to it or because of limitations in our abilities, then we should defer to someone with better judgment. Hills concludes by exploring the relatively neglected topic of alternatives to deference—alternatives, such as guided argument and analogy, that allow the novice to begin to acquire moral understanding. The next chapter, by Alan Goldman, approaches the possibility of and problems for divisions of moral labor from the perspective of professional codes of ethics.
Professional codes, such as those adopted by medical and legal associations to govern the conduct of their members, elevate a centrally important value or values that the profession is charged with serving and focus on them at the expense of other values. This means that they can give verdicts that are in conflict with those of ordinary morality. Lawyers, for example, are code-bound to protect the legal rights of their clients even when doing so is at the expense of (lawful) harms to third parties, harms that would, were it not for the relationship between lawyer and client, normally prohibit so acting.
Goldman claims that two puzzles for moral epistemology emerge. First, how are we to understand the relationship between professional codes of ethics and ordinary morality? This is analogous to the problem of how we are to reconcile our commonsense conception of the world with our scientific conception of it. Second, how is a professional to form a justified belief about what to do when professional morality and ordinary morality conflict? Professional codes are justified on the basis of ordinary morality due to the fiduciary nature of the relationship between professional and client. By definition, professionals have specialized knowledge regarding how to further some important value—health or legal rights, for example. Their specialized knowledge grounds their claim to justly participate in a self-regulating monopoly in the provision of services relating to this value. Licensing practices certify competence, and professional codes of ethics express professionals’ commitment to serving that value within stated limits. This enables would-be clients to trust professionals, prevents unscrupulous practitioners from gaining advantage, resolves conflicts among competing values, and ensures that this important interest is not subject to the vagaries of individual judgment. In these ways professional codes provide a stable framework for expectations on the part of both client and professional. Once a professional code is in place, the professional will have reasons for complying with it that stem from the communicative role of the code and not merely from the moral content of the code. But this sets up potential conflicts between a code’s recommendations and the requirements of ordinary morality. Goldman argues that this tension can be at least partly defused by recognizing the distinction between being justified in a moral judgment and having the authority to act on that judgment, and by crafting professional codes that take the form primarily of principles and standards rather than rules. Rules contain descriptive application conditions and are to be followed whenever those conditions obtain, thus limiting professional discretion. Standards, in contrast, use broad normative language, and their application requires judgment. For either kind of code, however, situations can emerge in which the professional is confronted with the epistemological problem of how to arrive at a justified belief about what she ought to do when the verdicts of a code and of ordinary morality diverge. Whether virtue can be taught, and if so, how, is an important topic in classical philosophy, both Western and Chinese. Grant that we can teach people ethical theory: can we also teach them the habits of perception, feeling, judging, and acting characteristic of the virtuous person? And if we can, how should this be done? Nancy Snow and Scott Beck take up these questions in the context of the contemporary theory and practice of character education in schools. Their contribution, “Teaching Virtue,” comprises an overview of contemporary models for character education along with reflections on practice derived from a case study. It is intended, in part, as a resource for educators considering implementing character education.
The contemporary literature on character education contains a number of seemingly quite different accounts of the goals of such education and methods for implementing it; however, the chapter argues, there is more in common between these various approaches than might first appear. The approaches surveyed are: social-emotional learning, which, while not itself a framework for teaching virtue, is about developing the skills, including social awareness and self-management, that virtue presupposes; Integrative Ethical Education, which brings together character education with an emphasis on autonomy; Making Caring Common, which seeks to change school climate and values through practices of expanding circles of caring;
positive education, with its emphasis on flourishing; and Aristotelian character education, which focuses on developing ethical perception, emotional regulation, proper motivation, and practical judgment. All of these seemingly quite different approaches recognize the importance of emotion and of social responsiveness, as well as judgment, in character education. Each also contains an implicit conception of disciplined practice, or askesis. The claim that virtue is acquired through guided and disciplined practice is as old as Confucius, but what is distinctive about its application in this context is the recognition of the many different, role-related, disciplined practices that need to be undertaken by different members of a school community in order to effectively implement character education. This is shown by reflection on the experience of trialing virtue education at Norman High School (Oklahoma, USA). The Norman High School experience contains a number of lessons about how to move from theory to practice, chief among them the importance of adapting available models to a school’s specific context and history, since “one size” does not fit all; the ways in which character education and academic rigor can be mutually supporting; the centrality of providing adequate professional development; and the need for dialogically engendered and shared conceptions of the goals of character education and of how achievement is to be measured. The next chapter, by Andrew Sepielli, takes up a problem that is gaining increasing recognition: what should one do when faced with fundamental moral uncertainty? That is, what should one do when one is uncertain about which moral principles or theories are true? This kind of uncertainty is puzzling in a way that uncertainty about what one ought to do that is generated by uncertainty over morally relevant nonmoral facts is not. Call the view that fundamental moral uncertainty should be treated roughly the same as uncertainty about nonmoral facts “moral uncertaintism.” The chapter canvasses three central problems for moral uncertaintism and explores the prospects for an adequate resolution to them. The problems are: whether it makes sense to ascribe probabilities between zero and one to moral propositions, as it does in the case of nonmoral propositions; how to compare values across competing moral theories; and the threat of regress. Some theories of what probabilities are seem to make it impossible to assign an intermediate probability (even an imprecise one) to moral propositions, especially on the view that moral propositions, if true, are necessarily true (ruling out a modal chances approach) and timelessly true (ruling out frequency accounts). The obvious response is to think of the probabilities in question as subjective probabilities or credences. But, Sepielli argues, this won’t do if a theory of the norms for decision making under moral uncertainty is to be fit to serve the role of regulating praise, blame, reward, and punishment. A Nazi is not let off the hook in virtue of being highly confident in their morally heinous views. Nor will it do if the purpose of norms for what to do under moral uncertainty is to serve as a guide for deliberation on the part of the morally uncertain. Sepielli argues that epistemic probabilities can play the relevant roles, though this view is not without its challenges.
In particular, it must explain how we can make sense of the notion of intermediate epistemic probabilities when it seems that all moral propositions are equally epistemically accessible, as they would seem to be if they are knowable a priori. The second challenge concerns inter-theoretic value comparisons. In deciding what to do under moral uncertainty, we need to take into account not only the probability that a theory is right but also the degree of rightness or wrongness a given act has according to that theory, and to compare that with the degree of rightness or wrongness that act would have if a competitor theory were true.
(On one standard proposal in this literature, for instance, an act’s expected choiceworthiness is the sum, across rival theories, of one’s credence in each theory multiplied by the degree of rightness or wrongness the act has according to that theory; the inter-theoretic comparisons such a calculation presupposes are precisely what is at issue here.) Comparability can be imposed by stipulative benchmarking between theories, but this move would make the rightness of an action in part a function of the benchmarking method chosen, which is counterintuitive. Inter-theoretic value comparisons are an open problem in the literature, but Sepielli suggests that the significance of the problem may be overestimated, given that, in many contexts, it may be possible to make nonarbitrary inter-theoretic comparisons without requiring a general solution to the problem. The final challenge concerns whether a regress threatens given that one may not only be uncertain about what one ought to do, but also uncertain about what to do when one is morally uncertain, and so on. Sepielli argues that the regress problem is more pressing for our norms for what to do under moral uncertainty in their role of guides to deliberation and action than it is in their role as regulators of praise and blame. Even here there are a number of moves available to the moral uncertaintist, including arguing that a regress, if it is a problem, is also a problem in the nonmoral case. Conceptions of moral knowledge play a significant role in legal trials. In most jurisdictions a defendant’s sentence is premised on the assumption that they knew they were acting wrongly or illegally when committing the crime in question. But background views on the purpose and justification of punishment also influence the decisions of judge and jury. In what ways, if at all, should philosophical accounts of justice, desert, and responsibility—to name just a few moral concepts of clear practical import—influence public policy regarding criminal law, sentencing, welfare, economics, and so on? In “Public Policy and Philosophical Accounts of Desert,” Steven Sverdlik explores the role accounts of desert play in shaping the moral principles that philosophers claim ought to govern criminal punishment. Consequentialists approach the issue of state punishment in terms of the good it produces (such as deterrence, reform, and prevention of harm) versus the harm it causes; retributivists, in contrast, argue that desert claims should play a central role in theories of criminal punishment. A criminal should be punished only if, and to the extent that, punishment is deserved. Sverdlik argues that both historical and contemporary accounts of desert rest, for their justification, on epistemically problematic appeals to intuition, raising doubts about their usefulness as guides to criminal justice policy. Sverdlik offers a taxonomy of the varieties of retributivism, organized along two axes: the conception of desert a theory adopts and the deontic role it assigns to desert. The taxonomy identifies three conceptions of desert at play in the literature. Which one a theorist adopts will depend on their stance toward resultant moral luck. Two actions performed with the same intention can differ in their outcome. For example, an attempted murder might fail only by chance, as when something interferes with the bullet’s trajectory.
The would-be murderer’s motives and intentions might be every bit as repugnant as those of the successful murderer. Yet legal practice, taking into account the harm actually produced, punishes them differently, thus acknowledging resultant moral luck. The notion of culpability identifies those features of an agent and her action, independent of its consequences, that make it the case that the agent deserves punishment. Conceptions of desert might hold that (i) the only thing that matters to desert is culpability, hence denying resultant luck; that (ii) both culpability and harm matter; or that (iii) only harm matters. Cutting across this set of distinctions are two further distinctions concerning the deontic role of desert claims: do they ground an obligation to punish, do they set a ceiling on the amount of punishment, or do they do both? Further fine-grained distinctions can be made within these broader families of accounts according to the theorist’s account of mitigating or exacerbating conditions.
For a philosophical account of desert to have practical bite for public policy purposes it must not only defend its conception of desert and its deontic role, it must also say how strong any posited obligation to punish is, or how severe punishment can be before it exceeds the desert-based ceiling on permissible punishment. Sverdlik argues that even if “desert island” thought experiments establish that there is an obligation to punish, they do nothing to establish the weight of that obligation. Similarly, thought experiments designed to elicit the intuition that it is wrong to punish the innocent do not establish the strength of this constraint, nor can intuition establish just how much punishment is too much. Sverdlik suggests that the solution may lie in stepping back from intuition and moving toward deontological moral theory. The final chapter of the book, “Religion and Moral Knowledge,” by C. A. J. Coady, takes up a question with a history as old as philosophy itself: what, if anything, does religion have to do with morality and with moral knowledge? There are two familiar extreme responses to this question—everything, and nothing at all—and Coady plots a path between them. He frames the discussion in terms of the Euthyphro question: whether piety is good because loved by the gods, or loved by them because it is good. The first direction of explanation suggests a “divine command” theory and faces the objection of arbitrariness, unless we also suppose that God’s nature is good, which raises the further problem of how we can know the nature of God. Further, it seems that, at least in the shared texts of Judaism, Christianity, and Islam, God shows himself capable of commanding morally repugnant things. Coady surveys responses to these problems. The second direction of explanation threatens to make God, or the gods, irrelevant to moral epistemology. If God commands an action because it is good, then it seems God is an epistemological “middle man” who can be cut out as we go straight to investigating the moral properties that determine God’s will. The existence of morally good atheists lends support to this possibility. Many religious traditions recognize the existence of “natural morality,” or the capacity to apprehend what we ought to do without reliance on scripture, while claiming that its verdicts will be in harmony with God’s law. But the harmony claim is contestable, and scripture admits of multiple interpretations, as well as having gaps where it seems that our best moral judgment is all we have to go on. Coady concludes by investigating four ways in which religion might be relevant to morality even if it is conceded that God need not be appealed to in our explanation of how moral knowledge is possible. First, religion may aid in our grasp of moral truths, given our finite and fallible capacities. Even in this supplementary role, the problem of scriptural interpretation and of distinguishing genuine from spurious revelation remains pressing. Second, belief in God may aid moral motivation. If it does so only by inducing fear of punishment in the afterlife, then it will support a kind of motivation incompatible with moral worth; this is arguably not so, however, if belief in God motivates love for others, as a manifestation of, or spillover from, one’s love of God.
Third, belief in God may provide backing for and motivation to stick to absolute moral obligations, if such there be. Finally, a religious or spiritual outlook may point to and aid in addressing moral concerns that go beyond those of right and wrong action. It may do better than secular accounts, which Coady also explores, in providing a framework for addressing questions relating to the meaning of life.
21 METHODS, GOALS, AND DATA IN MORAL THEORIZING
John Bengson, Terence Cuneo, and Russ Shafer-Landau
Philosophical methods play a crucial role in philosophical inquiry. When it comes to questions about the nature, status, and content of morality—the special purview of moral philosophy—we look to philosophical methods to help guide the construction of normative and metaethical theories, and to provide the basis for evaluating their individual and comparative merits. One of the tasks of moral epistemology is to determine how this is to be done well. Here we investigate the construction and evaluation of theories in metaethics, focusing on the nature of the methods that should govern metaethical theorizing and their relation to such theorizing.1 As we’ll see, doing so requires attending to both the possible goals of metaethical inquiry and (what we’ll call) the metaethical data—the source-material utilized by good methods to achieve those goals. The main claims about methods, goals, and data for which we’ll argue are these: First, candidate methods for metaethical theorizing must be assessed in light of the epistemic goal(s) of metaethical inquiry. Second, while there are a variety of epistemic goals that different methods may properly aspire to achieve, several prominent methods face significant challenges when assessed in light of the attractive goal of understanding. Third, while there are difficult questions about the nature, status, and collection of metaethical data, there are a range of data that must be accounted for by competing metaethical theories. Fourth, these data possess four basic features, which set them apart from other types of considerations and indicate how and why they serve as the lifeblood of theoretical inquiry. Fifth, these data should be conceived not as dialectically effective starting points, constitutive features of morality, claims about how morality seems, or descriptions about how moral language is commonly used; rather, they are what metaethicists have epistemic reason to take to be genuine features of morality itself.
Sixth, by utilizing reflection on ordinary moral experience, we are able to reveal what some of these data are.
1. The Tripartite Structure of Metaethical Theorizing
A helpful way to understand the nature of metaethical theorizing is to identify its constituent elements and to reveal the underlying structural relations among them. There are three such elements. There are inputs to metaethical theorizing; there are its outputs; and there is the method, or procedure, that takes one from the former to the latter.2 This abstract structure can be realized in numerous ways, depending on the candidate inputs, methods, and outputs that one opts for.
1.1 Outputs
The outputs of metaethical theorizing are theories or views (we’ll use these terms interchangeably). These theories possess a variety of properties, including epistemic ones. For example, theories can be coherent, true, or justified; they can afford knowledge, or provide understanding. Theories that possess such positive epistemic properties are, under favorable conditions, the contents of a variety of epistemically significant mental states or attitudes, such as true belief, knowledge, or understanding. These epistemic achievements are among the possible goals of theorizing. It is important to distinguish the outputs of metaethical theorizing from its goals. Outputs are theories. Goals, by contrast, are the agential states we seek to attain through the activity of theoretical inquiry. Standardly, the goals of theorizing are epistemic achievements, such as those mentioned just above. Philosophers have typically failed to be clear about which of these (or other) goals they are pursuing when constructing and evaluating theories. But the differences between these goals are significant, given the dual purpose that such goals serve: they provide standards for assessing the merits of both outputs (theories) and methods. Outputs can be judged by reference to whether they (for example) enable true belief, afford knowledge, or provide understanding. If, for instance, the appropriate goal of metaethical theorizing is understanding, then a theory can rightly be judged inadequate if it doesn’t provide that good to an agent who fully grasps the theory. The goals of theorizing will also determine the adequacy of methods, since this depends in large part on whether their successful implementation yields appropriate epistemic achievements. For example, a method might yield true belief (and nothing more). But such a method will fail of its aim if what we are seeking is knowledge, which is more demanding than true belief. Some goals are more appropriate to one activity than to another. When the activity is theoretical inquiry, undertaken for its own sake, it’s plausible to restrict the range of proper goals to genuine epistemic achievements. However, it can’t be assumed that there is a unique proper goal for theoretical inquiry; there may be many such goals. It follows that there might be no such thing as the uniquely correct method of metaethical theorizing, as different methods might be calibrated to achieve different proper goals. That said, some such goals are especially important, namely, those whose realization is sufficient to successfully resolve theoretical inquiry—reaching the point at which, to put it simply, there is no more work to be done.
We call these ultimate proper goals, of which there may perhaps be several. Not all proper goals are ultimate. For example, true belief is not an ultimate goal of theoretical inquiry, since such belief can be unwarranted and still leave open many of the most significant questions about the domain. In fact, even if we attained a set of beliefs that were not simply true, but also coherent, justified, or even all three of these at once, one could hardly deem inquiry complete. There would remain more work to do, for such a set does not by itself guarantee the sort of theoretical understanding required to successfully resolve inquiry. We elucidate some central features of such understanding below, but for now, think of it as the sort of epistemic achievement that provides comprehensive and systematic illumination of its target. One important reason to regard theoretical understanding as an ultimate proper goal of theorizing is that stronger epistemic achievements, such as absolute certainty, are unnecessary to successfully resolve theoretical inquiry. After all, even were we to fall short with respect to such achievements, that would not by itself impugn the success of our inquiry, so long as we had achieved genuine understanding with respect to the central questions in the domain under investigation.3 To summarize, the outputs of method are theories that possess various epistemic properties. The goals of such methods are mental states or attitudes that have such theories as their contents. When all goes well, a method will resolve theoretical inquiry in a fully satisfactory way, realizing an ultimate proper goal. Among the ultimate proper goals is theoretical understanding. In the next section, we consider what method is and sketch what it takes for a method to achieve an ultimate proper goal such as understanding.
1.2 Methods
We understand a method to be a set of instructions, or criteria, for theory construction and evaluation. A method will take some set of inputs and, when successfully applied, will deliver an output that realizes one or more goals. Some goals ‘correspond’ to a method, in one of two senses: either the method was designed to facilitate their realization, or there is something else about the method itself that makes those goals especially fitting. Call these the method’s corresponding goals. We’ll say that a method is valid just in case satisfying its criteria (or following its instructions) realizes its corresponding goals; a method is proper just in case satisfying its criteria realizes a proper goal of theoretical inquiry; and a method is ultimately proper just in case satisfying its criteria realizes an ultimate proper goal of such inquiry. Naturally, what is wanted is a method that has all three of these properties: we’ll call such a method sound. As these remarks indicate, the bare notion of method is to some extent a black box, whose contents are its instructions or criteria. Since our objective here is not simply to present the abstract notion of method, but also to identify options for metaethical theorizing, and to show how to evaluate various candidate methods, we propose to work through a few examples. We’ll focus on austere versions of four methods prominent in metaethics. Although these methods are non-exclusive, each is meant to be complete, in that nothing more than following its instructions is required to achieve its corresponding goal. A brief examination of these methods will allow us to pinpoint some of the challenges that they face.
Consider, first, what we’ll call the “method of analysis,” which, focusing on clarification, instructs metaethicists to define or analyze central ethical terms, concepts, or properties:4

Method of Analysis: When constructing a theory, a theorist ought to articulate and justify particular analyses of all of the domain’s central terms, concepts, or properties that meet some sufficiently high standard (e.g., being necessarily coextensive, being intensionally correct, being theoretically serviceable). The best theory is the one whose proposed analyses meet this standard to the highest degree relative to rivals.

A second method, focusing on justification by argument, is what David Chalmers has called the “method of argument,” which calls for theorists to promote their view by putting together the rationale for it:5

Method of Argument: When constructing a theory, a theorist ought to assemble an adequate rationale, by formulating arguments for the theory’s answers to the central questions about the domain, as well as arguments that respond to relevant challenges, where the premises and inferences of these arguments meet some sufficiently high standard (e.g., being certain or self-evident, being scientifically or logically well-confirmed, being shared by members of an ideal audience subsequent to extended critical examination). The best theory is the one whose central theses are the conclusions of arguments whose premises and inferences meet this standard to the highest degree relative to rivals.

A third method, focusing on explanation, is what we’ll call the “method of parsimony”:

Method of Parsimony: When constructing a theory, a theorist ought to identify a set of propositions about the domain that realize, to the greatest extent possible, simplicity and explanatory scope (i.e., explanations of everything that must be accounted for). The best theory is the one that achieves the greatest extent and balance of simplicity and explanatory scope relative to rivals.6

A fourth method, focusing on systematization, is what John Rawls called “wide reflective equilibrium,” which has been a mainstay of discussions of philosophical methodology, and is often cited enthusiastically by moral philosophers wishing to clarify their methodological commitments:7

Method of Reflective Equilibrium: When constructing a theory, a theorist ought to achieve coherence between various particular judgments (e.g., considered judgments regarding specific cases) and beliefs in general principles (e.g., universally quantified propositions) that address all of the central questions about the domain, through a reflective process of modification, addition, and abandonment of either the particular judgments or principles in case of conflict (with each other, or with any of one’s other relevant convictions). The best theory is the one that achieves such coherence to the highest degree relative to rivals.
It might be that the method of analysis yields knowledge of definitions, the method of argument yields justified beliefs, the method of parsimony yields beliefs with a high probability given the evidence, and reflective equilibrium yields a coherent set of judgments. Still, there are live questions about whether any of these methods is valid or proper.8 Furthermore, if none of these methods yields a theory that provides understanding or any other ultimate proper goal (recall, e.g., our earlier observation that true, justified, or coherent belief, and even their conjunction, may fail to qualify as such), then none of these methods is sound. One way to support our doubts about the soundness of these methods is to consider what a method would have to be like in order to yield, as output, a theory that provides a great deal of understanding of a given domain. Such a method would have to yield a theory that possesses at least four characteristics. First, the theory must possess a high degree of accuracy, since inaccurate theories will yield only misunderstanding. Second, the theory must be reason-based, in the sense that it is positively supported by considerations that favor its truth. Third, the theory must be robust, answering a multitude of questions about the most important features of the domain under investigation. Fourth, the theory must be orderly, not simply offering such feature-specific answers but also affording a broader view of the domain and how it hangs together—for example, by exposing systematic connections among those (and other) features.9 We have no particularly good reason to think that any of the austere methods considered above delivers theories that have all four of these characteristics. Prima facie, nothing in the method of analysis or the method of argument ensures its outputs will be robust or orderly; the method of parsimony does not guarantee outputs that are simultaneously reason-based, robust, and orderly; and the method of reflective equilibrium (infamously) fails to promise accuracy or guarantee outputs that are reason-based in the indicated sense.10 Notably, not all proponents would resist our suggestion that these methods fail to secure understanding. For instance, Rawls himself described the method of reflective equilibrium as merely uncovering the doctrine that is “most reasonable for us” to accept; similarly, David Brink emphasizes that this method provides justification, even though it may not facilitate stronger epistemic achievements, a failure that would (in our terminology) render it unsound.11 Do our concerns here imply that there is no sound method for metaethical theorizing? We do not draw this conclusion, but acknowledge that work on this topic remains to be done.12
1.3 Inputs
We now turn to the inputs of method. Such inputs can be usefully regarded as the data of theorizing; they are the materials that a method must account for when issuing its outputs. Such data possess four basic features, which will play important roles in the ensuing discussion. First, they are starting points for theoretical reflection on a domain, bearing an asymmetrical structural relation to subsequent theorizing. Specifically, data are inputs to, not outputs of, such theorizing, the latter generating theories in light of the former, and not vice versa. Second, data are inquiry-constraining with respect to a domain, functioning to anchor a given theoretical inquiry to its subject matter—what a given type of theorizing is about (i.e., what it purports to provide coherent, true, or justified beliefs, knowledge, or understanding of).
By saying that they ‘anchor’ inquiry to its subject matter, we intend to convey two points. First, the data operate as the basic means by which theorists access that subject matter. Second, investigations of a domain that entirely ignore the data are likely to be off-track with respect to that domain and, consequently, to fail to achieve any ultimate proper goals. Third, data are collected. While this is not itself a controversial claim, there is controversy regarding the collection of data, especially over appropriate procedures with which to pursue such collection. Among the candidate procedures for data collection are those that utilize such sources as intuition, introspection, common sense, ordinary experience, induction from experience, linguistic judgments, and observations in controlled scientific experiments.13 Of course, there is room to be more or less restrictive on this matter; some philosophers recognize only one of these as part of an appropriate procedure for data collection, while others embrace an indiscriminate pluralism that allows for all such procedures, no one of which is judged to be invariably better than another. A moderate option is a discriminate pluralism that recognizes multiple procedures possessing varying degrees of authority. One reason to favor this moderate position in metaethics is that metaethical data will probably contain both empirical and a priori elements (hence, pluralism), and it is plausible that some procedures will be epistemically better than others (hence, discriminateness). Fourth, data are neutral, in the dual sense that they function as common currency among theorists, while also being fallible to the extent that a particular datum might be mistaken with respect to the domain regarding which it is a datum. Because this feature of data has come under attack by challenges to the very idea of data (stemming from worries about the theory-ladenness of data, to which we’ll respond in the next section), it is worth registering that there are at least three reasons for regarding data as neutral in this dual sense. In the first place, data must not unduly stack the deck in favor of a particular theory, but must instead be admissible to theorists of diverse persuasions. These theorists may differ about how to precisify the relevant data, which are often vague, and disagree as well about which theory best handles the data. (This is a reason for thinking that data function as common currency.)14 Second, it is always possible for an input to theorizing—a datum—to be denied that status at a later time. This happens when, for instance, theorists acquire reason to view a specific datum as invalid or erroneous with respect to the domain under investigation (hence, as a source of ‘error in the data’). Third, it is always legitimate for theorists to question whether the data align with genuine features of a domain, instead of being mere noise, or somehow off-base with respect to that domain. (These last two points jointly speak in favor of regarding data as fallible.)15 We’ve identified four basic features of data, the inputs to all theorizing. But the notion of data is not without its critics. We regard it as a virtue of the preceding characterization that it makes sense of the objections and concerns that have been pressed against this notion. In the next section, we’ll illustrate this point by discussing what is arguably the most prominent criticism of them all.
2. The Theory-Ladenness of Data (and Some Related Phenomena)
Philosophers have long worried that data are not neutral starting points but instead are theory-laden; all inputs to theorizing, in metaethics and elsewhere, are themselves in some ways infused with or partial to one theory or another.16
If that were so, the worry continues, the structural relation between data and theorizing that we have described would be directly threatened, as would the idea of neutrality. There are several things to say in reply to this concern. The first begins with the observation that a datum’s being “infused with” a theory and its being “partial to” a theory are importantly different. The latter needn’t itself conflict with our conception of data as neutral starting points, as there are familiar cases of such partiality that present no challenge to data functioning as such among theorists of a domain. That physical data favor relativistic theory over Newtonian theory, for example, does not imply that the data unduly stack the deck in favor of relativistic theory, or that the data are the results of—rather than inputs to—such theorizing. Likewise, “infusion” needn’t be problematic, provided that the infusion originates in commitments from well-supported theories in domains beyond the one under investigation.17 No doubt this sometimes occurs; after all, data are collected, and a good source of data collection may utilize such theories in order to do its job well. In short, that data sometimes favor one theory over another, or incorporate various sorts of theoretical commitments, is perfectly compatible with all four features of data enumerated above. While that point is sufficient to answer popular worries about theory-ladenness, we also note a second reply. Suppose, as seems plausible, that some data are partial to certain metaethical theories over others. For instance, suppose the intimate connection between moral judgment and action favors certain antirealist views over realist ones, while the possibility of having true moral beliefs and gaining moral knowledge favors success theories over error theories. Realists should and do acknowledge the first datum; error theorists should and do concede the second. But so long as the method that works with the data is itself neutral among competing metaethical theories, in the sense that it allows for the rejection of various data under certain conditions, then all is above board. And that is because, third, such a method should not regard the data it works with as sacrosanct, in need of preservation come what may. As indicated above, the data are fallible starting points for theoretical efforts. Those efforts may determine, when all is said and done, that some of the data are mere appearances that do not survive critical scrutiny. This is the verdict issued, for instance, by error theorists with regard to the datum that we have true moral beliefs and some moral knowledge. Alternatively, theorists may accept the data, while rejecting the contention that these data favor a given theory over another. Many moral realists opt for this route after reflecting on the datum that posits a close connection between moral judgment and action. Setting aside the question of theory-laden data, we should also acknowledge the prospect of what might be called the ‘method-ladenness of data.’ In general, a method will constrain the data (or what qualifies as such) to a certain extent. To illustrate, consider the method of reflective equilibrium. According to this method, the inputs, or data, are myriad particular judgments and beliefs in general principles (and not, say, qualia or desires).
The fact that a method—this one or any other—constrains data by identifying criteria for data selection needn’t imply anything fishy. In particular, it does not imply that a proper method will shape the nature, content, or scope of the data it acknowledges as inputs. By contrast, the data have a profound influence on what qualifies as a sound method. That data and method are connected in this way is not illicit but an innocent consequence of what we term the ‘data-sensitivity of method.’ For example, given that data are inquiry-constraining, any method that allows inquirers to entirely ignore the data couldn’t qualify as sound.
(Below we describe another example, when discussing our favored conception of data.) Above we identified four basic features of data; we’ve now noted how both theory and method may legitimately interact with data, and vice versa. The foregoing, however, leaves open some controversial issues regarding how best to conceive of data and their role in theoretical inquiry. We devote the next section to considering some of these issues, including several that deeply shape the character of metaethical theorizing.
3. Four Conceptions of Data
There are at least four very different conceptions of the data that philosophers have adopted. These conceptions offer informative characterizations of data, proposing conditions under which something qualifies as a datum. Each can be evaluated in light of the extent to which it preserves the four basic features of data identified above (in §1.3). Examining these conceptions will allow us not only to explore candidate views of the nature and status of data, but also to sharpen our understanding of how data are related to the subject matter of metaethics. We begin with two conceptions of data that we believe to be inadequate for straightforward reasons. We then examine a third conception that avoids those problems, though it faces others. Finally, we explain our preferred conception—which privileges the epistemic status of data—and note a few of its principal virtues.
3.1 Two Faulty Conceptions: Dialectical and Metaphysical
According to what we call the dialectical conception, φ is a datum if and only if, and because, φ is a claim (or what is expressed by a claim) that inquirers (considered collectively) provisionally agree upon as central to the domain in question.18 This conception has the virtue of being well-positioned to make sense of the neutrality of data; it also meshes with the observations that data are starting points and collected. But it does not square with the idea that data are inquiry-constraining, for it does not guarantee that investigations of a domain that entirely ignore the data (as the dialectical conception thinks of them) are likely to be off-track with respect to that domain.19 In contrast to the dialectical conception, the metaphysical conception of data holds that φ is a datum if and only if, and because, φ is a (claim regarding a) feature that is genuinely constitutive of a domain; the data give the domain both its existence and character.20 This conception, unlike the former, preserves the idea that data are inquiry-constraining, for it ensures that investigations of a domain that entirely ignore the data are likely to be off-track. The metaphysical conception is also consistent with the fact that data are collected. However, it is not obviously compatible with the idea that data are starting points—specifying the features that are constitutive of a domain is often a result of inquiry, rather than one of its inputs. Perhaps more flagrantly, the metaphysical conception implies that the data are never mistaken with respect to the domain regarding which they are data. This directly contradicts the fallibility (hence, the neutrality) of data.
3.2 The Psycho-Linguistic Conception
More popular than the previous two conceptions is what we will call the psycho-linguistic conception of data, which holds that φ is a datum if and only if, and because, φ belongs to a domain-specific class of psychological or linguistic claims (or whatever such claims express). In the paradigm cases, such data are (or are expressed by) statements about how things seem or how language is aptly or commonly used. For example, within metaethics, it is widely accepted that moral values often seem to be objective and that it is apt to say—perhaps when discussing interrogation techniques—“It’s a fact that torture is wrong.” The psycho-linguistic conception holds that all data are like this.21 To illustrate, consider a more precisely formulated version of a putative metaethical datum alluded to above, namely:

Practicality: Moral judgments have marks of practical attitudes: for example, they guide and motivate action.

This formulation makes no explicit reference to how things seem or how language is used; it tells us about a putative fact regarding the intimate connection between moral judgments and action. Now consider:

Practicalityψ: It seems that moral judgments have marks of practical attitudes: for example, they seem to guide and motivate action.

Practicalityλ: It is apt to say (directly or by implication) “Moral judgments have marks of practical attitudes: for example, they guide and motivate action.”

These claims make explicit reference to how things seem or how language is used; they do not concern a putative fact regarding the connection between moral judgments and action. The psycho-linguistic conception understands data to take the form not of Practicality but of Practicalityψ or Practicalityλ. The principal motivation for the psycho-linguistic conception’s restriction of the data to psychological or linguistic claims is to satisfy neutrality. If the data consist in claims about how things seem or how language is used, then the data can function as common currency among otherwise rival metaethical theorists, whose disputes about the nature and status of morality do not impede their agreement about how morality seems or how moral language is used. And if the data consist in claims about how things seem or how language is used, then the data may be fallible, failing to depict genuine features of the metaethical domain, which are not entirely psychological or linguistic. This virtue of the psycho-linguistic conception is, however, intimately intertwined with what is perhaps its greatest vice, which can be stated as a dilemma focusing on the relation between the metaethical data and the subject matter of metaethics. Suppose, first, that the psycho-linguistic conception holds that the data themselves are the subject matter of metaethics. In that case, the subject matter of metaethics would consist entirely in how things seem or how language is used. But that is seriously mistaken, for it leaves out huge swaths of the metaethical domain, concerning (for example) the actual nature and status of moral concepts, propositions, properties, and facts, including the relations between such things and their non-moral counterparts; the actual nature and status of moral reasons, including their relations to moral values and whether they are ever categorical; the true nature of moral judgments, including their connection with action; the possibility of justified moral belief and moral knowledge; and so on.
the relations between such things and their non-moral counterparts; the actual nature and status of moral reasons, including their relations to moral values and whether they are ever categorical; the true nature of moral judgments, including their connection with action; the possibility of justified moral belief and moral knowledge; and so on. In fact, all prominent metaethical theories have recognized in practice that the subject matter of metaethics is not exhausted by how things seem and how language is used. Expressivist theories, for instance, offer accounts not simply of Practicalityψ or Practicalityλ but also of the connection specified in Practicality. And when criticizing rival cognitivist views, they charge not that these rivals are unable to account for Practicalityψ or Practicalityλ, but that they cannot account for the connection specified in Practicality (or cannot do so as well as expressivist theories do). Expressivists thereby engage in apt theorizing, which takes its object to be not mere psychological or linguistic claims of the sort recognized by the psycho-linguistic conception, but to include features, or putative features, of morality itself. This brings us to the second horn of the dilemma. Suppose that the psycho-linguistic conception were to deny that the data are the subject matter of metaethics, holding instead that the data bear some other relation to what it is that metaethical theorizing is about. This ensures that the data are neutral, and thus sustains the main motivation for the psycho-linguistic conception. It also sidesteps the objection raised just above, since it avoids the consequence that the subject matter of metaethics consists—implausibly—in how things seem or how language is used. However, we see three problems with this way of proceeding. First, this approach posits a sizeable gap between the data and the subject matter of metaethics. Given that data are inquiry-constraining, functioning so as to enable metaethical theorizing to access the nature and status of morality, this gap cannot remain impassable; it needs to be bridged, so as to connect metaethical theorizing, whose inputs—on the psycho-linguistic conception—are psycho-linguistic data, to the metaethical domain, which is not entirely psycho-linguistic. Here we find a deep tension within the psycho-linguistic conception. On one hand, it requires there to be a substantial gap between the data and the subject matter of metaethics, both to secure neutrality and to avoid omitting huge swaths of the metaethical domain. On the other, it also needs there not to be a substantial gap, in order to ensure that metaethical theorizing can access the subject matter of metaethics. This brings us to the second point: while there may be ways to resolve this tension (most obviously, by treating the data as an epistemic indicator of the subject matter),22 they risk simply moving the bump in the rug by generating conflict elsewhere. To appreciate this, suppose we were to cross the gap by endorsing a bridge principle that licenses a transition from the relevant psycho-linguistic claims to the (not entirely psycho-linguistic) subject matter of metaethics; such a principle would facilitate access to the latter via the data.
Suppose, for example, a theory were to embrace the principle that things generally are as they seem, which would allow theorists to get at (say) the connection in Practicality via Practicalityψ.23 This principle is transparently incompatible with the fundamental commitments of various metaethical theories. No proponent of error theory or expressivism, for instance, could endorse it. To the contrary, these theories are committed to rejecting any such principle, as they are committed to the claim that, when it comes to morality, things are often, and perhaps quite generally, not as they seem. The challenge for proponents of the psycho-linguistic conception, then, is to identify a principle that bridges the gap between psycho-linguistic data and the subject matter of metaethics but isn't in tension with the
commitments of their own metaethical theories. It is unclear whether this challenge can be met. The third point adds to and intensifies the challenge posed by the conjunction of the first two points. As just discussed, in order for theorists to access the subject matter of metaethics via psycho-linguistic data, they will have to commit themselves to a bridge principle of some kind. But any such principle is bound to be contentious. At the very least, it will not be neutral in the sense of functioning as common currency among theorists. The worry, then, is that the psycho-linguistic conception secures neutrality by positing a gap between metaethical data and the subject matter of metaethics, which can be bridged only by introducing further commitments that are not neutral at all. The psycho-linguistic conception gives with one hand what it takes away with the other.
3.3 The Epistemic Conception

We now turn to the epistemic conception of data, which holds that φ is a datum if and only if, and because, φ is a claim (or what is expressed by a claim) that inquirers (considered collectively) are in a good epistemic position to take to identify genuine features of a given domain. So, for example, Practicality—not merely Practicalityψ or Practicalityλ—is a metaethical datum just in case, and because, metaethicists are in a good epistemic position to take it to identify a genuine intimate connection between moral judgments and action. Different versions of the epistemic conception will analyze 'good epistemic position to take' in different ways. According to our preferred version, the analysis invokes the familiar category of reason for belief, which we understand as a defeasible epistemic status (i.e., it is possible for a reason for belief to be outweighed or extinguished by sufficiently weighty countervailing considerations). Let us call this the epistemic reason conception of data. Before considering a few of the virtues of this conception, let us first observe that it is non-committal in some important respects. It is compatible with different explanations of why inquirers have reason to take a domain to be as the data characterize it as being (e.g., epistemic conservatism, phenomenal conservatism, dogmatism, process reliabilism, proper function theory, virtue theory, safety theory, subjective Bayesianism, etc.). It is also compatible with different views about the strength of the reasons to take a domain to be those ways, and how much is required to defeat them. Further, it is compatible with different views about what is required to legitimately reject—by defeating the reason for—any given datum or set of data. Unlike some of its rivals, the epistemic reason conception preserves neutrality: the data function as common currency but are also fallible. It also makes sense of the fact that data are inquiry-constraining: it provides inquiry with an epistemic anchor (i.e., a reason-based relation) to its subject matter, thereby implying that investigations of a domain that entirely ignore the data are likely—epistemically likely, owing to the reasons theorists possess—to be off-track with respect to that domain. The epistemic reason conception also makes sense, in interesting and important ways, of the fact that data are collected, by explaining central features of such collection. To appreciate this, consider the felt need to justify the sources we employ in data collection, which flows from the demand to use only those sources that provide reason to take the domain to be the ways they tell us that it is. In the paradigm case, when collecting data, inquirers
believe (or assume) that their sources of data have positive epistemic status, providing them with reason to take the domain to have certain features. For example, if inquirers think they should collect data from the outputs of our best physical theories, that is because they hold that those outputs are ones they have reason to believe. Or if inquirers think they should collect data via intuitions about thought experiments, that is because they hold that intuitions about thought experiments provide reasons for belief. The epistemic reason conception straightforwardly accounts for this dimension of data collection. It also explains controversy over candidate sources of data collection. Some would insist, for instance, that for something to be a source of data, we must have independent warrant for regarding its verdicts about features of the domain as reliable. Others would reject this. These healthy disagreements make sense only to the extent that a source of data functions to provide reason for believing that the domain in question is a certain way. The controversy arises because the question of what it takes to satisfy this condition is both open and often extremely difficult. Another virtue of the epistemic reason conception is that it explains the possibility of disputes over the data in a domain. Naturally, theorists may disagree about what they (considered collectively) have epistemic reason to take to be genuine features of the domain under investigation. The epistemic reason conception has several further important implications for the nature, status, and collection of inputs to theorizing.24 It also bears on the propriety of methods. Given that inquirers have reason to take data to be genuine features of the domain under investigation, any method of theorizing that allows inquirers to reject the data without justifying their rejection couldn't qualify as sound. In effect, the epistemic reason conception explains our earlier observation that the data have a profound influence on what qualifies as a sound method (what we've labeled the data-sensitivity of method, to which we'll return below).
4. The Data for Metaethical Theorizing

In the previous section, we argued that data are what inquirers have (defeasible) reason to take to be genuine features of a domain. Here we identify considerations that satisfy this description in metaethics, thereby buttressing our preferred conception and demonstrating how it applies in practice. Our point of departure is the following ordinary scenario:

Collegiality: A colleague of yours has been in a bad car accident, sustaining major injuries. To ease the burden on her and her family, you pledged several weeks ago to provide a meal for them on a particular date. You have forgotten all about this, and the automatic reminder delivered by your calendar this morning—the date has arrived—comes as a surprise. You give the matter some thought. You reckon that your colleague and her family probably won't go hungry tonight if you don't provide a meal. But after running through a number of such scenarios, it strikes you that while there are alibis available, you should cancel the other plans you made yesterday and prepare the meal you committed to providing: given your previous commitment, this is quite clearly what the situation demands. And so you judge that this is what you ought to do. Being moved by this verdict, you set out to prepare the meal.
Assuming that there is nothing especially unique about this scenario, we can extract from it at least four candidates for metaethical data.25 First, we have reason to think that there is a relation between your judgment about what to do and your subsequent action, which indicates the practical character of the former. Your judgment is capable of guiding your decision-making and behavior (in much the way that grasp of a recipe can guide what you do). It also moves you (perhaps in conjunction with certain of your desires or commitments), at least to some extent, to act. This inspires the following datum, which was already mentioned above:

Practicality: Moral judgments have marks of practical attitudes: for example, they guide and motivate action.26

Second, at the same time, we have reason to think that your judgment about what to do also has tell-tale marks of being a descriptive attitude, such as belief. It is a response to how things strike you, in which you find yourself "coming down" on a verdict about a way the world is: the situation is such as to demand a certain type of response on your part. This judgment is a way of categorizing or classifying the world. Like paradigm beliefs, it is also fitted to play various other roles. For example, we have reason to think that its content can enter into further inferences and other sorts of logical constructions (e.g., conditionals such as if I don't prepare a meal, then I'll have failed to do what I ought to do). Moreover, this content can be felicitously described as true or false in what seem to be perfectly straightforward uses of these terms (e.g., if queried about your responsibilities, it would not be odd for you to say, "Yes, it's true—I ought to prepare a meal"). This motivates a second datum:

Descriptivity: Moral judgments have marks of descriptive attitudes: they are classificatory, truth-evaluable, and apt for inference.

Third, we have reason to think that your judgment about what to do also has epistemic dimensions, in that it is natural to think that you may occupy a better or worse position for grasping the moral demands that apply to you. For instance, if things go well, your judgment will be justified and may even constitute knowledge and facilitate understanding about what you ought to do. These observations motivate a third datum:

Grasp: Moral agents grasp, or are duly placed to grasp, moral reality.

Fourth, we have reason to think that there is one correct judgment, or one limited set of correct judgments, to make in response to your situation; this implies that you can make a mistake about what morality demands of you. This gives rise to a fourth datum:

Fallibility: Moral judgments can be mistaken.

There are several points to make about the four candidate data we've identified. Obviously, they do not exhaustively characterize moral thought, discourse, and reality (or even the moral dimensions of Collegiality). Nor are they highly determinate, since they—like
nearly all data—are fairly vague and open to interpretation, refinement, clarification, or elucidation; given that data are inputs to theorizing, this is as it should be. At the same time, they arguably bear the four marks of data we identified earlier (in §1.3). First, they function as starting points of metaethical theorizing, identifying some central features of ordinary moral life that subsequent theorizing must take into account. Second, they constrain metaethical inquiry, helping to anchor it to its subject matter; indeed, any metaethical theory that entirely ignored these data would be safely deemed off-track. Third, they are collected, issuing from a legitimate source, utilizing reflection on ordinary moral experience. Finally, they are neutral in the dual sense introduced earlier: they are fallible and are well-suited to function (indeed, they do function) as common currency among metaethical theories, even if some theories have ultimately rejected one or more of them as mere appearances. Let us close by observing that, while the four data we have listed are neutral, they are not methodologically innocuous, at least not if we are right about the data-sensitivity of method. If a metaethical theory were to entirely ignore these data, or to reject them without defeating the reason we have to take them to describe genuine features of morality, that theory would be in worse shape relative to its rivals that take these data into account (either by accepting them or by adequately justifying their rejection). It follows that metaethical theories that flout these data in the ways just mentioned would issue from unsound methods. For such methods would have failed to recognize that data are both inquiry-constraining and backed by reasons, making those methods ill-suited to realize the ultimate proper goals of metaethical inquiry.27
Notes

1. Few philosophers have devoted extensive and explicit attention to fundamental questions about theoretical inquiry in moral philosophy, especially metaethics. Treatments of methodology in contemporary moral philosophy have tended to focus primarily on normative ethics and been largely restricted to the particular method that Rawls (1971, 19–20) dubbed "reflective equilibrium," which we discuss in §1.2. While we'll focus on metaethics, the main elements of our discussion apply mutatis mutandis to normative ethics (and many other areas of philosophy).
2. This is the second stage of theoretical inquiry (theorizing); the first stage (data collection) consists in collecting the inputs to the second stage. We'll speak of a method for theorizing and a procedure for collecting the inputs to such a method. We discuss the details of both stages below.
3. Note that the same considerations do not apply to knowledge—in the standard sense of, roughly, non-accidentally true justified belief—since it is possible to know various truths in a domain without having the sort of illumination constitutive of understanding. This means that in the absence of understanding, the presence of knowledge would leave theoretical inquiry unresolved; if this is correct, then knowledge is not an ultimate proper goal. Independent support for our assessment that understanding is an ultimate proper goal is provided by the defenses of understanding given by Elgin (1996, 122ff.), Kvanvig (2003, ch. 8), and Pritchard (2010), among others. Our assessment is compatible with understanding being an elevated type of knowledge (e.g., higher-order systematic knowledge-why); while this isn't our view (Bengson 2017), we needn't oppose it here.
4. A seminal instance of this method in action is Moore's Principia Ethica (1903, ch. 1). See also Ewing (1947) and Hare (1954). For more recent versions, see Jackson and Pettit (1995), Jackson (1998, ch. 5), and Finlay (2014). While the method of analysis takes a variety of forms—some proponents privileging ordinary usage, others formal machinery from contemporary linguistic theory, still others the broadly functional style of analysis known as Ramsification—our characterization in the text abstracts away from these details in order to make explicit its core commitments. Similarly for the other methods characterized next. This explains why we have described them as 'austere.'
5. Chalmers (2014, 16); cp. van Inwagen (2006, Lecture 3). There are many possible—and quite diverse—examples of adherence to this method in metaethical theorizing; candidates include Gewirth (1978), Brandt (1979), Mackie (1977), Nagel (1986), Korsgaard (1996), Shafer-Landau (2003), Huemer (2005), Street (2008), Cuneo (2007), and Wedgwood (2007). A version of the method of argument invoking a comparative, cost-benefit standard is pursued by Enoch (2011, §1.4); cp. Schroeder (2007, §11.2).
6. This method is highly influential in contemporary metaphysics. For an example in metaethics, see Gibbard (2003, xii), who maintains that his antirealist "hypothesis explains the phenomena—and no normative realism that extends beyond the hypothesis is needed." Cp. Harman (1977), Blackburn (1993, pt. II), Joyce (2001, 168), and Olson (2014, esp. 147–8).
7. See, e.g., Sayre-McCord (1996), DePaul (1998), and Scanlon (2002, 149). Cp. Horgan and Timmons (2006).
8. For example, Bonevac (2004) provides reason to doubt that the method of reflective equilibrium is valid. Many standard criticisms challenge the method's propriety; see, e.g., Hare (1973), Brandt (1979, 19–21 and 1990), Raz (1982), Copp (1985), Stich (1988), Cummins (1998), Kelly and McGrath (2010), and McPherson (2015).
9. We identify two further characteristics in Bengson, Cuneo, and Shafer-Landau (forthcoming).
10. Some proponents of these methods have attempted to address related worries, but we do not find these efforts compelling, partly for the reasons given in Bengson, Cuneo, and Shafer-Landau (forthcoming §3).
11. See Rawls (1980, 534) and Brink (1989, 140–1).
12. We ourselves attempt to do that work in Bengson, Cuneo, and Shafer-Landau (forthcoming §4).
13. Gathering data with these sources (intuition, etc.) can be done in myriad ways, for example, through reflection on thought experiments, discussion with friends, examination of historical events, scrutiny of a linguistic corpus, surveys, review of scientific journals, and so forth. There are thus candidate sources of data, as well as candidate techniques for employing those sources. The conjunction of a source and a technique constitutes what we are calling a 'procedure' for data collection.
14. Our notion of common currency is distinct from, and does not imply, the type of consensus to which Williamson (2007, 209ff.) objects when discussing a thesis he labels "Evidence Neutrality," in the context of a forceful critique (with which we are in broad agreement; see §3.3 below) of the tendency to "psychologize" data. The thesis of "Evidence Neutrality" invokes a notion of neutrality on which φ is neutral only if φ is "in principle uncontentiously recognizable as such," so that—according to Williamson—you have good evidence only if you are "able to persuade all comers, however strange their views, that you have such good evidence" (210–14). By contrast, our notion of common currency comes with no such entailment. Consequently, our notion may help to preserve the role of a data set as "a neutral arbiter between rival theories" (210), a role that Williamson himself applauds.
15. By viewing data as fallible, we do not rule out that there is a distinct, infallibilist use of the term 'data' on which the sentence 'data cannot be mistaken' is true. We do not oppose this and other alternative uses of the term. The point we wish to emphasize is that any understanding of inputs to method must allow that a given input may be mistaken (whether or not it is called a 'datum'). Even those who indicate a strong preference for an infallibilist use of 'data' and related terms standardly acknowledge this possibility (see, e.g., Williamson 2007, 209–10).
16. See, e.g., Hanson (1965, ch. 1).
17. Cp. Boyd (1988, 206–7). Another possibility is that it is a precisification of the data that receives the infusion, in which case the data themselves retain their status as neutral starting points.
18. See, e.g., Heney (2016, 26). Cp. the "pragmatic view" discussed by Williamson (2007, 238). (Heney also conceives of her view as a pragmatic one.) The expression 'considered collectively' is ours; it has two functions. First, it brackets the commitments that individual inquirers incur due to personal endorsement of particular theories of the domain. Second, it respects the epistemic position enjoyed by (members of) a collective towards some claim even in the face of a subset of the collective whose members personally lack that epistemic position with respect to that claim (perhaps because they happen to endorse idiosyncratic views—for example, idealism or eliminativism about the mental—that function for them as defeaters of the reasons possessed by their colleagues).
19. After all, the dialectical conception allows that data (to borrow a memorable phrase from Richard Rorty) closely track "what our peers let us get away with." With perhaps the singular exception of inquiry regarding what our peers let us get away with, this falls far short of anchoring inquiry to its subject matter in the relevant sense.
20. For two possible examples in metaethics, see Korsgaard (1996, Lecture 1) and Joyce (2001, ch. 2), who hold that certain data regarding features of reasons—their excellence or authority—are constitutive of morality. Cp. Gibbard (1990, 32–3), who treats the intimate relation between moral judgment and action similarly. We are not confident whether these or any other metaethicists embrace the metaphysical conception for all data. But, importantly, our objections to this conception hold even when the conception is restricted to a subset of data.
21. Some metaethicists hold restricted versions of the psycho-linguistic conception, which allow that some data are not psycho-linguistic, even though most are (see, e.g., Finlay 2014, 121). The dilemma we will advance below applies equally to restricted versions, so long as they leave a sizeable gap between the metaethical data and the subject matter of metaethics.
22. Though such a move risks collapsing into the epistemic conception, discussed below.
23. We will not dwell on the concern, which this illustrates, that the psycho-linguistic conception's posit of a sizeable gap between the metaethical data and the subject matter of metaethics introduces a gratuitous epicycle to metaethical theorizing—gratuitous because it can be avoided compatibly with neutrality (the principal motivation for the psycho-linguistic conception), as shown by our discussion of the epistemic conception below.
24. For instance, it implies that those who restrict the metaethical data to a specific set of considerations, such as ordinary usage, are guilty of chauvinism. For they are tacitly assuming that no other considerations could supply epistemic reason to take the metaethical domain to be one way rather than another. Our view avoids such chauvinism, by allowing that data are epistemically supported considerations, whatever those happen to be.
25. Cp. Horgan and Timmons's (2006, 222–3) list of the "phenomena of morality," which appear to function as data.
26. This is not intended to be a universal generalization. Nor must it be read as a generic. We use the plural 'moral judgments' because it is easy to see that many moral judgments are like the one in Collegiality. A similar point applies to other data.
27. Many thanks to Lauren Davidson, Mark Timmons, Aaron Zimmerman, and participants in a seminar at Harvard for helpful comments on earlier versions of this material.
References

Bengson, J. (2017) "The Unity of Understanding," in S. Grimm (ed.), Making Sense of the World. Oxford: Oxford University Press.
Bengson, J., Cuneo, T. and Shafer-Landau, R. (forthcoming) "Method in the Service of Progress," Analytic Philosophy.
Blackburn, S. (1993) Essays in Quasi-Realism. Oxford: Oxford University Press.
Bonevac, D. (2004) "Reflection Without Equilibrium," Journal of Philosophy, 101, 363–388.
Boyd, R. (1988) "How to Be a Moral Realist," in G. Sayre-McCord (ed.), Essays on Moral Realism. Ithaca, NY: Cornell University Press, 181–228.
Brandt, R. (1979) A Theory of the Good and the Right. Oxford: Oxford University Press.
———. (1990) "The Science of Man and Wide Reflective Equilibrium," Ethics, 100, 259–278.
Brink, D. (1989) Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Chalmers, D. (2014) "Why Isn't There More Progress in Philosophy?" Philosophy, 90, 3–31. Reprinted in T. Honderich (ed.) (2015), Philosophers of Our Times. Oxford: Oxford University Press, 347–370.
Copp, D. (1985) "Morality, Reason, and Management Science: The Rationale of Cost-Benefit Analysis," Social Philosophy and Policy, 2, 128–151.
Cummins, R. (1998) "Reflection on Reflective Equilibrium," in M. DePaul and W. Ramsey (eds.), Rethinking Intuition. Totowa, NJ: Rowman & Littlefield, 113–128.
Cuneo, T. (2007) The Normative Web. Oxford: Oxford University Press.
DePaul, M. (1998) "Why Bother with Reflective Equilibrium?" in M. DePaul and W. Ramsey (eds.), Rethinking Intuition. Totowa, NJ: Rowman & Littlefield, 293–309.
Elgin, C. (1996) Considered Judgment. Princeton: Princeton University Press.
Enoch, D. (2011) Taking Morality Seriously. Oxford: Oxford University Press.
Ewing, A. C. (1947) The Definition of Good. Basingstoke: Palgrave Macmillan.
Finlay, S. (2014) Confusion of Tongues: A Theory of Normative Language. Oxford: Oxford University Press.
Gewirth, A. (1978) Reason and Morality. Chicago: University of Chicago Press.
Gibbard, A. (1990) Wise Choices, Apt Feelings. Oxford: Oxford University Press.
———. (2003) Thinking How to Live. Cambridge, MA: Harvard University Press.
Hare, R. M. (1954) The Language of Morals. Cambridge, MA: Harvard University Press.
———. (1973) "Rawls' Theory of Justice," Philosophical Quarterly, 23, 144–155, 241–251.
Harman, G. (1977) The Nature of Morality. Oxford: Oxford University Press.
Heney, D. B. (2016) Towards a Pragmatist Metaethics. London: Routledge.
Horgan, T. and Timmons, M. (2006) "Morality Without Moral Facts," in J. Dreier (ed.), Contemporary Debates in Moral Theory. Boston: Wiley-Blackwell, 220–238.
Huemer, M. (2005) Ethical Intuitionism. Basingstoke: Palgrave Macmillan.
Jackson, F. (1998) From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford: Clarendon Press.
Jackson, F. and Pettit, P. (1995) "Moral Functionalism and Moral Motivation," The Philosophical Quarterly, 45, 20–40.
Joyce, R. (2001) The Myth of Morality. Cambridge: Cambridge University Press.
Kelly, T. and McGrath, S. (2010) "Is Reflective Equilibrium Enough?" Philosophical Perspectives, 24, 325–359.
Korsgaard, C. (1996) The Sources of Normativity. Cambridge, MA: Harvard University Press.
Kvanvig, J. (2003) The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
Mackie, J. L. (1977) Ethics: Inventing Right and Wrong. Harmondsworth: Penguin Classics.
McPherson, T. (2015) "The Methodological Irrelevance of Reflective Equilibrium," in C. Daly (ed.), The Palgrave Handbook of Philosophical Methods. Basingstoke: Palgrave Macmillan, 652–674.
Moore, G. E. (1903) Principia Ethica. Cambridge: Cambridge University Press.
Nagel, T. (1986) The View from Nowhere. Oxford: Oxford University Press.
Olson, J. (2014) Moral Error Theory. Oxford: Oxford University Press.
Price, H. H. (1945) "Clarity Is Not Enough," Aristotelian Society Supplementary Volume, 19, 1–31.
Pritchard, D. (2010) "Knowledge and Understanding," in The Nature and Value of Knowledge: Three Investigations, co-authored with Alan Millar and Adrian Haddock. Oxford: Oxford University Press, 3–88.
Rawls, J. (1971) A Theory of Justice. Cambridge, MA: Harvard University Press.
———. (1980) "Kantian Constructivism in Moral Theory," Journal of Philosophy, 77, 515–572.
Raz, J. (1982) "The Claims of Reflective Equilibrium," Inquiry, 25, 307–330.
Sayre-McCord, G. (1996) "Coherentist Epistemology and Moral Theory," in W. Sinnott-Armstrong and M. Timmons (eds.), Moral Knowledge. Oxford: Oxford University Press, 137–189.
Scanlon, T. (2002) "Rawls on Justification," in S. Freeman (ed.), The Cambridge Companion to Rawls. Cambridge: Cambridge University Press, 139–167.
Schroeder, M. (2007) Slaves of the Passions. Oxford: Oxford University Press.
Shafer-Landau, R. (2003) Moral Realism: A Defense. Oxford: Oxford University Press.
Stich, S. (1988) "Reflective Equilibrium, Analytic Epistemology and the Problem of Cognitive Diversity," Synthese, 74, 391–413.
Street, S. (2008) "Constructivism About Reasons," in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, vol. 3. Oxford: Oxford University Press, 207–246.
van Inwagen, P. (2006) The Problem of Evil. Oxford: Oxford University Press.
Wedgwood, R. (2007) The Nature of Normativity. Oxford: Oxford University Press.
Williamson, T. (2007) The Philosophy of Philosophy. Boston: Wiley-Blackwell.
22
MORAL KNOWLEDGE AS KNOW-HOW

Jennifer Cole Wright
This chapter discusses the idea of moral knowledge as "know-how" embedded in our social practices. When we consider the possibility that (at least some of) our moral knowledge is embedded in (at least some of) our social practices, there are a number of important questions that arise. In this chapter, I'll consider two:

Question 1. What does it mean to say that moral knowledge is "embedded" in our social practices?

Question 2. What is the nature of this knowledge?

In what follows I will start by considering the first question (§1) and then turn to the second (§2). Throughout the course of the chapter, I will suggest that there are a number of features of our social lives that support the idea that moral knowledge is embedded in our social practices and that at least a portion of that knowledge is best thought of as moral know-how involving capacities, abilities, and/or dispositions to behave in certain morally relevant ways.
1. Is Moral Knowledge "Embedded" in Our Social Practices?

First, it is worth noting that the claim that moral knowledge is embedded in certain social practices seems to be a stronger claim than just that certain social practices reflect or exemplify or have been (at least partially) shaped by our moral knowledge. Rather, the use of the term "embedded" implies that certain social practices contain, or have incorporated, moral knowledge, which has become an essential part or characteristic of the practices themselves. In other words, the claim is that certain social practices are more than just products of moral knowledge. They are, in some meaningful way, possessors of it. Now, it strikes me that if this is true, there are a number of things that should rather straightforwardly follow. While there are likely to be more, I will restrict my exploration here to three such things—specifically, that it would be the case that:

1. We are able to meaningfully gain moral knowledge by engaging in the relevant social practices (and/or those social practices can impart moral knowledge to those of us who engage in them).
2. The moral knowledge contained in those social practices can meaningfully come apart from (and even potentially conflict with) the moral knowledge possessed by the individuals within the community from which they originated.

3. Engaging in those social practices creates meaningful opportunities for further moral growth and advancement—i.e., the discovery and development of new moral knowledge and/or the application of existing moral knowledge to new behaviors and social practices.

Let us consider each of these in turn. First, the claim that we are able to meaningfully gain moral knowledge by engaging in the relevant social practices—and/or those social practices can impart moral knowledge to those of us who engage in them. Among other things, this suggests that one important way we learn about morality, including how to be moral, is by watching and imitating other members of our community as they engage in the relevant social practices (a claim that most developmental psychologists working on moral development would heartily support).1 Watching and imitating daily social practices helps us to learn, for instance, how resources, responsibilities, and burdens are appropriately shared within the family and larger community, how others are to be treated, etc. In this way, we are able to see what honesty, loyalty, generosity, bravery, and compassion look like,2 as well as when and how they are to be displayed—and to whom. Because what we observe is not the idiosyncratic actions of one individual but rather shared patterns of behavior across a range of individuals—for example, observing our parents, teachers, neighbors, and other community members all engaging in similar displays of respectful greeting toward one another—we come to recognize these behaviors as practices, part of the social architecture within which we live and engage with others in daily life. One worry about this is that it is not clear that learning to participate in something that I recognize as a social practice by itself will necessarily impart moral knowledge. Perhaps I do so, for example, simply because I notice that "everyone else is doing it."3 It would seem that in order for my participation to impart moral knowledge, I would need to not simply recognize the behavior that I am engaging in as a social practice but also (at minimum) as a social practice that possesses a particular kind of normative significance. In other words, I would have to recognize the social practice as not being merely statistical in nature—generated, maintained, and enforced through the sheer frequency of its occurrence—but rather as existing because of its importance for and connection to our individual and collective moral needs, values, and welfare. Luckily, such recognition seems easy enough to come by under normal circumstances. It may occur all on its own—e.g., I observe the respectful or compassionate nature of the practice—but, if not, it is easily facilitated through the various forms of moral instruction and conversation that normally accompany our social practices, especially when we are learning them.
I participate in the common greeting practice of my culture, for example, because I have been told (and observed) that to do so is respectful, something I further recognize as being valued enough to have become embedded in a common social practice in which I have learned to participate.4 Of course, this raises another important issue, which is that even if I gain moral knowledge through engaging in such a practice—I learn that doing such and so is a way to
respectfully greet others and, thus, I greet others in this way because it is respectful—this does not necessarily mean that I gain moral virtue. My engaging in a practice because it is respectful is, after all, not the same thing as my engaging in it because being respectful is of value to me for its own sake because it is the right thing to do. Perhaps it matters to me to be respectful because that is what other people value and I don't want to disappoint or risk offending someone (or a myriad of other reasons, some not so benign). Thus, even if we are able to gain moral knowledge through our social practices, this fact by itself cannot speak to what we will do with that knowledge or why. Yet it seems that there are several different morally relevant things to be learned through our social practices. Consider—as children, we learn to be respectful by greeting people in a certain way, we learn to be compassionate by comforting a friend in the same way we have seen others comforted and have been comforted ourselves, we learn to be generous by helping family and friends cook meals for the less fortunate of our community and by being encouraged to give from our own valuable possessions to those in need, etc. What is it that we have learned? First, we have learned that certain practices (or sets of practices) are ways of being respectful, compassionate, and generous. Of course, these are not the only ways of being respectful, compassionate, and generous—indeed, we are likely to encounter other ways of being so along the way. And being active pattern generators and identifiers (Churchland, 2000; Clark, 2000), we are likely to cluster these different ways of being respectful, compassionate, and generous together into what we might call "meta-practices" (or "schemas"), which, arguably, helps to further illuminate the underlying nature and significance of the moral knowledge contained within the practices themselves. Second, as mentioned earlier, to the degree to which we observe these practices engaged in commonly—by many people across many different occasions—we learn that these ways of being respectful, compassionate, and generous are valued by our community. This implies not only that we too, as members of that community, should value them, but also provides motivation to learn/discover new ways of being respectful, compassionate, and generous. Third, by engaging in these practices ourselves, we learn what it is like to be respectful, compassionate, and generous. And to the extent that engaging in these practices creates positive feedback loops—e.g., I give my much-loved stuffed bunny to another child who lost her home and, in seeing her face light up with joy and gratitude, experience the happiness of having the stuffed animal surpassed by the happiness of giving it to another—we come to experience the value of these practices and become motivated to continue engaging in them, not just because others expect us to but for the sake of that value itself. Herein, then, lies an important key for addressing the worry stated earlier.
Insofar as these positive feedback loops link up our engagement in social practices to our internal motivations, they provide the necessary (though not sufficient) “kick-starter” for the development of virtue, a development which will serve to further reinforce our valuing of and engagement in those social practices.5 Let us turn now to the second claim—that the moral knowledge contained in social practices can meaningfully come apart from (and even potentially conflict with) the moral knowledge possessed by the individuals within the community from which they originated.
There are essentially two directions in which this "coming apart" (and potential conflict) might occur:

1. Certain individuals within a community possess moral knowledge that diverges from that contained in existing social practices—thus, problematizing them as outdated, inadequate, or unacceptable.

2. Existing social practices contain moral knowledge that diverges from that possessed by certain individuals—thus, problematizing it as outdated, inadequate, or unacceptable.

The first of these seems relatively straightforward, even commonplace. Indeed, at times it seems that the very definition of moral progress in modern society is the constant critical reevaluation of existing social practices. Practices that were once viewed as appropriate demonstrations of respect or politeness, for example—such as a man holding open a door for a woman or referring to a child as "having autism" instead of "being autistic"6—are regularly challenged as outdated and not appropriate (or "politically correct"). Importantly, this is often not because those who originally started the social practices were misinformed or mistaken but rather that our understanding of the moral values they have helped us to navigate has continued to change and evolve—and thus, arguably, so should our social practices. The second direction follows naturally from the first—as our social practices shift to capture our ever-evolving understanding of important moral values, it is inevitable that those practices will encounter individuals who have not yet "caught up." For them, the now outdated social practices are still genuine expressions of their moral knowledge, even though others around them may fail to experience and receive them as such.7 It is important to recognize that such situations are not typically a simple matter of one person being mistaken and the other correct—rather, they are complex situations that involve competing moral considerations. Consider, for example, the already mentioned practice of men holding doors open for women. More specifically, imagine an older gentleman arriving at a building at the same time as a younger woman and holding the door open for her, a gesture that she finds insulting—under the presumption of gender equality, such a practice treats women differently than men, i.e., as the "weaker" sex—so she responds harshly. Here we can see the clash between (at least) two competing moral considerations. First, it seems clear that the practice of holding the door open—if done only for a woman because she is a woman—is not respectful in an ideal world (i.e., within a society that is, in fact, gender egalitarian). Thus, in order to treat the younger woman respectfully (as an equal) the older gentleman should not hold open the door. Yet it would nonetheless be a mistake not to recognize his gesture as a genuine display of respect. After all, holding doors open for women is a social practice—a way of being respectful—that this man likely learned as a young boy, a practice he has engaged in all his life. And so, in holding the door open for the young woman, he was communicating to her his respect, making it an appropriate thing to do.8 In the face of such moral complexity, Calhoun (2000) argues that the appropriate response is one of civility—to recognize, appreciate, and express tolerance for the divergence in moral perspectives (and resulting social practices) that is common in a morally imperfect and evolving world, such as our own. As such, the young woman's hostility seems ill-placed.9
Another interesting version of this divergence can be found in Churchland's (2000) recent discussion of moral progress, which he argues can largely be located within our "expanding universe" of social practices. "A primitive villager of Levant," he writes, "could aspire to many things, perhaps, but he or she could not aspire to be . . . a labor lawyer . . . a child psychologist . . . a law professor" all of which "constitute new contributions to the wellbeing of mankind" (302). Relevant to our discussion, however, is the worry that while we have expanded the breadth and depth of the social practices within which we can engage, the moral knowledge they contain has largely failed to change us in morally relevant ways—a worry that Churchland acknowledges and endorses. Indeed, he notes that

the moral character of an average North American is probably little superior to the moral character of an average inhabitant of the ancient Levant. The bulk of our moral progress, no doubt, lies in our collective institutions [and social practices] rather than in our individual hearts and minds. (303)

Whether or not we fully agree with this sentiment, it brings us back to the worrying limitation of the moral knowledge contained in and imparted by social practices discussed earlier, which is that while these practices may help us to behave more morally (respectfully, compassionately, generously, etc.), they may not, by themselves, make us morally better people. While it is certainly possible for a community of people to largely engage in social practices not because they themselves value being respectful or generous but rather because these are valued by others, at some point we would have to wonder—valued by whom? More likely, perhaps, is a community in which people do generally value being respectful and generous, but merely for the social benefits they accrue, which include the simple, but deeply important, benefit of being considered a member of the tribe. Yet this too strikes me as unlikely, for the reasons mentioned earlier. It is hard to imagine our engagement in social practices not frequently creating positive feedback loops. And while some of these will surely be related to the social benefits (e.g., external praise received from others) associated with engaging in those practices, it seems likely—at least under normal developmental circumstances—that others would link up to the positive experiences generated by the practices themselves, eventually making engaging in them their own reward. All of this aside, it is important to acknowledge that the social architecture that makes up our daily lives may contain a collective moral "wisdom" that the individuals guided and constrained by that architecture do not—that we live better, more moral, lives by virtue of the social practices in which we are encouraged to participate.10 In this way, our social practices become not just possessors but protectors of moral knowledge—just as, in our increasingly isolated and hectic lives, we would be at risk of forgetting the value of our friends and families if it were not for certain social practices (such as the celebration of birthdays or family gatherings on Thanksgiving Day11) that encouraged us to remember. This brings me to the third claim, that engaging in certain social practices creates opportunities for further moral growth and advancement (i.e., the discovery and development of new moral knowledge and/or the application of existing moral knowledge to new behaviors and social practices).
In a recent book on virtue, Annas (2011) wrote:

A boy will learn to be brave, initially, by seeing a parent chase off a dog, say, and by registering that this is brave. But right from the start he will see that his coming to be brave does not consist [merely]12 in his chasing dogs off. (22)

While not the point that Annas was making with this passage, I think it nonetheless highlights the fact that, to the extent that witnessing others' behaviors (or practices) successfully imparts moral knowledge—in this case, that the parent's chasing off of the dog was brave—that knowledge, once imparted, is not restricted to the behavior/social practice from which it was gained. Consider the boy in this example. He witnessed his parent's behavior and gained the moral knowledge that this particular action, the chasing off of a dog, was brave. But that is not all he gained, because he would have also likely registered his parent's action as a good thing, in the sense of keeping him safe and protected. And, harkening back to our earlier discussion, he would also likely have recognized (either then or later on) similarities between this particular action and other ways his parent keeps him safe and protected, as well as ways that other people more generally work to keep him safe and protected, as well as ways that members of his community as a whole come to the aid, sometimes with some risk to themselves, of anyone who is in a potentially harmful situation and is in need of safety and protection. And so on. As we discussed earlier, if this boy notices that such behaviors are (fairly) widespread and occur with (at least some) regularity when situations in which they are called for arise, then he will recognize them as all part of a more general social practice (or set of practices)—something like acting bravely when someone is in need of safety/protection—which further implies that acting in such a way is something generally valued by his community. And it is not a terribly large step for the boy, as a member of that community, to want to act bravely himself in whatever way is called for. Nor would he have that much further to go to value bravery not simply because his community values it but because it is valuable in its own right. As discussed earlier, such a connection would likely be made through his own experiences of acting bravely—e.g., when he stands up for a friend who is being bullied and sees his friend's gratitude and others' respect, when he dares, without waking his parents, to confront the monster in his closet (who is apparently scared off by this display of bravery because it is nowhere to be found) and feels a sense of pride and self-accomplishment. Further, this process has the potential to spark an exploration and evaluation of all the different social practices intended to display bravery—the degree to which they actually do so and why, etc.—which will ultimately lead to the revision of old practices and/or the creation of new ones, thereby expanding his own and others' understanding of what it is to be brave, as well as our collective desire to be so. In short, I think Annas (2011) was right to point out that the boy, in witnessing his parent's action, sees more than just that that action was brave. Arguably, by virtue of the moral knowledge he gained through witnessing the action, he caught his first important glimpse into what bravery is and why it matters.
2. What Is the Nature of This Knowledge?
Moral knowledge, if the strained phrase is to be used at all, is knowing how to behave in certain sorts of situations in which the problems are neither merely theoretical nor merely technical. (Ryle, 1949, 316)
When we consider what sort of moral knowledge might meaningfully become "embedded" in our social practices, it makes sense to bring into our service the long-standing distinction between knowledge-that and knowledge-how (or "know-how"). This distinction has been cashed out in a variety of ways and has been discussed alongside a variety of other distinctions, such as the distinction between theoretical vs. practical knowledge and between semantic (or declarative) vs. procedural knowledge (for summaries, see Bengson & Moffett, 2012; Bengson, 2013; Fantl, 2008, 2016). But roughly, what it comes down to is the difference between the "knowing" that is achieved through the possession of certain propositions (or "facts"), on the one hand, and the "knowing" that is achieved through the possession of certain capacities, abilities, and/or dispositions to behave in certain ways, on the other. Of course, people disagree about the nature of knowing-how relative to knowing-that. What has been dubbed the "anti-intellectualist" position (see, among others, Ryle, 1946, 1949; Schiffer, 2002; Noë, 2005; Wallis, 2008; Adams, 2009; Devitt, 2011) maintains that knowing-how is essentially grounded in the possession of a set of capacities, abilities, and/or dispositions to behave in certain ways—ways relevant to the activity in question. While that know-how may link up in various ways with a body of relevant propositional knowledge (i.e., I may know a lot of facts about how to x and x-ing), possessing that propositional knowledge, by itself, is not sufficient for knowing-how to x. Intellectualists, not surprisingly, argue that the anti-intellectualist position is untenable (Stanley & Williamson, 2001). The reasons given vary depending on the intellectualist, but a few of the criticisms of the anti-intellectualist position include that it fundamentally misrepresents the nature of propositional knowledge, which need not necessarily be consciously accessible and declarative in the way anti-intellectualists make it out to be (Fodor, 1968; Stanley, 2011); that knowing-that is often a necessary part of knowing-how (Snowdon, 2003); and that under certain circumstances people are willing to both attribute know-how to people who lack the requisite capacities, abilities, and/or dispositions and fail to attribute know-how to people who possess them—strongly suggesting that capacities, abilities, and/or dispositions are neither necessary nor sufficient for knowing-how (Bengson et al., 2009). Consider an example. Because of where I grew up (near the Rockies, which afforded many different rock climbing opportunities for those so inclined), I possess a wealth of propositional knowledge about rock climbing—e.g., how and where it is done, what equipment, skills, and level of chutzpah are necessary to do it, etc.—even though I did not bother to learn to rock climb myself. The question is: do I know how to rock climb? We might argue that, in a sense, yes. After all, I have a good degree of factual (and "theoretical") knowledge about rock climbing—so it seems acceptable to say that I know how to rock climb, if what we mean by that is that I know a lot about rock climbing, including how it is done. Yet it also seems perfectly appropriate
for my rock-climbing friends to protest that clearly I don't know how to rock climb. After all, I have never gone rock climbing, nor could I (beyond a very basic level) do so successfully, even with all the factual/theoretical knowledge I possess. While there are a number of responses to this that could be given by both sides of the debate—including finding ways to bridge the gap between them13—it strikes me that, whatever view one adopts, this example captures something very important about know-how. Even if we don't agree with the strong anti-intellectualist position that know-how always and only is the possession of certain capacities, abilities, and/or dispositions, it is nonetheless the case that much of the value of someone knowing how to do something, as opposed to merely knowing a lot about that something, is that she can (under the right circumstances) successfully do it. When it comes to moral knowledge, this seems especially to be the case. Knowing everything there is to know about honesty, including how honesty "is done," is of little value to anyone if we are unable to succeed in being honest when the circumstances require it. And being honest in any significant sense (i.e., not just by accident or for irrelevant reasons but because and whenever the circumstances require it) certainly involves more than just the possession of a body of propositional knowledge14—it involves a host of other lower- and higher-level capacities, abilities, and/or dispositions, such as the ability to perceive when honesty is required, the motivation to be honest, the self-regulatory capacity to override contrary inclinations, and so on.15 Indeed, in the absence of such capacities, abilities, and/or dispositions, it isn't clear how much a body of propositional knowledge (including a set of moral facts, rules, principles, or maxims) would really matter.16 The same seems true at the social level as well. It is the exercise and expression of our moral knowledge in the form of good (honest, respectful, compassionate, brave, generous) deeds that has the most value. If encouraged and encountered frequently enough, these morally positive interactions become a normal part of our daily lives—of how we act and what we do—embedded in the social practices in which we collectively engage. In turn, such practices are readily encountered, witnessed, imitated, and performed, thereby perpetuating the cycle of virtue. This is not to deny that propositional knowledge (e.g., moral rules/principles/maxims) has an important role to play. Even the most ardent anti-intellectualists acknowledge that propositional knowledge often functions as a guide in the development of know-how's capacities, abilities, and/or dispositions—if nothing else, it gives the moral learner a rough idea of what to aim for and why, providing a few essential heuristics or rules of thumb (e.g., "honesty is the best policy"; "sharing is caring") to follow. Without them, it may be difficult to know where/how to begin the process of developing the relevant capacities, abilities, and/or dispositions (Dreyfus & Dreyfus, 1990, 1991; Varela, 1999).17 But, as Clark (2000) has elsewhere argued, such propositional knowledge not only helps the learner begin to navigate this socially shared "moral space." More importantly, it plays a crucial role in creating that moral space to begin with. To see this, consider again the example of Annas's boy.
In order for him to see beyond the concrete dissimilarities of different observed instances of bravery—engaged in at different times in response to different threats by different people—to successfully identify the much more abstract feature(s) that links them all together, he needs the concept of bravery. According to Clark, concepts (such as bravery or generosity) are "mind tools" designed to capture abstract patterns of information that would otherwise remain undetectable beneath the surface dissimilarities, thereby bringing them to the surface and making them "observable" objects (see also Jackendoff, 1996; Tse & Altarriba, 2008). The idea, then, is that we can see that someone is being respectful, for example, across a range of different behaviors, each of which includes very different sorts of surface information—e.g., your friend is being respectful both when he shakes a stranger's hand in greeting and when he allows his grandparents to serve themselves food before he does, even though the two situations involve different actions (down to the body language and facial expressions) directed toward different people—because they both instantiate a shared abstract pattern, which we make concrete and therefore observable by introducing the concept of respect.
It would make sense, then, that our social practices commonly include not only behaving in certain ways but also identifying, labeling, and discussing those behaviors—in particular, for those who are learning and most especially when they happen to get it wrong.18 This suggests that our social practices successfully impart the moral knowledge they contain not only by inculcating in us a rich repertoire of capacities, abilities, and/or dispositions to behave in certain ways but also by highlighting and unifying the core network of shared moral values around which those social practices are organized—creating, as Clark (2000) calls it, a "situated moral epistemology" (hereafter, "moral community"). In other words, we collectively navigate and experience the moral space that we ourselves have created through the interactive relationship between our social practices and the internal and external dialogue that accompanies them.
The creation of this shared moral space (or moral community) is critical for at least two reasons. First, as Churchland (2000) notes, it opens up a space of possibility—an opportunity for us to deepen our understanding of, appreciation for, and participation in the promotion of our individual and mutual welfare. Second, the social architecture of shared values and practices provides a necessary level of stability from within which this space of possibility can be explored.
Of course, the danger is that as much as these shared values and practices open up a space of possibility, they leave uncaptured and unexpressed a range of other ways of valuing and behaving. As Clark (2000) writes, "every choice of moral vocabulary is . . . restrictive, rendering other patterns invisible to all but the most breathtaking ('revolutionary') exercises of individual thought" (19). Likewise, our focus on certain practices (and not others) as the recognized expression of that vocabulary risks becoming unnecessarily dogmatic and repressive, blind to unique and organic expressions of our shared values—much less the exploration of new values, not necessarily widely shared. In other words, each moral community—by virtue of the nature of its situatedness—is at risk not only of generating and promoting moral knowledge but also of generating moral ignorance (Alcoff, 2007; Grasswick, 2016).19
Unfortunately, as Alcoff (2007) and others have pointed out, even more worrisome types of ignorance are possible. As different moral communities emerge, group identities and structures of privilege (including moral privilege) also emerge, both between and within those communities, which become reflected in their social practices.
Such structures of privilege inevitably create variable "epistemic dispositions," where ignorance becomes systematically linked to specific features—created and/or reinforced by social practices—of different groups of "knowers" (especially the underprivileged) within the community. And as these existing social practices tend to support and reinforce themselves, this eventually results in epistemic distortions that deny—while at the same time supporting—the structures of privilege that gave rise to them. When this happens, ignorance is no longer the product of different kinds of limitations but a "substantive epistemic practice" in itself (Alcoff, 2007, 47; see also Mills, 2007).
These are worries that every moral community must take seriously. And as members of these communities, we must always keep in mind that as moral beings, we remain essentially unfinished. Thus, we must strive to strike a balance between the old and the new, recognizing that the stability provided to us by the social architecture we have created through our shared moral values and practices is useful only insofar as it serves as a platform from which to continue growing and developing as moral beings20—a process that will invariably require an ongoing transformation of our social architecture. Perhaps, then, the most important moral knowledge contained within our social practices is the knowledge that those practices are and will always be a "work in progress."
Notes
1. For a good overview of early moral development, see Killen and Hart (1995) and Killen and Smetana (2013). See also chapters from Snow's (2014) recent Cultivating Virtue (in particular the chapters by Narvaez, Thompson, and McAdams), Churchland (2015), and Chapter 5 of this volume.
2. Of course, "see" and "looks like" can be used in several different ways. Here, the argument would be for a form of moral perception, where it is through the observation of social practices that certain kinds of moral knowledge can be gained—to be later reinforced through imitation and practice. See McBrayer (2010) for a discussion of different uses of perceptual terms in his "limited defense" of moral perception, as well as Blum (1991), Cuneo (2003), Wright (2007), Audi (2013), and Chapter 17 of this volume for defenses of moral perception more generally—though see also Dancy (2010), Huemer (2001), Jacobson (2005), and Hills (2015), among others, for a more critical/alternative perspective.
3. I recently watched a "hidden camera" social experiment video that highlighted the ease with which we do this—start participating in social practices simply because everyone else around us is doing it, even though we have no idea why (www.youtube.com/watch?v=MEhSk71gUCQ).
4. See Chapter 15 of this volume for further discussion of the centrality of respect in many moral traditions.
5. These feedback loops are not unlike those involved in other forms of skill development. See, however, Russell's (2015) worry about the inevitable complexity and "messiness" of such feedback loops, especially, though not exclusively, in the case of virtue development.
6. For an interesting discussion of the latter, see Silberman's (2015) Neurotribes or Solomon's (2012) Far From the Tree, which both provide fascinating discussions of our social practices centered around our understanding (and misunderstanding) of "disability."
7. Another reason why social practices can spring up and/or die has nothing to do with the moral knowledge they contain but with the value placed on that moral knowledge. For example, over the years I've noticed that one of the greeting practices once employed ubiquitously in Cambodia is becoming less common. This is arguably not a sign that the practices viewed as appropriate for expressing respect for social position have changed—it's not being replaced with an alternative practice—but rather that the importance of expressing respect for social position has itself changed. People typically don't consider it as important, at least in certain contexts.
8. As he becomes aware of the changing norms, the man will have to choose between continuing to engage in a practice that he experiences as a genuine display of respect and adapting his practices to be more in alignment with what he recognizes as the important moral value of gender equality. More generally, Calhoun's (2000) important work on the virtue of civility highlights the fact that in a morally imperfect world, communicating or displaying moral values—such as respect—can often require engaging in socially recognized practices that may not actually be, based on our best moral understanding, consistent with those values.
9. Though her hostility might have been better placed if the person holding the door had been a younger man—someone raised with more gender egalitarian social norms—who should have "known better."
10. This aligns quite well with the view put forth by Alfano (2013), in which he argues that we should think of virtue and character as social constructs—they come into existence (and are maintained) through the architecture of social reinforcement.
11. The problematic historical issues associated with this day aside.
12. I think the "merely" here is important—certainly, his coming to be brave can include chasing dogs off, if the circumstances where that was necessary arose.
13. As in Bengson and Moffett's (2007, 2012) account of nonpropositional intellectualism, which would say that my knowing how to rock climb involves a nonpropositional understanding (or conception) of rock climbing. More specifically, for me to know how to rock climb, I would need to stand in an objectual understanding relation to a way w of rock climbing.
14. As the studies conducted by Rust and Schwitzgebel (2014) and Schwitzgebel and Rust (2014) highlight.
15. Much of this is captured by Snow's (2010) discussion of "social intelligence," which she argues is foundational to virtue (virtues being particular forms of social intelligence).
16. This is consistent with the view put forth by Churchland (1996), when he argued for a "portrait of a moral person as one who has acquired a certain family of perceptual and behavioral skills," a portrait that "contrasts sharply with the more traditional accounts that pictured a moral person as one who has agreed to follow a certain set of rules" (106).
17. See Chapter 21 of this volume for a discussion of the role moral theory plays in everyday thought and action.
18. As was the case when we first moved to South Carolina and my son was repeatedly chastised for failing to respond to his teachers (and other adults) with "Ma'am/Sir"—not only was this social practice regularly displayed for him to observe/imitate, but he was regularly reminded that it was an important display of respect. Interestingly, over time he came to experience it as such—and continues to engage in it to this day, though he now lives in a part of the country where it is not as common.
19. See too Chapter 15 of this volume, where moral parochialism is similarly diagnosed.
20. Or, to put it in Annas's (2011) terms, a platform from which to learn and aspire to greater virtue.
References
Adams, M. P. (2009). "Empirical Evidence and the Knowledge-That/Knowledge-How Distinction," Synthese, 170, 97–114.
Alcoff, L. (2007). "Epistemologies of Ignorance: Three Types," in S. Sullivan and N. Tuana (eds.), Race and Epistemologies of Ignorance. Albany: State University of New York Press.
Alfano, M. (2013). Character as Moral Fiction. New York: Cambridge University Press.
Annas, J. (2011). Intelligent Virtue. New York: Oxford University Press.
Audi, R. (2013). Moral Perception. Princeton: Princeton University Press.
Bengson, J. (2013). "Knowledge How and Knowledge That," in Encyclopedia of Philosophy and the Social Sciences. Thousand Oaks, CA: Sage Press.
Bengson, J. and Moffett, M. (2007). "Know-How and Concept Possession," Philosophical Studies, 136 (1), 31–57.
———. (eds.) (2012). Knowing How. New York: Oxford University Press.
Bengson, J., Moffett, M. and Wright, J. (2009). "The Folk on Knowing How," Philosophical Studies, 142 (3), 387–401.
Blum, L. (1991). "Moral Perception and Particularity," Ethics, 101, 701–725.
Calhoun, C. (2000). "The Virtue of Civility," Philosophy & Public Affairs, 29 (3), 251–275.
Churchland, P. M. (1996). "The Neural Representation of the Social World," in L. May, M. Friedman and A. Clark (eds.), Mind and Morals. Cambridge, MA: MIT Press.
———. (2000). "Rules, Know-How, and the Future of Moral Cognition," Canadian Journal of Philosophy, 30, 291–306.
Churchland, P. S. (2015). "The Neurobiological Platform for Moral Values," Royal Institute of Philosophy Supplement, 76, 97–110.
Clark, A. (2000). "Word and Action: Reconciling Rules and Know-How in Moral Cognition," Canadian Journal of Philosophy, 30, 267–289.
Cuneo, T. (2003). "Reidian Moral Perception," Canadian Journal of Philosophy, 33, 229–258.
Dancy, J. (2010). "Moral Perception," Proceedings of the Aristotelian Society, 84, 99–117.
Devitt, M. (2011). "Methodology and the Nature of Knowing How," The Journal of Philosophy, 108 (4), 205–218.
Dreyfus, H. and Dreyfus, S. (1990). "What Is Morality? A Phenomenological Account of the Development of Ethical Expertise," in D. Rasmussen (ed.), Universalism vs. Communitarianism: Contemporary Debates in Ethics. Cambridge, MA: MIT Press.
———. (1991). "Towards a Phenomenology of Moral Expertise," Human Studies, 14, 229–250.
Fantl, J. (2008). "Knowing-How and Knowing-That," Philosophy Compass, 3 (3), 451–470.
———. (2016). "Knowledge How," in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring Edition). http://plato.stanford.edu/archives/spr2016/entries/knowledge-how/
Fodor, J. A. (1968). "The Appeal to Tacit Knowledge in Psychological Explanation," The Journal of Philosophy, 65 (20), 627–640.
Grasswick, H. (2016). "Feminist Social Epistemology," in E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter Edition). https://plato.stanford.edu/archives/win2016/entries/feminist-social-epistemology/
Hills, A. (2015). "The Intellectuals and the Virtues," Ethics, 126 (1), 7–36.
Huemer, M. (2001). Skepticism and the Veil of Perception. New York: Rowman & Littlefield.
Jackendoff, R. (1996). "How Language Helps Us Think," Pragmatics and Cognition, 4 (1), 1–34.
Jacobson, D. (2005). "Seeing by Feeling: Virtues, Skills, and Moral Perception," Ethical Theory & Moral Practice, 8 (4), 387–409.
Killen, M. and Hart, D. (1995). Morality in Everyday Life. New York: Cambridge University Press.
Killen, M. and Smetana, J. (2013). Handbook of Moral Development (1st ed.). New York: Psychology Press.
McBrayer, J. P. (2010). "A Limited Defense of Moral Perception," Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 149 (3), 305–320.
Mills, C. (2007). "White Ignorance," in S. Sullivan and N. Tuana (eds.), Race and Epistemologies of Ignorance. Albany: State University of New York Press.
Noë, A. (2005). "Against Intellectualism," Analysis, 65 (4), 278–290.
Russell, D. C. (2015). "Aristotle on Cultivating Virtue," in N. Snow (ed.), Cultivating Virtue: Perspectives from Philosophy, Theology, and Psychology. New York: Oxford University Press.
Rust, J. and Schwitzgebel, E. (2014). "The Moral Behavior of Ethicists and the Power of Reason," in H. Sarkissian and J. Wright (eds.), Advances in Experimental Moral Psychology. New York: Bloomsbury Academic.
Ryle, G. (1946). "Knowing How and Knowing That," in G. Ryle (ed.), Collected Papers, Vol. II: Collected Essays (1929–1968). New York: Barnes & Noble.
———. (1949). The Concept of Mind. Chicago: University of Chicago Press.
Schiffer, S. (2002). "Amazing Knowledge," The Journal of Philosophy, 99 (4), 200–202.
Schwitzgebel, E. and Rust, J. (2014). "The Moral Behavior of Ethics Professors: Relationships Among Self-Reported Behavior, Expressed Normative Attitude, and Directly Observed Behavior," Philosophical Psychology, 27 (3), 293–327.
Silberman, S. (2015). Neurotribes: The Legacy of Autism and the Future of Neurodiversity. New York: Avery Publications.
Snow, N. (ed.) (2010). Cultivating Virtue: Perspectives from Philosophy, Theology, and Psychology. New York: Oxford University Press.
Snowdon, P. (2003). "Knowing How and Knowing That: A Distinction Reconsidered," Proceedings of the Aristotelian Society, 104 (1), 1–29.
Solomon, A. (2012). Far From the Tree: Parents, Children and the Search for Identity. New York: Scribner Publishers.
Stanley, J. (2011). Know How. Oxford: Oxford University Press.
Stanley, J. and Williamson, T. (2001). "Knowing How," The Journal of Philosophy, 98 (8), 411–444.
Sullivan, S. and Tuana, N. (eds.) (2007). Race and Epistemologies of Ignorance. Albany: State University of New York Press.
Tse, C. S. and Altarriba, J. (2008). "Evidence Against Linguistic Relativity in Chinese and English: A Case Study of Spatial and Temporal Metaphors," Journal of Cognition and Culture, 8, 335–357.
Varela, F. (1999). Ethical Know-How: Action, Wisdom, and Cognition. Stanford, CA: Stanford University Press.
Wallis, C. (2008). "Consciousness, Context, and Know-How," Synthese, 160, 123–153.
Wright, J. L. (2007). "The Role of Moral Perception in Mature Moral Agency," Review Journal of Political Philosophy, 5 (1–2), 1–24.
Further Readings
For good relevant compilations, there are Snow, Cultivating Virtue: Perspectives from Philosophy, Theology, and Psychology (New York: Oxford University Press, 2010) on virtue and Bengson and Moffett, Knowing How: Essays on Knowledge, Mind, and Action (New York: Oxford University Press, 2012) on know-how. There are several relevant articles in the Canadian Journal of Philosophy, Supplementary Volume 26: Moral Epistemology Naturalized. For anyone interested in the epistemology of ignorance, see Sullivan and Tuana, Race and Epistemologies of Ignorance (Albany: State University of New York Press, 2007). I also recommend Calhoun's "The Virtue of Civility," Philosophy & Public Affairs, 29 (3), 251–275, 2000.
Related Topics Chapter 3 Normative Practices of Other Animals; Chapter 5 Moral Development in Humans; Chapter 6 Moral Learning; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 27 Teaching Virtue.
23
GROUP MORAL KNOWLEDGE1
Deborah Tollefsen and Christopher Lucibella
Recent discussions of group knowledge (Tuomela, 2004; List, 2005; Tollefsen, 2002a; Goldman, 2004, 2014) raise the interesting possibility that moral knowledge might be collectivized. Consider the following example: After acquiring artificial intelligence start-up DeepMind in 2014, Google announced plans to form an ethics board that would provide guidance on the moral uncertainties raised by rapid advances in artificial intelligence.2 Though the membership of this board remains a mystery,3 Google announced that it would be made up of scientists, historians, philosophers, and public policy experts. More recently, the Partnership on Artificial Intelligence to Benefit People and Society, formed by the world's largest technology firms including Amazon, Facebook, Google, Microsoft, and IBM, aims to tackle the ethical issues that have and will inevitably arise from advancements in technology.4 Through joint deliberation these committees may form moral judgments about surveillance and the privacy of users, virtual reality, artificial life forms, and technological warfare. Could such committees be a source of moral knowledge?
An initial caveat is in order. Debates regarding the possibility of moral knowledge are beyond the scope of this chapter. If an error theory is correct, then there is no moral knowledge, because all moral propositions are false.5 If there is no moral knowledge, the question of whether groups possess moral knowledge doesn't get off the ground. We don't have space here to argue for the possibility of moral knowledge. Rather, we are going to assume for the purposes of this chapter that moral knowledge (of some kind) is possible. Based on this assumption, we will explore the possibility of group moral knowledge and of deferring to groups on moral matters.
1. Group Knowledge
Over the past few decades, epistemology has become "socialized." One of the ways in which it has shed its individualistic spirit is in its exploration of the possibility that epistemological properties such as rationality and objectivity, as well as doxastic states such as belief and judgment, might be found at the group level. The epistemology of groups is now a rapidly growing subfield of social epistemology.
The literature on group knowledge can be divided into two main (but not mutually exclusive) approaches to the phenomenon. The collective intentionality approach (Gilbert, 1989, 1994, 1996, 2003; Tuomela, 1992, 1993, 1995, 2005, 2007; Hakli, 2007; Tollefsen, 2002a, 2015) draws on accounts of group belief (Gilbert, 1989, 1994, 1996; Searle, 1990, 1995; Bratman, 1993, 1999, 2004, 2006) in order to offer accounts of group knowledge. The extended epistemology approach (Palermos, 2014, 2015; Palermos & Pritchard, 2013) combines virtue reliabilism (Greco, 1999, 2010) with the hypothesis of distributed cognition (Barnier et al., 2008; Hutchins, 1996; Sutton, 2008; Theiner et al., 2010; Theiner, 2013; Tollefsen & Dale, 2012; Tollefsen, 2006) from philosophy of mind in order to account for group knowledge as the product of distributed cognitive abilities. The extended epistemology approach is a relatively new contribution to the literature and rests on a particular view of the nature of knowledge (a virtue-theoretic account). The collective intentionality approach, on the other hand, includes a number of established theories of group belief that could be (and have been) extended in a variety of different ways depending on one's theory of knowledge. For the purposes of this chapter, therefore, we focus on the collective intentionality approach.
The traditional analysis of knowledge has it that knowledge is justified true belief. Although the traditional analysis has been challenged, belief has played a relatively stable role in accounts of knowledge. To say that S knows that p is, to many, to say that S has a certain sort of belief.6 It is not surprising, then, that group belief should play a prominent role in discussions of group knowledge.
Prima facie, the idea that groups could have beliefs is a bit puzzling. Beliefs are typically characterized as "mental" states. Groups don't have minds or brains, so how could they have mental states like belief? Yet we have a robust practice of attributing beliefs to groups. As group members we attribute beliefs to "us," such as "We believe that our institution should pay adjunct faculty a fair wage." In addition, we often take the intentional stance toward groups and attribute beliefs to groups from the third-person perspective (e.g., "The committee believes candidate A is the best candidate," "Corporation X believes that they are not polluting the environment").
One might argue that our attributions to groups are simply shorthand ways of referring to the sum of the beliefs of group members or to the commonly held beliefs within a group.7 Margaret Gilbert (1987, 1989, 2000, 2002) was the first to argue against this "summative" approach. According to her analysis, the summative account clearly isn't sufficient for group belief. Every member of the CIA probably believes that 2 + 2 = 4, but it is not likely to be a belief that we attribute to the CIA. We can also imagine cases where there are two groups with the same members and yet attribution of a belief to one is appropriate whereas attribution to the other is completely inappropriate. Consider the case of two committees, the dress code committee and the budget committee, whose members are coextensive. The dress code committee might believe that tank tops are inappropriate for work whereas the budget committee has no view on the matter. Whether a group believes something seems to depend on more than what its individual members believe.
Somehow the proposition attributed to the group must play a role in the life of the group—in its deliberations, policy making, and joint actions. Gilbert has also argued that the summative account doesn't provide necessary conditions for group belief. According to Gilbert, we can imagine cases where no member of a group personally believes that p but p is appropriately attributed to the group. Members might accept that p is the judgment of the group without believing that p is true. Group belief is a function of members accepting a proposition as the view of the group (Tuomela, 2004; Hakli, 2007). A group's belief can often diverge from the beliefs of its members; hence, group belief cannot be reduced to the beliefs of its members. The "divergence thesis" (Mathiesen, 2011) is a staple in the literature on group belief.
List and Pettit (2002, 2011) have used the "discursive dilemma" as a way of supporting the divergence thesis and as a basis for arguing that groups can be agents. Agents are systems subject to rationality constraints, among other things. An agent forms attitudes and acts on those attitudes in ways that are responsive both to the environment and to rationality constraints such as consistency and coherency. List and Pettit argue that certain types of groups also form attitudes in response to rationality constraints. These constraints are imposed at the collective rather than the individual level. Their focus is on decision-making groups, such as committees, and on aggregation procedures. Aggregation procedures are mechanisms for combining the judgments of individual members into group judgments that are endorsed or accepted by group members. Importantly, group members might endorse a group judgment that p, for instance, without individually believing or judging that p. This may happen when a group is trying to meet the "rationality challenge" (List, 2005; Goldman, 2004) and the group is trying to make sure that its judgments are consistent across a set of interconnected propositions or with past judgments it has made. Groups might end up endorsing a view that no member individually endorsed, so there is a divergence between what the group believes and what the individuals believe.
List offers the following example (2005, 27). An expert committee is charged with preparing a report on the health consequences of air pollution in a big city. Suppose they are asked to make judgments on the following statements:
P: The average particle pollution level exceeds 50 micrograms per cubic meter air.
P > Q: If the average particle pollution level exceeds 50 micrograms per cubic meter air, then residents have a significantly increased risk of respiratory disease.
Q: Residents have a significantly increased risk of respiratory disease.
Suppose the committee uses a majority aggregation procedure whereby the group's judgment on each proposition is the majority judgment on that proposition. The group would then, on the basis of the matrix in Table 23.1, judge that P is true, that P > Q is true, and that Q is false.
Table 23.1 Expert committee
              P      P > Q   Q
Individual 1  True   True    True
Individual 2  True   False   False
Individual 3  False  True    False
Majority      True   True    False
This would make for a very irrational group, as it would endorse a set of inconsistent statements. Note that each individual holds a consistent set of propositions. But if we aggregate the votes of individuals according to how a majority votes on each proposition, then we get a decision that is irrational for the group. This is referred to as the "discursive dilemma." It can be generalized to various contexts and arises for any aggregation procedure that requires universal domain, anonymity, and systematicity (List & Pettit, 2002). Universal domain specifies that the decision-making procedure "accepts as admissible input any logically possible combinations of complete and consistent individual judgments on the proposition" (List, 2005, 28). That is, universal domain specifies that dissent is acceptable. Anonymity specifies that each individual's judgment is given equal weight. Systematicity holds that the group's judgment on each proposition depends solely on the individuals' judgments on that proposition, and the same holds for all propositions.
The discursive dilemma can be resolved by altering the aggregation procedure to relax at least one of the conditions. One could, for instance, relax the anonymity requirement and give one person's vote more weight. This "dictatorial procedure" would determine the group's view on the basis of what an individual decided was the view of the group. The individual could, then, make sure that the group endorsed a set of consistent propositions. Although some groups might agree to such a procedure, it would conflict with the ideal of democratic committees. A group could give up on the idea of universal domain and only accept inputs into the decision-making procedure that were sure to produce group judgments that did not conflict with the majority's judgment on the matter. This happens in groups that demand consensus agreement before a vote. But this is clearly not an option for all groups. In matters of controversy, or where domains of expertise differ, disagreement among members is inevitable. Finally, one could give up on systematicity. If one gives up the idea that all of the propositions should be given the same weight and treats different propositions differently in the process of forming the group's judgments, achieving rationality at the group level is possible. The "premise-based" procedure identifies certain propositions as being conceptually more basic than others, aggregates the votes on those propositions (identified as the "premises"), and allows those votes to determine the group's view regarding another proposition that is conceptually dependent on the others (identified as the "conclusion"). In our example, the first two propositions would be deemed the premises and Q would be the conclusion. Since a majority of people found P true and P > Q to be true, the group would judge that Q is true. On this method the group would endorse a consistent set of propositions, but it would conflict with what a majority of people decided regarding Q. That is, on this method, the group would judge that residents have a significantly increased risk of respiratory disease, but the majority of members would have judged that this is not the case. This is the "divergence" on which the "divergence thesis" rests.
There are a variety of other aggregation procedures that might allow a group to resolve the dilemma. For instance, giving up on both systematicity and anonymity would involve giving priority to some set of propositions in determining the group view and implementing a distribution of epistemic labor. Some members of the group might be experts in some domain, and so their vote on some issues would be aggregated, and another set of experts might determine the vote on a distinct set of issues.
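To make these aggregation procedures concrete, the following is a minimal sketch in Python (our illustration, not List and Pettit's own formalism), computing proposition-wise majority and premise-based judgments for the profile in Table 23.1, with "P > Q" read as a material conditional:

```python
# A minimal sketch (our illustration) of proposition-wise majority aggregation
# versus the premise-based procedure, using the judgment profile from Table 23.1.
# Reading "P > Q" as a material conditional, a judgment set is consistent just
# in case the judged value of "P > Q" matches the conditional computed from the
# judged values of P and Q.

from collections import Counter

profile = {
    "Individual 1": {"P": True,  "P > Q": True,  "Q": True},
    "Individual 2": {"P": True,  "P > Q": False, "Q": False},
    "Individual 3": {"P": False, "P > Q": True,  "Q": False},
}

def majority(proposition):
    """Majority judgment on a single proposition across the profile."""
    votes = Counter(member[proposition] for member in profile.values())
    return votes[True] > votes[False]

def consistent(judgments):
    """Check the judged conditional against the material conditional."""
    return judgments["P > Q"] == ((not judgments["P"]) or judgments["Q"])

# Proposition-wise majority: each proposition is aggregated independently.
majority_set = {p: majority(p) for p in ["P", "P > Q", "Q"]}
print(majority_set)               # {'P': True, 'P > Q': True, 'Q': False}
print(consistent(majority_set))   # False -- the discursive dilemma

# Premise-based procedure: vote only on the premises; the group's view on the
# conclusion Q then follows from the group's premises by modus ponens.
premise_based = {p: majority(p) for p in ["P", "P > Q"]}
premise_based["Q"] = premise_based["P"] and premise_based["P > Q"]
print(consistent(premise_based))  # True -- but Q diverges from the majority vote
```

Every individual judgment set in the profile passes this consistency check; only the proposition-wise majority set fails it, which is the dilemma in miniature.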
In this case, "the group makes a collective judgment on each premise by taking a majority vote on that premise among the relevant 'specialists,' and then the group derives its collective judgments on the conclusion from these collective judgments on the premises" (List, 2005, 29). Again, this might lead to cases where there is a divergence between what individual members believe or judge and what the group's belief is. There may even be cases where no individual believes that p but the group judges that p.
If groups can have beliefs, then it is natural to ask whether they might also have knowledge. Process reliabilism offers one promising approach to making sense of group knowledge. Unlike theories of knowledge that involve internalist theories of justification requiring subjects to have access to reasons for belief, process reliabilism focuses on belief-forming processes.8 According to process reliabilism, knowledge is true belief produced by a reliable process. As Tollefsen (2002a) notes, as long as one accepts that the processes that result in belief formation can be distributed across individuals, the idea of group knowledge is plausible.
List (2005) also adopts a process reliabilist approach. He begins with Nozick's (1981) account of knowledge. According to Nozick, a subject knows that p if (i) the agent believes that p, (ii) p is true, (iii) if p were true, the agent would believe that p, and (iv) if p were false, the agent would not believe that p. List suggests that if we use a reliabilist measure to determine how well an agent satisfies conditions (iii) and (iv), we can see how a group might meet these conditions as well. List uses two probabilities to capture conditions (iii) and (iv). The "positive" probability is the probability that an agent believes that p, given that p is true. The "negative" probability is the probability that an agent does not believe that p, given that p is false. These probabilities represent an agent's positive and negative reliability on an issue. Aggregating the positive and negative reliability of members of a group will produce the group's positive and negative reliability regarding an issue. In cases where expertise is distributed across group members, reliability may be increased. Consider, again, the case involving a group of people who must make a collective judgment on a series of initial propositions that serve as evidence for another proposition, the conclusion. Instead of having each member vote on each proposition, the group might have certain individuals who are experts in that domain vote on a particular proposition, and other members, experts in different areas, vote on another proposition. In such a case the group might be more reliable with respect to p (the conclusion) than any particular individual, and so we have group knowledge that p without any individual knowing that p.
Recently, Alvin Goldman (2014) has broadened his conception of a reliable process to include social processes. Goldman remains agnostic regarding the nature of group belief but adopts Christian List and Philip Pettit's account in order to develop a theory of the justification of group belief. Goldman calls the function that takes individual attitudes as inputs and yields collective beliefs as outputs a belief aggregation function (BAF). He then proposes a way of aggregating the justification of group belief. He calls this a justification aggregation function (JAF). According to Goldman, the greater the proportion of members who are justified in believing that p and the smaller the proportion of members who are justified in rejecting p, the greater the justification of the group belief that p.
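List's two probabilities invite a back-of-the-envelope calculation. The sketch below is our illustration, not List's own model: it assumes, in particular, that members judge independently and that each member's positive and negative reliability are equal, so a single number per member suffices. On those assumptions, a group's reliability under proposition-wise majority voting is the probability that a strict majority judges correctly:

```python
# A Condorcet-style sketch (ours) of how individual reliabilities might
# aggregate under majority voting. Assumptions: independent members; each
# member's positive and negative reliability are the same single number.

from itertools import combinations

def majority_reliability(reliabilities):
    """Probability that a strict majority of independent members judges
    correctly, given each member's probability of judging correctly."""
    n = len(reliabilities)
    needed = n // 2 + 1
    total = 0.0
    for k in range(needed, n + 1):
        for correct in combinations(range(n), k):
            prob = 1.0
            for i, r in enumerate(reliabilities):
                prob *= r if i in correct else (1 - r)
            total += prob
    return total

# Three members, each right 70% of the time: the group beats any single member.
print(round(majority_reliability([0.7, 0.7, 0.7]), 3))  # 0.784
```

On these assumptions the group outperforms its best member; with correlated or very unreliable members the effect can reverse, which is one reason the question of which group processes are reliable remains, as noted below, an empirical one.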
Goldman now accepts that, at least in collective contexts, belief-forming processes extend across "agential gaps" along the lines suggested by Sanford Goldberg (2010) for the case of testimony. Goldberg (2010) argues that in cases where we rely on the testimony of another, the justificatory base extends beyond the individual hearer. Of course, whether and which group belief-forming processes are reliable is an empirical question. Just as process reliabilism defers to cognitive science to identify the reliable belief-forming processes in the brain, so too "social" process reliabilism will have to defer to the social sciences to tell us which group belief-forming processes are reliable. The possibility of "groupthink" and various other social psychological effects will need to be taken into consideration.
Given that groups can have beliefs that are not reducible to the beliefs of individual members and given that we can make sense of the reliability of group belief-forming processes and hence group knowledge, we can now consider the possibility of group moral knowledge.
2. Group Moral Knowledge
The examples of group knowledge in the literature tend to focus on cases involving empirical knowledge or practical knowledge. But we can easily extend these cases to include moral matters. Consider the following adaptation of List's original example involving a committee that must make judgments about the truth of various propositions and draw a conclusion based on those propositions. The additional propositions involve reference to moral norms and to what the moral obligations of the committee should be.
P: The average particle pollution level exceeds 50 micrograms per cubic meter air.
P > Q: If the average particle pollution level exceeds 50 micrograms per cubic meter air, then residents have a significantly increased risk of respiratory disease.
Q: Residents have a significantly increased risk of respiratory disease.
Q > R: If residents have a significantly increased risk of respiratory disease, then we have a moral obligation to notify them of this risk.
R: We have a moral obligation to tell residents of the increased risk of respiratory disease.
As with the previous case, there are a variety of ways of aggregating these individual judgments in order to form a group belief. As shown in Table 23.2, a majority of the members believe that they have no moral obligation to tell residents of the increased risk of respiratory disease, and yet each proposition on which that conclusion depends is supported by a majority of the members.
Table 23.2 Expert moral committee
              P      P > Q   Q      Q > R   R
Individual 1  True   True    True   True    True
Individual 2  True   False   True   False   False
Individual 3  False  True    False  True    False
Majority      True   True    True   True    False
In order to avoid group irrationality, the group could allow the majority vote on each of the premises to determine the group's judgment concerning R. On this approach, since a majority of people believe the premises to be true, the group would make the judgment that R is true despite the majority rejecting R. Whether these group belief-forming processes are reliable is an empirical matter. But we can see how one might extend reliabilist accounts of group knowledge to the case of group moral knowledge.
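Paralleling the earlier sketch, here is a minimal illustration (ours) of the premise-based option just described: majority votes are taken on the premises only, and the group's judgment on R is then derived from the group judgments on Q and Q > R by modus ponens.

```python
# A minimal sketch (ours) of the premise-based option for Table 23.2. The group
# votes on the premises; its judgment on the conclusion R is derived from the
# group judgments on Q and Q > R by modus ponens.

from collections import Counter

profile = [
    {"P": True,  "P > Q": True,  "Q": True,  "Q > R": True,  "R": True},
    {"P": True,  "P > Q": False, "Q": True,  "Q > R": False, "R": False},
    {"P": False, "P > Q": True,  "Q": False, "Q > R": True,  "R": False},
]

def majority(proposition):
    votes = Counter(member[proposition] for member in profile)
    return votes[True] > votes[False]

group = {p: majority(p) for p in ["P", "P > Q", "Q", "Q > R"]}
group["R"] = group["Q"] and group["Q > R"]  # modus ponens at the group level

print(group["R"])     # True  -- the group affirms the obligation to notify
print(majority("R"))  # False -- even though a majority of members rejects R
```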
This case is contrived for the purposes of showing how a group might be the source of moral knowledge. We don't have to look far, however, for a real case of a group that forms judgments about moral matters and for whom we might raise the question of reliability and knowledge. Consider war tribunals. On July 20, 2016, the judges of the International People's Tribunal announced their findings regarding the actions of the Indonesian state. They found the Indonesian state responsible for crimes against humanity, including murder, imprisonment without trial, enslavement, sexual violence, and hate propaganda. The same tribunal found the U.S. guilty of crimes against humanity during the Vietnam War. One might argue that such decisions are merely legal decisions rather than moral decisions, but clearly, the line is blurred here. Or consider a committee of bishops that determines whether a war is just on the basis of just war theory—a moral theory developed by the Catholic Church. Finally, consider our earlier example. Google has just convened its own A.I. ethics board to establish guidelines for the development of artificial intelligence. The board will be making decisions and providing moral guidance regarding A.I. technology and its impact on human life. In each of these cases, we have a group of people who engage in joint deliberation and joint decision making about matters that are moral. These are the sorts of groups for whom moral knowledge seems possible.
These examples all involve organized groups with decision-making structures. Judgments about moral matters are made explicit through processes of voting or deliberation and various discursive practices. But what about less unified or unstructured groups? In "The Social Epistemology of Morality: Learning from the Forgotten History of the Abolition of Slavery" (2016), Elizabeth Anderson explores how social groups learn moral lessons by engaging in certain forms of moral inquiry. The sorts of groups she has in mind are larger populations, moral communities such as the citizens of the United States or the people living in a certain culture across time. Anderson adopts a theory of group knowledge that emphasizes the functional role certain moral principles play within the life of a community. Rather than focus on the explicit formation of group beliefs (through some aggregation of individual beliefs), Anderson refers to the moral convictions that are shared within a group. These are not necessarily believed by every member, nor are they always obeyed. They often arise in more dynamic and less deliberative ways. The moral convictions of a community may not be explicitly voted on by group members. Rather, they are principles that come to play a certain role in the life of the group. A group believes that p (where p is a moral principle) when p is "operative" within the group.9 It serves as a premise in arguments but does not, itself, require further justification. Its truth is treated as a settled matter, disputing it is regarded as, if not crazy or beyond the pale, requiring a heavy burden of proof; disputants are liable to censure or even social exclusion for calling such convictions into question (2016, 76).
Moral knowledge in a group is judged by how well the moral principles of a group can solve practical problems. "On a pragmatist account of how this works, people learn about morality from their experiences in living in accordance with their moral convictions" (2016, 77).
History is a resource for moral progress as it documents the experiences of a group living under certain moral principles.
Anderson explores the ways in which moral error results when moral inquiry is undertaken in ways that exclude the oppressed and subordinate members of a group (what she calls "authoritarian moral inquiry"). Using the example of Euro-American moral inquiry concerning slavery, Anderson argues that where slaves had no participation in discussions of how slavery ought to be dismantled and how emancipation ought to unfold, moral error resulted. On the other hand, when slaves participated through legal means, it opened the door for a dialogical form of moral inquiry that led to greater moral insight.
We glimpse here the outlines of an alternative social epistemology of moral inquiry. As in standard philosophical models, it is dialogic in form, consisting of claims and counterclaims made on the background supposition that progress can be made through the examination of the merits and weaknesses each side's claims have in relation to the other. Unlike standard philosophical models, however, the dialogue is not merely imagined in a single person's head or pursued by participants who are detached from the claims being made. Rather, the critical claims and counterclaims arise from interaction of the affected parties—those who are actually making moral demands on one another and insisting that the other offer them a serious normative response (2016, 85).
If Anderson is correct, then moral knowledge is possible not just within small committees or task groups whose moral deliberations take place within organizational settings but within larger groups—moral communities—and across time.10 Further, the acquisition of moral knowledge within these groups often depends, as it does in smaller groups, on the structure of moral inquiry within them. We will return to the social epistemology of moral inquiry later.
3. Group Moral Expertise
We began by asking the following question: If moral knowledge is possible, could a group be the bearer of such knowledge? We argued earlier that, given a certain understanding of group belief and group knowledge, a group can have moral knowledge. The question remains whether groups are the sorts of subjects that can be considered to be moral experts, to which we should defer on some moral matters. There is a good deal of controversy as to whether moral knowledge can be legitimately gained through testimony alone (see Chapter 25 of this volume for a detailed treatment of this problem). However, we will argue that if testimonial transfer of moral knowledge is possible between individuals, then it is also possible between a group and an individual or between groups. In addition, we will propose that, regarding moral testimony, there is reason to think that the epistemic credentials of groups can, in some cases, be better than those of individuals. Thus we will be able to conclude that there can be circumstances in which it is permissible, desirable, or even obligatory to defer to a group's moral testimony.
In her chapter on moral expertise in this volume (Chapter 25), Alison Hills distinguishes between two types of moral expertise: practical expertise and expertise in moral judgment. Those with practical expertise will reliably act well, while those with expertise in moral judgment tend to arrive at true or nearly true moral judgments. Although it may or may not be the case that groups are well-suited to possessing moral expertise in action, we will argue that some groups are better suited to having expertise regarding moral judgments than individuals because of a group's ability to take up and adjudicate between multiple diverse viewpoints.
As a guiding example to examine this intuition, consider the commonplace practice in which hospitals and other medical care providers establish ethics committees in order to provide moral guidance for clinicians, particularly in unclear but ethically relevant cases. Faced with a moral dilemma regarding whether a course of treatment for a given patient is the right course of action, a doctor may—and in some cases, must—defer to the determination of the ethics committee as to whether or not the treatment program is morally permissible. Such a case, we contend, is unambiguously a case of a group offering moral testimony; the ethics committee confers over the case and renders a decision to the doctor—the treatment program is or is not an ethical course of action. In following the decision of the ethics committee, the doctor has deferred to the moral testimony of the committee; and while it is always possible that a doctor may disagree with the decision so rendered, in at least some cases it seems clear that he/she will form a belief that the procedure is a permissible course of treatment for his/her patient, and will be justified in forming that moral belief. Given how commonplace this practice is, nothing seems particularly problematic about this case of deference to a group's moral testimony.
Moreover, we think it is nontrivial that such ethics committees are composed of more than one member, usually including individuals who comprise a diversity of viewpoints and types of professional training—a committee might include professional ethicists, social workers, religious representatives, and clinicians, to name a few possibilities. While it would be possible to have an individual fill the role of ethical arbiter for an institution such as a hospital, we believe that there is good reason to favor the diversity of views represented by a committee. If a crucial trait for a good testimonial source of moral beliefs is that agent's degree of moral expertise, we believe it stands to reason that a group could be a better candidate for possessing such moral expertise than an individual. Provided that the group is composed of ethically sensitive and knowledgeable individuals and that the group is structured in such a way as to allow for open dialogue and measured exchange of ideas, a committee possessing a diversity of views seems like it would be the best possible source for reaching an unbiased determination about an ethical matter.
To support this claim, we look to an analysis by Helen Longino, chiefly found in her book Science as Social Knowledge. There, Longino gives an analysis of how objectivity is conceived within the sciences. She offers a contextualist account of objectivity in scientific inquiry, such that scientific objectivity is, properly speaking, a function of the larger social context in which the inquiry takes place. Thus, it is the social nature of scientific inquiry that makes that very inquiry possible and that makes it possible for that inquiry to generate knowledge. Taking Longino seriously, it follows that the pursuit of scientific knowledge is always undertaken within a social context, and the objectivity of those findings derives not from an individual practitioner's fidelity to method but from the scientific community's larger set of justificatory practices and norms.
Longino goes on to propose that it is the plurality of viewpoints that unfold within a social context and community that licenses us to describe some knowledge as objective. "It is the possibility of intersubjective criticism," she claims, "that permits objectivity in spite of the context dependence of evidential reasoning" (Longino, 1990, 71). Rather than viewing objectivity as the degree to which a given belief corresponds to an independently verified fact or set of facts about the world, Longino proposes that objectivity comes from reducing the influence of subjective preference in an individual or community's set of background beliefs—in particular through the maintenance of conditions that are conducive to allowing for questioning and criticism of those background beliefs. Objectivity is, on this account, not simply a binary determination between "objective" and "not objective" but rather a matter of more or less objective practices—i.e., practices that do a better or worse job of blocking subjective preference. As such, objectivity arises out of good deliberative practices within an epistemically relevant community for a given set of beliefs or assertions (Longino, 1990, 74).
This, in turn, relies on the possibility of criticism from alternative points of view and the subjection of hypotheses and evidential reasoning to close critical scrutiny. Intersubjective criticism provides the possibility of knowledge being objective, insofar as it allows for the possibility of addressing and critiquing the subjective background assumptions of individual practitioners.
Although the focus of Longino's analysis in her book is scientific and not moral knowledge, we believe there is good reason to draw a parallel between the two for the purposes of discussing the nature of objective moral knowledge. If we grant that scientific inquiry is a paradigm, or at least a model, of good epistemic practice, then it seems quite appropriate to posit a connection between scientific and moral reasoning. It follows from this analysis that the greater the number of different points of view included in a given community, the more likely it is that its epistemic practice will be objective, that is, that it will result in descriptions, judgments, and explanations that are more reliable in the sense of being less characterized by idiosyncratic subjective preferences of community members than would otherwise be the case (Longino, 1990, 80).
To illustrate how this applies to group moral testimony, consider again the case given earlier of the ethics committee that testifies to the rightfulness or wrongfulness of a given medical procedure. If we grant that the ethics committee and the body that organizes it have a joint commitment to good discursive practices, it follows that they would be comparatively better suited to rendering an unbiased and critically reflective moral decision. This is particularly true the more diverse the viewpoints represented by the members of the committee; if, for example, the group includes an ethicist, a social worker, a religious authority, and so on, this would all contribute to providing the kind of resistance to idiosyncratic subjective preferences that Longino characterizes as key to objectivity. The ability of the group to deliberate provides the kind of opportunity for critical questioning of individual background beliefs that is less likely to happen if an individual is tasked with making the same determination. And varying attention among different members of the group to different pieces of evidence provides what is likely to be a wider range of relevant data to be brought into the adjudication. Longino's social epistemology of science can thus be usefully extended to offer a social epistemology of moral inquiry.
The organization and structure of group moral inquiry can contribute to greater objectivity and can overcome individual biases, making some groups better at moral inquiry than the individuals that comprise them. If we are correct, groups can obtain a greater degree of moral expertise than individuals by bringing together diverse voices that contribute in a variety of ways to moral judgment and increase objectivity. Anderson (2016) makes a similar point when she argues that collective moral inquiry needs to include voices from subordinate social groups. Collective moral inquiry needs to avoid taking an "authoritarian" form. Moral inquiry is authoritarian when (1) it is conducted by people who occupy privileged positions in a social hierarchy, (2) the moral principles being investigated are those that are supposed to govern relations between the privileged and those who occupy subordinate positions in the social hierarchy, and (3) those in subordinate positions are (a) excluded from participating in the inquiry, or (b) their contributions—their claims—are accepted as requiring some kind of response, but where the response of the privileged fails to reflect adequate uptake of subordinates' perspectives and rather uses their social power to impose their perspective on the subordinates (Anderson, 2016, 78).
Anderson identifies bias in discussions of the abolition of slavery that resulted from abolitionists' authoritarian moral inquiry. The exclusion of slaves from discussions of emancipation led to absurd and inconsistent views about gradual emancipation. Condorcet, for instance, who argued for the immediate abolition of sexist practices, could not seem to arrive at the same conclusion with respect to the abolition of slavery. Why did Condorcet think so much more consistently about feminism than about abolition? It is possible that the constant interaction with his wife had something to do with his moral clarity (Nall, 2008). His wife, Sophie De Grouchy, was the author of Letters on Sympathy, translator of Adam Smith's Theory of Moral Sentiments, and hostess to a prominent salon in Paris. Inclusion of the object of his feminist concern as a co-inquirer likely enabled him to think straightforwardly about women's emancipation. By contrast, Condorcet was isolated from the slaves in the colonies. The membership of the Société des Amis des Noirs, which endorsed only gradual emancipation, was highly elitist and segregated (Anderson, 2016, 84).
Nonauthoritarian moral inquiry will involve the integration of subordinate perspectives and an uptake of those perspectives. Objectivity is increased, according to Longino, by implementing deliberative practices that are structured in such a way as to allow criticism from alternative perspectives and the subjection of judgments to close critical scrutiny. The integration of the perspectives of individuals from a subordinate group stands to contribute to this form of intersubjective criticism, insofar as it allows for the possibility of addressing and critiquing the subjective background assumptions of individual practitioners.
As we noted earlier, groups may have reasons for adopting various forms of judgment aggregation. Some of those judgment aggregation procedures may enhance moral decision making at the group level. Other procedures may decrease it. Universal domain, for instance, specifies that dissent is acceptable. Dissent can act as a check on idiosyncratic subjective preferences and disrupt patterns of groupthink. Anonymity specifies that each individual's judgment is given equal weight. In certain circumstances, anonymity might be suspended and the judgment of certain individuals weighted differently depending on their individual expertise or the sort of knowledge they contribute to the group. It may be, for instance, that those who have been subject to systematic racism or sexism can offer greater insight into the nature of the moral harm committed, and so their judgments may justifiably be weighted differently.
Likewise, as we have seen, in some cases there are reasons for groups to give up on systematicity in order to preserve consistency at the group level. Doing so preserves the rationality of the group. Thus, both the structure of group deliberation and the procedure for aggregating individual judgments can impact group moral knowledge and expertise. A social epistemology of moral inquiry can offer insight into which group decision-making processes enhance and inhibit group moral expertise. 450
4. Conclusion
In this chapter we set out to explore the possibility of group moral knowledge and the related question of whether there are cases in which it is appropriate or desirable to defer to groups on morally salient questions. By examining cases of group moral deliberation, it has become clear that a group's moral judgment can diverge from its members' moral judgments. To the extent that these group judgments constitute knowledge, there is a sense in which groups can be said to have moral knowledge. Additionally, there can be cases, such as ethics committees and institutional review boards, in which groups can be legitimate moral testifiers; in fact, there is good reason to believe that groups can be superior to individuals as moral testifiers in some cases, as groups' capacity for intersubjective deliberation allows for less bias and greater objectivity.
Notes
1. The authors are grateful to Stephan Blatti and Karen Jones for helpful comments on drafts of this chapter.
2. Boxer, Bianca. "Google's new AI ethics board might save humanity from extinction," www.huffingtonpost.com/2014/01/29/google-ai_n_4683343.html
3. Shead, Sam. "The biggest mystery in AI right now is the ethics board that Google set up after buying Deep Mind," www.businessinsider.com/google-ai-ethics-board-remains-a-mystery-2016-3
4. Parloff, Roger. "AI partnership launched by Amazon, Facebook, Google, IBM and Microsoft," http://fortune.com/2016/09/28/ai-partnership-facebook-google-amazon/
5. See Chapters 13 and 14 of this volume for further discussion.
6. The exception here is those who think knowledge is a distinct mental state and not reducible to a type of belief. See, for instance, Williamson (2000).
7. This view is attributed to Anthony Quinton (1976).
8. One might think that this dismisses internalism too quickly. Doesn't group deliberation give members (and hence the group) access to group reasons? Although group deliberation may give members access to the group's reasons, not all groups engage in deliberation prior to forming the group belief. Consider a group that simply aggregates the votes of its members without deliberation. There may be a group belief formed reliably, but the members don't have access to the group's reasons, only their own reasons for voting as they did. Requiring that group members have access to reasons in order for a group belief to be justified is, therefore, a bit too strong. See Miriam Solomon (2005) for a discussion of access internalism and group knowledge.
9. See Chapter 3 of this volume for relevant discussion of the ways in which norms might be operating in groups of nonhuman animals.
10. See Chapter 22 of this volume.
References
Anderson, E. (2016). "The Social Epistemology of Morality: Learning from the Forgotten History of the Abolition of Slavery," in M. S. Brady & M. Fricker (eds.), The Epistemic Life of Groups: Essays in the Epistemology of Collectives. Oxford, UK: Oxford University Press.
Barnier, A. J. et al. (2008). "A Conceptual and Empirical Framework for the Social Distribution of Cognition: The Case of Memory," Cognitive Systems Research, 9 (1), 33–51.
Bratman, M. (1993). "Shared Intention," Ethics, 104 (1), 97–113.
———. (1999). Faces of Intention: Selected Essays on Intention and Agency. Cambridge: Cambridge University Press.
———. (2004). "Shared Valuing and Frameworks for Practical Reasoning," in Reason and Value: Themes from the Moral Philosophy of Joseph Raz. Oxford: Oxford University Press, 1–27.
———. (2006). "Dynamics of Sociality," Midwest Studies in Philosophy, 30 (1), 1–15.
Gilbert, M. (1987). "Modeling Collective Belief," Synthese, 73 (1), 185–204.
———. (1989). On Social Facts. Princeton: Princeton University Press.
———. (1994). "Remarks on Collective Belief," in Socializing Epistemology: The Social Dimensions of Knowledge. Lanham, MD: Rowman & Littlefield.
———. (1996). Living Together: Rationality, Sociality, and Obligation. Lanham, MD: Rowman & Littlefield.
———. (2000). Sociality and Responsibility: New Essays in Plural Subject Theory. Lanham, MD: Rowman & Littlefield.
———. (2002). "Belief and Acceptance as Features of Groups," Protosociology, 16, 35–69.
———. (2003). "The Structure of the Social Atom: Joint Commitment as the Foundation of Human Social Behavior," in Socializing Metaphysics. Lanham, MD: Rowman & Littlefield.
Goldberg, S. (2010). Relying on Others: An Essay in Epistemology. Oxford: Oxford University Press.
Goldman, A. (2004). "Group Knowledge Versus Group Rationality: Two Approaches to Social Epistemology," Episteme, 1 (1), 11–22.
———. (2014). "Social Process Reliabilism: Solving Justification Problems in Collective Epistemology," in J. Lackey (ed.), Essays in Collective Epistemology. Oxford: Oxford University Press.
Greco, J. (1999). "Agent Reliabilism," Noûs, 33 (s13), 273–296.
———. (2010). Achieving Knowledge: A Virtue-Theoretic Account of Epistemic Normativity. Cambridge: Cambridge University Press.
Hakli, R. (2007). "On the Possibility of Group Knowledge Without Belief," Social Epistemology, 21 (3), 249–266.
Hutchins, E. and Klausen, T. (1996). "Distributed Cognition in an Airline Cockpit," in Cognition and Communication at Work. Cambridge: Cambridge University Press, 15–34.
List, C. (2005). "Group Knowledge and Group Rationality: A Judgment Aggregation Perspective," Episteme, 2 (1), 25–38.
List, C. and Pettit, P. (2002). "Aggregating Sets of Judgments: An Impossibility Result," Economics and Philosophy, 18 (1), 89–110.
———. (2011). Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: Oxford University Press.
Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
Mathiesen, K. (2011). "Can Groups Be Epistemic Agents?," in H. B. Schmid, D. Sirtes and M. Weber (eds.), Collective Epistemology. Frankfurt: Ontos, 20–23.
Nall, J. (2008). "Condorcet's Legacy Among the Philosophes and the Value of His Feminism for Today's Man," Essays in the Philosophy of Humanism, 16 (1), 51–70.
Nozick, R. (1981). Philosophical Explanations. Cambridge, MA: Harvard University Press.
Palermos, S. (2014). "Knowledge and Cognitive Integration," Synthese, 191 (8), 1931–1951.
———. (2015). "Active Externalism, Virtue Reliabilism and Scientific Knowledge," Synthese, 192 (9), 2955–2986.
Palermos, S. and Pritchard, D. (2013). "Extended Knowledge and Social Epistemology," Social Epistemology Review and Reply Collective, 2 (8), 105–120.
Quinton, A. (1976). "Social Objects," Proceedings of the Aristotelian Society, 76, 1–27.
Searle, J. (1990). "Collective Intentions and Actions," in Intentions in Communication. Cambridge, MA: MIT Press.
———. (1995). The Construction of Social Reality. New York: Simon and Schuster.
Solomon, M. (2005). "Groupthink Versus the Wisdom of Crowds," Southern Journal of Philosophy, 44 (Supplement), 28–42.
Sutton, J. (2008). "Between Individual and Collective Memory: Coordination, Interaction, Distribution," Social Research, 75, 23–48.
Theiner, G. (2013). "Transactive Memory Systems: A Mechanistic Analysis of Emergent Group Memory," Review of Philosophy and Psychology, 4 (1), 65–89.
Theiner, G., Allen, C. and Goldstone, R. L. (2010). "Recognizing Group Cognition," Cognitive Systems Research, 11 (4), 378–395.
Tollefsen, D. (2002a). "Challenging Epistemic Individualism," Protosociology, 16, 86–117.
———. (2002b). "Organizations as True Believers," Journal of Social Philosophy, 33 (3), 395–410.
———. (2006). "From Extended Mind to Collective Mind," Cognitive Systems Research, 7 (2), 140–150.
———. (2015). Groups as Agents. London: Polity Press.
Tollefsen, D. and Dale, R. (2012). "Naturalizing Joint Action: A Process-Based Approach," Philosophical Psychology, 25 (3), 385–407.
Tuomela, R. (1992). "Group Beliefs," Synthese, 91 (3), 285–318.
———. (1993). "What Is Cooperation?" Erkenntnis, 38 (1), 87–101.
———. (1995). The Importance of Us: A Philosophical Study of Basic Social Notions. Stanford, CA: Stanford University Press.
———. (2004). "Group Knowledge Analyzed," Episteme, 1 (2), 109–127.
———. (2005). "We-Intentions Revisited," Philosophical Studies, 125 (3), 327–369.
———. (2007). The Philosophy of Sociality: The Shared Point of View. Oxford: Oxford University Press.
Williamson, T. (2000). Knowledge and Its Limits. Oxford: Oxford University Press.
Related Topics
Chapter 3 Normative Practices of Other Animals; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 22 Moral Knowledge as Know-How; Chapter 24 Moral Epistemology and Liberation Movements; Chapter 25 Moral Expertise.
24 MORAL EPISTEMOLOGY AND LIBERATION MOVEMENTS Lauren Woomer
In this chapter, I will explore the relationship between liberation movements and moral knowledge. I will argue that liberation movements can serve as a means for generating and disseminating moral knowledge, and for cultivating agents who are receptive to this moral knowledge. Key among the ways movements achieve this are encouraging the expansion of the moral imagination and fostering the development of new collective epistemic tools.

The moral-epistemic benefits of liberation movements can be separated into two types—those that occur inside the movement and those that occur outside the movement. Inside the movement, there is the creation of a community that generates not only new moral knowledge but new moral-epistemological frameworks that stand in opposition to dominant frameworks. Participating in the movement simultaneously helps people develop into subjects who are capable of receiving this knowledge. Outside the movement, the actions of the movement help to disseminate moral knowledge and moral-epistemic resources, and thus to challenge dominant moral-epistemological frameworks.

This chapter will proceed in three parts. First, I will lay out the framing concepts of the chapter by explaining what moral-epistemological frameworks are. I will then explore how movements create moral-epistemological frameworks and how they use these frameworks to challenge dominant ways of thinking.
1. Epistemological Frameworks
Liberation movements serve as a tool to create new epistemological frameworks, including moral-epistemological frameworks, and challenge dominant ones. In this first section, I will unpack what I mean by this.

By an epistemological framework I mean the lens through which one views the world and goes about the business of knowing. This lens includes (i) the experiences one has had, (ii) the collective epistemic resources that one uses to make sense of the world, and (iii) the wider web of values and assumptions that ground these resources. I will discuss each of these in turn.

First, a knower's epistemological framework is impacted by their situatedness. Each knower occupies a particular geographical, historical, social, and political location. This
location affects what we know by influencing what is salient to us. As Gaile Pohlhaus Jr. says, "social position has a bearing on what parts of the world are prominent to the knower and what parts of the world are not" (Pohlhaus Jr., 2012, 717).1 This is in part because our location shapes what experiences we are likely to repeatedly have. This can lead to the formation of "habits of expectation, attention, and concern, thereby contributing to what one is more or less likely to notice and pursue as an object of knowledge in the experienced world" (Pohlhaus Jr., 2012, 717). This element of situatedness is thus simultaneously particular, since each individual will have their habits of attention and concern affected by their own experiences, and social, since one's position in social structures shapes what experiences one is likely to have.

Second, a knower's epistemological framework is affected by the shared epistemic resources or tools used to interpret their experiences. We rely on these tools to help us make sense of the world, and they thus affect how we know. Epistemic resources include the vocabulary and conceptual framework shared by a community, as well as "informal patterns of reasoning, current standards of evidence, currently accepted theories and background assumptions, and particular techniques of measuring and investigating" (Grasswick, 2004, 103). Moral-epistemic resources would be the subset of these tools that communities use to evaluate whether actions, decisions, social practices, and so forth are good, right, or just. An epistemic community is crucial for the development of epistemic resources, including moral-epistemic ones.2 Because of this, situatedness and shared epistemic resources reinforce one another. Which phenomena a community feels the need to develop tools to interpret depends on what its members are attending to. In turn, we are more likely to attend to what makes sense to us. Thus, our shared epistemic resources can also impact what we know.

Finally, a knower's epistemological framework is shaped by the webs of values, norms, and assumptions that underpin their epistemic resources. Kristie Dotson (following Lorraine Code) calls these webs instituted social imaginaries. An instituted social imaginary "carries normative social meanings, customs, expectations, assumptions, values, prohibitions, and permissions—the habitus and ethos—into which people are nurtured from childhood" (Dotson, 2014, 5, citing Code, 2008, 34). These imaginaries provide the wider context that shapes, grounds, and reinforces our epistemic tools and habits of attention.

All three elements of epistemological frameworks are socially shaped, and both constrain and enable our knowledge. They make it possible to know by separating relevant stimuli from irrelevant ones, allowing us to interpret and judge evidence, and providing a stable backdrop of presuppositions for our inquiries. While they help make it possible for us to know, they can also limit or distort what we know based on the particularities and biases of our community, as I will discuss further. The limitations of our epistemological frameworks can become especially problematic due to how difficult these frameworks can be to change. Dotson calls this trait resilience. To say that an epistemological framework is highly resilient means that it can "absorb extraordinarily large disturbances without redefining its structure" (Dotson, 2014, 7).
By a dominant epistemological framework, I mean an epistemological framework that is typical of a member of a dominant social group and is operative in a number of social and political structures. For example, one prevalent dominant epistemological framework in the United States would consist of the habits of attention and care typical of a white,
cis-gendered, heterosexual, able-bodied, Christian man, and the epistemic resources and instituted social imaginaries that are the primary influence in legal, educational, and scientific institutions, as well as in most forms of the arts and media. A dominant moral-epistemological framework would be the subset of these habits, resources, and systems that inform our knowledge of what is good, right, and just.

For the rest of the chapter, I will focus on how liberation movements challenge dominant epistemological frameworks, including dominant moral-epistemological frameworks. I will argue that they do so through the generation of moral-epistemic tools and knowledge by those within movements, and through the dissemination of these resources to those outside the movement. This process has the potential to counteract the resilience of dominant epistemological frameworks and lay the foundations for the development of improved frameworks. I will first examine how liberation movements effect this change on those within the movement and then how they effect this change outside the movement.
2. Moral Epistemology within the Movement
In this section I will focus on how liberation movements affect the moral knowledge of those within them. Liberation movements play two main roles in this regard—fostering the generation of moral knowledge and moral-epistemological frameworks, and cultivating receptive moral-epistemic agents. Liberation movements play this role largely because they provide an epistemic community that can be used to develop new moral-epistemic tools, as well as to develop the moral imagination.

Liberation movements generally challenge people to think of what currently seems impossible—namely the end of some entrenched form(s) of oppression. Marilyn Frye, for example, argues that it is important for feminists to try to imagine what women would be like if they were not warped by oppression. She says,

We who would love women, and well, who would change ourselves and change the world so that it is possible to love women well, we need to imagine the possibilities for what women might be if we lived lives free of the material and perceptual forces which subordinate women to men. (Frye, 1983, 76)

According to Frye, this kind of imagining takes courage because it breaks from the dominant ways of viewing the world and thus is unintelligible using dominant epistemological frameworks. Even if they're not made unintelligible, these imaginings will at least be frequently misunderstood—seeming irrational, foolish, impossible, or in some cases even immoral. Cheshire Calhoun argues, for instance, that actions that challenge conventional moral norms are often misunderstood. She says that to those who are unfamiliar with or reject the critiques of these norms that these actions are rooted in, "these acts of resistance will not be legible as either acts of resistance or as attempts to do the right thing. They will simply look like doing the wrong thing" (Calhoun, 2016, 37).

It is difficult and sometimes frightening to be unintelligible or misunderstood to the wider community in this way, not least because it can have real social consequences, such as being "regarded as deviant, outlaw, perverse, crazy, extremist" (Calhoun, 2016, 39). It is
also difficult to undertake the work of creating the intelligibility and understanding that is lacking. This leads Frye to suggest that

There probably is really no distinction, in the end, between imagination and courage. We can't imagine what we can't face, and we can't face what we can't imagine . . . we have to dare to rely on ourselves to make meaning and we have to imagine ourselves being capable of that: capable of weaving the web of meaning which will hold us in some kind of intelligibility. (Frye, 1983, 80)

This kind of imaginative thinking is thus best done within a community that can encourage the consideration of new possibilities, support people through a potentially difficult and taxing process of exploring these possibilities, and sometimes even model these possibilities. It is communities that weave webs of meaning and generate new epistemic, including moral-epistemic, tools in the process. One reason liberation movements are so useful is because they provide precisely this kind of community.

To show how this works, I will examine the prison abolition movement. The prison abolition movement works toward a world in which prisons—and usually also the wider nexus of state-based systems of punishment, such as police, surveillance, electronic monitoring, juvenile detention, and immigrant detention—no longer exist. Prison abolitionists hold that the practices of imprisonment and state-based punishment are so fundamentally flawed that simply reforming these systems is not enough: they must instead be replaced entirely.

Prison abolition is difficult to imagine and often viewed with incredulity. Angela Davis notes that a common reaction from those first hearing about prison abolition is to assume that abolitionists are simply trying to ameliorate prison conditions or perhaps to reform the prison in more fundamental ways. In most circles, prison abolition is simply unthinkable and implausible. Prison abolitionists are dismissed as utopians and idealists whose ideas are at best unrealistic and impracticable, and, at worst, mystifying and foolish. (Davis, 2003, 9–10)

This reaction occurs in part because the idea of prisons is so bound up with dominant ideas about punishment, crime, safety, and justice that their existence seems like common sense to most people. Dominant understandings, according to Brady Heiner and Sarah Tyson, typically only recognize one form of accountability for violence—the one in which an individual person is punished by the state, which is portrayed as the protector of the helpless victim. "For the state to take on this role, the community must be disappeared, the person who experienced the harm reduced to a victim, and the person who committed the harm transmuted into a monster" (Heiner & Tyson, 2017, 15). In this framework, incarceration is taken to be the natural response to violence—it is the only conceivable way to achieve justice for the victim and to protect the public from individuals who are thought to be dangerous. Using this lens,

Alternative forms of community accountability and redress that break from statecentric carceral systems appear baffling, irresponsible, even monstrous. The choice
seems to be confined to either ensnaring an individual with the punitive arms of the state or fomenting complete, unaccountable disorder. (Heiner & Tyson, 2017, 2)

Prison abolitionists seek alternatives to this false dichotomy. Liberation movements like the prison abolition movement can enable people to imagine new possibilities that seem impossible in dominant frameworks. They do this by pushing their members to imagine alternatives and creating the space in which to do that imagining. A community that takes this kind of imagining as its norm can unleash the creativity of its participants. For example, Ana Clarissa Rojas, Alisa Bierria, and Mimi Kim share the following story from a 2001 INCITE! activist institute in the Bushwick neighborhood of Brooklyn, New York.3 The institute was held in conjunction with a collective of young and adult working-class Latinas, Afro-Latinas, and black women called Sista II Sista.

During a discussion of alternatives to the violence of criminality and potential organizing strategies, a 12-year-old sista stood with her hand in the air and exclaimed, "Why don't we make Bushwick a liberation zone for women?" The room became quiet, but the deep, meditative pause was then interrupted by an enlivening set of questions: How would we do that? What would that look like? Where would we start? What would need to be in place? No one doubted its possibility. This young sista's phrase swiftly illuminated minds that had been clouded by years of state maneuvers to disempower communities. After this moment, members of the group addressed one another differently. We spoke as if we could attain that goal, and many voices became one. That question sparked our imaginations to think as a community and to conceive of solutions and responses not offered by the mainstream antiviolence movement [which generally relies on state and nonprofit structures rather than communities]. (Durazo et al., 2012b, 3, emphasis mine)

This story highlights the unique epistemic environment being fostered by the local abolitionist community. The reaction of the members of INCITE! and Sista II Sista stands in stark contrast to the incredulity that both Davis and Heiner and Tyson observe in the dominant response to abolitionist ideas. Rather than dismissing the young woman's question out of hand, the conference attendees entertain her suggestion as a live possibility and engage with it as a group. Because of this, they are able to create the epistemic space for alternative possibilities and solutions that are not available in dominant understandings of how to stop violence and address harms done by violence. In most places outside of this community, the girl's question would likely have been treated as implausible or silly, and any further imagining, and the fruits it bore, would have been cut off before it began.

This kind of expansion of imagination will be necessary to think of alternatives to prisons and state punishment. As Davis points out, achieving this goal will take more than a simple switch—instead we will have to "imagine a constellation of alternative strategies and institutions" (Davis, 2003, 107). Some strategies she suggests include "demilitarization of schools, revitalization of education at all levels, a health system that provides free physical and mental health care to all, and a justice system based on reparation and reconciliation
rather than retribution and vengeance" (Davis, 2003, 107). We can see this "constellation of alternatives" approach at work in many abolitionist projects, including the calls of a number of black-led organizations against state violence (such as the Movement for Black Lives (2016) and Black Youth Project 100 (2016)) for local, state, and federal money to be divested from police and incarceration and instead invested in providing services such as health care and education to black communities.4

In imagining new alternatives, movements develop new epistemic tools (which are, as discussed earlier, always developed collectively). They develop the language needed to conceptualize and express these alternatives, and the theories, images, and narratives needed to make sense of them. They often also develop new standards to determine whether an alternative is good, just, or effective. These tools are not ones that are found in dominant epistemological frameworks, and oftentimes they clash directly with dominant epistemic tools in a way that makes them seem strange, silly, distasteful, or wrong to outside observers.

Many of the epistemic tools developed by the abolitionist movement are moral-epistemic tools. This is because they are developed in response to questions about the right way to treat those who have done harm, what justice for victims of violence should look like, and who is worthy of our moral concern. These tools not only help us understand how our world is, but they help us judge how it ought to be and whether attempts to make progress are succeeding or failing.

We can see examples of these alternative epistemic resources in the toolkit developed by Creative Interventions, an organization founded by Mimi Kim to create community-based (as opposed to state-based) solutions that everyday people can use to address, prevent, and end violence. The toolkit aims to serve as a guide for organizations or communities seeking to engage in these community-based responses to violence instead of relying on police or other state institutions. In this toolkit, Creative Interventions walks users through the stages of their intervention model and also provides the epistemic tools necessary for understanding this model by explaining the terms and principles behind their approach.5 I will focus on their articulation of some of their principles.

Creative Interventions and many similar organizations use the "community-based intervention" or "community accountability" approach to addressing violence. This approach generally engages the person who has been harmed, the person who has done the harm, and the community in which the harm occurred with the goal of

supporting the compassionate repair of harm for survivors of violence and all of those affected by violence; supporting people doing harm to take accountability for violence (that is, recognize, end and take responsibility), and changing community norms so that violence does not continue. (Creative Interventions, 2012, 5)

The community accountability framework stands in stark contrast to the dominant understanding of accountability for violence in its conception of the roles of the parties involved in an act of violence.

The community accountability approach reconceives the role of the victim/survivor of violence, as well as that of the person doing harm. The approach centers the needs and desires of the harmed person when determining the appropriate course of action in response to a
violent incident, avoids blaming the victim for the incident, and involves them in the intervention process to the extent that they would like to be. In the dominant model, the desires of victims are not taken into account when determining the appropriate punishment for the one who did harm. The humanity of the one doing the harm is also preserved in this approach, even while they are held responsible for the harm. For this reason, Creative Interventions explains that they do not use the terms used by the criminal justice system to refer to those who do harm (such as perpetrator, criminal, and offender) (Creative Interventions, 2012, 8). The ultimate goal is to bring those who do harm back into the community. In the dominant model, by contrast, the person doing harm is expelled from the community and possibly even killed, with the goal of punishing or shaming the person, or removing them as a threat.

Finally, this approach conceives of the community as an actor in incidents of violence, holding it accountable for the role it played in enabling the violence—for instance, by "ignoring, minimizing or even encouraging violence" (Creative Interventions, 2012, 5). As such, the community also has a responsibility to change itself to address the problem.

Communities must also recognize, end and take responsibility for violence—by becoming more knowledgeable, skillful, and willing to take action to intervene in violence and to support social norms and conditions that prevent violence from happening in the first place. (Creative Interventions, 2012, 5, emphasis in original)

This again contrasts with the dominant justice model, which takes the individual to be the sole cause of the harm done and the sole bearer of the burden of change.

Moreover, many liberation movements seek not just to imagine new possibilities and develop the epistemic tools to express them, but also to model them. For those involved in liberation movements, theory isn't very valuable without practical implementation. Many organizations that work within an abolitionist framework seek to put abolitionist ideals into practice in their own organizations. For example, many seek to develop structures of accountability similar to those explored in the Creative Interventions toolkit that enable them to resolve conflict and address harm without involving the police or other state punishment structures. This not only requires them to develop appropriate theoretical resources (principles, guidelines, language, criteria for success, etc.) but also to put these ideas into action. Doing so moves moral theory out of the realm of the abstract, and exposes real-life benefits and obstacles of the ideas being explored. Besides helping people within the movement refine their views and build their imaginative capacities, the implementation process deepens their knowledge of alternative possibilities by allowing them to be directly experienced.

By creating the space to imagine new alternatives that go beyond the limits of dominant epistemological frameworks, and sometimes even to experience these new alternatives, liberation movements are able to foster the creation not only of new epistemic tools but of new epistemic subjects who are equipped to use these tools. As Heiner and Tyson write,

CA [community accountability] activists may not possess a roadmap to an abolitionist future, but they are not only creating spaces in which we can begin to
think, imagine, and feel what those future political and epistemological systems might be like; they are producing the kinds of communities capable of inhabiting those systems. (Heiner & Tyson, 2017, 27, emphasis in original)

It should be noted that the expanded imagination discussed in this section is related to one's epistemic standpoint in complex ways. Being a member of oppressed groups makes one more likely to develop an expanded imagination but does not guarantee it. Feminist standpoint theories have long held that members of oppressed groups are in a better epistemic position to understand unjust social structures due to the different set of experiences they have had while navigating these structures and the fact that they have less motivation to ignore the structures' unpleasant reality.6 We see this reflected in the fact that nearly all of the abolitionist organizations I have mentioned so far were founded by women of color, often queer women of color, and have a membership that consists mostly of people of color. As Heiner and Tyson say,

Indeed, the continuum of alternatives to law enforcement and prisons [to be discussed] have largely been developed by people who have lived and continue to live close to state violence and, thus, do not see the police or other mechanisms of the state as potential solutions to the violence in their lives. (Heiner & Tyson, 2017, 14)

Individuals who were formerly incarcerated also often play a large role in abolitionist organizations and theory, having directly experienced the harms of the prison system themselves.7 For example, Angela Davis became a prominent abolitionist scholar and also cofounded the prison abolitionist organization Critical Resistance after having herself been a political prisoner.

However, the epistemic frameworks that these organizations employ do not automatically come with being a member of an oppressed group (as most standpoint theorists would acknowledge). Work goes into developing these alternative frameworks, and this work is done within a community. Often this community takes the form of a movement. Sandra Harding, for example, argues that feminist standpoints are achieved through political activism. She says,

Only through such [political] struggles can we begin to see beneath the appearances created by an unjust social order to the reality of how this social order is in fact constructed and maintained. This need for struggle emphasizes the fact that a feminist standpoint is not something that anyone can have simply by claiming it. It is an achievement. A standpoint differs in this respect from a perspective, which anyone can have simply by "opening one's eyes." (Harding, 1991, 127)

Because standpoints are achieved, not automatic, liberation movements are still epistemically beneficial to members of oppressed groups. Movements can expose their participants to background knowledge that most people lack. For example, in her autobiography, Assata
Shakur repeatedly tells of learning about black history through her participation in various groups in the black liberation movement rather than from history books or school (Shakur, 1987).

Sometimes movements can also affirm and provide a lens for understanding the experiences and feelings of moral outrage of members of oppressed groups—sentiments that cannot be understood within dominant frameworks. Frye, for instance, touts the benefits of consciousness-raising efforts in which members of oppressed groups talk openly about their experiences in order to learn from the patterns that emerge. When women participate in this practice, Frye says,

The experiences of each woman and of the women collectively generate a new web of meaning. Our process has been one of discovering, recognizing, and creating patterns—patterns within which experience made a new kind of sense, or in many instances, for the first time made any sense at all. (Frye, 1996, 38)

This act of pattern-finding helps make experiences that are not understood within dominant frameworks intelligible and affirms them rather than leaving one feeling as though one is anomalous or crazy. This community, then, doesn't just provide a space in which to imagine new possibilities but roots this imagining in an understanding of the realities of the world and a trust in its members' lived experiences.

The fact that standpoints are achieved and not automatic also opens up the possibility that people who are not members of the oppressed group targeted by a given movement can still epistemically benefit from participating in the movement. Participating in movements can expose these individuals to alternative frameworks, and create trusting relationships within which to learn to use these frameworks.8
3. Moral Epistemology Outside the Movement
For those not directly involved in them, liberation movements primarily serve the purpose of disseminating the moral knowledge and moral-epistemic resources they generate, and putting enough pressure on dominant epistemological systems to create change. While this dissemination is important for those members of oppressed groups who are not involved in the liberation movements that target them, it is also crucial for members of dominant groups, who may not otherwise be exposed to these issues as seen from the point of view of those most affected.

One way this dissemination can be achieved is by participants in the movement sharing information, analyses, theories, and stories. These can be shared via formal print and broadcast media (such as newspapers, books, academic journals, TV news, and movies), as well as through informal media avenues (such as blogs, infographics, videos, and social media). The information and analyses in question can seek to directly convey moral knowledge (e.g., an argument for why a social practice is unjust) or can seek to provide a foundational understanding of a phenomenon that is needed to underpin moral knowledge (e.g., providing statistics about racial disparities in mass incarceration so that one is able to come to a moral understanding that it is unjust). Sharing stories of the lived experiences of those who are marginalized or oppressed (or sometimes fictional representations thereof) can
often be as powerful a tool for developing moral knowledge as distributing information and analyses.

The dissemination of moral knowledge can also be achieved through direct action tactics such as protests, marches, and the takeover of government offices or other public spaces. This method of dissemination works in part by capturing and fixing attention on unjust social practices that many, especially members of groups who are enacting or benefiting from the injustices, would rather not acknowledge, thereby overcoming a common strategy for maintaining ignorance.9 At the very least, people who do not want to face this moral problem will have to work harder to avoid paying attention to it or resort to explaining it away.

Both of these tactics can provide a source of contention, which, as Elizabeth Anderson notes, can serve as a catalyst for the moral-epistemic development of the dominant group. She argues that "social groups learn to improve their moral norms through historical processes of contention over them" (Anderson, 2016, 93). Contention can include moral argumentation or "a variety of other ways of making interpersonal claims, including petitions, hearings, testimonials, election campaigns, voting, bargaining, litigation, demonstrations, strikes, disobedience, and rebellion" (Anderson, 2016, 93). Anderson argues that this kind of contention is especially crucial for the moral-epistemic development of dominant social groups because of the bias that comes from being in a position of power without a mechanism for accountability. One positive effect of liberation movements, on this view, would be that they instigate acts of contention through which members of subordinated groups can hold the dominant group accountable. Anderson sees in these kinds of moments the potential for "inducing error-correction, counteracting bias, clearing up confusion, taking up morally relevant information, making people receptive to admitting mistakes, drawing logical conclusions, and other epistemic improvements" (Anderson, 2016, 93).

Finally, as mentioned earlier, many liberation movements also seek to model the new possibilities that they have imagined. This modeling doesn't just benefit those inside the movement who are directly putting the alternatives into practice, but also those outside the movement who get to see the results. In successful cases, this modeling offers evidence that these alternatives can in fact work. The more these alternatives are put into action, the easier it also becomes for people outside the movement to see them as plausible. This can be seen in abolitionist campaigns for what Dan Berger, Mariame Kaba, and David Stein call "non-reformist reforms" (Berger et al., 2017). These kinds of reforms seek to "reduce rather than strengthen the scope of policing, imprisonment, and surveillance," thus addressing pressing problems faced by those in prisons or otherwise in contact with the criminal justice system while refusing to work "within the confines of the existing order" (Berger et al., 2017). Examples that Berger et al. give of these efforts include campaigns across the country in which abolitionists have

worked to end solitary confinement and the death penalty, stop the construction of new prisons, eradicate cash bail, organized to free people from prison, opposed the expansion of punishment through hate crime laws and surveillance, pushed for universal health care, and developed alternative modes of conflict resolution that do not rely on the criminal punishment system.
(Berger et al., 2017)
Many of these local campaigns have succeeded in achieving these goals, and most have at the very least opened up public conversations about the possibilities they are envisioning and fighting for.

With some luck, the strategies employed by movements will enable them to create a disturbance large enough to counter the resilience of dominant epistemological frameworks. How difficult it is to make changes like the ones discussed largely depends on the degree to which the adjustment also challenges dominant frameworks. If the change accords with dominant experiences or instituted social imaginaries, the adjustment will be easier. We can see this in the different levels of pushback to the abolition of the death penalty (as pointed out by Davis, 2003) or to reforms of the criminal justice system that only target nonviolent drug offenders, as opposed to prison abolition. The former changes leave dominant epistemological frameworks largely in place—we can still keep our core ideas about punishment, justice, and crime intact. Prison abolition, however, pushes us beyond the limits of dominant epistemological frameworks. Because of this, it is more difficult for the change to seem intelligible, possible, or plausible.

While difficult, even these changes are not impossible. Davis points out that the abolition of slavery and the end of legally endorsed segregation, for example, once seemed unimaginable, but thanks in part to liberation movements now seem like common sense (Davis, 2003, 22–25). Though these changes have not been achieved perfectly and were likely not solely achieved by the actions of the movements, this is a tremendous moral-epistemic shift that undoubtedly would not have occurred without continued pressure from organized activists.

Finally, it is important to note that movements not only spread true moral claims but also the tools needed to understand them. Consider the movement against anti-black state violence. The many groups involved employ diverse tactics to distribute their moral knowledge and to capture and hold attention on it—ranging from articles, reports, and interviews to direct action in the streets. However, they also provide moral-epistemic tools for understanding their actions. The phrase "Black Lives Matter," for example, is not just a rallying cry that forces attention on the issue of police violence and other forms of state violence, but a moral-epistemic tool that provides a lens for interpreting incidents of police and state violence. The Black Lives Matter Network says of the phrase, which was originally coined by Alicia Garza, Opal Tometi, and Patrisse Cullors:

Rooted in the experiences of Black people in this country who actively resist our dehumanization, #BlackLivesMatter is a call to action and a response to the virulent anti-Black racism that permeates our society. . . . When we say Black Lives Matter, we are talking about the ways in which Black people are deprived of our basic human rights and dignity. (Black Lives Matter Network)

"Black Lives Matter" rejects interpretations of victims of police violence as frightening thugs, criminals, or would-be criminals who deserved whatever came to them, and instead presents a vision of them as worthy of our moral consideration—as mattering.
Through this lens, someone like Michael Brown is not "no angel," as the New York Times described him, or a "demon" who could run through bullets and barely flinch, as the officer who shot Brown portrayed him, but a child, a potential brother or son (Eligon, 2014; Krishnadev, 2014). As Brittney Cooper says,

Michael Brown was a human being to us [black people], and more than that, a kid. . . . That inability to see black people as human, as vulnerable, as children, as people worthy of protecting is an epistemology problem, a framework problem, a problem about how our experiences shape what we are and are not able to know. The limitations of our frameworks are helped along by willful ignorance and withholding of empathy. (Cooper, 2014)

This lens is both epistemically and morally beneficial—giving us the tools to better understand both the phenomenon of police violence and its wrongness. The wrongness lies not just in the racial disparities in police violence, but also in the fact that it harms lives that matter and are worthy of our moral concern. The "Black Lives Matter" slogan succeeds in these ways partly because of its affective power. It calls on people not just to think differently but to feel differently, if they do not already feel this way. Through its various methods, then, Black Lives Matter not only disseminates and holds attention on information and analyses of police violence, but also disseminates and holds attention on a moral-epistemic tool for understanding the wrongness of police violence.

By disseminating moral knowledge and moral-epistemic resources, liberation movements can serve as a means to overcome ignorance, cultivate moral emotions, and provide an entry point into the movements for those outside.
4. Conclusion
Liberation movements can be an important catalyst for the development of new moral-epistemic knowledge, tools, and frameworks that push beyond the limits of what seems possible within dominant frameworks. They provide an epistemic community in which people can imagine, find ways to express, and sometimes even experience alternatives to oppressive systems. They also provide a means for disseminating and fixing attention on both the moral-epistemic knowledge and the tools developed within the movement through the education efforts, direct action campaigns, and modeling performed by activists. All of this puts pressure on dominant epistemological frameworks, and, with some luck, can lead to shifts in what seems right, just, and possible to members of a culture.
Acknowledgments
I would like to thank my partner, William D'Alessandro, for his patience, encouragement, and editing skills, and Karen Jones for her helpful comments. I would also like to thank the Chicago Community Bond Fund and Moms United Against Violence and Incarceration for teaching me what abolitionist organizing looks like on the ground.
Notes
1. See Code, 1991; Collins, 2000; Haraway, 1988; Harding, 1991; Hartsock, 1983 for some articulations of this position.
2. The role of communally shared epistemic tools in knowing has been explored by many feminist epistemologists (with Lorraine Code (Code, 1995) and Lynn Hankinson Nelson (Nelson, 1990) being two of the earliest) under the labels of epistemic resources, hermeneutical resources, conceptual schemes, and frameworks. I will use epistemic tools and epistemic resources interchangeably in this chapter.
3. INCITE! is a collective of feminists of color that works to end both state-based and interpersonal violence against women, gender nonconforming, and trans people of color through community-based solutions. See www.incite-national.org for more information.
4. See the Movement for Black Lives' 2016 platform at https://policy.m4bl.org/platform/ and BYP100's 2016 "Agenda to Build Black Futures" and 2017 report on state spending on police and incarceration vs. social services, "Freedom to Thrive: Reimagining Safety & Security in our Communities" (co-written with the Center for Popular Democracy and Law for Black Lives) at http://agendatobuildblackfutures.org/.
5. See Durazo et al.'s special issue of Social Justice on community accountability (Durazo et al., 2012a), and the Communities Against Rape and Abuse (CARA) chapter "Taking Risks: Implementing Grassroots Community Accountability Strategies" for other articulations of abolitionist tools and principles (CARA, 2006).
6. See Harding, 2004 for a good anthology exploring standpoint theories.
7. Some scholars have begun to acknowledge the epistemic value of this standpoint by incorporating the writings of currently or formerly incarcerated people into their work on topics like prison and justice. For example, Hames-García, 2004 argues that prison writings should be treated as social theory and used to shape our understandings of justice and freedom, and a 2015 anthology on philosophy and mass incarceration (Adelsberg et al., 2015) includes essays by prisoners alongside essays by activists and philosophers.
8. This is not to say that this will always be easy or go smoothly, as outsiders will have a lot of learning to do in the face of information and tools that don't always match their own experiences of the world due to differences in social position.
9. See Woomer, 2017 for a more in-depth discussion of the role of attention in the maintenance of ignorance.
Works Cited
Adelsberg, G., Guenther, L. and Zeman, S. (eds.) (2015). Death and Other Penalties: Philosophy in a Time of Mass Incarceration. New York: Fordham University Press.
Anderson, E. (2016). "The Social Epistemology of Morality: Learning from the Forgotten History of the Abolition of Slavery," in M. Brady and M. Fricker (eds.), The Epistemic Life of Groups: Essays in the Epistemology of Collectives. Oxford: Oxford University Press.
Berger, D., Kaba, M. and Stein, D. (2017). "What Abolitionists Do," Jacobin Magazine, August 24. https://jacobinmag.com/2017/08/prison-abolition-reform-mass-incarceration
Black Lives Matter Network. "A HerStory of the #BlackLivesMatter Movement," http://blacklivesmatter.com/herstory/
Black Youth Project 100. (2016). "Agenda to Build Black Futures," Self-published. http://agendatobuildblackfutures.org/wp-content/uploads/2016/01/BYP_AgendaBlackFutures_booklet_web.pdf
Calhoun, C. (2016). "Moral Failure," in C. Calhoun (ed.), Moral Aims: Essays on Getting It Right and Practicing Morality with Others. Oxford: Oxford University Press.
CARA (Communities Against Rape and Abuse). (2006). "Taking Risks: Implementing Grassroots Community Accountability Strategies," in INCITE! Women of Color Against Violence (eds.), Color of Violence: The INCITE! Anthology. Boston: South End Press.
Code, L. (1991). What Can She Know? Feminist Theory and the Construction of Knowledge. Ithaca, NY: Cornell University Press.
———. (1995). Rhetorical Spaces: Essays on Gendered Locations. New York: Routledge.
———. (2008). "Advocacy, Negotiation, and the Politics of Unknowing," Southern Journal of Philosophy, 46 (Supplement 1), 32–51.
Collins, P. (2000). Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment (2nd ed.). New York: Routledge.
Cooper, B. (2014). "White America's Scary Delusion: Why Its Sense of Black Humanity Is So Skewed," Salon.com, December 3. https://www.salon.com/2014/12/03/white_americas_scary_delusion_why_violence_is_at_the_core_of_whiteness/
Center for Popular Democracy, Law for Black Lives, and Black Youth Project 100. (2017). "Freedom to Thrive: Reimagining Safety and Security in Our Communities," Self-published. http://agendatobuildblackfutures.org/wp-content/uploads/2017/07/FreedomtoThriveWeb.pdf
Creative Interventions. (2012). "Section 2: Some Basics Everyone Should Know," Creative Interventions Toolkit: A Practical Guide to Stop Interpersonal Violence. Self-published. www.creative-interventions.org/wp-content/uploads/2012/06/2.CI-Toolkit-Some-Basics-Pre-Release-Version-06.2012.pdf
Davis, A. (2003). Are Prisons Obsolete? New York: Seven Stories Press.
Dotson, K. (2014). "Conceptualizing Epistemic Oppression," Social Epistemology: A Journal of Knowledge, Culture, and Policy, 28 (2), 115–138.
Durazo, A., Bierria, A. and Kim, M. (eds.) (2012a). "Community Accountability: Emerging Movements to Transform Violence," a special issue of Social Justice: A Journal of Crime, Conflict, and World Order, 37 (4).
———. (2012b). "Editor's Introduction," a special issue of Social Justice: A Journal of Crime, Conflict, and World Order, 37 (4).
Eligon, J. (2014). "Michael Brown Spent Last Weeks Grappling with Problems and Promise," New York Times, August 24. www.nytimes.com/2014/08/25/us/michael-brown-spent-last-weeks-grappling-with-lifes-mysteries.html
Frye, M. (1983). "In and Out of Harm's Way: Arrogance and Love," in M. Frye (ed.), The Politics of Reality: Essays in Feminist Theory. Freedom, CA: Crossing Press.
———. (1996). "The Possibility of Feminist Theory," in A. Garry and M. Pearsall (eds.), Women, Knowledge and Reality: Explorations in Feminist Philosophy (2nd ed.). London: Routledge.
Grasswick, H. (2004). "Individuals-in-Communities: The Search for a Feminist Model of Epistemic Subjects," Hypatia, 19 (3), 85–120.
Hames-García, M. (2004). Fugitive Thought: Prison Movements, Race, and the Meaning of Justice. Minneapolis: University of Minnesota Press.
Haraway, D. (1988). "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," Feminist Studies, 14 (3), 575–599.
Harding, S. (1991). Whose Science? Whose Knowledge? Thinking from Women's Lives. Ithaca, NY: Cornell University Press.
———. (ed.). (2004). The Feminist Standpoint Theory Reader: Intellectual and Political Controversies. New York: Routledge.
Hartsock, N. (1983). "The Feminist Standpoint: Developing the Ground for a Specifically Feminist Historical Materialism," in S. Harding and M. Hintikka (eds.), Discovering Reality. Norwell, MA: Kluwer Press.
Heiner, B. and Tyson, S. (2017). "Feminism and the Carceral State: Gender-Responsive Justice, Community Accountability, and the Epistemology of Antiviolence," Feminist Philosophy Quarterly, 3 (1), Article 3.
Krishnadev, C. (2014). "Ferguson Documents: Officer Darren Wilson's Testimony," NPR. www.npr.org/sections/thetwo-way/2014/11/25/366519644/ferguson-docs-officer-darren-wilsons-testimony
Movement for Black Lives. (2016). A Vision for Black Lives: Policy Demands for Black Power, Freedom, & Justice. Self-published. https://policy.m4bl.org/wp-content/uploads/2016/07/20160726-m4blVision-Booklet-V3.pdf
Nelson, L. (1990). Who Knows: From Quine to a Feminist Empiricism. Philadelphia: Temple University Press.
Pohlhaus Jr., G. (2012). "Relational Knowing and Epistemic Injustice: Toward a Theory of Willful Hermeneutical Ignorance," Hypatia, 27 (4), 715–735.
Shakur, A. (1987). Assata: An Autobiography. Chicago: Lawrence Hill Books.
Woomer, L. (2017). "Agential Insensitivity and Socially-Supported Ignorance," Episteme, 1–19. doi:10.1017/epi.2017.28
Further Readings
L. Alcoff and E. Potter, eds., Feminist Epistemologies (New York: Routledge, 1993). (A good beginner's anthology that explores how the social world influences knowledge and features essays from many of the theorists mentioned in the notes.)
Critical Resistance. http://criticalresistance.org/resources/ (Critical Resistance, a grassroots prison abolitionist organization, offers many great resources for those interested in learning more about prison abolition, including links to the toolkits that other organizations have developed to address harm in alternative ways.)
J. Medina, The Epistemology of Resistance: Gender and Racial Oppression, Epistemic Injustice, and Resistant Imaginations (New York: Oxford University Press, 2012). (A nuanced treatment of the epistemic side of oppression, including that of resisting oppression.)
Related Topics
Chapter 22, Moral Knowledge as Know-How; Chapter 23, Group Moral Knowledge; Chapter 25, Moral Expertise.
25
MORAL EXPERTISE
Alison Hills
1. Expertise and Trust

One of the benefits of living in society is that there is scope for division of labour: each of us can train in a particular domain, share the fruits of our skills with others and in return benefit from their expertise. And the same goes for what we might call epistemic labour: each of us can get to know about some particular subject, becoming a (comparative) expert. The rest of us can benefit, because passing on knowledge is relatively straightforward. The key is testimony. Suppose that you want to know whether p is true. Ask an expert, trust what she says, and you will come to know that p. For instance:

Novice: What do polar bears eat?
Expert: Polar bears mostly eat seal, but will also eat whales, walruses and narwhals.

Trusting the expert, the novice comes to know the answer to her question: polar bears mostly eat seal, but will also eat whales, walruses and narwhals.

What does it mean to trust someone's testimony? It is to believe that p because the speaker said that p. There are many other important ways in which one can trust another person (Baier, 1986; Jones, 1996; Howell, 2014), but I will focus here on taking testimony on trust, which, I assume, one can do without having a relationship of trust in other senses. Trust can be stronger or weaker, depending on the type and strength of this reason. The strongest type of trust, which I will call deference, is to believe that p because the speaker has said that p, whatever your other reasons for or against believing that p (Hardwig, 1985). You take her testimony to be a preemptive reason for you to believe that p. By contrast, if you weakly trust the expert, you take the fact that she has said that p as some reason for believing p, but a reason that you weigh against other reasons for believing (or disbelieving) that p.

Deference has sometimes been thought problematic. Is it an abdication of epistemic responsibility to refrain from weighing evidence yourself, relying entirely on someone else? Not necessarily. Your responsibility lies in choosing the right person to defer to in the right circumstances. For obvious reasons, deference is justified in fewer circumstances than weak
trust but sometimes is reasonable, particularly if the expert is very reliable, whereas you know that you are unlikely to be able to weigh evidence correctly yourself.

How does trusting testimony pass on knowledge? This is a much-disputed question. According to reductionist accounts of testimony, you can only gain knowledge from testimony if you know that the testifier has knowledge (or at least, if Lackey (1999) is correct, that they are reliable). According to non-reductionist accounts, you do not need to know that they are reliable: it is sufficient that they are and that you do not have evidence that they are not.1 One important feature of both accounts is that the expertise of the expert in the domain—her reliability, her knowledge—plays a crucial justificatory role in your acquiring knowledge from her (rather than mere true belief). In short, an expert in a domain is someone with knowledge of that domain (at the least, more knowledge than you have unless you're also an expert). And if you trust an expert, you yourself will gain knowledge, provided that they speak honestly.

Let us now turn to morality. Can there be moral experts? If there are any, are they similar to experts in other domains? And can you learn from them in the same way, by trusting their testimony?

Let us start with the question of moral expertise. There are lots of different things that might be meant by "moral expertise". One kind of expertise is largely practical. In this sense, a moral expert is someone who reliably does the right thing. We might think of them as having "know how" and ask how it is possible to acquire this practical skill. But here, I am interested in a more theoretical kind of moral expertise, someone with propositional moral knowledge (or rather, more propositional moral knowledge than the rest of us). Is such a moral expert possible? That depends on various issues which are mostly outside our scope here: are there moral truths? Can they be known? Can some people know more of them than others? (This is an issue if moral truths are so obscure that no one knows any of them; or if they are so obvious that everyone knows them all.) Some of these questions are discussed elsewhere in this volume.2

Here, I will simply assume that propositional moral knowledge is possible and that some people have more of it than others. There are many reasons why this might be so: they might have some relevant experiences that the others lack; they might have different emotional responses; they might have thought for longer or more carefully about complex social issues.3 Though I take these assumptions about moral knowledge to be reasonable, it is worth noting that very similar questions arise even if they are not true, provided that some people are better at making moral judgements than others and are able to articulate and so pass on those judgements to others through testimony (Enoch, 2014). If so, should you not trust their testimony rather than rely on your own judgement, since your judgement is worse than theirs?

What about moral testimony (i.e., testimony about specifically moral questions)? Suppose that you are wondering whether it is wrong to have an abortion. Could you simply ask a question:

Novice: Is abortion always morally wrong?
Expert: No, abortion is not always morally wrong.

put your trust in the expert and so come to know that abortion is not always morally wrong?
Many people think that there is something not quite right about this sort of interaction. But what is the problem? After all, moral questions are often quite difficult, with competing considerations of different kinds and weights. You might have good reason to think that you will struggle to answer a particular question because of this or because your own interests are so heavily involved that you cannot judge it properly. And often a lot turns on the answer to a moral question, so it is important to get it right. So surely it is a good idea to turn to a moral expert, and learn from her?
2. Problems of Trust in Moral Matters

Trusting moral testimony generates two sorts of problem: the first is purely epistemic, the second moral. I will start with the epistemic problems.

Any time that we trust someone's testimony, two related problems arise. The first is: whom should you trust? Sometimes people pretend to have expertise—or genuinely believe that they have it—which they lack. How can you tell the competent, reliable experts from the fakes? We will call this the credentials problem. Now suppose you are an expert. You have the converse problem, which we may call the problem of credibility. How do you show that you are trustworthy? Of course you can say that you are. But if you are going to convince your audience of this, they have to believe your reassurances. If they accept your reassurances, then they already trust you. And if they do not already trust you, why would they start to do so when you tell them that you really are an expert? It seems that there is nothing that you can say that will establish your own expertise.

These are problems for experts in any domain, but in some they are fairly easy to solve. Sometimes there is an independent means of verification (one can go and observe the diet of polar bears, for instance); or there are socially agreed markers of expertise (the expert is employed by a university biology department, or a reputable organization like the World Wildlife Fund); and there is agreement between these socially acclaimed experts (see McGrath, 2011). But moral questions are not typically like that. There is no independent means of verifying whether or not abortion is permissible. There are no obvious, agreed, social markers of moral expertise. (Are moral philosophers moral experts? Are theologians? Are advice columnists?) And insofar as we identify anyone as a moral expert, those people do not always or even typically agree with one another. They would, in all likelihood, have a range of views on whether and when abortion is permissible. So the credentials problem is difficult to solve for moral testimony.

So is the credibility problem. You cannot tell a wavering listener to go and check for herself, or display your university affiliation as a moral philosopher, in order to add to her conviction. It will not work (nor should it). And the problem is likely to be worse, in some respects, for at least some moral experts. In all current societies, there are people who have more social status and social power than others. They do not always use that power in ways that they should; and some of those societies are unjust. It is not always obvious to those higher up in the hierarchy that their society is unjust (especially if their own actions are particularly implicated), and it is comforting for them to think that their position is natural or fair and that they personally have not wronged anyone. It is therefore sometimes easier to tell that one is the victim of
injustice than its perpetrator. So there are good reasons to think that many of the people who know that their society is unjust and that they have been wronged will lack social power and social status.4 This makes the problem of credibility more severe. One aspect of lacking social status and social power is a lack of "epistemic status", that is, people tend to be biased against you, treating you as less reliable and competent—less of an expert—than they should (Fricker, 2007). Thus there is a particular problem for women in a patriarchal society, or slaves in a slave-owner society, when they are trying to pass on their knowledge that patriarchy and slavery are unjust.

So there are significant social and epistemic difficulties standing in the way of establishing moral expertise and learning from moral experts. But there is a very different kind of issue too. Many people believe that it is not ideal to trust moral testimony: instead it is morally ideal to use your own judgement, making up your own mind about moral questions (Nickel, 2001; Hopkins, 2007; McGrath, 2008, 2011; Hills, 2009, 2010, 2013). Let us call this the moral problem of trust. At first it may seem puzzling. If some people have moral knowledge and others don't, and the second group could gain moral knowledge by trusting testimony from the first, isn't that exactly what they should do? Why wouldn't doing so be morally ideal?

There are a number of different potential explanations. I will discuss four, each of which claims that judgements based on trusting moral testimony lack some important value: autonomy, authenticity, integrity and finally (and in my view most importantly) moral worth.

Perhaps the key is autonomy. Kant, in the Groundwork, distinguishes judgements that are autonomous, ones whereby the will binds itself, from ones that are heteronomous, judgements where the will is bound by something outside itself: the word of another person, for instance. If this is right, the problem with moral testimony is that moral judgements made on that basis are merely heteronomous. Why does that matter, though? Isn't the crucial question whether or not your moral judgement is true? And so far we have no reason to think that heteronomous judgements are unreliable. Kant does not, I think, regard heteronomy as merely likely to produce error. He has a different reason for thinking that autonomy matters. This is connected to a very complex and difficult part of his moral philosophy, his theory of freedom. Kant thinks that a heteronomous will is not free; only an autonomous will, by being genuinely self-governing, is truly (positively) free. If we are not convinced by Kant's conception of freedom, it is not clear why it matters whether our moral judgements are autonomous or not. And there is a further problem. Is it really true that moral judgements based on trusting moral testimony are heteronomous? After all, don't you choose whom to trust, and why can't you do so autonomously? And if you do, wouldn't moral judgements made on that basis be autonomous after all (Driver, 2006)?

Perhaps rather than autonomy, the answer lies in authenticity. Authenticity is a matter of living according to your own ideals, being true to your true self (Sartre, 1946; Taylor, 1989). What exactly is your true self, and what does it mean to be true to it? This is not entirely clear. For instance, is this "true self" something that you discover or that you create?
But the basic idea is that you choose actions and make judgements based on your own values, your own sensibility, and also that moral values are a particularly important component of you,
lying at the core of your personal identity. We are defined by our moral ideals: what matters morally to you makes you the person who you are. The problem with moral judgements based on testimony, then, is that they do not flow from your own values and sensibility. Instead, they reflect someone else's true self: the person whose judgement you trust (or if they have also taken the judgement on trust, yet someone else again). So judgements made this way are not and cannot be genuinely authentic (Mogensen, 2017). For instance, if you come to believe that abortion is permissible because you have been told so, that judgement does not reflect your own sense of the value of human life, of the mother's relationship with the foetus, of the importance of control over one's own body, and so on.

But just as with autonomy, we can ask, does authenticity matter? What is the problem with a moral judgement that does not reflect your true self? The value of authenticity is not obvious. And secondly, the connection between inauthenticity and trusting testimony may not be as straightforward as it first appears. Could you not have a true self that was deferential to others? Or, at the very least, mightn't you dislike thinking about moral questions or making weighty moral decisions? Earlier, we saw that even moral deference need not be an abdication of responsibility; perhaps it need not be inauthentic either.

A third possible explanation of the problem with trusting moral testimony is in terms of integrity, specifically, the integrity of a virtuous person. According to an Aristotelian conception of virtue, a morally ideal person has a bundle of dispositions, to act, choose and feel in particular ways. Someone with the virtues has a coherent and integrated set of dispositions; her actions, emotions and choices reinforce one another, as all are based on the underlying state. For instance, someone who is kind will do acts of kindness but also be sensitive to others' feelings, approve of acts of kindness in others and so on. These are all manifestations of one underlying virtue.

What is the significance of this for moral testimony? There are a couple of issues. One is that moral judgements based on testimony are not necessarily—and in fact probably won't be—well integrated with your other beliefs, desires, emotions and feelings. For instance, your judgement that abortion is permissible, made on the basis of testimony, may not fit with your other feelings: you may continue to have negative feelings about abortions, or it may not fit with your other views about control over one's body or the value of human life (Howell, 2014).

Is it important that one's actions, choices, beliefs and emotions are well integrated? Perhaps that is indeed part of what we expect of a fully virtuous agent. But even if that is right, it is not straightforward to explain the problem with moral testimony. In the first place, beliefs acquired by moral testimony need not lack integration with your other beliefs and emotions: the expert may be telling you what you would have thought had you spent the time and effort thinking about the matter yourself. If the resulting moral judgement is well integrated, there seems to be nothing against taking it on testimony. But isn't it better to make up your own mind? Secondly, if you do trust moral testimony and so arrive at a moral judgement that is quite isolated from your other beliefs, emotions and attitudes, why not change those other attitudes to fit with that judgement?
After all, if the expert is right, won't changing all your attitudes bring them closer to moral truth? And what could be wrong with that?
The final and in my view most promising explanation for the problem with trusting moral testimony is in terms of moral understanding and morally worthy action. Let us concede that you can gain moral knowledge by trusting moral testimony; it remains true that you cannot (or cannot easily) gain moral understanding in this way. Understanding, unlike knowledge, requires the ability to make judgements about related cases, and gaining knowledge from testimony does not guarantee that you are able to do this.

Why is it important to understand why your action is right, rather than merely know that it is or why it is? You are equally likely to do the right action in both cases. (Like knowledge, understanding requires that you get the answer right.) No one disputes that doing the right action is morally important. But a morally ideal agent—one who is virtuous, one who performs morally worthy actions—will do the right action, and she will do so because she is responding to moral reasons. You can respond to moral reasons through your desires, feelings and emotions. But you can also respond to moral reasons cognitively, through your appreciation of them. That is, you can "mirror", in your own reasoning and decision-making processes, the reasons for performing a particular action. You decide what to do on the basis of your grasp of why your action is right. If your situation were different, you would have made a different decision, responding to different moral reasons. Responsiveness to moral reasons precisely involves having and using moral understanding to decide what to do.

If you make a moral judgement by trusting testimony rather than using your understanding (i.e., using your ability to draw moral conclusions from the reasons that make them true), you are relying on the judgement of someone else, on their ability to directly respond to moral reasons. If you defer, you rely wholly on them; but even if you only weakly trust them, you still partially rely on them rather than your own direct response to the considerations in question. It follows that if and when you act on your moral judgement, your action is not fully responsive to the reasons that make it right. If you defer, it is not responsive to them at all.

Suppose that having and using moral understanding is important in this way. How should we think about moral expertise now? Our original conception was of someone with moral knowledge who could pass it on by means of testimony. It is obvious that this does not fit well with an account of moral epistemology according to which moral understanding is most significant. In the first place, it seems that people are moral experts in virtue of having and using moral understanding, not moral knowledge. So we need to characterize moral expertise in terms of moral understanding, not moral knowledge. Secondly, non-experts should not, ideally, form their moral judgements by trusting testimony from an expert. If the argument here is correct, it is (much) worse if you defer to a moral expert, since in that case you do not assess any reasons why the moral claim might or might not be true. Weak trust, whereby you take the expert's testimony as a reason for belief to be evaluated along with all the other reasons that you have, is somewhat better.
But even if you only weakly trust a moral expert, any belief you form is not fully and directly responsive to moral reasons, because you are in part responsive to reasons for trusting the expert, that is, reasons for taking her to be reliable, competent, honest and so on. It follows that, if responsiveness to moral reasons is a moral ideal, even weak trust is not morally ideal.
3. The Scope and Limits of Trust in Moral Matters

Suppose that we agree that trusting moral testimony is not morally ideal. It does not follow that, when making moral decisions, you should ignore testimony altogether. It is fine—indeed it can be very morally important—to trust testimony about some issues that are highly relevant to moral questions. It is not easy to distinguish between moral and non-moral matters precisely, but there clearly is a difference between, say, facts about foetal brain development and claims about whether abortion is morally permissible. The arguments so far, about trusting moral experts, have focused on the specifically moral claims that they make. But they might be moral experts in part because they have extra non-moral knowledge that is relevant to deciding some moral issue. Do the problems with learning from moral experts also apply to their non-moral knowledge?

The moral problem, that it is morally ideal to make up one's own mind about morality, applies specifically to deciding moral issues. It is perfectly acceptable to trust testimony from experts about non-moral factors, for this is a way for them to help you respond to moral reasons, not a means for you to make moral judgements without so responding. Because of this, if anything, you have good reason to trust testimony on non-moral matters, precisely so that you can go on to respond to moral reasons properly yourself.

What about the other problems, of credibility and of credentials? Earlier we saw that they are more difficult to solve for distinctively moral issues than non-moral ones. So it should be easier to identify the right person to trust about non-moral matters and to establish your expertise. Of course, that is not to say that it is easy to show that you are an expert on non-moral matters that are relevant to moral issues. For which non-moral facts might or might not be relevant is in itself a moral issue.

There is a complication. We saw earlier that there might be a class of moral experts who lack social power and, as a result, tend not to be believed. Some of their knowledge may be non-moral. For instance, they may know what it is like to be a woman in a patriarchal society. They may know exactly what harms they have suffered. They might have experiences not available to the rest of us. In order to gain from their expertise, the rest of us need to develop the virtue of testimonial justice, the virtue of treating people's testimony fairly (Fricker, 2007). When we suspect that we are prejudiced against a speaker, we can critically examine our perception of her credibility and, if appropriate, revise our judgements of her credibility upwards to compensate for the prejudice, which left them artificially low. This should help us gain knowledge that is relevant to moral problems.

But these are examples of non-moral knowledge. The argument does not generalize to the distinctively moral claims that they make, such as claims that women should be treated as equals and given the vote. In the first place, the virtue of testimonial justice requires us to correct for prejudice, that is, to give the person as much credibility as they would have had absent our bias. But with respect to difficult, contested moral questions, the level of credibility we should give to anyone should be quite low (Sliwa, 2012). And secondly, there is the moral problem, that it is not morally ideal for us to take their word on moral questions rather than work things out for ourselves.
Suppose that it is indeed not morally ideal to trust testimony about distinctively moral questions. Does it follow that no one ever should do so?
Not at all. It is not always right for those of us who fall short of perfection to try to meet a moral ideal. There are a number of circumstances in which we ought to trust moral testimony, even if everything we have said so far about the importance of moral understanding is correct (as Enoch, 2014 argues).

In the first place, some of us may not be capable of achieving moral understanding on some issues, or in some circumstances. Perhaps there are some people who are simply no good at using their own judgement. Perhaps they generally have moral understanding, but where their own interests or the interests of their family are involved, they are unable to step back sufficiently to judge the situation fairly. Perhaps to do so successfully they would need a kind of experience that they haven't had and couldn't easily get (an experience of what it is like to lack social power, for instance). The latter is possibly the situation of Peter, as described by Karen Jones:

He could pick out egregious instances of sexism and racism, and could sometimes see that "sexist" or "racist" applied to more subtle instances when the reasons for their application was explained to him, but he seemed bad at working out how to go on to apply the word to nonegregious new cases. Problems arose for Peter when he could not see the reasons why the women were calling someone sexist, and could not see, or could not see as evidence, the considerations that the women thought supported viewing the would-be members as sexist.
(Jones, 1999, 59–60)

Jones concludes that in this case, Peter should have been willing to accept the women's testimony that these men were sexist. I agree. In part, he may be accepting testimony about non-moral matters: what it feels like to be treated in certain ways, for instance. But he also trusts their moral testimony, that certain sorts of treatment really are sexist, and so wrong. He is right to do so because—as he himself can see—he is not able to develop his own capacities adequately to work out what moral judgement to make in new situations. If he tried to make up his own mind, he would go wrong, as he well knows: he would not be able to acquire or use moral understanding. It is quite right for him to go for the next best: second-hand moral knowledge.

So even if there is a moral problem with trusting moral testimony, it is right for some people, sometimes, to do so. And that raises significant questions. For as we have already seen, it is not easy to tell in whom you should place your trust. And if you are looking for a moral expert precisely because your moral judgement is faulty (either in general or in this particular case), it is surely even more difficult for you to decide when and where you should place your trust. How can you be a "wise recipient" of moral testimony (as Jones and Schroeter (2012) put it)?

In the first place, you need to be discriminating. Given the levels of disagreement about moral questions, their difficulty, and the fact that biases and interests can be influential, a default attitude of trust cannot be appropriate (whether or not you accept reductionism or non-reductionism in general about testimony). You need to decide when to look for someone to trust and when to rely on your own judgement. So you must develop a sense of your own moral capabilities: on what sort of questions are you competent, and which are beyond your best efforts? Where is your
judgement likely to be distorted by prejudice or bias? When is the fact that your own interests are involved likely to be a problem?

Secondly, you need to develop the ability to judge moral expertise in others. To do this, you need a sense of their moral capabilities and also of factors that may unduly influence them: any prejudices that they may have, for instance, or situations in which harms or benefits to them may affect their judgement improperly. In addition, even if you do not have the right level of moral understanding to make a judgement yourself, you may be able to assess the plausibility of a moral claim, or the explanation of a moral claim, made by a putative expert. Is what they are saying reasonable? If they tell you that there is nothing morally wrong with killing the innocent, they are not credible, however otherwise impressive their credentials might be.

Thus to be a wise recipient of moral expertise may not require the level of moral understanding that a fully virtuous person has, but it nevertheless requires quite a bit. Also necessary is what we might call "meta-moral" expertise: abilities to assess moral expertise in oneself and others and an awareness of the kinds of factors that encourage or disable moral understanding.
4. Alternatives to Trust

If the moral ideal with regard to moral questions is making up your own mind, must moral deliberation and consideration of morality be something that you do entirely alone? And relatedly, if you are a moral expert, is there any way of passing on your moral knowledge and understanding without asking for trust?

Trust in testimony is not completely passive; as we have seen, you have to decide whom to trust, understand what they say, and check that it isn't obviously mistaken. Nevertheless, to acquire understanding by means of testimony you typically have to be more active, thinking things through in test cases to ensure you have a proper grip on what is going on. Instead of trusting testimony, you may treat what the expert says as a suggestion or advice, which you accept or reject entirely on its own merits, rather than something you have reason to accept because she said it. Of course, in practice, the difference between treating what she says as testimony, to be trusted, and as advice, to be assessed independently, will not be clear (especially for weak trust). Nevertheless, there is a clear enough difference in principle between the two.

The expert can encourage the novice to treat what is said as something to be assessed by the way that she presents it, that is, through interactions somewhat different from our original, where an expert made an assertion and a novice trusted her. For a start, the expert can expand and elaborate on her original claim, explaining why it is true, sketching its implications. The non-expert can ask follow-up questions so that she can grasp not just that the claim is true but why. Like so:

Non-expert: Is abortion always morally wrong?
Expert: No, it is not always morally wrong. It is wrong to kill an innocent person, but abortion is not always the killing of an innocent person.
Non-expert: Why not?
Expert: First, because it is not always the killing of a person: an early stage embryo or foetus is not a person. Secondly, because abortion is sometimes the withdrawal of a mother's aid to the foetus, rather than a killing.
Non-expert: So what is a person? And what is the difference between killing and withdrawing aid . . .

From one point of view, this is not very different from the original assertion-trust model. Instead of one assertion, the expert is making several. And instead of trusting just one claim, the novice trusts all of them. It might seem that this doesn't help at all. The credentials problem remains: if the non-expert doesn't know that the expert is trustworthy when she makes one claim, she also doesn't know whether she is trustworthy when she says more. And, if anything, the moral problem has grown, since the non-expert is taking more, not less, on trust.

That is all true, if you are deferring to an expert. Extra testimony that is consistent with an initial claim gives you no further reason to doubt her (but no extra reason to trust her). But there is something to be said in favour of further testimony if you weakly trust an expert. Remember that weak trust involves you reflecting on your other reasons for believing that p, at the same time as you take the fact that she said that p as a reason for belief. If the expert can offer an elaboration and explanation of p, which seems plausible, other things being equal, this is some evidence in favour of her being an expert and p being true. It is not decisive evidence, of course; but the extra testimony is not useless. Nevertheless, since the resulting moral judgement is still formed partly on the basis of trust, the moral problem remains.

But there is another possibility. I described the expert as making a series of assertions. But they are not unconnected to one another—rather she is offering an explanation of the truth of the original claim and therefore an argument for accepting it. As such, they may be offered and received in quite a different spirit: as something which you accept or reject, not on trust, but on the basis of your own evaluation of its plausibility. All of the problems we identified earlier for social moral epistemology were problems of trust: Whom should I trust? How can I show that I am trustworthy? The moral problem is about making moral judgements on the basis of trust, instead of drawing a conclusion on the basis of the "right-making" reasons. It follows that if you accept or reject a conclusion on the basis of an argument, you avoid all three problems. An argument is no sounder if it is made by an expert; so there are no credentials or credibility problems. And if you respond to moral reasons yourself when you make your assessment of the argument, there is no moral problem of trust. This appears to be a very successful way of transmitting moral knowledge and moral understanding.

Aside from moral argument, there are other methods for helping people acquire moral understanding. Many of these are familiar both in everyday moral inquiry and in moral philosophy. One is the use of analogy. Consider, for instance, Thomson's famous example of the violin player for whom you are the life support. You must decide whether it would be right (or at least permissible) to detach the violinist and why (Thomson, 1971). Then you need to compare that situation with a typical abortion and decide whether the same factors
are present. Finally, you have to conclude whether it is right to make the same judgement about abortions as about the violinist. Though it may be reasonably clear to a reader what Thomson believes about the permissibility of abortion, you are not encouraged simply to put your trust in her. Rather, Thomson's analogy is guiding your moral thinking, helping you correctly to identify morally significant factors, assess their importance, and draw the right conclusion.

Similarly, Peter Singer asks how it is right to treat humans with a similar mental capacity to that of animals and asks readers to draw conclusions about how to treat animals (Singer, 1995). Once again, the reader is not asked simply to defer to—or even weakly to trust—the judgement of a moral expert. You are asked to take your own view, with your attention drawn to the particular features of the situation that, you may realize, make some actions right and others wrong. Thinking through the analogy helps you to develop and use moral understanding. The moral expert has not told you the answer to the question. She has played a different but equally vital role in finding and presenting an analogy that provokes the right kind of thought. Quite possibly, you would never have otherwise identified the morally relevant factors, and as a result, you would not have acquired moral understanding without her help.

Another very important method for transmitting moral understanding is through question and answer. But whereas in our original model the non-expert asked questions and the expert gave answers, here it is the other way round. A moral expert may ask a probing question that spurs the non-expert to think again or to think differently about a moral matter. Once again, the moral expert is not transmitting information, or knowledge, but rather a mode of thought.

If this all sounds rather like a moral philosophy seminar, then that is, I think, no accident. One of the major purposes of moral philosophy is to try to attain and use moral understanding. But of course, you do not need to be a professor of philosophy to have moral understanding or to use these methods; they are familiar from everyday moral practice as well.5
5. Future Directions

Many of the questions that were touched on briefly here deserve a more extended discussion. For instance, I concentrated on one explanation of the moral problem of trust, connected to moral understanding, moral worth and virtue, which seems most promising to me. But there is no consensus on whether there really is such a problem, let alone the proper explanation of it.

There is widespread agreement that trust in moral testimony is sometimes appropriate, but very little has so far been said on how it is justified, how one can acquire the virtue of testimonial justice in this particular domain, and how one can be a wise recipient of moral testimony. What should one look for in a moral expert? Does it matter how they act or whether they live up to their own standards? Does it matter whether they can explain and defend their moral claims? And so on.

There has been even less attention paid to alternatives to trust, from either the expert's or the non-expert's perspective. But there are very many more ways of "teaching" than passing on individual pieces of propositional knowledge, as in the original assertion-trust model
of testimony. It is common outside of ethics as well as in the moral domain to want to get across a mode of thinking rather than one particular fact. And other kinds of interaction are better suited to that. In addition, in the moral domain, we typically want people to change their non-cognitive attitudes and their actions in line with moral reasons, not just their moral judgements. Are some kinds of interaction—especially those in which the novice is more active in thinking through the issues herself—more likely to lead to broad changes in character?

Finally, though the discussion here has been restricted to morality, it has implications for how we think of expertise more broadly. It is not only in the moral domain that we want to inculcate a way of thinking about a question—recognition of the kinds of factors that are relevant and of their importance—rather than merely to pass on pieces of information. When you teach a child arithmetic, for instance, you don't just want her to know the answer to a handful of questions but to have the kind of understanding that allows her to go on and answer the next ones by herself. So we might think of expertise quite generally as consisting in having and using understanding, and of learning from an expert as a matter of her passing on that understanding through the sort of methods outlined here, encouraging a greater active engagement rather than merely trusting testimony.
Notes
1. The classic debate on this issue between Hume and Reid has generated a huge literature; recent contributions include Coady (1992); Lackey (2006, 2008); Fricker (1987, 1995).
2. See Chapters 13 and 14 of this volume in particular.
3. See Chapter 24 for an exploration of the ways in which the experiences of marginalized people facilitate their acquisition of certain forms of moral knowledge.
4. See Chapter 24 of this volume for similar reflections.
5. For similar reflections, see Chapter 21 of this volume.
References
Baier, A. (1986). "Trust and Antitrust," Ethics, 96 (2), 231–260.
Coady, C. A. J. (1992). Testimony: A Philosophical Study. Oxford: Clarendon Press.
Driver, J. (2006). "Autonomy and the Asymmetry Problem for Moral Expertise," Philosophical Studies, 128 (3), 619–644.
Enoch, D. (2014). "A Defence of Moral Deference," Journal of Philosophy, 111 (5), 229–258.
Fricker, E. (1987). "The Epistemology of Testimony," Proceedings of the Aristotelian Society, Supplementary Volume 61, 57–83.
———. (1995). "Critical Notice: Telling and Trusting: Reductionism and Anti-Reductionism in the Epistemology of Testimony," Mind, 104, 393–411.
Fricker, M. (2007). Epistemic Injustice. Oxford: Oxford University Press.
Hardwig, J. (1985). "Epistemic Dependence," Journal of Philosophy, 82 (7), 335–349.
Hills, A. E. (2009). "Moral Testimony and Moral Epistemology," Ethics, 120 (1), 94–127.
———. (2010). The Beloved Self. Oxford: Oxford University Press.
———. (2013). "Moral Testimony," Philosophy Compass, 8 (6), 552–559.
Hopkins, R. (2007). "What Is Wrong with Moral Testimony?" Philosophy and Phenomenological Research, 74, 611–634.
Howell, R. J. (2014). "Google Morals, Virtue, and the Asymmetry of Deference," Noûs, 48, 389–415.
Jones, K. (1996). "Trust as an Affective Attitude," Ethics, 107 (1), 4–25.
———. (1999). "Second-Hand Moral Knowledge," The Journal of Philosophy, 96, 55–78.
Jones, K. and Schroeter, F. (2012). "Moral Expertise," Analyse & Kritik, 34 (2), 217–230.
Kant, I. Groundwork of a Metaphysic of Morals (G.), trans. H. J. Paton. London: Routledge, 1991. Page references cite the volume and page number of Kants gesammelte Schriften (published by the Preussische Akademie der Wissenschaften, Berlin: W. de Gruyter, 1902).
Lackey, J. (1999). "Testimonial Knowledge and Transmission," Philosophical Quarterly, 49 (197), 471–490.
———. (2006). "Knowing from Testimony," Philosophy Compass, 1, 432–448.
———. (2008). Learning from Words: Testimony as a Source of Knowledge. Oxford: Oxford University Press.
McGrath, S. (2008). "Moral Disagreement and Moral Expertise," in Russ Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 4. Oxford: Oxford University Press, 87–108.
———. (2011). "Skepticism About Moral Expertise as a Puzzle for Moral Realism," Journal of Philosophy, 108 (3), 111–137.
Mogensen, A. L. (2017). "Moral Testimony Pessimism and the Uncertain Value of Authenticity," Philosophy and Phenomenological Research, 95 (2), 261–284.
Nickel, P. (2001). "Moral Testimony and Its Authority," Ethical Theory and Moral Practice, 4 (3), 253–266.
Sartre, J-P. (1946). Existentialism Is a Humanism, trans. Macomber. New Haven, CT: Yale University Press.
Singer, P. (1995). Animal Liberation (2nd ed.). London: Pimlico.
Sliwa, P. (2012). "In Defense of Moral Testimony," Philosophical Studies, 158 (2), 175–195.
Taylor, C. (1989). Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.
Thomson, J. (1971). "A Defense of Abortion," Philosophy and Public Affairs, 1 (1), 47–66.
Further Readings
For a defense of trusting moral testimony in general, see K. Jones, "Second-Hand Moral Knowledge," The Journal of Philosophy, 96, 55–78, 1999 and P. Sliwa, "In Defense of Moral Testimony," Philosophical Studies, 158 (2), 175–195, 2012.
For a claim that moral progress is frequently based on those lacking social power "teaching a moral lesson" to those more powerful, see E. Anderson, "The Social Epistemology of Morality: Learning from the Forgotten History of the Abolition of Slavery," in M. Fricker and M. Brady (eds.), The Epistemic Life of Groups: Essays in Collective Epistemology (Oxford: Oxford University Press, forthcoming).
For a discussion of "testimonial justice", see M. Fricker, Epistemic Injustice (Oxford: Oxford University Press, 2007).
Related Topics
Chapter 6, Moral Learning; Chapter 12, Contemporary Moral Epistemology; Chapter 13, The Denial of Moral Knowledge; Chapter 16, Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgments; Chapter 18, Moral Intuition; Chapter 20, Moral Theory and Its Role in Everyday Moral Thought and Action; Chapter 22, Moral Knowledge as Know-How; Chapter 23, Group Moral Knowledge; Chapter 24, Moral Epistemology and Liberation Movements.
26
MORAL EPISTEMOLOGY AND PROFESSIONAL CODES OF ETHICS
Alan Goldman
1. Introduction: Professional Role Morality

In epistemology, issues arise when the assumptions of common sense or ordinary perception seem to conflict with the results of the special disciplines, especially physical science. The picture of the world projected by contemporary physics seems very far from that of ordinary perception and common sense. On one view (mine among others'), such conflicts are to be resolved in terms of the ultimately best explanations for the ways things appear to our perceptual senses, although such explanations can take us far from those appearances. Similarly, in moral epistemology issues arise when cases or situations occur in which our intuitions conflict with principles we accept or in which we encounter principles endorsed or followed by some group that again clash with those of our common moral framework.

One prominent area rife with such issues is that of professional practice. Various codes of professional ethics permit or require behavior that would be impermissible for those outside the professions, even those in seemingly similar circumstances, save for the lack of a professional degree or license. Professional role morality, accepted norms governing the behavior of those in professional roles, assumes a moral division of labor, in which those in particular professions are to promote a single important value, for example, the health of doctors' patients or the legal rights of lawyers' clients. Those in such professional roles have special obligations in regard to the promotion of such values. Serious questions arise when these central values of the professions compete with others, as they can and do. Professional codes of ethics then not only assign and enforce such obligations but typically elevate the values to which they apply above their ordinary status. Such special norms governing professional practice augment authority to promote the values they reflect while diminishing authority to act on common moral principles or on direct perception of the morally relevant circumstances. In pursuing legal rights of clients, for example, lawyers are sometimes to ignore potential harm to others, harm that would otherwise be the overriding consideration.

In this case, however, such special norms must be justified in terms of our common moral framework itself. Professionals cannot just arbitrarily adopt their own moral rules; their special positions spelled out in their codes of ethics must be accepted by society or the state and therefore be perceived
as expressing priorities that all should be willing to accept. The possibility of such justification therefore seems initially puzzling. How can being a professional in itself make such a moral difference?

If the possibility of such special role morality for professionals seems puzzling, it must nevertheless be admitted right off that this possibility is instantiated in at least some areas outside the usual conception of the professions. Few would deny that judges ought to apply clearly applicable law in some cases in which they would otherwise morally object to the outcomes. There is here a moral division of labor with legislators and others in the legal system that restricts the moral right of judges to rule on their direct perceptions of the moral merits of these cases. Authority to act on one's unfettered moral judgment must be distinguished from that judgment itself. Few would deny that parents can legitimately prefer the interests of their own children over the interests of others that would be overriding from the usual impartial moral perspective, again amounting to a moral division of labor (not to deny that parents can go too far in this regard). In this case, their authority to act in their children's interest is augmented, while it is controversial whether their right to give usual weight to the interests of others is diminished. Sometimes it appears that they are required to be partial; other times they are merely permitted to do so.1

Codes of professional ethics also alter authority to act on ordinary moral judgments absent the codes. This chapter will examine the reasons for adopting and obeying such codes, without assuming in advance that these reasons provide ultimate justification. We must begin with an idea of what a profession is and what being a member of a profession amounts to or commits one to, since a plausible definition of being a professional already suggests some reasons for having and obeying a special code of professional ethics. (I say 'special' because if these codes simply expressed requirements of ordinary morality or law, they would not seem to be necessary.)

We may define a professional as someone with specialized knowledge in the service of some value that expresses a vital social need, who belongs to an organization that is self-regulating and has a monopoly over its practice. The professional is committed above all to some predominant value relating to a vital interest or need of her client, in contrast to a business executive, who is committed first to profits for the corporation and only indirectly at most to the interests of the consumers. The professional's arcane knowledge or expertise, evidenced by an advanced university degree, along with her commitment to use that knowledge to serve the client's need, is the source of her authority, prestige, elevated income, and identification with colleagues. Such expertise of its members is also the source of the claim to self-regulation by the profession and to its monopoly over practice, since neither clients nor others have the knowledge to competently judge the practice of professionals, therefore warranting protection of clients against quacks who also lack the requisite knowledge and skills.
2. Reasons for Professional Codes

Lacking the expertise required to satisfy their own vital needs, clients must enter into relationships of trust with professionals. The professional must promise or commit to serving these needs above all else in return for society's or the state's granting self-regulation and monopolistic authority to the profession's organization. The state will license the
professional but will delegate this licensing authority to the professional organization. If this commitment of the professional or contract with the state as representing the interests of potential clients is to be public knowledge, as it must be to ground the clients' trust, it will be best expressed in the state's granted license and, from the side of the professional, in a published set of rules governing the professional's behavior. Hence a profession will almost by definition require a code of ethics that expresses the commitment to serve the need or value that the professional is uniquely able to satisfy or protect, that he is pledged to serve above all else, and that underlies the professional-client fiduciary relationship. The need for this public expression of commitment to serve the client's needs within some limits also stated in the code becomes more pressing as professional service becomes more impersonal, as provided by the large law firm or multi-member medical practice, often staffed exclusively by specialists who see patients for only one condition.

The fact that the rules expressing this commitment are enforceable (therefore necessarily public) and that an oath is often taken to obey them serves several functions. First, it better secures the trust of clients required for effective professional service. Second, it mitigates or removes the advantage that unscrupulous professionals would otherwise enjoy over the scrupulous ones. An enforceable code allows practitioners to pursue clients' interests within stated moral constraints without disadvantage (Davis, 2002, 24). Third, it resolves in a uniform way conflicts among competing values and interests, allowing for stable expectations on the part of both clients and professionals as to how such conflicts will be resolved. Without such public enforceable rules, there will be a problem of coordination of actions of different members of a profession. Fourth, it makes teaching professional ethics in professional schools easier and motivates students to take such courses seriously. Fifth, since the central values of professions, for example, legal rights, health (or health care), or information (as provided by journalists), reflect or consist in basic rights of individuals in an advanced democratic society, it can be argued that the maximal protection of such rights cannot be left solely to individual judgment unprotected by law or enforceable rules governing each profession entrusted with satisfying these fundamental rights.

To these legitimate reasons for having enforceable professional codes can be added several self-serving ones. First, serving the client's pressing need almost always coincides with the professional's own interest. Lawyers are rewarded for winning cases and negotiating deals to the client's advantage; doctors are rewarded for curing patients; and journalists for getting out the juicy stories, whatever the invasions of privacy or harms to the reputations of those whose stories are told. There is one major exception to this coincidence of interests: the provision of unnecessary or excessive but lucrative services to clients that impose both monetary and nonmonetary costs on them. Codes can prohibit such superfluous service, but these prohibitions are virtually never enforced, so that the major provisions of codes emphasizing the central value to be secured maintain an almost perfect congruence of interests between professionals and their clients.
This being so, it is in the interests of the professionals to establish an institutional endorsement and requirement to pursue that value wholeheartedly. The professional is not only permitted but required to pursue his own interest effectively. That this is also the client's interest makes such pursuit publicly respectable. Second, codes allow professionals to disclaim any responsibility for harm that might result from actions permitted or required by the codes. That some action is legally required
generally (except in extreme circumstances) excuses the harm that might result from performing it. Third, the authority and moral commitment of the professional, grounded in her expertise and institutionalized in the code, is the source of identification with colleagues and a sense of community that is part of the professional's identity. This identity as a member of a professional community symbolized by its code of ethics is reinforced by more familiar if more minor symbols—a special dress code, whether the lawyer's dark suit or the doctor's white robe, a special jargon obscure to others, an authoritative demeanor, e.g., the doctor's addressing you by your first name notwithstanding your title, etc. Fourth, as noted earlier, professions gain autonomy and monopoly guaranteeing high monetary rewards in exchange for this public declaration of wholehearted commitment to satisfy some vital need in society. For this reason occupations aspiring to professional status, such as nursing, attempt to achieve it by establishing professional schools and publishing codes of ethics. The published code of ethics is a badge attesting to professional status and the autonomy that comes with it for the organization and its members.
3. Special Norms and Reasons for Obeying Them

Since the professional's authority, prestige, and sense of community are grounded in her special skill at securing the value central to her profession, since clients typically consult professionals only when these particular values are threatened, and since professional codes of ethics publicly express commitment to those values, it is natural and predictable that these codes will elevate those values beyond their usual importance in relation to other values in our common moral framework. And indeed this is what we find. Thus, the American Bar Association's Model Rules state that

the client has ultimate authority to determine the purposes to be served by legal representation, within the limits imposed by law . . . [the lawyer] should defer to the client regarding such questions as . . . concern for third parties who might be adversely affected.
(Gorlin, 1999, 630–631)

Thus the lawyer can ignore harm to persons that would ordinarily morally prohibit her actions in pursuing legal objectives of clients. The Society of Professional Journalists' code states that, in producing news stories, journalists should "show compassion for those who may be affected adversely" and "recognize that gathering and reporting information may cause harm." It does not tell journalists to forgo stories on those grounds. In regard to privacy, "an overriding public need can justify intrusion into anyone's privacy" (Gorlin, 1999, 200). In medicine, traditionally the doctor must "overall do no harm," which meant choosing the best course of treatment for the patient regardless of the patient's character and what he might otherwise choose for himself. The extent to which this has changed, mainly under legal pressure, will be discussed later.

In law, as noted earlier, the lawyer's interest will most often coincide with the client's, but certainly not with that of third parties. Moral issues arise when the client's objective is legal but not moral by ordinary standards. The Model Rules' directive here clearly departs from ordinary moral requirements in allowing and often requiring (especially in adjudicatory
Professional Codes of Ethics
contexts) lawyers to zealously pursue those objectives. In journalism issues arise when the public’s interest in a story conflicts with what would otherwise be rights to privacy or against harm that might result from publication of the story. Again, from the point of view of common morality, the journalists’ code errs on the side of publication when it comes to revelation of such information. In medicine the treatment that is best for restoring the patient’s health may conflict with other values or objectives of the patient himself, whose life is alone affected. We have seen that there are nevertheless reasons for promulgating professional codes of ethics that above all express overriding commitment to the central value of the profession. And once such a code exists, those who enter the profession will have reasons for obeying it that are somewhat independent of its moral content from the perspective of common morality. In some cases entering a profession involves taking an oath to follow its code of ethics. Even when there is no formal oath, joining the professional organization might be seen as a tacit promise or commitment to abide by its published norms. While the profession makes a contract with society to satisfy a vital need in return for monopoly and selfregulation, the professional can be seen as making a similar contract with his profession and its organization. And it might be seen as unfair to colleagues and clients to fail to abide by the profession’s contract, given their expectations and their allegiance to the code. Nevertheless, the force of these reasons for obeying codes when their directives conflict with requirements of ordinary morality can be called into question. First, if we are thinking in terms of a contractual obligation, there is the question whether the contract is fair and binding. Given the power of the organization versus that of its potential new member, is she at an unfair disadvantage in the bargaining position? It is true that no one has to enter a profession, but is a promise to violate ordinary moral requirements a fair price to pay? And is a contract to do what one otherwise considers wrong binding in any case? Second, even if there is such a binding contractual obligation, it is not absolute. There must be limits to the harm justified by the obligation to obey. In particular cases of conflict, the reasons to obey the code must outweigh the reasons to be constrained by ordinary moral requirements, if the professional’s obedience to the code is to be justified. Third, beyond the question whether a professional is obligated to obey his organization’s code as given is the deeper question of how such codes ought to be written, i.e., whether they should require actions in violation of ordinary moral requirements. It is well beyond the scope of this chapter to attempt to examine in detail particular codes and the circumstances in which the problems I have been describing arise.2 Instead, I will first describe more broadly different types of codes, then focus on one exemplary contrast that illustrates the different types, and end with a recommendation for the type that poses the least problem for moral epistemology. Remember that the main problem lies in the gap between ordinary morality and professional codes, the fact that the codes license behavior that would seem to be prohibited by ordinary moral requirements while at the same time needing justification from our ordinary moral framework. 
The solution first requires a structural distinction in levels of justification between the justification of the code and of actions under the code, and a related distinction between ordinary moral perceptions and authority to act on those perceptions. Our standard moral framework justifies the code, which then justifies actions falling under it. On the practical level, the solution will be to endorse codes that serve their legitimate professional purpose while minimizing the size of that gap or ameliorating its problematic effects.
4. Rules and Standards

Codes of professional ethics enforce rules and/or standards. Rules are general prescriptions to be followed when certain conditions obtain. In themselves they alter authority for acting on one's perception of all morally relevant factors in situations in which they apply. Rules of the strongest type do not allow us to look through them to their justification conditions in deciding whether to apply them (Schauer, 1991, 5). If no dogs are allowed in a restaurant for health reasons and to avoid annoyance to customers, then even a clean and well-behaved dog will not be allowed. If presidents must be 35 years old to guarantee sufficient maturity and experience, then even exceptionally mature and experienced 29-year-olds cannot run for president (and Donald Trump can). We adopt such rules to simplify decision making when there is a special need for predictability and consistency, as there will be when sanctions or punishments are applied for violations, when there is a need to remove the appearance of bias in decision makers, or, more generally, when judgments that attempt to take all seemingly relevant factors into account will come out cumulatively worse than simply applying the rules.

If rules are to serve their simplifying function, their application conditions must be limited and stated in clear nonnormative terms ("If it's a dog, it cannot enter"). If we are not to look to their justifications directly in deciding whether to apply them (as we do with mere rules of thumb), their stated conditions of application will be taken as sufficient for applying them. Cases satisfying those conditions but differing in other seemingly relevant ways must be judged the same according to the rules. The application of such rules will therefore be over- or underinclusive in particular cases, working injustice by failing to take into account relevant factors absent from their application conditions. Such morally suboptimal outcomes must be outweighed by the cumulative long-range advantages of the rules if their adoption is to be justified. It will then not be wrong to follow a rule when more harm than good results, as long as the harm is not exceptionally excessive. Once a rule is in effect and justified on cumulative grounds, one must also take account of legitimate expectations that it will be obeyed. More sanctions are added in the case of legally enforceable rules to tip the balance further toward obedience. Such rules in the criminal law as those against theft and murder are proper for enforcing minimal moral requirements, when morally justified exceptions are easily specified and can be built into the application conditions of the rules.

By contrast, there is another kind of norm to be found in law and legally enforceable codes, the application of which is less straightforward but which is also not over- or underinclusive on its face. These are standards or principles, as opposed to rules (Hart, 1994, 131–134; Goldman, 2013). Unlike rules, standards are stated in broad normative language—"Exercise reasonable care," "no cruel and unusual punishments," "equal protection and due process of law," "provide competent service." It may be obvious what is in clear and egregious violation of these requirements, but judgment is most often involved in deciding whether a violation has occurred. Standards are typical in areas such as negligence law, where, unlike rules against theft and murder in criminal law, it will not often be obvious when there is a case of negligence.
We must rely mainly on common sense, despite the complexity of considerations, to tell when care has not been reasonable or sufficient when engaging in risky activities. First, there are countless different ways in which one can be negligent. Second, in judging negligence, we must weigh opposing norms regarding the value and normalcy of various activities and offenses. The application of standards typically involves such weighings. Enforcement of standards will therefore be less predictable than enforcement of rules, and punishments for violations can be problematic, leading to more controversial litigation and court decisions. Standards can nevertheless still be used to enforce minimally acceptable behavior, and they can be further specified over time by decisions in cases that serve as precedents. Unlike rules, however, which state sufficient conditions for their application, precedents allow for differentiation of later cases when relevant differences, which can be open-ended, can be stated. Like the Kantian injunction to act according to principles that can be universalized, and unlike rules, standards themselves allow us to distinguish cases whenever any morally relevant differences in circumstances can be found.
5. Types of Codes: Medical and Legal

Professional codes of ethics differ mainly in whether they consist primarily in specific rules or broader standards. The alternatives are three: having no code, having a code with specific enforceable rules, and having one consisting mainly in broad standards. Although there are codes for those in business-related areas such as financial planning and public relations, business executives do not have a published code of ethics, which may be the main reason why they are not typically classified as professionals. Yet it is surely widely acknowledged that fraud, stealing industrial secrets, and recklessly endangering consumers are unethical business practices. Prohibitions against such conduct are simply applications of ordinary moral norms to the context of business. There is no greater problem in business than in the professions in distinguishing moral from immoral practices. Nor does the absence of a published code prevent business schools from teaching business ethics. We have seen that there are nevertheless strong reasons for professional organizations to publish codes of ethics. Unlike business corporations or chambers of commerce, they are granted monopolistic controls over entrants and self-regulation.

The remaining options for different types of codes are nicely illustrated by the contrast between the American Medical Association's Principles of Medical Ethics and the American Bar Association's Model Rules of Professional Conduct. The first principle in the medical code requires the physician to provide competent medical service with compassion and respect; the second requires that he deal honestly; the third that he respect law; and the fourth that he respect rights of patients and others (Gorlin, 1999, 341). These are clearly standards expressed throughout in broad normative terms. One must judge, for example, whether medical service has been sufficiently competent in particular circumstances, and such judgment involves weighing the risks in the procedure, the normality of the treatment, whether the patient was adequately informed of the risks, and so on. Before one can judge whether a doctor has respected her patients' rights, one must decide what those rights are and what kind of respect must be shown to them. Traditionally, as noted earlier, there was little or no emphasis on patients' rights, except the right to the best treatment as determined by the doctor. Mainly under pressure from law, this has changed. In expanding on its basic principles, the code now recognizes a right to informed consent of the patient to any medical treatment (Gorlin, 1999, 342).
But despite this acknowledgment, I suspect that few patients who have undergone major treatment in recent years, especially in hospitals, would deny that the paternalistic attitude among doctors, with all its ritualistic trappings, persists. Many doctors attempt to evade the requirement for genuinely informed consent by having patients sign an abstract and general consent form (consent to any and all treatments) before even seeing the doctors. I myself was told recently that a doctor could not see me unless and until I signed such a form. Their expertise and commitment to patient health make it natural for doctors to assume that they are better able to make optimal medical decisions. Of course this assumption ignores the fact that patients themselves might not be committed to their own health above all else (any more than doctors seem so committed to their own health, given their work schedules and other habits). The best medical treatments can conflict with other values of patients to which they give priority.

The code itself, however, retains remnants of the paternalistic attitude in what it says about patient rights, priorities, and responsibilities: "Patients should be committed to health maintenance through health-enhancing behavior" (Gorlin, 1999, 343). "Autonomous, competent patients assert some control over decisions which direct their health care" (emphasis mine) (Gorlin, 1999, 342). No wonder members of my generation continue to refer commonly to "doctor's orders." These facts call into question the extent to which codes with only broad standards that tend to be rarely enforced affect professional practice. The medical code acknowledges an abstract right to informed consent, but the effect of this recognition has been less than a more specific rule might have required.

The opposing sort of code is that of lawyers. There, requirements are spelled out in specific rules, albeit under the guiding principle of zealous client advocacy and the fundamental value, not of justice, as is sometimes claimed, but of client autonomy in pursuit of legal objectives. Restrictions on client advocacy in the code are few and mainly legal—the lawyer, for example, is not to aid in illegal conduct, is to reveal adverse law but not adverse fact to a court, is not to take actions that merely harass or delay (the latter ignored in practice). Zealous advocacy is most at home in the context of criminal trials, where the state must be put to the full test of proof, but the principle is extended to other contexts, for example corporate practice, where the power relations of clients to opposing interests are reversed.

The strongest argument for the elevation of the value of client autonomy to near absolute priority is that individuals are to be restrained in our legal system only by legislators and judged only by judges and juries, not by their lawyers. It is questionable, however, whether lawyers could restrain their clients, as opposed to refusing to aid them in wrongdoing. If a lawyer refuses to pursue an objective on moral grounds, the client is free to find another lawyer. This might impose some inconvenience and cost on the client but does not amount to legal restraint. This is not the place to pursue arguments on the fundamental principle of zealous advocacy or the value of client autonomy,3 which have been mentioned to provide some orientation to the American Bar Association code.
For the purpose of contrasting codes, we may focus on one rule, that requiring confidentiality of information relating to clients. While the medical code says only that a doctor should safeguard patient confidences within the constraints of law or the need to protect welfare (Gorlin, 1999, 341–342), the ABA code is far more specific regarding allowable exceptions to client confidentiality. The lawyer may reveal confidential information to prevent reasonably certain death or substantial bodily harm, to prevent substantial property loss, but only from a crime involving the lawyer's services, and for certain other reasons relating to the protection of the lawyer himself and his financial interests. Otherwise the need to keep information relating to the client confidential always trumps other moral considerations. This rule is clearly far outside the parameters of ordinary morality.

Outside of professional practice, the harm that might result from revealing confidential information that one has learned, together with the obligation to keep a promise of confidentiality if one has been made, must be weighed directly against the harm that might result from not revealing the information. In legal contexts in the absence of a code, we could incorporate into these considerations others unique to the lawyer-client relation that weigh further toward keeping client confidences. First, the lawyer cannot adequately represent the client without knowing all the information relevant to the client's case or objective. Second, she will not have a chance to dissuade the client from wrongdoing unless she learns the client's intentions. Third, the client's right to confidentiality is part of his right against self-incrimination, which is in turn grounded in the fundamental principle of equality before the law, for both the clever and not so clever. Fourth, in light of all these considerations, the lawyer might promise to keep all information confidential.

These are weighty considerations from the viewpoint of our common morality, but they still leave us far from the ABA code's rule. First, financial ruin, sometimes leading to suicide, may be worse than bodily harm, and yet the code allows revelation to prevent the latter but not the former, if the lawyer's services were not involved. Second, ordinary morality requires revelation to prevent more serious harm; the code only permits it in some cases. It might be replied here that the "may" instead of a "must" in the rule simply leaves it up to the lawyer's judgment as to when revelation is required. This would make the rule read more like a standard than a genuine rule. But given the contents of the confidentiality requirement, as following from a principle of zealous advocacy, and given the specificity of allowable exceptions in this and other rules in the code, the more plausible reading is that revelation is never required, only sometimes permitted, by the code. Third, direct weighing of harms would require revelation in cases in which it is not even permitted by the code: if, for example, a client confesses to his lawyer a prior murder for which an innocent person is serving a life sentence.

As to the other considerations unique to the lawyer-client relation, a lawyer need not promise absolute confidentiality in order to convince a client that she must know all the facts relevant to the case. And the code already allows some exceptions of which most clients are unaware. As to the chance to dissuade clients from wrongful behavior, complicity in serious harm through silence is too high a price to pay for knowledge of intentions to harm. Finally, although the right against self-incrimination is important in our legal system, common morality would require it to be weighed against other rights against harm, as the code also does in too few cases. Thus, the ABA's rule on confidentiality departs strikingly from ordinary moral requirements without sound arguments for doing so.
It might be possible to design a rule of equal specificity that does so less, but given the large variations in different contexts of weighing rights to confidentiality against other rights against harm, any rule of this type, which attempts to build in all exceptions, is going to allow or require significant harm where we ordinarily would not. For such a rule to be justified, we would have to be even more skeptical of the unfettered moral judgments of lawyers who acknowledge a less-than-absolute obligation to keep confidences. If we find no strong reason to be so skeptical, that obligation could be expressed in a standard of the type we find in the medical code.
6. Conclusion

Given the need for public endorsement of monopolistic control and self-regulation, professional organizations will continue to adhere to published codes of ethics. And given that the prestige and authority of professionals are grounded in their special knowledge and commitment to a particular value that serves a vital public need, these codes will continue to contain norms that express a special obligation to promote that value and elevate it above its usual status in our ordinary practices and common moral framework. But conflicts with other values can be mitigated by adopting one type of code rather than another and by the ways in which codes are written, enforced, and taught in professional schools.

Codes containing very specific rules that must be memorized run the danger of making it appear that ethics requires only following the rules or staying within their limits. Such rules negate authority to act otherwise and eventually stifle moral reflection. It becomes natural to think that what is not forbidden by a code's rules is morally acceptable and that what is required by a code is morally required. By contrast, standards that allow opposing interests or harms to be counted, even if given somewhat diminished weight, should result in less injustice. The difficulty of enforcing standards, as opposed to rules, is not a serious consideration in practice, given how lax enforcement against colleagues tends to be, if behavior remains within the bounds of law. And it is not too difficult to punish egregious and obvious violations of standards, which are likely to be the only targets for sanctions anyway. To further offset the tendency of professionals to exaggerate the value toward which their expertise is directed, there could be more input from those outside the professions in writing and revising the standards and in teaching professional ethics courses, perhaps jointly with teachers who have professional degrees or experience.

There will remain the distinction, important to moral epistemology and most clearly illustrated in professional codes, between judging an action right on ordinary moral grounds and having the authority to act on that judgment. This is an epistemological problem because it makes it more difficult for the professional to know how to act when a requirement of a code conflicts with her settled ordinary moral judgment. Once we recognize different levels of justification and a gap between specific requirements on those levels, understanding how one must act in morally charged contexts becomes more complex. But the default position, absent systemic effects of individual decisions on institutions, must be to trust the judgments of individuals in the morally complex situations in which they find themselves, guided by standards but not bound by strict specific rules. This trust can be better placed in highly educated professionals who have been sensitized in professional schools to typical moral problems in their professions. Since, as noted earlier, their interests will tend to coincide with the threatened interests of their clients, there will be little danger that those interests will be undervalued.
Notes
1. See Chapter 9 of this volume for discussion of evolutionary explanations of our intuitions of familial obligation.
2. But see Goldman, 1980.
3. For critical evaluation, see, for example, Goldman, 1980, ch. 3; Luban, 1988.
References
Davis, M. (2002). Profession, Code and Ethics. Burlington, VT: Ashgate.
Goldman, A. (1980). The Moral Foundations of Professional Ethics. Savage, MD: Rowman & Littlefield.
——. (2013). "Rules, Standards, Principles," in H. LaFollette (ed.), International Encyclopedia of Ethics. Oxford: Wiley-Blackwell.
Gorlin, R. (ed.). (1999). Codes of Professional Responsibility. Washington, DC: Bureau of National Affairs.
Hart, H. L. A. (1994). The Concept of Law. Oxford: Clarendon Press.
Luban, D. (1988). Lawyers and Justice. Princeton: Princeton University Press.
Schauer, F. (1991). Playing by the Rules. Oxford: Clarendon Press.
Further Readings
First, read the actual codes, collected in Gorlin, Codes of Professional Responsibility (Washington, DC: Bureau of National Affairs, 1999). For expansion on the topics discussed, in Ethics for Adversaries (Princeton: Princeton University Press, 1999) Arthur Applbaum offers a thorough review of arguments for and against special requirements for those in professional, especially adversarial, roles. In the journal Business and Professional Ethics, 5 (2) (1986), there are several articles on self-regulation in the professions and an article by Louis Lombardi comparing the lack of a code in business with codes in the professions. In "Role Morality as a Complex Instance of Ordinary Morality," American Philosophical Quarterly, 28, 73–80, 1991, Judith Andre argues against me and others that professional role morality does not involve principles out of line with ordinary morality. Criminal Justice Ethics, 3 (2) (1984) contains a debate between Monroe Freedman and me on confidentiality. A useful anthology on professional ethics is Albert Flores, Professional Ideals (Belmont, CA: Wadsworth, 1988). See especially the articles by Lisa Newton justifying professional codes and by Elliot Cohen arguing against zealous advocacy by lawyers.
Related Topics
Chapter 20 Moral Theory and its Role in Everyday Moral Thought and Action; Chapter 22 Moral Knowledge as Know-How; Chapter 25 Moral Expertise.
27
TEACHING VIRTUE
Nancy E. Snow and Scott Beck
1. Introduction

The title of this chapter raises questions for anyone familiar with the history of philosophy. It brings to mind Plato's dialogue, the Protagoras, in which Socrates asserts that virtue cannot be taught, thereby opposing the opinion of his interlocutor, the famous Sophist, Protagoras.1 Socrates believes that virtue cannot be taught because it is wisdom and no one can teach wisdom. Yet the notion that virtue cannot be taught is at odds with other views from the history of philosophy: Plato himself outlines a regimen for character development in the Republic; the Confucian tradition offers advice on the cultivation of the junzi, or excellent person; Aristotle argues in the Nicomachean Ethics that virtue is acquired through guided habituation; Rousseau writes notoriously of the different types of character formation needed for girls and boys in Emile; and John Dewey wrote of education, including character education, in the tradition of classical American pragmatism.2 So which is it? Can virtue be taught or not?

Exactly what one is teaching when one "teaches" virtue matters to the question of whether virtue can be taught. When one is "teaching" virtue, one is surely not teaching a subject matter like mathematics, grammar, or literature. Even when one is teaching theories of virtue, one is not yet "teaching" virtue, though one could be contributing to character development in some respects. One could do this, for example, by making students aware of theoretical thinking about the nature of virtue, what constitutes virtue, what constitutes vice, and so on. When one is "teaching" virtue in the full, robust sense under discussion in the Protagoras and the previously mentioned texts from the history of philosophy, one is forming character. One is not simply imparting theory or a subject matter but changing lives.

In this chapter, we assume that virtue can be taught in the sense that teachers can influence character development in their students. The question is, "How should this be done?" We take as our focus teaching virtue in schools.3 We explore the challenges and opportunities of teaching virtue from a variety of perspectives. In Part 2, Nancy E. Snow surveys a number of theoretical perspectives on teaching virtue that have been or are being implemented in schools. She concludes the section by identifying commonalities among the approaches. Commonalities notwithstanding, we recognize the value of differences. Our view is that there is no "one size fits all" with respect to virtue education. In this spirit, Scott Beck, the principal of Norman High School, describes in Part 3 the unique grassroots approach to character development recently initiated at his institution. In Part 4 we discuss how features of the Norman High initiative illustrate aspects of the approaches discussed in Part 2 and conclude with general observations about roles for askesis, or disciplined practice, in changing school communities and cultivating character.
2. Theoretical Perspectives on Teaching Virtue

A number of theoretical perspectives on teaching virtue (or closely related constructs, in the case of social emotional learning) are prominent on the contemporary scene. Here I can discuss only a few: social emotional learning (SEL), Integrative Ethical Education (IEE), caring, positive education, and Aristotelian character education. Educators encountering this array of perspectives might be puzzled about how to choose which outlook to integrate into their schools and classrooms. On the face of it, the various approaches appear to be discrete and unconnected, like cafeteria menu items. Yet each theory embraces a core value or set of values that teaching virtue is thought to promote and prescribes a set of practices meant to develop virtue and, thereby, the core value or set in students. Awareness of this value/practice structure, I suggest, can help educators more easily to identify commonalities and differences and thereby make the array of options seem a bit less daunting.
Social and Emotional Learning (SEL)

SEL has been on the scene of elementary and high school education for more than twenty years. SEL programs now operate in thousands of schools across the United States and in other countries, and more than 500 evaluations of various types of SEL programs have been conducted.4 The Collaborative for Academic, Social, and Emotional Learning (CASEL) was established twenty-one years ago.5 It "aspires to establish a unifying pre-school through high school framework based on a coordinated set of evidence-based practices for enhancing the social-emotional-cognitive development and academic performance of all students."6 SEL programming aims to develop students' capacities to "integrate cognition, affect, and behavior to deal effectively [with] daily tasks and challenges."7

SEL seeks to develop competences in five key domains: self-awareness, self-management, social awareness, relationship skills, and responsible decision making.8 Self-awareness involves understanding one's goals and values, accurately assessing one's strengths and weaknesses, having positive mind-sets and well-grounded senses of optimism and self-efficacy. High levels of self-awareness include an understanding of interconnections among thoughts, feelings, and actions. Self-management requires skills and attitudes needed to regulate emotions and behavior, including the abilities to control impulses, to delay gratification, to manage stress, and to persevere through challenges. Social awareness includes the abilities to take the perspectives of those from different cultures or with different backgrounds, to empathize and to feel compassion, to understand social norms, and to recognize family, school, and community support systems. Relationship skills include clear communication, active listening, and the abilities to cooperate, to resist inappropriate social pressure, to take constructive approaches to conflict, and to seek help when necessary. Finally, responsible decision making requires the ability to consider ethical standards, safety concerns, and behavioral norms for risky behaviors, to evaluate consequences realistically, and to take one's own health and well-being as well as those of others into consideration.9

Though SEL does not explicitly teach virtue or claim to do so, it is clearly relevant to character education. Its integration of cognition, affect, and action clearly resonates with key aspects of virtue, as do the five domains it identifies as crucial foci for healthy development. Moreover, there are significant areas of overlap between SEL and the other approaches surveyed here. For example, SEL's emphasis on cognition meshes well with IEE's concern with cognitive development; its focus on emotional development coheres with the aims of the "Making Caring Common" project; its stress on positive mind-sets and optimism coheres with positive education's approach; and its integration of cognition, affect, and action, coupled with its emphasis on the social dimensions of behavior, resonate strongly with Aristotelian character education. Elements from these approaches could congenially be integrated into various SEL frameworks currently in use.
Integrative Ethical Education (IEE)

Pioneered by the developmental psychologist Darcia Narvaez, IEE combines insights from character education programs in psychology as well as "rational autonomy" views, such as that promoted by the psychologist Lawrence Kohlberg, that were influential in the late twentieth century.10 The idea is to unite the best of both perspectives by stressing the importance of the ability to make reliable moral judgments—emphasized by rational autonomy views—as well as the virtue-oriented approaches to moral growth championed by advocates of character education. IEE relies on three foundational ideas: (1) moral development is a form of developing expertise; (2) education is transformative and interactive; and (3) human nature is cooperative and self-actualizing.11

Expertise, gained after hundreds of hours of practice, enables experts to "see" a field or domain in ways superior to those of novices. As opposed to novices, experts typically have a more holistic vision of a domain, rely less overtly on rules, quickly and effortlessly assimilate information through nonconscious processing, and deeply desire to perform well in domain-related tasks. Chess experts, for example, see the board differently from novices, have internalized an intuitive sense of which moves work well in various circumstances, and, as a consequence, are faster and more versatile in their play. Narvaez imports these insights to the realm of ethics, arguing that ethical knowledge deepens and becomes more holistic as novices, with guided practice, become more adept at perceiving and responding appropriately to occasions for ethical action.

Education is transformative in the sense that teachers are called upon to use classroom strategies and exercises that foster the perceptual capacities and cognitive skill sets required for moral expertise. Included in this holistic approach is the cultivation of ethically appropriate affective responses, such as the desire to act well in ethical domains, and the proper alignment of emotions with ethical judgments. Finally, moral expertise is developed cooperatively through shared learning experiences. Drawing on Bryk and Schneider (2002), Narvaez writes: "Successful schools and classrooms form caring communities."12
These contexts nurture capacities for self-actualization and facilitate both moral development and children’s intrinsic motivation to achieve academically.
Caring

Caring, as promoted by the "Making Caring Common" project of the Harvard Graduate School of Education,13 stresses values such as caring, kindness, respect, generosity, and empathy and gives advice about how such attitudes can be transmitted by creating circles of caring.14 Members of the "Making Caring Common" project team have advocated strategies integrating social and emotional learning into the warp and woof of school life.15 Consistent with this approach, they suggest six strategies for affecting school climate in ways that can support moral and social development:

1. Make positive teacher-student relationships a priority.
2. Expect school staff to model moral, ethical, and prosocial behavior.
3. Provide opportunities for students to develop and practice skills like empathy, compassion, and conflict resolution.
4. Mobilize students to take a leadership role.
5. Use discipline strategies that are not simply punitive.
6. Conduct regular assessments of school values and climate.16

The six strategies to promote caring can be viewed as aspects of disciplined practice meant primarily to guide teachers, but also administrators and staff, in their interactions with students and with one another, thereby creating caring communities in entire schools.
Positive Education

Positive education is the application of the principles of positive psychology in schools. Positive psychology is the brainchild of Martin E. P. Seligman and the late Christopher Peterson. Seligman et al. argue that skills promoting happiness, as well as skills of achievement, should be taught in schools.17 This approach has garnered significant uptake from educators at all levels. MacConville and Rae, for example, have integrated the principles of positive psychology into a curriculum for adolescents.18 In earlier versions of positive psychology, Seligman took happiness as the goal toward which humans strive. MacConville and Rae write:

Seligman now believes that the topic of positive psychology is wellbeing and the "gold standard" for measuring it is flourishing. Wellbeing according to Seligman has five measurable elements that count toward it. They are:
1. positive emotion (of which happiness and life satisfaction are elements)
2. engagement
3. relationships
4. meaning
5. achievement.19

MacConville and Rae explain that the goal of well-being is to flourish. In order to flourish, an individual must have all three "core features": positive emotions, engagement and interest, and meaning and purpose; and at least three of six "additional features": self-esteem, optimism, resilience, vitality, self-determination, and positive relationships.20

Central to this approach is the idea that each of us possesses character strengths. Character strengths are similar to traits but can be influenced by environmental factors.21 Noteworthy is the notion of "signature strengths": each individual has and is capable of building their unique strengths.22 The development of the individual's strengths as well as traits such as grit, resilience, and willpower is thought essential for a flourishing life.
Aristotelian Character Education

Aristotelian character education is now enjoying a revival. The "classical" conception, as suggested by Aristotle's Nicomachean Ethics, is inclusive and robust. Teaching virtue consists of imparting or shaping a number of crucial abilities and skills: the ability to perceive when situations call for virtue; the ability to use practical wisdom or phronēsis to make reliable moral judgments; the ability to feel appropriate emotions when occasions call for them and to regulate one's emotions using reason; and the ability to have and act from appropriate motivations. If one has all of these abilities, one has virtue in the robust Aristotelian sense, but only if, in addition, one has them in a certain way: as entrenched parts of one's character, or dispositions, and not as transitory or fleeting states. Having the virtues as stable character traits is meant to ensure that their possessor acts virtuously across many different types of situations. For example, if she possesses honesty, she should tell the truth to her spouse, when testifying in court, on her income tax returns, and so on.

Aristotle thinks that we are not naturally virtuous or vicious but have the capacity to acquire virtue through habituated action. We need to have a good upbringing and to be guided in our deliberations, actions, and emotional responses by our families, friends, and communities, as well as by good legislation. Having and acting virtuously is part and parcel of having a flourishing life. External goods, such as wealth, good children, friends, noble birth, and good looks, are also required to flourish.

Kristjánsson's is the most comprehensive contemporary effort yet to articulate and defend a program of character education based on Aristotle's virtue ethics.23 In the main, Kristjánsson hews close to the classical account, rightly stressing the importance of cultivating phronēsis and the need to educate the educators about virtue and character. Yet he imports creative elements, such as the desirability of Socratic dialogue in bringing students to see and understand the value of virtue. Kristjánsson offers an attractive ideal at which to aim, one which values good character as partly constitutive of human flourishing and as intrinsically valuable. He thus counters a tendency by other recent authors to promote character for its instrumental value, because it is believed that having good character facilitates desirable outcomes, such as academic achievements.24
Structure: Core Value/Values and Practices

The foregoing theories espouse the following core value or set of values: SEL—social and emotional learning; IEE—moral expertise; caring—caring or similar positive other-regarding traits, such as benevolence and kindness; positive education—flourishing; Aristotelian character education—flourishing. There are areas of overlap among the values themselves. For example, each involves some conception of social and emotional responsiveness. IEE overlaps with Aristotelian character education in its emphasis on making reliable moral judgments and with SEL's emphasis on the integration of cognitive, emotional, social, and behavioral skills. The conceptions of flourishing promoted by positive education and Aristotelian character education also admit similarities. For example, each includes positive emotions, relationships, meaningful lives, and achievement. All of the approaches prescribe teaching practices for nurturing their respective values in students. Viewing these practices as forms of askesis or disciplined practice can shed light on further commonalities and differences among the theories.
Askesis

In the Western philosophical tradition, the idea of askesis goes back to the Stoics, who thought that self-discipline was needed to fend off emotions, keep oneself calm and reasonable, and keep one's mind focused on the fact that we are citizens of the universe, inhabitants of a divinely ordained cosmos, and not in control of our destinies. Askesis was the practice through which one developed disciplined habits of mind and body, cultivated character, and acquired and sustained virtue.

The notion of disciplined practice as a means of cultivating character is not unique to Stoicism. Practicing virtue in some form or other is a part of many theories and worldviews. The development of virtue through guided and habituated action, so important to Aristotle, is also an example of askesis (though he seems not to conceptualize habituation into virtue as a form of self-discipline), as are Buddhist mindfulness and Confucian ritual practices. The religious rituals of monks and nuns in various traditions, such as Roman Catholicism and Greek Orthodoxy, are also examples, as are the kinds of physical, dietary, and psychological regimens used by athletes and the military. The nature, scope, and extent of practices of askesis or self-discipline vary widely, but the core notion is that a person seeks to improve herself through deliberately practicing certain types of actions or routines, with the aim of acquiring and sustaining desired mental, physical, or psycho-physical states.

Viewing askesis as a method for the acquisition of virtue is not a new idea. Those acquiring virtue need deliberately to form habits of perceiving, thinking, feeling, and acting. Yet thinking about how to teach virtue in terms of askesis is novel. What might the self-discipline of teaching virtue in our day and age involve? The self-discipline of teaching virtue requires the teacher to familiarize herself with the content of the specific approach that she or her school chooses to implement. She needs to become informed about SEL, IEE, caring, positive education, or Aristotelian character education, for example, and adapt those approaches to the teaching of virtue in her specific context. This might seem too obvious a point to mention, but the pitfall to be avoided is a teacher's thinking that she already knows what concepts such as SEL, moral expertise, caring, or positive emotions are and so doesn't need to learn the nuances of the various perspectives now on offer. Yet a teacher's intuitive notions of how best to model caring or positive emotions in students, for example, or how to teach students to make reliable moral judgments should be "fine-tuned" or adjusted through acquaintance with the theory and science behind the diverse educational approaches that incorporate these values. Self-discipline, open-mindedness, and intellectual humility are required to deepen and enhance one's learning about how to transmit the values of caring, positive emotions, or accurate moral perception and judgment to students. As we will see in Parts 3 and 4, the variety of roles within school settings gives rise to different practices of askesis, as librarians, counselors, teachers, and administrators find creative ways to cultivate virtue in themselves and impart it to students. Often the transmission of these values to students does not involve simply leading class discussions about what it means to care about others or using strategies from SEL or positive education workbooks aimed at fostering students' resilience.

All of the approaches to character education here discussed recognize that any effective teacher of virtue should "practice what they preach." Teachers of virtue should make efforts to model virtuous behavior and attitudes for their students, treating students and others with patience, kindness, generosity, and other virtues. Teachers who seek to instill in students the skills necessary to perceive situations calling for virtuous actions, to make good moral judgments, and to perform appropriately virtuous actions need to cultivate those skills in themselves, and to be able to model and explain to students what they are doing when they use those skills.

More radically, one might think that true teachers of virtue, that is, those genuinely committed to forming their students' virtuous lives, should be committed to living virtuously outside the school as well as in it. This is not to require perfection, but it is to urge that a genuine commitment to virtue, evidenced in a teacher's life inside and outside of school contexts, is the best bet for thinking that she'll successfully transmit virtue to students. If a teacher is half-heartedly committed to virtue or has serious deficits in virtue in her personal life yet tries to communicate the value of virtue in the classroom, it is not unrealistic to think that students will detect her shortcomings and disregard her message, thinking that she lacks sincerity or is a hypocrite. A teacher who attempts to cultivate virtue in students while seriously falling short in her own life thereby risks doing more harm than good by turning students off to the message that virtue is valuable.

Two implications of this line of reasoning, one negative and one positive, are worth noting. The first is that some teachers might not be fit to teach virtue. Teachers with chronic attitude problems or issues such as substance abuse, for example, should probably not be enlisted to teach virtue. More positively, we can think of initiatives to teach virtue as being launched holistically so that schools become "incubators" of virtue. In such settings, teachers, administrators, staff, and students would seek to cultivate virtue in each other, such that those who are weaker in virtue might be supported in their commitment to be virtuous by others undertaking a similar endeavor. The Norman High School experience, described in Part 3, exemplifies this holistic approach.25 Finally, effective teachers of virtue, no matter which perspective they adopt, should avail themselves of age-relevant strategies and techniques for virtue cultivation that have been empirically tested in classrooms and found to be effective in promoting virtue development in students of that age group.26

We can conclude Part 2 by noting that an examination of the core values and teaching practices that are or would be adopted by practitioners of each approach to character education shows more commonalities among them than are typically recognized. Yet differences matter.
Moreover, some administrators and faculty adopt a "ground up" approach, selecting aspects of different perspectives that they think integrate especially well with their schools' histories, traditions, cultures, and circumstances. In this spirit, we now turn to the unique experience of Norman High School as described by the principal, Dr. Scott Beck, joining the journey as school personnel develop their own approach to teaching virtue.
3. The Norman High School Experience

As school leaders, we must ask ourselves a very straightforward question: What goal do we seek to bring to pass in our work with children? This question forces us to examine that which we believe defines success, happiness, and flourishing. In recent years, neoliberal policy agendas have called for a more strategic focus on "college and career readiness" and school accountability measures.27 These policies have narrowed the focus of schools in many cases to the production of quantifiable results measured largely through standardized testing. This emphasis on stringent indicators of academic success has resulted in a general progression away from more holistic models of working with students.28 Though rigorous academic outcomes and increased student achievement are worthy ends, the story of education does not reach its conclusion at this point. We argue that education is a means to an end goal of producing a critical citizenry and that the pursuit of character and virtue presents a more complete aim for the scope of our work with young people and the schools that serve them. We also argue that a commitment to character and virtue education need not come at the expense of a commitment to academic rigor. Character and scholarship must not be viewed as mutually exclusive but rather as twin attributes of flourishing students and as the ultimate aim of education.

I serve as head principal of Norman High School, a large, comprehensive, public high school serving approximately 2,000 students in grades 9–12 and a faculty and staff approaching 150 in number. In the spring of 2015 we asked ourselves a question: what was the purpose of our work with students and community? After weeks of dialogue, research, reflection, and discussion, we landed on three broad tenets: citizenship, scholarship, and character. When we began to brainstorm the attributes that we wished for our departing seniors to possess, we devised a lengthy list that included descriptors like: critical thinker, open-minded, kind, responsible, and so on. Interestingly enough, the bulk of the desired outcomes had very little to do with traditional academic content and outcomes. The list seemed to reflect a deeper desire to help students become much more than a grade point average, an admission letter to a selective university, or a standardized test score. Our teachers approved the new school mission nearly unanimously.

This process of drafting a new school mission was the catalyst that would eventually lead to a partnership between Norman High School and The Institute for the Study of Human Flourishing at The University of Oklahoma. Over the course of the past two years, Norman High School has implemented a variety of initiatives working in concert with the Institute. Our aim is straightforward: to guide students and ourselves in the development of intellectual virtue and character in an effort to bring about flourishing for members of our learning community. This process is being carried out through exposure to innovative learning experiences designed to foster deep thinking and cultivate the critical skills needed for students to thrive in the twenty-first century while being empowered to build a life of meaning and purpose. In essence, content, curriculum, and the pursuit of character development are interconnected.
While students engage in rich and authentic learning experiences designed to cultivate positive habits of mind and thought, they simultaneously are afforded the opportunities to exercise intellectual humility, autonomy, tenacity, and so forth.29 That is to say, character education and academic knowledge are not competing ends in the classroom but rather different yet connected goals that are dependent on one another. A teacher cannot cultivate virtue in the absence of student learning experiences that are worthy of virtue cultivation. Intellectual humility and open-mindedness cannot be cultivated by being subjected to rote memorization and similar, non-stimulating activities. Likewise, teachers cannot cultivate deep conceptual understanding of academic content by ignoring or being indifferent to the intellectual virtues and matters of character. It is in the forming of these virtues that students are presented the opportunity to harness the curiosity and autonomy to engage material, the open-mindedness and humility to question preconceptions and misconceptions, and the tenacity to continue learning despite temporary failures, ambiguous and novel contexts, and various other frustrations and challenges.30

An example in practice of one such experience, which also shows how character education and academic rigor can work together, can be found in a unit of study constructed by ninth-grade English teachers and school librarians. Utilizing the Guided Inquiry Design process, a theme of social justice was established and work began to craft a deep student learning experience.31 Guided Inquiry Design attempts to draw on student interest and curiosity in an effort to bolster engagement, grapple with research competencies, apply new knowledge as conceptual understanding deepens, and share new knowledge with the broader world. A social justice unit held an additional layer of character value for students as issues of empathy and justice were pondered as learners began to engage in the academic content. The academically rigorous nature of the unit of study encouraged both engagement and the cultivation of numerous intellectual virtues including open-mindedness, humility, and autonomy, while simultaneously creating a safe space for students to discuss compassion and other moral virtues. Virtue, character, and deep conceptual understanding work together in this instance to create relevant learning experiences that allow students to apply new knowledge in a variety of new situations. This application of newly constructed knowledge and blossoming character is of value to students in academic and work-related contexts and also in matters of ethical concern and those requiring moral judgment.32

It is our belief that the purpose of education is to ensure the full flourishing of all students as learners become engaged citizens, inquisitive scholars, and individuals of strong character. As students begin to possess a better and more complete understanding and awareness of themselves as both learners and people, they are better equipped to cultivate the empathy, self-control, and intrinsic curiosity that are imperative for a purposeful academic, professional, and personal life.33 These goals are being systematically addressed through: (1) an investment in professional development preparing faculty and staff to serve as coaches for students; (2) a deep exploration of how people learn and develop conceptual understanding; and (3) explicitly teaching the value of the intellectual virtues and character to teachers and students alike, while facilitating growth in this regard. In each of these initiatives, deliberate focus has been placed on fostering growth for both students and faculty/staff.
Character is partly nurtured and developed through an emotional contagion effect where adults modeling strong character and virtue establish a culture and ethos favorable for the cultivation of character development in students. In these sorts of cultures, students manage to "catch" character from admirable and influential role models.34 In this regard, we view Norman High as an "incubator" for the development of virtue—we seek to create a culture or climate in which virtue is integrated into the warp and woof of daily life.
Let me briefly explain the three initiatives: Life Coaching: Through the life coaching program, we seek to empower students to deal with obstacles, embolden students to persevere in the face of challenges, help students create life structures that contribute to future success, help students develop the autonomy to make their own choices, provide students with a sense of control over their education, enhance intrinsic motivation, facilitate critical thinking about their educational decisions, and discover new possibilities and potential. With faculty, we seek to enhance feelings of fulfillment as a professional educator, enhance the sense of moral purpose derived from work, boost faculty morale, and bolster perceptions of administrative support. In coaching sessions, coaches work one on one with students and assist them in prioritizing areas of their academic and personal life that they would like to improve. Through a method of asking questions, coaches encourage students to develop their own strategies and take steps to achieve their own goals. These meetings are held at regularly occurring intervals. Additionally, trained staff utilize the core tenets of life coaching and questioning in their interactions with students throughout each day. Life coaching strategically empowers students to develop a number of the intellectual virtues including autonomy, humility, open-mindedness, courage, and tenacity.35 Learning Team: A team of teachers, counselors, and librarians is working to develop the instructional and pedagogical knowledge and skills necessary to deliver learning experiences to students that simultaneously cultivate the intellectual virtues and prepare young people for the twenty-first century in authentic and relevant ways. Through this initiative we seek deeper learning for students, greater conceptual understanding of content material, and an enhanced sense of autonomy and curiosity in learners as they develop these critical skills.36 With faculty, we seek to enhance feelings of fulfillment as a professional educator, enhance the sense of moral purpose derived from work, boost faculty morale, and bolster perceptions of administrative support. Multiple teams of teachers have attended various conferences addressing brain research, citizenship, and ethics, and a team, currently composed of principals, teachers, and school librarians, has been built to facilitate the implementation of concepts and learning from the conferences, readings, research, and collaborative learning sessions into the day-today workings of Norman High School. Team members also deliver professional development to the faculty in various settings throughout the school year on matters of brain and learning research. Plans are in place to add to this team in an ongoing manner as the initiative grows and implementation matures. Expanding our efforts in this way resonates with the “circles of caring” approach advocated by the “Making Caring Common” project of the Harvard Graduate School of Education. We plan to expand our circles of virtue education by creating and encouraging diverse “avenues of inclusion” in the processes of educating for virtue and transforming school culture. Additionally, the team has undertaken the planning of a Learning and Creativity Showcase. This showcase will put
student work on display for faculty/staff, district administration, parents, and the broader community.
Intellectual Virtues: Through the Intellectual Virtues cultivation initiative, we seek to develop in students intellectual curiosity, intellectual humility, intellectual autonomy, intellectual attentiveness, intellectual carefulness, intellectual thoroughness, intellectual open-mindedness, intellectual courage, and intellectual tenacity.37 Embedded strategically into the work of the freshman academy, weekly advisory lessons provide all ninth-grade students with exposure to the intellectual virtues and opportunities to reflect on the application of the virtues in their academic work and perceived growth of the virtues in themselves. The freshman academy comprises twenty-two teachers from across the subject areas and special education, two counselors, one principal, and approximately 600 students. To launch this initiative, a team composed of principals, counselors, and teachers visited Intellectual Virtues Academy (IVA) in Long Beach, California. The team was afforded the opportunity to visit at length with school personnel, observe classes, develop a resource library, and weave the language of IVA into the Norman High School advisory curriculum. IVA is a public charter school built on the cultivation of the intellectual virtues in students.38 There are significant differences between IVA and Norman High School in both size and history, and thus different steps must be taken to begin to implement character education into school practice, with a full understanding that although lessons are to be learned from other schools' implementing character education programs, every school must execute this process in personalized ways. There is no "one size fits all" in virtue education. Appreciating the contextual differences and nuances of the given institution is an important step in establishing a program that fits the needs of the school. This process requires focused leadership, broad vision, deep conceptual understanding, and ample support in the way of professional development.39 The Norman High School experience shows the importance of at least the following: school leaders must make the case that a focus on character and virtue education does not come at the expense of academic rigor; available models must be adapted to fit the specific context; capacity must be scaffolded through professional development; there needs to be a shared and dialogically generated conception of what the program's goals are; and outcomes and ways of measuring achievement must be agreed on, given the scrutiny placed on schools residing in a culture of accountability.
4. Conclusion
Reflection shows that key features of the theoretical perspectives reviewed in Part 2 of this chapter have been integrated into the Norman High School initiative in ways consistent with that school's unique context. Caring, for example, is exemplified in the Life Coaching initiative and the Learning and Creativity Showcase, which seek to empower students and celebrate their intellectual autonomy, and resonances with "circles of caring" have already been noted. Social-emotional learning is also promoted by Life Coaching and is being studied by faculty and librarians who are familiarizing themselves with the latest scientific
research on adolescent brain development. The development of good moral judgment—the beginning steps toward moral expertise—is being cultivated in students by counselors who have been trained in Life Coaching and is being integrated into classroom instruction through curriculum changes and pedagogical techniques that ask students to reflect upon their character strengths and how best to exercise them in daily life. Additionally, the character strengths and virtues promoted by positive psychology have become a part of how students and teachers conceptualize themselves at Norman High School. Consistently with Aristotelian approaches to character education, the focus on virtue development is a community endeavor and is meant to inculcate in students enduring dispositions to be virtuous. Finally, a word about askesis is in order. Part 2 takes a theoretical approach in suggesting how teaching virtue can be viewed as a form of self-disciplined practice. Part 3, which describes the practicalities of teaching virtue in a large public high school, reveals that the self-discipline involved in teaching virtue can take different forms, depending on one's goals and the roles that teachers of virtue occupy. If one's goal as a counselor is to empower students through coaching them to ask insightful questions, develop well-informed and thoughtful strategies for the attainment of their own goals, and take practical steps toward those ends, then life coaching provides one with a specific form of askesis in planning and interacting with students. If one is a teacher who seeks to learn more in order to effectively promote virtue development in students through in-class interactions, askesis will consist of becoming a learner oneself and finding creative ways to impart one's knowledge to one's charges and to share it with one's colleagues. If one is a school administrator, askesis will consist of facilitating learning about virtue, teaching it, and developing it in the school as a whole. The context in which one works and the community that one is creating will influence the specific forms that askesis, as well as virtue and its exercise, take at various stages of the implementation of character development programs. The Norman High School experience shows not only that school personnel have much to learn from theory but also that theoreticians should learn and be inspired by the creative approaches of those practitioners who bring virtue to life in their own contexts. Before concluding, let us pause to register a challenge. Students entering high school are already well on their way in character development. For some students, firm foundations have been laid at home and in earlier school experiences. Others are not so fortunate. Aside from character development that is lacking or lagging, many students face other challenges. They are from lower socioeconomic status groups; they come from single-parent households; they do not speak English at home; they are children of immigrant parents who do not have adequate skill sets to navigate the educational system; their home environments are not safe, are not drug-free, do not support them with nourishing food, and so on. Tragically, some students are homeless. Consistently with Aristotelianism, we believe that virtue is necessary but not sufficient for a flourishing life. The material circumstances of these students' lives need to be improved if they are fully to flourish.
Consequently, the Institute for the Study of Human Flourishing, in partnership with Norman High School, is conducting Partner Parents' Initiatives, as well as other outreach to community and civic organizations, to address these needs. Our vision is that parents and teachers should cooperate to reinforce virtue cultivation at home as well as in school and in extracurricular activities. In short, the community of virtue we seek to establish starts within school walls but does not end there.
As the saying goes, “It takes a village”—the resources and efforts of entire communities are needed to promote virtue education.
Notes
1. For the Protagoras, see Plato (1980a).
2. For the Protagoras and the Republic, see Plato (1980a), pp. 308–352, and Plato (1980b), pp. 575–844; for the Confucian tradition, see Confucius (1998); see also Aristotle (1985), Rousseau (1979), and Dewey (1944, 1990, and 1997).
3. For perspectives on moral learning more generally, see Chapters 5 and 6 of this volume.
4. See Roger P. Weissberg et al., "Social and Emotional Learning: Past, Present, and Future," in Handbook of Social and Emotional Learning: Theory and Practice, New York: The Guilford Press, 2017, p. 3.
5. Ibid., p. 5.
6. Ibid., pp. 5–6.
7. Ibid., p. 6.
8. Ibid., pp. 6–7.
9. Ibid.
10. See Chapters 1, 2, and 5 of this volume for further discussion of Kohlberg's approach.
11. See Darcia Narvaez, "Integrative Ethical Education," in Melanie Killen and Judith G. Smetana (eds.), Handbook of Moral Development, 1st ed., Mahwah, NJ: Erlbaum, 2008.
12. Narvaez, p. 15.
13. See http://mcc.gse.harvard.edu/. Accessed April 17, 2016.
14. See, for example, Richard Weissbourd and Stephanie M. Jones, "Circles of Care," Educational Leadership volume 71, number 5 (2014): 42–47.
15. See Stephanie M. Jones and Suzanne M. Bouffard, "Social and Emotional Learning in Schools: From Programs to Strategies," Social Policy Report volume 26, number 4 (2012): 1–22.
16. Richard Weissbourd, Suzanne M. Bouffard, and Stephanie M. Jones, "School Climate and Moral and Social Development," in T. Dary and T. Pickeral (eds.), School Climate: Practices for Implementation and Sustainability: A School Climate Practice Brief, Number 1 (New York, NY: National School Climate Center, 2013), p. 1.
17. See Seligman et al. (2009).
18. See MacConville and Rae (2012).
19. Ibid., p. 18.
20. Ibid., pp. 18–19.
21. Ibid., p. 30.
22. Ibid., pp. 28–30.
23. See Kristjánsson (2015).
24. See Kristjánsson (2015, 1) for remarks about Paul Tough's book, How Children Succeed: Grit, Curiosity, and the Hidden Power of Character (2012).
25. See also the approach taken by KIPP (Knowledge Is Power Program) schools: www.kipp.org/our-approach, also discussed by Tough (2012). We can envision such schools to be places that seek to create a community of virtuous "friends," along lines suggested by Aristotle's discussion of friendship of character in the Nicomachean Ethics.
26. See, for example, Narvaez (2008), Seligman et al. (2009), Sternberg et al. (2009), Tough (2012), Arthur et al. (2017), and Durlak et al. (2017), parts II and III.
27. Giroux, 2012.
28. Giroux, 2012; Jubilee Centre, 2016; accessed March 20, 2017.
29. These positive habits of mind and thought are discussed by Ritchhart et al. (2011).
30. Baehr, 2015, accessed March 15, 2017; Bransford, 2000; Ritchhart et al., 2011.
31. For information about Guided Inquiry, see Kuhlthau et al., 2012.
32. Dow, 2013; Jubilee Centre, 2016, accessed March 20, 2017; Kuhlthau et al., 2012; Ritchhart, 2002.
33. Seligman, 2011.
34. Jubilee Centre, 2016; accessed March 20, 2017.
35. Baehr, 2015, accessed March 15, 2017.
36. Wagner & Dintersmith, 2015.
37. Baehr, 2015, accessed March 15, 2017; Ritchhart, 2002.
38. Baehr, 2015; for the IVA Charter petition, see Dow, pp. 180–193.
39. Jubilee Centre, 2016, accessed March 20, 2017; Lickona & Davidson, 2005.
References
Aristotle. (1985). The Nicomachean Ethics, trans. Terence Irwin. Indianapolis, IN: Hackett Publishing.
Arthur, James, Kristjánsson, Kristján, Harrison, Tom, Sanderse, Wouter and Wright, Daniel. (2017). Teaching Character and Virtue in Schools. New York: Routledge.
Baehr, Jason. (2015). "Cultivating Good Minds: A Philosophical and Practical Guide to Educating for Intellectual Virtues," http://intellectualvirtues.org/ [Accessed March 15, 2017].
Bransford, J. D. (2000). How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.
Bryk, A. and Schneider, B. (2012). Trust in Schools: A Core Resource for Improvement. New York: Russell Sage.
Confucius. (1998). The Analects of Confucius, trans. Roger T. Ames and Henry Rosemont, Jr. New York: The Random House Publishing Group.
Dary, T. and Pickeral, T. (eds.). (2013). School Climate: Practices for Implementation and Sustainability: A School Climate Practice Brief, Number 1. New York: National School Climate Center.
Dewey, John. (1944). Democracy and Education: An Introduction to the Philosophy of Education. New York: Free Press.
———. (1990). The School and Society and The Child and the Curriculum. Chicago: University of Chicago Press.
———. (1997). Experience and Education. New York: Touchstone.
Dow, P. (2013). Virtuous Minds: Intellectual Character Development. Downers Grove, IL: InterVarsity Press.
Durlak, Joseph A., Domitrovich, Celene E., Weissberg, Roger P. and Gullotta, Thomas P. (eds.). (2017). Handbook of Social and Emotional Learning: Theory and Practice. New York: Guilford Press.
Giroux, H. A. (2012). Education and the Crisis of Public Values: Challenging the Assault on Teachers, Students, and Public Education. New York: Peter Lang.
Jones, Stephanie M. and Bouffard, Suzanne M. (2012). "Social and Emotional Learning in Schools: From Programs to Strategies," Social Policy Report, 26 (4), 1–22.
Jubilee Centre for Character and Virtues. (2016). "A Framework for Character Education in Schools," www.jubileecentre.ac.uk/userfiles/jubileecentre/pdf/character-education/Statement_on_Teacher_Education_and_Character_Education.pdf. Birmingham.
Killen, Melanie and Smetana, Judith G. (eds.). (2008). Handbook of Moral Development (1st ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Kristjánsson, Kristján. (2015). Aristotelian Character Education. New York: Routledge.
Kuhlthau, C. C., Maniotes, L. K. and Caspari, A. K. (2012). Guided Inquiry Design: A Framework for Inquiry in Your School. Santa Barbara, CA: Libraries Unlimited.
Lickona, T. and Davidson, M. (2005). Smart and Good High Schools: Integrating Excellence and Ethics for Success in School, Work, and Beyond. Cortland, NY: Center for the 4th and 5th Rs/Character Education Partnership.
MacConville, Ruth and Rae, Tina. (2012). Building Happiness, Resilience and Motivation in Adolescents: A Positive Psychology Curriculum for Well-Being. London, Philadelphia: Jessica Kingsley Publishers.
"Making Caring Common Project," Harvard Graduate School of Education. http://mcc.gse.harvard.edu
Narvaez, Darcia. (2008). "Integrative Ethical Education," in Handbook of Moral Development (1st ed.), ed. Melanie Killen and Judith G. Smetana. Mahwah, NJ: Lawrence Erlbaum Associates, 1–25.
Plato. (1980a). "Protagoras," in The Collected Dialogues, including the Letters, ed. Edith Hamilton and Huntington Cairns. Princeton: Princeton University Press, 308–352.
Plato. (1980b). "Republic," in The Collected Dialogues, including the Letters, ed. Edith Hamilton and Huntington Cairns. Princeton: Princeton University Press, 575–844.
Ritchhart, R. (2002). Intellectual Character: What It Is, Why It Matters, and How to Get It. San Francisco, CA: Jossey-Bass Pfeiffer.
Ritchhart, R., Church, M., Morrison, K. and Perkins, D. N. (2011). Making Thinking Visible: How to Promote Engagement, Understanding, and Independence for All Learners. San Francisco, CA: Jossey-Bass.
Rousseau, Jean-Jacques. (1979). Emile, or On Education. Intro., trans., and notes by Allan Bloom. New York: Basic Books.
Seligman, M. E. (2011). Flourish: A Visionary New Understanding of Happiness and Well-Being. New York: Simon & Schuster.
Seligman, M. E., Ernst, Randall M., Gillham, Jane, Reivich, Karen and Linkins, Mark. (2009). "Positive Education: Positive Psychology and Classroom Interventions," Oxford Review of Education, 35 (3), 293–311.
Sternberg, Robert, Grigorenko, Elena and Jarvin, Linda. (2009). Teaching for Wisdom, Creativity, Intelligence, and Success. Thousand Oaks, CA: Sage.
Tough, Paul. (2012). How Children Succeed: Grit, Curiosity, and the Hidden Power of Character. New York: Houghton Mifflin Harcourt.
Wagner, T. and Dintersmith, T. (2015). Most Likely to Succeed: A New Vision for Education to Prepare Our Kids for Today's Innovation Economy. New York: Scribner.
Weissberg, Roger P., Durlak, Joseph A., Domitrovich, Celene E. and Gullotta, Thomas P. (2017). "Social and Emotional Learning: Past, Present, and Future," in Joseph A. Durlak, Celene E. Domitrovich, Roger P. Weissberg and Thomas P. Gullotta (eds.), Handbook of Social and Emotional Learning: Theory and Practice. New York: Guilford Press, 3–19.
Weissbourd, Richard, Bouffard, Suzanne M. and Jones, Stephanie M. (2013). "School Climate and Moral and Social Development," in T. Dary and T. Pickeral (eds.), School Climate: Practices for Implementation and Sustainability: A School Climate Practice Brief, Number 1. New York: National School Climate Center, 1–5.
Weissbourd, Richard and Jones, Stephanie M. (2014). "Circles of Care," Educational Leadership, 71 (5), 42–47.
Further Readings
Teaching Character and Virtue in Schools, by James Arthur, Kristján Kristjánsson, Tom Harrison, Wouter Sanderse and Daniel Wright (New York: Routledge, 2017), is an insightful examination of teaching virtue from the Jubilee Centre in Birmingham, England.
Intellectual Virtues and Education: Essays in Applied Virtue Epistemology, ed. Jason Baehr (New York: Routledge, 2017), is an interesting collection of essays on roles for intellectual virtues in education.
Teaching for Wisdom, Creativity, Intelligence, and Success, by Robert Sternberg, Elena Grigorenko and Linda Jarvin (Thousand Oaks, CA: Sage, 2009), details strategies for fostering thinking skills in students.
Aristotelian Character Education, by Kristján Kristjánsson (New York: Routledge, 2015), offers the most extensive defense of Aristotelian character education to date.
Most Likely to Succeed, by Tony Wagner and Ted Dintersmith (New York: Simon & Schuster, 2015), offers a vision of schools reimagined as places of curiosity, discovery, and creativity.
Smart and Good High Schools, by Thomas Lickona and Matthew Davidson (Cortland, NY: Center for the 4th and 5th Rs/Character Education Partnership, 2005), offers an extensive resource for schools in the application of ethics in the academic program.
Related Topics
Chapter 5, Moral Development in Humans; Chapter 6, Moral Learning; Chapter 7, Moral Reasoning and Emotion; Chapter 20, Moral Theory and Its Role in Everyday Moral Thought and Action; Chapter 22, Moral Knowledge as Know-How; Chapter 25, Moral Expertise.
28
DECISION MAKING UNDER MORAL UNCERTAINTY
Andrew Sepielli
1. Introduction
Suppose that on the best scientific estimates, there is a 1-in-1000 chance that a massive asteroid is on a course to strike Earth in 20 years, wiping out humanity. Suppose further that it is possible to build a device that can redirect the asteroid away from Earth but that it must be built immediately for use in the next few years in order to be effective. The cost of the device: one million dollars. Obviously we should build such a device, even though there is a 99.9% chance we'd be wasting a million dollars to push around an asteroid that wouldn't have hit us anyway. This doesn't settle all the questions philosophers care about, of course. Which of the many decision-rules from which this verdict follows is the correct one? Expected value maximization? Some risk-averse or risk-seeking approach? Maybe the probabilities and values are imprecise, necessitating some fancier decision theory to account for the conclusion. And what's the nature of these probabilities? Are they objective? subjective? evidential? epistemic? Are they robust facts, or mere projections of our confidence levels? There is a lot for philosophers to chew on here. But again, the conclusion that we should build the device, despite the fact that it costs money and is almost definitely unnecessary, is something reasonable people can agree should guide our behavior. Well, just as we can be uncertain about nonmoral propositions—"Will an asteroid hit the earth?"—it seems that we can be uncertain about moral ones: What is the correct theory of punishment? Is the act/omission distinction of moral significance? Is meat murder? Indeed, it's hard to make sense of moral inquiry without positing such uncertainty (Sepielli, 2016). We might wonder, then, whether the kind of reasoning that applied in the asteroid case described earlier also applies in cases of moral uncertainty. Is it ever the case that, even though some moral claim may well not be true, it ought to guide our decision making anyway because if it turns out to be true, then the moral cost of not acting on it is sufficiently high? Some philosophers have argued that the answer is "yes" (Lockhart, 2000; Ross, 2006; Sepielli, 2009; Moller, 2011; Barry & Tomlin, 2016; MacAskill, 2016; Hicks, forthcoming; Tarsney, 2018). Maybe we should avoid killing animals for food even if there is only
some chance that it is wrong. Maybe we should radically change our criminal justice system if there’s any reason to suspect that retributivism is mistaken. Maybe we should donate much more of our money to charity than most of us do, since some very demanding moral theory might be the right one. That’s the kind of question I want to think through in this essay: However exactly we prefer to think about decision making, even moral decision making, in the face of uncertainty about the nonmoral facts, should we and can we treat decision making under moral uncertainty in more or less the same way? Call the view that we can and should “moral uncertaintism.” I’ll proceed by considering three worries about this view.
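To fix ideas, here is a minimal sketch of the expected-value calculation that underwrites the verdict in "asteroid." The probability and the cost come from the example itself; the disvalue assigned to extinction is a hypothetical stand-in, since nothing in the example fixes it.

```python
# A sketch of the expected-value reasoning behind "build the device." The
# 1-in-1000 probability and the $1M cost come from the example; the disvalue
# assigned to extinction is a hypothetical stand-in, chosen only to reflect
# the judgment that extinction is vastly worse than losing a million dollars.

P_STRIKE = 1 / 1000
COST_OF_DEVICE = 1_000_000       # dollars
DISVALUE_OF_EXTINCTION = 10**12  # hypothetical: any sufficiently large number

def expected_value(build: bool) -> float:
    """Expected value (in dollar-equivalents) of building or not building."""
    cost = COST_OF_DEVICE if build else 0
    expected_loss = 0 if build else P_STRIKE * DISVALUE_OF_EXTINCTION
    return -(cost + expected_loss)

print(expected_value(build=True))   # -1,000,000
print(expected_value(build=False))  # -1,000,000,000, so building wins
```

Any sufficiently large disvalue yields the same verdict, which is one way of seeing why reasonable people can agree on the conclusion while disagreeing about the underlying decision theory.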
2. Worries about Probability
Very few people would object to talk of a 1-in-1000 chance of an asteroid hitting Earth or of an 83% chance that a candidate will win the upcoming election. Many more, though, would look askance at, say, the claim that utilitarianism has a 35% chance of being true. Some people might be worried about the precision of such a claim: In virtue of what is the probability of this or any other theory exactly 35% (or 50%, or any number)? A first-pass answer is that the proponent of moral uncertaintism is by no means committed to precise probabilities. She may instead assign an imprecise probability to such a view. And there are already proposals in the literature for how to model imprecise probabilities and make decisions in light of them (Gardenfors & Sahlin, 1982; Levi, 1986). A more satisfying answer, though, would include an explanation of why moral propositions have the probabilities they do, whether precise or imprecise. Such an explanation would also help us respond to those who say that the problem with assigning a 35% chance to utilitarianism is not that 35% is precise but that it is intermediate between 0 and 1. How, they ask, can we say anything about moral claims other than that they have probability 1 if true and 0 if false? The roots of this worry are different on different interpretations of probability. If the probabilities here are so-called "modal chances," and the modality in question is metaphysical, then the problem will be that basic moral claims are necessarily true or false and, as such, cannot have intermediate probabilities (Mellor, 2005). If they're ones to which rational or coherent agents must be able to conform their own levels of confidence, then again it's not clear that they can be intermediate, since it's arguable that any attitude regarding the correct moral theory short of certainty is irrational. Interpreting them as "evidential probabilities" allows for intermediate assignments only if it's right to think of basic moral claims as supported by evidence that comes in degrees (Williamson, 2002; Mellor, 2005). If evidence is thought of as carving up metaphysical possibility space, then there cannot, strictly speaking, be evidence of this sort for necessary propositions (Stalnaker, 1984). And frequency interpretations seem unable to accommodate intermediate probabilities for basic moral claims, since these claims are timeless, or atemporal. The easiest way out of this thicket is just to say that the probabilities here are subjective—that they're degrees of belief, levels of confidence, "credences"; that it's all in our heads. For it seems clear, as a psychological matter, that there are intermediate levels of confidence in moral claims. A person might suspect that consequentialism is true but not be sure. I might be more confident that consequentialism is true than that a particular form of it—e.g., utilitarianism—is true, although I think utilitarianism might be true. And so on.
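For concreteness, here is one toy way to act on an imprecise credence of the sort just mentioned: a worst-case ("Gamma-maximin") rule in the spirit of the proposals cited above. The acts, payoffs, and credence interval are all invented for illustration.

```python
# One toy way to decide with an imprecise credence: carry an interval rather
# than a point probability, and prefer the act whose worst-case expected
# value over the interval is best. Payoffs and the interval are invented.

P_WRONG = (0.2, 0.6)  # imprecise credence that eating meat is gravely wrong

PAYOFFS = {
    "eat meat": {"wrong": -100, "not wrong": 10},
    "abstain":  {"wrong":    0, "not wrong":  0},
}

def worst_case_ev(payoffs: dict) -> float:
    # Expected value is linear in p, so its minimum sits at an endpoint.
    return min(p * payoffs["wrong"] + (1 - p) * payoffs["not wrong"]
               for p in P_WRONG)

best = max(PAYOFFS, key=lambda act: worst_case_ev(PAYOFFS[act]))
print(best)  # "abstain": worst case 0 beats eating meat's worst case of -56
```

Gamma-maximin is only one option among several; the point is just that imprecision by itself is no bar to systematic decision making. Whether the probabilities so modeled should be understood as merely subjective is another question.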
This answer is not wrong, exactly, but I do think it leaves something to be desired. I don’t think we should be satisfied with simply saying that there are intermediate subjective probabilities or levels of confidence in moral propositions. We should insist on intermediate probabilities that are mind-independent in some sense. To see why, we need to consider just what kind of theoretical and practical work a theory of decision making under moral uncertainty is supposed to do. Writers on the topic have identified two roles for such a theory. Some have said that it must have some connection to blame and other reactive attitudes and their “objective correlates”—punishment and so on. This was the concern of the late-medieval and early modern Catholic theologians who first formulated such theories, which they called “reflex principles” (Jonsen & Toulmin, 1990).1 In the more contemporary literature, Alexander Guerrero (2007) rejects the view that blameless moral ignorance is necessarily exculpatory in favor of a view he calls “Don’t Know, Don’t Kill”—that one is blameworthy for killing something that might have moral status. Others have developed such theories with an eye toward their role as guides to action. The presupposition is that we cannot guide our behavior by norms about which we’re uncertain, and so in order to take our best shot at living in accordance with our moral reasons, we need to employ some norm that in some sense takes account of the probabilities of moral claims (Sepielli, 2012a). Decision-rules that are relative to subjective probabilities are ill-suited to play either role. It would be absurd to let a Nazi off the hook for heinous acts just because he was very confident in the moral view upon which he based those acts (Harman, 2011, 2015). What seems more relevant is how reasonable or well-grounded that confidence is. It is likewise implausible that the norms by which we fundamentally guide our behavior under uncertainty are norms that advert to that very uncertainty. For one thing, this would not fit with how we guide our actions under conditions of certainty or full belief. If I’m certain that P, I guide my behavior most fundamentally by norms like “If P, then I should do A,” not norms that advert to my having that very belief—“If I believe that P, then I should do A.” We don’t need to look inward at our own mental states to move forward to action. It would be odd, then, to suppose that things are different in the case of uncertainty—i.e., that I fundamentally guide my behavior by norms of the form “If my own degree of belief in P is thus-and-such, then I should do A.” This last observation suggests a way forward. If the action-guiding role for agents who are certain of P is played by norms that advert to P, the same role for agents who are uncertain of P will be played by norms that advert to whatever stands in the same relation to a credence in P that P stands in to a full belief in P. So what is that relation? It’s that of expression. “P” expresses the full belief that P. By contrast, “I believe that P” reports a full belief that P. It’s clear that “I have a .7 degree of belief that P” reports such a degree of belief. But what utterance expresses this degree of belief? Is it “There’s a .7 objective probability that P”? No. That’s how you express a full belief about objective probabilities, which we’ve already seen may not apply to fundamental moral claims. 
Rather, recent writers on the topic have used the term "epistemic" for those probabilities mentioned in statements that express, rather than report, degrees of belief.2 To say that there is a .7 epistemic probability that Elizabeth Warren will win the election, or that Caligula went insane due to illness, is not to say that, in addition to Warren, the election, Caligula, illness, etc., there are these extra features of the world—epistemic probabilities. Rather, it's simply to show (rather than to tell about) one's
own credence in those propositions, just as the straightforward claim that Warren will win the election would show one's full belief that Warren will win. Now, the truth of the claim that Warren will win, or that the asteroid will hit the earth, is independent of whether any person believes it, notwithstanding the fact that making such a claim expresses that belief. That's a big difference between expressing and reporting a claim. This is what we mean when we say that the truth or falsity of such a claim is "mind-independent." Similarly, then, the mere fact that epistemic probability claims express (rather than report) credences does not entail that the truth of these claims depends on anyone's credences. Epistemic probability claims are mind-independent, too. This makes them better suited than claims about subjective probabilities to guide action under uncertainty, since accepting them does not involve "looking inward" at our own credences; it involves merely having those credences (Sepielli, 2012a). It also means that the epistemic probabilities of moral propositions are relevant to blame and punishment in a way that the subjective probabilities are not (Sepielli, 2017). Again, I shouldn't be let off the hook for a heinous action just because I was very confident it was right. And finally, to return to our original question in this section, there seems to be no principled reason why intermediate probabilities of the epistemic sort could not be assigned to moral propositions. For to say that there is such-and-such an epistemic probability that some moral claim is true is not to imply anything about evidence, or frequency, or metaphysical possibility, or what the fully rational agent could think. But to say that there is no principled reason why moral propositions might have intermediate epistemic probabilities is not yet to show affirmatively that they might have them. To see how they might, it's helpful to turn to some examples. Consider the claim, "You ought to push the person off the bridge to stop the trolley." The probability of this claim would seem to be raised, though not to 1, by the falsity of the "doctrine of double effect." It is raised because one putative ground for the wrongness of pushing the man is the DDE, and if the DDE is false, then that ground is illusory. It is not raised to 1, though, because there may be some other, genuine ground for the wrongness of pushing the man—e.g., something having to do with the causal relationships between the agent and patients rather than with the agents' intentions (Kamm, 2006). Or consider: It seems to me that while philosophers are willing to tolerate complexity in a moral theory, they tend to assign credence in accordance with simplicity, ceteris paribus; a theory that cites one factor as fundamentally morally relevant is more plausible than one that cites two, and so on (Kagan, 1989). Suppose that they are right in so doing. And suppose further that the well-being an action produces is at least one of the things that contributes to its moral status. It seems that these two truths raise the epistemic probability of utilitarianism but do not confirm it beyond doubt. Examples like this are familiar to almost everyone who has thought philosophically about ethics. Why, then, might someone nonetheless deny the existence of intermediate epistemic probabilities for moral claims? The main worry, as far as I can see, concerns the accessibility of moral truths—or more specifically, their apparent equi-accessibility.
For consider that, relative to all of the facts, the probability of a truth is 1 and a falsehood is 0. And relative to certain collections of facts, the probability of a flipped ordinary coin landing heads is 90%, and I have a greater chance of being elected president of the United States than Warren does. To get determinate, plausible,
intermediate probabilities, we need some principled way of distinguishing between the facts that may undergird an epistemic probability distribution and those that may not (Hajek, 2007). The problem is that many of the ways that seem appealing in nonmoral cases won't work in moral ones. In assigning a probability to a coin landing heads or Warren winning an election, we might, say, exclude all future events from the supervenience base. Fundamental moral claims, however, are true or false atemporally. Or we might exclude events that are unknowable or beyond our ken—the microphysical structure of the coin being flipped, the neural structure of the brains of the voters whose behavior we aim to predict—but it's not clear that there's any analogue in the moral case. There are no moral features too small for the eye to see. Rather, it can seem that all moral claims are accessible or within our ken, especially if, as many argue, they're knowable a priori. Otherwise, we might exclude claims that are at least as hard to ascertain the truth of as the claims whose probabilities are being assessed. The idea here is that epistemic probabilities depend on something like evidence, and evidence is something that "stands between" the thinker and the proposition the truth of which she's trying to ascertain. One ascertains such a truth by attending to the evidence, and so the evidence should in some sense be more accessible—an intermediary. But again, it's not obvious how this would translate to the moral realm. How would any fundamental moral claim be less accessible than any other, if indeed they're all knowable a priori? To answer this question, I think we'd need to say more about how moral knowledge is gained. More specifically, we'd need to specify certain mechanisms as the ones by which we fundamentally acquire moral knowledge, and distinguish between those facts that are more readily accessible using these mechanisms and those that are less readily accessible. While true belief about morality could in principle be arrived at in any way whatsoever, knowledge would require the employment of this mechanism. While I won't explore such a proposal here, it strikes me as plausible that some moral knowledge is indeed harder to get, even though, again, all a priori knowledge is in some sense already there for the taking.3
3. Worries about the Possibility of Inter-Theoretic Comparisons of Value Differences
Suppose I am uncertain whether I ought to do A or to do B. I have some credence in Theory 1, which recommends the former, and some credence in Theory 2, which recommends the latter. In keeping with what we said about the asteroid case described earlier, it seems that we should not simply go with the more probable theory. It seems to matter as well just how good or bad A and B are according to each of the theories—whether, e.g., there is a big difference between the two actions according to Theory 1 and perhaps a smaller difference between them on Theory 2. But it's not obvious that these differences compare across theories (Lockhart, 2000; Ross, 2006; Sepielli, 2009; Gustafsson & Torpman, 2014; Nissan-Rozen, 2015). It is not as though utilitarianism says, "Well, the 'gap' between A and B according to me is bigger (smaller) than the gap according to deontology, which is false." This problem is similar to the problem of interpersonal comparisons of well-being that arises on preference-satisfaction conceptions thereof.
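One natural generalization of the asteroid reasoning is to weight each theory's value assignments by one's credence in that theory. Here is a minimal sketch of that rule; note that its value table simply stipulates a common scale for the two theories, which is exactly the assumption this section interrogates, and all of its numbers are invented.

```python
# A sketch of credence-weighted ("expected choiceworthiness") reasoning
# across moral theories. The table below stipulates a common value scale,
# which is precisely what is in question here. Numbers are illustrative.

CREDENCES = {"Theory 1": 0.7, "Theory 2": 0.3}

VALUES = {
    "A": {"Theory 1": 10, "Theory 2": -50},
    "B": {"Theory 1":  4, "Theory 2":  40},
}

def expected_choiceworthiness(act: str) -> float:
    return sum(CREDENCES[t] * VALUES[act][t] for t in CREDENCES)

for act in VALUES:
    print(act, expected_choiceworthiness(act))
# A: 0.7*10 + 0.3*(-50) = -8.0
# B: 0.7*4  + 0.3*40    = 14.8 -> B wins despite Theory 1 being more probable
```

The sketch also makes vivid why we should not simply follow the more probable theory: with the stipulated numbers, B maximizes expected choiceworthiness even though the favored theory recommends A.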
No similar problem arises in "asteroid." There the presumption is that, however morally uncertain the decision maker is, all of the moral outlooks she finds plausible hold that the "gap" in value between "Save a million dollars & asteroid avoids the earth" and "Spend a million dollars & asteroid avoids the earth" is much smaller than the gap between the latter and "Save a million dollars & asteroid strikes the earth." There is more or less a common scale on which to rank the possible outcomes, however fuzzy or imprecise. There may be no such scale, though, in the case of fundamental moral uncertainty. Several philosophers have taken heed of this problem and tried to solve it. Ted Lockhart (2000, 84) proposes something that he calls the "Principle of Equity among Moral Theories" (PEMT), according to which:

The maximum degrees of moral rightness of all possible actions in a situation according to competing moral theories should be considered equal. The minimum degrees of moral rightness of possible actions in a situation according to competing theories should be considered equal unless all possible actions are equally right according to one of the theories (in which case all of the actions should be considered to be maximally right according to that theory).

But this proposal suffers from some technical difficulties (Sepielli, 2012b). Most of these stem from a very general feature of Lockhart's proposal, namely that it purports to solve the problem exogenously—imposing a constraint on the value-assignments of the theories that is not derived from the theories themselves. It is doubtful, though, whether exogenous solutions are really solutions at all, for they do not purport to tell you how value differences according to the theories actually do, "antecedently," compare to one another. Rather, they offer a recipe for how to impose such comparisons, regardless of how or whether the differences in question really do compare. If we opt for an exogenous solution, our reasons for action in the face of moral uncertainty will depend not only upon the probabilities of the various moral theories and how these rank actions but also upon the particular exogenous method we've chosen. On the face of it, this seems like an unwelcome result; the last of these seems irrelevant to what we ought to do. One might reply by drawing on Lockhart's claim that the PEMT is a way of treating theories "fairly" by assigning them all equal stake in every situation. But as I've argued, this claim can't be taken at face value, since moral theories are not the kinds of things we can treat fairly or unfairly. We should instead seek out an endogenous solution to the problem—one that appeals to features of the theories themselves rather than some external constraint. I have proposed that we could compare values across theories on the basis of partial "background rankings" that they have in common (Sepielli, 2009). For example, suppose that I am uncertain whether to eat factory-farmed meat. On the one hand, I am certain that there is some moral benefit to doing so: it would slightly drive up wages for farm workers.
On the other, I think that maybe mistreating animals is in the relevant sense morally equivalent to mistreating humans, and so subsidizing factory farming (FF) of cows and pigs would be tantamount to subsidizing incredibly cruel treatment of humans. While I'm uncertain about a significant question, there is a background ranking of actions in which I'm confident—one on which (A) innocuously driving up wages for unskilled workers is better than (B)
buying an alternative to factory-farmed meat; on which (B) is better than (C) subsidizing cruel treatment of human beings; and one on which the "gap" in value between A and B is much smaller than the gap between B and C. We might say that I'm uncertain between two theories that "share" this background theory: Theory 1, which implies that paying for FF meat is morally equivalent to A, and Theory 2, which implies that it's tantamount to C. My thought was that this background ranking could ground a comparison between the size of the gap between eating FF meat and not on Theory 1 and the size of that gap on Theory 2. (See Figure 28.1.)

[Figure 28.1 My original proposal for intertheoretic value comparison: Theory 1 ranks A~FF, then B, then, after a large gap in value, C; Theory 2 ranks A, then B, then C~FF, aligned on the shared background scale.]

As Toby Ord (2008) and Brian Hedden (2016) have pointed out, this will not work. Two moral theories may agree that A is better than B, which is better than C, and even about how the A-B value-gap compares to the B-C one, without its being the case that, e.g., the A-B gap on the one theory is the same size as the A-B gap on the other. And indeed, saying otherwise leads us into contradiction. For suppose two theories rank some actions A, B, C, D, and E as follows (see Figure 28.2): The ratio of A-B to B-C is the same on both theories, as is the ratio of C-D to D-E. But in this case, there is no way to hold A-B and B-C according to the one theory equal to the same value-gaps according to the other while also holding C-D and D-E equal across the two theories.

[Figure 28.2 How my original proposal fails: Theory 3 and Theory 4 each rank A, B, C, D, E in that order, with matching A-B:B-C and C-D:D-E ratios but differently proportioned gaps.]
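A toy assignment of numbers, invented purely for illustration, makes the contradiction vivid:

```python
# Invented numbers consistent with Figure 28.2: both theories agree that
# A-B : B-C = 1 : 2 and that C-D : D-E = 1 : 1, yet no single rescaling of
# Theory 4 lines its gaps up with Theory 3's.

theory3 = {"A": 10, "B": 8, "C": 4, "D": 2, "E": 0}
theory4 = {"A": 12, "B": 11, "C": 9, "D": 5, "E": 1}

def gap(t, x, y):
    return t[x] - t[y]

# The shared ratios (0.5 and 1.0 on both theories):
print(gap(theory3, "A", "B") / gap(theory3, "B", "C"),
      gap(theory4, "A", "B") / gap(theory4, "B", "C"))
print(gap(theory3, "C", "D") / gap(theory3, "D", "E"),
      gap(theory4, "C", "D") / gap(theory4, "D", "E"))

# Equating the A-B gaps requires stretching Theory 4 by 2; equating the C-D
# gaps requires shrinking it by 1/2. No one scale factor does both:
print(gap(theory3, "A", "B") / gap(theory4, "A", "B"))  # 2.0
print(gap(theory3, "C", "D") / gap(theory4, "C", "D"))  # 0.5
```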
Jacob Ross (2006) has proposed an endogenous solution which is superior to mine. He proposes that we can ground inter-theoretic value comparisons not merely in shared rankings of actions but rather in shared values that underlie those rankings. In our example, the two moral outlooks presumably "agree" not only on the A-B-C ranking but on the fundamental values of human pleasure, human pain, equality, justice, non-domination, and so on that determine this shared ranking.
Now, as it's stated, this seems to be open to the same kind of problem that plagued my proposal. Two theories may "agree" about the ratios of value differences and may even agree on what accounts for those rankings/ratios without its being the case that they compare inter-theoretically. But I do think Ross's proposal is a step in the right direction insofar as it tries to bring more to bear on the problem of inter-theoretic comparisons than just a set of shared rankings. It draws on features of theories that, in contexts of moral certainty, may not play any practical role over and above the rankings of actions and assignments of deontic statuses that they determine but would be of practical importance were they capable of grounding inter-theoretic comparisons. And while, again, I don't think the substantive values to which Ross adverts will do the trick, there is another feature of moral theories that just might. We started this section with the assumption that no theory includes information about how its own value rankings line up with those of other, by its lights false, theories. Certainly, this was no part of the utilitarianism or contractualism that we all learned about in moral philosophy class. But I want to suggest that this assumption is actually false. Granted, nothing in the previous descriptions of Theory 1 and Theory 2 above is sufficient to fix any inter-theoretic comparisons. In other words, it is consistent with everything in those descriptions that Theory 1 and Theory 2 compare to one another as represented in Figure 28.3, but also consistent with the descriptions that they compare as represented in Figure 28.4.

[Figure 28.3 One way the value differences could compare: Theory 1 (A~FF, B, C) set against Theory 2 (A, B, C~FF) at one relative scaling.]

[Figure 28.4 Another way the value differences could compare: the same two theories set against each other at a different relative scaling.]

Still, though, it seems that the natural way to commensurate the two theories or proto-theories is in the way represented in Figure 28.1. We imagine a university student who has never seriously thought about factory farming learning about its methods, or being exposed to the arguments against it, and coming to think that it might be on a par with the cruel treatment of human beings. Whereas her credence was once concentrated exclusively
4. Worries about Higher-Order Normative Uncertainty Just as we may be uncertain among first-order moral theories, so too might we be uncertain among theories of what to do in the face of that first-order moral uncertainty. And so there may be some pressure to posit theories yet one more level “up”—theories about what to do in the face of uncertainty about what to do in the face of first-order moral uncertainty. You can imagine how this would iterate. I say there “may be” some pressure because it depends on what our grounds were in the first place for positing some norm about what to do in the face of moral uncertainty. As we 516
Andrew Sepielli
saw earlier, these norms seem to play two roles: (1) they may be relevant to the propriety of praise, blame, reward, punishment, and so on; and (2) they may serve as guides to action. It’s not so obvious to me that someone who enlists such a norm to play the first of these roles should feel any theoretical pressure to posit any rules about what to do under higher-level normative uncertainty. It’s plausible that praiseworthiness and blameworthiness are a function of, among other things, whether one’s action accords with rules like Guerrero’s “Don’t Know, Don’t Kill,” or with casuistic “reflex principles” like Bartolomeo de Medina’s “Probabilism,” according to which an action is not formally sinful so long as there’s a “reasonable probability” that it is permitted (Jonsen & Toulmin, 1990, 164). The probabilities of these rules, unlike the probabilities of first-order moral norms, may be irrelevant to the propriety of reactive attitudes, rewards, and punishments. If that’s our chief concern, then we may have some principled basis to avoid positing higher and higher order norms, ad infinitum. Not so if our chief concern is the guidance of action by norms. For just as we can’t directly guide our conduct by first-order moral norms among which we’re consciously uncertain, neither can we guide it by rules for decision making under moral uncertainty in which we’re uncertain.The search for a rule in which we can invest our full belief and thus use as guide to action propels us ever “upward.”This is indeed my chief concern and that of most contemporary writers on moral uncertainty. The possibility of higher and higher order normative uncertainty gives rise to two problems. The first problem concerns normative coherence. Those working on this topic typically say that these decision-rules are norms of rationality or subjective rightness (Sepielli, 2014). But it’s not hard to imagine that they may issue different verdicts. The right thing to do in the face of moral uncertainty may differ from the right thing to do in the face of uncertainty about what to do in the face of moral uncertainty. And so on. So what’s the answer to the perfectly ordinary question of what it’s subjectively right, or rational, to do? Do we aggregate the verdicts somehow? Do we say that only the highest-order rules the agent has considered have any force—that the lower-order rules somehow lose their authority once she becomes uncertain about them? Second, it seems that if our uncertainty in rules is in principle boundless, then it will sometimes be impossible to guide our actions by norms. In such cases, we will have to take an unguided “leap of faith” in the face of our normative uncertainty rather than accepting a norm about how to take that uncertainty into account and acting on that norm. But then these decision-rules would not be playing the role they were enlisted to play in the first place. They end up being no improvement over any first-order moral rule as it regards action-guidance. If our commitment to their truth rested on their usefulness as guides to action, we might want to go back on that commitment. Now, these are often presented as though they’re only problems for the moral uncertaintist. “Once you’ve posited a norm that’s relative to the probabilities of moral claims, then you’ve got to keep going; there’s no principled stopping point! You’ve crossed the Rubicon! 
By contrast, those who posit only norms that are relative to probabilities of nonmoral propositions—e.g., the precautionary decision-rule in ‘asteroid’—are under no such pressure. There is a principled stopping point—to wit, before introducing norms that advert to the probabilities of moral propositions.” But this is a mistake. First, as we just noted, if you accept moral-probability-relative norms solely because they ground blameworthiness and the like, then there may be a principled 517
Decision Making under Moral Uncertainty
stopping point after all. Second, and more importantly, it’s not at all clear why it’s defensible to posit only a norm like the one in “asteroid” and then stop. If a moral uncertaintist “has to” keep going and posit higher and higher level norms about what to do under moral uncertainty, so “must” someone who posits a norm about what to do under nonmoral uncertainty, on pain of having to take an unguided action. So as far as action-guidance is concerned, there is no principled stopping point for anyone. No rule is guaranteed to secure full-on acceptance, which it must if it is to fully guide action. It seems, then, that everyone who cares about action-guidance, at least, had better be able to solve the coherence and guidance problems presented earlier. In response to the coherence problem, I have argued that, indeed, there will be multiple verdicts about what is subjectively right or rational—with norms at each “level” relative to the probabilities of norms at the “level” below. These norms may come into conflict in the sense of recommending different verdicts, but they cannot appear to come into conflict from the agent’s perspective. Imagine two levels of rules: First, R1. . . Rn, and second, S1. . . Sn, which tell me what to do given the probabilities assigned to R1. . . Rn. Now consider: Either I am certain or I am uncertain regarding the rules in R1. . . Rn. Suppose first that I am certain that one of R1. . . Rn is correct.Then I can simply act on that rule, sans conflict from my point of view. I don’t even need to consider the meta-rules in S1. . . Sn. But now suppose that I am uncertain among R1. . . Rn. Then I cannot guide my conduct by any rule in R1. . . Rn and must instead hope to guide it by some rule in S1. . . Sn. But again, there will not seem to be a conflict from my point of view. It simply cannot seem to me, from my point of view, that the R1. . . Rn rules guide me to do one thing and the S1. . . Sn rules guide me to do something else. I will not face anything that I will regard as a practical dilemma. This is notwithstanding the fact that perhaps the in-fact-correct rule in R1. . . Rn demands that I do one thing and the in-fact-correct rule in S1. . . Sn demands that I do something else (Sepielli, 2014). This view gains support from an account of subjective normativity in terms of a try (Mason, 2003; Mason, 2017; Sepielli, 2012a). What’s right at any given level of subjective normativity is what would count as the best try at doing what one has reason, at the levels below it (including the level of first-order, or “objective,” morality), to do. And while the best A-ing and the best try at A-ing can come apart, they cannot appear to diverge from the agent’s perspective. The latter is practically “transparent” to the former. Similar issues arise in epistemology, in connection with higher-order evidence, which includes the kind of evidence provided by peer disagreement about the force of shared, lower-order evidence (Christensen, 2010; Lasonen-Aarnio, 2014; Horowitz, 2014; Schoenfield ms). In response to the guidance problem, we should first concede that it is possible to be consciously uncertain not only about morality but about decision-rules for moral uncertainty, decision-rules for uncertainty about those decision-rules, and so on all the way up. 
In some such cases, it may be that an agent cannot engage in behavior that is fully norm-guided; she will instead be consigned to take an unguided “leap of faith.” But this does not show that these decision-rules have no utility as guides to action, over and above the first-order moral rules they supplement. There are two reasons for this. First, as I show, it is likely that even if I am uncertain about rules all the way up, they gradually converge in their recommendations about which actions to perform (Sepielli, 2017). And it’s the latter sort of certainty that matters from the practical point of view. Second, even if there is no convergence on a 518
Andrew Sepielli
single recommended course of action, there may be convergence on a certain disjunction of actions at the expense of others.The higher and higher level norms may not converge in recommending that I do A rather than do B but may nonetheless converge in recommending that I do either of these rather than C, D, E, and so on.
5. Conclusion We’ve surveyed three grounds for skepticism about “moral uncertaintism”—the view that we ought to respond to moral uncertainty in roughly the way we respond to uncertainty about nonnormative matters. The first worry was that we may not be able to assign intermediate probabilities to moral propositions. The second worry was that there may be no way to compare values across competing moral theories. The third worry was that the possibility of higher-order normative uncertainty may threaten both moral uncertaintism’s coherence and its capability to make good on its promise of providing agents with a guide to action. While there are other objections that critics have raised against the moral uncertaintist position, these three strike me as generating the greatest cause for concern from the least controversial starting points. We’ve seen that the uncertaintist is not without resources to respond to these concerns, although of course these debates are far from settled.
Notes 1. For more on the ancient and medieval frameworks for assigning moral responsibility, see Chapter 10 of this volume. 2. See Yalcin (2007); Swanson (2011); and Moss (2013). 3. For the distinction between foundational and non-foundational moral knowledge see Chapters 17, 18, and 19 of this volume. For critical discussion of the supposed a priority of foundational or non-inferential moral knowledge, see Chapters 16, 17, 18, 19, and 20 of this volume.
References

Barry, C. and Tomlin, P. (2016). “Moral Uncertainty and Permissibility: Evaluating Option Sets,” Canadian Journal of Philosophy, 46 (6), 1–26.
Christensen, D. (2010). “Higher-Order Evidence,” Philosophy and Phenomenological Research, 81 (1), 185–215.
Gardenfors, P. and Sahlin, N-E. (1982). “Unreliable Probabilities, Risk Taking, and Decision Making,” Synthese, 53 (3), 361–386.
Guerrero, A. (2007). “Don’t Know, Don’t Kill: Moral Ignorance, Culpability, and Caution,” Philosophical Studies, 136 (1), 59–97.
Gustafsson, J. and Torpman, T. (2014). “In Defence of My Favourite Theory,” Pacific Philosophical Quarterly, 95 (2), 159–174.
Hajek, A. (2007). “The Reference Class Problem Is Your Problem Too,” Synthese, 156 (3), 563–585.
Harman, E. (2011). “Does Moral Experience Exculpate?” Ratio, 24 (4), 443–468.
———. (2015). “The Irrelevance of Moral Uncertainty,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 10. Oxford: Oxford University Press.
Hedden, B. (2016). “Does MITE Make Right? On Decision-Making Under Normative Uncertainty,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 11. Oxford: Oxford University Press.
Hicks, A. (forthcoming). “Moral Uncertainty and Value Comparison,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 13. Oxford: Oxford University Press.
Horowitz, S. (2014). “Epistemic Akrasia,” Noûs, 48 (4), 718–744.
Jonsen, A. and Toulmin, S. (1990). The Abuse of Casuistry. Berkeley, CA: University of California Press.
Kagan, S. (1989). The Limits of Morality. Oxford: Oxford University Press.
Kamm, F. M. (2006). Intricate Ethics. Oxford: Oxford University Press.
Lasonen-Aarnio, M. (2014). “Higher-Order Evidence and the Limit of Defeat,” Philosophy and Phenomenological Research, 88 (2), 314–345.
Levi, I. (1986). Hard Choices. Cambridge: Cambridge University Press.
Lockhart, T. (2000). Moral Uncertainty and Its Consequences. Oxford: Oxford University Press.
MacAskill, W. (2016). “Normative Uncertainty as a Voting Problem,” Mind, 125 (500), 967–1004.
Mason, E. (2003). “Consequentialism and the ‘Ought Implies Can’ Principle,” American Philosophical Quarterly, 40 (4), 319–331.
———. (2017). “Do the Right Thing: An Account of Subjective Obligation,” in M. Timmons (ed.), Oxford Studies in Normative Ethics, Vol. 7. Oxford: Oxford University Press.
Mellor, D. H. (2005). Probability: A Philosophical Introduction. London: Routledge.
Moller, D. (2011). “Abortion and Moral Risk,” Philosophy, 86 (3), 425–443.
Moss, S. (2013). “Epistemology Formalized,” Philosophical Review, 122 (1), 1–43.
Nissan-Rozen, I. (2015). “Against Moral Hedging,” Economics and Philosophy, 31 (3), 1–21.
Ord, T. (2008). Personal Communication.
Ross, J. (2006). “Rejecting Ethical Deflationism,” Ethics, 116, 742–768.
Schoenfield, M. (ms). “Two Notions of Epistemic Rationality.”
Sepielli, A. (2009). “What to Do When You Don’t Know What to Do,” in R. Shafer-Landau (ed.), Oxford Studies in Metaethics, Vol. 4. Oxford: Oxford University Press.
———. (2012a). “Subjective Normativity and Action Guidance,” in M. Timmons (ed.), Oxford Studies in Normative Ethics, Vol. 2. Oxford: Oxford University Press.
———. (2012b). “Moral Uncertainty and the Principle of Equity among Moral Theories,” Philosophy and Phenomenological Research, 86 (3), 580–589.
———. (2014). “What to Do When You Don’t Know What to Do When You Don’t Know What to Do . . .,” Noûs, 48 (3), 521–544.
———. (2016). “Moral Uncertainty and Fetishistic Motivation,” Philosophical Studies, 173 (11), 2951–2968.
———. (2017). “How Moral Uncertaintism Can Be Both True and Interesting,” in M. Timmons (ed.), Oxford Studies in Normative Ethics, Vol. 7. Oxford: Oxford University Press.
Stalnaker, R. (1984). Inquiry. Cambridge: Cambridge University Press.
Swanson, E. (2011). “How Not to Theorize About the Language of Subjective Uncertainty,” in A. Egan and B. Weatherson (eds.), Epistemic Modality. Oxford: Oxford University Press.
Tarsney, C. (2018). “Intertheoretic Value Comparison: A Modest Proposal,” Journal of Moral Philosophy, 15 (3), 324–344.
Williamson, T. (2002). Knowledge and Its Limits. Oxford: Oxford University Press.
Yalcin, S. (2007). “Epistemic Modals,” Mind, 116 (464), 983–1026.
Further Readings

H. Greaves and T. Ord, “Moral Uncertainty About Population Ethics,” Journal of Ethics and Social Philosophy, forthcoming, applies uncertaintist reasoning to questions of population ethics. D. Enoch, “A Defense of Moral Deference,” Journal of Philosophy, 111 (5), 229–258, defends reliance on moral experts/moral testimony partly on grounds of moral uncertainty. D. Prummer, Handbook of Moral Theology, Providence, NJ: P.J. Kenedy and Sons (1995), contains a sophisticated discussion of the “reflex principles” debated by Catholic moral theologians. M. Smith, “Evaluation, Uncertainty, and Motivation,” Ethical Theory and Moral Practice, 5 (3), 305–320, 2002, argues that noncognitivists cannot capture the phenomenon of moral uncertainty. B. Weatherson, “Running Risks Morally,” Philosophical Studies, 167 (1), 141–163, 2014, argues that acting on moral-probability-relative norms requires bad motivation.
Related Chapters

Chapter 6 Moral Learning; Chapter 11 Modern Moral Epistemology; Chapter 12 Contemporary Moral Epistemology; Chapter 15 Relativism and Pluralism in Moral Epistemology; Chapter 20 Moral Theory and Its Role in Everyday Moral Thought and Action; Chapter 21 Methods, Goals, and Data in Moral Theorizing; Chapter 25 Moral Expertise.
29
PUBLIC POLICY AND PHILOSOPHICAL ACCOUNTS OF DESERT

Steven Sverdlik
Public policy concerns activities that have general effects on the well-being of members of society. We can roughly distinguish two types: activities where the state is the primary agent of policy implementation and activities where other organizations and individuals are the primary agents of implementation. The moral principles that govern all of public policy implementation can be regarded as part of applied ethics. Public policy governing state action has a particular importance, though, because the state affects some of the fundamental interests of its citizens. Reflection on the moral principles that govern state action has occurred since antiquity, but it now takes account of the widespread conviction that states must be governed democratically.

This chapter considers some of the moral principles that philosophers assert apply to a central state activity, punishment. We will be looking specifically at one concept that is thought to play a role in these principles, that of moral desert. Wrongdoers deserve to be punished, it is widely thought. The concept of desert occurs in thought about other areas of public policy, especially economic justice, where it is claimed that agents who perform certain tasks deserve to be rewarded for doing so (Sher, 1987; Sher, 2016). However, in this chapter I focus on desert in punishment because it is here that the literature on desert is most developed and because in most modern societies the state monopolizes the imposition of the most severe punishments.

In philosophical thought about punishment it is customary to distinguish two main approaches: consequentialism and retributivism (Boonin, 2008). Consequentialists frame the moral analysis of punishment, as they do every other issue, in terms of the possibilities of producing socially beneficial effects. The most commonly mentioned possible beneficial effects of punishment are deterrence (of a punished offender and of others), incapacitation of an offender (via confinement in prison), and reform of offenders. Consequentialists do not necessarily deny that some wrongdoers deserve to be punished, but they have tended to analyze the moral issues involved in state punishment in other terms. Retributivism, in contrast, claims that the desert of wrongdoers should play a significant role in the design of criminal justice systems. This is a claim that moral intuition seems to endorse, and retributivists have historically been sympathetic to commonsense or intuitive moral thought.
There is some irony in this, given that the founding father of retributivism is usually said to be Immanuel Kant, a very theoretically minded philosopher.1

In this chapter we will review the development of retributivist thought about the moral significance of desert claims with regard to punishment. We will see that one of Kant’s seminal claims about this was turned on its head by some later retributivists. Then a survey is presented of the factors that contemporary retributivists claim affect a wrongdoer’s desert. This will facilitate an assessment of the epistemic status of the intuitive judgments that are alleged to support important abstract claims about deserved punishment. I argue that few of these claims have strong epistemic credentials, and some are hard to evaluate. This does not necessarily mean that retributivist theory cannot be useful in formulating criminal justice policy. What it may mean is that retributivists need to exploit the theoretical devices of deontological moral theory, such as the categorical imperative.
1. The Structure and Role of Desert Claims

All desert claims are said to have three argument places, and thus have the form, “X deserves Y in virtue of Z.” The “X” place is to be filled by the name or a description of a person; the “Y” place by some form of treatment, either good or bad for the person; and the “Z” place is to contain a description of the “basis” of the claim. This description states facts about the person that make it true that she deserves the specified treatment (Feinberg, 1970, 55–62). The treatment we will consider is legal punishment. We will assume that the basis of such desert claims consists of the fact that X performed a specific action or omission, as well as certain other facts to be discussed later.

It is often assumed that if X deserves to be punished by a year in jail, say, then this claim establishes some sort of moral obligation on someone or some group of people—perhaps certain legal officials—to see to it that X spends a year in jail. We might also say that this desert claim constitutes or supports a reason in favor of someone’s jailing X for a year. Recently it has been emphasized that desert claims can have a different role in morality than that of directly supporting moral obligations to punish offenders or providing reasons in favor of punishing them. Desert claims are said to be “limiting conditions” or moral constraints on the operation of other moral reasons favoring punishment. In this role as moral constraint, desert may, for example, prohibit the punishment of the innocent. Or desert may set a moral ceiling on the severity of punishments, so that there is a moral prohibition on punishing a guilty offender more severely than she deserves. We can thus say that desert claims have two possible deontic roles: supporting obligations to punish and supporting limits on punishments. They could play both roles.

Desert claims have an important further dimension. This is their normative strength (Feinberg, 1970, 60). If desert claims provide reasons for people like state officials to impose punishments, these reasons may vary in their strength and be more or less easily overridden by other moral reasons (such as the costs of administration). If desert claims function as constraints, these could also have varying strengths.

We will examine theories where the desert of punishment is supposed to give rise to a moral obligation of some strength to impose it or a moral obligation of some strength not to impose it, i.e., an obligation that operates as a moral constraint. So we will be considering retributivism in its deontological form.
Retributivist thought has recently, surprisingly, manifested itself in consequentialism, where, for example, it has been argued that deserved suffering is intrinsically valuable (Kagan, 2015). We will not be examining this sort of claim.

Many philosophers of punishment accept “hybrid” or “pluralist” theories that employ desert claims for some purposes. They also often accept parts of consequentialism, especially if desert operates only as a constraint. We will be considering the retributivist elements in such hybrids, that is, the claims that giving criminals the punishments they deserve is a distinct and knowable goal of that activity or a distinct and knowable moral constraint on it.
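Before turning to the history, the framework of this section can be gathered into a small schematic. The rendering below is merely illustrative (the field names and the numeric strength scale are mine, not drawn from Feinberg or the later literature), but it records nothing beyond what has been said here: the three argument places of a desert claim, the two deontic roles, and the further dimension of normative strength.

```python
# An illustrative schematic (field names and the 0-to-1 strength scale are
# hypothetical, not from the literature) of the desert claims discussed in
# this section, which have the form "X deserves Y in virtue of Z."

from dataclasses import dataclass

@dataclass
class DesertClaim:
    person: str                # the "X" place: who deserves the treatment
    treatment: str             # the "Y" place: e.g., "one year in jail"
    basis: tuple               # the "Z" place: facts about X's act, etc.
    grounds_obligation: bool   # deontic role 1: supports an obligation to punish
    sets_ceiling: bool         # deontic role 2: limits permissible severity
    strength: float            # normative strength: how easily overridden

# A single claim can play both deontic roles at once, as noted above.
claim = DesertClaim(
    person="X",
    treatment="one year in jail",
    basis=("X performed a specific wrongful action or omission",),
    grounds_obligation=True,
    sets_ceiling=True,
    strength=0.7,  # prima facie: can be outweighed by, e.g., administrative costs
)
```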
2. History of Deontological Retributivism

I will briefly review the history of deontological retributivist thought, focusing on some of the central claims about deserved punishments that its adherents have made. W. D. Ross stands out in rejecting appeals to desert, but he made some important contributions that desert theorists have used in a different form.

Kant is regarded as the founder of deontological retributivism. His thinking about punishment can be summarized as follows. Certain kinds of morally wrongful acts must be prohibited by the state and punished after conviction by a court. Just punishments are governed by a principle of equality: the harm that the criminal has inflicted on the victim must be inflicted on the criminal. This is the lex talionis. For example, Kant famously insists that a convicted murderer must be executed, since any more lenient punishment takes less from the criminal than he took from his victim. Kant mentions a few other crimes, for example theft, and he proposes punishments for them that he argues are applications of the principle requiring equality of loss. He presents a famous thought experiment describing a “desert island” that is supposed to clarify the normative strength of desert claims. Legal authorities on this island have a strict or absolute moral obligation to carry out just sentences, even if doing so produces no further good (such as crime reduction) for the inhabitants (Kant, 1991, 140–142; but cp. 143). Kant famously says that public officials must see to it that a criminal receives “what his deeds deserve” (Kant, 1991, 142). So he can be seen as claiming that giving criminals what they deserve is the fundamental and exclusive purpose of punishment.

Three difficulties with Kant’s position should be mentioned. First, he uses the loss of a victim as the measure of the amount of loss that must be imposed on the wrongdoer. But this measure ignores what later writers call the wrongdoer’s culpability. (More later on this term.) For example, one person could intentionally kill another person, while a second person negligently kills another person. Both killers, Kant seems to say, deserve the same punishment, namely, death. This strikes us as false (Kleinig, 1973, 116–117).2 Second, in the few examples of just punishment that Kant presents, he does not always use the principle of equality. For example, when he discusses property crimes like theft he endorses making the wrongdoer a slave of the state for a period of time or even for life (Kant, 1991, 142). Such losses would seem to be substantially greater than those of the victims of such crimes and, hence, undeserved by Kant’s own principle. Third, there are a number of acts that are wrong and should be punished by the state that do not have identifiable victims, so that the equality principle cannot be used to determine the appropriate severity. Two examples are tax evasion and driving while drunk where no harm occurs.

W. D. Ross presents a sophisticated hybrid theory. Three of his arguments should be mentioned.
He presents a seminal thought experiment to show that consequentialism is false. It represents a situation in which authorities cannot catch the criminals committing a certain kind of offense. If the authorities deliberately frame an innocent person, they will deter crime and produce the most good. Moral intuition clearly rejects the consequentialist assertion that framing this person is right (Ross, 1930, 56–57). Second, Ross rejects the idea that state punishment should aim to give wrongdoers what they deserve (Ross, 1930, 57–60). But, third, he presents a structural device that influenced later retributivists.

Ross argues that if X violates a moral right of Y then X “extinguishes” the moral right that he possesses which corresponds to the right that he violated. For example, if X kills Y and thus violates Y’s right to life, then X extinguishes his own right to life. (Other writers prefer to say that X “forfeits” his right to life. See, e.g., Wellman, 2012.) If X’s wrongdoing extinguishes a right of his, then society can permissibly do to him something it was previously prohibited from doing. Public officials have a duty to protect rights, and they should impose punishments insofar, but only insofar, as doing so protects the rights of citizens. Therefore, officials should only punish in amounts that maximize the (future) reduction of rights violations. They may not, however, punish a wrongdoer in a degree that violates any rights that he retains. In other words, the criminal’s action sets a moral ceiling on the severity of the punishment that may be imposed on him, since it only extinguishes some of his rights. However, if a punishment that imposes less of a loss on the criminal than he imposed on his victim maximizes the reduction of rights violations, it may and should be imposed (Ross, 1930, 60–64). So there is no general obligation to impose a punishment equal to the victim’s loss.

Ross’s proposal suffers from two of the same problems that beset Kant’s stricter form of retributivism. First, Ross also ignores culpability. It strikes us as false that an agent who intentionally violates another person’s right to life, say, extinguishes just as many rights as an agent who negligently violates another person’s right to life. Second, Ross has difficulty dealing with wrongful acts that have no identifiable victims, and, therefore, involve no identifiable rights violations. However, Ross’s proposal is a pioneering example of the use of a concept like desert to establish a moral constraint or limit on the pursuit of other aims that punishment might achieve. Many later writers (e.g., Frase, 2013) use desert to play this constraining role.

Thomas Nagel’s seminal paper “Moral Luck” has had a great influence on retributivist thought. The opening pages on luck “in the way things turn out” (now called “consequential” or “resultant” moral luck) are especially relevant to the philosophy of punishment (Nagel, 1979, 24–32; Nelkin, 2013). Nagel quotes a famous passage from Kant’s Groundwork which claims that the moral worth of a “good will” does not depend on its success or failure: it has the same incomparable worth in either case (Kant, 1956, 62). Nagel takes Kant’s claim to be this: factors outside of the control of an agent who is trying to do the right thing because it is right do not alter the moral worth of her action. Nagel speaks of factors outside of an agent’s control as due to “luck.” He argues that the same point about control and luck should hold with regard to morally blameworthy actions.
That is: Factors outside of the control of an agent who is acting wrongly, that affect what results her action has, do not affect how morally blameworthy she is for so acting.
Nagel says that the claim in the previous sentence denies that there is “moral luck” with respect to blameworthiness concerning how things turn out. This claim, if correct, would have profound implications in the philosophy of punishment. Luck in the way things turn out affects whether an action causes harm (or violates a person’s rights). Consider a complete legal “attempt”: X intends to kill Y and shoots at his heart. If Y has a metal cigarette case in his pocket, he will survive the shot; if he does not, he will die. Or consider a legally reckless act: X drives while texting, knowing that this is risky. If X is unlucky he will kill a pedestrian; if he is lucky he will not cause any harm. The criminal law usually punishes offenders who cause more harm more severely than those who cause less harm, other things being equal. However, some deontological retributivists find the Kantian argument rejecting consequential moral luck to be convincing, and they therefore advocate a radical revision of most systems of criminal law.

Their thinking can be summarized as follows. Just punishments ought to respond in some fashion to the desert claims arising from the performance of a criminal act. (The response can consist in imposing the deserved punishment or in abiding by the limit it establishes on the severity of her punishment.) Claims about deserved punishment are based only on how morally blameworthy an offender is in performing a criminal act. But there is no moral luck with respect to blameworthiness concerning how things turn out. This means that the harm actually brought about by an agent in a given criminal offense is irrelevant in determining the correct level of severity of her punishment for it. A highly blameworthy act (like attempted murder) may cause little or even no harm and therefore call for (or permit) a severe punishment. And, on the other hand, an act that causes a great deal of harm (like a forest fire) may be caused by a slightly blameworthy act (like carelessly dropping a cigarette) and therefore call for (or only permit) a lenient punishment. Note that this argument deploys a moral principle arguably derivable from Kant’s moral philosophy and thereby rejects the very principle that Kant himself employs to scale punishments, the entirely harm-based lex talionis.

The claim that desert of punishment is based only on blameworthiness, conjoined with the rejection of consequential moral luck, entails a purely formal conclusion. In order to state it I will use the word “culpability.” “Culpability” refers—in its central application—to those features of an agent at the time of action and of her action itself that constitute those parts of the basis of a desert claim for punishment that remain true whatever the results of her action happen to be. We saw that intentionally causing harm and negligently causing harm are such features of an agent at the time of action. (More will be said below about culpability.) The conclusion that these retributivists draw is this: Two agents differ in their desert of punishment if and only if they differ in culpability. Note that this proposition does not tell us what amount of punishment is deserved for any given level of culpability. Alexander et al. (2009), for example, the most important retributivists who deny that there is consequential moral luck, make no claims about what amounts of punishment are deserved for different crimes.

Other deontological retributivists assert that there is consequential moral luck to some extent.
They agree that an agent’s desert claim for performing a criminal act is based upon her blameworthiness in performing it, but this in turn is said to be based in part on the amount of harm it causes.
Steven Sverdlik
These retributivists do not consider culpability and harm each to be a sufficient condition of blameworthiness: if that were true, an agent who nonculpably causes harm would be blameworthy. Instead, their view is that some culpability is a necessary condition of blameworthiness, but the harm that a culpable act causes can alter an offender’s blameworthiness. On the other hand, these retributivists agree with their opponents that a culpable act that causes no harm can be blameworthy (Moore, 1997, 192–193, 246–247). The claim that harm can affect blameworthiness is one that moral intuition seems to support (as does Nagel).

The acceptance of consequential moral luck also entails a purely formal conclusion. Keeping in mind the point about culpability being a necessary condition of desert, we can put it this way: Two agents differ in their desert of punishment only if they differ in culpability or in the amount of harm their actions cause. Again, this proposition does not tell us what amount of punishment is deserved for any given level of culpability and amount of harm caused. Michael Moore, the most prominent retributivist in this second category, also makes no claims about what amounts of punishment are deserved for different crimes (Moore, 1997).

We now see that there are three basic types of deontological retributivism. All agree that just punishments must somehow respond to what offenders deserve for their actions. We might say that there is a concept of desert but three basic “conceptions” of it. Type 1 retributivism accepts this conception: an offender’s desert is based on how much harm her wrongful action causes her victim. Type 2 retributivism accepts this conception: an offender’s desert is based on the harm it causes her victim and on her culpability in performing her wrongful act. Type 3 retributivism accepts this conception: an offender’s desert is based only on her culpability in performing a wrongful act. Each of these types can be subdivided according to the deontic role that desert plays. In subtype (a), an offender’s desert creates an obligation of some sort to impose the deserved punishment; in subtype (b) an offender’s desert creates a ceiling on the permissible severity of her punishment; in (c) an offender’s desert creates both an obligation to punish and a ceiling (see Table 29.1). A hybrid theory of punishment can incorporate any one of the nine claims I have just described, as well as a specific set of claims falling under a general claim. So, for example, a hybrid theory (analogous to Ross’s) could incorporate the set of desert claims falling under (1b), where the amount of harm caused only sets ceilings on the amount of allowable punishment.
Table 29.1 Types of deontological retributivism

Basic Conception of Desert               Deontic Role
                                         a                    b         c
Type 1: Only harm relevant               Only an obligation   Ceiling   Both
Type 2: Harm and culpability relevant    Only an obligation   Ceiling   Both
Type 3: Only culpability relevant        Only an obligation   Ceiling   Both
No deontological retributivist now defends Kant’s version of retributivism—presumably (1c)—in which there is an obligation—indeed an “absolute” obligation—to impose a punishment on an offender that equals the harm that she caused to a victim.
3. Fine-Grain Conceptions of Desert

Let us now survey further conceptual issues about desert judgments before focusing on current discussions of their epistemology. Desert judgments have the form “X deserves Y in virtue of Z.” We are taking X to be a person and Y to be an act of legal punishment. We have said thus far that Z consists in part of X’s performing a certain action. Retributivists also assume that this action is morally wrong and that it has a certain moral gravity or “public” character. This restriction is designed to limit the desert of legal punishment so that minor acts of moral wrongdoing are excluded.

The discussion in the last section explained the views of contemporary deontological retributivists about some basic kinds of fact included in the desert basis Z. Type 2 retributivists, for example, assert that these are facts about the harm that a criminal act caused to a victim and the agent’s culpability in performing it. We will now see that there are various ways of spelling out the basic ideas of harm and culpability. These will give us fine-grain conceptions of desert falling under the nine categories in Table 29.1.

We can start with harm. Although it is common for retributivists to speak of “the harm that a criminal act causes to a victim,” that phrase is both too narrow and too broad to cover all of the possibilities about the causal order that might play a role in conceptions of desert falling under Types 1 and 2. A number of further issues require clarification. There are many fine-grain conceptions of desert in Types 1 and 2 corresponding to the positions taken on these issues.

i. Too narrow. Sometimes less serious bad results, such as offense or inconvenience to victims, may be the relevant consequences when an offender deserves to be punished. Or, again, Ross may be correct to say that it is only rights violations. We saw that some crimes like tax evasion have no identifiable victims; they are often said to be contrary to the public interest. Then, too, there may be environmental crimes that cause damage to non-sentient parts of nature. An important further point is that criminal acts may only create risks of harm, offense, inconvenience, rights violations, damage to non-sentient parts of nature, or to the public interest. We saw that drunken driving can be an example of such a pure risk-creating act. Finally, a criminal act may be an omission, in which case the relevant undesirable result or risk may not be an effect or consequence of its occurrence at all.

ii. Too broad. Even if an action causes harm, it may do so via such a long or unusual causal chain as to be regarded as not really an effect of the agent’s activity. Such an act is thought not to be the “proximate cause” of the harm. Retributivists of Type 1 and 2 must give some account of when an act is the proximate cause of a bad result. This will establish a sort of limit on consequential moral luck (Robinson, 2013, 385–393).

Now we consider culpability. This is a highly complex and controversial factor.
I said that “culpability” refers—in its central application—to those features of an agent at the time of action and of her action itself that constitute those parts of the basis of a desert claim that remain true whatever the results of her action happen to be. We will now see that some retributivists assert that other factors are relevant to the culpability of criminals. The following incomplete set of subdivisions isolates sets of factors which are said by some retributivists to be relevant to the culpability of criminals. They disagree about whether the specific factors mentioned below do alter a criminal’s culpability, and if so by how much. There are many fine-grain conceptions of desert in Types 2 and 3 corresponding to the positions taken on these questions; Type 2 conceptions also subsume the possibilities just mentioned regarding “harm.” Husak (2012) surveys current discussions of points 1–4 here:

1. Moral responsibility in general. Some account must be given of the rational and moral capacities that exist when an agent is morally responsible for her actions. The absence or diminution of such capacities entails that the agent is not morally responsible for some or all of her actions or is less responsible for them. Such agents will have an excuse like insanity, mental disability, or immaturity.

2. Episodic excuses. Some account must be given of the factors that diminish or eliminate a fully moral agent’s responsibility for a particular action. These include duress, certain kinds of ignorance, intoxication, and provocation.

3. Mens rea (or “narrow culpability”). An agent may not intend to cause harm or know that her action will cause harm or be aware she is creating a risk of it, and these are thought to make performing tokens of a type of wrongful act progressively less culpable, other things being equal. Retributivists debate whether negligence is a mental condition that makes performing a token of a type of wrongful action even less culpable. Some say that wrongful acts performed negligently are not culpable at all (Alexander et al., 2009, 69–86).

4. Motives. An account is needed of the motives, like racism, that enhance a wrongdoer’s culpability, and others, like the sense of duty, that diminish it.

5. Role. Leaders of groups are thought to be more culpable than other members; principals more than accomplices.

6. Criminal record. Retributivists differ over whether a record of criminal convictions enhances an offender’s culpability with regard to a new conviction. Some say it does not; others say it does. Among the latter there is disagreement about how much a given record enhances an offender’s culpability (Frase, 2013, 180–198).3

7. Additivity of sentences. Some retributivists assert that if an offender is convicted of more than one offense at one time, then the total amount of her deserved punishment equals the sum of the punishments deserved for each of the offenses—so, for example, deserved prison sentences should be served consecutively. Others assert that the total amount of punishment an offender deserves in this situation may be smaller—so, for example, deserved prison sentences may be served concurrently to some extent (Frase, 2013, 198–208).
4. The Epistemic Status of Desert Judgments

There are two contemporary writers on desert who merit discussion, given their interest in moral epistemology, broadly construed. They are Michael Moore and Paul Robinson. Both focus in different ways on intuitive judgments about desert.
I begin with Moore, whose main work in the philosophy of punishment, Placing Blame (1997), contains a section devoted to the philosophical defense of deontological retributivism and to the epistemic theory that supports it. Moore is the most sophisticated retributivist writing on the epistemology of desert. Moore is a Type 2c retributivist: he asserts that desert claims give rise to obligations of some strength to impose punishments and that they also operate as ceilings on severity. Moore asserts further that the desert of punishment is based on facts about harm caused and culpability. He is distinctive in rejecting any consequentialist considerations in his justification of legal punishment.4 Moore is thus a “pure” retributivist and not a hybrid theorist. As noted earlier, he does not explain what amounts of punishment are deserved for any given amount of harm caused and level of culpability.

Moore supports his conclusions using a non-foundationalist, coherence approach to epistemic justification.5 Our intuitive judgments about specific examples are, he says, corrigible pieces of evidence for the truth of moral principles (Moore, 1997, 105–110, 159–187).6 He uses this method to argue that (i) consequentialist considerations provide no reason to punish offenders (cp. Moore, 1997, 97–102); (ii) desert constitutes a limiting condition or ceiling on permissible severity (Moore, 1997, 94–97); and (iii) we have a moral obligation to impose the deserved punishment on those who deserve it (Moore, 1997, 98–102, 145–152). I will focus on arguments for the last two of these claims. I will grant that intuitive judgments elicited by thought experiments provide some evidence for moral principles.

To establish that desert is a limiting condition, Moore presents two examples similar to Ross’s. They represent situations where consequentialism would favor deliberately punishing people who do not deserve it. We judge that it would be wrong to punish such people. This is evidence that (unconstrained) consequentialism is false. It is also evidence that desert is a limiting condition on the pursuit of any goals that punishment might serve. That is, the examples are evidence that there is a moral obligation to refrain from imposing punishment on people who do not deserve it.

Examples meant to establish this can be further divided. The first division constitutes cases where a person known to be completely innocent is knowingly or deliberately punished. Unintentional or unwitting punishments of the innocent are a different matter. They will inevitably occur to some extent in any human system that punishes offenders. Moore’s examples do not prove that it is always wrong to deliberately punish completely innocent people. It is notable that Ross and H.L.A. Hart, who presented the first versions of these examples, both grant that knowingly punishing innocent persons is sometimes permissible (Ross, 1930, 61; Hart, 1968, 17–19, 20). Ross suggests that if doing this would avert a catastrophe then it would be permissible. This shows, he believes, that the obligation to refrain from deliberately punishing innocents is prima facie. Or, we could say, he believes that the moral constraint on punishment is not of infinite normative strength.
Since Ross’s example itself is intuitively plausible, it is fair to say that Moore has at best given us evidence that desert creates a constraint on deliberately punishing completely innocent people but that he has not established its normative strength.7

The second division consists of cases where a person deserves some punishment but is punished more than the amount she deserves. Moore does not present an example of this. It may be that this is because he presents no account of what amounts of punishment are deserved.
In any case, we can say that, given his own approach to justification, he has not given us evidence that desert constitutes a limiting condition on the severity of punishment of those who deserve some punishment.

Moore’s argument designed to establish that there is an obligation to punish those who deserve it is unusual. Retributivists who appeal to our intuitive judgments for this claim have often argued in the way that Kant did in his “desert island” example (Kant, 1991, 142; Kleinig, 1973, 67). The methodological insight governing the construction of this sort of example is the recognition that there are a number of apparently legitimate moral reasons to punish deserving wrongdoers. Many of these reasons can be seen as instances of the different ways that punishment might produce valuable results in consequentialist terms: deterrence of the offender and others, incapacitation of the offender, and reform of the offender, for example. However, we can describe a situation in which there is a deserving wrongdoer and in which none of the other possible reasons to punish this person is applicable. It is claimed that we still judge it to be obligatory to punish this person. This is said to be evidence for the claim that there is an obligation to punish deserving wrongdoers; it is also said to be evidence that this obligation exists even if no valuable further results occur because of the punishment.8

Moore accepts this form of argument. However, he is distinctive in supplementing it with a first-person variant. He asks us to imagine committing a brutal murder but then undergoing a moral transformation and feeling intense guilt over it. He claims that anyone who had fully grasped the enormity of such wrongdoing would feel and judge that she ought to be punished (Moore, 1997, 147–148, 167). We believe that guilt is, generally, a virtuous emotion that tends to reliably respond to moral facts like one’s own wrongdoing, so that we can assert that our imagined guilty response to such wrongdoing is evidence for the proposition that there is an obligation to punish wrongdoers (Moore, 1997, 144–152, 160–167, 181–184).9

Moore’s first-person example is not properly constructed to establish the main conclusion he is after. The sort of wrongful act that he asks us to imagine committing occurs in our society. This means that we may be supposing that various sorts of good results (such as deterrence) will be brought about by our punishment. If I imagine judging “I ought to be punished” after committing a brutal murder I may be supposing that this punishment will reduce the number of murders in society. A consequentialist would presumably agree that I ought to be punished in such a situation. The conclusion Moore is more interested in establishing is that there is an obligation to punish deserving wrongdoers, even if it produces no further good results. His specific thought experiment cannot establish that.

Kant’s “desert island” example is constructed more appropriately. He asks us to imagine that there is a causally isolated island and that the convicted murderers who are left on it unpunished will be no threat to anyone except themselves. Let us grant that we judge that the governor in this scenario is obligated to see to it that the murderers are given the punishment they deserve, which, Kant thinks, is death. (We can choose another punishment if we disagree.) Let us grant that this is evidence for the proposition that there is an obligation to punish deserving wrongdoers, even if no further good results occur.
This proposition does not establish that legal punishments are morally justified, however. This is because the example again gives us no way to judge the normative strength of the obligation. Kant’s example allows us to suppose that there is little of moral significance that is lost in imposing the deserved punishments.
But in ongoing human societies there are many morally significant losses, borne by various people, in imposing punishments. In order to gauge the strength of the obligation to impose deserved punishment we need to consider examples where we suppose that there will be such losses. When this is done it seems that even relatively small losses outweigh the obligation to give a serious wrongdoer what he deserves in a “desert island” type of scenario. This suggests that such an obligation cannot be a strong reason to establish or maintain institutions like contemporary criminal justice systems (Sverdlik, 2016).10

Paul Robinson’s Intuitions of Justice and the Utility of Desert (2013) is a helpful complement to Moore’s work. His book is a comprehensive study of intuitive lay judgments about deserved punishment. Retributivists like Moore often appeal to intuitive judgments about desert without giving us many specifics. Robinson’s work gives us a good sense of these specifics. His book reviews the social scientific literature on intuitive judgments about appropriate amounts of punishment and also reports on the results of more than 20 new experimental studies. These studies have the merit of being methodologically sophisticated and sensitive to the sorts of factors that play a role in determining sentences in contemporary criminal justice systems (Robinson, 2013, 240–247). I will now summarize them. It should be emphasized that Robinson does not assert that his experimental results establish what punishments are truly deserved by offenders (Robinson, 2013, 163–167).

1. Subjects agree markedly in how they rank the desert of offenders who have committed offenses in the “core” of the criminal law, that is, how they rate these offenders as more or less deserving of punishment. The core offenses are crimes against the person and the “street” types of property crime (Robinson, 2013, 18–34; cp. 83–85). Robinson emphasizes the significance of this ordinal structure of intuitive thinking about desert (Robinson, 2013, 10–11).11

2. Subjects agree considerably less in their rankings of deserved punishments for offenses outside the core, e.g., prostitution and drug offenses (Robinson, 2013, 63–69; cp. 362–369).

3. Many subjects, but not all, believe there is some consequential moral luck (Robinson, 2013, 257–260, 421–438).

4. Subjects have nuanced responses to many factors that retributivists assert modify an offender’s culpability. For example, variations in mens rea are thought to vary the amount of punishment deserved for a given type of offense (Robinson, 2013, 301–400).

5. When subjects consider certain factors, many of them favor some reduction in the severity of punishment from the level they believe is deserved. These factors include offender remorse and hardship for the offender’s family. But there is considerable disagreement about which factors should mitigate punishment and by how much (Robinson, 2013, 512–532).

Granting that the responses of Robinson’s subjects may not be “intuitions” in the sense that moral philosophers understand that concept, let us take them to be reasonable indications of what the relevant moral intuitions would be. We can then conclude the following.
a. Robinson’s subjects may have taken their judgments of deserved punishment to set a limit on severity. This is suggested by the fact that they were sometimes prepared to endorse less severe punishments than those they deemed deserved. But it is also possible that they believed that one prima facie obligation was outweighed by another.

b. Robinson gives no evidence that there is a significant level of agreement in intuitive judgments about the amounts of punishment that are deserved—even with respect to the offenses in the core of the criminal law. He is aware of the fact that an ordinal ranking entails no specific amounts of deserved punishment. Suppose that crime A deserves more punishment than B, which deserves more punishment than C. If A is punished by 50 years in prison, B by two days in prison and C by one day, ordinality is preserved. But it is also preserved if A is punished by 38 years in prison, B by 37 years, and C by 36 years, etc. (see the short sketch following this list). In order to generate specific amounts of punishment, an ordinal ranking needs to be supplemented with some “anchors,” that is, cardinal severity levels for some crimes.12

c. Robinson’s central studies of intuitive judgments about core offenses actually do not establish that there is substantial agreement in the ranking of them (Robinson & Kurzban, 2007; Robinson, 2013, 28–34). His subjects ranked 24 descriptive scenarios, each about three sentences long (Robinson & Kurzban, 2007, 1894–1898). But criminal offenses are defined more abstractly, and they therefore include token actions that differ greatly. That subjects agree in holding one description of a battery to be worse than one description of a burglary does not mean that they will agree in their rankings of all, or even many, batteries and burglaries. It might be that there would be significant disagreement about, for example, whether a remorseful 18-year-old offender convicted for the first time of battery deserves to be punished more severely than an offender convicted for the fifth time of burglary.

d. Robinson gives little evidence about his subjects’ beliefs about the normative strength of desert claims. If the subjects believed that desert claims establish limits on severity, he did not investigate their beliefs about the normative strength of these limits. For example, Robinson did not present his subjects with cases similar to Ross’s that test whether they judge punishments more severe than what they believe is deserved to be right, all things considered.
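The point in (b) can be verified mechanically. The following short sketch (an illustrative calculation of mine, not Robinson’s own analysis) checks that the two very different sentencing schedules just described both preserve the ordinal ranking A > B > C, which is why ordinal agreement alone fixes no amounts and must be supplemented with cardinal anchors.

```python
# Illustrative check (not Robinson's own analysis): two sharply different
# sentencing schedules, in days of imprisonment, both preserve the same
# ordinal ranking of crimes, so the ranking fixes no specific amounts.

def preserves_ranking(schedule, ranking):
    """True if each crime receives strictly more punishment than the next
    crime in the ranking."""
    return all(schedule[a] > schedule[b] for a, b in zip(ranking, ranking[1:]))

ranking = ["A", "B", "C"]  # A deserves more than B; B deserves more than C
harsh = {"A": 50 * 365, "B": 2, "C": 1}                 # 50 years, 2 days, 1 day
packed = {"A": 38 * 365, "B": 37 * 365, "C": 36 * 365}  # 38, 37, 36 years

assert preserves_ranking(harsh, ranking)
assert preserves_ranking(packed, ranking)
```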
5. Conclusion

I think we can draw four more broad-ranging conclusions from our entire discussion of judgments about deserved punishments and their epistemic credentials. I close with one comment about their role in the development of public policy concerning punishment.

1. Retributivists have largely appealed to moral intuition to support their claims about the moral significance of desert. One exception is the existence of consequential moral luck. Here we might say that if intuition supports the claim that such luck exists, it also supports the claim that it does not. Or we could say that there is a plausible theoretical argument (deriving from Kant himself) that it does not. Either way, it is clear that intuition alone could not rationally settle the question of its existence.

2. Kant and Moore, among others, have used “desert island” types of thought experiment to elicit intuitions that there is a moral obligation to impose deserved punishments, even if doing so produces no further good results.
However, if these thought experiments are designed properly they do not elicit intuitions that this obligation, if it exists, has much, if any, normative strength.

3. The most intuitively plausible claim about desert is the assertion that it establishes a limiting condition on the pursuit of consequentialist goals like deterrence. But this claim is ambiguous. It is most plausible when it concerns a case like Ross’s: the deliberate framing of a completely innocent person. Even here, though, the normative strength of this constraint does not appear to be infinite. Many retributivists, however, now take the claim also to mean that desert establishes a ceiling on the severity of the punishment of the guilty. This ceiling must be conceived of in cardinal terms, since punishments that preserve ordinality can strike people as impermissibly harsh. It is hard to see how we could now use moral intuition to test whether it is impermissible to punish guilty people more than they deserve. No retributivist has given us a clear account of what amounts of punishment are deserved, so we have no way to compare these amounts to intuitions concerning permissible severity. Furthermore, it seems likely that some resolutions of the conceptual issues sketched in Part 3 would make a divergence of desert and permissibility intuitively plausible. Suppose, for example, that the correct conception of desert entails that the severity of deserved punishment for an offense does not increase for repeat offenders. It is likely that some people would then think that it is permissible to impose more severe punishments than are deserved.

4. Even if desert establishes a limit on the severity of the punishment of the guilty, its normative strength would still need clarification. It is possible that the limit on the punishment of the guilty is weaker than the limit on the punishment of the completely innocent (cp. Kagan, 2015, 98–107).

Stepping back from these conclusions, we can note how striking it is that neither Kant himself nor contemporary retributivists have much used Kantian moral theory (for example, the various formulations of the categorical imperative), or any other moral theory, to address systematically the basic issues in criminal justice policy. It seems likely that many of the underlying considerations that retributivists now emphasize—such as mens rea and episodic excuses like duress—will be accorded some weight in the moral principles that a theoretical device like the categorical imperative would generate. Whether such a device would show that there is consequential moral luck, or that desert establishes a limiting condition, or that a criminal record increases the amount of punishment an offender deserves—these are matters that retributivists might helpfully investigate. And whether the principles that this device generates would coincide with common intuitions about desert—a powerful influence on policy in any democracy—is yet to be determined.
Notes

1. See Chapter 18 of this volume for more on moral intuition and Chapter 11 of this volume on Kant’s moral theories.
2. Kant considers a few examples that pertain to culpability, in puzzling ways (Kant, 1991, 60–61; 142–143; 144–145). But he does not treat the subject systematically.
3. Retributivism is often said to base its justification of punishing an offender on facts about his past. There is a debate here on how much of his past is relevant.
4. Moore notes that retributivism can be formulated in consequentialist terms, and he expresses sympathy for the claim that deserved punishment is intrinsically good (Moore, 1997, 105; 155–159).
5. See Chapter 19 of this volume for an exploration of the distinction between coherentism and foundationalism.
6. See Chapter 18 of this volume for an exploration of the idea that moral intuitions provide defeasible evidence for particular moral claims.
7. This kind of inference, from an intuitive reaction to a particular case to acceptance of a general principle that will guide subsequent action or judgment, is explored in Chapters 18, 20, and 21 in this volume.
8. See Chapter 18 of this volume for discussion of the kind of inference Moore makes here.
9. See Chapters 17, 18, and 19 of this volume for discussion of moral judgments that are directly grounded in emotions like guilt.
10. See Chapter 24 of this volume for a discussion of prison abolitionism focused on the costs of imprisonment and the moral imagination necessary to see alternatives to it.
11. Ordinal desert is distinct from comparative desert. For the latter see Kagan, 2015, Part III.
12. Robinson is uninterested in such anchors because he believes that the criminal justice systems in democratic societies must only conform to the ordinal rankings that citizens generally accept.
References

Alexander, L., Ferzan, K. and Morse, S. (2009). Crime and Culpability. Cambridge: Cambridge University Press.
Boonin, D. (2008). The Problem of Punishment. Cambridge: Cambridge University Press.
Feinberg, J. (1970). Doing and Deserving. Princeton: Princeton University Press.
Frase, R. (2013). Just Sentencing. Oxford: Oxford University Press.
Hart, H. L. A. (1968). Punishment and Responsibility. Oxford: Oxford University Press.
Husak, D. (2012). “ ‘Broad’ Culpability and the Retributivist Dream,” Ohio State Journal of Criminal Law, 9, 449–485.
Kagan, S. (2015). The Geometry of Desert. Oxford: Oxford University Press.
Kant, I. (1956). Groundwork of the Metaphysic of Morals, trans. H. J. Paton. New York: Harper & Row.
———. (1991). The Metaphysics of Morals, trans. Mary Gregor. Cambridge: Cambridge University Press.
Kleinig, J. (1973). Punishment and Desert. The Hague: Martinus Nijhoff.
Moore, M. (1997). Placing Blame. Oxford: Oxford University Press.
Nagel, T. (1979). Mortal Questions. Cambridge: Cambridge University Press.
Nelkin, D. (2013). “Moral Luck,” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2013 Edition). http://plato.stanford.edu/archives/win2013/entries/moral-luck/
Rawls, J. (1955). “Two Concepts of Rules,” Philosophical Review, 64, 3–32.
Robinson, P. (2013). Intuitions of Justice and the Utility of Desert. Oxford: Oxford University Press.
Robinson, P. and Kurzban, R. (2007). “Concordance and Conflict in Intuitions of Justice,” Minnesota Law Review, 91, 1829–1907.
Ross, W. D. (1930). The Right and the Good. Oxford: Oxford University Press.
Sher, G. (1987). Desert. Princeton: Princeton University Press.
Sher, G. (2016). “Doing Justice to Desert,” Oxford Studies in Political Philosophy, 3.
Sverdlik, S. (2016). “Giving Wrongdoers What They Deserve,” The Journal of Ethics, 20, 385–399.
Von Hirsch, A. (1985). Past or Future Crimes. New Brunswick, NJ: Rutgers University Press.
Wellman, C. (2012). “The Rights Forfeiture Theory of Punishment,” Ethics, 122, 371–393.
Further Readings

The seminal modern works in the philosophy of punishment are all brief: Ross (1930, 56–64); Rawls (1955, 3–13); Hart (1968, 1–27); Nagel (1979, 24–32). They do not consider desert at length. On desert see von Hirsch (1985); Moore (1997, 153–188).
Related Chapters

Chapter 8 Moral Intuitions and Heuristics; Chapter 16 Rationalism and Intuitionism: Assessing Three Views about the Psychology of Moral Judgments; Chapter 17 Moral Perception; Chapter 18 Moral Intuition; Chapter 20 Moral Theory and Its Role in Everyday Thought and Action; Chapter 24 Moral Epistemology and Liberation Movements.
30
RELIGION AND MORAL KNOWLEDGE

C.A.J. Coady
1. Introduction

There are two extreme and opposed responses commonly evoked by the question “What has religion to do with ethics?” The first is “nothing” and the second is “everything.” These reactions are often part of everyday discourse, but they have theoretical counterparts. The first, let us call it the dismissive response, can begin in common discussions with indignant avowals such as “I have no religious faith but I lead a moral life, and I’m actually better behaved than many religious people I know. And so are many of my atheist and agnostic friends.” The second, let us call it the dominant response, might produce the claim that “Morality is nothing without religion.” This response is encapsulated in Ivan Karamazov’s remark in Dostoevsky’s novel The Brothers Karamazov: “If God does not exist, and there is no future life, everything is permitted.”1

As is often the case with extremes, neither is intellectually satisfying. To see why this is so requires some spelling out of the conceptual and normative structure behind these extreme responses. If Socrates is right, or even partly right, that moral inquiry is a reasoned investigation into how we should live, then the first reaction insists that ethics enjoys a sort of epistemic autonomy in this quest, so that religion can have nothing authentic or distinctive to tell us about how to live. The second response tends to appeal to some simplified version of “divine command” whereby ethics is (or at least should be) wholly determined by injunctions from the Almighty or designated representatives on earth. There is the further complication that religions vary, to different degrees, in what they take to be those injunctions.

An undoubted truth underpinning the first response is that morally upright people can be found in all religious communities and amongst atheists and agnostics; so of course can villains. Yet it is also true, although it is dangerous to generalize about “all religions,” that a common feature of most religions is a strong ethical component in their repertoire, as the Biblical “Ten Commandments” and much else respected by Christianity, Judaism and Islam indicate. And there are related ethical teachings as well in Buddhism, Hinduism and Confucianism (if the latter is a religion). The relation between these two truths sets the parameters for our discussion of religion and moral knowledge.2
Consequently, much debate has arisen about the relation of religious claims about morality to the moral views and behavior of nonreligious individuals or secular communities. Much of this turns on the status of "divine commands" and sometimes traces its roots to the dilemma posed by Socrates in the Euthyphro, which asks whether piety is good because the gods love it or whether the gods love it because it is good (Plato, 2008, 10a). To paraphrase Socrates in modern terms: if the religious orientation towards the Divine provides moral knowledge, is this because God's command makes certain things right or wrong, good or bad, or because God recognizes what is independently right or wrong, good or bad? And much the same goes for the commands of polytheistic deities.
2. Divine Commands and the Socratic Dilemma: The First Horn Considered

To take the first horn of the Socratic dilemma: if God's commands create morality, then this dependency seems to raise two possibly related difficulties for a divine command ethic. The first is that making morality depend upon divine command seems to involve treating the wrong of such outrages as gratuitous cruelty, murder or rape as constituted by an arbitrary if all-powerful will—arbitrary because if there were intelligible moral reasons for God's commands they would presumably be independent of His willing: in Socratic terms, God would be willing the good because it is independently determined by such reasons to be good, or by the nature of goodness, rather than its being good because God wills it. So, it seems that if God could determine by supreme fiat that such acts as cruelty, murder and rape are morally wrong, then God could equally have determined that they were morally right; but this flies in the face of common understanding and moral insight.

Supporters of divine command theory will respond to these criticisms in various ways. One response, with its ancestry in Thomas Aquinas, is to defuse the argument from arbitrary will by locating the moral reasons within the nature of God. God is supremely good and rational, and these traits explain why divine commands are not arbitrary and could not commend such things as cruelty or murder as good or obligatory. Those who deny the existence of God will want to deny that we know this about the divine nature, but that need not concern us here, since our problem about divine commands is posed by the supposition that God exists and gives commands. Moreover, this defense can be strengthened by the common emphasis in the classic monotheistic religions upon the idea that God loves his creation, and indeed that love is as much an essential feature of the Godhead as knowledge. If God has made human beings in His image and loves them, then it is plausible to hold that the fulfillment of their being in goodness is God's concern and militates against arbitrary commands.

Some contemporary philosophers of religion have offered variations on or close alternatives to divine command theories that incorporate something of this defense. Linda Zagzebski, for instance, has propounded a "divine motivations" theory in which God possesses certain virtues in supreme form, one of which is the loving disposition that motivates God's will toward humans (Zagzebski, 2004). Although she is critical of common divine command theories, she is inclined to think that her theory may be compatible with a modified divine command theory such as Robert Adams's, which gives a prominent place to love as a motivation for God's commands (Zagzebski, 2004, 258–265; Adams, 1973, 1979 and 1999). Another proposal is to restrict the command theory to the imposition of obligations and to allow that the goodness of actions is independent of God's explicit commands. So God's commands cannot establish that certain things are good (or bad) but only that they must be pursued or avoided as a matter beyond their natural appeal or revulsion to human agents. (For a defense of this sort of approach, see Evans, 2013.)

Many intricacies are involved in the metaphysics of God's nature that support either the motivation or command approaches, and we cannot enter into these here. There are, however, two contrasting difficulties that might be urged against the whole approach. The first is that orthodox theistic religions usually hold that the nature of God is beyond the comprehension of our limited reason, and the second is that for Jewish, Christian and Islamic revelations there are scriptural episodes that show God commanding subjects to do what is morally repugnant.

There are various ways to resolve the first problem, the most notable, perhaps, being Aquinas's argument that although God's nature is not fully comprehensible to us, we can achieve a dim understanding of the matter through analogical reasoning, so that when we say that God is good we mean that the quality we apprehend, partially, as goodness (including moral goodness) in our own lives is supremely present in the Godhead. Goodness and reason in their purest and most normative form are in the divine nature (Aquinas, 1911, pt. Ia, qu. 6, art. 3, and pt. Ia, qu. 2, art. 3).3

On the second difficulty, there are many instances in the Old Testament of God commanding or doing acts that seem egregiously morally wrong. Many of the plagues God inflicts upon the Egyptians in Exodus Chapters 7–12 involve harming the innocent, culminating in the tenth plague, in which God slew all the first-born Egyptian children in the land in order to persuade Pharaoh to let the Israelites leave Egypt. A second example, much debated among theologians, is God's command to Abraham to kill his beloved son Isaac as a sacrifice to God, the so-called Binding or Akedah (Genesis 22). Abraham is about to kill him when God intervenes via an angel to prevent the killing. But without the intervention Abraham would have killed his innocent son as commanded by God, and this command seems clearly to enjoin a wicked act.

Regarding the Egyptian case, the first interesting point to note is that God's justification (as it were) for the slaughter can be cast in terms that are, at least in principle, formally acceptable to many contemporary moral philosophers. The justification is a consequentialist argument that the greater good is served by the unfortunate slaughter, since Pharaoh had resisted all nine previous afflictions and the innocents were sacrificed to rescue God's chosen people from virtual slavery, a good that may be considered to outweigh the evils of the slaughter. This form of argument is available to a utilitarian outlook and also to an intuitionist one, since the latter usually allows even strong "prima facie" obligations to be outweighed by others. Indeed, given the enormity of the slaughter, the form of justification for the divine command might better be cast in contemporary terms as something like a "supreme emergency," a category introduced by Michael Walzer to describe the necessity conditions that legitimate the violation of profound moral prohibitions (Walzer, 1973 and 2004).
I cite these possibilities not to endorse them, since I have reservations about all of them as used in contemporary philosophy, but only to show the parallel between a justification for God's acts available to Moses and those common enough among many secular philosophers today. Philip L. Quinn has indeed argued that since there may be nonmoral reasons that trump moral reasons in certain circumstances, God may have such reasons for commanding immoral acts. Quinn takes the Abraham/Isaac binding situation and Søren Kierkegaard's treatment of it as an example and argues that it could be a case of religious reasons cum obligations overriding moral reasons cum obligations while leaving the moral reasons still obligatory and not merely prima facie so (Quinn, 1986). Quinn here follows an emerging tendency in late twentieth-century moral philosophy to insist not only that morality is not all-encompassing (i.e., that not all conclusive reasons for action are, or involve, moral reasons) but that where moral reasons are relevant they need not be rationally dominant (Wolf, 1982).

A possible objection to all this is that if God is omnipotent, then surely the deity could have removed the Israelites safely by some miraculous means short of overriding significant moral obligations, and perhaps challenged Abraham's faith with a similar alternative. But this raises complex problems about the relation between God's almighty power and the operation of human free will and natural law, issues that connect with the "problem of evil" and are beyond the purview of this chapter. One preliminary point may be briefly noted: omnipotence cannot be construed as wholly unconstrained, since it makes no sense, for instance, to think of the task of doing what is logically impossible as coherently achievable. And something similar may be argued of God's acting against the conditions of human free will or, in some contexts, even against natural laws that He has ordained.

Less easily dealt with is an abiding background problem for such solutions, namely, that the test of the authenticity of any given revelation purporting to come from God is plausibly its coherence with basic moral understanding, such as the obligation not to kill the innocent. The case of Abraham and Isaac is paradoxical and even shocking precisely because of this.

A possible religious explanation of such problematic divine acts that avoids this problem of justification, however, is to treat such narrations as symbolic or allegorical tales rather than literal historical narrations. This is a device commonly employed by many contemporary Biblical scholars, and it was also used in the early Christian church, for instance in the writings of St. Augustine and Origen in treating some Old Testament narrations. So the story of the plagues may be taken as merely a vivid illustration of the way that reliance upon God will ultimately free people from bondage and oppression. Some religiously committed philosophers even argue that faith in the ethical and spiritual messages of the Old Testament would be largely unaffected even by the discovery that central figures like Moses or Abraham never existed (see for instance Fleischacker, 2015, 79).
3. The Socratic Dilemma: The Second Horn Considered

As for the second horn of the dilemma, the answer provided to the arbitrary commands issue also deals with the theological difficulty that if God's commands are based on divine recognition of good or evil as independent realities, then God's status as omnipotent creator is impaired. It deals with this because it makes the reality of good inhere in the divine nature. But even if that answer is successful, there remains the problem of the apparent redundancy of divine commands in actual human moral knowledge. Moral facts may not be ontologically independent of God, but they appear to be epistemically independent, since human minds can gain access to them without the need of explicit divine command or divine revelation. And, as already mentioned, there are many palpable examples of morally good behavior by the nonreligious and indeed by those who have never heard of the favored divine commands of any given religion. It is also relevant that such well-behaved people can give, or can have provided on their behalf, various apparently impressive reasons for their morally good behavior that seem to have no connection with divine commands of which, indeed, they may never have heard. Together, these two facts at least suggest that divine commands on the matter are irrelevant to moral knowledge and behavior.

It is a further pertinent consideration that the significance of the phenomenological facts about good behavior, and the apparently sound reasoning and feeling behind it, among nonreligious people has been recognized by many religious traditions. Such natural morality available to conscience is acknowledged, for instance, by the Jewish concept of "righteous Gentiles" and by St. Paul's Letter to the Romans recognizing the legitimacy of nonbelievers' consciences (Paul, Romans 2:14–15). Moreover, such recognition is part of the commitment to "natural law" inherent in much Roman Catholic tradition tracing back to Aquinas. This commitment sees such natural reason as stemming from the divine reason and, as it were, sharing in it.

Even so, any such natural morality seems to put constraints upon the significance of claims that God has revealed and ordered this or that. Here are three:

1. It seems imperative to explain how those who adhere to this morality revealed by natural reason (and perhaps intuition) are dependent on divine commands even where they are unaware of them as such, and even reject the existence of God.
2. There may be problems created by conflicts between the apparent verdicts of natural morality and purported divine commands.
3. There seem to be areas where divine commands are silent or ambiguous yet natural morality applies, such as many hotly debated issues in contemporary bioethics or technological intrusions into private life, not to mention moral dimensions of politics, such as the possible moral superiority of democracy over monarchy.

As for problem 1, the dependency might be explained by a version of the Thomistic view mentioned earlier that holds natural morality to be a manifestation in human consciences of the divine reason and goodness, a manifestation that need not be explicitly understood as such by everyone in whom it occurs. Once more, this maintains an ontological dependency while admitting no practical methodological or epistemic dependency.4 So morally upright atheists are conforming to God's law even when they don't know it.

This response seems coherent enough, but it brings us directly to problem 2, because some of the matters that religious people claim to be God's commands or law seem quite contrary to what many apparently good and reasonable people think morally required or permitted. If there is the harmony that Aquinas and others seek, then something is wrong about the conflict, or one or more of the parties is in the wrong. What this highlights is the crucial matter of interpretation, which figures on both sides of the divide and, for religious believers, affects the understanding of divine injunctions. The commandment given to Moses in Exodus, "Thou shalt not kill," to take just one instance, has been variously understood by different religious believers or traditions as enjoining pacifism or allowing for killing of certain sorts in just wars; as opposing capital punishment entirely or allowing it in some circumstances; as forbidding, or sometimes allowing, suicide; and so on. It is also clear that religious fundamentalists do not avoid resort to interpretation.
They insist that they know precisely what God commands or forbids in explicit detail, either from a literal reading of inerrant texts or a literal understanding of religious institutional or charismatic authority. But even such fundamentalists deploy interpretation since, for Christians for example, there are numerous statements in the Bible that contemporary fundamentalist Christians disregard, such as strong Biblical dietary prohibitions like those forbidding the eating of sea creatures without fins and scales, such as squid, octopus and shellfish (Leviticus 11:9, where such creatures are described as "loathsome"). Some "observant" Jews, however, while not theologically fundamentalist, do take such dramatic dietary requirements seriously as an affirmation of religious identity. There are numerous other injunctions that conflict with each other at face value: the commandment "Thou shalt not kill," for example, conflicts with the many divine biblical authorizations of killing, including permissions to the Israelites to slaughter enemy women and children (Deuteronomy 7:1–2). Aside from these matters, the Ten Commandments themselves, as well as much else, require interpretation, as we have seen with "Thou shalt not kill."

The case of Islam is significant in this respect because outsiders (and some insiders) think that Muslims' strict adherence to the Qur'an leaves no room for interpretation on moral and doctrinal matters, but this view flies in the face of the fact that there have been important divisions within the Islamic faith about the meaning of revealed doctrines and norms from the earliest years. This is dramatically illustrated in the violent conflicts that have plagued the Middle East in recent years (though much of this conflict is at least partly due to nonreligious factors such as foreign invasion). Divisions between Sunni and Shia are, however, only the tip of a fairly large iceberg of interpretation within Islam that is magnified by the fact that Muslims operate with no central official communal authority purporting to determine epistemic and value issues with finality. Where leaders assume something like such authority, there are usually other respected figures in their community, scholars or preachers, who differ from them in interpreting scripture and tradition (for a modern discussion see Barlas, 2002).

Moreover, interpretations of texts or authoritative announcements draw from many different sources, some directly religious, others not. In the case of killing or the interpretation of murder, philosophical argument about the killing/allowing-to-die distinction is one pertinent input into determining the sort of causing of death meant by the prohibition, as is debate about the merits of the theory of double effect, whereby some killings that are foreseen but not intended are, in some circumstances, not prohibited and not murder. There are also historical and textual discoveries related to the canonical scriptures of various religions that must at least be taken into account by believers as impinging upon interpretation, even if only to argue their irrelevance to the understanding of the text. In the case of Christianity, at least, it seems increasingly clear that modern Biblical studies, as well as contemporary research in ecclesiastical history, can make simple literalist readings of the teachings of both scripture and church authority, including moral teachings, problematic.
This discussion of interpretation moves smoothly into addressing problem 3, since where scriptures or tradition are silent or ambiguous on moral problems, some interpretation both of the drift of scripture or tradition and the context of its statements, and of the contemporary structure of the moral problem itself, becomes imperative. The challenge posed by slavery in the modern world, for instance, is a case in which Christian texts and traditional teaching were long understood as compatible with the practice, if not positively supportive of it; but very many of the eighteenth- and nineteenth-century abolitionists were Christians who were moved by a different, more radical understanding of the spirit of the Gospels, and they stressed texts that could be taken to support human equality and the ways that slavery violated it.

In the contemporary world, it is obvious that scripture and tradition are silent or ambiguous on many moral issues raised by such matters as the huge variety of ever-developing technological innovations, especially those involving the life sciences. This does not discourage religious leaders from proclaiming on these topics, but their efforts to determine what is relevantly implicit in the traditional religious moral injunctions require interpretations that are inevitably contentious even amongst their followers. Consider, for instance, debates about artificial reproduction or genetic enhancement, where a common resort is to the notion of "playing God," though some secular thinkers also deploy a version of this (see Coady, 2009).
4. The Problem of Epistemic Irrelevance and Four Responses

Returning to our original opposition between the dismissive and dominant responses, we have seen that the divine command theory can be so understood as to avoid the pitfalls of the dominant response, so that it is not formally at odds with the realities of moral knowledge and good behavior by the nonreligious. It is also capable of being read in a way that neutralizes the dismissive critique based upon the Socratic paradox, since God's commands need not be arbitrary nor dependent on some goodness external to the Deity. Nonetheless, after such modifications, there remains a problem of irrelevance, since accommodating the reality of moral knowledge that is accessible without recourse to divine commands or explicit revelation strongly suggests that there is in practice no fundamental need to invoke a role for God in moral knowledge.

These considerations in turn raise the question of what a religious morality can contribute to natural morality, other than injunctions to honor God (or gods) and mandated ways to do that. There are at least four possible replies to this. First, a religious ethic may reveal truths that natural reason can grasp only dimly; second, it may provide a more robust motivation for moral behavior than otherwise available; third, drawing upon both the first and second to some degree, it may provide backing for absolute obligations; fourth, it may answer questions of ethical significance beyond a morality of right and wrong, duty and goodness (narrowly construed). With the fourth reply, we are close to the territory often signaled by the phrase "the meaning of life," and this is an area of morality where religious claims are often thought highly relevant.

The idea that divine commands or revelation might complement what natural reason can discover about moral goodness and obligation could build upon the widely acknowledged fallibility of human reason and insight. There is no reason to believe that our intellectual and emotional powers, impressive as they are, can at any given time definitively discover all there is to know about morality. To emphasize these points we need only cite the many examples of slow moral progress whereby, over the centuries, very widespread, strongly held beliefs about slavery, racial inferiorities, the status of women, sexual ethics, and the value of civic liberty have shifted dramatically amongst many communities. So it seems that the possibility cannot be discounted that divine revelation might advance and supplement the capacity of natural reason to find moral truths. Moreover, there seems to be concrete evidence that several instances of moral progress have in fact been promoted, at least in part, by people claiming to have divine warrant for their novel beliefs. A striking example is the case of advocacy in the UK and the USA in the eighteenth and nineteenth centuries for the abolition of slavery, which was often led by people of Christian conviction who cited scriptural support for the equality of humans as children of God and who went against the grain of widespread past and contemporary support for slavery.

An important caveat to this is that, as we have already seen, there are many biblical texts, as well as elements in Judeo-Christian tradition and authority, that had been cited in support of slavery, and indeed against some of the other advances in moral understanding. In witness to the latter, one might cite the persistent papal denunciations of civic liberty and freedom of religious conscience throughout the nineteenth century and well into the twentieth. Popes and other Catholic leaders regularly denounced liberalism, freedom of conscience, and the separation of Church and state in ways that are now simply unthinkable. So Pius IX in 1864, in the Encyclical Quanta Cura (endorsing his predecessor Gregory XVI's condemnation), denounced as "insanity" the view that "liberty of conscience and worship is each man's personal right, which ought to be legally proclaimed and asserted in every rightly constituted society" (Pius IX, 1894). Nowadays, as a result of experiencing the merits of liberal democratic societies, Catholics are as vociferous in support of these ideals as any other religious group. But many of the laity in democratic countries had endorsed these values long before their leaders came to change their tune, and their endorsement was mostly a result of their experience of the benefits of civil liberty and religious freedom. Interestingly, as broad practical acceptance increased, theologians found religious reasons to endorse, indeed celebrate, freedom of conscience and civil liberty.

So the issue is complicated, and a good deal turns on how one distinguishes a genuine divine command or communication from a bogus one. It would take us too far afield to canvass this matter thoroughly, but there is no reason to think that religious people should be committed to any idea that such discernment must be easy. Everyone has difficulty in some circumstances determining, for instance, whether what they take to be a conscientious decision is genuinely a determination of a properly formed and operating conscience or is rather determined by selfishness, obstinacy or delusion. It is, however, worth noticing another point in the context of this discussion of a complementing role for religious moral knowledge. The point is, as noted earlier in connection with the Abrahamic binding, that one factor relevant to deciding whether something is a genuine divine command or revelation is how this supposedly supplementary information comports with the deliverances of the genuine natural understanding of morality it is meant to complete. As Peter Geach once put it in connection with the common knowledge that lying is bad, "so far from getting our knowledge that lying is bad from revelation, we may use that knowledge to test alleged revelations" (Geach, 1969, 119–120).
5. Religion and Moral Motivation

This brings us to the second point, about religion and moral motivation. Famously, John Locke went so far as to claim that "promises, covenants and oaths which are the bonds of human society, can have no hold upon an atheist. The taking away of God, though but even in thought, dissolves all" (Locke, 1689/2016). As already noted in connection with the dominant response, the "no hold" thesis seems contrary to common experience of upright atheists and to much else, but it might be argued that religious people have an especially strong rationale for keeping such obligations as Locke mentions, as well as other moral imperatives. Mere recognition of the good in promising (so the claim might go) cannot provide the "hold," as Locke puts it, that divine backing delivers.

Thus it is common enough for some proponents of the motivational approach to emphasize fear of dreadful divine punishments as a goad to conformity to moral standards. Often in the monotheistic religions this is attached to the afterlife and eternal suffering in Hell, though there is also a long tradition in various religions of fearing punishment in this life, as evidenced by the way misfortunes are sometimes read as punishment for sin by divine power or powers. But other strands of religious belief treat the alienation of the believer from God by transgression, or the baseness of violating God's love in transgression of His will, as the thing to be feared. And indeed, versions of this emphasis may depart altogether from fear as motive, stressing instead the motivation of love itself; the idea being that moral imperatives provided by a God who is supremely loving are obeyed because of that love and the reciprocal love-inspired obedience this commends and inspires. Attempts to make both the fear and the love motivations plausible often proceed by analogy with the way these emotions work in a child's respect for the authority of their parents. So much more so, it is argued, with the parenthood of God over all of us.

In the case of fear, an objection immediately arises about the abject nature of this motivation and its unsuitability for a role in motivating moral behavior. Fear may be a reasonable motive for adherence to law, but in the case of morality, so it is argued, such a motive disrespects the free acceptance of moral obligation that is at the heart of morality. Children whose good behavior is dominated by fearful conformity to parents or teachers are, so it is argued, deficient in moral autonomy and the proper interior attitudes to morality. Immanuel Kant's moral philosophy treats respect for the moral law as the basic motivation for moral behavior, a motivation independent of such incentives as fear of punishment.

There is certainly something right about the suspicion of fear as a motivation for properly moral behavior, but against this it might be argued that fear sometimes plays an important and positive role in much necessary practical action. A healthy fear of poisonous insects or dangerous animals is needed for much safe behavior, and it is not obvious that a healthy fear cannot be a legitimate motivation for some moral actions. Fear that some tempting action will hurt a loved one does not seem an inappropriate motive for a moral response of restraint. A personality dominated by fear is indeed no model of moral character, but not all fear need be so morally debilitating. Furthermore, the fear of God and of divine punishments may be, as Peter Geach has argued, in a different category from other fears. Responding to the objection that this fearful attitude to God's power is mere power-worship, Geach says that fear of defying an Almighty God is a power-worship that "is wholly different from, and does not carry with it, a cringing attitude to earthly powers" (Geach, 1969, 127). The love motivation, in any case, avoids the various objections to fear.
To do good through love does not seem abject or a denial of autonomy; certainly, someone who spends part of their time caring for the sick or the deprived out of love for them, without prospect of financial gain, self-promotion or the like, is behaving morally without taint of unworthy motivation. If a rather rigid form of Kantianism denies this by insisting that the only worthy motive is respect for the self-legislated moral law, then it is at odds with the common understanding of moral behavior. Moreover, love and respect for other persons seem more readily comprehensible than love and respect for an abstraction like the moral law. But a difficulty for the love motivation in the present context is that moral acts motivated by the love of God may seem removed from the love of the persons or things to which the acts are directed, and this itself may detract from their supposed moral quality. A possible reply to this is to insist that the love of God very naturally spills over into love of God's creation and creatures, as is indicated by Christ's reduction of all the biblical commandments to the injunction to love God and neighbor.

The third reply raises the issue of absolute obligations or prohibitions. These are often dismissed in much contemporary moral philosophy, since it is assumed by many moral theorists that there are no principles that cannot admit of exceptions. In the case of very deep moral prohibitions, such as those on rape, intentional killing of the innocent or scapegoat punishments, such philosophers show great ingenuity in constructing more or less outlandish scenarios in which "our" intuitions are supposed to admit of exceptions. Certain forms of utilitarianism and of intuitionism have inbuilt provisions whereby common moral duties can be overridden without loss; the overriding proceeds by calculation of outcomes in the former and by "weighting" of prima facie duties in the latter. But some consequentialist theorists, G. E. Moore for instance, have argued that some moral rules should be adhered to even if they are non-maximal in consequences in particular cases, since not only is general adherence beneficial in a wider view, but our intellectual and moral limitations make allowing exceptions too risky (Moore, 1903/1993, section 99).5 Some recent philosophers have argued a similar case on rule-utilitarian grounds in favor of absolute prohibitions on terrorist acts and on torture (see respectively Nathanson, 2010 and Brecher, 2007). Other theorists, such as Michael Walzer, treat some moral obligations as absolute but allow their overriding (with grave moral loss) in situations of supreme emergency that leave the agent with dirty hands (Walzer, 1973 and 2004).

There is much more that can be said on this, and I think that we are still unclear about the exact nature of the moral absolutism involved and about the effectiveness of so many of the counterexamples to it. For our purposes here, however, I will assume that if there are absolute prohibitions in morality, the skepticism of philosophers suggests that it would require a very strong motivation for an agent to hold fast in the face of temptations like those envisaged in the more plausible of the philosophical scenarios. This may even be true of the indirect consequentialist motivation advocated by theorists such as Moore, Nathanson and Brecher. Divine command, confidence in God's Providence, and the promise of salvation may plausibly provide a believer with that motivation.
6. Spirituality and the Meaning of Life and Existence

This brings us to the fourth area to which religion seems specially pertinent, an area that is not exhausted by commitment to prohibitions on murder, theft and so on, or other duties of justice, beneficence, promise-keeping and the rest, although the outlook in question may well impact on these narrower requirements via the first three replies above. Religious views fit naturally into an area concerning an outlook on the meaning of one's life and the spirit in which it should be lived. Such an emphasis on spirituality and meaning has been a mark of many religious traditions. Indeed, some think it an exclusively religious concern; witness Albert Einstein's remark: "to know an answer to the question 'What is the meaning of life?' means to be religious" (cited in Cottingham, 2003, 9). But if the notion of "spirituality" seems obviously at home in religious discourse, the expression has come to have widespread currency beyond religious circles. No doubt a good deal of this currency in popular thought is debased and sentimental, as it can be in religious discourse as well, but I think that often enough it involves a genuine rejection of cultural and social attitudes that seem to treat life and its strivings in too superficial and unsatisfactory a way. Whether it really requires an outlook that can be authentically viewed as religious remains contentious.

Thomas Nagel has investigated this and related issues in his essay "Secular Philosophy and the Religious Temperament" (Nagel, 2010). There, Nagel explores the idea of a harmony with the universe as a key to the outlook I am calling spirituality. The idea is to see if sense can be made of what he calls "the religious temperament" without the trappings of theology, and he quotes William James in this connection:

Were one asked to characterize the life of religion in the broadest and most general terms possible, one might say that it consists in the belief that there is an unseen order, and that our supreme good lies in harmoniously adjusting ourselves thereto. (Nagel, 2010, 11; James, 1902/2002, 53)

As Nagel notes, philosophers, especially in the analytic tradition, have often been skeptical about the idea of a meaning to life, some even considering it a dubious extension from the role of meaning in linguistic communication. More recently, the question has received closer attention, with many philosophers thinking of it in terms either of the absurdity of life itself, here following in the steps of Albert Camus (1991), or of the individual's subjective task of creating and giving a meaning to their lives (Baier, 1957/2008). By contrast, the religious outlook claims to find a meaning in life, and any harmony therein, as a gift that transcends the individual will; the giver of that gift varies with the specific religious outlook, but for the monotheistic faiths it is God, and for Christians the gift comes through Christ. Key features of this meaning must involve how to understand and live with the suffering, deprivation and other trials that life inevitably involves; the meaning claims also address the deep motivations toward the welfare of others that are said to make full sense of our lives, and the consequent stance that we should take toward ourselves. The term spirituality has a complex variety of uses, but, for the religious, it often connects, at its deepest, with such an outlook on life's meaning.

There can indeed be some overlap between religious and nonreligious pictures of life's meaning, partly because there is often a two-way dynamic between secular insights and those of religion. Even the giftedness mentioned earlier can find echoes in the way some nonreligious thinkers depart from a wholly subjective picture of our meaning-creating choices and seek some element of objectivity in what can count as constituting the meaning of life. Robert Solomon, for instance, once argued for a basic attitude of gratitude for life and existence in a way that suggests something independent to be grateful to; in his case what is independent of the individual is life itself or even the universe. As he puts it: "we are the beneficiaries of a (more or less) benign universe," and "We might say that one is grateful not only for one's life but to one's life—or rather to life—as well" (Solomon, 2002, 105).
In a very different way, Susan Wolf has sought objective correlates for the significance that life has beyond our deliberate choices. Interestingly, she argues that such meaning or significance is altogether independent of morality (or indeed happiness) (Wolf, 2012). I suspect that this is plausible only if we construe morality narrowly, somewhat in the terms of duties and the like mentioned earlier; hence it may be that the outlook in question is related to a broader sense of morality, to what might be called ethos rather than ethics, and that there are subtle connections between the two.6

Beyond the obvious metaphysical claims of theism, there remain, however, striking contrasts between these nonreligious pictures and most explicitly religious ones, particularly the idea, in some influential religions, that the gift comes from something transcendent that nonetheless has a person-like aspect. Another contrast will concern the role of prayer; yet another will involve injunctions like the startling Christian claim that you should love your enemies and do good to those who hate and persecute you; perhaps another concerns the attitude to "worldly goods" such as wealth, and the related issue of the status of voluntary poverty.

Questions of meaning and spirituality, whether posed in a religious or secular form, raise interesting issues of validation or justification. For those religious believers who rely on some form of revelation, the issues will often turn on the status of testimonies to divine or other nonnatural matters and also on the witness of lives lived in accordance with the values and outlooks in question. On the former, testimonial evidence, once much neglected in philosophy, is now of prominent and growing significance in analytic epistemology (for a good account see Gelfert, 2014); on the latter, the role of epistemic and moral exemplars in our thinking is still relatively underexplored, though it has some connections with the testimony debate, and it is clearly relevant to much putative knowledge acquisition in both moral and religious contexts. It has been interestingly discussed and defended by Linda Zagzebski (Zagzebski, 2012 and 2017).

Of course, it is necessary to add that some claims to spirituality or a meaning for life may be mistaken, distorted or even vicious, as a Nazi spirituality would be. The Gonzo journalist Hunter S. Thompson once answered an interviewer's question, "What does it all mean?" with the response: "It's all about fun really. If you can't have fun, it's not worth doing" (Thompson, 2009, 172). Any acceptable account of the meaning of life should certainly allow some place for joy, and it is a reasonable criticism of certain gloom-ridden puritan pictures that they cannot accommodate this. Nonetheless, Thompson's answer is basically a repudiation of spirituality, and his breezy hedonism will strike a jarring note of superficiality with even the most joyful religious outlooks, or indeed with anyone who takes seriously the issues raised by the search for a meaning to life and existence.
7. Conclusion

There are two opposing attitudes, dominant and dismissive, that tend to bedevil discussions of the relevance of religion to moral knowledge: either religion tells us everything about morality or it tells us nothing. This entry seeks a more complex understanding than that given by either the dominant or the dismissive attitude. Our exploration began with the common religious idea that moral understanding comes from grasping what God (or some equivalent) wants of us. This idea leads to various forms of divine command theory, and objections to them were scrutinized in detail. One key difficulty addressed is that raised by what is called "the Socratic (or Euthyphro) dilemma": if moral obligation and goodness are just a matter of what God commands, then murder or rape seem rendered only contingently wrong; but if God commands their prohibition because they are morally wrong, then God's creative omnipotence seems endangered, and morality is independent of God's will and presumably accessible to nonreligious thinking. Defenders of divine command theories must show that the divine will in this matter is not arbitrary but also not dependent on something external to God. They must also give some account of the obvious fact that many nonreligious people, even those who have no knowledge of putative divine commands, exhibit moral knowledge and wisdom and can lead exemplary moral lives, not to mention the fact that revelations of God's commands, such as those in the Christian Bible, contain apparently incompatible claims regarding God's moral injunctions. Various ways of overcoming these difficulties were explored.

The entry then investigated four ways in which religious insights might be more subtly relevant to common moral knowledge. These were: (a) a religious ethic may reveal truths that natural reason can grasp only dimly; (b) it may provide a more robust motivation for moral behavior than otherwise available; (c) it may provide backing for absolute obligations; and (d) it may point beyond a relatively narrow concern with right and wrong moral behavior towards issues to do with spirituality and "the meaning of life." This discussion also encompassed the ways in which some nonreligious philosophers have sought to give secular accounts of the last of these four issues.
Notes

1. There is controversy about how this sentence is to be understood in the novel, since it occurs within a questioning context and may not have represented Dostoevsky's own belief, nor even indeed Ivan's mature thought. Some have implausibly denied that it occurs in the novel at all. See the analysis by Andrei Volkov (Volkov, n.d.).
2. Although talk of moral knowledge, moral facts or moral truths is obviously at home with various theories of moral realism and other cognitivist theories, competing anti-realist and non-cognitivist theories need some notion that closely mimics these concepts if they are to make sense of engrained moral practices and discourse and avoid moral skepticism. They must account for views that are acceptable and unacceptable, better and worse, and outright awful. So I will occasionally make free with notions of moral knowledge and the like without concern for further metaethical qualifications. For further discussion of these issues see Chapters 13, 14 and 20 of this volume.
3. For modern discussions of Aquinas's approach and its relevance to a solution to the Euthyphro dilemma see Stump (2002, 90, 127–128) and Adams (1973, 318–347). Aquinas's moral epistemology is further discussed in Chapter 10 of this volume.
4. Another route to the same conclusion is to distinguish between an ontological and a semantic reading of such terms as "moral obligation," so that nonbelievers can grasp the meaning of moral terms without knowing that their ultimate reference is to God's commands. Robert Audi (2007) subtly explores this possibility and its implications.
5. For further discussion of injunctions to play it safe in cases of moral uncertainty, see Chapter 28 of this volume.
6. For further discussion of ethos see Coady (1993, especially 169–171).
References

Adams, R. M. (1973). "A Modified Divine Command Theory of Ethical Wrongness," in G. Outka and J. R. Reeder (eds.), Religion and Morality. Garden City, NY: Anchor Press.
———. (1979). "Divine Metaethics Modified Again," Journal of Religious Ethics, 7, 66–79.
———. (1999). Finite and Infinite Goods. New York: Oxford University Press.
Aquinas, T. (1911). Summa Theologiae, trans. Fathers of the English Dominican Province. London: Burns, Oates and Washbourne.
Audi, R. (2007). "Divine Command Morality and the Autonomy of Ethics," Faith and Philosophy, 24 (2), 121–143.
Baier, K. (1957/2008). "The Meaning of Life: Inaugural Lecture Delivered at the Canberra University College on 15 October, 1957," repr. in E. D. Klemke and S. M. Cahn (eds.), The Meaning of Life: A Reader. Oxford: Oxford University Press.
Barlas, A. (2002). Believing Women in Islam: Unreading Patriarchal Interpretations of the Quran. Austin: University of Texas Press.
Brecher, B. (2007). Torture and the Ticking Bomb. Oxford: Wiley-Blackwell.
Camus, A. (1991). The Myth of Sisyphus and Other Essays, trans. J. O'Brien. New York: Vintage.
Coady, C. A. J. (1993). "Ethos and Ethics in Business," in C. A. J. Coady and G. Sampford (eds.), Ethics, Business and the Law. Sydney: Federation Press.
———. (2009). "Playing God," in J. Savulescu and N. Bostrom (eds.), Human Enhancement. Oxford: Oxford University Press.
Cottingham, J. (2003). On the Meaning of Life. New York: Routledge.
Evans, C. S. (2013). God and Moral Obligation. Oxford: Oxford University Press.
Fleischacker, S. (2015). The Good and the Good Book: Revelation as a Guide to Life. Oxford: Oxford University Press.
Geach, P. T. (1969). God and the Soul. London: Routledge & Kegan Paul.
Gelfert, A. (2014). A Critical Introduction to Testimony. London: Bloomsbury Academic.
James, W. (1902/2002). The Varieties of Religious Experience. New York: Longmans, Green and Co.
Locke, J. (1689/2016). Second Treatise of Government and A Letter Concerning Toleration. Oxford: Oxford World's Classics, Oxford University Press. Also https://en.wikisource.org/wiki/A_Letter_Concerning_Toleration [Accessed January 21, 2017].
Moore, G. E. (1903/1993). Principia Ethica. Cambridge: Cambridge University Press. Also http://fairuse.org/g-e-moore/principia-ethica [Accessed January 21, 2017].
Nagel, T. (2010). "Secular Philosophy and the Religious Temperament," in Secular Philosophy and the Religious Temperament: Essays 2002–2008. Oxford: Oxford University Press.
Nathanson, S. (2010). Terrorism and the Ethics of War. Cambridge: Cambridge University Press.
Pius IX. (1894). "Quanta Cura," Papal Encyclicals. www.ewtn.com/library/ENCYC/P9QUANTA.HTM [Accessed January 19, 2017]. Also quoted in M. Fiedler and L. Rabben (eds.) (1998), Rome Has Spoken: A Guide to Forgotten Papal Statements and How They Have Changed Through the Centuries. New York: Crossroad Publishing Co., 48.
Plato. (2008). Defence of Socrates, Euthyphro, Crito, ed. and trans. D. Gallop. Oxford: Oxford World's Classics, Oxford University Press.
Quinn, P. L. (1986). "Moral Obligation, Religious Demand, and Practical Conflict," in R. Audi and W. J. Wainwright (eds.), Rationality, Religious Belief, and Moral Commitment: New Essays in the Philosophy of Religion. Ithaca, NY: Cornell University Press.
Solomon, R. C. (2002). Spirituality for the Skeptic: The Thoughtful Love of Life. Oxford: Oxford University Press.
Stump, E. (2002). Aquinas. London: Routledge.
Thompson, H. S. (2009). "Down and Out in Aspen," interview with J. Rose, in Ancient Gonzo Wisdom: Interviews with Hunter S. Thompson. London: Picador.
Volkov, A. I. (n.d.). "Dostoevsky Did Say It: A Response to David E. Cortesi," https://infidels.org/library/modern/andrei_volkov/dostoevsky.html [Accessed January 21, 2017].
Walzer, M. (1973). "Political Action: The Problem of Dirty Hands," Philosophy and Public Affairs, 2 (2), 160–180.
———. (2004). "Emergency Ethics," in Arguing About War. New Haven, CT: Yale University Press.
Wolf, S. (1982). "Moral Saints," Journal of Philosophy, 79 (8), 419–439.
———. (2012). Meaning in Life and Why It Matters. Princeton: Princeton University Press.
Zagzebski, L. T. (2004). Divine Motivation Theory. Cambridge: Cambridge University Press.
———. (2012). Epistemic Authority: A Theory of Trust, Authority and Autonomy in Belief. Oxford: Oxford University Press.
———. (2017). Exemplarist Moral Theory. Oxford: Oxford University Press.
Further Readings

William Alston, "Some Suggestions for Divine Command Theorists," in M. Beaty (ed.), Christian Theism and the Problems of Philosophy (Notre Dame, IN: University of Notre Dame Press, 1990) develops a version of the defense against the arbitrariness problem of divine commands somewhat similar to that discussed in this entry.

John Cottingham, The Spiritual Dimension (Cambridge: Cambridge University Press, 2005) provides a clear and wide-ranging discussion of the nature of religious conviction and its relation to different forms of knowledge and value. The author places particular weight upon the emotional and intellectual aspects of self-discovery. Chapters 7 and 8 are particularly concerned to explore the meaning of spirituality.

P. L. Quinn, Divine Commands and Moral Requirements (Oxford: Oxford University Press, 1978) erects a rigorous defense of the philosophical and theological coherence of divine command ethics. Much of the argument is highly formalized.

David Wiggins, "Truth, Invention, and the Meaning of Life" (Proceedings of the British Academy, 1977, 62, 331–378) gives a nonreligious realist account of the meaning of life, or its meaningfulness, in terms of an agent's relation to matters of intrinsic value. Reprinted in Geoffrey Sayre-McCord (ed.), Essays on Moral Realism (Ithaca and London: Cornell University Press, 1988).
Related Chapters

Chapter 1, The Quest for the Boundaries of Morality; Chapter 10, Ancient and Medieval Moral Epistemology; Chapter 25, Moral Expertise; Chapter 27, Teaching Virtue; Chapter 28, Decision Making Under Moral Uncertainty.
INDEX
Note: Page numbers in italics denote figures, those in bold denote tables.

Aarøe, L. 212
Abelard, P. 245, 246, 247
act-consequentialism 388–389, 395–397, 398, 399
Adams, R. 538
adaptations 176
adaptive behavior 175, 177, 178
adaptive function 175, 180–181
adaptive toolbox of moral heuristics 164–168; domain specificity 164–165, 166, 167; ecological rationality 165–166; psychological plausibility 166–168
affect psychology 337, 340
Agrippa's Trilemma 257
Albert the Great 230, 247
Alexander, L. 526
Alston, W. 385
altruism 93, 106, 322; excess 202–203; 'expectations of future help' hypothesis 72–73; 'good-mood' hypothesis 72; and kin selection 183–186, 190–191, 192–195, 195–196; nonhuman animals 4, 61, 62, 72–73, 73–74; reciprocal 74
amygdala 6, 87, 89, 92, 93, 145
analogy 478–479
ancestral domains of social interaction 177–178
ancient moral epistemology 230, 239, 240–245
Anderson, E. 446–447, 449, 450, 463
anger 144, 194, 205
animals: collective action 212–213; cruelty to 151; see also normative practice in animals
Annas, J. 256, 432
Anselm of Canterbury 246
anthropology 39–40
anthropomorphism 74–75
anti-intellectualism 433
anti-realism 290
apology 296
approval/disapproval 109, 310–311
Aquinas, T. 230, 240, 245–246, 247–248, 538, 539
Aristotelian character education 406, 495, 497, 498, 504
Aristotle 230, 240, 242–244, 255, 498; eudaimonia (happiness) 209, 230, 240, 242; function argument 240; Nicomachean Ethics 230, 240, 242–244, 247, 493, 497; practical reason 230, 242–244
artificial intelligence (AI) systems 39, 152, 440, 446
as-if modeling 159, 162–164, 168
askesis (self-discipline) 494, 498–499, 504
atheism 290–291
Audi, R. 368
Augustine, St. 244–245, 247, 540
authenticity 404, 472–473
authoritarian moral inquiry 449–450
authority 3, 17, 46, 47, 60, 61; nonhuman animals 62
authority-independence 23, 24, 25, 26–27, 29, 30, 31, 107
autonomy 319, 321, 404, 472; moral 263, 266, 279
Ayer, A. J. 294
Barbeyrac, J. 254
Barrett, H. C. 49–50
Bartolomeo de Medina 517
Bates, L. A. 73
Baumard, N. 209–210
Bedke, M. 305, 308–309, 311, 312
behavioral economics 10, 162–164, 202–203
Bekoff, M. 58, 61
belief aggregation function (BAF) 444
beliefs 2, 309–310; basically justified 380–385; coherence of 375–378; and memory 236, 381; and (moral) perception 236, 353, 355, 357, 381; see also group belief; moral beliefs
Beneke, F. E. 260
Bentham, J. 258, 259
Berger, D. 463
Bernard of Clairvaux 246–247
betrayal 60, 61, 65
"big mistake" hypothesis 203
Bjorkland, F. 334–335, 336, 337
Blackburn, S. 294
black liberation movement 462
Black Lives Matter Network 464–465
blame 12, 49, 510, 511
blameworthiness see culpability
Boyer, P. 179
brain 180–181; see also neuroscience of moral judgment
Brandt, R. 317
Brink, D. 413
broad affective system 129–132; and moral judgment 130–131; and rationality 129–130
Brown, M. 465
Buchtel, E. 22
Buddhism 498, 537
Burke, E. 256, 261
Byrne, R. W. 73
Caier, K. 21
Calhoun, C. 456
Campbell, R. 151
Camus, A. 547
capuchin monkeys 61
care/caring 17, 46, 47, 60, 61; and character and virtue education 405–406, 496, 497, 503; maternal 176; nonhuman animals 4, 62, 64, 68; see also altruism
Carnap, R. 294
categorical imperative 263, 265–266, 523, 534
categoricity 2
cetaceans 73, 75; care norms 64; mass stranding or beaching 66, 71; obedience norms 62; reciprocity norms 63; social responsibility norms 65; social structure and normative practice 69–70; solidarity norms 65, 66, 71; see also dolphins; humpback whales; orcas
Chalmers, D. 412
character strengths 497
character and virtue education 405–406, 493–507; Aristotelian character education 406, 495, 497, 498, 504; and askesis (self-discipline) 494, 498–499; caring 405–406, 496, 497, 503; core value/values and practices 497–498; Integrative Ethical Education (IEE) 405, 495–496, 497–498; Norman High School initiative 406, 494, 499–505; positive education 406, 495, 496–497, 498; social emotional learning (SEL) 405, 494–495, 497, 498, 503–504; see also moral education
chastity 46
cheaters 199; detection of 12, 198, 199–200, 200–201
cheating 46, 60, 61, 132, 147, 162; nonhuman animals 61, 63
children and infants 6–7, 8, 105–123; antisocial behavior 105, 106; empathetic responding 105, 106; evaluation of third parties' helping and hindering 111–117; externalization of norms 45; fairness expectations 7, 111, 117; intentions, sensitivity to 6, 7, 107, 108, 114; and moral/conventional (nonmoral) distinction 6, 108, 110; moral judgments 22–23, 24, 25, 95, 106–110; prosocial behavior 105–106, 109; and sharing 209–210; (socio)moral intuitions 7, 109–111, 117–118; statistical learning 133, 134–135
chimpanzees 58, 67–69, 73; care norms 64, 68; collective action 212–213; cooperation 63, 68; fairness, sense of 63, 68–69; obedience norms 61, 62, 69; reciprocity norms 63, 68; social responsibility norms 65; social structure and normative practice 67–68; solidarity norms 66, 69; ultimatum game studies 68–69
choice 10, 242, 244, 247, 248
Christianity 408, 537, 539, 542
Churchland, P. S. 431
Cicero 256
civil liberty 544
Clark, A. 434–435
classical theory of concepts 21, 30
codes of ethics see professional codes of ethics
cognitive science of morality (CSM) 5, 84, 85–86, 88–89, 91, 95; dual process model 5, 85, 88, 91, 92, 95; social intuitionist model 85, 88, 89, 91, 95; universal moral grammar (UMG) approach 5, 85, 88, 89, 90, 95
coherence 233, 236; normative 517, 518
Collaborative for Academic, Social, and Emotional Learning (CASEL) 494
collective action 212–216
collective intentionality 441
commonsense morality 258, 259, 275, 276
compassion 61
conceptual competence 369–370
conciliationists 276
Condorcet, N. de 450
confidentiality 489–491
confirmation bias 9
conflict: between-group 212, 213; intergroup 212–213
Confucianism 147, 320, 493, 498, 537
conscience 246, 248; freedom of 544
consciousness 58
conscious reasoning 140, 141–142, 330, 332–333, 334–335, 335–336, 338
consequentialism 158, 165, 209, 259, 546; act-consequentialism 388–389, 395–397, 398, 399; and punishment 522, 524–525, 530; rule-consequentialism 394, 396, 397, 398, 399
consequential moral luck 526–527, 533
consistency reasoning 151
consolation 61
constructivism 231, 232, 261, 282 – 284; Kantian 232, 264 – 266, 274, 278 – 281, 282; natural law 231, 262 – 267 contempt 144 contractualism 236, 260, 394, 397, 398 conventional (nonmoral) norms 2, 3, 41 – 42, 50, 313 cooperation 7, 10, 12, 44, 74, 110, 118, 163, 164, 195 – 216, 233, 255, 316; and ability to provide benefits 207 – 208; and behavioral economics 202 – 203; between unrelated individuals 12, 197 – 199, 321 – 322, 323 – 324; “big mistake” hypothesis 203; conditional 198, 202; decision rules 196 – 197, 198, 215; in groups 212 – 216; hunter-gatherer societies 12, 177, 203; nonhuman animals 61, 62, 63, 65, 68; and one-shot interactions 203, 204; partner choice models 12, 202, 205 – 206, 207, 208 – 209, 213 – 214; partner control models 12, 202, 203 – 204, 205, 214; and punishment threat 12, 205, 214 – 215; and repeated interactions 203 – 204; and reputation 205; unconditional 197, 198; and willingness to provide benefits 208 Cooper, B. 465 Cooper, N. 2 Cosmides, L. 199, 211 cost-benefit mechanisms 186 – 188 Creative Interventions 459, 460 culpability 407, 524, 525 – 527, 528 – 529, 530 culture: and moral development 107; and universality/variation of normative sense 3, 38 – 50 Cummins, R. 364, 365 Cushman, F. 7, 93 – 94, 126, 127 – 128 Damasio, A. R. 148, 333 – 334 Darley, J. 29 Darwin, C. 12, 175 Davis, A. 457, 458 – 459, 461, 464 decision rules 10, 178, 196 – 197, 198, 215 deference 469 – 470, 473 deleterious recessive genes 182 – 183 deliberation 91 – 94, 161, 162, 242 Delton, A. W. 10, 163, 204, 215 – 216 Denison, S. 133 deontological intuitions 7 – 8, 97, 141 deontological judgments 91, 92, 96, 209 deontological retributivism 524 – 528 deontology 158, 165, 178 – 179, 217, 331 Descartes, R. 254 desert in punishment 407 – 408, 522 – 536; amount of punishment 407, 526, 527, 529, 533, 534; and consequential moral luck 525 – 527, 533; culpability (blameworthiness) and 407, 524, 525 – 527, 528 – 529; “desert island” thought experiments 408, 524, 531 – 532, 533 – 534; equality of loss principle 524, 528; and harm caused 524, 526, 527, 528, 530; intuitive judgments 522 – 523, 532 – 533; and limits on punishment 523, 526, 527, 530 – 531; obligation to punish 407, 408, 523, 524,
527, 528, 530, 531 – 532, 533 – 534; retributivist approach to 407 – 408, 522 – 534 desire 241 Devitt, M. 25 de Waal, F. B. M. 4, 58, 68 Dewey, J. 493 dianoetic virtues 242 dictator games (DGs) 206, 207 direct action 463 disagreement see moral disagreement disgust 9, 144, 146 – 147, 149, 191; and sibling incest 191 – 192 distributed cognition 441 divine commands, interpretation of 541 – 542 divine command theory 408, 538 – 540, 543, 548 – 549 divine motivations theory 538, 539 divine punishment 545 doctrine of double effect (DDE) 361, 511, 542 dolphins: care norms 64; collective action 212; obedience norms 62; reciprocity norms 63; social responsibility norms 65; social structures 69 – 70; solidarity norms 66 domain generality 89 – 90 domain specificity 10, 89 – 90, 164 – 165, 166, 167 Doris, J. 31 dorsolateral prefrontal cortex (dlPFC) 6, 93 – 94, 141 dorsomedial prefrontal cortex (dmPFC) 89 Dostoevsky, F. 537 Dotson, K. 455 Dreier, J. 324 dual-process theories 85, 95, 129; and moral intuitions 95, 124 – 125, 161 – 162, 370 – 371; and moral judgment 7, 8 – 9, 10, 88, 91, 92, 124 – 125 dumbfounding (moral dumbfounding) 142, 145, 191 – 192, 334, 336, 339 Duns Scotus, J. 249 Durban, J. W. 73 Durkheim, E. 39 Durwin, A. J. 45 Dwyer, S. 88 ecological rationality 165 – 166 Edel, M. and Edel, A. 40, 47 – 48 education see character and virtue education; moral education EEG studies 92, 94 Einstein, A. 546 Eisenbruch, A. 207, 208 eliminativism 290 embedding problem 295 – 296 emotion 7, 9, 89, 90, 175; and intuition 150; and moral judgment 5 – 6, 88, 91 – 94, 109, 139, 140, 144 – 147, 149 – 151, 152, 298, 334, 336, 340; perception of 354 – 355; and reasoning 147 – 149 emotion learning 129 – 132, 136 emotion recognition, in nonhuman animals 62, 64 empathy 144; in children 105, 106; in nonhuman animals 58, 61, 68
Enoch, D. 297 entitlement 199, 200, 219 – 220n15 Epictetus 244 Epicurus 240 epistemic justification 85 epistemic probabilities 406, 510 – 512 epistemic resources 455, 456 epistemological frameworks 454 – 456; dominant 404, 455 – 456; and epistemic resources 455, 456; and instituted social imaginaries 455, 456; and situatedness 454 – 455 equality 232, 279 error 274, 275, 277 – 278 error theory 232 – 233, 289, 290 – 293, 295, 296, 299, 304 – 315, 395 ethical egoism 72, 379 – 380 ethical virtues 242 – 243 ethics 231, 255; normative 306, 307 ethics committees 448, 449 eudaimonia 209, 230, 240, 242, 247 Euthyphro question 260, 261, 408, 538, 548 Everett, J. 209 everyday moral thought 389 – 390; and moral principles 390 – 392, 398; and moral theory 395 – 398 evidential probability 509 evoked culture 211 – 212 evolution 96 – 97, 144 – 145 evolutionary game theory 196 – 199, 202 experimental philosophy 22 expertise: and trust 469 – 471; see also group moral expertise; moral expertise explaining away 159, 160 – 162 explicit norm guidance 4 expressivism 233, 418 extended epistemology 441 facts: normative 233, 312 – 313; see also moral facts fairness 17, 46, 47, 61, 178, 206 – 207, 208; children’s sense of 7, 111, 117; nonhuman animals’ sense of 61, 63, 68 – 69; and partner choice 206, 207 fair play 61 fast-and-frugal heuristics framework 157 – 158, 162, 164 – 168 fear, and moral behavior 545 FeldmanHall, O. 93 Feldman, R. 376, 377 feminist standpoint theory 461 Fessler, D. 27 fictionalism 297, 305 – 306, 309 fidelity 177 Fodor, J. 21 footbridge dilemma 124, 128, 141, 209, 234 Foot, P. 361 forgiveness 61 foundationalism 236, 257, 300, 356, 376, 380 – 385 foundational pluralism 394 – 395, 397 – 398, 399 Fox, R. 192
Frankena, W. 16, 18 – 19, 20 freedom 232, 248, 249, 262, 266, 278 – 279, 472; of conscience 544; moral 263, 266; religious 544; see also liberty free riders 12, 177, 213, 214, 215; categorizing 215, 216 free will 239, 292 – 293, 472 Frege, G. 260 Frege-Geach problem 295 – 296 friendship 75, 258 Fries, J. F. 260 fronto-temporal dementia (FTD) 93 Frye, M. 456, 457, 461 fundamentalism, religious 541 – 542 fundamentalist view of morality 41 Gabennesch, H. 45 Gaissmaier, W. 157 game theory see evolutionary game theory Garcia, V. 133 Geach, P. 544, 545; see also Frege-Geach problem gender, and moral development 107 generosity 12, 203 – 204, 206, 208 genetic relatedness 11, 182 – 183, 190 – 191; ancestrally reliable cues to 189; and cumulative duration of coresidence 11, 189, 190, 191, 192; degree of relatedness 182 – 183, 184; and maternal perinatal association (MPA) 11, 189, 190, 192 genetics 40 Gettier, E. 2 Gewirth, A. 2, 23 Gibbard, A. 294 Gigerenzer, G. 9, 157, 160 Gilbert, M. 441 – 442 God 240, 245, 249, 408; belief in 408; love of 545, 546; nature of 408, 538, 539 Goldberg, S. 444 Goldman, A. 444 Good Samaritan 351 Goodwin, G. 29 Graham, J. 46 – 47 Greene, J. 5 – 6, 92, 96, 126, 128, 141, 333 – 334; dual process model 7, 85, 88, 89, 91, 124 Gregory of Rimini 231, 250 grief 61, 64 Grotius, H. 231, 250, 254, 257 Grouchy, S. de 450 group belief 402 – 403, 441 – 445; divergence thesis 442, 443; justification of 402 – 403, 444; process reliabilist approach to 402 – 403, 444 – 445 group cooperation 212 – 216 group judgment aggregation procedures 402, 442 – 444, 450; discursive dilemma 442, 443; universal domain, anonymity and systematicity conditions 443, 450 group knowledge 440 – 445; collective intentionality approach 441; extended epistemology approach 441
group moral expertise 447 – 450 group moral knowledge 445 – 447; and moral principles 446; process reliabilist approach to 402 – 403, 445 – 446 group moral testimony 447 – 448, 449, 451 group selection 12, 202 – 203, 205, 214 groupthink 445, 450 Guerrero, A. 510, 517 guilt 69, 194, 387, 388, 531 Habermas, J. 261 habit learning 7, 125 – 126 Haidt, J. 3, 17 – 18, 26, 28, 31, 130, 150; dual-process (two-systems) approach 5, 9, 10, 124 – 125; on moral dumbfounding 142, 191 – 192; moral foundations theory 46, 60, 61, 69, 89; social intuitionist approach 85, 88, 89, 91, 124 – 125, 160 – 161, 235, 334 – 335, 336, 337 Hamilton, W. D. 184 Hamilton’s rule 185, 186, 187, 218n7 happiness 209, 230, 240, 243, 247; see also eudaimonia Harding, S. 461 Hare, R. M. 2, 15, 18, 19, 58, 385 – 386n6 harm 2, 3, 17, 23, 24, 25, 28, 46, 47, 60, 61; and desert in punishment 524, 526, 527, 528, 530; intended versus unintended 49, 92, 96, 128, 132, 143, 524, 525, 526 Harman, G. 88, 324 Hart, H. L. A. 530 Hauser, M. D. 58, 88 Hedden, B. 514 Hegel, G. W. F. 231, 265, 266 – 267 Heiner, B. 457 – 458, 460 – 461 Henry of Ghent 248 – 249 Heiphetz, L. 45 Heraclitus 239 heteronomy 279, 280, 282, 472 heuristics 9, 141, 145; see also adaptive toolbox of moral heuristics; fast-and-frugal heuristics framework Hill, K. 210 historicist view of morality 41 Hobbes, T. 255, 260, 262 Holyoak, K. J. 45 homo economicus models 202 homosexuality 150 – 151 honesty 3, 61, 206 honor culture 317 – 318 Horgan, T. 235, 338 – 339, 340 Huemer, M. 365 Hume, D. 7, 106, 109, 117, 139, 144, 231, 255, 258, 261, 267, 292, 310; is-ought distinction (Hume’s Law) 1, 259 – 260; moral sentiments 260; theory of justice 262 – 263 humpback whales: care/altruism norms 64, 73 – 74; social responsibility norms 65; solidarity norms 66 hunter-gatherers 11, 177 – 178, 181, 189 – 190; collective action 212; cooperation 12, 177, 203; sharing rules 210, 211
Husak, D. 529 Husserl, E. 260 inbreeding depression 182 – 183 incest 8, 11, 131 – 132, 183; explicit prohibitions against 192; orca avoidance of 71; sibling 11, 124 – 125, 130, 131, 150, 190, 191 – 192 indignation 387, 388 infants see children and infants inference 175, 178, 179, 364; perception and 355 – 356, 357 informed consent 488 – 489 ingroups 7, 17, 46, 47, 60, 117, 177, 213 injustice, moral perception of 349, 350, 352 innateness 95 – 97 Institute for the Study of Human Flourishing, University of Oklahoma 500, 504 instrumentalism 388 Integrative Ethical Education (IEE) 405, 495 – 496, 497 – 498 integrity 404, 473 intellectualists 250 intellectual perception 369 Intellectual Virtues Academy (IVA), Long Beach, California 503 intentions 3, 7, 48 – 50, 92, 95, 96, 128, 132, 143; children’s/infants’ sensitivity to 6, 7, 107, 108, 114, 116; and desert in punishment 524, 525, 526; and outcomes 6, 108, 114 internalism-externalism debate 364 intuitionism 546; rational 231 – 232, 274, 279, 280, 282; social 85, 88, 89, 91, 95, 124 – 125, 160 – 161, 235, 298, 334 – 337, 338 intuitions: and emotion 150; as seeming states 234, 362; see also moral intuitions irreducible normativity 233, 312 – 314 Islam 408, 537, 539, 542 is-ought problem (Hume’s Law) 1, 260 Iyer, R. 60, 61, 69 James, W. 547 Janicki, M. 61, 69 Jensen, K. 68 Jones, K. 476 Joseph, C. 17, 18, 89 journalism 484, 485, 486 Joyce, R. 17, 305 Judaism 408, 537, 539 jus gentium 255, 257 justice 2, 3, 23, 24, 25, 46, 60, 61, 231, 255, 256; Hume 262 – 263; Rawls 279, 281; testimonial 475; see also injustice justification 24, 25, 28, 86, 257, 261, 280, 290, 333, 339, 342; coherence theory of 236, 257, 375 – 380; epistemic 85; foundationalist accounts of 236, 257, 376, 380 – 385; of group belief 402 – 403, 444; intuitive 364; Kant on 263 – 265; noninferential 236, 364, 384; Rawls on 281 – 282
justification aggregation function (JAF) 444 justification skepticism 232, 289, 297 – 300 justified true belief (JTB) analysis 232, 289, 290, 293, 294, 297, 441 just war theory 446 Kalderon, M. 297 Kant, I. 2, 106, 139, 260, 261, 263 – 266, 472, 525, 545; categorical imperative 263, 265 – 266; contradiction in conception test 264; on justification 263 – 265; retributivist thought 523, 524, 531, 533, 534; universalization tests 263 – 264, 266 Kaplan, H. 210, 211 – 212 Kelly, D. 27 Kierkegaard, S. 539 killing 49, 541, 542; see also footbridge dilemma; trolley problem kin 181 – 182 kin detection system 11, 188 – 191 kin selection 11; and altruism 183 – 186, 190 – 191, 192 – 195, 195 – 196 kinship estimator 111, 190 kinship index 189, 190, 192, 193 Kitcher, P. 58 know-how 402, 433 – 436 Kohlberg, L. 6, 9, 22, 107, 108, 235, 333, 495 Kornblith, H. 25 Korsgaard, C. 4, 58, 73, 274, 313 Krasnow, M. M. 205 Krebs, D. L. 61, 69 Kristjánsson, K. 497 Kumar, V. 28 – 30, 31, 151 Kunda, Z. 148 Kushnir, T. 133 Ladd, J. 16 language(s): moral 233, 325, 326; and moral/nonmoral norm distinction 42 lateral frontal cortex 87 law 266 – 267, 487; moral 231, 545, 546; of nature see natural law; Roman 267 legal profession 404 – 405, 484, 485 – 486, 488, 489 – 491 Leibniz, G. W. 255 Levine, S. 22 lex talionis 524 liberalism 267 liberation movements 403 – 404, 454 – 468; moral epistemology outside 462 – 465; moral epistemology within 456 – 462 liberty 46, 61, 66; see also freedom Lieberman, D. 11, 190, 192 limbic system 91 Lim, J. 209 Lipps, T. 260 List, C. 442, 443, 444 Lobel, T. 192 Locke, J. 257 – 258, 260, 544, 545
Lockhart, T. 513 logic 165 Longino, H. 448 – 449, 450 loss, response to, in animals 62, 64, 68 love 545 – 546; divine 545, 546 loyalty 3, 17, 46, 60, 61; nonhuman animals 65 MacConville, R. 496 Machery, E. 41 MacIntyre, A. 15, 20 Mackie, J. L. 2, 291 – 293, 305, 307, 316 – 317 “Making Caring Common” project 405 – 406, 495, 496, 502 Malinowski, B. 39 manipulation studies 146 marriage 46 Marsh, A. A. 93 maternal care 176 maternal perinatal association (MPA) 11, 189, 190, 192 meaning of life 408, 543, 546 – 548, 549 medical profession 404, 484, 485, 486, 488 – 489 medieval moral epistemology 230 – 231, 239, 240, 245 – 250 memory, and belief 236, 381 Mencius 147, 320 mens rea 529, 532, 534 metacognition 58, 73 metaethical data 237, 409, 413 – 422; collection 414, 416, 419; dialectical conception of 416; epistemic conception of 419; as inquiry-constraining 413 – 414, 416, 418, 419; metaphysical conception of 416; method-ladenness of 415; neutrality 414, 416, 417, 419; psycho-linguistic conception of 417 – 419; as starting points 413, 416; and subject matter of metaethics 417 – 419; theory-ladenness of 414 – 415 metaethical methods 409, 411 – 413, 418; analysis 412, 413; argument 412, 413; parsimony 412, 413; reflective equilibrium 412, 413 metaethics 2, 42, 230, 236 – 237; goals 409, 410 – 411; outputs (theories or views) 409 – 411; see also metaethical data; metaethical methods Mikhail, J. 5, 48 – 49, 85, 88 Mill, J. S. 236, 259, 267 model-based reinforcement learning 7, 96, 125, 126, 127, 129 model-free reinforcement learning 7, 96, 126 – 129, 136; and moral judgment 127 – 128, 129; and rationality 126 – 127 modern moral epistemology 231, 254 – 273 Montaigne, M. 46, 254, 257 Montesquieu 266 Moody-Adams, M. 324 Moore, G. E. 379, 546 Moore, M. 527, 529, 530 – 531, 533 moral agency 57, 58, 333, 404 moral ambivalence 321 moral anti-realism 290
moral autonomy 263, 266, 279 moral beliefs 2, 232, 233, 283, 293, 306, 308 – 309; group 402 – 403; justification of see justification; as projections of attitudes of approval/disapproval 310 – 311 moral certitude 311 – 312 moral cognition 2 – 3, 8 – 9, 22 – 23 moral communities 435 – 436, 447 moral competence 88 moral conservationism 305, 306, 307, 308 moral/conventional distinction 3, 6, 41 – 42, 45, 50, 108, 117 moral/conventional task studies 24, 25 – 28, 29 – 30 moral development 6 – 7, 22 – 23, 105 – 123; and culture 107; and gender 107; see also children and infants moral disagreement 38 – 39, 276 – 277, 281 – 282, 297, 299, 316 – 317, 318 – 319, 323 – 324, 326, 365 – 366 moral division of labor 482, 483 moral dumbfounding 142, 145, 191 – 192, 334, 336, 339 moral education 39, 266; see also character and virtue education moral expertise 404, 469 – 481; see also group moral expertise; trusting moral testimony moral facts 232 – 233, 279, 280, 293, 299, 304, 307, 310 – 311, 540; irreducible normativity 313 – 314 Moral Foundations Questionnaire 47 moral foundations theory 46 – 47, 60 – 61, 69, 75, 89 moral freedom 263, 266 moral importance 311 – 312 moral intuitions 10, 12, 89, 90, 94, 152, 159, 175, 217, 234, 257, 360 – 374; a-theoretical characterizations of 362 – 363; automatic 88, 141, 142; calibration of 364 – 365; children’s 7, 109 – 111, 117 – 118; and conceptual competence 369 – 370; deontological 7 – 8, 97, 141; and desert in punishment 522, 532 – 533; and disagreement 365 – 366; distorting influences 367 – 368; and dual-process theory 95, 124 – 125, 161 – 162, 370 – 371; and experimental manipulation 366 – 367; and explanation 363 – 364; as insights into necessary moral facts or truths 307, 308; and intellectual perception 369; and self-evidence theory 368 – 369; as unreliable 365; utilitarian 141 – 142 morality 17, 18 – 19, 20, 21, 305, 307, 387; commonsense 258, 259, 275, 276; fundamentalist view of 41; historicist view of 41; natural 408, 541; and statistical learning 134 – 135 moral judgments 2, 5, 6, 9, 10, 16, 17, 18 – 32, 164, 174 – 175, 257, 289 – 290, 329 – 346; as attitudes of approval/disapproval 311; authority independence 23, 24, 25, 26 – 27, 29, 30, 31; as beliefs 311 – 312; and broad affective system 130 – 131; categoricalness of 20, 23; children’s development of 22 – 23, 24, 25, 95, 106 – 110; differences in people’s concept of 22, 30 – 31;
dual-process account of 7, 8 – 9, 10, 88, 91, 92, 124 – 125; and emotion 5 – 6, 88, 91 – 94, 109, 139, 140, 149 – 151, 152, 298, 334, 336, 340; epistemic profile of 308 – 311; evolutionary sources of 96 – 97, 144 – 145; first person/third person distinction 337; and intention 128; and interplay of reasoning and emotion 140, 149 – 151, 152; justification of see justification; and model-free learning 127 – 128, 129; and moral principles 330 – 331, 333, 335, 338 – 339, 340, 341, 342; and moral reasons 331, 333, 335, 337, 338, 339, 340, 341; morphological rationalist view of 235, 331, 338 – 342; and motivation 295; as a natural kind 25, 26 – 30, 31; noncognitivist view of 293 – 294, 295 – 296, 297; post hoc confabulation 332, 335 – 336, 337, 339, 341; and reasoning 5 – 6, 91 – 94, 117, 139, 140, 141 – 144, 149 – 151, 152, 330, 332 – 333, 334 – 335, 335 – 336, 338; reliability of 84, 85, 86 – 87, 90; and seriousness of transgressions 29, 30, 31; social intuitionist view of 85, 88, 89, 91, 95, 124 – 125, 160 – 161, 235, 298, 334 – 337, 338; traditional rationalist view of 235, 332 – 334; Turiel’s account of 22 – 26; universalizability of 20, 23, 24, 25, 26, 29, 30, 31, 95; see also neuroscience of moral judgment moral language 233, 325, 326 moral laws 231, 545, 546 moral learning 7 – 8, 85, 95 – 97, 124 – 138; emotion learning 129 – 132, 136; model-based reinforcement learning 7, 96, 125, 126, 127, 129; model-free reinforcement learning 7, 96, 126 – 129, 136; statistical learning 132 – 136 moral luck 526 – 527, 533 moral nativism 85, 95 – 97 moral naturalism 293 moral negotiationism 306 moral norms 41 – 42, 50, 61, 299 moral objectivism 28 – 29, 42, 449, 450 moral perception 234 – 235, 347 – 359; and action 348, 357; and belief 353, 355, 357; and emotion 354 – 355, 357; and inference 355 – 356, 357; integration theory of 235, 350 – 351; and moral knowledge 352 – 353, 357; multi-level character of 351 – 352; phenomenological elements in 353 – 355; and rationalism 357; and realism 356; representational character of 350 – 353; as seeing 349 – 350 moral pluralism 179, 212, 316, 323 – 324 moral principles 330 – 331, 333, 335, 338 – 339, 340, 341, 342; and everyday moral thought 390 – 392, 398; and group moral knowledge 446 moral progress 428, 430, 431 – 432, 446, 543 – 544 moral properties 232, 234 – 235, 290 – 291, 304, 356, 357; and noncognitivism 294 – 295; objective authority 292, 293; perception of 347 – 348, 349, 350, 351, 352, 353 moral realism 90, 262, 265, 290, 297, 308, 321, 356
moral reasons 331, 333, 335, 337, 338, 339, 340, 341, 474 moral responsibility 529 moral rules 2, 3, 16, 18, 21, 249 – 250; as behavior guiding 20; as prescriptive 19 – 20; universalizability of 19, 23 moral sense 17 moral sentiments 179, 191, 259 – 260, 307 moral testimony see trusting moral testimony moral theory 387 – 388, 394 – 395, 399; and everyday moral thought 395 – 398 moral thought 233, 304, 305 – 306, 387 – 388, 399; see also everyday moral thought moral truth 307, 308, 408, 543 – 544, 549 moral uncertainty, decision making under 406 – 407, 508 – 521; and higher-order normative uncertainty 516 – 519; and inter-theoretic value comparisons 406 – 407, 512 – 516, 519; and probability 406, 509 – 512, 519 moral understanding 404, 474, 476, 477, 479 moral variation 295 moral worth 404, 472, 474, 479 morphological rationalism 235, 331, 338 – 342 motivated reasoning 143 – 144, 147 – 148 motivation 108, 233, 295, 408, 529, 543, 544 – 546, 549 Nagel, T. 525 – 526, 547 naïve normativity 59 Narvaez, D. 336 – 337, 495 naturalistic approaches 321 – 323 natural law 239, 244, 245 – 246, 249, 255 – 256, 257, 262, 263, 265, 541 natural law constructivism 231, 262 – 267 natural morality 408, 541 natural rights 258 natural selection 12, 175, 176, 178, 180, 181, 183, 198, 206, 217n2 negligence 524, 525, 526, 529 neocortex 91 Neoplatonism 240 neuroscience of moral judgment 5 – 6, 84 – 104, 140, 141, 143, 145 – 146, 148, 298, 333 – 334; domain-specificity and domain-general capacities 89 – 91; EEG studies 92, 94; moral nativism and moral learning 95 – 97; neural processes 85 – 88; neuroimaging studies 89, 94; and reason–emotion dichotomy 5 – 6, 91 – 94, 145 – 146; and reliability of judgments 84, 86 – 87 Nichols, S. 27, 29 Nietzsche, F. 21, 256 nihilism 232 – 233, 304; see also error theory Nissan, M. 26 Nolan, D. 305 noncognitivism 232, 233, 289, 293 – 297, 307, 309, 311, 312 nonhuman animals see animals; normative practice in animals
Norman High School (Oklahoma, USA) 406, 494, 499 – 505 normative cognition 3, 5, 38 normative coherence 517, 518 normative concepts and principles 48 – 50 normative ethics 306, 307 normative facts 233, 312 – 313 normative practice in animals 3 – 4, 57 – 83; caring/altruism 4, 62, 64, 68, 72 – 73, 73 – 74; obedience 3 – 4, 61, 62, 69; reciprocity 4, 58, 61 – 62, 63, 65, 68; social responsibility 4, 62, 65, 69; solidarity 4, 65, 66, 69, 71; see also cetaceans; chimpanzees normative sense, universality/variation across cultures 3, 38 – 50 normative uncertainty 516 – 519 normativity: naïve 59; subjective 518 normativizing 46 – 48 norms 3 – 4, 41 – 42, 158 – 159; dark side of 60; and decision making under moral uncertainty 516 – 519; enforcement of 48; externalization of 3, 42 – 46, 50; moral 41 – 42, 50, 61, 299; nonmoral (conventional) 2, 3, 41 – 42, 50, 313; reification of 45; tightness/looseness 48, 50 Nozick, R. 444 Nucci, L. 26 obedience 60, 61; nonhuman animals 3 – 4, 61, 62, 69 objectivity 1, 448 – 449; moral 28 – 29, 42, 449, 450; in scientific inquiry 448, 449 obligation 43, 44, 199, 200, 219 – 220n15, 260 – 261; and desert in punishment 407, 408, 523, 524, 527, 528, 530, 531 – 532, 533 – 534; and religion 408, 543, 546, 549 Ockham, William of 249 O’Neill, P. 366 – 367 orcas: care norms 64; incest avoidance 71; obedience norms 62; reciprocity norms 63; social responsibility norms 65; social structures 70; solidarity norms 66 Ord, T. 514 Origen 244, 540 ought-thought 3, 4, 58 – 59, 60, 71, 75 outgroups 117, 177, 213 Parfit, D. 90, 312 partner choice 12, 202, 205 – 206, 208 – 209, 213 – 214; and fairness 206, 207; and moral action versus moral character 208 – 209 partner control 12, 202, 203 – 204, 205, 214 patriotism 3 Paul, St. 541 perception 347 – 348; and action 348, 357; and belief 236, 353, 355, 357, 381; of emotion 354 – 355; intellectual 369; and seeming states 362; see also moral perception Perfors, A. 134 Petersen, M. B. 212 Peterson, C. 496
Petrinovich, L. 366 – 367 phenomenology of perception 353 – 355 phronesis 6, 242, 497 Piaget, J. 22, 106 – 107, 108 Pierce, J. 58, 61 Pitman, R. L. 73 – 74 Pius IX 544 Plato 240, 241, 242, 243, 256; Protagoras 241, 493 pleasure 240, 241 pluralism, moral 179, 212, 316, 323 – 324; see also foundational pluralism Pohlhaus Jr., G. 455 polygyny 46 positive education 406, 495, 496 – 497, 498 posterior cingulate cortex (PCC) 89 poverty-of-the-stimulus 95 practical reason 230, 239, 242 – 244, 246, 247 – 248 practical syllogism 243 practical wisdom (phronesis) 6, 242, 497 praxis 242 precuneus 89 Price, R. 313 Principle of Equity among Moral Theories (PEMT) 513 Prinz, J. 46, 324, 335 prison abolition movement 403, 457 – 465 prisoner’s dilemma (PD) 197, 202 Prichard, H. A. 257 privacy 3, 47 probabilism 246, 517 probability 165; and decision making under moral uncertainty 406, 509 – 512, 519; epistemic 406, 510 – 512; evidential 509; subjective 511 process reliabilism 402 – 403, 444 – 445, 445 – 446 professional codes of ethics 404 – 405, 482 – 492; journalism 484, 485, 486; legal profession 404 – 405, 484, 485 – 486, 488, 489 – 491; medical profession 404, 484, 485, 486, 488 – 489; reasons for 483 – 485; rules and standards 405, 487 – 488, 489, 491; special norms and reasons for obeying them 485 – 486 professional role morality 482 – 483 projectivism 310 – 311 promise keeping/breaking 397 propositional knowledge 402, 433, 434 – 435, 470 prudence 242 – 244 prudery 46 psychological plausibility 166 – 168 psychologism 260 – 261 psychopathy 9, 93, 96, 106, 145 public goods 213 punishment 12, 22, 49, 105, 511; consequentialist approach to 522, 524 – 525, 530; cooperation and threat of 12, 205, 214 – 215; divine 545; fear of 545; of innocent people 530; nonhuman animals 62, 68 – 69; and rights 525; see also desert in punishment; prison abolition movement
purity 3, 17, 46, 47, 49, 60, 146 – 147 Putnam, H. 25 Pyrrhonian dilemma of the criterion 231, 257, 261, 265 Quinn, P. L. 539 – 540 Qur’an 542 racism/racial biases 40, 142 Rae, T. 496 Railton, P. 8, 129 – 131, 337 Rai, T. S. 45 rational intuitionism 231 – 232, 274, 279, 280, 282 rationalism 293; and moral perception 357; morphological 235, 331, 338 – 342; traditional 235, 332 – 334 rationality: and broad affective system 129 – 130; and model-free learning 126 – 127; and statistical learning 134; unbounded 158 – 160 Rawls, J. 88, 231, 232, 261, 274, 278 – 282, 307, 412, 413 realism see moral realism reason/reasoning 7, 245 – 247, 248, 249; and action 240, 241, 243 – 244, 245; conscious 140, 141 – 142, 330, 332 – 333, 334 – 335, 335 – 336, 338; consistency 151; and emotion 147 – 149; and moral judgment 5 – 6, 91 – 94, 117, 139, 140, 141 – 144, 149 – 151, 152, 330, 332 – 333, 334 – 335, 335 – 336, 338; motivated 143 – 144, 147 – 148; practical 230, 239, 242 – 244, 246, 247 – 248; right 255; unconscious 140, 142 – 144, 145, 146, 340; and will 239, 247 reasons, moral 331, 333, 335, 337, 338, 339, 340, 341, 474 reciprocity 17, 46, 60, 61, 321, 322, 323; direct 203; in hunter-gatherer societies 177; in nonhuman animals 4, 58, 61 – 62, 63, 65, 68; strong 202 – 203, 205 reflective equilibrium 150 – 151, 412, 413 regress argument 300, 380 – 381 Reid, T. 254, 258, 260 reification of norms 45 reinforcement learning 95 – 96, 125 – 126; model-based 7, 96, 125, 126, 127, 129; model-free 7, 96, 126 – 129, 136 relatedness see genetic relatedness relationship-oriented moralities 319 – 321, 324; see also cooperation relationship skills 494 – 495 relativism 28 – 29, 233 – 234, 316, 318, 322, 323 – 324, 325 – 326 religion 179, 408, 537 – 551; and concept of moral judgment 22, 30 – 31; divine command theory 408, 538 – 540, 543, 548 – 549; and morality, dominant and dismissive attitudes to 537, 543, 548; and moral motivation 408, 543, 544 – 546, 549; and moral obligation 408, 543, 546, 549; and moral truths 408, 543 – 544, 549; and natural morality
408, 541; and spirituality and the meaning of life 408, 543, 546 – 548, 549; see also Buddhism; Christianity; Confucianism; Islam; Roman Catholicism religious freedom 544 religious fundamentalism 541 – 542 religious testimony 548 Rendell, L. 71 reproduction 10, 164 – 165, 175 – 176, 177, 180, 182 – 183, 187 republicanism 266 – 267 reputation 205 resentment 387, 388 respect 3, 17, 46, 60, 61, 428 – 429, 430, 431, 435 responsible decision making 495 retributivism 407 – 408, 522 – 534 revenge 60 rewards 95, 105 right reason 255 rights 2, 3, 23, 24, 25, 319; natural 258; and punishment 525 right temporoparietal junction (rTPJ) 6, 92, 93, 143 Robert of Melun 247 Robinson, P. 529, 532 – 533 role-reversal question 393, 398 Roman Catholicism 498, 541, 544 Roman law 267 Ross, J. 514 – 515 Ross, W. D. 236, 368, 382, 383, 524 – 525, 530 Rousseau, J.-J. 231, 260, 263, 267, 493 Rowlands, M. 58 rule-consequentialism 394, 396, 397, 398, 399 rules 2, 17 – 18; professional codes of ethics 405, 487, 488, 489, 491; see also moral rules Russell, B. 382 Ryle, G. 433 sanctity 3, 17, 46, 60, 66 Sauer, H. 335, 338 Schmidt, M. F. 45 scientific inquiry, objectivity in 448, 449 seeming states 234, 362 selection 12, 175 – 176; group 12, 202 – 203, 205, 214; pressures 176, 177, 180, 182 – 183; see also natural selection self-awareness 494 self-control 161 self-deception 9 self-discipline (askesis) 494, 498 – 499, 504 self-evidence 231 – 232, 258, 275, 368 – 369 self-interest 10, 115 – 116, 148, 162, 163 self-management 494 self-sacrifice 60; nonhuman animals 65, 66 Seligman, M. E. P. 496 sentimentalism 139, 140, 144, 147 Sextus Empiricus 254, 257, 258, 300 Shafer-Landau, R. 306, 368 Shakur, A. 461 – 462
shame 194 sharing 61, 177, 178; cultural transmission explanation 211; equality versus equity 209 – 210; evoked culture explanation 211 – 212; hunter-gatherers 210, 211; and luck versus effort perceptions 209 – 212; reciprocal 211 – 212 Shweder, R. 60 sibling altruism 184 – 186, 190 – 191, 192 – 195 sibling incest 11, 124 – 125, 130, 131, 150, 190; disgust and 191 – 192 siblings, sexual aversion toward 190 – 191 Sidgwick, H. 231 – 232, 255, 274 – 278, 279, 280, 283, 284, 307 sin 239 Singer, P. 151, 479 Sinnott-Armstrong, W. 298, 365, 367 situatedness 454 – 455 situational constraint 48 skepticism 232, 277, 283, 284; see also error theory; justification skepticism; noncognitivism slavery 447, 450, 464, 542 – 543, 544 Smith, A. 7, 109, 117, 139, 144, 260 smoking 149 social awareness 494 social contract theories 199, 200 – 201, 260, 262; see also contractualism social emotional learning (SEL) 405, 494 – 495, 497, 498, 503 – 504 social exchange 198; reasoning about 199 – 200 social imaginaries, instituted 455, 456 social interaction: ancestral domains of 177 – 178; multiple evolved systems regulating 178 social intuitionism 85, 88, 89, 91, 95, 124 – 125, 160 – 161, 235, 298, 334 – 337, 338 social moral domain theory 41 – 42 social practices and moral knowledge 401 – 402, 427 – 439; divergences 428, 429 – 431; and opportunities for moral growth 428, 431 – 432 social responsibility 4, 61; nonhuman animals 4, 62, 65, 69 social welfare 212 sociopathy 148 Socrates 239, 241, 242, 259, 493, 537, 538 solidarity norms 61; nonhuman animals 4, 65, 66, 69, 71 Solomon, R. 547 somatic markers 148 soul 241, 242 spirituality 408, 546 – 548, 549 Sprigge, T. 20 standards, professional codes of ethics 405, 487 – 488, 489, 491 standpoints 461 – 462 Stanford, P. K. 42, 43, 44, 45 state of nature 257, 262 statistical learning 132 – 136; and morality 134 – 135; and rationality 134 stealing 132
Stevenson, C. 294 Stoics 240, 244, 248, 255, 498 Street, S. 90, 232, 274, 282 – 284 subjective normativity 518 subjective probabilities 511 sympathy 105, 144; in nonhuman animals 58, 61 system 1 and system 2 processes 8 – 9, 88, 124, 125, 370 – 371 Tan, J. H. 167 – 168 Taylor, P. 2, 17 teleology 255 – 256 temporal pole 89 temporoparietal junction (TPJ) 89, 93 Tenenbaum, J. B. 133 testimonial justice 475 testimony: religious 548; see also trusting moral testimony theory of mind 89, 90 Thompson, H. S. 548 Thomson, J. 361, 478 – 479 Timmons, M. 235, 338 – 339, 340 tit-for-tat (TFT) strategies 198, 202, 203 Tomasello, M. 42, 43, 44, 68, 74 Tooby, J. 199, 211 Toulmin, S. 21 traditional rationalism (TR) 235, 332 – 334 Trivers, R. 193 trolley problem 9, 136, 141, 151, 209, 234, 361, 366 – 367 trust 61, 205, 208, 209; and expertise 469 – 471 trusting moral testimony 404, 471 – 480; alternatives to 477 – 479; authenticity problem 404, 472 – 473; autonomy problem 404, 472; credentials problem 404, 471, 475, 478; credibility problem 404, 471 – 472, 475; integrity problem 404, 473; and moral understanding 404, 474, 476, 477, 479; moral worth problem 404, 472, 474, 479; scope and limits of 475 – 477 Turiel, E. 2, 22 – 26, 31, 41 two-systems model see dual-process theories Tyson, S. 457 – 458, 460 – 461
Ulpian 265 ultimatum games (UGs) 68 – 69, 206 – 208, 210 unbounded rationality 158 – 160 uncertainty see moral uncertainty unconscious moral reasoning 140, 142 – 144, 145, 146, 340 universalism 233, 316, 317, 318, 319, 321 universalizability: Kantian ethics 263 – 264, 266; of moral judgments 20, 23, 24, 25, 26, 29, 30, 31, 95 universal moral grammar (UMG) approach 5, 48, 85, 88, 89, 90, 95 utilitarianism 5, 6, 58, 88, 91, 92, 93, 141 – 142, 179, 188, 217, 259, 278, 280, 546 utility maximization 10, 158, 163, 174 value(s) 39 – 40, 135 – 136 valuing 59, 60, 283, 284 Varner, G. E. 58 vegetarians 9, 147, 149 Velleman, J. D. 324 ventromedial prefrontal cortex (vmPFC) 6, 9, 89, 92 – 93, 96, 141, 145, 148 violence, community accountability approach to 459 – 461 virtue 240, 242 – 243, 473, 479 virtue education see character and virtue education virtue ethics 196, 205 – 206, 394, 397, 399 voluntarism 230 – 231, 247, 248 – 250 Wainryb, C. 45 Walker, D. 19 – 20 Wallace, G. 19 – 20 Walzer, M. 539, 546 warfare 212, 213 war tribunals 446 Wason selection task 201 weak dispositionalism (WD) 308 WEIRD people, sharing rules 211 – 212 wellbeing 496 – 497; see also eudaimonia whales see humpback whales; orcas Whitehead, H. 71 will 239, 244 – 245, 246, 249; freedom of 239, 292 – 293, 472; and reason 239, 247 willpower 4 wishful thinking 9, 147 – 148, 376, 377, 378 Wolf, S. 547 Wright, J. 22 Xu, F. 133 Young, L. L. 45 Zagzebski, L. 538, 548