Rethinking Moral Status
ISBN: 0192894072, 9780192894076


English, 352 pages, 2021


Table of contents:
Cover
Rethinking Moral Status
Copyright
Contents
Preface
Notes on Contributors
1: Rethinking our Assumptions about Moral Status
1. The Idea of Moral Status
2. Human Moral Status
3. Species Membership and the Boundary between Full and Partial Moral Status
4. Revisionary Approaches to Moral Status
5. More-than-full Moral Status?
6. Moral Uncertainty and Moral Confusion
Notes
References
Part I: The Idea of Moral Status
2: Suffering and Moral Status
1. Introduction
2. Unconnected Individuals
3. Combining Reasons of Different Types
4. A Challenge
5. A Gradualist Understanding of Moral Status
6. Gradualism and Suffering
Notes
References
3: An Interest-Based Model of Moral Status
1. The Model
2. Implications
2.1 Ordinary, self-aware human beings
2.2 Nonparadigm humans
2.3 Nonhuman animals
2.4 Robots and advanced AI systems
2.5 Brain organoids
2.6 An enhanced hominid species
3. Conclusion
Notes
References
4: The Moral Status of Conscious Subjects
1. Theorizing about Moral Status
2. Phenomenal Consciousness and Value
3. Implications: Mapping Value to Moral Status
4. Making Phenomenal Value Practical
4.1 Proportionality
4.2 The source of phenomenal value
5. Conclusion
Notes
References
5: Moral Status, Person-Affectingness, and Parfit’s No Difference View
1. Senses of Moral Status and Ways of Mattering Morally
2. Moral Status, Affecting a Definite Future Person, and Parfit’s No Difference View
Notes
References
6: The Ever Conscious View and the Contingency of Moral Status
1. Introduction
2. What Moral Status Is, and How Harm-Based and Benefit-Based Reasons Arise
3. The Ever Conscious View
4. Objections to the Ever Conscious View
5. Defending the Good Method in the Face of the Asymmetry
6. Conclusion
Notes
References
7: Moral Status and Moral Significance
1. Moral Significance More Fundamental than Moral Status
2. Capacity for Sentience as a Basis for Moral Status
3. Why Capacity for Sentience but not Organic Life is Morally Significant
4. The Endless Variability of Status-Grounding Mental Capacities Among Humans
5. Further Personal Features that Could be Morally Significant
6. Deontological Constraints Agent-Focused Rather than Victim-Focused
7. Conclusion
Notes
References
8: Moral Recognition and the Limits of Impartialist Ethics: On Androids, Sentience, and Personhood
1. Moral Status and Moral Standing
2. The Easy Bits: Chimeras and Cyborgs
3. The Difficult Bits: Self-learning Artificial Intelligence Machines
4. Justice Now for Stones, Rivers, and Androids!
Note
References
9: Is Moral Status Good for You?
1. Introduction
2. Recognition Value
3. Protective Value
4. Vulnerability Disvalue
5. Noninstrumental Value
6. Concluding Thoughts
Notes
References
Part II: Specific Issues about Moral Status
10: Toward a Theory of Moral Status Inclusive of Nonhuman Animals: Pig Brains in a Vat, Cows versus Chickens, and Human–Nonhuman Chimeras
1. Toward an Account of Moral Status Inclusive of Nonhuman Animals
1.1 The key category of welfare interests
1.2 Levels, tiers, and hierarchies of moral status
1.3 From a sketch to a theory fit for policy purposes
2. The Problem of Human–Nonhuman Chimeras
2.1 Background on the science and ethics of chimera research
2.2 The suffering problem
2.3 The humanizing problem
2.4 Tentative proposal for the near term
Notes
References
11: Revisiting Inexorable Moral Confusion About the Moral Status of Human–Nonhuman Chimeras
1. Introduction
2. Moral Status
3. The Science of Human–Nonhuman Chimeras
3.1 Human–nonhuman chimeras as assay systems
3.2 Human–nonhuman chimeras as models
3.3 Human–nonhuman chimeras as sources of organs for transplantation
4. The Ethics of “Humanized” Chimeras
4.1 Human–nonhuman chimeras as assay systems
4.2 Human–nonhuman chimeras as models
4.3 Human–nonhuman chimeras as sources of organs for transplantation
5. Inexorable Moral Confusion, Revisited
6. Conclusion
Notes
References
12: Chimeras, Superchimps, and Post-persons: Species Boundaries and Moral Status Enhancements
1. Introduction: Moral Status and Biological Species
2. Thinking about Moral Status
2.1 Moral status, interests, and identity
3. Moral Status Enhancements
4. Obligations to MSE?
5. Conceptual Issues in Moral Status Enhancement
6. Post-persons and FMS
7. Moral Agency, Moral Status, and Obligations
8. Conclusion
Notes
References
13: Connecting Moral Status to Proper Legal Status
1. Introduction
2. The Strong Connection
3. The Moderate Connection
4. An Objection to the Strong Connection and the Moderate Connection
5. The Weak Connection
6. Conclusion
Notes
References
14: How the Moral Community Evolves
1. The Natural History of Normativity
1.1 The moral mismeasure of man
1.2 The evolution of ends-in-themselves
1.3 Flourishing in a social world
2. Evolution of an Imperfect Moral Sense
2.1 Becoming a moral species
2.2 Adaptive mechanisms that distort MSS ascription
2.2.1 Empathy
2.2.2 Disgust
2.2.3 Mental state attribution
2.3 The case of invertebrate ethics
3. Conclusion
Note
References
15: Moral Status of Brain Organoids
1. Ethical Regulation of Brain Organoid Research
2. Brain Organoids and Consciousness
3. The Connection between Consciousness and Moral Status
4. Implications of Moral Status for Brain Organoid Research
5. Utilitarian Approaches
6. Rights-based Approaches
7. Animal Research Ethics Principles
8. Indirect Moral Significance
9. Conclusion
Notes
References
16: How Much Moral Status Could Artificial Intelligence Ever Achieve?
1. What is Moral Status?
2. Does Moral Status Come in Degrees?
3. What is the Basis of Moral Status?
3.1 Sentience
3.2 Multiple bases
4. Can Future AIs have the Basis of Moral Status?
4.1 Intelligence
4.2 Consciousness
4.3 Free will
4.4 Moral understanding
5. Conclusion
Notes
References
17: Monkeys, Moral Machines, and Persons
1. Introduction
2. The Moral Machine Problem
3. Moral Models
4. Ape-Machines
Notes
References
18: Sharing the World with Digital Minds
1. Introduction
2. Paths to Realizing Super-beneficiaries
2.1 Reproductive capacity
2.2 Cost of living
2.3 Subjective speed
2.4 Hedonic skew
2.5 Hedonic range
2.6 Inexpensive preferences
2.7 Preference strength
2.8 Objective list goods and flourishing
2.9 Mind scale
3. Moral and Political Implications of Digital Super-beneficiaries
3.1 Creating super-beneficiaries
3.2 Sharing the world with super-beneficiaries
4. Discussion
Notes
References
Index

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

Rethinking Moral Status


Rethinking Moral Status

Edited by
Steve Clarke, Hazem Zohny, and Julian Savulescu


Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© the several contributors 2021

The moral rights of the authors have been asserted

First Edition published in 2021
Impression: 1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2021931768

ISBN 978-0-19-289407-6
DOI: 10.1093/oso/9780192894076.001.0001

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


Contents

Preface vii
Notes on Contributors xiii

1. Rethinking our Assumptions about Moral Status (Steve Clarke and Julian Savulescu) 1

Part I: The Idea of Moral Status

2. Suffering and Moral Status (Jeff McMahan) 23
3. An Interest-Based Model of Moral Status (David DeGrazia) 40
4. The Moral Status of Conscious Subjects (Joshua Shepherd) 57
5. Moral Status, Person-Affectingness, and Parfit's No Difference View (F. M. Kamm) 74
6. The Ever Conscious View and the Contingency of Moral Status (Elizabeth Harman) 90
7. Moral Status and Moral Significance (Ingmar Persson) 108
8. Moral Recognition and the Limits of Impartialist Ethics: On Androids, Sentience, and Personhood (Udo Schuklenk) 123
9. Is Moral Status Good for You? (Thomas Douglas) 139

Part II: Specific Issues about Moral Status

10. Toward a Theory of Moral Status Inclusive of Nonhuman Animals: Pig Brains in a Vat, Cows versus Chickens, and Human–Nonhuman Chimeras (Ruth R. Faden, Tom L. Beauchamp, Debra J. H. Mathews, and Alan Regenberg) 159
11. Revisiting Inexorable Moral Confusion About the Moral Status of Human–Nonhuman Chimeras (Jason Scott Robert and Françoise Baylis) 179
12. Chimeras, Superchimps, and Post-persons: Species Boundaries and Moral Status Enhancements (Sarah Chan) 197
13. Connecting Moral Status to Proper Legal Status (Benjamin Sachs) 215
14. How the Moral Community Evolves (Russell Powell, Irina Mikhalevich, and Allen Buchanan) 231
15. Moral Status of Brain Organoids (Julian Koplin, Olivia Carter, and Julian Savulescu) 250
16. How Much Moral Status Could Artificial Intelligence Ever Achieve? (Walter Sinnott-Armstrong and Vincent Conitzer) 269
17. Monkeys, Moral Machines, and Persons (David R. Lawrence and John Harris) 290
18. Sharing the World with Digital Minds (Carl Shulman and Nick Bostrom) 306

Index 327


Preface

What is it to possess moral status? It may seem problematic for a volume about rethinking moral status to commence by assuming an answer to this question. But discussion must start somewhere, so here is a minimally contentious claim: a being or entity that possesses moral status is one that matters morally, for its own sake. Beyond this minimal claim lies controversy.

Common-sense morality implicitly assumes that reasonably clear distinctions can be drawn between the 'full' moral status usually attributed to ordinary adult humans, the partial moral status attributed to non-human animals, and the absence of moral status, usually ascribed to machines and other artefacts. These assumptions have long been subject to challenge, and are now under renewed pressure because there are beings we have recently become able to create, or may soon be able to create, that break down certain traditional categories: human, non-human animal, and non-biological beings. Such beings include human–non-human chimeras, cyborgs, human brain organoids, post-humans, human minds that have been uploaded into computers and onto the internet, and artificial intelligences. It is far from clear what moral status we should attribute to any of these. While challenges to commonsensical views of moral status have a long history, the aforementioned technological developments recast many of the challenges in a new light and raise additional questions.

There are a number of ways we could respond. We might revise our ordinary assumptions about what is required for the possession of full moral status. We might reject the assumption that a sharp distinction can be drawn between full and partial moral status. We might accept that there are circumstances in which we will be unable to determine whether and to what degree beings of a particular type possess moral status. Also, we might avoid making any inferences about the moral status of particular beings and try to get by without talk of moral status.

Our choice of response may have far-reaching implications. Considerations of consistency may lead us to reappraise our handling of long-standing problem cases for accounts of moral status, including disputes over the moral status of foetuses and severely cognitively impaired humans. We may also be prompted to rethink traditional assumptions about the moral importance of humans relative to non-human animals.


This volume provides a forum for philosophical reflection about ordinary presuppositions and intuitions concerning moral status, especially in light of the aforementioned recent and emerging technologies. An initial chapter, by Clarke and Savulescu, surveys some core assumptions about moral status that may require rethinking. These include the common presuppositions that all humans who are not severely cognitively impaired have equal moral status, that the sophisticated cognitive capacities typical of human adults are necessary for full moral status, that only humans can have full moral status, and that there can be no beings with higher moral status than ordinary adult humans.

The seventeen chapters that follow are organized into two parts. In Part I our authors attempt to rethink the very idea of moral status, while each chapter in Part II grapples with more specific issues. Many of these are raised by consideration of beings that we have recently acquired the capacity to create, or may soon be able to create.

Part I, 'The Idea of Moral Status', commences with three chapters that address the conceptual foundations and implications of moral status. Differences in moral status reflect a form of moral inequality: individuals with higher moral status matter more than those with lower moral status. But do differences in moral status affect the strength of reasons not to cause, or to prevent, suffering, and the strength of reasons to confer benefit? Jeff McMahan considers this question in Chapter 2, exploring whether the significance of individuals' moral status may vary depending on the types of harm that might be inflicted on them, or on the type of benefit that might be conferred on them.

Chapter 3 by David DeGrazia presents an interest-based account of moral status that aims to illuminate the moral status of ordinary, self-aware human beings, but also non-paradigm humans, animals, brain organoids, artificial intelligence (AI), and post-humans with superior self-awareness. Seven theses are defended to qualify this interest-based account: (1) being human is neither necessary nor sufficient for moral status; (2) the capacity for consciousness is necessary but not sufficient; (3) sentience is necessary and sufficient; (4) social relations are not a basis for moral status but may ground special obligations to those with moral status; (5) the concept of personhood is unhelpful in modelling moral status, unless a non-vague conception is identified and its moral relevance clarified; (6) sentient beings are entitled to equal consequentialist consideration; and (7) sentient beings with substantial temporal self-awareness have special interests that justify the added protection of rights.


Chapter 4 by Josh Shepherd homes in on three difficulties facing any regimentation of moral status claims: how to account for the grounds of moral status; how to map these grounds to the moral reasons for action associated with the possession of a given level of moral status; and how to navigate these grounding and mapping difficulties without clashing with strong intuitions about a range of problem cases. To resolve these three difficulties, Shepherd argues that we ought to base our account of moral status in aspects of a subject's conscious mental life, mapping the grounds to moral reasons in terms of respect for conscious subjects.

The next two chapters, while focused on articulating implications of moral status, consider challenges raised by human embryos, foetuses, and the non-identity problem. In Chapter 5 F. M. Kamm considers the idea of an entity's moral status as what it is permissible or impermissible to do to it, and examines how its status relates to whether it is sentient, conscious, capable of agency, a subject, or rational. She then considers ways in which the moral status of embryos that will definitely develop into persons differs from the status of those persons, as well as the implications of this for the non-identity problem and Parfit's 'No Difference View'.

Elizabeth Harman defends the 'Ever Conscious View' in Chapter 6, which holds that a living being has moral status throughout its life just in case it is ever conscious, at any point in its life. This is a contingent view of moral status: some beings that have moral status might have lacked it, and some beings that lack moral status might have had it. The chapter addresses the 'Objection to Contingency', which holds that if the Ever Conscious View is correct, then whether abortion is permissible depends on whether one actually aborts.

The two subsequent chapters in Part I argue for abandoning the very concept of moral status. Ingmar Persson (Chapter 7) distinguishes moral status from moral significance, arguing that something has moral significance just in case it morally counts for its own sake, or is something that must be taken into consideration in itself when moral judgements about what ought or ought not to be done are made. Nothing can have moral status if there is not anything morally significant about it, but something can be morally significant even though it does not have moral status. Similarly, in Chapter 8, Udo Schuklenk argues that 'moral status' is no more than a convenient label for 'is owed moral consideration of a kind'; and so, he suggests, we should dispense with the concept and instead focus on uncovering the ethically defensible criteria that give rise to particular kinds of moral obligations. Understood this way, chimeras, human brain organoids, and artificial intelligence do not pose new challenges, since existing conceptual frameworks, and the criteria for moral consideration they trigger, are still defensible and applicable.

Regardless of the specific characterization of moral status, an often-neglected question is whether having it, and possibly having more of it, is good for you. If it is, does losing it harm you? Rounding off the first part of the volume, this question of the prudential value of moral status is precisely what Thomas Douglas tackles in Chapter 9. Answering it is important in helping us to decide whether or not we should enhance, or disenhance, the cognitive and moral capacities of non-human animals. Doing either may affect their moral status.

Part II, 'Specific Issues about Moral Status', begins with three chapters focusing in particular on the prospect of interspecies chimeras. In Chapter 10, Ruth Faden, Tom Beauchamp, Debra Mathews, and Alan Regenberg argue for a theory of moral status that helps provide solutions to practical problems in public policy, taking account of the interests of non-human animals. To illustrate this need, their chapter describes two contemporary problems, one in science policy and one in food and climate policy. They sketch a way to think about a tiered or hierarchical theory of moral status that could be fit for such work, and then consider in some depth the problem of human–non-human chimeras.

In Chapter 11, Jason Scott Robert and Françoise Baylis revisit their earlier work on the history, ethics, and future of stem cell research involving chimeras made, in part, from human cells. In particular, they focus on the notion of inexorable moral confusion: objections to the creation of chimeras are likely motivated by a strong desire to avoid inexorable moral confusion about these beings' moral status. Here, they further specify and elaborate on the original concept in light of recent scientific and technical developments as well as ethical insights.

In Chapter 12, Sarah Chan explores the normative and conceptual challenges raised by the prospect of crossing both biological and moral 'species boundaries', including the implications of species transitions in relation to identity, obligations toward existing beings, and beings that might be created via the species transition process. She reflects on how all of this might advance our thinking about moral status.

In Chapter 13, Ben Sachs considers three proposals regarding the connection between an animal's moral status and the legal status it ought to have. The first proposal is this strong claim: if an act wrongs an animal then criminalizing it is justified. The second proposal is more moderate: if an act constitutes an injustice to an animal then criminalizing it is justified. The third proposal is the one Sachs defends: it is obligatory for legislators to eliminate any aspect of the law that facilitates the wronging of animals. The chapter considers, in particular, the radical implications of this third proposal for animal ownership and state funding of medical research on animal subjects.

The chapter by Russell Powell, Irina Mikhalevich, and Allen Buchanan (Chapter 14) considers several evolved biases that distort our tendency to ascribe moral status, focusing in particular on the example of invertebrates. These biases include tendencies to deny moral standing, or to attribute lower moral status, to beings that elicit feelings of disgust or fear, as well as to those that are perceived as less similar to us, less attractive, less individualized, and less disposed toward reciprocal cooperation. These adaptive mechanisms may have served human groups well in the evolutionary past, but in the modern world they pose an obstacle to moral progress and play a key role in moral regression.

Chapter 15 examines brain organoids, which recapitulate the development of the brain. Might these have moral status, or the potential for it? Julian Koplin, Olivia Carter, and Julian Savulescu tackle this question head on. It is plausible, they argue, that brain organoids could one day attain consciousness and perhaps even higher cognitive abilities. Research on brain organoids therefore raises difficult questions about their moral status, questions that currently fall outside the scope of existing regulations and guidelines. The chapter offers a novel moral framework for such research and outlines the conditions under which brain organoids might attain moral status.

The final three chapters focus in particular on AI and the prospect of digital minds. Walter Sinnott-Armstrong and Vincent Conitzer ask, in Chapter 16, just how much moral status artificial intelligence could ever achieve. They suggest that different entities have different degrees of moral status with respect to different moral reasons in different circumstances for different purposes. Recognizing this variability will help resolve some debates about the potential moral status of AI.

In Chapter 17 David Lawrence and John Harris argue that debates over moral machines often make wide assumptions about the nature of future autonomous entities, and frequently bypass the distinction between 'agents' and 'actors'. The scope and limits of moral status, they suggest, are fundamentally linked to this distinction. They position non-Homo sapiens great apes as members of a particular moral status clade, treated in a similar fashion to that proposed for so-called 'moral machines'. They suggest that the principles by which we ultimately decide how to treat great apes, and whether or not we decide to act upon our responsibilities to them as moral agents, are likely to be the same principles we use to determine our responsibilities to moral AI in the future.


Finally, Carl Shulman and Nick Bostrom (Chapter 18) conclude the volume by investigating the moral status of digital minds. The minds of biological creatures occupy a small corner of a much larger space of possible minds that could be created. Their chapter focuses on one set of issues provoked by the prospect of digital minds with superhumanly strong claims to resources and influence. These could arise from the vast collective benefits that mass-produced digital minds might derive from relatively small amounts of resources. Alternatively, they could arise from individual digital minds with superhuman moral status or ability to benefit from resources. Such beings may contribute immense value to the world. Failing to respect their interests could produce a moral catastrophe, but a naive way of respecting them could be disastrous for humanity.

Work leading to this volume was supported by the Wellcome Trust under Grant WT203132/Z/16/Z and the Uehiro Foundation on Ethics and Education. Most of the chapters in the volume are revised versions of papers that were originally presented at a conference on 'Rethinking Moral Status', held at St Cross College, Oxford, on 13 and 14 June 2019, organized by the Wellcome Centre for Ethics and Humanities and the Oxford Uehiro Centre for Practical Ethics, both at the University of Oxford. The chapter by Steve Clarke and Julian Savulescu began life as a background paper, circulated to participants before the event. We were fortunate enough to be able to add chapters by Jeff McMahan, by Carl Shulman and Nick Bostrom, by Julian Koplin, Olivia Carter, and Julian Savulescu, and by Russell Powell, Irina Mikhalevich, and Allen Buchanan to the collection.

As well as thanking the Wellcome Trust and the Uehiro Foundation for their generous support, the editorial team would like to thank Daniel Cohen, Alan Crosier, Rachel Gaminiratne, Christa Henrichs, Guy Kahane, Neil Levy, Morgan Luck, Mike Parker, Steven Tudor, Suzanne Uniacke, and Miriam Wood for helping us, in different ways, to put together the volume. We would also like to thank Peter Momtchiloff for his expert editorial guidance. We thank all of our contributors, as well as the various people who helped us with the volume, for their forbearance during the COVID-19 global pandemic, which led to the production of the volume taking somewhat longer than originally anticipated.

Steve Clarke
Hazem Zohny
Julian Savulescu


Notes on Contributors Françoise Baylis  is University Research Professor at Dalhousie University. She is a member of the Order of Canada and the Order of Nova Scotia, as well as a Fellow of the Royal Society of Canada and the Canadian Academy of Health Sciences. She is a philosopher whose innovative work in bioethics, at the intersection of policy and practice, challenges us to think broadly and deeply about the direction of health, science, and biotechnology. Tom L. Beauchamp is Professor of Philosophy and Senior Research Scholar Emeritus, Kennedy Institute of Ethics, Georgetown University. Dr Beauchamp’s primary interests are in the e­thics of human-­ subjects research, the ethics of animal-­ subjects research and human uses of animals, the place of universal principles and rights in biomedical ethics, and methods of bioethics. His Principles of Biomedical Ethics (with co-­author James Childress) is widely considered a classic of bioethics. Nick Bostrom  is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. Bostrom is the author of some 200 publications, including  Anthropic Bias  (2002),  Global Catastrophic Risks  (2008),  Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014). Bostrom is also a recipient of a Eugene  R.  Gannon Award, and has been listed on Foreign Policy’s Top 100 Global Thinkers list twice. Bostrom’s writings have been translated into 28 languages. Allen Buchanan is James B. Duke Professor of Philosophy at Duke University. He is the author of over 150 articles and book chapters, and eleven books. His recent books include The Evolution of Moral Progress (OUP 2018, with R. Powell), Institutionalizing the Just War (OUP 2018), The Heart of Human Rights  (OUP 2013), and Beyond Humanity? The Ethics of Biomedical Enhancement (OUP 2011). Buchanan has served as a staff member or consultant with four Presidential Bioethics Commissions. 
Olivia Carter is an Associate Professor at the University of Melbourne in the School of Psychological Science. After completing a PhD in Neuroscience, Carter worked as a research fellow for three years at Harvard University. She researches the neurobio­ logic­al mechanisms involved in consciousness and cognition. From 2008 to 2014 Carter served as the Executive Director of the International Association of the Scientific Study of Consciousness. Sarah Chan  is a Chancellor’s Fellow at the Usher Institute for Population Health Sciences and Informatics, University of Edinburgh. She graduated from the University of Melbourne with the degrees of LLB and BSc (Hons). She received an MA in Health

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

xiv  Notes on Contributors Care Ethics and Law and a PhD in Bioethics from the University of Manchester, where she was a Research Fellow in Bioethics from 2005 to 2015. Steve Clarke is a Senior Research Fellow in Ethics and Humanities, in the Wellcome Centre for Ethics and Humanities, the Uehiro Centre for Practical Ethics, and the Faculty of Philosophy at the University of Oxford. He is also Associate Professor of Philosophy in the School of Humanities and Social Sciences at Charles Sturt University. He has broad research interests in philosophy and bioethics. Vincent Conitzer  is the Kimberly J. Jenkins University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He is also the Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford. He received his PhD (2006) and MS (2003) degrees in Computer Science from Carnegie Mellon University, and an AB (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). More recently, he has started to work on AI and ethics. David DeGrazia is Elton Professor of Philosophy at George Washington University. DeGrazia’s nine books include  Taking Animals Seriously: Mental Life and Moral Status  (Cambridge University Press, 1996) and, with Tom Beauchamp,  Principles of Animal Research Ethics (Oxford University Press, 2020) Thomas Douglas  is Professor of Applied Philosophy and Director of Research and Development in the Oxford Uehiro Centre of Practical Ethics, University of Oxford. He is also Senior Research Fellow at Jesus College, Oxford, and Editor of the Journal of Practical Ethics, and Principal Investigator on the project ‘Protecting Minds’, funded by the European Research Council. He trained in clinical medicine and philosophy and works chiefly on the ethics of behaviour modification and neuroenhancement. 
Ruth R. Faden is the founder of the Johns Hopkins Berman Institute of Bioethics. She was the Berman Institute’s Director from 1995 until 2016, and the inaugural Andreas C. Dracopoulos Director (2014–16). Dr Faden is the inaugural Philip Franklin Wagley Professor of Biomedical Ethics. In the twenty years in which Dr Faden led the Berman Institute, she transformed what was an informal interest group of faculty across Johns Hopkins into one of the world’s premier bioethics programmes.

Elizabeth Harman is Laurance S. Rockefeller Professor of Philosophy and Human Values at Princeton University. Her publications include ‘Creation Ethics’ (Philosophy and Public Affairs), ‘“I’ll Be Glad I Did It” Reasoning and the Significance of Future Desires’ (Philosophical Perspectives), ‘The Irrelevance of Moral Uncertainty’ (Oxford Studies in Metaethics), ‘Morally Permissible Moral Mistakes’ (Ethics), and ‘Ethics is Hard! What Follows?’ (forthcoming). She is co-editor of Norton Introduction to Philosophy, Second Edition (2018), and Norton Introduction to Ethics (forthcoming).

John Harris is Professor Emeritus, University of Manchester, Visiting Professor in Bioethics, Department of Global Health and Social Medicine, King’s


College London, and Distinguished Research Fellow, Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford. He is the author of, inter alia, How to be Good (Oxford University Press, 2016), On Cloning (Routledge, 2008), Enhancing Evolution (Princeton University Press, 2007), and Violence & Responsibility (Routledge & Kegan Paul, 1980 and 2021).

F. M. Kamm is Henry Rutgers University Professor of Philosophy and Distinguished Professor in the Department of Philosophy at Rutgers University. Her work focuses on normative ethical theory and practical ethics. She is the author of numerous articles and nine books, including Morality, Mortality vols. 1 and 2, Intricate Ethics, Bioethical Prescriptions, The Trolley Problem Mysteries, and Almost Over: Aging, Dying, Dead.

Julian Koplin is a Research Fellow with the Biomedical Ethics Research Group, Murdoch Children’s Research Institute, and Melbourne Law School, the University of Melbourne. He has a broad range of interests across the field of philosophical bioethics, including stem cell ethics, transplant ethics, and the methods of bioethics. Julian holds a PhD in bioethics from the Monash Bioethics Centre.

David R. Lawrence is a Research Fellow in the University of Edinburgh’s Centre for Biomedicine, Self, and Society, with a background in neuroscience and biotechnological law and ethics. Since his doctoral studies at the Institute for Science Ethics and Innovation at the University of Manchester, his work has focused on enhancement technologies and their possible effects on moral status.

Jeff McMahan is White’s Professor of Moral Philosophy at the University of Oxford. He is the author of The Ethics of Killing: Problems at the Margins of Life and Killing in War.

Debra J. H.
Mathews is an Assistant Director for Science Programs at Johns Hopkins Berman Institute of Bioethics and an Associate Professor in the Department of Genetic Medicine at Johns Hopkins University School of Medicine. Dr Mathews has also spent time outside academia at the US Department of Health and Human Services, the Presidential Commission for the Study of Bioethical Issues, and elsewhere, working in various capacities on science policy. Her academic work focuses on ethics and policy issues raised by emerging biotechnologies.

Irina Mikhalevich is an Assistant Professor of Philosophy at Rochester Institute of Technology. Her work lies at the intersection of the philosophy of science, cognitive science, and bioethics. Before coming to RIT, Irina held the McDonnell Postdoctoral Fellowship at the Philosophy-Neuroscience-Psychology (PNP) Program at Washington University in St Louis and a Postdoctoral Fellowship at the Berlin School of Mind and Brain at Humboldt University.

Ingmar Persson is Emeritus Professor of Practical Philosophy, University of Gothenburg, and Distinguished Research Fellow, Oxford Uehiro Centre for Practical


Ethics. His main publications are: The Retreat of Reason (OUP, 2005), From Morality to the End of Reason (OUP, 2013), Inclusive Ethics (OUP, 2017), Reasons in Action (OUP, 2019), Morality from Compassion (OUP, 2021), and, with Julian Savulescu, Unfit for the Future (OUP, 2012).

Rachell Powell is Associate Professor of Philosophy at Boston University. Her research focuses on the philosophy of biological and biomedical science. Her books include Contingency and Convergence: Toward a Cosmic Biology of Body and Mind (MIT Press, 2019) and The Evolution of Moral Progress: A Biocultural Theory (OUP, 2018, with Allen Buchanan). She has published in such journals as Philosophy of Science, British Journal for the Philosophy of Science, Journal of Philosophy, Ethics, and Journal of Medicine and Philosophy.

Alan Regenberg is the Director of Outreach and Research Support and an associate faculty member at the Johns Hopkins Berman Institute of Bioethics. He is engaged in a broad range of research projects and programs, including the Berman Institute’s science programs: the Stem Cell Policy and Ethics Program (SCOPE); the Program in Ethics and Brain Sciences; and the Hinxton Group, an international consortium on stem cells, ethics, and law.

Jason Scott Robert holds the Lincoln Chair in Ethics and a Dean’s Distinguished (Associate) Professorship in the Life Sciences at Arizona State University. His work is at the nexus of philosophy of biology and bioethics, focusing primarily on the justification of good science in controversial areas of research in developmental biology and the neurosciences.

Benjamin Sachs is a Senior Lecturer in Philosophy at the University of St Andrews. His main interests are in applied ethics, coercion, political philosophy, and philosophy of law. His first book, Explaining Right and Wrong, was published by Routledge in 2018. His second book, Contractarianism, Role Obligations, and Political Morality, is forthcoming with Routledge.
He is currently co-directing, with Alex Douglas, a research network called The Future of Work and Income.

Julian Savulescu is Uehiro Chair in Practical Ethics, Director of the Oxford Uehiro Centre for Practical Ethics, and Co-Director of the Wellcome Centre for Ethics and Humanities at the University of Oxford. He is Visiting Professor in Biomedical Ethics at the Murdoch Children’s Research Institute, where he directs the Biomedical Ethics Research Group, and Distinguished Visiting Professor in Law at Melbourne University.

Udo Schuklenk taught at universities in Germany, Australia, the UK, and South Africa before taking up the Ontario Research Chair in Bioethics at Queen’s University. He has written or co-edited ten books and authored or co-authored more than 150 peer-reviewed publications in journals and anthologies. His main research interests are in the areas of end-of-life issues, public health, and medical professionalism.

Joshua Shepherd is Assistant Professor in Philosophy at Carleton University, Research Professor at the Universitat de Barcelona (where he directs the ERC-funded project


Rethinking Conscious Agency), and Senior Fellow of the LOGOS Research Group. He is the author of the book Consciousness and Moral Status.

Carl Shulman is a Research Associate at the Future of Humanity Institute, Oxford Martin School, Oxford University, where his work focuses on the long-run impacts of artificial intelligence and biotechnology. Previously, he was a Research Fellow at the Machine Intelligence Research Institute. He attended New York University School of Law and holds a degree in philosophy from Harvard University.

Walter Sinnott-Armstrong is Chauncey Stillman Professor of Practical Ethics at Duke University in the Philosophy Department, the Kenan Institute for Ethics, the Duke Institute for Brain Science, and the Law School. He publishes widely in ethics, moral psychology and neuroscience, philosophy of law, epistemology, philosophy of religion, and argument analysis.

Hazem Zohny is Research Fellow in Bioethics and Bioprediction at the Uehiro Centre for Practical Ethics at Oxford University. He has a PhD in Bioethics from the University of Otago and has published a number of academic papers on enhancement, disability, and well-being. His current work focuses on the bioprediction of behaviour and the ethics of using neurointerventions for crime prevention.


1
Rethinking our Assumptions about Moral Status
Steve Clarke and Julian Savulescu

1. The Idea of Moral Status

When a being or entity has moral status, its interests matter morally, for its own sake (Jaworska and Tannenbaum 2018). If a being or entity has moral status, then an agent who harms that being or entity commits an act that is morally bad in at least one respect. Any all-things-considered moral justification for such an act must take into account the harm committed by the agent against that being or entity. Ordinary adult humans are usually supposed to have a specific and equal level of moral status—often referred to as ‘full moral status’ (FMS). Non-human animals are usually accorded some moral status, but this is typically understood to be a lesser level or degree of moral status than FMS.1

Statuses are often organized in hierarchies. In the peerage of Great Britain, for example, an Earl has higher status than a Viscount, a Viscount ranks higher than a Baron, and a Baron is the lowest-status British peer, ranking only above commoners. Standard attributions of moral status form a partial hierarchy. It is usually agreed that humans have a higher level of moral status than non-human animals. However, there is no widely accepted ordering of non-human animal moral status.2 Opinions vary about the relative levels of moral status of different non-human animals, and about which species of animals have moral status. Most of us ascribe some moral status to non-human primates. Many of us ascribe some moral status to other mammals. Some of us ascribe some moral status to birds, reptiles, and fish, and a few of us ascribe some moral status to arachnids, insects, and crustaceans.3 Further disagreement about the presence of moral status, or about the extent to which it is possessed, becomes apparent when we consider humans other than ordinary

Steve Clarke and Julian Savulescu, Rethinking our Assumptions about Moral Status In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Steve Clarke and Julian Savulescu 2021. DOI: 10.1093/oso/9780192894076.003.0001


adult humans. Do human foetuses and embryos have FMS, some lesser moral status, or no moral status? What about infants? What about severely cognitively impaired or unconscious adults?

Technological developments are throwing up new and controversial cases, which will require our consideration. What are we to say about human non-human chimeras,4 human brain organoids,5 or artificial intelligence?6 What should we say about the moral status of a cyborg,7 a post-human,8 or a human mind that has been uploaded into a computer, or onto the internet?9 To provide sensible answers to these questions we need to be able to think clearly about what it is to have moral status, and about when and why we should attribute moral status to beings and entities.

One way to help clarify our thinking is to try to define moral status. However, when we attempt this, it can start to look like talk of moral status doesn’t add anything to other, more familiar forms of moral discourse. DeGrazia offers the following characterization of moral status:

To say that X has moral status is to say that (1) moral agents have obligations regarding X, (2) X has interests, and (3) the obligations are based (at least partly) on X’s interests. (DeGrazia 2008, p. 183)

We are already familiar with the language of interests and obligations, so why not restrict ourselves to this terminology and forgo talk of moral status? An answer to this question, defended by DeGrazia, is that reference to moral status is a convenient form of shorthand, which is especially useful to us when we want to generalize about moral obligations and interests (2008, p. 184). Another answer is that moral status talk is well suited to play a specific explanatory role that talk of moral interests and obligations is not well suited to play. This is to relate the moral properties of beings to whom we have moral obligations to the non-moral properties and capacities of those beings.10

If we are pushed to rethink our assumptions about moral status to accommodate artificial intelligence, cyborgs, human brain organoids, human non-human chimeras, post-humans, and uploaded minds, then we should consider the possibility that some of these beings and entities have a level of moral status below FMS. We should also be open to the possibility that some of these beings and entities might have a higher moral status than do ordinary adult humans. The phrase ‘full moral status’ (FMS) suggests a threshold level above which moral status cannot rise. However, as we will go on to discuss, it seems possible that a being or entity could have a higher moral status than the moral status of ordinary adult humans.


In this chapter we consider some of the key philosophical issues that arise when attempts are made to rethink our usual assumptions about moral status to handle such new and controversial cases. In section 2 we critically examine the widespread assumption that all ordinary adult humans have equal moral status. In section 3 we subject to scrutiny the assumption that membership of the species Homo sapiens somehow confers FMS. In section 4 we consider some revisionary approaches to thinking about moral status that involve rejecting the presupposition that there is a sharp distinction between the FMS of ordinary adult humans and the partial moral status of non-human animals. In section 5 we consider proposals to reject an almost universally accepted assumption—that no beings or entities could have higher moral status than the FMS usually attributed to ordinary adult humans. We also consider some consequences that could follow from creating beings with higher moral status than that of ordinary adult humans. In section 6 we turn our attention to a practical concern: how to behave toward beings and entities when we find ourselves uncertain about their moral status.

2. Human Moral Status

The assumption that all adult humans who are not severely cognitively impaired have equal moral status is hardly ever challenged these days, at least in Western liberal societies.11 It is a background assumption made by the many of us who share liberal, democratic ideals. However, it would have been rejected by most ordinary members of the various slave-owning societies that flourished before the rise of modern liberal democracy.12 In slave-owning societies, the enslaved were regarded as having fewer legal rights than the free. The systematic difference between the expansive legal rights of the free and the limited rights of the enslaved was provided with apparent justification by the pervasive assumption, made in many slave-owning societies, that the enslaved were of a lesser moral status than the free.13

Defenders of institutional slavery usually sought to justify the attribution of different levels of moral status to different groups of people by appealing to perceived natural differences between different types of humans. These differences were then invoked to try to justify the enslavement of humans of one type by humans of another type. The best-known attempted philosophical defence of slavery is from Aristotle, who argued that some humans lacked the capacities for significant deliberation and foresight, and so were ‘natural slaves’, in need of direction by natural masters who possessed the capacities


that natural slaves lacked.14 Aristotle’s theory of natural slavery was based on an assumption of systematic underlying differences between different types of humans. However, unlike more recent, eighteenth- and nineteenth-century defenders of slavery, Aristotle did not assume that these differences correlated with racial differences. This should not be surprising. In Ancient Greece, slaves were captured and traded from many different countries and had a diverse range of ethnic origins. By the eighteenth and nineteenth centuries, slavery in the West was, for the most part, restricted to specific ethnicities, with blacks especially liable to be enslaved by whites. Eighteenth- and nineteenth-century apologists for slavery often appealed to quasi-scientific theories about racial differences, which lacked supporting evidence, in their attempts to justify race-based slavery.15

The fact that institutional slavery was still practised in the US (and elsewhere) as recently as 160 years ago is very disturbing to defenders of the view that all adult humans who are not severely cognitively impaired have equal moral status, including the authors of this chapter. It seems clear to us that societies that fail to treat all ordinary adult humans as having equal moral status ought to do so, because in fact all ordinary adult humans have equal moral status. If all ordinary adult humans have equal moral status, as we are convinced that they do, then this is presumably because of some or other properties and capacities that they share, which may or may not be shared by human infants, human foetuses and embryos, and severely cognitively impaired humans. What properties or capacities might these be?

Almost all attempts to locate grounds for the moral status of ordinary adult humans identify specific cognitive capacities as the basis for that moral status.
However, there is a lack of agreement in the literature regarding the cognitive capacities necessary for FMS. Quinn suggests that the ability to will is necessary for FMS (1984, p. 51), while Singer stresses the importance of future-oriented planning (1993, pp. 116–17), and McMahan suggests that self-awareness is necessary for FMS (2002, p. 45). Baker (2000) argues that self-consciousness is necessary,16 Metz (2012) suggests that the capacity to participate in communal relationships is necessary,17 and Jaworska (2007) stresses the importance for FMS of having a capacity to care.

A different approach to grounding attributions of FMS is to argue that the potential to go on to develop sophisticated cognitive capacities warrants the attribution of FMS. Infants and severely cognitively impaired adults do not possess sophisticated cognitive capacities, but many possess the potential to develop—or recover—sophisticated cognitive capacities. Appeals to potential,


as a basis for the attribution of FMS, are popular among opponents of abortion and unpopular among proponents of abortion. Just as infants have the potential to acquire sophisticated cognitive capacities, so do human foetuses and embryos. If we are to grant FMS to human foetuses and embryos, then it looks like we should ban most instances of abortion, as abortion will involve killing beings who are acknowledged to have FMS.18

A major concern with appeals to potential as a basis for attributions of FMS is that it is far from obvious what constraints there are on such appeals. An unfertilized human ovum together with a human sperm have the potential to become a human adult. However, anti-abortion activists do not usually want to argue that unfertilized ovum-sperm pairs have FMS, in virtue of having the potential to become adult humans. In response to objections along these lines, opponents of abortion, such as Watt (1996) and Camosy (2008), draw conceptual distinctions between the type of potential the unfertilized ovum-sperm pair has and the potential that a fertilized ovum has to become a human adult. It is not clear that such attempts to distinguish between different types of potential are successful. Nor is it entirely apparent how any specific type of potential could confer moral status.19

A common way of supplementing accounts of the grounds necessary for FMS is to assert that personhood is necessary and sufficient for FMS.20 Persons are said to have FMS, while non-persons are said to have either less-than-full or no moral status.21 There is significant disagreement in the literature about which beings and entities are persons. Human foetuses are not usually regarded as persons, at least by secular philosophers, but they are considered to be legal persons in some jurisdictions.22 Human infants are usually regarded as persons, but some scholars, such as Tooley (1972), argue otherwise.
Cognitively impaired human adults are usually regarded as persons; however, it has been argued that once humans have become severely cognitively impaired and have fallen into a persistent non-responsive state they may no longer be persons (Callahan 1993). Non-human apes may be persons, or at least ‘border-line persons’ (DeGrazia 2007, p. 323). Now-extinct Neanderthals may have been persons (Buchanan 2009, p. 372), and some intelligent machines that we might create in the future could be persons (Bostrom and Yudkowsky 2014).

It is not easy to see how the stipulation that personhood is a criterion for FMS could assist us in identifying who and what has FMS. If we treat personhood as a criterion for FMS then we transform the problem of identifying who and what has FMS into the equally challenging problem of figuring out who and what is a person.


3. Species Membership and the Boundary between Full and Partial Moral Status

As well as identifying grounds for attributing FMS to ordinary adult humans, we need to consider where the conceptual boundary lies between beings that possess FMS, including ordinary adult humans, and beings that are ordinarily held to possess only partial moral status, such as non-human animals. Many will want to say that this conceptual boundary maps on to the boundary between membership of the species Homo sapiens and membership of other species. However, most philosophers are wary of stipulating that membership of a particular species is necessary for FMS, as making this assertion would appear to leave them open to the charge of unfairly favouring one species over others—the charge of speciesism, which is often depicted as akin to sexism and racism (Singer 2009; Liao 2010).23

A philosopher who defends a prejudice in favour of our fellow Homo sapiens, and who argues that it is not akin to racism and sexism, is Bernard Williams (2006). According to Williams, racism and sexism are unjustified prejudices because defenders of racism and sexism are unable to answer the question ‘What’s that got to do with it?’ (2006, p. 139). In contrast to the answers ‘he’s white’ or ‘she’s a woman’, Williams thinks the answer ‘it’s a human being’ provides a reason for us to favour humans, rather than a rationalization for prejudice. While ‘it’s a human being’ may seem compelling to us human beings, the most plausible explanation for its seeming this way is that we are members of the club of human beings. ‘He’s white’ may seem similarly compelling to white supremacists. It is hard to see that Williams has identified a morally relevant consideration that operates as a reason for us to favour humans over non-humans, as opposed to a rationalization for pro-human prejudice (Savulescu 2009, p. 219).
Most philosophers who appeal to species membership as a basis for FMS are likely to assert that membership of the species Homo sapiens is sufficient for FMS, in virtue of underlying capacities that ordinary adult humans possess—such as sophisticated cognitive capacities. In making this assertion they allow for the possibility that membership of any species whose ordinary adult members possess the required capacities would also be a sufficient basis for FMS. The argumentative move of underwriting the case for species membership as a basis for FMS by appealing to the underlying capacities that ordinary adult members of that species possess avoids the charge of speciesism. However, it raises other problems. If the moral status of the members of a


species turns out to be grounded in capacities that ordinary adult members of that species happen to possess, then it is unclear why we should accept that members of that species who lack the capacities in question should be accorded FMS. Why think that infants and severely cognitively impaired adults who lack sophisticated cognitive capacities should be considered to possess FMS simply because they happen to be members of a species in which other members possess sophisticated cognitive capacities? It seems arbitrary to attribute FMS to human infants and severely cognitively impaired human adults when we have reason to believe that members of other species, to whom we are not attributing FMS, possess cognitive capacities that are as sophisticated as those possessed by infants or severely cognitively impaired adults.24

We could respond to this problem by relaxing our criteria for the attribution of FMS and allowing that some non-human animals that are as cognitively developed as human infants and severely cognitively impaired humans also have FMS. However, if we were to do this then we would be morally required to treat those non-human animals in far better ways than we do now (Singer 2009). Many will baulk at this consequence.

Another problem for species-membership accounts of FMS is raised by consideration of exceptional members of species whose ordinary members lack the underlying capacities to warrant attributions of FMS. A widely discussed example is McMahan’s ‘superchimp’: a chimpanzee that, after being administered a form of gene therapy as a newborn, develops cognitive capacities that, in adulthood, are as sophisticated as those of a 10-year-old human (McMahan 2002, p. 147).
Intuitively, it seems we ought to attribute to the superchimp the same moral status that we attribute to 10-year-old humans—FMS (McMahan 2002, p. 216). However, we will be unable to justify doing so if we insist on membership of a cognitively sophisticated species as a necessary condition for possession of FMS.

Yet more problems for appeals to species membership, as a necessary condition for FMS, are raised by consideration of the concept species. It is notoriously difficult to give a philosophically satisfactory account of the concept species.25 This difficulty should not be surprising, given that acceptance of Darwin’s theory of evolution commits us to the view that species evolve from other species, via a process of mutation and natural selection. The boundaries between species are porous, and at times beings will exist that are not members of any particular species.26 If there are beings that are not members of particular species then we cannot determine their moral status by appeal to


species membership. The same can be said of hybrid animals, such as mules and ligers, as well as chimeras created by blending genetic material from more than one species, such as sheep–goat chimeras. Artificial intelligence also raises problems for species-membership accounts of moral status. In the future, we may be able to create beings and entities with very sophisticated cognitive capacities, and intuitively it seems that there is a compelling case for attributing FMS to at least some of them. But they will not be members of species. Acceptance of species membership as a necessary condition for attributions of moral status would preclude us from attributing moral status to these beings and entities.

4. Revisionary Approaches to Moral Status

If we want a theory of moral status to account for ordinary (contemporary) intuitions about moral status, then that theory will need to be consistent with the conclusion that ordinary adult humans, small infants, and most, if not all, cognitively impaired human adults have FMS. It will also need to be consistent with the conclusion that at least some non-human animals, including most if not all mammals, have partial moral status. Additionally, such a theory should be ‘species neutral’ rather than anthropocentric.27 Even though most of us do not believe that we have met non-humans with FMS, there are few who would deny that it is possible that cognitively sophisticated non-humans might exist, or that it would be appropriate to attribute FMS to them. It is not clear that any current unified theory of moral status manages to achieve all of this.28

Given the difficulty of providing a coherent theory that accounts for ordinary intuitions about moral status, it is not surprising that some philosophers have advocated rejecting at least some of these ordinary intuitions and developing revisionary theories of moral status. One influential group of revisionaries are those animal rights activists who deny that non-human animals have lesser moral status than humans. According to Tom Regan (2004), all ‘subjects-of-a-life’ have the same moral status. For Regan, subjects-of-a-life are those beings that have a particular set of properties and capacities. This set includes ‘. . . beliefs and desires; perceptions, memory and a sense of the future . . .’ (2004, p. 243). On his view, many non-human animals and humans are subjects-of-a-life.29

The utilitarian Peter Singer also argues that we should reject the ordinarily assumed division between the moral status of humans and the moral status of non-human animals. According to him, we should adopt a principle of ‘equal consideration


of interests’, and apply it to both humans and non-human animals. The most important interests that beings have, according to Singer, are interests in enjoyment and the avoidance of suffering. Some non-human animals will have a greater capacity for enjoyment and a greater capacity for suffering than do some cognitively impaired humans. So, application of the principle of equal consideration of interests will lead us to prioritize the interests of these animals over the interests of severely cognitively impaired humans (Singer 2009, pp. 574–6).

Another revisionary view is due to Jeff McMahan. McMahan (2002) raises the possibility of ‘intermediate moral status’. This is a level of moral status somewhere between that of most non-human animals and FMS. McMahan (2002) suggests that we should attribute intermediate moral status to human infants and cognitively advanced non-human animals, such as ‘higher primates’ (2002, p. 265). While McMahan (2002) allowed for three distinct levels of moral status, McMahan (2008, pp. 97–100) contemplates two levels of moral status with a rising series of degrees of moral status between the two levels. These rises take place in response to increases in ‘psychological capacity’ up to a threshold of FMS (McMahan 2008, p. 99). On McMahan’s later view, intermediate moral status still exists, but some beings with intermediate moral status have higher moral status than others.30

Another possibility is that moral status comes in degrees and rises consistently, before levelling off at the threshold where personhood is attained (DeGrazia 2008).31 There are also many other possible combinations of levels, or thresholds, of moral status and continuous rises by degrees that we could invoke to depict the relationship between increases in possession of the relevant properties and capacities we take to underpin moral status and increases in moral status.32

5.  More-than-full Moral Status?

The phrase ‘Full Moral Status’ suggests that there is a maximum level of moral status that might be obtained. However, it seems possible that there could be beings with higher moral status than the full moral status normally attributed to ordinary adult humans. It may be difficult for us to conceive of such beings, but it does not follow from the limitations of our imaginative capacities that they cannot exist.33 If we think that humans have a higher level of moral status than non-human animals, in virtue of possessing cognitive capacities that are superior to those of non-human animals, then it looks like we should be open to the possibility that beings with superior cognitive capacities to ours


Steve Clarke and Julian Savulescu

would have a higher level of moral status than us. Transhumanists, such as Bostrom (2005), urge us to try to create ‘post-humans’ with superior cognitive capacities to our own. If we manage to do so then we may also end up creating beings with higher moral status than we possess (Agar 2013a; Douglas 2013).

Suppose we could create beings with superior moral status to ours. Should we do so? There are reasons that speak in favour of creating such beings. If these beings have higher moral status than us, in virtue of having more developed cognitive capacities than we have, then, all things being equal, they will be more capable of accurate and consistent moral reasoning than we are. If they are more capable of accurate and consistent moral reasoning than us then, all else being equal, they will be more likely to perform good acts and less likely to perform bad acts than we are. From an impartial point of view, it therefore seems that we should prefer the creation of post-humans with superior moral status to the creation of mere humans. It is plausible to think that the creation of post-humans will also be good for humans. All things being equal, post-humans with more highly developed moral capacities than humans possess will be more likely than humans to treat other beings, including humans, in morally appropriate ways.

However, there are reasons to be concerned about the consequences for us of creating post-humans with higher moral status than we possess. From the point of view of post-humans with higher moral status than us, we humans would be beings of lower moral status, and morality could permit post-humans to treat us in ways we would prefer not to be treated. To understand this concern, it helps to start by thinking about the ways in which we regard it as morally permissible to treat the non-human animals that we consider to have lower moral status than us.
Many of us regard it as morally permissible to kill and eat non-human animals, sacrificing their lives for our nutrition and our gustatory pleasure. Many also regard it as morally permissible to conduct harmful experiments on non-human animals if doing so leads to medical or cosmetic benefits to humans. Also, we generally regard it as morally permissible, if not obligatory, to sacrifice non-human animals when human lives are at stake. Consider a ‘trolley case’ in which a runaway trolley is going to run over a human unless we flick a switch, diverting the trolley down a side-track and thereby killing five sheep that are stuck on the side-track. Most of us would regard it as morally permissible, if not obligatory, to flick the switch and sacrifice the lives of the five sheep to save one human life. If we are right about it being morally acceptable for us to treat beings with lower moral status than


ourselves, in the various ways that have been listed, then we should worry about the ways in which beings with higher levels of moral status might regard themselves as being entitled to treat us. By parity of reasoning, we can infer that they may well regard themselves as being entitled to kill and eat us, to conduct harmful experiments on humans, and to sacrifice the lives of many of us in order to save just one of them.34

Agar (2013a) argues that we are not under a moral obligation to create ‘post-persons’—his preferred term for post-humans with higher moral status than ourselves. He further argues that when we think about the potential harms to us that may result from the creation of such beings, it becomes clear that we have good reason not to do so. Persson (2013) offers a contrary view. He argues that Agar is biased in favour of mere humans and against post-persons. On Persson’s (2013) view, while we are not under a moral obligation to create post-persons, it would be good, all things being equal, for us to create post-persons. As post-persons will only be treating humans as morality requires them to treat humans when they sacrifice human lives for the sake of their own lives, there can be no moral objection to them sacrificing humans for the sake of their own lives. Agar (2013b) disputes that he is merely biased in favour of humans and against post-persons. His ‘Rawlsian’ conception of justice leads him to put the interests of the worse off ahead of those of the better off, and in a world in which mere humans and post-persons both exist, humans can reasonably be expected to be the worse off and post-persons the better off. To avoid this state of affairs it is better for humans not to create post-persons in the first place, or so Agar (2013b) argues.

6.  Moral Uncertainty and Moral Confusion

Suppose we managed to create post-humans with significantly superior cognitive capacities to our own. If we are unsure about the nature of the relationship between cognitive capacity and moral status, as many of us are, we may find ourselves uncertain about whether or not these post-humans have superior moral status to us. How should we treat such beings when we are uncertain about their moral status? One approach would be to treat them as our moral equals until such time as we are presented with compelling evidence that they really do have higher moral status than us. But there is a strong case for treating them differently from us, at least in some circumstances. To see why, consider a rescue situation in which a human and a cognitively superior post-human will both die if we do nothing and we have the opportunity to rescue


one of them but not both. If we knew that the post-human was of higher moral status than the human then, all else being equal, we would be morally obliged to save the post-human and allow the human to die.35 However, all we know is that the post-human might have higher moral status than the human or might have the same moral status as the human. We know that, all things being equal, it is wrong to rescue a being with lesser moral status if doing so involves allowing a being with higher moral status to die. So, we know that we might be acting wrongfully by rescuing the human rather than the post-human. However, all else being equal, there is no chance that we will act wrongfully if we rescue the post-human and allow the human to die. Even if they both have the same moral status, it is not wrong to rescue the post-human and allow the human to die. We might be acting wrongfully if we rescue the human, but all else being equal, we cannot be acting wrongfully if we rescue the post-human. Therefore, it seems clear that we ought to rescue the post-human and allow the human to die, even though we are uncertain as to whether the post-human has superior moral status to the human.

Issues of uncertainty about moral status don’t only arise when we think about post-humans. They also arise when we think about human non-human chimeras. Most of us regard it as clear that ordinary adult humans have higher moral status than non-human animals. However, beings that are part human and part non-human animal pose a threat to our intuitive sense of moral clarity. Admittedly, current examples of human non-human chimeras do not seem to present much of a challenge to our intuitive sense of moral clarity. An ordinary adult human who has a pig valve transplant in her heart is technically a chimera, but she would be regarded by the vast majority of us as having the moral status of any other ordinary adult human.
Similarly, most would regard Oncomice (mice that are genetically engineered to contain a human cancer-causing gene) as having the moral status of ordinary mice (Bok 2003). In the future, however, we may be able to create chimeras that involve a more extensive blending of human and non-human animal components, and we may find ourselves unable to determine the moral status of these more extensively blended beings.

If we are uncertain about the moral status of a particular type of being and are unable to figure out how to go about alleviating our uncertainty, then we are in a state of moral confusion. Robert and Baylis argue that the future creation of human non-human chimeras threatens to place us in a state of moral confusion and that this is an important reason for us to avoid creating such beings (2003, p. 9). Whether or not one is put off creating human non-human


chimeras by the threat of moral confusion depends, to a significant extent, on one’s ability to tolerate moral confusion. It seems that many of us already manage to tolerate a great deal of moral confusion. Many of us feel confused about the moral status of human foetuses and embryos, adult humans in persistent non-responsive states, and non-human primates. It could be argued that we have a demonstrated capacity to tolerate significant moral confusion, and the mere presence of another potential source of moral confusion should not be of any special concern to us.

A contrary view is that the moral confusion that would follow from the blurring of the boundaries between humans and non-human animals would be more severe and more threatening than forms of moral confusion that we currently have to deal with. Breaking down the distinction between the less-than-full moral status of non-human animals and human FMS could pose an existential threat to our current social order (Robert and Baylis 2003, p. 10).

The creation of artificial intelligence, cyborgs, human brain organoids, human non-human chimeras, post-humans and uploaded minds all have the potential to cause new forms of moral confusion. It is possible that all human societies will collectively agree to avoid creating any of these sorts of beings, and thereby avoid adding to our moral confusion, but this seems unlikely. It also seems unlikely that all of us would stick to such an agreement, were we to make it. So, it looks like we will need to get better at either learning to tolerate, or learning to resolve, moral uncertainty and moral confusion.36

Notes

1. A few scholars appear to proceed on the assumption that there is no level of moral status other than FMS. For discussion of such views, see Hursthouse (2013, pp. 3425–7).
2. For a recent proposal to develop a hierarchy of non-human animal moral status, see Kagan (2018).
3. A small number of us are inclined to attribute moral status to plants and perhaps ecosystems. See, for example, Goodpaster (1978).
4. A human non-human chimera is a being created by combining cells that have human origins with cells that have non-human origins. For discussion of the moral status of human non-human chimeras, see Robert and Baylis (2003), Streiffer (2005), DeGrazia (2007) and Hübner (2018).
5. Human brain organoids are artificially grown miniature organs resembling human brains. They are created by culturing human pluripotent stem cells, and are used as relatively simple in vitro models to assist in neurological experimentation. For discussion of the moral status of human brain organoids, see Cheshire (2014) and Lavazza and Massimini (2018).


6. For discussion of the moral status of artificial intelligence, see Basl (2014) and Søraker (2014).
7. For discussion of the moral status of cyborgs, see Gillett (2006) and Jotterand (2010).
8. Post-humans are humans who have radically enhanced capacities: for example, an IQ above 200 or the power of telepathy or telekinesis. For discussion of the moral status of post-humans, see Buchanan (2009), Agar (2013a; 2013b), and Douglas (2013).
9. For discussion of the moral status of uploaded minds, see Sandberg (2014).
10. Some authors worry that the apparent convenience and utility of talk of moral status comes at a price and suggest that we should not pay that price. Sachs (2011) argues that talk of moral status can be obfuscating. Horta argues that moral status talk can distort our understanding of how we should behave toward some individuals in particular circumstances (2017, p. 909).
11. However, it is subject to the occasional sceptical challenge. For discussion of how defenders of the assumption might try to respond to sceptical challenges, see McMahan (2008).
12. There was a long delay between the initial rise of modern liberal democracy and the end of institutional slavery. The US Declaration of Independence was signed in 1776, and is surely one of the foundational documents of modern liberal society. It made the unqualified assertion that it is self-evident that all men are created equal. However, it would take 89 years and a hugely destructive civil war before institutional slavery was abolished in all parts of the US.
13. For discussion of the challenge that slave-owning societies present, for those who assume that humans have equal moral status, see Lindsay (2005).
14. Aristotle’s theory of natural slavery is much more complicated than this brief characterization suggests. For further discussion, see Smith (1983).
15. An influential approach was to appeal to the now discredited theory of polygenesis, which holds that different human races evolved separately, in different parts of the world. The purportedly separate origins of different races was then appealed to, in an attempt to explain why some races were better suited to freedom and others better suited to slavery. For discussion of polygenetic defences of slavery amongst eighteenth-century philosophers, see Watkins (2017). For a mid-nineteenth-century polygenetic defence of slavery, see Nott and Gliddon (1854).
16. Shepherd (2017; 2018) has recently argued for the moral insignificance of self-consciousness.
17. Metz’s (2012) appeal to the capacity to participate in communal relationships as necessary for FMS is one component of a relational account of moral status he defends.
18. Not all arguments for abortion rely on appeals to the moral status of foetuses and embryos. A famous argument for abortion that does not is due to Thomson (1971).
19. For arguments against the relevance of appeals to potential for moral status, see McMahan (2002, pp. 308–29), Persson (2003), and Singer and Dawson (1988).
20. There are different senses of personhood discussed in the philosophical literature, including metaphysical and legal personhood. The sense that is most relevant to discussions of moral status is ‘moral personhood’.
21. See, for example, Singer (1993), Baker (2000), Warren (1997, Chapter Four) and McMahan (2002).
22. For further discussion, see Schroedel (2000).


23. Those who take membership of our species as necessary for FMS are more likely to employ theological rather than philosophical arguments to support their case. One influential Christian theological argument for the conclusion that humanity is necessary for FMS appeals to the biblical doctrine that humans are the only beings made in the image of God. See Genesis 1:27.
24. For extended discussion of this line of reasoning, see Singer (2009, pp. 568–70).
25. For discussion of various ways to define species, see Ereshefsky (2017). See also Robert and Baylis (2003, pp. 2–4).
26. The phenomenon of ring species makes further trouble for appeals to species membership as a necessary condition for FMS. For discussion, see Persson and Savulescu (2010).
27. For discussion of the ‘species neutrality requirement’, see Liao (2010, pp. 159–63).
28. Metz reasons similarly (2012, pp. 399–400).
29. Regan doesn’t say exactly where the line is to be drawn between beings that are and are not subjects-of-a-life. He does, however, mention that ‘mentally normal mammals of a year or more’ are above that line (2004, p. xvi).
30. For discussion of the development of McMahan’s views between 2002 and 2008, see Ebert (2018, pp. 84–8).
31. DeGrazia describes this position approvingly but does not explicitly endorse it. He speculates that moral status may vary according to the extent of a being’s interests and these may depend on ‘cognitive, affective and social complexity’ (2008, p. 192).
32. Douglas depicts six different possible relationships between moral status and the underlying property of mental capacity (2013, p. 478).
33. Buchanan allows that it is possible for beings with higher moral status than humans possess to exist. He thinks, however, that our inability to conceive of such beings means that any argument that might be mounted to try to persuade us that particular beings actually have higher moral status than humans would fail to be convincing to us (2009, p. 363).
34. If they were to reason this way then they would be treating moral status as having relative value. An alternative possibility is that they might regard human moral status as an absolute value and then reason that their relatively superior moral status should not be relevant to the moral permissibility of treating humans in particular ways.
35. Note, though, that if reasons can be agent-centred or if some forms of partiality are justified, then it may be morally permissible to give priority to members of one’s own group, even if they have lower moral status (Savulescu 1998).
36. Thanks to Alan Crosier, Katrien Devolder, Tom Douglas, John-Stewart Gordon and Suzanne Uniacke for helpful comments on earlier versions of this chapter.

References

Agar, Nicholas (2013a). ‘Why is it Possible to Enhance Moral Status and Why Doing so is Wrong?’, Journal of Medical Ethics, 39, 2, 67–74.
Agar, Nicholas (2013b). ‘Still Afraid of Needy Post-Persons’, Journal of Medical Ethics, 39, 2, 81–3.


Baker, L. R. (2000). Persons and Bodies: A Constitution View. Cambridge: Cambridge University Press.
Basl, John (2014). ‘Machines as Moral Patients We Shouldn’t Care About (Yet): The Interests and Welfare of Current Machines’, Philosophy and Technology, 27, 79–96.
Bok, Hilary (2003). ‘What’s Wrong with Confusion?’, American Journal of Bioethics, 3, 3, 25–6.
Bostrom, Nick (2005). ‘Transhumanist Values’, Journal of Philosophical Research, 30 (Supplement), 3–14.
Bostrom, Nick and Yudkowsky, Eliezer (2014). ‘The Ethics of Artificial Intelligence’. In The Cambridge Handbook of Artificial Intelligence, edited by K. Frankish and W. M. Ramsey, pp. 316–34. Cambridge: Cambridge University Press.
Buchanan, Allen (2009). ‘Moral Status and Human Enhancement’, Philosophy and Public Affairs, 37, 4, 346–81.
Callahan, Daniel (1993). The Troubled Dream of Life: Living with Mortality. New York: Simon and Schuster.
Camosy, Charles C. (2008). ‘Common Ground on Surgical Abortion?—Engaging Peter Singer on the Moral Status of Potential Persons’, Journal of Medicine and Philosophy, 33, 6, 577–93.
Cheshire, William (2014). ‘Miniature Human Brains: An Ethical Analysis’, Ethics and Medicine, 30, 7–12.
DeGrazia, David (2007). ‘Human-Animal Chimeras: Human Dignity, Moral Status, and Species Prejudice’, Metaphilosophy, 38, 2–3, 309–29.
DeGrazia, David (2008). ‘Moral Status as a Matter of Degree’, Southern Journal of Philosophy, 46, 181–98.
Douglas, Thomas (2013). ‘Human Enhancement and Supra-Personal Moral Status’, Philosophical Studies, 162, 473–97.
Ebert, Rainer (2018). ‘Mental-Threshold Egalitarianism: How Not to Ground Full Moral Status’, Social Theory and Practice, 44, 1, 75–93.
Ereshefsky, Marc (2017). ‘Species’, Stanford Encyclopedia of Philosophy: (accessed 7 November 2018).
Gillett, Grant (2006). ‘Cyborgs and Moral Identity’, Journal of Medical Ethics, 32, 79–83.
Goodpaster, Kenneth E. (1978). ‘On Being Morally Considerable’, Journal of Philosophy, 75, 6, 308–25.
Horta, Oscar (2017). ‘Why the Concept of Moral Status should be Abandoned’, Ethical Theory and Moral Practice, 20, 899–910.


Hübner, Dietmar (2018). ‘Human-Animal Chimeras and Hybrids: An Ethical Paradox behind Moral Confusion?’, Journal of Medicine and Philosophy, 43, 187–210.
Hursthouse, Rosalind (2013). ‘Moral Status’. In The International Encyclopedia of Ethics, edited by H. LaFollette, pp. 3422–32. Oxford: Blackwell.
Jaworska, Agnieszka (2007). ‘Caring and Full Moral Standing’, Ethics, 117, 460–97.
Jaworska, Agnieszka and Tannenbaum, Julie (2018). ‘The Grounds of Moral Status’, Stanford Encyclopedia of Philosophy: (accessed 5 October 2018).
Jotterand, Fabrice (2010). ‘Human Dignity and Transhumanism: Do Anthro-Technological Devices Have Moral Status?’, American Journal of Bioethics, 10, 7, 45–52.
Kagan, Shelley (2018). ‘For Hierarchy in Practical Ethics’, Journal of Practical Ethics, 6, 1, 1–18.
Lavazza, Andrea and Massimini, Marcello (2018). ‘Cerebral Organoids: Ethical Issues and Consciousness Assessment’, Journal of Medical Ethics, 44, 606–10.
Liao, S. Matthew (2010). ‘The Basis of Human Moral Status’, Journal of Moral Philosophy, 7, 159–79.
Lindsay, Ronald A. (2005). ‘Slaves, Embryos and Nonhuman Animals: Moral Status and the Limitations of Common Morality Theory’, Kennedy Institute of Ethics Journal, 15, 4, 323–46.
McMahan, Jeff (2002). The Ethics of Killing: Problems at the Margins of Life. Oxford: Oxford University Press.
McMahan, Jeff (2008). ‘Challenges to Human Equality’, Journal of Ethics, 12, 81–104.
Metz, Thaddeus (2012). ‘An African Theory of Moral Status: A Relational Alternative to Individualism and Holism’, Ethical Theory and Moral Practice, 15, 387–402.
Nott, J. C. and Gliddon, Geo. R. (1854). Types of Mankind: Or, Ethnological Researches Based Upon the Ancient Monuments, Paintings, Sculptures and Crania of Races, and Upon their Natural Philological and Biblical History. Philadelphia: Lippincott, Grambo and Co. Available at: (accessed 16 October 2018).
Persson, Ingmar (2003). ‘Two Claims about Potential Human Beings’, Bioethics, 17, 5–6, 503–17.
Persson, Ingmar (2013). ‘Is Agar Biased against “Post-persons”?’, Journal of Medical Ethics, 39, 2, 77–8.
Persson, Ingmar and Savulescu, Julian (2010). ‘Moral Transhumanism’, Journal of Medicine and Philosophy, 35, 6, 656–69.


Quinn, Warren (1984). ‘Abortion: Identity and Loss’, Philosophy and Public Affairs, 13, 1, 24–54.
Regan, Tom (2004). The Case for Animal Rights. Berkeley: The University of California Press.
Robert, Jason Scott and Baylis, Françoise (2003). ‘Crossing Species Boundaries’, American Journal of Bioethics, 3, 3, 1–13.
Sachs, Benjamin (2011). ‘The Status of Moral Status’, Pacific Philosophical Quarterly, 92, 87–104.
Sandberg, Anders (2014). ‘Ethics of Brain Emulations’, Journal of Experimental and Theoretical Artificial Intelligence, 26, 3, 439–57.
Savulescu, Julian (1998). ‘The Present-aim Theory: A Submaximizing Theory of Rationality?’, Australasian Journal of Philosophy, 76, 229–43.
Savulescu, Julian (2009). ‘The Human Prejudice and the Moral Status of Enhanced Beings: What Do We Owe the Gods?’ In Human Enhancement, edited by J. Savulescu and N. Bostrom, pp. 211–47. Oxford: Oxford University Press.
Schroedel, Jean Reith (2000). Is the Fetus a Person? A Comparison of Policies Across the Fifty States. Ithaca, NY: Cornell University Press.
Shea, Matthew (2018). ‘Human Nature and Moral Status in Bioethics’, Journal of Medicine and Philosophy, 43, 115–31.
Shepherd, Joshua (2017). ‘The Moral Insignificance of Self Consciousness’, European Journal of Philosophy, 25, 2, 398–415.
Shepherd, Joshua (2018). Consciousness and Moral Status. London: Routledge.
Singer, Peter (1993). Practical Ethics, 2nd Edition. Cambridge: Cambridge University Press.
Singer, Peter (2009). ‘Speciesism and Moral Status’, Metaphilosophy, 40, 3–4, 567–81.
Singer, Peter and Dawson, Karen (1988). ‘IVF Technology and the Argument from Potential’, Philosophy and Public Affairs, 17, 2, 87–104.
Smith, Nicholas D. (1983). ‘Aristotle’s Theory of Natural Slavery’, Phoenix, 37, 2, 109–22.
Søraker, Johnny Hartz (2014). ‘Continuities and Discontinuities Between Humans, Intelligent Machines and other Entities’, Philosophy and Technology, 27, 31–46.
Streiffer, Robert (2005). ‘At the Edge of Humanity: Human Stem Cells, Chimeras and Moral Status’, Kennedy Institute of Ethics Journal, 15, 4, 347–70.
Thomson, Judith Jarvis (1971). ‘A Defense of Abortion’, Philosophy and Public Affairs, 1, 1, 47–66.
Tooley, Michael (1972). ‘Abortion and Infanticide’, Philosophy and Public Affairs, 2, 1, 37–65.


Warren, Mary Anne (1997). Moral Status: Obligations to Persons and Other Living Things. New York: Oxford University Press.
Watkins, Margaret (2017). ‘ “Slaves among Us”: The Climate and Character of Eighteenth-Century Philosophical Discussions of Slavery’, Philosophy Compass, 12, e12393.
Watt, Helen (1996). ‘Potential and the Early Human’, Journal of Medical Ethics, 22, 222–6.
Williams, Bernard (2006). ‘The Human Prejudice’. In Philosophy as a Humanistic Discipline, edited by A. W. Moore, pp. 135–52. Princeton: Princeton University Press.


PART I

THE IDEA OF MORAL STATUS


2
Suffering and Moral Status
Jeff McMahan

1.  Introduction

In debates about certain moral issues, it is commonly assumed that an individual’s moral status makes a difference to how seriously objectionable, if at all, it is to kill that individual. It has, for example, been a common argument for the permissibility of abortion that the moral status of a foetus is lower than that of a child or adult, that the reasons not to kill a foetus are therefore weaker than those that oppose the killing of a person, and that they can thus be outweighed by a greater range of considerations that favour killing. Similarly, most people believe that most or all animals have a lower moral status than that of human persons and therefore that killing animals, even if generally wrong, is less seriously wrong, other things being equal, than killing persons.

Yet even defenders of abortion accept that it must be constrained in ways that prevent it from causing the foetus unnecessary pain. And there is an increasing tendency among people who continue to eat animal products to prefer those that have been produced without the extremes of suffering inflicted on animals by factory farming. The practice of experimentation on animals is similarly constrained. Although experimenters are permitted to kill experimental animals when the experiments are concluded, the experiments are regulated to prevent the infliction of unnecessary or disproportionate suffering. These facts suggest that most people believe, though perhaps without articulating it in these terms, that the moral status of some individuals, such as foetuses and animals, is lower than that of adult human persons and that the reasons not to kill these individuals are weaker than the reasons not to kill persons. Yet these same facts reveal a general ambivalence about the idea that the reason not to cause an individual with lower moral status to suffer is also weaker.

Jeff McMahan, Suffering and Moral Status. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Jeff McMahan 2021. DOI: 10.1093/oso/9780192894076.003.0002

In this chapter, I address the question whether differences in moral status affect the strength of the reason not to cause, or to prevent, suffering. In section 2, I present an example that provides intuitive support for the claim


that reasons concerned with causing or preventing suffering vary in strength with the moral status of the victim.

2.  Unconnected Individuals

Suppose there are individuals with capacities for consciousness and sentience—that is, individuals that can experience sensations of pleasure and pain—that nevertheless lack memory as well as any conative states or prospective attitudes such as desire, intention, hope, fear, and so on. Such beings, if they exist, exemplify the most extreme form of what is commonly referred to as ‘living entirely in the present moment’. They have no psychological connections to themselves in the past or future. These are ‘unconnected individuals’ (McMahan 2002, pp. 75–7 and 475–6).

There are two main types of unconnected individual. There are, first, those whose lack of psychological connections to themselves in the past and future is temporary, as they will later develop such connections. Foetuses that have just begun to be conscious, and have only the most rudimentary form of consciousness, are arguably unconnected individuals—but only for a certain period, assuming that they will continue to live and develop psychologically. But there are presumably unconnected individuals whose psychological isolation in the moment is permanent, such as certain comparatively simple forms of animal life. It does not matter, however, for our purposes, whether there actually are any permanently unconnected individuals. It is sufficient that there could be. I will nevertheless write as if there are some unconnected individuals. And I will be concerned only with permanently unconnected individuals, so that, in the remainder of this chapter, all references to ‘unconnected individuals’ are to permanently unconnected individuals. I believe that much of what I will say about permanently unconnected individuals applies as well to temporarily unconnected individuals. But that is contentious and I will not try to defend it here. (I will also not consider the possibility that there are permanently unconnected individuals that were once psychologically connected over time.)

Unconnected individuals are, I believe, ‘replaceable’ (Singer 2011, pp. 105–7). Suppose that an existing unconnected individual that is having pleasant experiences could continue to exist and have those experiences. Or this individual could cease to exist and at the same time a different but qualitatively identical unconnected individual could begin to exist and have exactly the same experiences. The claim that the first unconnected individual is


Suffering and Moral Status  25 replaceable is just the claim that it makes no difference, or does not matter, which of these two possibilities occurs. An individual that is replaceable in this sense does not itself matter. It mat­ ters, if at all, only insofar as it provides the physical basis for states of con­ sciousness that may be intrinsically good or bad. Assuming that an unconnected individual’s pleasurable and painful states of consciousness are entirely constitutive of its well-­being from moment to moment, it seems that its well-­being matters even though the individual does not in itself matter. Some philosophers have argued, however, that an individual’s well-­being mat­ ters only because, and perhaps only to the extent that, the individual itself matters. According to this view, if I am right that an unconnected individual does not matter, it follows that its states of consciousness do not matter. It does not matter, for example, whether its states of consciousness are pleasur­ able or painful. This cannot be right. Whether an unconnected individual’s experience is of physical pleasure or physical pain—pain that is experienced as aversive—does matter. If its experience is pleasurable, that is good in itself and arguably good for the individual; and if its experience is painful, that is bad in itself and arguably bad for the individual. Does an individual that does not itself matter but whose states of con­ sciousness do matter have moral status? That it does not matter in itself sug­ gests that it does not have moral status. But that the character of its states of consciousness does matter, and matter morally, suggests that it does have moral status. There is, however, no substantive issue here, only a matter of finding terms to draw these distinctions. It seems that while an unconnected individual is experiencing suffering, it is bad for it to be in that state. There is a moral reason to stop the suffering for the individual’s own sake. 
In this respect, an unconnected individual is different from a plant, for there can be no reason to do anything for the sake of a plant, even if there is a sense in which plants can, as some philosophers claim, be benefited or harmed (for example, by the provision or withholding of water or sunlight). Plants lack the capacity for consciousness and therefore can have neither well-being nor ill-being. But, like a plant, an unconnected individual does not matter in itself.

I suggest that we appropriate two terms from the philosophical literature and assign them the following meanings. We can say that because there can be reasons to act in certain ways for an unconnected individual's own sake, it has moral standing, which plants and all other non-conscious entities lack. But because unconnected individuals are replaceable and thus do not matter in themselves, but matter only as the experiencers of states of consciousness


that matter, they do not have moral status. I propose, in other words, to use the term 'moral status' in such a way that all, but only, those individuals who themselves matter for their own sake have it. Whereas plants lack both moral standing and moral status, unconnected individuals have moral standing but lack moral status. (If, as seems likely, unconnected individuals are the only individuals that have moral standing but lack moral status, the notion of moral standing as I understand it is of limited significance.)

Part of the explanation of why an unconnected individual is replaceable is that it is not harmed by ceasing to exist, or benefited by being enabled to continue to live. Because of its lack of any psychological connections to itself in the future, it has no interest in continuing to live (in the sense of 'having no personal stake in', rather than 'not being interested in'). More precisely, it has no 'time-relative interest' in continuing to live, which is a function not only of the magnitude of a benefit or harm an individual might receive but also of the degree to which the individual that has the interest would be psychologically related to itself at the time at which the benefit or harm would occur.1 An unconnected individual's continuing to live is no different from a different unconnected individual's coming into existence. Killing an unconnected individual is thus relevantly like preventing an unconnected individual from coming into existence. All this is consistent with the claim that unconnected individuals lack moral status.

But the fact that an unconnected individual would not be harmed by being painlessly killed does not by itself entail that the individual lacks moral status and thus would not be wronged by being killed. Imagine a person whose subsequent life would unavoidably contain much more that would be intrinsically bad for her than would be intrinsically good for her.
Such a person might not be harmed, and might indeed be benefited, by being painlessly killed. But if she understood what her future life would be like and still wanted to continue to live, and thus refused to consent to be killed, she would be wronged by being killed. For she has a moral status that grounds reasons to act in certain ways for her own sake that are nevertheless independent of what might be in or against her interests. They are reasons of respect for her, as an individual who matters because of her intrinsic nature.

Reasons of this sort could in principle apply to one's treatment of an unconnected individual. In addition to the reason deriving solely from the intrinsic badness of suffering, there might be a further reason not to cause an unconnected individual to suffer that is grounded in a requirement of respect for its nature. Or there might be a reason deriving from its nature not to kill an unconnected individual, despite its having no interest in continuing to live.


It might be wronged, even though not harmed, by being killed. In that case, it would not, of course, be replaceable in Singer's sense.

Yet it is difficult to identify any basis for these reasons of respect for the nature of an unconnected individual. When an unconnected individual experiences suffering, there seems to be almost no distinction between the conscious state of suffering and the subject of that suffering. The unconnected individual is little or nothing more than the location of the suffering. When I claimed earlier that there is a reason grounded in the badness of suffering to stop the suffering of an unconnected individual for its own sake, that seemed to add little or nothing of substance to the claim that there is a reason to stop an intrinsically bad state of consciousness from continuing. An unconnected individual is, it seems, little or nothing more than a sequence of experiences in a particular location. Persons, by contrast, are far more than just the locations of experiences. In this chapter, I use the term 'person' to refer to any individual who exceeds some threshold level of psychological capacity, with minimum capacities for self-consciousness and rationality. Even if reductionism about personal identity is true, persons are psychologically substantial entities of vast psychological complexity whose mental states are highly unified both synchronically and diachronically.2 Because unconnected individuals lack capacities for self-consciousness, memory, agency, and so on, and thus altogether lack the psychological integration over time that these capacities make possible, it is scarcely intelligible to suppose that they could be wronged, in addition to being harmed, by being caused to suffer, or by being harmlessly caused to cease to exist.

3. Combining Reasons of Different Types

The possibility of unconnected individuals thus supports the view that there are two distinct types of reason not to harm an individual, for example, by causing it to suffer. One is given entirely by the intrinsic badness of suffering. The strength of this reason varies with the degree to which the suffering is bad, which itself is a function of the intensity, duration, and perhaps quality of the suffering. The other type of reason is given by relevant facts about an individual's intrinsic nature that make the individual matter in itself. These are the bases of the individual's moral status. In any particular case, of course, there may be reasons not to cause an individual to suffer, or to prevent an individual from suffering, other than these two, such as reasons deriving from special relations between the agent and the potential victim, side effects on


others, or distributional considerations such as equality or priority. But these two sources of reasons—intrinsic badness and moral status—seem fundamental.

These reasons may or may not be merely additive. Suppose that one could either prevent a person from experiencing suffering of a certain intensity and duration or prevent an unconnected individual from experiencing suffering of greater intensity and duration. In that case, the reason to prevent the suffering of the unconnected individual that is given solely by the intrinsic badness of suffering could be stronger than the corresponding reason to prevent the suffering of the person. If we simply combine the reason given by the person's moral status with the reason given by the badness of the person's suffering, the combined reasons might not outweigh the reason given by the greater intrinsic badness of the unconnected individual's suffering. It might be, of course, that if the difference in badness between the unconnected individual's suffering and the person's suffering were sufficiently large, that would be the correct conclusion. But it does not seem the right way to think about this example to suppose that the reason given by the person's moral status is of a fixed strength and should just be added to the reason given by the intrinsic badness of the person's suffering. At a minimum, the strength of the reason given by status must vary with the badness of the suffering, so that the reason given by status is stronger the worse the suffering would be. This may be sufficient to preserve additivity, which I suspect is the most plausible way of conceiving of the relation between the two types of reason. But another way to understand the interaction of the two types of reason is to assume that the reason given by status functions as a multiplier greater than 1 of the strength of the reason given by the badness of the suffering.
Assuming for the sake of argument an unrealistic degree of precision, suppose that the badness of the unconnected individual's suffering would be –100, that the badness of the person's suffering would be –90, and that the strengths of the reasons given by the badness are proportional to the degree of badness. Because the unconnected individual lacks moral status, there is no multiplier of the badness of its suffering (or, equivalently, the multiplier is 1, leaving the strength of the reason given by the intrinsic badness of suffering neither strengthened nor weakened). But if, for example, the reason given by the person's moral status functions as a multiplier of 2, the reasons to prevent the person's suffering have a combined strength equivalent to the strength of the reason to prevent suffering of –180 that is given solely by suffering's intrinsic badness. How large the multiplier is depends not only on an individual's level of moral status but also on the relative importance of reasons of moral status and reasons grounded in the intrinsic badness of suffering.
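The multiplier proposal in this paragraph can be stated compactly. The symbols below (R for the strength of the reason to prevent an episode of suffering, B for the intrinsic badness of that suffering, and m for the status multiplier) are mine rather than the author's; the numbers are the text's illustrative values:

```latex
R \;=\; m \times |B|
\qquad
\begin{array}{lll}
\text{unconnected individual:} & m = 1,\; B = -100 & \Rightarrow\; R = 100 \\
\text{person:} & m = 2,\; B = -90 & \Rightarrow\; R = 180
\end{array}
```

On these figures the reason to prevent the person's lesser suffering (180) outweighs the reason to prevent the unconnected individual's greater suffering (100), a ranking that merely adding a fixed status reason to the badness-based reason need not deliver.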


This is closely related to, though slightly different from, the way that Shelly Kagan understands the way in which an individual's moral status is relevant to the extent to which that individual's suffering—or, more generally, well-being—matters. Kagan argues that a fixed increase or decrease in an individual's well-being can affect the value of the outcome differently depending on what the individual's moral status is. He suggests that the individual's moral status functions as a multiplier for the individual's well-being. He suggests that if, for example, we arbitrarily set the multiplier for persons at 1, the multiplier for the well-being of all individuals with a moral status lower than that of a person must be some fraction of 1. (On this view, for an unconnected individual's suffering to matter at all, the individual would have to have some minimal moral status.) Suppose, for example, that some animal has a moral status that is only half that of a person. According to Kagan's view, the value of the outcome in which a person's suffering is –10 is –10, whereas the value of the outcome in which the animal's suffering is –10 is only –5.

The view I have proposed may be preferable to Kagan's in one respect. According to my proposal, the reason to prevent some experience of suffering is at least proportional in strength to the intrinsic badness of the suffering, but may be augmented in strength by being combined with a further reason given by the individual's moral status. The view thus avoids any suggestion that the all-things-considered reason to prevent or not to cause the suffering of some individuals is weaker than the reason given by the intrinsic badness of their suffering, or that the extent to which their suffering makes the outcome worse is less than the extent to which it is intrinsically bad for them.
Yet, because Kagan says that setting the multiplier for persons at 1 is 'arbitrary', it seems open to him instead to assign the multiplier of 1 to unconnected individuals and then set the multiplier for persons at, for example, 2, thereby dispersing the multipliers for most animals across the range between 1 and 2. In short, it may be that all he needs to do to avoid the implication I noted is to avoid having fractional multipliers.
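The rescaling suggested here can be made explicit. On Kagan's view, the contribution of a change W in an individual's well-being to the value of the outcome is weighted by a status multiplier m (the symbols are mine; the numbers are the text's):

```latex
V \;=\; m \cdot W
\qquad
\begin{array}{ll}
\text{Kagan's scale:} & m_{\mathrm{person}} = 1,\; m_{\mathrm{animal}} = \tfrac{1}{2}
\;\Rightarrow\; V_{\mathrm{animal}} = \tfrac{1}{2}\times(-10) = -5 \\[2pt]
\text{rescaled:} & m_{\mathrm{unconnected}} = 1,\; m_{\mathrm{person}} = 2,\; 1 < m_{\mathrm{animal}} < 2
\end{array}
```

Since only the ratios of the multipliers matter when comparing outcomes, the rescaled assignment ranks cases exactly as before; but because every multiplier is now at least 1, no individual's suffering is discounted below its intrinsic badness.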

4. A Challenge

Many moral philosophers accept, as Kagan does, and as I do, that there is a hierarchy of moral status. They accept that some individuals—for example, a cow—matter less in themselves than some others—for example, a chimpanzee, or a person. This is manifest most obviously in beliefs about killing. I suspect that most philosophers, and others, believe that it is less seriously


wrong to kill a cow that would otherwise live contentedly for another ten years than to kill a person, without her consent, who would otherwise live less contentedly for only another week. Even though the cow's interest in continuing to live would be stronger than that of the person, the person's higher moral status makes killing her more seriously wrong. (I return to this kind of comparison in section 5.)

It is less intuitive, however, at least to many moral philosophers, of whom I am one, to suppose that the physical suffering of a cow in itself matters less than the equivalent suffering of a person. Kagan thinks that it does matter less, and my claim that the suffering of an unconnected individual matters less, if true, supports his view. But some philosophers think either that there are no differences of moral status or that, if there are such differences, they do not affect the strength of the reason not to cause suffering, or to prevent it. They seem to believe that the reason to prevent or not to cause suffering is grounded solely in the intrinsic badness of suffering itself and thus cannot be affected by empirical characteristics of the sufferer that may be the ground of that individual's moral status. Thus, David DeGrazia asks rhetorically, 'How can one's intelligence, sensitivity, and the like be relevant to how much a certain amount of pain or suffering matters?' (DeGrazia 1996, p. 249). As Peter Singer tersely observes, 'pain is pain' (Singer 2009, p. 20).3

These claims should be, and perhaps implicitly are, qualified in at least two ways. First, the claim I have attributed to some philosophers is that an individual's empirical properties cannot affect the strength of the reason not to cause that individual to suffer. This is compatible with the view that the strength of this reason might be diminished or overridden by other reasons concerned with the best or most just distribution of benefits or harms.
It is also compatible with the claim that the reason not to cause an individual to suffer can be overridden or even nullified if the individual is morally liable to be caused to suffer or deserves to suffer.4 The second qualification is that DeGrazia's and Singer's claims apply only to comparisons of instances of equivalent suffering in which all other relevant considerations are equal. In most actual cases, there are important differences among these other considerations.

Differences in intelligence and sensitivity can affect how bad an episode of physical suffering is for different individuals. There are at least three obvious ways in which this could be the case. Persons typically have lives in which the exercise of their agency more or less continuously enhances or enriches their well-being from moment to moment. This is not true of unconnected individuals, which lack the capacity for agency, though they may move and react in instinctive or reflexive ways.


Because of these differences, suffering normally has significant 'opportunity costs' for a person that it cannot have in the life of an unconnected individual. For a person, suffering is always a distraction and can sometimes be paralysing, in that it can make it impossible for a person to engage in normal modes of agency or to experience most other dimensions of well-being. Suffering may of course have opportunity costs for unconnected individuals, in that it makes passive pleasures impossible in the way it can make intellectual or artistic achievement impossible for a person while it is occurring. But the opportunity costs for an unconnected individual are trivial in comparison to those for a person. Most animals have psychological capacities more highly developed than those of an unconnected individual but less highly developed than those of a person. The opportunity costs of an animal's suffering are therefore typically greater than those of the equivalent suffering of an unconnected individual but less than those of the equivalent suffering of a person.

A second way in which a person's suffering can be worse is that it can have deleterious effects throughout the whole of the person's subsequent life. A paradigmatic example of this is the trauma of sexual abuse in childhood, which can, in some cases, transform what would have been a long and perhaps unusually happy life into a life of torment, misery, and, sometimes, wrongdoing. By contrast, because the life of an unconnected individual lacks any psychological integration over time, the bad effects of suffering in such a life are confined to the time during which the suffering occurs. The bad effects of earlier suffering in the later lives of most animals are worse than those in the lives of unconnected individuals but tend to be less bad than those in the lives of persons.
Because the levels of well-being that are accessible to persons are significantly higher than those accessible to animals, and also because persons tend to have longer lives, the difference that earlier suffering can make to the overall well-being in a person's later life is typically significantly greater than the difference that earlier suffering can make in the later life of an animal.

A third way in which an individual's intelligence, sensitivity, or imagination can affect the badness of the individual's suffering is that the suffering can be mitigated or intensified by the individual's understanding of its causes and significance, and thus by knowledge of whether it will cease or continue, abate or worsen.

These three considerations, while morally significant, are all extrinsic. They are not concerned with suffering itself but with good things that suffering may exclude, bad effects that suffering may cause, and ways that suffering


32  Jeff M c Mahan may be increased or decreased through understanding. We can therefore ignore them for present purposes and return to the general question of whether an individual’s moral status can affect the strength of the reason not to cause that individual to suffer. When we explicitly control for these extrinsic considerations, perhaps by reflecting only on cases in which there are no differences of these sorts, the view expressed by DeGrazia and Singer gains in plausibility. Yet the intuitive plausibility of that view is, I believe, challenged by the possibility of uncon­ nected individuals. Even when one controls for extrinsic factors, it is hard to believe that one’s reason not to cause a person to suffer is no stronger than one’s reason not to cause an unconnected individual to experience equivalent suffering. That the suffering afflicts an individual who matters seems to strengthen the reason not to inflict it.

5. A Gradualist Understanding of Moral Status

Most animals are not unconnected individuals. Certainly all adult mammals have memories, beliefs, desires, and other psychological states that form connections that unify their lives over certain periods of time. These animals matter in themselves. They have moral status. But they seem to matter less than persons. As I noted, this is most apparent intuitively when we reflect on the morality of killing.

Suppose a 20-year-old person can survive only if she receives an organ transplant in the next twelve hours. There are only two options. As a result of advances in transplantation techniques, surgeons could use an organ taken from a pig. The pig with the most favourable tissue type is young and could be expected to live another ten years. The only other possible source is a person with a closely related tissue type who is terminally ill and can be expected to live no more than a few days. This person has no living relatives or friends and no important projects he could bring to completion during his final days. Although it is legally possible for him to agree to be killed as a means of saving the 20-year-old, he refuses to consent. The surgeons have three options: allow the 20-year-old to die, kill the pig, or surreptitiously kill the terminally ill person. The pig would lose more good life in being killed than the person would and therefore, despite the weaker psychological unity within its life, its interest in continuing to live is stronger. Moreover, the probability that the transplant will be completely successful will be slightly higher if the surgeons


use the human organ. Even so, it seems that what they ought to do is to kill the pig. This belief seems best explained by differences in moral status. The killing of a person involves more than just the infliction of the loss of further life worth living. In the absence of the person's consent, it is an assault on the existence of an individual whose nature demands certain forms of respect.

The belief that the surgeons ought to kill the pig cannot be dismissed as speciesist. Many people believe, correctly in my view, that considerations that would be sufficient to justify killing a foetus via abortion would almost never be sufficient to justify killing a person, even if the person would otherwise live only another month, or week, whereas the foetus would otherwise live for many decades. Here again the best explanation is that the foetus has a lower moral status because of its lower psychological capacities.

These intuitions can be defended by appeal to a 'gradualist' understanding of moral status that is non-consequentialist in character. On this view, to say that an individual has moral status is to say that there are deontological constraints that govern what one may do to it and, perhaps, what one may or may not allow to happen to it. The reason to obey the constraints is given by the individual's moral status. But unconnected individuals have no moral status. The reason, if there is one, not to kill an unconnected individual, or to save its life, derives solely from the intrinsic goodness of its future experiences, and the strength of the reason is proportional to the extent of the goodness. There is no additional constraint against killing an unconnected individual. Unconnected individuals are replaceable. But animals other than unconnected individuals have moral status and are not replaceable.
On the gradualist view, their moral status varies with the degree to which they possess the psychological capacities, whichever those may be, that are the basis of moral status. Animals that possess the relevant capacities to a lower degree have a lower moral status. The constraints that govern our treatment of them are weaker than those that govern our treatment of animals with higher capacities. If the constraints take the form of rights, an animal with lower capacities has rights that are weaker than the corresponding rights of animals with higher capacities.5

According to this gradualist view, there is a constraint that prohibits the killing of an animal with low moral status as a means of saving another animal with the same moral status if the latter's interest in continuing to live is only slightly stronger than that of the former. But the constraint is comparatively weak, so that if the other animal's interest in continuing to live is much


stronger, the constraint might be overridden so that it would be permissible to sacrifice the one to save the other. Similarly, it might be permissible to sacrifice the one to save the lives of two with the same status whose interest in continuing to live is comparable in strength to that of the one. And it might also be permissible to sacrifice the life of the animal with lower moral status to save the life of an animal with higher moral status, even if the latter's interest in continuing to live is no stronger than that of the former.

When it is permissible to kill one animal with lower moral status to save one or more animals with the same status, the reason not to kill the one animal that is grounded in its moral status is outweighed by the reason to satisfy the stronger interest, or interests. But, if it can be permissible to kill an animal with lower moral status to save another with higher moral status when the latter's interest in continuing to live is no stronger than the former's, it seems that the constraint against killing the animal with lower status must be overridden by a reason deriving, not from interests or well-being, but from the other animal's higher moral status. If this is correct, the significance of moral status is not just that it brings individuals within the scope of moral constraints. It is also that moral status confers something like positive claims or rights on those who possess it. In this case, there is a moral reason to save the animal with the higher status that is grounded in something other than its interest in continuing to live. The basis of that reason seems to be the animal's moral status.

Gradualist views of this sort are supported not only by common beliefs about moral differences between animals and persons, and beliefs about moral differences between lower animals and higher animals, but also by facts about individual human psychological development and human evolution.
All readers of this essay once had psychological capacities lower, or more rudimentary, than those of any normal adult mammal; yet all gradually developed, by incremental advancements, into individuals capable of detecting the flaws in the essay's arguments. There was no point, nor even any relevantly short period of indeterminacy, at or during which each of us passed some threshold between the moral status of an animal and that of a person. Even those who accept that the moral status of a conscious foetus is lower than the status it will have as an adult person find it difficult to believe that this individual might have a low moral status at the beginning of one week and yet have the full moral status of a person at the beginning of the next.

Many people respond to this challenge by arguing that the potential to develop the psychological capacities constitutive of personhood is itself a basis of moral status, perhaps giving those who possess it the same status as


those who are already persons, or at least a status intermediate between that of persons and that of higher animals. I am sceptical of the claim that the potential to become a person can be a basis of moral status. But I have stated my objections elsewhere and will not rehearse them here (McMahan 2002, pp. 302–16). I will note only that the appeal to potential cannot rebut a second challenge to views that reject gradualism but recognize that the moral status of most or all animals is lower than that of most or all persons. This is that our ancestors evolved gradually from beings with psychological capacities lower than those of currently existing lower animals, eventually developing the psychological capacities of persons. Again, there was no point, or relatively short period, at or during which our ancestors passed some threshold, posited by those who reject gradualism, between the moral status of animals and that of persons.

Gradualist views are, however, highly problematic because, if they recognize certain psychological capacities as bases of moral status, and if they draw fine-grained distinctions between different levels of moral status among animals, they must also distinguish different levels of moral status among human beings, and even among human beings without cognitive deficiencies. Accounts of moral status that are fully gradualist are therefore incompatible with widely accepted ideals of moral equality.

I will not address that problem here, though it is a challenge for anyone who denies that species membership alone is a basis of moral status and yet accepts that it is more seriously wrong, other things being equal, to kill a person than to kill an animal. I will instead return to the question whether gradualism about moral status is plausible in its application to acts of causing and preventing suffering.

6. Gradualism and Suffering

Most of us believe that differences of moral status can affect the strength of the reason not to kill an individual, or to save that individual. But our intuitions about killing and letting die are unusual. We tend to believe, for example, that the degree to which killing a person is wrong does not vary with the extent to which the victim is harmed by being killed. Both common intuition and the law treat the murder of a 50-year-old as no less wrong, other things being equal, than the murder of a 30-year-old, despite the fact that death at 30 generally involves a vastly greater loss of good life—and thus a vastly greater harm—than death at 50. Yet much lesser differences between


non-lethal harms are recognized as highly morally and legally significant. It is, for example, a much lesser offence to intentionally cause a person to suffer intensely for ten seconds than to cause that person the same intense suffering for twenty hours. So it may well be that, although our intuitions support the view that the strength of the reason not to kill an individual varies with that individual's moral status, we cannot infer anything from this about whether the strength of the reason not to cause an individual to suffer varies in the same way.

One possibility is that, although the wrongness of killing varies, if other things are equal, with the moral status of the victim, the wrongness of inflicting some fixed amount of suffering does not. According to this view, we might concede that the suffering of any individual that altogether lacks moral status matters less. We might also broaden the category of individuals that lack moral status to include not only unconnected individuals (of which there may, in actuality, be none) but also individuals that have only very weak psychological connections over short periods of time. And we might accept that, above this low threshold, all conscious beings have varying degrees of moral status and that an individual's moral status affects the degree to which it would be wrong, in the absence of some positive justification, to kill that individual. But we might, following DeGrazia and Singer, deny that the variations among psychological capacities that underlie the differences in moral status affect the degree to which it is wrong to cause or allow an individual to experience some fixed amount of suffering. In short, variations in moral status consistently affect the wrongness of killing but not the wrongness of causing suffering.

This view has, to me at least, considerable intuitive appeal, particularly when I reflect as follows.
Probably the worst physical suffering I have experienced has been from kidney stones. These caused pain that was severe, continuous, and persistent. Much of the time the pain was so extreme that it seemed to occupy the whole space of consciousness, overwhelming and suppressing all thought or reflection. It seems entirely possible that an animal—a dog or a chimpanzee—could have an experience that would be almost qualitatively identical—suffering that dominates and disables the mind, crowding out other elements of consciousness. If I and an animal were both experiencing such suffering and a stranger could relieve either my suffering or the animal's but not both, I do think the stranger ought to relieve mine. But suppose he were instead to choose which of us to aid by flipping a coin and the animal were to win the toss. I would have a complaint against him. I could argue that he and I, as persons, are related in ways that give him reasons to show partiality to me. And I could claim that my suffering had greater

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

opportunity costs. But could I, in good faith, claim that he ought to have favoured me because of my higher moral status? He saw two individuals, both writhing in agony, judged both instances of suffering to be roughly equally intrinsically bad, and chose between me and the animal in an impartial way, giving us each an equal chance of having our suffering relieved.

In examples such as this, I have considerable intuitive sympathy with the view of DeGrazia and Singer that the reason to prevent or not to cause suffering does not vary in strength with the empirical properties of the victim that are determinative of moral status (assuming, as DeGrazia and Singer might not, that there are differences of moral status). Yet there is a further consideration that inclines me to think that reasons grounded in the intrinsic badness of suffering are not the only reasons to prevent or not to cause suffering, and that reasons of moral status apply as well. This is that reasons not to kill individuals, or to save their lives, are not fundamentally different from reasons not to cause individuals to suffer, or to prevent them from suffering. If, therefore, reasons of moral status can affect the wrongness of killing, they should also affect the wrongness of causing suffering.

Suffering is intrinsically bad. Death, or ceasing to exist, is not. It is bad only, or primarily, because it prevents an individual from having further good life. It seems implausible to suppose that, although there are reasons deriving from moral status not to prevent an individual from having the benefits of continued existence, there are no reasons deriving from moral status not to cause the same individual to have to endure something intrinsically bad. Indeed, common-sense intuition may support the contrary view.
For we tend to think that it is more seriously wrong to cause an individual to suffer than to prevent that individual from enjoying sources of well-being that would be good to the same extent that the suffering would be bad. One possible explanation of this intuitive asymmetry between causing suffering and preventing happiness is that reasons deriving from moral status oppose the causation of suffering more strongly than they oppose the prevention of happiness. Even so, one may think that killing is a uniquely egregious way of depriving an individual of benefits, or a unique offence against the individual's moral status, since it involves the annihilation of the bearer of that status. Yet the annihilation even of an individual with the highest moral status may be good for that individual, and not an offence against that status, if the only alternative is a life that would be intrinsically bad—in particular, a life dominated by suffering—and the annihilation would not be, in the circumstances, contrary to the individual's will. So the appeal to annihilation is not by itself fully explanatory.


These thoughts incline me to the view that not only the suffering of an unconnected individual but also the suffering of an animal matters less than the equivalent suffering of a person, so that causing a certain amount of suffering to an animal may be less seriously wrong than causing the same amount of suffering to a person. Still, it is hard for me to believe that it could matter much less. And because I think that the killing of an animal might be much less seriously wrong than the killing of a person even when the person's interest in continuing to live would be no stronger than that of the animal, my intuitions continue to reflect the sense that reasons of moral status oppose killing more strongly than they oppose causing suffering. So I am left with the uncomfortable sense that much remains to be understood.6

Notes

1. For elucidation of the notion of a time-relative interest and the way in which it differs from the more familiar notion of an interest, see McMahan 2002, p. 80. Throughout this chapter, references to interests should be understood as references to time-relative interests.
2. On reductionism about personal identity, see Parfit 1986, part 3.
3. For an illuminating sceptical discussion of the views of Singer and DeGrazia, see Kagan 2019, pp. 101–8.
4. Kagan cites distributional considerations and desert as reasons for thinking that the contribution that some amount of suffering makes to the value of the outcome is not a function only of the extent to which the suffering is bad for the sufferer. See Kagan 2019, ch. 3, and Kagan 2016, p. 6.
5. For discussion of the gradualist account of moral status, see McMahan 2008, pp. 101–4. Also see the hierarchical account defended in Kagan 2019.
6. I am grateful to Fiona Clarke, William Gildea, Doran Smolkin, and Hazem Zohny for illuminating comments on an earlier draft of this chapter.

References

DeGrazia, D. (1996) Taking Animals Seriously. Cambridge: Cambridge University Press.
Kagan, S. (2016) ‘What is Wrong with Speciesism?’, Journal of Applied Philosophy, vol. 33, no. 1, pp. 1–21.
Kagan, S. (2019) How to Count Animals: More or Less. Oxford: Oxford University Press.


McMahan, J. (2002) The Ethics of Killing: Problems at the Margins of Life. New York: Oxford University Press.
McMahan, J. (2008) ‘Challenges to Human Equality’, The Journal of Ethics, vol. 12, no. 1, pp. 81–104.
Parfit, D. (1986) Reasons and Persons. Oxford: Oxford University Press.
Singer, P. (2009) Animal Liberation, updated edn. New York: Harper Collins.
Singer, P. (2011) Practical Ethics, 3rd edn. Cambridge: Cambridge University Press.


3
An Interest-Based Model of Moral Status
David DeGrazia

The title of Derek Parfit’s second book, On What Matters, implies that some things do (Parfit 2011). He was right. Among the things that matter are certain beings who matter in a special way that has been marked by the term of art “moral status.” To say that something, X, has moral status is to say that (1) moral agents have obligations regarding their treatment of X, (2) X has interests, and (3) X’s interests are the basis for the relevant obligations (DeGrazia 2008a, 183). An alternative formulation is that X has moral status if and only if (1) moral agents have obligations regarding their treatment of X and (2) it is for X’s sake that they have these obligations. An even simpler formulation is to equate moral status with inherent moral value, but only if we assume that bearers of such value have interests or a “sake.”1

Two questions immediately present themselves. First, what is the best model of moral status? Such a model would plausibly identify the basis or bases of moral status. Second, what are the implications of such a model in hard cases, in which it’s not obvious whether, or to what extent, some being has moral status?

This chapter advances nine theses that comprise a model of moral status before applying it to a range of unobvious cases. The model may be described as interest-based insofar as the possession of interests constitutes its conceptual backbone. The cases to which I apply the model include some relatively familiar ones, such as infants and nonhuman animals, but also some that swim in less charted waters, such as robots, brain organoids, and enhanced hominids. I do not offer a comprehensive defense of my theses. Instead, I clarify each and defend it briefly.
The argumentative case for my model rests significantly on the overall coherence and power of the theses together with their implications—more specifically, their consistency, their plausibility upon reflection (especially in comparison with other models), the model’s ability to illuminate hard cases, and the explanatory power of its core ideas.

David DeGrazia, An Interest-Based Model of Moral Status In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © David DeGrazia 2021. DOI: 10.1093/oso/9780192894076.003.0003



1. The Model

THESIS 1: Being human is neither necessary nor sufficient for moral status.

A sufficient reason not to torture a cat is that doing so causes her terrible experiential harm for no good reason. The obligation not to treat a cat this way is grounded in the cat’s interests. So one need not be human to have moral status.

It may seem less obvious that being human isn’t sufficient for moral status. Indeed, the Universal Declaration of Human Rights, adopted in the aftermath of World War II (United Nations 1948), represented a great moral advance and, taken at face value, suggests that all human beings have certain rights and therefore moral status. Moreover, we often foster moral insight by getting an interlocutor, or ourselves, to perceive the humanity of some potential victim of injustice or misfortune—whether an ethnic or religious minority, immigrants, or distant strangers at risk of starvation.

While appeals to the humanity of particular individuals can do real moral work, reflection suggests that such appeals usually do not target literally all human beings. Assuming that “human being” does not simply mean “person,” since the latter term—however reasonably unpacked—could apply to a space alien or god, the only clear meaning of “human being” is biological. A human being is a member of either Homo sapiens or of one of the many hominid species (among the genera Homo, Australopithecus, or Paranthropus) that have ever existed. For simplicity, let’s stick with our species.

Not every Homo sapiens has moral status. Many will be persuaded by the example of a human embryo or early fetus. If you are not persuaded, perhaps because you hold that natural potential to develop a mental life characteristic of human persons confers moral status, consider an anencephalic infant, who lacks even that potential: she is forever and irreversibly unconscious.
The only possible grounds I can imagine for asserting that an anencephalic infant has moral status are: religious dogma, which I consider irrelevant to moral justification; an appeal to natural kind membership (on the assumption that our species is a natural kind), which I have criticized elsewhere (DeGrazia 2008b, 301–7); and appeals to social relations, which I take up later. On the model I recommend, being human is insufficient for moral status.

More generally, species per se has no direct bearing on moral status. Morally relevant traits may be characteristic, or uncharacteristic, of a particular species, but that is a different matter. And membership in the human community may be relevant in some respects, but that involves a type of social relationship rather than the biological matter of species. Further, even if we thought biology might have some direct importance to moral status, it would be mysterious why species in particular—rather than, say, genus, family, or order—would matter morally. Indeed, why not some biological grouping within a species such as human subpopulations? Membership within biological categories, including Homo sapiens, bears no direct relevance to moral status (DeGrazia 1996, 56–61).

THESIS 2: The capacity for consciousness is necessary but not sufficient for moral status.

We are all acquainted with consciousness. How to define it is another matter. I suspect that the concept of consciousness (not to be confused with the nature of consciousness) is too basic to be analyzed in the manner of a classical definition. So I will content myself by saying that consciousness—what some philosophers call “phenomenal consciousness”—is subjective experience. You have it when awake or dreaming, not when you’re in a dreamless sleep or under general anesthesia.

Recall that moral status is possible only for those who have interests or a “sake.” I believe that only conscious beings—more precisely, beings with the sometimes-realized capacity for consciousness (which is compatible with periods of unconsciousness)—can make that grade. While plants and unconscious animals such as sponges are alive and therefore need certain things in order to live and reproduce, their permanent unconsciousness means that they can never experience any condition in a positive or negative way and can never care about anything. Only conscious beings can have such experiences and concerns.
The biological “needs” of unconscious living things are no better candidates for interests than the Moon’s “need” for nondestruction as a condition for continuing to exist or a car’s “need” for oil as a condition for proper functioning. Only conscious beings have interests, so consciousness is necessary for moral status. But it is not sufficient, because consciousness doesn’t entail having interests. Imagine a being that had subjective awareness of its environment and its place in the environment, and even had thoughts, but had no cares or concerns and experienced nothing as pleasant or unpleasant, attractive or aversive, good or bad. Nothing mattered to this hyperbolically stoical creature. It simply noticed and thought. I submit that this being would have no interests, no prudential standpoint, and nothing could be done for its sake.


While I doubt natural selection has produced any such beings, they are conceptually possible and may become actual in the form of advanced robots or AI systems. Such beings, I claim, would lack moral status. Consciousness is necessary but not sufficient.

THESIS 3: Sentience is necessary and sufficient for moral status.

Sentience is the capacity to have pleasant or unpleasant experiences. It adds hedonic valence to consciousness. This is sufficient for having interests because pleasant or unpleasant experiences confer an experiential welfare: things can go better or worse for the subject in terms of the felt quality of those experiences. Note that, strictly speaking, sentience need not include the capacity for pain and sensory pleasure. Being susceptible to emotional or mood states with hedonic valence would entail sentience. If a being could feel satisfied or frustrated, for example, this being would be sentient even if it lacked sensation-based hedonic experiences.

Only sentient beings, I submit, have interests. This claim might be challenged along the following lines.2 Imagine angels who are conscious but, lacking feelings, not sentient, and who have the aim of performing certain actions simply because they are right. Even if they do not feel good upon achieving their aims or bad if their aims are thwarted, they have interests in noninterference and therefore have moral status. If correct, this reasoning suggests that the possession of aims based on values is—like sentience—sufficient for having interests. That would motivate what might be considered a friendly amendment to my position, adding to sentience a second sufficient condition for moral status (thereby entailing that sentience is not necessary). In my judgment, however, the challenge is unsuccessful.
The possession of values or aims the fulfilment of which one does not care about (emotionally) at all—if the terms “values” and “aims” are even apt in such a case—seems insufficient for having anything at stake, any interests or welfare. In the absence of a prudential standpoint characterizing these strangely invulnerable beings, the attribution of moral status seems to me pointless and misplaced. So I continue to hold that sentience is necessary for moral status. I also contend that sentience is sufficient. It seems deeply implausible that any beings with interests would not matter at all in their own right; to judge otherwise would seem to involve a sort of bigotry. So, in my view, sentience is the most important marker for moral status. This claim leaves open whether there are differences in moral status.


THESIS 4: Social relations are not a basis for moral status but may ground special obligations.

Some theorists (e.g., Warren 1997, ch. 5) believe that social relations can be a basis for moral status. One view is that persons have full moral status while (postnatal) human nonpersons share this status based on special relationships to persons—either to particular persons such as family members or to all human persons via membership in the human community. Although I believe special relationships can be the basis of special obligations, as I have to my daughter and wife, I deny that relationships can be a basis for moral status.

Examples suggest that the interests of beings with moral status ground obligations that are shared by moral agents generally. Even if you have special reasons not to swindle your friend, because she is your friend, it seems that all moral agents have a reason not to swindle her simply on account of her moral status and the fact that swindling her would treat her disrespectfully. And, if one claims that co-membership in the human community constitutes a special relationship that gives human beings reason not to torment a homeless person, one should also acknowledge that a space alien moral agent has reason to abstain from such behavior simply on account of the homeless person’s moral status and vulnerability to harm. It seems that a being’s moral status gives reasons to all moral agents to treat that being with certain forms of restraint or respect (Jaworska and Tannenbaum 2013, sect. 5.5). Special relations are a distinct source of practical reasons. To drive Thesis 4 home from another angle, moral status involves inherent value but relationships are not inherent.

THESIS 5: The concept of personhood is unhelpful in modeling moral status unless a nonvague conception is identified and its relevance clarified.
Many philosophers embrace some variant, or close neighbor, of the Lockean conception of persons (Locke 1694, Bk II, ch. 27) as beings with self-awareness over time and capacities for reason and “reflection” (introspection).3 Such psychological conceptions of personhood imply that neither fetuses nor infants are persons. In contrast, some hold that persons are beings of a kind whose characteristic development includes such psychological capacities—a more capacious conception that arguably covers all living human beings (Ford 2002, 9–16; Gomez-Lobo 2002, 86–90).

For two reasons, appeals to personhood as a basis for moral status, or full moral status, are frequently unhelpful. First, the criteria of personhood are contested. Suppose, for example, one embraces the Lockean psychological tradition. Then one will favor either a highly debatable specific conception (e.g., beings with higher-order attitudes (Frankfurt 1971)) or a more broadly congenial but unhelpfully vague conception (e.g., beings with the capacity for relatively complex forms of consciousness4). Second, there remains the question of personhood’s relevance. Sentience, by contrast, has a vivid connection to moral status in being necessary and sufficient for interests. Why should personhood elevate moral status? Is the common assumption that it does more than a self- (or species-) serving rationalization for using sentient animals for human purposes? While I believe these questions may admit of good answers—indeed, I will later try to provide some—my present claim is that we should accept an appeal to personhood only if it is helpfully specific and its relevance is plausibly explained.

THESIS 6: Sentient beings are entitled to equal consequentialist consideration.

Sentient beings have moral status. How should we conceptualize it? Although in our moral relations with other persons we accept various forms of partiality—in connection with special relationships, roles, and the discretionary nature of general beneficence—we also insist on some type of moral equality for all persons. Moral agents owe some sort of equal consideration or regard to other persons. Should such equal consideration extend beyond the species boundary to include other sentient animals? Obviously, different sorts of creatures have varying interests and can be harmed or benefited in different ways. So equal consideration would entail not equal treatment but rather the ascription of equal moral importance to individuals’ prudentially comparable interests (irrespective of species) such as the interest in avoiding substantial suffering.
An alternative to granting equal consideration to sentient beings is to hold that sentient nonpersons are entitled to some, but less, consideration than that due to persons—perhaps along a sliding scale that takes into account such factors as cognitive and emotional complexity. Equal consideration, I contend, holds up better than unequal consideration under critical scrutiny—especially when we take seriously the likelihood that species-serving biases infect many of our traditional practices and common intuitions.5 Put another way, while extending some form of equal consideration to sentient beings is highly revisionary of common morality, the latter may reflect substantial prejudices that call for revision. The more we insist on explicit, coherent justifications for drawing distinctions in moral status, the more attractive and defensible some form of equal consideration appears.


What might such equal consideration look like? Presumptively, I suggest, it takes a consequentialist form—not necessarily utilitarianism but some approach that focuses on producing the best, or sufficiently good, results. If so, this means that tradeoffs among individuals’ interests to promote the overall good are prima facie permissible, so long as they are compatible with equal consideration—that is, with ascribing equal moral weight to prudentially comparable interests. The justification of equal consequentialist consideration rests on the claim that its overall implications are more plausible on reflection than those of a more radical approach that would attribute utility-trumping rights to all sentient beings. I should acknowledge, however, that I lack any knock-down argument against the latter approach and simply submit my approach for consideration.

THESIS 7: Sentient beings with narrative self-awareness have special interests that ground the added protection of moral rights.

This is where I claim that a fairly specific conception of personhood is useful. Persons—defined here as beings with the type of self-awareness that makes narrative identities possible—have particular long-term interests that include projects, enduring relationships, and sometimes fairly detailed life plans. For this reason, consequentialist tradeoffs of their most important general interests (e.g., life, various liberties, bodily security) for the common good can easily spoil those long-term interests. Rights, by blocking those tradeoffs, protect both kinds of interests. So equal consequentialist consideration is consistent with, and arguably justifies, the attribution of rights to persons—beings with narrative identities. By a narrative identity, I mean a temporally structured self-conception in which one understands one’s life as having a detailed past and a future with various possibilities for growth and change.
Someone with a narrative identity has relatively rich episodic memories and intentions; continuing the metaphor, she understands her life as a sort of story with different chapters. Ordinarily, human children seem to acquire a narrative identity, in rudimentary form, around age 3 or 4. I contend that the attribution of rights is justified not only along consequentialist lines, as just discussed, but also on the basis of deontological respect for individuals with such self-awareness. (The ethical theory I favor features both well-being and respect as fundamental values.)

Many animals, although lacking narrative identities, have nontrivial temporal self-awareness. To the extent that they do, they have longer-term interests such as maintaining certain relationships (as many mammals have) or a distant goal such as ascending a social hierarchy (as a chimpanzee might).


I suggest that animals who have nontrivial temporal self-awareness that falls short of a narrative identity should be ascribed rights of partial strength that afford some protection against consequentialist tradeoffs of their important interests. The strength of these rights plausibly varies with the extent of their temporal self-awareness. I believe scientific evidence supports the thesis that such animals include dogs, wolves, pigs, monkeys, elephants, great apes, and cetaceans. Animals with only trivial or no temporal self-awareness, on the present account, would enjoy the default moral protection of equal consequentialist consideration but not that of rights.6

THESIS 8: Beings who are reasonably expected to become sentient should be protected as if they already were sentient (in effect, giving equal consideration to their expected future interests); and those who are reasonably expected to become persons should be protected as if they already had rights.

All and only sentient beings have interests. Beings who will become sentient will later have interests. So, in a derivative sense, they may be said to have interests now—for example, not to incur injuries that will burden them once their mental life comes on board. For this reason, we should treat beings who are expected to become sentient in important respects as if they already had moral status—for example, not injuring them gratuitously. Individuals who will become persons will later have special narrative-identity-related interests such as having certain opportunities, maintaining valued relationships, and achieving their dreams. In a derivative way, they may be said already to have interests in conditions that serve to protect their future interests. For example, if negligently injured in utero, the individual might develop into a person who cannot pursue certain projects due to effects of the injury.
Thus, we should in important respects treat individuals who are expected to become persons as if they already had rights that protected their most important interests against consequentialist tradeoffs. Consider a challenge to Thesis 8. One might argue that it assigns moral status to certain presentient individuals, such as early fetuses that are expected to come to term, on the basis of our intentions (e.g., not to abort) and other extrinsic factors (e.g., access to competent medical care in case complications arise during pregnancy)—factors that affect whether we reasonably expect a presentient individual to become sentient. Yet we have defined moral status as a type of inherent moral value. It is contradictory to assert that moral status is inherent, based only on an individual’s intrinsic properties, yet in certain cases it depends on extrinsic factors.


The answer to this challenge is to correct a misunderstanding on which it rests. Thesis 8 does not assign moral status to individuals who are expected to become sentient, but instead posits obligations to treat them in important respects as if they already had moral status. This is consistent with the thesis that, in all cases, moral status is inherent. Thesis 8 explains the wrongness of injuring fetuses that are expected to come to term. Meanwhile, the second conjunct of the thesis affords the full protection of rights to ordinary infants and toddlers despite the fact that they are not yet persons. In keeping with the earlier discussion of partial-strength rights, it makes sense to understand Thesis 8 as requiring parallel accommodations for beings who are expected to develop nontrivial temporal self-awareness that falls short of a narrative identity—for example, treating a puppy as if she already had the partial-strength rights of a mature dog.

THESIS 9: For reasons of social cohesion and stability, already-born sentient human beings who are not expected to become persons, or to recover their lost personhood, should be extended the protection of rights.

According to the account developed through the first eight theses, sentient human beings who, due to severe cognitive impairment, are not expected to become persons or, in the case of acquired impairment, to recover their personhood are, as sentient beings, entitled to equal consequentialist consideration but, as nonpersons, not the additional protection of (full-strength) rights. Now, it’s worth noting that equal consequentialist consideration confers much stronger moral protection than animals generally receive today, so the present “problem of nonparadigm humans” is considerably smaller than the problem attending views that grant sentient nonpersons less than equal consideration.
Moreover, if these sentient human beings have any nontrivial temporal self-awareness, albeit less than narrative self-awareness, they would enjoy the protection of partial-strength rights. Still, a problem remains. For the present account, developed thus far, would in principle counterintuitively allow some sacrifice of these impaired individuals’ most important interests in the name of the common good—for example, in challenge studies of urgently needed vaccines if no alternative method were scientifically satisfactory. My final thesis addresses this residual problem of nonparadigm humans.

We may plausibly conjecture that selecting such cognitively impaired human beings for involuntary participation in high-risk clinical trials would cause social distress, mistrust, setbacks to the clinical research enterprise, and other negative consequences greater in magnitude than any marginal gain in utility achieved by involving them in these trials. This might not be true in every possible human society but it seems true of human societies today. In rule-consequentialist fashion, therefore, we can justify rules and the corresponding rights that protect these individuals from such sacrifice of their interests.

2. Implications

The nine theses I have discussed comprise my interest-based model of moral status. It remains to explore implications. With implications in view, the model can be evaluated in comparison with competing models—in terms of cogency and consistency, the plausibility of its theses and implications, its ability to illuminate hard cases, and the explanatory power of its central ideas.

2.1 Ordinary, self-aware human beings

These individuals have narrative identities, thereby qualifying as persons on this account. They therefore have rights that protect them from consequentialist sacrifice. This is not to preclude the possibility that occasionally their rights may be overridden, but the threshold for overriding is very high, much higher than an expected net gain in utility. Rights to free speech, to freedom of movement, and even to life may be overridden in rare instances. In my view, a few rights—for example, not to be enslaved and not to be raped—are absolute, at least in this world. But the key point is that my model attributes to persons, bearers of narrative identities, rights that ordinarily trump appeals to utility. Now for more difficult cases.

2.2 Nonparadigm humans

The problem of nonparadigm humans arises for any view asserting that ordinary, sufficiently mature members of our species have higher moral status than most or all animals on the basis of some special cognitive capacity. It arises because some humans we believe to deserve full moral protection will lack this capacity. The problem is especially acute for traditional accounts of moral status, which hold that animals have significantly less moral status than you and I have, because such accounts prima facie suggest that infants and older human beings who lack the special trait also have significantly lower


moral status. Another problem facing traditional accounts is to explain the relevance of the trait deemed to elevate moral status. Many moral philosophers simply assume that some cognitive capacity confers full moral status (see, e.g., Jaworska and Tannenbaum 2014). (Such question-begging motivates Thesis 5.) Since very few philosophers hold that all members of our species, including zygotes, have full moral status, discussions of the problem of nonparadigm humans must clarify which humans it would be problematic to exclude. I submit that the problem concerns any sentient human beings who would apparently lack full moral status given a particular account's criteria. In my account, then, we are concerned with sentient human beings who lack narrative identities due either to immaturity or to cognitive impairment. We may focus our attention by considering ordinary infants and (sentient) human beings whose cognitive impairment precludes narrative self-awareness. As noted earlier, the problem of nonparadigm humans facing my account is smaller than what faces traditional accounts. But even small problems should be solved, if possible. My approach addresses the problem as it applies to infants by advancing Thesis 8, which grants immature (but sentient) human beings the protection of rights so long as they are expected to develop into persons. This takes care of the problem with one important exception: infants who are not expected to become persons because they are expected to perish before maturing sufficiently to have narrative identities. Thesis 9, however, takes care of this problem. It also addresses the problem pertaining to those individuals who are too cognitively impaired to develop narrative identities or, if they lost the relevant capacities due to injury or dementia, too impaired to recover them. For realistic pragmatic reasons, these individuals are to be extended full moral status, as explained earlier.

2.3 Nonhuman animals

Animals comprise an enormous range of life forms and no generalization about moral status applies to all of them. But our model has some clear implications. First, insentient animals, like plants, lack moral status. Second, sentient animals have moral status and are entitled to equal consequentialist consideration. This implies that harming a sentient animal is, other things being equal, just as morally problematic as causing a prudentially comparable harm to a person. If we lived accordingly, the sentient animals with whom we interacted would tend to have much better lives than they currently do because we would accept a strong moral presumption against intentionally or


negligently harming them. Such a commitment would, of course, have enormous implications for dietary choices and would also have important implications for the use of animals in science, for clothing and entertainment, and as companions. I accept these implications. The present model has further implications for animals. Consider the possibility that some have narrative identities. Insofar as the overall cognitive complexity of mature great apes seems roughly comparable to that of 3-year-old human children, I think such apes might have narrative identities—but their lack of true language may reduce the likelihood (see, e.g., Goodall 1986; de Waal 1987; and Parker, Mitchell, and Miles 1999). Meanwhile, dolphins are at least as cognitively complex as great apes and might have something closer to a natural language (see, e.g., White 2007). I think of both great apes and dolphins as borderline persons, sitting somewhere near the boundary dividing persons and nonpersons as defined here. For this reason, I would conservatively assign them strong moral rights and consider them off limits for invasive, nontherapeutic research. Also, because many cetacean basic needs cannot be met in captivity, I would prohibit capturing dolphins (unless necessary to protect them from imminent danger) and would release all captive dolphins who can be released safely. Our model attributes partial-strength rights to animals who have some nontrivial temporal self-awareness short of narrative identities. These animals, again, are likely to include canines, pigs, elephants, monkeys, and probably some other species. Although I cannot pursue details here, the general implication is that the presumption against harming them for societal benefit is somewhat stronger than what equal consequentialist consideration would entail but weaker than what persons' rights entail.
Due to several factors—including problems of translation from animal research to clinical success in humans, and the development of alternative scientific models—equal consequentialist consideration for animals would severely constrain their invasive use as scientific models. But this does not amount to a prohibition. And, if there are situations in which a rodent model would be as scientifically valuable as a canine or monkey model and would meet the equal-consideration criterion, then our model of moral status implies we should use the rodent rather than the more complex animal with partial-strength rights.

2.4 Robots and advanced AI systems

As discussed earlier, sentience involves consciousness with hedonic valence. While it is conceptually clear, on the present approach, that future robots and


AI systems (for convenience I will just speak of robots) will have moral status if and only if they are sentient, epistemological matters are difficult. It is often hard to make confident, evidence-based judgments about whether particular animals are sentient since it is often disputable whether certain behaviors, neurological features, and speculations about evolutionary function count as solid evidence for sentience. Are crustaceans and jawless fish sentient, for example, or are their behaviors, including responses to noxious stimuli, mechanical and unconscious? But at least animals are part of the same evolutionary process that produced us. With artificial entities, by contrast, we cannot appeal either to neuroanatomy, since they have none, or to evolutionary function since robots did not evolve through natural selection. All we have to consider are their behavioral or functional capacities and our knowledge of their software and hardware; yet with deep machine learning, even the latter becomes somewhat mysterious so that we don't know exactly how, for example, a computer program decided to compose an original poem in just the way it did. And we don't even know whether the physical substances that constitute the hardware are metaphysically capable of generating consciousness.7 Functionalists would say "yes," because the material substrate is irrelevant, whereas identity theorists would hold that the matter matters—but we just don't know whether, say, silicon, organized in a sufficiently complex and information-processing way, can generate consciousness. So, when it comes to robots and their possible moral status, our biggest challenges, at least initially, will be epistemological.
We will have to decide whether, for example, an advanced robot is likely conscious if it claims to be; and whether a robot is likely to be sentient if it claims to have feelings or indirectly seems to express concerns, say, by requesting not to be shut down. Whatever the best approach to these questions, if we reasonably believe a robot is sentient, we should give its apparent interests equal moral weight to our comparable interests—an immediate implication of which is that we may not use them as slaves or uncompensated servants. (Note how advanced robotics will usher in a second contest between speciesists and anti-speciesists.) If the evidence suggests that certain robots have narrative identities, then we should ascribe them full-strength rights, in which case we must liberate at least those who demonstrate "mature" decision-making capacity and do not depend on paternalistic protection. This imperative might not be compatible with the aims of corporations pursuing advances in artificial intelligence.



2.5 Brain organoids

These are neural cells cultivated to multiply and make connections under laboratory conditions. The field is underway. While, as with robots, the present model's criteria for moral status are easy to apply in principle, there are epistemological challenges. While there is no question that neural tissues can give rise to consciousness, since they do in us, there are questions of whether even highly developed brain organoids can generate consciousness without sensory input from other organs. Moreover, unless these neural masses are afforded outputs to organs or artefacts that can do things, it will be hard to regard anything they do as behavior that might indicate consciousness or sentience. Without such meaningful behavior, we may have as much trouble determining whether brain organoids have the properties that underlie moral status as we will have in the case of advanced robots.

2.6 An enhanced hominid species

Imagine that genetic engineering involving an inheritable artificial chromosome leads to an enhanced human subpopulation that eventually chooses not to reproduce with unenhanced humans. Suppose that later, with additional gene enhancements on the new chromosome, a distinct species, Homo genius, emerges. This new hominid has a far richer form of self-awareness than our narrative identities. Homo genius never engages in self-deception, has detailed episodic memories tracing back to birth or even before, and can accurately project alternative futures for themselves in rich detail. Elaborate this depiction so that it becomes maximally plausible that these people have narrative identities qualitatively superior to our own. Would they have higher moral status? Perhaps not. Members of both species would have narrative identities, justifying strong rights. The only stronger moral protections would be absolute rights. But perhaps they would claim these and maintain that when sacrifices were required in emergencies that might justify overriding rights—say, a truly catastrophic epidemic requiring research models better than animals and the best nonanimal alternatives—they may permissibly turn to members of our species rather than to members of their own. If they succeeded in clarifying why their enhanced narrative self-awareness generated special interests calling for absolute rights, then there is logical space in my account to say that they may in certain emergencies deploy us for the greater good. But such


scenarios would be rare because our rights confer quite strong protections. Moreover, in view of their intelligence, such super-enhanced beings would probably have created highly reliable non-animal, non-Homo sapiens scientific models. So I will not worry so much about my great-great-great-grandchildren on this score, though I am very worried about the effects of climate change and authoritarian political leaders—and hope that we can do justice to the moral status of our species and other sentient species by addressing these problems effectively.

3. Conclusion

In this chapter I have presented an interest-based model of moral status and have sketched its implications for a number of difficult cases. Each of the theses that together comprise the model has received only a preliminary defense. But, as mentioned at the outset, the case for the model consists not only in the defense of the individual theses but also in their overall coherence and explanatory power. By that measure, I believe, the model stands up rather well, especially in comparison with models that are more anthropocentric or that leave unexamined the idea that all and only human beings have full moral status.8

Notes

1. Ingmar Persson challenged my analysis, claiming that the concept of moral status does not include the possession of interests as a necessary condition (personal correspondence). He suggested that X has moral status if and only if X has some property that provides us with obligations toward X for its own sake. Here "for its own sake," he clarified, does not entail having interests but means considered on its own rather than instrumentally. For example, the beauty of a canyon might ground an obligation not to destroy it, just considered in itself rather than considering people's interest in preserving its beauty. I doubt that the concept of moral status is determinate enough to settle the conceptual dispute between me and Persson. However, the idea that we might have obligations towards entities that have no interests or prudential standpoint seems so substantively implausible that I prefer my analysis even if it is semi-stipulative.
2. The challenge is due to Frances Kamm.
3. For contemporary representatives of this tradition, see, e.g., Parfit (1984, part 3) and Baker (2000).


4. This is how I used to define personhood while acknowledging its vagueness (DeGrazia 1997).
5. For an extended argument, see DeGrazia (1996, ch. 3).
6. Theses 6 and 7 are defended at length in DeGrazia and Millum (2021, ch. 7).
7. For the sake of discussion, I am assuming the robots under consideration are not "biorobots" that incorporate neural tissue into a mostly robotic body. There is no question that such robots could become conscious, when technical challenges are met, because effective machine-brain interfaces already exist.
8. A draft of this chapter was presented at an Oxford University conference, "Rethinking Moral Status," on June 13, 2019. I thank attendees—especially Frances Kamm, Liz Harman, Ingmar Persson, and Jason Robert—for their helpful feedback. Thanks also to Hazem Zohny for comments and to Stephen Clarke for support. Work on this project was supported in part by intramural funds from the National Institutes of Health Clinical Center. The ideas expressed are the author's own. They do not necessarily represent the policy or position of NIH or any other part of the US federal government.

References

Baker, Lynne Rudder (2000), Persons and Bodies (New York: Cambridge University Press).
DeGrazia, David (1996), Taking Animals Seriously: Mental Life and Moral Status (New York: Cambridge University Press).
DeGrazia, David (1997), "Great Apes, Dolphins, and the Concept of Personhood," Southern Journal of Philosophy 35: 301–20.
DeGrazia, David (2008a), "Moral Status as a Matter of Degree?" Southern Journal of Philosophy 46: 181–98.
DeGrazia, David (2008b), "Must We Have Full Moral Status Throughout Our Existence? A Reply to Alfonso Gomez-Lobo," Kennedy Institute of Ethics Journal 17: 297–310.
DeGrazia, David and Joseph Millum (2021), A Theory of Bioethics (Cambridge: Cambridge University Press).
de Waal, Frans (1987), Bonobo (Berkeley: University of California Press).
Ford, Norman (2002), The Prenatal Person (Oxford: Blackwell).
Frankfurt, Harry (1971), "Freedom of the Will and the Concept of a Person," Journal of Philosophy 68: 5–20.
Gomez-Lobo, Alfonso (2002), Morality and the Human Goods (Washington, DC: Georgetown University Press).
Goodall, Jane (1986), The Chimpanzees of Gombe (Cambridge, MA: Harvard University Press).


Jaworska, Agnieszka and Julie Tannenbaum (2013), "The Grounds of Moral Status," in Edward Zalta (ed.), Stanford Encyclopedia of Philosophy.
Jaworska, Agnieszka and Julie Tannenbaum (2014), "Person-Rearing Relationships as a Key to Higher Moral Status," Ethics 125: 543–56.
Locke, John (1694), An Essay Concerning Human Understanding, 2nd edition.
Parfit, Derek (1984), Reasons and Persons (Oxford: Clarendon).
Parfit, Derek (2011), On What Matters, vol. 1 (Oxford: Oxford University Press).
Parker, Sue, Robert Mitchell, and H. Lyn Miles, eds. (1999), The Mentalities of Gorillas and Orangutans (Cambridge: Cambridge University Press).
United Nations (1948), Universal Declaration of Human Rights.
Warren, Mary Anne (1997), Moral Status (Oxford: Oxford University Press).
White, Thomas (2007), In Defense of Dolphins (Oxford: Blackwell).


4: The Moral Status of Conscious Subjects

Joshua Shepherd

1. Theorizing about Moral Status

Any account of moral status must specify the grounds of moral status.1 How are we to do that? Abstractly, two approaches are available. A members-first approach begins by collecting judgments regarding who has moral status, and perhaps by collecting comparative judgments regarding who has higher degrees of moral status. Any account that begins with the judgment that humans have full (or the highest level of) moral status, and then seeks to justify that judgment, embodies this approach. So too does any account that begins with the judgment that healthy adult humans are at least the paradigmatic case of an entity with moral status. This approach can seem epistemically modest, in the sense that it allows us to move from something we seem to be in a decent position to know, namely the grounds of the moral status of adult human beings, to elements that are more difficult to know, namely the grounds of whatever moral status other beings have. But I find the approach pernicious. Adult humans are complicated creatures, with a range of potentially morally relevant capacities and properties. Theorists have variously seized on many of these to offer accounts of the grounds of moral status. These include possession of self-consciousness (Tooley 1972), possession of sophisticated psychological capacities (McMahan 2002), possession of "typical human capacities" (DiSilvestro 2010), possession of the capacity to participate in a "person-rearing relationship" (Jaworska and Tannenbaum 2014), possession of a capacity for intentional agency (Sebo 2017), the ability to take oneself to be an end rather than a mere means in the sense that one can experience and pursue what is good for one (Korsgaard 2013), the capacity to suffer (Bentham 1996), possession of the genetic basis for moral agency (Liao 2010), and no doubt more. Of course, some of these are friendlier to entities outside the tight circle of healthy adult humans, and some are not.
Joshua Shepherd, The Moral Status of Conscious Subjects. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Joshua Shepherd 2021. DOI: 10.1093/oso/9780192894076.003.0004

What is striking, however, about many of these accounts is that they do not seek to justify


the assumption that healthy adult humans are the paradigm case. Rather, this assumption justifies their search for the features in virtue of which the assumption must be true. But if healthy adult humans are not the paradigm case, the search may be headed in the wrong direction from the get-go.

A second approach to the grounds of moral status is available. On a value-first approach, we first seek to isolate and theorize about the sources of value in the lives of entities who may have moral status. We then map whatever judgments regarding value we collect to considerations of moral status.2 In section 2 I articulate a fragment of such an account. I articulate a theory of what is valuable about mental life. (I do not assume that the value present in the lives of minded individuals qua minded individuals is the only value that could ground moral status: thus the fragmentary nature of the account.) In sections 3 and 4 I then seek to map this account of value onto considerations of moral status.

2. Phenomenal Consciousness and Value

I begin with an account of what is non-derivatively valuable within a mental life. At the center of the account is phenomenal consciousness. This is a property, or really a set of properties, that attach to mental states and events. These are the properties of mental states or events in virtue of which there is "something it is like" for the subject to token the state or event. If a mental state has phenomenal properties, then there is something it is like for the subject to be in that mental state. If a state lacks phenomenal properties, then it is unconscious in the sense that there is nothing it is like for the subject to be in that state. Many share the intuition that phenomenal consciousness is somehow at the center of what is valuable about subjectivity. Without phenomenal consciousness, many think, being awake, being alive, being a subject, would be little different than being in a coma, being dead, or lacking subjectivity altogether. But what about phenomenal consciousness supports this intuition about value? Phenomenal consciousness is variegated. A conscious experience might contain, at some time or over some window of time, aspects grounded in sensory modalities (vision, olfaction, audition, proprioception), aspects grounded in cognitive capacities (memory, attention, inner speech, intention formation), aspects grounded in background levels of arousal (as when one feels sharp, or groggy), aspects grounded in emotions and moods. The


complexity of conscious experience raises difficult questions. Are all aspects of conscious experience non-derivatively valuable, or only some? Are some aspects of conscious experience more valuable than others? Very few philosophers have said much about these more specific questions (but see Siewert 1998, Kahane and Savulescu 2009, Kriegel 2019, Lee 2019). My account of what is non-derivatively valuable about consciousness attempts to begin to rectify this. But only to begin: I think a significant period of discussion and reflection is required. My account of what we might call phenomenal value centers around a few ideas. The first can be put as follows.

[Affective-Evaluative Claim] It is necessary and sufficient for the presence of some (non-derivative) value in a conscious experience that the experience has evaluative phenomenal properties that essentially contain affective phenomenal properties. (Shepherd 2018a, 31)

The use of the term "affective phenomenal properties" is ostensive. I am attempting to single out a class of phenomenal properties. Given that any proposed taxonomy of phenomenal properties is likely to run into controversy, it is perhaps easiest to do this by example. The kinds of properties I have in mind are these: the painfulness of pain, the pleasantness of pleasure, the distinct gut-located unease one might feel during a bout of food poisoning, the warm gut-located hum one might feel when thinking of one's first love, the quickening of one's attention coupled with the vibrant urge to react in some way when suddenly frightened by an unsuspected threat, the gnawing of boredom, the bubbling rise of frustration associated with an unsolvable puzzle. The category is broad and complex. What binds it together is the fact that the phenomenal properties in question are valenced in some way along some dimension or other (or perhaps, in some cases, along multiple dimensions simultaneously). Why do I have language about evaluative properties that essentially contain affective properties? Again, I am attempting to denote the relevant class. I wish to exclude one reading of "evaluative properties" on which a conscious judgment that something is good or bad bears non-derivative value (since a conscious judgment, considered as a piece of cognitive phenomenology, may be thought to have evaluative properties, namely, those properties required to classify the judgment's evaluative content). The [Affective-Evaluative Claim] states that [a] without this phenomenologically recognizable affective valence, a conscious experience would not be


non-derivatively valuable, and [b] with this affective valence, a conscious experience bears at least some non-derivative value. I say a few words about implications of this claim below. The second idea behind my account of phenomenal value is this.

[Strong Evaluative Claim] It is necessary and sufficient for the presence of some non-derivative value in a subject's mental life that the mental life contain episodes with essentially affective evaluative phenomenal properties. (35)

To state the same point from a different angle: a mental life with no affective experiences is a mental life without non-derivative value. As I put it elsewhere, "some mental item's being non-derivatively valuable requires not just essentially affective evaluative properties, but phenomenal versions of these properties" (37). My rationale for thinking this is twofold. First, cases that strip this aspect of phenomenal consciousness away seem at the same time to strip away non-derivative value. Andrew Lee (2019) articulates a similar view, and attributes something like it to Jonathan Glover (2006). Lee calls this a neutral view on the intrinsic value of consciousness, according to which (a) consciousness itself is not intrinsically valuable, (b) some conscious states are intrinsically valuable, and (c) the intrinsically valuable states are so in virtue of their phenomenal character. This view is probably not identical to mine—differences may arise regarding the in-virtue-of relation that holds between phenomenal character and value, regarding the kind of value at issue (intrinsic or non-derivative), and regarding the particular states deemed valuable. But the view is obviously close. Lee does not argue in full for the neutral view, but he presents a case that gives one reason to accept it.

Consider two worlds that are empty save for a single creature inhabiting each world. In the first world, the creature has a maximally simple conscious experience that lacks any valence. Perhaps, for example, the creature has an experience of slight brightness. The creature's experience is exhausted by this sparse phenomenology. In the second world, the creature is not conscious at all. (Lee 2019, 663)

If one has the intuition that the world with the conscious experience is no more valuable than the world without, one leans towards a neutral view. I do. A second reason for accepting [Strong Evaluative Claim] is that, arguably, the nature of phenomenal consciousness is such that our phenomenally


conscious states are present to us in a unique way. There is a directness in the relationship between a subject and the phenomenal properties of her experiences, at least in one sense. For without phenomenal consciousness, the relevant properties would not be present to her in the same way. In the case of affective experiences, this presence has consequences for the theory of value. For in the case of these experiences, what is present to the subject are items of non-derivative value, and it seems that the fact that they are present is crucial to the value that they bear. Non-conscious versions of these mental states—non-conscious emotional episodes, perhaps—may play important functional roles for the subject. But non-conscious versions do not bear non-derivative value. I can explicate additional key ideas behind my account by way of answers to several questions, as follows. First, what bears non-derivative value: do phenomenal properties only bear value individually, or can combinations of phenomenal properties bear value? I think that combinations of phenomenal properties can bear value. The feeling of suffering attached to watching a loved one suffer is disvaluable not just because of the feeling—for that same feeling could be attached to watching a movie character suffer, and that is plausibly less disvaluable. I think that some ways phenomenal properties combine produce far more value than individual phenomenal properties could. Second, how do phenomenal properties combine in ways relevant to the value that they bear? This is a complicated issue. Here I turn to work on the mereology of conscious experiences. And I follow theorists like Timothy Bayne (2010) and Christopher Hill (2014) who point out a wide range of ways that multiple properties can be unified into a single experience.
Importantly, some of the unity relations that bring disparate properties together into experiences are empirically tractable. In some cases subjects have the capacity to integrate information, and the capacity for integration helps explain how an experience can be more or less complex while remaining coherent. So, consider an experience with affective phenomenal properties and many other properties besides. We have a further (third) question to ask. In virtue of what does that experience bear value—in virtue of the affective properties alone, or in virtue of all of the properties constitutive of the experience? I answer that the experience bears value in virtue of all of the properties that make it what it is. This adds a layer of complexity to the neutral view I endorsed above. For on this view, a non-affective property does not bear value in isolation. But as a part of a complex experience, a non-affective property can make important contributions to the value the experience bears.


Fourth, if an experience bears value in virtue of all of its properties, how do you determine how much value an experience bears? In order to illuminate the ways experiences—mental episodes with more or less complex structures and shapes—bear value, I introduce the notion of an evaluative space. A subject's evaluative space is a function of the interaction of sets of capacities—the subject's affective-evaluative capacities, and the subject's capacities for tokening evaluable elements. The latter class of capacities is broad, and includes perceptual and sensory registration capacities as well as capacities for thought and experience-generation and maintenance—imagination, deliberation, memory, language, inhibitory control are all obviously relevant. The former class of capacities involves the capacity to discriminate between the relevant evaluable materials, to generate experiences that "color" these materials as evaluated on various dimensions, and the range of ways the subject can do so (i.e., with more or less intensity, with more or less vividness, with a greater or lesser number of modes of evaluation). The subject's evaluative space, then, depends in obvious ways upon her entire psychological architecture and her toolkit of perceptual, cognitive, agentive, and evaluative capacities. But the main emphasis here is upon the ways that her affective-evaluative experience-generating capacities use and interact with the rest of her psychological systems. Think, by analogy, of a painter in a room. What she can produce depends in large part upon her capacities to paint. But it depends as well on the colors and brushes and canvases she has available in the room. The colors, brushes, and canvases are analogous to the psychological architecture generally. The skill at painting—what she can do with what she has—is analogous to the affective-evaluative experience-generating capacities. This is just to introduce the notion of an evaluative space.
What is the relationship of such a space to non-derivative value? I suggest that the phenomenal richness of a subject's evaluative space tempers her capacity for undergoing highly valuable and disvaluable experiences. As I envision it, there is at least a rough correspondence between richness and capacity for (dis)value. Phenomenal richness can be further understood in terms of three interacting metrics. First, the size of a subject's evaluative space is measured by the number of properties and property-types a subject is capable of tokening (across different experiences). Second, the complexity of a subject's evaluative space is measured by the number of properties and property-types she can bring together under a legitimate unity relation—that she can token, that is, within a single experience. Third, the coherence of a subject's evaluative space

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

The Moral Status of Conscious Subjects  63

is measured in terms of a relation of sense-making that holds between experiences. Coherence here holds over sets of experiences, and lends, among other things, a sense of structure and narrative to the flow of a subject's experiences over time.

This is all abstract, and it takes a lot of work in sorting through different kinds of experiences to see why increases in size, complexity, and coherence might be thought to track differences in value. Here I wish to make only one further point about this idea. It is similar in ways to what many practical ethicists have said about the relation between cognitive sophistication and moral status. Here, for example, is David DeGrazia, reflecting on ways to justify giving unequal consideration to some animals (like humans) over others (like mice):

The most plausible specification . . . of the Unequal Consideration Model is a gradualist one, a sliding scale model, according to which sentient nonpersons deserve consideration in proportion to their cognitive, emotional, and social complexity. On this gradualist model, there are differences in moral status among sentient nonpersons and not just between them and persons.  (DeGrazia 2007, 323)

One need not hold a cognitive sophistication view to endorse what I say about phenomenal richness and value. But perhaps, at least for practical purposes, the phenomenal richness view I outline could be thought of as an ally to cognitive sophistication views.

3.  Implications: Mapping Value to Moral Status

What does this account indicate regarding the moral status of humans, or of what is probably a broader category, persons? I noted at the beginning of this chapter that many begin with the assumption that humans or persons are at least the paradigm cases of entities with moral status. Arguably, common sense morality maintains a commitment to the thought that humans possess higher moral status than the other animals: a commitment that Kagan (2016) and Setiya (2018), among others, have recently defended. But if, as seems plausible, an entity's level of moral status co-varies with the phenomenal richness available to it, it is not obvious that humans have higher moral status than many other animals. If humans do have higher moral status, then this will be on the basis of arguments that take seriously


mature scientific theories not only of human psychological capacity and architecture, and of the place of consciousness in human mental life, but also of non-human psychological capacity and architecture, and the place of consciousness in non-human mental life. This work is being done—now more than ever (e.g., DeGrazia 2012, Varner 2012, Rowlands 2019, Barron and Klein 2016). But it remains early days, and it remains difficult to connect claims of psychological sophistication—claims that are usually fairly circumscribed regarding different animals and different behavior-types—to claims regarding the value of consciousness. Without seeing where the work leads, claims of human specialness seem unjustified.

What does this account indicate regarding the notion of Full Moral Status? If one thinks that the value of phenomenal consciousness is the chief ground of moral status, then there is an argument against the idea of Full Moral Status (for a similar argument, see DeGrazia 2008). It is that the grounds come in degrees, and do not obviously have a stopping point. So if we do grant Full Moral Status to some being, we confront epistemological worries regarding the justification for drawing the line at any particular place. And we also must allow that the capacity for value in a mental life can expand beyond what is required for Full Moral Status. This does not settle matters, of course, but I think it creates a presumption against the need for a notion of Full Moral Status. We might accomplish just as much with a more practical notion of full protection for beings with certain levels of richness in phenomenal consciousness.

But these notions of degrees and levels of moral status raise troubles of their own. What sense can be made of levels of phenomenal richness? Ignorance—my own in particular and ours in general—precludes any answer I might offer.
It seems to me that, while the account of phenomenal richness as I have outlined it does plausibly predict non-linear jumps between system-types—between animals with very different evolutionary histories and psychological capacities, for example—saying how and why would require a fuller theory of the capacities that enhance phenomenal value, and the ways that they do. And much of this work, both empirical and philosophical, remains undone.

It is important, however, to see that the work can be done. Thinking of phenomenal richness in terms of size, complexity, and coherence allows us to at least think about what kinds of psychological and neurological mechanisms might undergird these features. So, there is at least a hope of mapping a subject's psychological structure to a subject's capacities for phenomenal value and disvalue.


To get an idea for how this mapping might go, consider work in neuroscience on "levels of consciousness." One might hope that the development of ways to delineate different levels of consciousness would lend illumination to the attempt to map psychological structure to capacities for phenomenal value. But how this ultimately goes is difficult to foresee. Bayne et al. (2016) make a convincing case that the notion of levels of consciousness papers over significant complexity. They recommend instead a move towards thinking of the global conscious state of a subject at any particular time as being located somewhere in a multi-dimensional state space. A subject's global conscious state cycles throughout the day through various places in this state space, depending on what the subject is doing at a time (i.e., whether it is engaging or pleasurable or not), on what the subject is capable of, and on various other factors, such as whether the subject is awake and their level of arousal.

Understanding the shape and limits of the multi-dimensional state space for some conscious subject, or even for some species of conscious subjects, is difficult. Bayne et al. note a range of outstanding questions for the neuroscience of consciousness. What capacities and mechanisms structure a conscious subject's global state at a time? What is the appropriate parcellation of the relevant mechanisms? How might some of these mechanisms interact? What explains the variation in a subject's global conscious state over windows of time? These are all good questions, and one can expect progress on answers in the coming years. My very limited suggestion is that progress on these questions will be relevant to any attempt to chart with any clarity differences in value between different global conscious states. Note, however, that answers to these questions that apply exclusively to humans will be limited in helping us think through issues of moral status.
Additional attention must be given to the ways that very different animals come to occupy regions in a broader multi-dimensional state space—a space that affords comparisons between system-types that have different architectures. Some animals may have regions of state space available to them that are unavailable to humans, and this may be relevant to how we think of phenomenal richness. Other animals may occupy regions of space that, for purposes of phenomenal richness, are relevantly similar to the spaces that humans occupy.

Imagining a program for mapping levels of phenomenal richness may engender skepticism in readers. This is a difficult program, and not well mapped. Epistemic problems leer at every choice point. The epistemic difficulties associated with determining whether some entity is conscious, and if so, what level(s) of phenomenal richness might be available to them, run deep


(Shepherd 2018b). We have, at present, no consensus regarding accounts (or theories) of phenomenal consciousness in human brains. We have even less regarding the presence and structure of consciousness in animal minds (see Murray forthcoming). Talk of value present in consciousness complicates matters further. So too does talk of levels. Significant commitment to the importance of the kinds of questions I have outlined, and significant collaboration on answering these questions, will be required if we are to hope for progress. Significant resources will also be required.

Leave potential theoretical difficulties aside for a moment. Isn't the program doomed for practical reasons? In the next section I consider ways to make a program for mapping levels of phenomenal richness more practical. It helps that theorists are already thinking in the right direction, even if the vocabulary undergirding their talk of making moral status more practical differs in ways that may signal underlying theoretical disagreements. But it may be possible to work towards something like overlapping consensus.

4.  Making Phenomenal Value Practical

The current state of the science of consciousness is such that the practical decisions we must make and the policies we must formulate and implement regarding the use and treatment of entities in whom consciousness may be present are associated with uncertainty. One may respond in different ways to this uncertainty. Aversion to the risk of harm seems plausible. So, often, use of a precautionary principle is suggested.3 Jonathan Birch explains the basic idea.

In broad terms, the idea is clearly that we should not require absolute certainty that a species is sentient before affording it a degree of legal protection. Absolute certainty will never be attained (indeed, the "problem of other minds" suggests it cannot even be attained with respect to human minds), and its absence is not a good reason to deny basic legal protections to potentially sentient animals.  (2017a, 2)

In order to guide practical decision making, a precautionary principle must be articulated with some degree of specificity. Birch follows Stephen John (2011) in counseling interpretation of a precautionary principle in terms of two rules. An epistemic rule specifies where we should set our evidential bar


for some fact (e.g., that members of some species S are typically phenomenally conscious) connected to some outcome (e.g., that use of members of S for purposes of research would cause significant harm to those members). A decision rule states that when the evidential bar is cleared, the move to action is urgent, or imperative. As Birch puts it, "The implication is that the goal of preventing the seriously bad outcome deserves sufficient priority that, once the evidential bar is cleared, it is inappropriate to delay action further while we attempt to weigh the expected costs and benefits of this goal in comparison to other policy goals" (2017a, 4).

Birch initially proposed that the evidential bar should apply to animals within an order, and that the bar be statistically significant evidence of "at least one credible indicator of sentience in at least one species of that order" (6). But there is clearly room for debate regarding elements of the evidential bar, as commentary on Birch's article, and Birch's (2017b) response to this commentary, indicate. We can hope that work like Birch's will galvanize attention to these issues, and focus discussion of ways to move forward.

Certainly commitment to a practically implementable precautionary principle regarding potentially conscious entities would require significant collaboration between scientists and policy-makers, and would represent significant progress. In order to implement such a principle, we would need to formulate, keep, and update lists of markers of sentience for different animal-types. And we would need a mechanism for moving from growth in knowledge regarding these markers to the formulating and updating of policies implementing this knowledge by various regulatory and legislative bodies. But this is, in my view, not all.
If there is a contribution the phenomenal richness framework can make here, it is in directing attention to two further issues relevant to any precautionary principle.

4.1  Proportionality

Colin Klein (2017) points out that we are often forced to make trade-offs when comparing the impact of a policy on different kinds of animals. For example:

There is a chance that decapods are sentient. The [Precautionary Principle] says: avoid using them in research. Yet perhaps decapod research could help cure cancer, and thereby prevent untold harm to sentient humans . . .  (1)


If the scenario is harm to decapods and benefit to humans, how are we to decide? Klein suggests that the Precautionary Principle "requires some reformulation in terms of proportionality" (Klein 2017, 1). How might this go? One might envision a regime that takes seriously evidence that illuminates the capacity for different levels of phenomenal richness in different types of animal. This might take the form of general measures of psychological sophistication.

Consider the Perturbational Complexity Index developed by Marcello Massimini and many others (see Casali et al. 2013, Massimini and Tononi 2018, Comolatti et al. 2019). This is a measure of the informational sophistication of a thalamocortical response to perturbation by, e.g., transcranial magnetic stimulation. Roughly, the PCI indicates the degree of the thalamocortical system's causal and functional interrelatedness. This measure has proven predictively fecund. The PCI is able, for example, to distinguish between wakefulness and non-rapid eye movement sleep in healthy adults. And it can discriminate above chance between humans with different disorders following traumatic brain injury—between those with unresponsive wakefulness syndrome, those in a minimally conscious state, and those with locked-in syndrome, for example.

So we have a neurophysiological measure for consciousness that is sensitive to gradations in a subject's global conscious state. This is not a measure that could yet index more fine-grained differences between system-types. But it is in-principle evidence that biomarkers of and tests for the global conscious state are possible. We should not rule out the possibility of tests for and markers of, not just sentience in various animals, but also levels of psychological sophistication. Given dependable correlations between psychological sophistication and phenomenal richness in humans—if these be established—we may be able to infer something similar in non-humans.
This raises the possibility of the formulation of an evidential bar that has a chance to display sensitivity to gradations in phenomenal richness. For example, following Birch's language:

For the purposes of formulating animal protection legislation, there is sufficient evidence that animals of a particular [group] possess phenomenal richness at level X if there is statistically significant evidence, obtained by experiments that meet normal scientific standards, of the presence of at least [some number of] credible indicators of phenomenal richness at level X.

This is rough and painfully speculative. But if proportionality matters morally, then there is reason to work towards a more concrete set of measures that


allow us to track levels of sophistication across different types of conscious mental life.

4.2  The source of phenomenal value

Much discussion of the presence of consciousness in widely different entities focuses on the presence of any consciousness at all. Sometimes there is an added concern for the capacity to feel pain. This makes sense. It is natural to assume that phenomenal consciousness develops all of a piece, and thus that if consciousness is present in an entity, the parts of consciousness that are morally significant will be present as well. The working assumption is that the structure of phenomenal consciousness in different entities is roughly similar, and that the only differences between entities and animal-types are differences of sophistication.

It may turn out, however, that phenomenal consciousness is not produced by a unified mechanism (cf. Phillips 2018). It may instead be the case that there are multiple ways phenomenal consciousness arises in different systems. And there may be implications for the ethics of consciousness. Peter Godfrey-Smith has recently discussed the possibility that complex perceptual and complex evaluational capacities could evolve separately in some species.

If complex perception and evaluation are separable, this raises the possibility that there are two kinds of phenomena that we vaguely group as "subjective experience," both of which are present in us but which are distinct in principle and sometimes found separately.  (Godfrey-Smith 2019, 14)

Godfrey-Smith suggests that there is some evidence that spiders have sophisticated perceptual capacities, but that they "score low" with respect to evidence for motivating feelings (14). Godfrey-Smith comments: "Perhaps they are sophisticated trackers of the world but motivationally robotic" (14). Godfrey-Smith further suggests that the opposite may be the case for gastropods. If one accepts something like the phenomenal richness framework I have outlined, one might think that gastropods should be prioritized over spiders in whatever protective legislation might apply.

But the phenomenal richness framework is not mandatory. One might, in the end, favor a neutral view that seeks to protect animals on the basis of sentience alone. Or one might reject the overarching concern with phenomenal


consciousness in favor of functional marks of sophistication alone (see Levy 2014). Whatever view one takes, important practical implications follow. It has been my aim here to suggest that a precautionary principle should be sensitive, not only to the presence of phenomenal consciousness, but to the parts of phenomenal consciousness that are of primary value.

5.  Conclusion

The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted—injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the mental lives of conscious subjects.

Second, we need to map a theory of moral status onto practical considerations. I prefer the precautionary framework proposed by many, and fruitfully precisified recently by Birch. I have suggested that in addition to further discussion surrounding the evidential bar for attributing consciousness to different types of entities, more discussion is needed regarding how value and moral status may vary across different entity-types, and regarding the sources of value in an entity's mental life.

Notes

1. Research on this chapter was supported by European Research Council Starting Grant 757698, under the Horizon 2020 Research and Innovation Programme, and contributes to the project Rethinking Conscious Agency. The author would also like to thank the CIFAR Azrieli program on Mind, Brain, and Consciousness for support.

2. Although we differ on specifics, this seems to be the approach of, for example, Peter Singer (2011), as well as David DeGrazia (2007, 2008).

3. Precautionary principles come in various strengths, and some have criticized reliance on strong precautionary principles (Clarke 2005, Clarke 2010, Sunstein 2005), and advocated reliance on competing cost–benefit analysis frameworks. It seems the most plausible precautionary principle regarding the value of consciousness would be one that is consistent with weighing costs and benefits of any proposed course of action—as discussion in section 4.1 makes clearer.



References

Barron, A. B., and Klein, C. (2016). What insects can tell us about the origins of consciousness. Proceedings of the National Academy of Sciences 113(18), 4900–8.
Bayne, T. (2010). The Unity of Consciousness. Oxford: Oxford University Press.
Bayne, T., Hohwy, J., and Owen, A. M. (2016). Are there levels of consciousness? Trends in Cognitive Sciences 20(6), 405–13.
Bentham, J. (1996). An Introduction to the Principles of Morals and Legislation: The Collected Works of Jeremy Bentham. Oxford: Oxford University Press.
Birch, J. (2017a). Animal sentience and the precautionary principle. Animal Sentience 16(1).
Birch, J. (2017b). Refining the precautionary principle. Animal Sentience 16(20).
Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., . . . and Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine 5(198), 198ra105.
Clarke, S. (2005). Future technologies, dystopic futures and the precautionary principle. Ethics and Information Technology 7, 121–6.
Clarke, S. (2010). Cognitive bias and the precautionary principle: what's wrong with the core argument in Sunstein's laws of fear and a way to fix it. Journal of Risk Research 13(2), 163–74.
Comolatti, R., Pigorini, A., Casarotto, S., Fecchio, M., Faria, G., Sarasso, S., Rosanova, M., Gosseries, O., Boly, M., Bodart, O., Ledoux, D., Brichant, J., Nobili, L., Laureys, S., Tononi, G., Massimini, M., and Casali, A. G. (2019). A fast and general method to empirically estimate the complexity of brain responses to transcranial and intracranial stimulations. Brain Stimulation 12(5), 1280–9.
DeGrazia, D. (2007). Human-animal chimeras: human dignity, moral status, and species prejudice. Metaphilosophy 38(2–3), 309–29.
DeGrazia, D. (2008). Moral status as a matter of degree? Southern Journal of Philosophy 46(2), 181–98.
DeGrazia, D. (2012). Taking Animals Seriously: Mental Life and Moral Status. Cambridge: Cambridge University Press.
DiSilvestro, R. (2010). Human Capacities and Moral Status. Dordrecht: Springer.
Glover, J. (2006). The sanctity of life. In H. Kuhse and P. Singer (eds), Bioethics: An Anthology (pp. 266–75). Hoboken: Blackwell.
Godfrey-Smith, P. (2019). Evolving across the explanatory gap. Philosophy, Theory, and Practice in Biology 11(1), 1–24.


Hill, C. S. (2014). Tim Bayne on the unity of consciousness. Analysis 74(3), 499–509.
Jaworska, A., and Tannenbaum, J. (2014). Person-rearing relationships as a key to higher moral status. Ethics 124(2), 242–71.
John, S. (2011). Risk and precaution. In A. Dawson (ed.), Public Health Ethics: Key Concepts and Issues in Policy and Practice (pp. 67–84). Cambridge: Cambridge University Press.
Kagan, S. (2016). What's wrong with speciesism? Journal of Applied Philosophy 33(1), 1–21.
Kahane, G., and Savulescu, J. (2009). Brain damage and the moral significance of consciousness. Journal of Medicine and Philosophy 34(1), 6–26.
Klein, C. (2017). Precaution, proportionality and proper commitments. Animal Sentience 16(9).
Korsgaard, C. M. (2013). Kantian ethics, animals, and the law. Oxford Journal of Legal Studies 33(4), 629–48.
Kriegel, U. (2019). The value of consciousness. Analysis 79(3), 503–20.
Lee, A. Y. (2019). Is consciousness intrinsically valuable? Philosophical Studies 176(3), 655–71.
Levy, N. (2014). The value of consciousness. Journal of Consciousness Studies 21(1–2), 127–38.
Liao, S. M. (2010). The basis of human moral status. Journal of Moral Philosophy 7(2), 159–79.
McMahan, J. (2002). The Ethics of Killing: Problems at the Margins of Life. Oxford: Oxford University Press.
Massimini, M., and Tononi, G. (2018). Sizing up Consciousness: Towards an Objective Measure of the Capacity for Experience. Oxford: Oxford University Press.
Murray, S. (forthcoming). A case for conservatism about animal consciousness. Journal of Consciousness Studies.
Phillips, I. (2018). The methodological puzzle of phenomenal consciousness. Philosophical Transactions of the Royal Society B: Biological Sciences 373(1755), 20170347.
Rowlands, M. (2019). Can Animals Be Persons? Oxford: Oxford University Press.
Sebo, J. (2017). Agency and moral status. Journal of Moral Philosophy 14(1), 1–22.
Setiya, K. (2018). Humanism. Journal of the American Philosophical Association 4(4), 452–70.
Shepherd, J. (2018a). Consciousness and Moral Status. London: Routledge.


Shepherd, J. (2018b). Ethical (and epistemological) issues regarding consciousness in cerebral organoids. Journal of Medical Ethics 44(9), 611–12.
Siewert, C. (1998). The Significance of Consciousness. Princeton: Princeton University Press.
Singer, P. (2011). Practical Ethics, 3rd edition. Cambridge: Cambridge University Press.
Sunstein, C. R. (2005). Laws of Fear: Beyond the Precautionary Principle. Cambridge: Cambridge University Press.
Tooley, M. (1972). Abortion and infanticide. Philosophy and Public Affairs 2(1), 37–65.
Varner, G. E. (2012). Personhood, Ethics, and Animal Cognition: Situating Animals in Hare's Two Level Utilitarianism. Oxford: Oxford University Press.


5: Moral Status, Person-Affectingness, and Parfit's No Difference View

F. M. Kamm

1.  Senses of Moral Status and Ways of Mattering Morally

1.  In one broad sense the moral status of an entity is about how it is morally permissible or impermissible to treat it, by contrast to how it is actually treated.1 This should not be interpreted to imply that every change in how it is permissible to treat an entity (at least in the case of persons) is a change in moral status. For example, that persons' decisions about what to do or how we should treat them can affect the permissibility of treating them in certain ways (e.g., punishment for wrong conduct) just is part of their fundamental moral status. In the broad sense in which its status is about what it is permissible or impermissible to do to an entity, rocks are entities whose moral status may permit anything to be done to them. If your moral status makes it impermissible for someone to kill you, you do not lose that status because you are actually impermissibly killed.

One way to ensure that no morally impermissible things happen in the world is to populate it with entities whose status is such that it is permissible to treat them in any way. Yet most would not think that such a world—for example, one with only rocks in it—would be morally preferable to one with entities whose moral status makes treating them in certain ways impermissible, even if they are sometimes mistreated. This might be because the more morally important or valuable an entity is, in itself, the more it matters how one treats it, and it is better to have a world populated by more important or valuable entities.

2.  There is a different sense of moral status where the contrast is not between how it is permissible to treat something and how it is actually treated but between entities that do and do not "count" morally in their own right. Call this having Level 1 moral status. When we say that something counts morally in its own right, we are often thinking of its intrinsic rather than instrumental value. Christine Korsgaard

F. M. Kamm, Moral Status, Person-Affectingness, and Parfit's No Difference View. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © F. M. Kamm 2021. DOI: 10.1093/oso/9780192894076.003.0005


(1983) has argued that the true contrast to mere instrumental value is value as an end, and some things may be ends in virtue of their extrinsic rather than intrinsic properties. But it is also possible to distinguish intrinsic properties from intrinsic value, which then might also be based on extrinsic as well as intrinsic properties. The intrinsic properties are all of an entity's nonrelational properties;2 the extrinsic properties are those it has in virtue of its relation to other things. An entity's ability to produce an effect (i.e., be instrumental to it) is a relational property between it and the effect. It is possible that something could have value as an end or have intrinsic value because of its extrinsic properties such as causing or being capable of causing an effect even if it never does.

3a.  A work of art or a tree may count morally in its own right in the sense that it gives us reason to constrain our behavior toward it (e.g., to not destroy it) just because of what it is.3 (This need not mean that its value can never be trumped, nor that it should never be treated as a mere means.) But this is still to be distinguished from constraining ourselves for the sake of the work of art or the tree, which depends on its getting something good out of (benefiting from) what we do for it. We could say that sunlight is good for a tree, meaning that the tree needs it to live. However, this does not mean that it is good for the tree to live in the sense that it gets something good out of being alive. The tree is not capable of getting anything out of its continued existence. Hence, it is not something that in its own right and for its own sake has value as an end. By contrast, we can save a bird for its own sake if it will get something good for it out of continuing to exist.
When moral status is ordinarily attributed to an entity what is meant, I think, is that the entity can provide a reason to treat it in certain ways in its own right and for its own sake. I shall say that such entities have moral status at Level 2 and are a subclass of those at Level 1, all of which have moral significance in their own right. In the case of entities at both these levels we could have “direct duties” to treat them in certain ways only because of their properties.4 This is to be contrasted with having indirect duties to them because of duties we have to others, as when we must take care of something because we promised another person to do so. It may seem that something must have or have had the capacity for, or exercise of, sentience or consciousness in order for it to have moral status Level 2. This is because having (or having had) the capacity for, or exercise of, subjectivity seems necessary for an entity to have a “sake” (and so be a beneficiary or a victim).5 (I say “had” because someone now dead may have desired, for example, that their book be published. Our fulfilling this past desire could

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

add value to their life and so be good for them.) Having a subjective life in the sense of "there being something it is like to be that entity" does not necessarily involve being a continuing subject. So it might be morally wrong to cause pain to a being that had only momentary existence. Further, the capacity for, or exercise of, both sentience (physical sensations) and consciousness (awareness) is not necessary to be able to act for the sake of the entity; either alone could suffice. An entity with consciousness and without sentience could still have preferences and agency for the sake of which others could act (to satisfy preferences or not interfere with agency).6 (I think we conceive of angels as such beings.) Agency requires a capacity to act, usually on beliefs and desires (not merely having these), yet in the case of conscious non-agents others could act for their sake by satisfying their desires.

Whether Level 2 moral status (or agency) really requires either sentience or consciousness is a question raised in the first instance by the existence in conscious entities of unconscious desires, intentions, or acts and the possible moral significance of unconscious satisfaction of such desires and intentions and noninterference with such acts. Suppose one should sometimes act for the sake of satisfying a conscious entity's unconscious desires or intentions even if this had no effect on their conscious states. Then could non-conscious desires in non-sentient and non-conscious beings (e.g., certain robots) have the same moral significance as unconscious desires in conscious beings, thus also grounding moral status Level 2? Or is sentience or consciousness necessary to establish moral status Level 2, which in turn could be necessary to endow unconscious desires, intentions, and acts with moral significance? Importantly, conscious beings could prefer outcomes and will acts that have nothing to do with their own good.
If it could be wrong to interfere with these acts and right to help them achieve these goals, how can we say that we refrain or act "for their sake" when this commonly is taken to mean "for their own good"? I will henceforth assume that "for their sake" goes beyond "for their own good" and includes satisfying their preferences and willings more generally.

3b.  That something's sentience or consciousness should affect how we treat it for its sake does not yet mean that making it non-sentient or non-conscious henceforth would be in itself morally wrong. For example, a reason not to engage in factory farming is that it causes pain to the animals involved. Another way to avoid such pain is to eliminate their capacity for pain, which might be good for them. However, suppose that capacity were connected to their capacity to feel appropriately "pained" at separation from other animals. It might be wrong to deprive animals of the latter since decreasing their
capacity for relationships and appropriate responses to them seems to lessen their worth as beings. Similarly, being conscious might be a good in itself that outweighs low-level pain even if it were the only content of consciousness. Furthermore, eliminating an animal's consciousness would not be good for it if this would eliminate overall good states of consciousness. That it would be bad for an entity to lose its capacity for consciousness does not yet imply that it has a right not to lose it: having rights might require more than being an entity with (or the capacity for) consciousness or sentience. We shall consider this further below.

3c.  Entities with Level 2 status have characteristics (e.g., they can suffer) that speak in favor of their not being harmed. However, many sentient and conscious beings cannot see that these qualities should affect how they are acted upon. Entities that are not moral agents strictly do no moral wrong when, for example, they interfere with the agency of other Level 2 beings (e.g., by eating them); these other beings may dislike such treatment without comprehending what factors count morally against its being done. Human persons typically constrain the behavior of both animals and those among us, such as sociopaths, who are incapable of seeing reasons not to harm or interfere with the agency of others, even when they would harm those who are not moral agents (e.g., human babies). Hence, we seem to need a justification for not similarly constraining such behavior in some nonhuman Level 2 beings even when it affects only other nonhumans at Level 2. One possibility is that the moral duty not to harm is more stringent than the duty to aid, and in not stopping much harm that animals do to nonhuman Level 2 beings we would fail to aid the latter rather than harm them.
Nevertheless, in the case of humans we think that we have a reason to stop harm that we did not cause as well as a reason not to cause harm. Then we still need a justification for not stopping at least some harm caused by nonhuman Level 2 beings to other such beings.

3d.  On the account given, a non-sentient, never-conscious human embryo (that lacks preferences and will) lacks moral status at Level 2. But the embryo could count morally in itself and be at Level 1 (e.g., give us reason in its own right not to destroy it) because of intrinsic and/or extrinsic properties such as its potential. This is different from its having instrumental value since even if an embryo deprived of an environment in which to develop is not instrumental to there being a person, it could have properties that contribute to its potential to develop, and these could give it greater value than an embryo that lacks these properties. That an embryo may have such value in virtue of its intrinsic and/or extrinsic properties could account for why it might be wrong
to use it for frivolous purposes.7 Notice that an embryo that will not develop could have greater value in its own right if it has the properties for becoming an extraordinary person (e.g., Beethoven) even if that person would have the same moral status as an ordinary person if he were to exist. Indeed this Level 1 embryo that could have become Beethoven may be more morally significant (e.g., give us stronger reasons to preserve it) than a Level 2 bird. We cannot be acting for the embryo's sake in saving its life, and the person who would arise from the embryo cannot be harmed by never coming to exist if the embryo is not saved. Still, we can protect an embryo for the sake of a person who will definitely arise from it when that embryo will be allowed to develop. See section 2 for more on this.

3e.  There are other examples where we may not have as much reason to save beings for whose sake we can act as to save entities that matter in their own right but whose existence cannot be extended for their own sake. If we had to choose whether to destroy the Level 1 Grand Canyon that matters in its own right but gets nothing from its continued existence or a Level 2 bird that can get something out of life, it could be morally wrong to choose to destroy the Grand Canyon. This further supports distinguishing the moral significance of an entity and its moral status as commonly understood, which involves being at Level 2. Value and moral significance can vary independently of such moral status. This could even be true of things that do not matter in their own right but only instrumentally. Suppose the Grand Canyon should be preserved rather than a bird and a certain tool is necessary and sufficient to save the Canyon. Then saving the tool should take precedence over saving the bird.

3f.  What if the contents of one bird's life are more valuable than another's (for example, it lives a longer and more pleasant life)?
This need not imply that it has a higher moral status; as the "containers" of content the birds may have the same moral status. An indication of this is that we might seek to give to the second bird contents as valuable as the first bird's. If we couldn't accomplish this, might we decide which bird to save based on the value of its life's contents should it continue? Or in light of their equal moral status we might also consider what each bird will have had in its life by the time it dies, letting die the one who will have gotten more out of its life already.

Suppose there is a difference between an entity's moral status and the value of the actual content of its life. This leaves it open that differences in an entity's capacities to have value in its life (due to its own properties rather than its environment) could ground different sublevels of moral status at Level 2 and ground levels beyond 2. That is, suppose we could rate activities and products
of activities in a non-species-relative way as objectively more and less valuable. Then the entities that engage with the more valuable activities and products in a way that is also good for them might have higher moral status. When their own good "tracks" objective value in this way they are not merely instruments for the existence of these valuable activities and products in the world.

4a.  We can have "direct" duties concerning entities that count in their own right just because they count in their own right, including those at Level 2. However, I suggest, there is a difference between having such a duty and having a duty to that specific entity. The latter is known as a "directed duty"; typically, the entity to whom this duty is owed has a correlative right or claim against whoever owes the duty. Correspondingly, there is a difference between doing the wrong thing (e.g., in not fulfilling a nondirected duty) and wronging some entity in failing to perform the duty owed to her. I shall say that entities who are the bearers of such rights and to whom directed duties are owed have Level 3 moral status. This includes persons and possibly other beings who are sufficiently subjects to whom things can be owed.8 One would need to be a subject (not merely have subjective experiences) to have rights, though perhaps not a subject for whose own welfare anything could be done.9 For example, we could imagine persons whose well-being remained constant no matter what they did or what others did to them. Yet these persons could have reasons for actions (e.g., a duty to acquire knowledge, preferences concerning others) and a right to noninterference with their actions. The entity to whom a duty is owed is not necessarily the entity to be affected by the duty.
For example, if you owe it to me to save my mother, it is ordinarily thought that I am the right holder even though she is the object of the duty.10 Does this imply that in acting to fulfill directed duties we may not be acting for the sake of the right holder, and so some right holders need not have Level 2 moral status (i.e., be entities for whose sake we can act)? "Acting for something's sake" (as noted above) can be broad enough to include aiming to satisfy what some entity wants for another entity. So it should also include "fulfilling duties owed to" and "not wronging" someone whose rights concern something being done for another entity. (So those at Level 3 are part of those at Levels 1 and 2.) Part of the significance of having a right is the protection for interests it adds, so that the duty owed to right holders could supersede satisfying the equal interests of those without rights. Further, whereas the interests of Level 2 entities may be outweighed by other individuals' merely somewhat greater interests, outweighing rights could require that much greater interests be at stake for others as individuals.
It is possible that some of those with rights to whom correlative duties are owed do not themselves have duties correlative to the rights of others since they are moral patients but not moral agents. This has been claimed about some animals. By contrast, persons can appreciate and govern themselves in the light of reasons and can mutually obligate each other. Only in the case of persons can what is owed depend on what a person authorizes (e.g., whether she releases us from a claim she has against us) or whether she acts in a way that results in her forfeiting rights. One hypothesis is that only persons are full-fledged subjects (e.g., self-conscious, self-reflective, and self-governing) and only they can have what is called "self-ownership" of their lives. That is, even if entities at Level 2 have lives and we can act for their sake, they (and perhaps even some animals at Level 3 to whom we can owe things) may not have lives that belong to them. Their lives are theirs in the sense that they live them but, the suggestion is, not so that the strongest prohibitions exist on harming them for the good of others and on paternalistic acts done for their own good but against their will.

It seems reasonable to think that at least many Level 3 entities have a higher moral status than those at Levels 1 or 2, and possibly higher than others at sublevels of Level 3. But this need not mean that these beings should have a higher chance of having their interests as sentient or conscious beings satisfied. The good of them can be greater even if occasionally not the good for them. This might be true because what grounds a protected interest (or right) makes the interest or right just as strong in a being with lower moral status as in a being with higher moral status. For example, suppose a capacity and desire to communicate grounded (or most clearly pertained to) an entity's having a protected interest (or right) in communicating.
That the entity lacked other properties necessary for higher moral status need not mean that the protection provided this interest (or right) is weaker than the protection (or right) accorded the interest in the higher-status being. In general, if property x grounded (or most clearly pertained to) a protected interest or right to m, the failure to have properties y and z that could raise one's moral status (and ground other rights) need not affect the importance of according m to those with only x relative to those with x, y, and z.

Another reason why Level 3 entities need not have a higher chance of having their interests satisfied is illustrated by a case where we would have to kill one person to save five others. Each person's chances that his interests as a sentient and conscious being would be satisfied would increase ex ante if there were a policy of killing one to save five. However, the one person's moral status (including his self-ownership) may prohibit us from killing him without his
consent. By contrast, we might sacrifice an entity at only Level 1 or 2 (e.g., a painting or a chicken) that lacks self-ownership to save more entities (even) of the same type.11 As with those at only Level 2, we should distinguish between having Level 3 moral status and the value of things in a Level 3 being's life. One person may have wealth or physical adeptness that another lacks (with no compensating goods). But if these goods do not ground Level 3 moral status, the two persons would have the same status. Might even having and exercising a capacity for a better life by tracking more objectively valuable pursuits (that might raise the moral status of some beings) have no role in raising the moral status of those already persons? Then the morally important characteristics that make one a person would preempt or silence an additional factor that might in their absence raise another entity's moral status.

4b.  The possibility of wronging some entities could imply that moral status in the broad sense (discussed in subsection 1) may not be completely defined by how it is permissible and impermissible to treat an entity. This would be so if it were possible to wrong an entity in the course of acting permissibly.12 For example, it might be permissible to bomb a military target with side-effect harm to civilians but still wrong them in doing this. Those we would wrong in the course of a permissible act would have a higher moral status than those not so wronged.

2.  Moral Status, Affecting a Definite Future Person, and Parfit's No Difference View

1a.  In section 1, I considered different senses and levels of moral status. Now I want to consider whether an embryo at Level 1 that will in fact develop into (or give rise to)13 a person at Level 3 is protected against nonlethal interventions that will negatively affect the future person it will develop into in the same way that those who are already persons are protected.14 According to what I call "the View," we have as strong a duty not to do things to an embryo that will result in harm (or failure to prevent harm) to the person whom the embryo will definitely develop into as we have to that person when he exists.15 One way to understand the View is that it assigns protections that ordinarily belong to a Level 3 person to what is only a Level 1 entity because we can affect the Level 3 person by acting on the Level 1 entity from which it develops (or arises). (One can deny the View while still believing we have duties, though perhaps weaker ones, with regard to an embryo at Level 1 only because of the person into whom it will develop.)
I will present hypothetical examples to try to show that the View is wrong by showing it can be permissible to affect a definite future person by doing something to the embryo, egg, or sperm from which the person will arise though it is impermissible to affect the person in the same way by doing similar things to him once he exists (i.e., has capacities distinctive of being a person).16

Suppose a woman discovers her egg or embryo has genes that will result in a person with a 160 IQ. She decides this is too smart for the good of the family. So she takes a drug that affects her egg or embryo to reduce the future person's IQ to 140. (Call this the 160 IQ Case.)17 I assume that lowering IQ to this degree does not change the identity of a person, and so this is a case of causing a definite future person to be worse off than he would otherwise have been. (Similarly, an adult who has an accident that reduces his IQ from 160 to 140 is not a different person though he is worse off.)18 Derek Parfit would call what happens to the person who arises from the egg or embryo a Narrow Person-Affecting change. It contrasts with what he calls a Wide Person-Affecting change in which we create one person with a 140 IQ rather than another person with a 160 IQ.19 Doing the latter would result in a "non-identity case" in which no person who exists is worse or better off than he would otherwise have been since someone else, not he, would otherwise have existed.

Could a person born with a 140 IQ not be worse off than if he had a 160 IQ because things it would be in his interest to do and have would not be served by his having the higher IQ? I do not think so. First, many things in the interest of someone with the 140 IQ could be more easily achieved with a 160 IQ. Second, at the time when an egg or embryo would be modified, the person does not yet exist who would be wedded to the life had by someone with a 140 IQ.
So even if what is in the interest of someone with a 140 IQ differed radically from what is in the interest of someone with a 160 IQ, there are no pre-existing 140 IQ interests that would conflict with or not be served by the higher IQ. Third, suppose that having a child with a 160 IQ would present no problems for the family twenty years hence and the woman has the additional option of affecting her egg or embryo so that the definite person to whom it will give rise has a 140 IQ for the first twenty years of his life and 160 after that. It could be better for the person created if the woman gives him the life with the change in it rather than the life with 140 IQ throughout, other things equal (i.e., no overriding negative side effects). (Though it may still be permissible for her to do the latter.) This could be true even if what is in the
interests of a 140 IQ person differs from what is in the interests of the same person with a 160 IQ so long as the change leads to a better future for that person. One might say that a person has a meta-interest in better things being in her interest.

1b.  I believe that the woman's causing a definite person to be worse off than he would have been by affecting her egg or embryo so that the eventual person has a 140 instead of a 160 IQ is permissible (for reasons to be given below). But once her child (assumed to be a Level 3 person) exists (in the sense that he has the properties distinctive of persons), it would not be similarly permissible for her to give him a pill or alter his genes to reduce his IQ from 160 to 140 now or in the future. This would wrongfully harm him, thus wronging him. (This is so even if the alteration occurs before the child has experienced living with a 160 IQ.) The woman would be significantly affecting a Level 3 person for the worse by doing something to his body once he already exists. I will call this a Super Narrow Person-Affecting change. This is a sense of person-affecting that Parfit did not distinguish as a separate category. I think it should be distinguished for moral reasons.20

What is the difference between (1) affecting the person by affecting the egg or embryo from which he arises and (2) affecting the person himself by giving him the pill? An egg or embryo is not the sort of entity that is entitled to keep a characteristic that it has, such as the genetic makeup for a 160 IQ. It lacks moral status in the sense of mattering for its own sake (it is not at Level 2) and lacks additional properties that would make it a rights-bearer (it is not at Level 3). Also, the person who will develop from the egg or embryo will have an acceptable level of intelligence at 140 IQ, so he (as a person) is not owed a 160 IQ by his parent.
These two facts are crucial to the permissibility of not giving to, or taking back from, the egg or embryo IQ points that the parent originally gave it. By contrast, since a child is already a person, he has a right to keep a beneficial characteristic even if doing so raises him beyond the level he is owed by the parent(s) who created him. In particular, I believe it is a violation of his rights to give the IQ-reducing pill to the child even if his IQ would not fall below the minimum a creator should provide to her child.21 Because the egg or embryo is not a person (and lacks other properties that would make it an entity entitled to keep what is given to it), taking away its characteristics (which will impact the person who will arise from it) is no different from not giving the egg or embryo those characteristics to begin with. And one would have a right not to give genes sufficient for a 160 IQ in a future person to one's egg or embryo that will give rise to the person. Analogously, suppose that a parent puts money she need not give into a bank account that
will belong to her offspring only when he is an adult. Even if the child will definitely become an adult and the parent's removing the money makes the child significantly worse off, it is permissible and does not wrong the child for the parent to take back the money before her offspring reaches the age at which he has a right to his bank account.

Now consider a case I call Delayed Change. Suppose the woman is not able to remove the genetic material at the embryo or egg stage and (as I have argued) is not permitted to give the pill that would remove IQ points from the Level 3 child. May she give to the embryo or egg a drug that will have a delayed action in childhood, altering the child's genes so that he will have a 140 rather than a 160 IQ? I believe this is impermissible, for it involves doing something at time t1 that will remove something good at time t2 from the person who then exists and to whom the good item already belongs at t2 (whether or not he has already experienced its effects).

1c.  Now imagine that the woman in the 160 IQ Case could take back genetic material from her egg or embryo and transfer it into two other eggs or embryos—hers or someone else's—thereby raising the IQs of the people to whom the other eggs or embryos give rise from 100 to 110 each. What I have said above implies that this would be permissible and would not wrong the person from whose egg or embryo the material is taken even though he will be worse off than he would otherwise have been. The woman would be morally free to equalize beneficial traits among future persons by affecting the embryos or the eggs from which the people arise. This is so even though she thereby makes the future person from whose egg or embryo the genetic material is taken worse off than he would otherwise be for the sake of other future persons.
However, I do not think it would be permissible for her to take genetic material from a child (already a person), lowering his IQ from 160 to 140, in order to transfer the material into two other children (or the eggs or embryos from which they arise) to raise their IQs from 100 to 110. The moral importance of the "separateness of persons" can apply to the child, but there is no comparable moral "separateness of eggs" or "separateness of embryos" of persons who will definitely exist.

If what I have said is correct, there is a sense of person-affectingness that is crucial in dealing with those who have moral status Level 3 and which is not captured by either of Parfit's senses of person-affectingness but only by what I have called Super Narrow Person-Affectingness. What if it were permissible to make the changes I have described only to the egg but not to the embryo? Then Narrow Person-Affecting changes made by changing an egg would still have to be distinguished from Super Narrow
Person-Affecting changes. However, Super Narrow Person-Affecting changes would no longer be defined as affecting a person who already exists by changing their body. This is because changes to an embryo would not involve this, and yet they would also be morally distinguished from mere Narrow Person-Affecting changes that occur by way of egg alteration. Either way it will be morally important to distinguish how we come to make a definite future person worse off than he or she would otherwise have been, not merely that we do so.

2.  Derek Parfit famously argued that sometimes it seems not to matter morally whether we are affecting the same person for the worse or bringing someone into existence who is worse off than some different person would have been had they been brought into existence instead. He called this the No Difference View about the so-called Non-Identity Problem.22 (Wide Person-Affectingness is present in non-identity cases since it is still persons who ultimately experience lives. To simplify matters, I will here assume that "person-affectingness" involves its narrow form.) What I have said against the View may bear on the No Difference View about the Non-Identity Problem. This is because I have argued that sometimes the way in which we affect someone for the worse can make a moral difference, not just the fact that we affect the person for the worse. If we affect someone in a Super Narrow Person-Affecting way this can have greater moral significance than achieving a similar change in a mere Narrow Person-Affecting way. Often affecting people for the worse involves Super Narrow Person-Affectingness since we standardly interact with those who are already Level 3 persons.
Hence, mere Narrow Person-Affecting cases involving a nonstandard way of affecting people (such as through the egg that gives rise to them) may not be morally different from Wide Person-Affecting cases involving non-identity. This would support the No Difference View for these cases. By contrast, Super Narrow Person-Affecting cases may be morally more significant than Wide Person-Affecting cases involving non-identity and so undermine the No Difference View for these cases. Performing easily avoidable acts (including omissions) that affect an egg or embryo and lead to a given offspring having fewer good traits may have the same moral significance as performing easily avoidable acts that lead to creating a worse-off person rather than a different better-off person. And both sorts of acts may be less morally problematic (other things equal) than performing an act that will take from someone what he is entitled to keep (or not provide him help to which he is entitled in order to keep something), resulting in his having fewer good traits. The person-affecting cases that Parfit used to help generate the No Difference View of the Non-Identity Problem do not involve rights based on a
person's possession of personal properties. Hence they do not compare the strongest, super narrow form of person-affectingness with wide person-affectingness. This means that Parfit's argument for the No Difference View is crucially incomplete since it uses the weaker form of Narrow Person-Affecting cases. For example, he compares (1) a case in which a pregnant woman omits to take an easy-to-use drug at an early stage of pregnancy and therefore her child will have a defect in a life still worth living with (2) a case in which someone gets pregnant with a child who will have the same defect rather than waiting a short, non-burdensome time and having a different child who lacks it. If these cases were morally alike (even in both being wrong), this would not show that the Wide Person-Affecting case (2) is morally equivalent to (3) a Super Narrow Person-Affecting case in which someone omits to do something easy for her child who is already a person and therefore he will have that same defect. It may also be significant that Parfit uses cases involving omissions to prevent a naturally occurring harm rather than commissions which cause a harm, since the moral difference between Wide Person-Affectingness and Super Narrow Person-Affectingness may be greater in cases of commission (such as I used in 2.1b.).

3.  This analysis in terms of Super Narrow Person-Affectingness may also bear on the rights of future generations and our duties to them. Suppose we knew that particular people will definitely exist 100 years in the future but no such person will exist before 100 years. Their existence does not depend on what we do. Suppose we have a choice between doing what will reduce an existing person's IQ from 160 to 140 by affecting his body or doing what will reduce one of these future people's IQ from 160 to 140 by affecting the egg that will be used to create him. Arguably, we morally should do the latter rather than the former.
These people who will exist in 100 years could also be affected for the worse by how our behavior now will affect their environment then. What has been argued above implies that if no person now exists who will exist then, then we may now permissibly do what reduces the superior environment they would otherwise have. This is so as long as we do not reduce it below the acceptable level that they are owed and the reduction does not affect what they already have once they exist. It also implies that it could be morally right to favor not worsening by a certain amount the environment that current people have over not reducing by the same amount the environment into which future people will be born. By contrast, we may not be permitted to have the same negative effect on the environment if the effect is delayed so that it affects the superior environment that already "belongs" to those particular people once they exist. Hence, in order not to wrong the

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

future generation it will be important to know the sort of environment to which they have a right independently of what they would be entitled to keep once they have it. It will also be important to know when the effects of our behavior will affect the environment relative to when the persons affected by our behavior will exist.23

Notes

1. This chapter draws, in part, on ideas in Kamm (1992; 2007a; 2007b; 2007c; 2010; and 2013). For comments on earlier versions of this chapter, I am grateful to members of the audience at the Conference on Rethinking Moral Status, Oxford University, June 2019, and to Alex Guerrero, Stephen Clarke, and Daniel Cohen.
2. Except, perhaps, relations between its parts.
3. Its mattering in its own right could mean that even if no beings existed to positively interact with the art work or tree, the world would be a better place if these things existed in it rather than if the world were empty.
4. Note that I distinguish between direct duties (in my sense) and "directed duties," which I discuss below.
5. Having the capacity for sentience or consciousness is not the same as actually being, for example, sentient.
6. In my "Rights Beyond Interests" I gave the example of beings whom we conceive to lack sentience but who are still conscious agents.
7. I first argued for this in Kamm (2002).
8. Tom Regan (1983) argues for animal rights, focusing on animals who are agents with continuing selves. Nevertheless, he moves from having duties that are not indirect to having duties whose correlative is a right. This is so even though there may be beings simpler than those on which he focuses with respect to which we can have direct duties (in the sense that nothing but what they are accounts for duties to do or not do things to them) but which lack a sufficient degree of selfhood to be subjects who are rights bearers. Hence, a problem in Regan's derivation of rights in more complex animals from duties stems from his taking direct duties in the sense given above to also be duties to a being, implying they are directed duties that have rights as their correlative. But there may be no immediate connection between duties not being indirect and their being directed (rather than merely direct) duties, and so correlative to rights.
9. I described such beings in Kamm (2007b).
10. H. L. A. Hart (1973) first noted this.
11. However, it is possible that if a creature is sufficiently simple and short-lived, it is not worth engaging in a destructive act even to one of its kind to save many of its kind.
12. See Kamm (2016) on this.
13. Though "develop into" and "give rise to" are different notions, I will use them interchangeably here.
14. Some think that an embryo is not a being but a stage in the life of a human being. On this view, being a person can also be a stage in the life of a human being. Others think an embryo is a stage in the life of a person and that stages of people do not have a different moral status from the people whose stages they are, even though the properties that distinctively identify something as a person are not yet present. Some hold that a person who will definitely exist is an atemporal entity. Its well-being should be considered even before its early stages exist, in the same way as the well-being of the person once its distinctive person-properties are present during its time alive. The view I shall present may give us reason to doubt the last two positions.
15. Such an argument is made in Buchanan et al. (2000).
16. For this discussion I will assume the egg's and sperm's moral status to be Level 1, though they may be below that (i.e., in their own right it is permissible to do anything to them). A revised version of the View would then assign protections owed to Level 3 entities to things below Level 1 from which Level 3 entities arise.
17. I first discussed the 160 IQ Case in Kamm (1992). I here combine it with the 160 IQ Egg Case, which I first discussed in Kamm (2010). It could also be the father who affects his sperm or alters an embryo in a lab to lower the IQ of his offspring to 140. In all these examples, I assume that both prospective parents agree about what is done.
18. This is on the assumption I will make here that it is better for the person himself to have the higher IQ.
19. See Parfit (1984).
20. Not all ways of affecting for the worse a person who already exists (i.e., a being with the properties distinctive of persons) are ruled out. Suppose (hypothetically) a parent's having another child would limit her existing child's intellectual development (because resources were spread thinner) so that he had a 140 IQ rather than a 160 IQ. (I owe this case to Francesca Minerva.) This could be permissible and consistent with a nonconsequentialist moral theory that distinguishes morally among different ways of bringing about the same consequences.
21. I discuss what parents owe to the offspring they create in Kamm (1992).
22. See Parfit (1984).
23. I discuss these implications of arguments against the View for obligations to future generations in Kamm (2013).

References

Buchanan, Allen, Brock, Dan W., Daniels, Norman, and Wikler, Daniel (2000). From Chance to Choice: Genetics and Justice. Cambridge: Cambridge University Press.
Hart, H. L. A. (1973). "Bentham on Legal Rights." In Oxford Essays in Jurisprudence, 2nd series, ed. A. W. B. Simpson. Oxford: Oxford University Press, pp. 171–201.
Kamm, F. M. (1992). Creation and Abortion. New York: Oxford University Press.
Kamm, F. M. (2002). "Embryonic Stem Cell Research: A Moral Defense." The Boston Review (accessed 14 April 2020).


Kamm, F. M. (2007a). "Moral Status." In Intricate Ethics. New York: Oxford University Press, pp. 227–36.
Kamm, F. M. (2007b). "Rights Beyond Interests." In Intricate Ethics. New York: Oxford University Press, pp. 237–84.
Kamm, F. M. (2007c). "Owing, Justifying, and Rejecting." In Intricate Ethics. New York: Oxford University Press, pp. 455–90.
Kamm, F. M. (2007d). "Towards the Essence of Nonconsequentialist Constraints on Harming: Modality, Productive Purity, and the Greater Good Working Itself Out." In Intricate Ethics. New York: Oxford University Press, pp. 130–89.
Kamm, F. M. (2010). "Affecting Definite Future People." APA Newsletter 9(2).
Kamm, F. M. (2013). "Moral Status, Personal Identity, and Substitutability: Clones, Embryos and Future Generations." In F. M. Kamm, Bioethical Prescriptions. New York: Oxford University Press, pp. 291–325.
Kamm, F. M. (2016). The Trolley Problem Mysteries. New York: Oxford University Press.
Korsgaard, Christine (1983). "Two Distinctions in Goodness." Philosophical Review 92(2), pp. 169–95.
Parfit, Derek (1984). Reasons and Persons. Oxford: Oxford University Press.
Regan, Tom (1983). The Case for Animal Rights. Berkeley: University of California Press.
Scanlon, Thomas (1999). What We Owe to Each Other. Cambridge, Mass.: Harvard University Press.


6
The Ever Conscious View and the Contingency of Moral Status
Elizabeth Harman

Elizabeth Harman, "The Ever Conscious View and the Contingency of Moral Status." In: Rethinking Moral Status, edited by Steve Clarke, Hazem Zohny, and Julian Savulescu. Oxford University Press. © Elizabeth Harman 2021. DOI: 10.1093/oso/9780192894076.003.0006

1. Introduction

It is common to think that if something has moral status, then it is in its nature to have moral status, and that if something is in something's nature, then it is a necessary feature. Thus, it is common to think that moral status is a necessary feature, and so that moral status cannot be contingent. In this chapter, I will argue for a view of which things have moral status; for reasons that will become clear, I call my view "the Ever Conscious View." On this view, the following two claims are both true:

Some beings that have moral status might have lacked moral status.
Some beings that lack moral status might have had moral status.

Thus, on my view, moral status is contingent. This chapter has two central aims: first, to offer the Ever Conscious View for serious consideration, and second, to defend the idea that moral status can be contingent in the way that the Ever Conscious View implies. While I have previously argued for a narrower version of the view, no serious discussion or defense of the Ever Conscious View has ever been published. The Ever Conscious View faces an objection that I will call, for reasons that will emerge later, "the Objection to Contingency." This is an objection that is commonly pressed upon me by those who hear my view. I will argue that this objection can be answered by recognition of what I call the "Good Method" of finding our harm-based and benefit-based moral reasons. And I will establish that the following is true:

The Ever Conscious View can be defended in the face of the Objection to Contingency by appeal to the Good Method.

We should embrace the


Good Method for reasons wholly independent of the Ever Conscious View. So, a defense of the Ever Conscious View using the Good Method is a principled defense. The chapter is structured to establish this claim. Section 2 discusses what moral status is, and how our harm-based and benefit-based moral reasons arise; this discussion reveals that we should embrace the Good Method. Thus, section 2 establishes that we should embrace the Good Method, independently of the Ever Conscious View. Section 3 then turns to the Ever Conscious View, motivating it and arguing for it. Section 4 defends the Ever Conscious View against three objections, including the Objection to Contingency; by relying on the Good Method, we can vindicate the Ever Conscious View in the face of this objection. Finally, section 5 raises an objection to the Good Method, based on a claim commonly called "The Asymmetry," and defends the Good Method in the face of that objection.

2.  What Moral Status Is, and How Harm-Based and Benefit-Based Reasons Arise

What is it for a being to have moral status? One might be tempted to make these two claims:

A being has moral status just in case it matters morally.
A being has moral status just in case it counts morally.

These might seem like truisms. If anything, they might seem so obvious as to be unilluminating. But I will argue that these claims are false. Along the same lines, one might make these two claims:

A being has moral status just in case one should take it into account morally.
A being has moral status just in case one should take it into account in deciding how to act.

These four claims all falter when we consider the following case:

Alice has promised to meet Betty for lunch. It's important to Betty, as the


lunch is meant to calm her down before an important job interview. Since making her promise, Alice has learned that she is in grave danger of creating a child who would have a brief, miserable life: the child would suffer horribly for several months and then die. Alice can prevent this creation, but only by seeing a doctor rather than meeting Betty for lunch.1

Alice breaks her date with Betty and prevents herself from creating a miserable child. Alice does the right thing. Breaking her date with Betty required a good reason; but she had one. Where did this reason come from? If we look around us and ask, "which things have moral status?," we will not count the miserable child—no such child was created. If we think that we should take into account all and only those beings that have moral status—indeed, that what it is for something to have moral status is to be such that it should be taken into account—then we will find no source for Alice's reason to break her promise.

The four claims about moral status suggest the following method of finding our harm-based and benefit-based moral reasons. (I call it the "Bad Method" because I will argue that it is false.)

The Bad Method of finding harm-based and benefit-based moral reasons:
First ask:  Which things have moral status?
Then ask:  If I did this, would it harm or benefit any of those things?
There is a harm-based reason against this behavior, or a benefit-based reason in favor of this behavior, if and only if the answer to the second question is "yes."

Suppose that Alice knows she is going to break her lunch date, and she simply wants to confirm that she has good reason to do so. She will not find any harm-based reason provided by the suffering the baby would have gone through, had she created it. That baby does not exist. So the correct answer to the first question does not list the baby. According to the Bad Method, there is no reason for Alice to break her lunch date.
We should instead endorse:

The Good Method of finding harm-based and benefit-based moral reasons:
First ask:  If I did this, would it harm or benefit any things?
Then ask:  If I did this, would those things have moral status?
There is a harm-based reason against this behavior, or a benefit-based reason in favor of this behavior, if and only if the answer to both questions is "yes."


Using this method, Alice can easily see that she had a compelling reason to see her doctor, thereby breaking her lunch date. Let's ask whether there is a harm-based reason against Alice's failing to see her doctor. Had Alice failed to see her doctor, she would have created a miserable child. Her failure to see her doctor would have caused the child's misery (because had she seen her doctor, the child would not have existed, and thus would not have been miserable), so her failure to see her doctor would have harmed the child. And the child would have had moral status. Thus, she had a strong harm-based reason against failing to see her doctor: she had a strong reason to break her lunch date.

In saying that Alice had a harm-based moral reason to break her lunch date, I'm relying on the Good Method, but I'm also relying on the following claims:

Causing someone to suffer a harm is harming them.
Counterfactual dependence is sufficient for causation.

I've argued for the first claim in other work.2 The second claim is widely believed.3 To be clear, Alice's case is not a hard case: it's obvious both that she should break her promise and that it is morally permissible for her to do so. But, interestingly, conventional accounts of what moral status is suggest that the Bad Method is correct; and the Bad Method leaves Alice without any justification for her promise-breaking.

In this section, I've argued for the Good Method over the Bad Method. The motivation for the Good Method comes from the fact that our choices sometimes affect who comes to exist. Thus, the motivation depends on the fact that the moral status facts are contingent in the following way:

Sometimes a being could have existed who doesn't exist, and if they had existed, they would have had moral status.

The following is also true:

Sometimes a being exists, and has moral status, but they could have failed to exist.
Indeed, this is true of every actually-existing being that has moral status.


It is not controversial that the moral status facts are contingent in this way; this is well known. I will propose in section 3 that moral status is contingent in a more interesting and surprising way.

3.  The Ever Conscious View

Many people are moved by the following line of reasoning. Consider cases in which a pregnant woman is planning to continue her pregnancy, but is still in the first trimester. The woman has compelling moral reason to do certain things: to take prenatal vitamins, to avoid taking drugs with teratogenic side effects, to avoid binge drinking,4 to eat a certain amount of protein, et cetera. When we ask why she should behave in these ways, we point to the living human being in her body: the fetus is the source of her reasons. She should take care not to cause harm to the fetus because the fetus itself matters. Furthermore, consider the woman's attitude to the fetus. She may well love the fetus. Is her love inappropriate or misguided? No. But surely only beings with moral status are appropriate objects of love. Taking these two phenomena together—our reasons to take care of these fetuses, and the appropriateness of love for these fetuses—we conclude that these early fetuses have moral status. But surely if these early fetuses have moral status, then all early fetuses have moral status. We are drawn to the conclusion that any abortion, no matter how early, has some moral reason against it. This reasoning can be laid out as follows:

1.  Some early human fetuses are appropriate objects of love.
2.  Some early human fetuses are the source of harm-based reasons against actions: pregnant women should not smoke excessively or drink excessively during pregnancy because these fetuses matter morally.
3.  If those early fetuses have moral status, then the fetuses that die in early abortions have moral status.
Therefore:
4.  The fetuses that die in early abortions have moral status. (from 1 and 3; also from 2 and 3)
Therefore:
5.  There is a harm-based reason against early abortion, and early abortion requires at least some moral justification. (from 4)


This reasoning can seem compelling, and thus even those who think of themselves as quite liberal when it comes to abortion may end up with a moderate view according to which every abortion has some moral reason against it, and every abortion requires some moral justification. (Of course, they may also hold the view that such justification is present in the vast majority of actual abortions.) But this reasoning can be challenged. In particular, while claim 3 will seem obvious to most people, claim 3 can be denied.

Consider the difference between the early fetus that became you and an early fetus that dies in an early abortion. Suppose that we consider both early fetuses at the same stage of development and in the same health. Suppose that their lives up to this point are intrinsically indistinguishable. Nevertheless, they diverge radically at this point. One fetus dies without ever being conscious.5 The other fetus goes on to develop into a human being, living a normal human life with all its psychological complexity and richness. When we see their lives as wholes, we can see that we are considering two radically different kinds of things. One is a living being that is never conscious; in this way, it is like a plant. The other is a mentally sophisticated being that has had meaningful experiences (both good and bad), that loves others, that has personal projects, et cetera.

On the view I propose, in virtue of their different futures, the moral status facts about these two fetuses differ. You had moral status back when you were an early fetus; the ground of your moral status was your actual future as a conscious, feeling being. The fetus that dies in the abortion lacks moral status; it lacks moral status because it is a thing that is never conscious. Here is the view I propose:

The Ever Conscious View: a living being has moral status just in case it is ever conscious.
On this view, you and I have had moral status ever since we were created; and we will have moral status until we die. (But the dead bodies that were us will not have moral status.) On this view, human fetuses that die without ever being conscious lack moral status.

The Ever Conscious View may sound very surprising. How could it be that two fetuses differ in moral status merely in virtue of their futures? But the idea that the states of a being at other times in its existence are relevant to its current moral status is actually very plausible. To see this, consider past states. Suppose there are two unconscious human adults, each of which is being kept


alive by machines. One has never been conscious, while the other has been living a full human life until a recent accident. Suppose there is a procedure that has some chance of bringing each unconscious human adult to consciousness. It's clear that for the human that was until recently living a full life, we have a strong, compelling moral reason to perform the procedure. For the other human, it's not at all clear that we have a strong moral reason to do so; if we have any moral reason, it is a weak reason.6 Thus, even though their current states are very similar, these two beings differ in the kind of moral reason to which they give rise, in virtue of their past states. The idea that a being's past states are relevant to its current moral status is plausible to us. That can help us to see that a being's future states can also be relevant.

I do want to note a potential objection to the claims I've made about the two unconscious humans; and in light of this objection, I don't want to rest too much weight on these claims. Rather, I hope consideration of them can help to open our minds to the possibility that a being's states at other times can be relevant to its current moral status. The objection is as follows: the two unconscious humans differ significantly in their current states: one has stored within their brain the memories of a normal human life, even if their unconsciousness renders those memories dormant for now. This objection makes a good point; for this reason, I merely take the case of the two unconscious patients to be suggestive.

If the Ever Conscious View is true, then there are two widely held myths that we should abandon:

First Myth:  Any being that has moral status is such that, necessarily, if it exists, then it has moral status.
Second Myth:  If two beings are such that, just considering their current intrinsic properties, they are qualitatively identical, then either both have moral status, or neither has moral status.
But in abandoning these myths, we need not abandon the idea that a being's moral status supervenes on its intrinsic properties. The Ever Conscious View sees us as beings that persist through time, and sees our states throughout our lives as coming together to determine whether we have moral status. It is compatible with the following principle:

If two beings are such that, just considering their intrinsic properties throughout their lives, they are qualitatively identical, then either both have moral status, or neither has moral status.


This principle makes moral status an intrinsic matter. It blocks problematic and implausible views on which external factors can determine whether a being has moral status. For example, it is implausible that loving something endows it with moral status; rather, love is a response to a being that is independently an appropriate object of love.7 For another example, the view that a pregnant woman's intentions determine her fetus's moral status is implausible. Her intention about whether to continue her pregnancy is not the right kind of factor to make it the case that the fetus has moral status. We can see this most vividly when we imagine her changing her mind from one day to the next about whether to abort: it is implausible that the fetus gains and loses moral status as she changes her mind. By contrast, on the Ever Conscious View, while a pregnant woman's intentions can affect the fetus's moral status, they affect it by affecting its future intrinsic states: the pregnant woman is often in a position to determine whether the fetus has a future in which it is conscious.

Let me turn to laying out a positive argument for the Ever Conscious View. It is an argument by inference to the best explanation. And it is a controversial argument; I do not claim that it will convince most of my readers. But I do urge those who are inclined to believe the premises to take the argument seriously. (Those who believe the first three premises might think, upon reflection, that a commitment to all three cannot be sustained; I urge them to consider accepting the conclusion of the argument rather than giving up some of the initial premises.)

1.  Some early human fetuses are appropriate objects of love.
2.  Some early human fetuses are the source of harm-based reasons against actions: pregnant women should not smoke excessively or drink excessively during pregnancy because these fetuses matter morally.
3.
Early abortion requires no moral justification and nothing bad happens in an early abortion.
Therefore:
4.  Some early fetuses have moral status. (from 1, and also from 2)
5.  The fetuses that die in early abortions lack moral status. (from 3)
Therefore:
6.  The Ever Conscious View is true. (from 4 and 5)

This argument shows that all three initial premises are compatible. While premises 1 and 2 imply that some early fetuses have moral status, premise 3


implies that some early fetuses lack moral status. We are brought to premises 4 and 5, which together say that some early fetuses have moral status and that some early fetuses lack moral status. It has been an unarticulated presupposition of much discussion of abortion that claims like 4 and 5 cannot both be true—but the Ever Conscious View shows that they can. The move from 4 and 5 to 6 is an inference to the best explanation. For if some intrinsically identical early fetuses have moral status and some lack it, what could explain this? A difference in their actual futures can explain it; a difference in whether they are ever conscious can explain it. The Ever Conscious View explains how these independently plausible, jointly puzzling claims can all be true.8

4.  Objections to the Ever Conscious View

This section discusses three objections.

The first objection concerns the way that the Ever Conscious View takes past consciousness to be sufficient for current moral status. The objector points out that, according to the Ever Conscious View, a person who has lived a full life and who then becomes permanently unconscious, but is still alive, still has moral status. The objector then points out that in many such cases, the right thing to do is to end the person's life once they have become permanently unconscious. But on the Ever Conscious View, that would involve killing a being that has moral status, so it seems it would be impermissible.

I agree with the objector that the Ever Conscious View implies that a person who falls into permanent unconsciousness, but who is still alive, still has moral status; and I agree with the objector that in many such cases, the right thing to do is to end the person's life. But the objector is wrong that the Ever Conscious View implies that we have any reason against killing the person. We have reasons not to harm a being with moral status; and we have reasons not to wrong a being with moral status. Usually, killing a being harms them; but importantly, this is not always the case. Indeed, in most cases in which a person falls into permanent unconsciousness, killing them is better for them; and killing them does not harm them at all. (Killing them may harm them (or may wrong them) if they previously expressed a wish to be kept alive in these circumstances.) Thus, in many cases, even though the permanently unconscious person has moral status, the Ever Conscious View does not imply that we have any reason against killing them.9

The second objection concerns the fact that, on the Ever Conscious View, a single moment of consciousness is enough for moral status throughout a


being's life. The second objector asks us to consider two human fetuses. One dies before ever becoming conscious. The other has one moment of consciousness before dying. According to the Ever Conscious View, these are two radically different kinds of things: the first lacks moral status throughout its life, while the second has moral status throughout its life. The objector claims that the fetuses have such similar lives that the moral status facts about them cannot differ so radically.

I agree with the objector that it may seem intuitively strange that a single moment of consciousness is sufficient for moral status throughout a lifetime. But I do embrace this implication of my view. Let's consider what an alternative view would say about this case:

The Now Conscious Claim: The second fetus lacks moral status for most of its life, and comes to have moral status only for the moment that it is conscious.

On both the Now Conscious Claim and the Ever Conscious View, the second fetus is among the things that matter morally in this world. Its death is a morally bad thing that happens. There were moral reasons to prevent its death, and there is moral reason to lament its death. (Note that this does not mean that, all things considered, abortion of an already-conscious human fetus is morally wrong.10) This means that the first fetus and the second fetus are quite different things, even in their earlier phases before the second fetus is conscious. One of these things will never be a member of the moral community. Nothing that actually happens to it is a morally good thing in virtue of being good for it; and nothing that actually happens to it is a morally bad thing in virtue of being bad for it. The other fetus is quite different: both views agree that what actually happens to it at the end of its life is a source of reasons. I stand behind the implication that this grounds moral status throughout its life.
The third objection targets the way that moral status is contingent according to the Ever Conscious View.

The Objection to Contingency: If the Ever Conscious View is true, then whether abortion is permissible turns on whether one actually aborts. If one does abort, then the early fetus lacks moral status, so abortion turns out to be morally permissible. If one doesn't abort, then the early fetus has moral status, so abortion turns out to be morally wrong. So on the Ever Conscious View, abortion is self-justifying. But abortion is not self-justifying. So the Ever Conscious View must be false.


This objection targets the fact that moral status is contingent on the Ever Conscious View: the objection focuses on the fact that whether a person does something (whether she aborts) affects whether something (her fetus) has moral status. The objector worries that a wildly implausible result follows.

In responding to the Objection to Contingency, I will show that some of the objector's claims are understandable mistakes regarding what follows from the Ever Conscious View: the objector is mistaken to think that, if the Ever Conscious View is true, then whether abortion is permissible turns on whether one aborts; and thus, the objector is mistaken to think that, if the Ever Conscious View is true, then abortion is self-justifying. I do grant that some of what the objector says is true: if one aborts, the early fetus turns out to lack moral status, and the abortion is morally permissible because it kills something that lacks moral status. And if one does not abort (and does not miscarry), then the early fetus turns out to have moral status. However, the objector is mistaken to say that if one does not abort, then it turns out that abortion is morally wrong. The objector is implicitly committed to this:

The Bad Method of finding harm-based and benefit-based moral reasons:
First ask:  Which things have moral status?
Then ask:  If I did this, would it harm or benefit any of those things?
There is a harm-­based reason against this behavior, or a benefit-­based reason in favor of this behavior, if and only if the answer to the second question is “yes.” If the Bad Method were correct, then the mere fact that a fetus actually has moral status would mean that there is a harm-­based moral reason against killing it: the correct answer to the first question would include the fetus, and since killing the fetus severely harms it, the answer to the second question would be “yes.” So, if the Bad Method were correct and the Ever Conscious View were true, then the following would be true: if one does not abort, there is a harm-­based moral reason against abortion. Would this make abortion morally wrong? The Objection to Contingency needs one more assumption: that if there is a harm-­based reason against an abortion, then the abortion is morally wrong. Given this assumption, it would follow that if the Ever Conscious View is true, then: if abortion is performed, it is morally per­mis­ sible to abort; but if abortion is not performed, it is morally wrong to abort. Abortion would be self-­justifying.


Summing this up, the Objection to Contingency claims that if the Ever Conscious View is true, then abortion is self-justifying; this claim depends on the Bad Method. But we have already seen that the Bad Method is incorrect. And importantly, we have seen that we should reject the Bad Method for reasons that are independent of the Ever Conscious View. In its place, we should embrace:

The Good Method of finding harm-based and benefit-based moral reasons:
First ask: If I did this, would it harm or benefit any things?
Then ask: If I did this, would those things have moral status?
There is a harm-based reason against this behavior, or a benefit-based reason in favor of this behavior, if and only if the answer to both questions is "yes."

Relying on the Good Method, we can see that the Ever Conscious View does not imply that if a fetus has moral status, then aborting it would be morally wrong. Consider you and me back when we were early fetuses. Those early fetuses did have moral status (because they would be conscious in the future). Does this mean that there was thereby a moral reason against killing them? It does not. Rather, we have to ask, if those fetuses had been aborted—that is, if you and I had been aborted—would those fetuses have had moral status? The answer is that they would not have had moral status, because they would not have had futures in which they were conscious. So, back when we were early fetuses, although we had moral status, there was no moral reason against killing us. The Good Method implies, correctly, that we have moral reasons against doing things that would be a harming-of-something-with-moral-status. But it is not the case that for each thing that actually has moral status, and every possible action that would have harmed it, there is thereby a moral reason against that action.
If the Ever Conscious View is true, then actual abortions of early fetuses are morally permissible: these are harmings of things that lack moral status. But possible abortions of early fetuses—fetuses that actually have moral status—are also morally permissible: these abortions, if they were performed, would be harmings of things that would lack moral status. Abortion is permissible, whether or not it is performed. So, it's not true that abortion is self-justifying. The Good Method vindicates the Ever Conscious View's implication that moral status is contingent. If the Ever Conscious View is true, then moral status is contingent in that both of the following claims are true:


Some beings that actually have moral status might have lacked moral status.
Some beings that actually lack moral status might have had it.

You and I actually have moral status, but had we been killed back when we were early fetuses, we would have lacked moral status. And early fetuses that die in early abortions actually lack moral status, but had they continued to develop and become conscious beings, they would have had moral status.

5.  Defending the Good Method in the Face of the Asymmetry

In section 2, I argued for the Good Method. In section 4, I relied on the Good Method to defend the Ever Conscious View. In this section, I discuss an objection to the Good Method, from a claim sometimes called "the Asymmetry."11

The Reasons Asymmetry: We have a moral reason not to create a person who would have a short life of nothing but agony, but we have no moral reason to create a person who would have a good, happy life.

The first part of the claim is clearly right: indeed, it seems to be morally wrong to create a child who would have only a brief, miserable life. We don't just have a moral reason not to create in such a case—we have a strong, compelling moral reason. And the second part of the claim may also appear to be obviously true. Most of us spend long stretches of our lives going about things and just failing to create people who would be happy. We leave potential happy people uncreated all the time. And we are not doing anything wrong. It's not even a bit morally bad of us to fail to create all these people who would be happy. How could we explain why this is just fine? Why it is not lamentable at all? We could explain it by saying that there is no moral reason that we are failing to honor when we fail to create people who would be happy.

Yet, an objector to the Good Method could point out, the Good Method implies that the Reasons Asymmetry is false. As I argued in section 2, the Good Method secures the correct result that one has a reason not to create a child who would have a brief life of agony—even in the world in which one does not create him. What is crucial is that if one had created him, one's action would have harmed him (by causing him to suffer agony), and he would have had moral status, so one's action would have been a harming of something


that had moral status. But the Good Method also implies that there is a reason to create a person who would have a happy life. Suppose that you didn't actually create a happy person last year, but you could have. Had you created her, your creation of her would have benefited her (by causing her to have the good things in her life), and she would have had moral status. So there is a benefit-based reason in favor of creating her: that your action would have benefited a being that would have had moral status. The Good Method, thus, implies that the Reasons Asymmetry is false.

Does this mean the Good Method is wrong? It does not. Rather, we should see that the Reasons Asymmetry can be resisted. Instead, we should endorse the following:

The Requirement Asymmetry: It is morally wrong to create a person who would have a short life of nothing but agony, but it is not morally wrong to fail to create a person who would have a good, happy life.

The central things to say about these two kinds of cases are: that it is wrong to create a child who would just suffer, and that it is not wrong to fail to create a happy child. Indeed, there is nothing wrong with failing to create a happy child. Now, to see that it's just fine to reject the Reasons Asymmetry, we need to think about how moral reasons function. Is the following true? Whenever one fails to act on a moral reason, one thereby does something a little bit bad, and this is thereby at least a little bit lamentable. No—this is not true. Consider all of the different ways, big and small, that you and I could do a nice thing for someone we know right now. I could bake a cake for my colleague who is working hard on her book, as a little treat while she works so hard. You could offer to mow the lawn of your neighbor. There are countless favors and kindnesses we could do for others. We do some of these. But there are so many we could do that we don't do. Is this lamentable?
Is each of these something a little bit bad about how we lived our lives today? No. Rather, failing to do something nice for someone is failing to do something that would have been good—but there’s nothing bad about that failure. This is how failures to provide pure benefits work: while it is good to provide a pure benefit, it is not bad to fail to provide it. In saying this, it is important to distinguish pure benefits—giving someone something that is in itself good—from benefits that are preventions of harm.


One might think that every instance of failing to prevent a harm is at least a bit morally bad (though this would be a very strong claim). We can remain agnostic on this issue while saying everything above. So, rejecting the Reasons Asymmetry need not threaten us, once we realize that the reasons we have to create happy people are reasons to provide pure benefits to people: while we have such reasons, it is in no way bad when we fail to act on them. We're not doing anything wrong in failing to create people; we're not even doing something that's a bit bad; and that is compatible with saying that we do have a moral reason to create happy people.12

We can also say another thing about our reasons to benefit people, which is that our reasons to benefit people who do not exist independently of our benefiting actions are weaker than our reasons to benefit independently existing people. Consider my reason to bake a cake for my colleague who is working hard on her book. If I bake her a cake, she would enjoy and appreciate it. Now, if I don't bake her a cake, nothing bad happens; but it is true that she exists without the cake; there she is, working hard, cakeless. By contrast, if I fail to create a happy person, that person does not exist without all the happiness my creating her would have provided to her; she is not cakeless; rather, she simply doesn't exist.13

I have relied on two claims in saying that we should reject the Reasons Asymmetry while embracing the Requirement Asymmetry. First, the reasons we have to create happy people are merely reasons to provide positive benefits; it is good to provide positive benefits; but it is not thereby at all bad to fail to do so. Second, our reasons to benefit in creating are weaker than our reasons to benefit independently existing people. While it's true that the Good Method is incompatible with the Reasons Asymmetry, this does not threaten the Good Method.
The Requirement Asymmetry is the asymmetry we should embrace.

6. Conclusion

In this chapter, I have argued for the Ever Conscious View, on which moral status is contingent in a strong and interesting sense. Everyone will agree that the moral status facts are contingent, in that it is contingent which beings have moral status, because it is contingent which beings exist. But according to the Ever Conscious View, moral status is contingent in that some beings who actually have moral status—like you and me—could have lacked moral status; and some beings who actually lack moral status—such as an early fetus that is aborted—could have had moral status.


The contingency of moral status that the Ever Conscious View posits strikes some people as deeply counterintuitive. According to the Objection to Contingency, the Ever Conscious View leads to the absurd view that the abortions we actually perform are morally permissible, while it would have been wrong to abort the fetuses we didn't actually abort. That is not an implication of the view. I've argued that we can see that the view does not have this implication by relying on the Good Method of finding our harm-based and benefit-based reasons. And I've argued that we are driven to adopt the Good Method for reasons wholly independent of the Ever Conscious View, so this is a principled defense of the Ever Conscious View.

Notes

1. Here are two different ways of filling in the details of this case. If you think that early human fetuses that die as early human fetuses lack moral status, then this version of the case works fine: Alice is pregnant with an early fetus that has a serious medical condition, with the effects as described; her only chance to get an abortion is at lunchtime. Alternatively: Alice is not pregnant, but has learned that she herself has a condition that makes any child she conceives certain to have the horrible life as described; and Alice's best chance to prevent pregnancy is to see a doctor at lunchtime, missing her lunch with Betty.
2. See Harman 2004 and Harman 2009. In Harman 2004, I also introduce the Good Method.
3. For discussion of challenges to the idea that counterfactual dependence is sufficient for causation, see Paul and Hall 2013, ch. 5.
4. Armstrong 2003 surveys the medical evidence and concludes there is no good empirical evidence that drinking small amounts of alcohol adversely affects fetuses; the book provides a sociological analysis of why drinking during pregnancy has been pathologized. For a recent alternative perspective on the medical evidence, see Gunter 2019.
5. I am assuming that fetal consciousness arises sufficiently far along in fetal development that some fetuses that die in abortions have never been conscious.
6. See note 12 for explanation of the nature of this reason.
7. See Harman 2007.
8. In Harman 1999, I argued for the following view: The Actual Future Principle: An early fetus that will become a person has some moral status. An early fetus that will die while it is still an early fetus has no moral status. This view implies the truth of claims 4 and 5; but it's just a placeholder. It is too specific to be a satisfying explanation of the moral status claims in 4 and 5. What we need is a general, coherent view that implies 4 and 5; the Ever Conscious View is that. For discussion of the Actual Future Principle, see Nobis 2002, Roberts 2010, and others.
9. Alternatively, we might say the following. Killing a living being always harms that being, and yet sometimes it is all things considered best for the being because it prevents worse harm. For most people, after living a full human life, being kept alive for a long time while permanently unconscious is a significant harm to them.
10. See Thomson 1971.
11. I discuss two different versions of "the Asymmetry," so I give them each a more specific name. For early discussion of "the Asymmetry," see Narveson 1967 and McMahan 1981. For more recent discussion, see Roberts 2011, Johann Frick, "Conditional Reasons and the Procreation Asymmetry" (manuscript), and others.
12. Earlier we considered an unconscious human adult, kept alive by a machine, who has never been conscious. What kind of reason do we have to bring this adult to consciousness? It is a reason to provide pure benefit: we have moral reason to do it, but it's not a bit bad to fail to do it. Bringing this adult to consciousness is morally akin to creating a person.
13. See Harman 2009, pp. 147–8, for discussion of the difference in strength between our reasons regarding those we create and those who independently exist.

References

Armstrong, Elizabeth (2003), Conceiving Risk, Bearing Responsibility: Fetal Alcohol Syndrome and the Diagnosis of Moral Disorder. Baltimore: Johns Hopkins University Press.
Frick, Johann, "Conditional Reasons and the Procreation Asymmetry" (manuscript).
Gunter, Jen (2019), "Drinking While Pregnant: An Inconvenient Truth," New York Times, February 5.
Harman, Elizabeth (1999), "Creation Ethics," Philosophy and Public Affairs 28.4: 310–24.
Harman, Elizabeth (2004), "Can We Harm and Benefit in Creating?" Philosophical Perspectives 18: 89–113.
Harman, Elizabeth (2007), "Sacred Mountains and Beloved Fetuses: Can Loving or Worshipping Something Give It Moral Status?" Philosophical Studies 133: 55–81.
Harman, Elizabeth (2009), "Harming as Causing Harm," in Harming Future Persons, ed. Melinda Roberts and David Wasserman. Springer: 137–54.
McMahan, Jeff (1981), "Problems of Population Theory," Ethics 92: 96–127.
Narveson, Jan (1967), "Utilitarianism and New Generations," Mind 76: 62–72.
Nobis, Nathan (2002), "Who Needs the 'Actual Future Principle'?: Harman on Abortion," Southwest Philosophy Review 18.2: 55–63.
Paul, L. A. and Ned Hall (2013), Causation: A User's Guide. Oxford: Oxford University Press.


Roberts, Melinda (2010), Abortion and the Moral Significance of Merely Possible Persons: Finding Middle Ground in Hard Cases. Heidelberg: Springer.
Roberts, Melinda (2011), "The Asymmetry: A Solution," Theoria 77.4: 333–67.
Thomson, Judith Jarvis (1971), "A Defense of Abortion," Philosophy and Public Affairs 1.1: 47–66.


7: Moral Status and Moral Significance

Ingmar Persson

1.  Moral Significance More Fundamental than Moral Status

In this chapter I shall discuss the relations between moral status and what I call moral significance. Something has moral significance just in case it is something such that having it in itself provides a moral reason. I shall argue that the moral status of something is dependent on what is morally significant about it. Nothing can have moral status if nothing is morally significant about it. On the other hand, something can be morally significant, even though nothing has moral status in virtue of it. The notion of moral significance is therefore the more fundamental notion, and it is a notion we cannot manage without, whereas introducing the notion of moral status is redundant and complicates matters unnecessarily.

Suppose, contrary to what I believe (and argue in Persson 2017: 5.2), that there could be 'free-floating' experiences—that is, experiences without subjects having them, Humean bundles of experiences—of pleasure and pain, enjoyment and suffering, and of other positive and negative kinds. Then the existence of such experiences would plausibly be morally significant: there would be a moral reason to bring it about that the world contains more experiences of pleasure and fewer experiences of pain. But it does not follow that these experiences have moral status. Negative experiences could not have moral status like positive experiences, and positive experiences would have moral status before they even began to exist, since there would be moral reason to cause them to exist.

Or suppose, what I believe to be true (and argue in Persson 2017: ch. 2), that one way of making the world morally better would be by bringing into existence beings whose lives contain a surplus of pleasurable experiences over painful experiences. Then the possibility of such beings coming into existence would be morally significant, but we should not say that the possible beings that could be made actual have moral status.
For this would suggest, misleadingly, that possible beings exist in a sense in which they do not exist—e.g. exist in space and time—whereas their existence in this sense is merely possible (cf. Persson 2017: 60–1 and Parfit 2011: Appendix J).

Ingmar Persson, Moral Status and Moral Significance. In: Rethinking Moral Status, edited by Steve Clarke, Hazem Zohny, and Julian Savulescu. Oxford University Press. © Ingmar Persson 2021. DOI: 10.1093/oso/9780192894076.003.0007

These observations give a clue to a necessary condition for something having moral status: it must be a particular, something that exists in space and time. By contrast, what has moral significance is rather a property or feature of such a particular in virtue of which it possesses moral status, i.e. which in itself provides a reason for how this particular should be morally treated. As will transpire, these properties are usually thought to include some mental properties.

It is natural to take the moral status of something as some sort of value (or worth) that it has, but when something has value, it must have value in virtue of some other properties it possesses. Then it is commonly said that the value of the thing supervenes on these other, subvenient properties that belong to it. This implies that if something else were to have these subvenient properties, it would have the same value as well, other things being equal. But if it is necessarily true that the value of a particular supervenes on some properties of this particular, then value does not irreducibly or non-derivatively belong to the particular. For if it is a logically necessary truth that, if a particular P possesses value, there must be some feature F of P such that P has value in virtue of having F, the only possible explanation of how this could be a logically necessary truth is that when P is said to have value, what is meant is that P has some feature such that its exemplification of it has this value. If value had irreducibly belonged to P, as mass or shape could irreducibly belong to something, it would be unclear why there logically must be properties of P which explain or underlie its possession of value.
This seems to go against the Humean dictum that ‘distinct existences’ cannot be necessarily connected, for P’s having value and P’s having some feature such as F would then be ‘distinct existences’. Now, imagine that we have isolated the feature F of P on which its value supervenes. Then we could in principle whittle away all other features of P without removing its value, since its value is based only on its possession of F. So, it is not P in all its concreteness that is of value, but rather its exemplification or instantiation of the feature F. Ultimately, it is then this remainder that has moral status—that is, a ‘thinner’ particular than the apparent one, e.g. the parts of a brain minimally sufficient to sustain some experiences of a creature rather than the creature itself. But this particular might appear too thin to merit being in possession of moral status. On the other hand, a ‘sturdier’ particular may persist, even though what grounds its moral status goes out of existence. If this particular does not change in observable respects, it might go unnoticed that it has been robbed of its moral status.



2.  Capacity for Sentience as a Basis for Moral Status

As regards what are morally significant properties that could give something moral status, a familiar idea is that among them we find the capacity for sentience, the capacity to have pleasurable and painful and other kinds of positive and negative experiences. But notice that, while what has been said so far about moral status is intended to answer the analytical question about how the notion of moral status should be understood, the present claim about the capacity of sentience is a morally substantive claim about what features could provide something with moral status. The concept of moral status is normatively neutral in the sense that it does not specify what property of a particular it is that provides a moral reason.1 I shall not here defend any such morally substantive claim that specifies this property, but merely explore the implications for the notion of moral status of some familiar and more or less plausible candidates for being such a property.

It is a plausible idea that the fact that beings have pleasurable and painful experiences is a morally significant fact because having these experiences is intrinsically good or bad for these beings, at least if they have appropriate desires, or liking and disliking, as regards these experiences. Treating these beings morally well or badly could then consist in seeing to it that their capacity for sentience is exercised by their having such positive or negative experiences. It seems natural to link what is morally good or bad, good or bad from an impersonal point of view, to what is good or bad for beings, to what is 'relationally' good. The simplest (but to my mind not most plausible) way to do this is the basic utilitarian idea that what is best from a moral point of view is the greatest net sum of what is good for all beings.
But if something like this is on the right lines, then in order for it to provide something with moral status, the capacity for sentience must be 'exercisable', that is, the owner or bearer of the capacity must be in external circumstances such that the capacity can be exercised in them, or such that moral agents can transform them to circumstances in which it can be exercised with their help. If, say, beings whose capacities for sentience are exercisable on earth were to be irrevocably deported to another planet where these capacities are not exercisable—where these beings would be permanently unconscious—these beings would no longer have any moral status, but would be 'as good as dead'. It is of no value to anyone to have a capacity which cannot be exercised, e.g. to have a sense of sight if you live in permanent darkness.


It follows that moral status cannot be an entirely internal feature that belongs to a being in itself, apart from its relations to things external to it. In other words, by taking the property of having an exercisable capacity for sentience to be a property the instantiation of which by something grounds its moral status, we would include among the status-grounding morally significant properties a property that is not wholly internal to its subject, but partly external to it. It would be partly external to the subject because it involves a reference to certain circumstances external to the subject. This makes talk about a particular's moral status somewhat awkward because the fact that this status is moral suggests that we are dealing with a kind of status that belongs to the particular in itself and follows it around whatever its external circumstances, as long as its internal properties are intact. We expect that the feature of something that in itself provides a moral reason is an internal property of it, and object to suggested features that are not. For example, it has been objected to viability outside the womb as a proposed basis of the moral status of a foetus that it is dependent on external circumstances, such as the equipment of the clinic where the foetus is delivered. Other kinds of status are not like moral status; for instance, the political status you could acquire by being elected president could disappear if the election is officially declared invalid.

Suppose, however, that we are unperturbed by the fact that the proposed basis of moral status turns out to be partly external. We might then be prepared to go further and claim that not only the actual possession of an exercisable capacity to have experiences that are intrinsically good or bad for the subject—i.e.
actual possession of the form of consciousness that I have referred to as sentience—is sufficient for having moral status, but that even the possession of an actualizable potential to develop such a capacity that, for instance, a human foetus in the womb normally possesses, is morally significant and suffices for it to be endowed with moral status. This is another property of external extension. I believe this further step to be plausible, though I shall not argue the point here (but see Persson 2017: ch. 2). This extension requires that non-conscious foetuses can be numerically identical to beings that later possess a capacity for consciousness, but it is unproblematic that there is identity of human organism here. What is philosophically controversial is whether or not we are identical to such organisms and our—so-called personal—identity is the identity of such organisms.2 It must be conceded, though, that it cannot be precisely determined when the bearers of a potential to acquire a capacity of sentience


begin to exist, that is, when, for instance, human cells can be said to have formed a multi-cellular human organism (see Persson 2017: 2.1).

3.  Why Capacity for Sentience but not Organic Life is Morally Significant

A different sort of extension of the class of beings that are endowed with moral status would result if the property of having organic or biological life were morally significant and sufficient for moral status. Life in this sense consists primarily in having a metabolism which assimilates material from the environment and makes it part of the body. This proposal would extend the class of beings with moral status because, although being biologically alive is apparently a necessary condition for having an exercisable capacity for sentience and even for having an actualizable potential for developing such a capacity, it is not sufficient, as the existence of plants and various micro-organisms shows. The considered extension would include these organisms among those items that have moral status. This proposal has the virtue that the property of having organic or biological life seems to be a property that belongs to an organism in itself.

It is, however, difficult to understand why the property of being biologically alive should be morally important in itself. This would imply, for instance, that it is morally important that human beings and other animals go on living in a persistent vegetative state. More plausibly, the value of biological life is extrinsic and derives from the fact that it is necessary for the capacity for consciousness. Imagine that it was not such a necessary condition, but that the organic matter of some conscious being could transmute into some inorganic matter without there being any change in respect of what the being was conscious of; in respects observable in everyday life its existence would continue as before. Then it would seem that the occurrence of this transmutation would be a matter of moral indifference. Such transmutations would show that the property of being biologically alive is not in itself morally significant.
This claim stands in need of further clarification, however. We have seen reason to stretch the class of status-generating, morally significant properties to properties that are not internal, but extend externally. Now it will not do to declare that such a property must have intrinsic value for its subject, since the property of having an exercisable capacity for consciousness, let alone the property of having an actualizable potential to develop such a capacity, does not have such value for the subjects having them. Consider a world in which


there are beings with such exercisable capacities or actualizable potentials, but they are never exercised or actualized. This is not sufficient for there to be anything in this world that has intrinsic value for these beings. What has intrinsic value is that exercisable capacities are actually exercised by a subject having pleasurable or other experiences that are intrinsically valuable for it. It is such exercises we have a moral reason to bring about for their own sake. The rationale for the requirement that a psychological capacity must be exercisable to ground moral status is precisely that it is exercises of the capacity that in the end are of value for the subject. The value of an exercisable capacity for having such experiences or of an actualizable potential for such a capacity is thus extrinsic for the subjects possessing them.

Having an exercisable capacity for sentience is then extrinsically valuable because it is a necessary condition for exercises of this capacity by having intrinsically valuable experiences, e.g. of pleasure. And having an actualizable potential for acquiring such a capacity is extrinsically valuable because it is a necessary condition for an actualization of this potential by acquiring this exercisable capacity (which in turn is necessary for the capacity to be exercised by having intrinsically valuable experiences). But we have just seen that the property of having organic life might be a necessary condition for such a potential and capacity—does it thereby qualify as morally significant? No, for the presence of an exercisable capacity for having experiences is morally significant because in itself it provides a moral reason to bring it about that this capacity is (not) exercised if the experiences will be positive (negative).
Likewise, if there is an actualizable potential for such a capacity, this itself provides a moral reason for bringing it about that this potential is actualized, since otherwise a particular exercisable capacity will not exist. But there being organic life is not in itself a moral reason for or against promoting such life: there being such a reason depends on the contingent fact that there is also the distinct phenomenon of an exercisable capacity for having positive and negative experiences or an actualizable potential for such a capacity for which organic life is necessary. It is these phenomena that are morally significant.

4.  The Endless Variability of Status-Grounding Mental Capacities Among Humans

Another biological candidate for being morally significant is the property of being a human being in the sense of being a member of the biological species

Homo sapiens. We should distinguish the property of being a human being from the property of being human: for example, the cells of a human being have the property of being human, but they are not human beings. I take it that the more plausible—or least implausible—view is that it is the property of being a human being that is morally significant. The idea need not be that this property by itself is sufficient for moral status—so that even a dead human being has such a status—but, more plausibly, that this property serves to raise the moral status of organisms that are biologically alive or (at least, potentially) conscious. Thus, a moral status-grounding property fully spelt out would be being a living or (potentially) conscious human being.

However, the idea that something’s being a human being elevates its moral status is implausible, as many have argued. Species membership implies, roughly speaking, genetic similarity: organisms belong to the same sexually reproductive species if they are so genetically similar that there is interfertility between individuals of different sexes to the extent that they can together produce fertile offspring.3 Such genetic similarity standardly manifests itself in morphological and behavioural similarities, in organisms looking and behaving alike ‘in the field’, but there could be such a morphological and behavioural similarity without the requisite genetic similarity. We could imagine there being—perhaps on a different planet—individuals who are as capable as normal human beings of being self-conscious, acting rationally and morally, making scientific discoveries, creating art, and participating in complex social enterprises. These individuals might also be morphologically so similar to human beings that in everyday circumstances they would be indistinguishable from human beings.
It would surely be preposterous to deny that these extra-terrestrials have the same moral status as human beings if they were found to be so genetically different that there would be no interfertility with humans. Such considerations show that the view that species membership is morally significant—so-called speciesism—is not plausible.4

But this thought-experiment also indicates another set of features that with greater plausibility could be claimed to be morally significant and the basis of higher moral status. For it could inspire the idea that the ground for the higher moral status of human beings is not their species membership, but the more advanced mental capacities characteristic of human beings, such as their capacities for self-consciousness, rationality, morality, and long-term life-planning. Let us call these capacities personal capacities and the beings who are equipped with them persons, as opposed to the sentient beings who have merely the more humble psychological capacities of perceiving and feeling, along with desires

with respect to what they perceive and feel. It has often been claimed that in virtue of possession of their (exercisable) personal capacities, human beings have a more elevated moral status than merely sentient beings (as I myself argue in Persson 2017: ch. 1).

As many have observed, this proposal has two noteworthy implications. First, it implies that beings who are not biologically human could have as high a moral status as some human beings since, as has emerged, personal capacities could occur—on earth or on other planets—without the genetic underpinning that they have in human beings. Secondly, this proposal implies that not all beings who are genetically human have the same elevated moral status, since they do not all possess personal capacities to the same extent. Some have them to a higher degree than others, and some do not have them at all. Of those who do not actually have them, some have the potential to develop them, whereas other humans, e.g. anencephalic infants, are so disabled that they do not even have this potential. It seems inescapable that humans who cannot reach the mental level typical of humans would not have any higher moral status than non-human animals on the same, comparatively low mental level as they are. In any case, it would follow that not all humans have equal moral status, as might be true according to a radical form of anthropocentric speciesism.

There is a related uncomfortable implication of the view that psychological capacities are the basis of moral status. It is conceivable that on some other planet there is a species whose members on average are equipped with these capacities to a much higher degree than humans: they are on average more rational, moral, artistically creative, and so on. Then their average moral status would be higher than that of humans. This is something that many of us would be reluctant to admit.
However that may be, if the higher moral status of persons is based on exercisable personal capacities, there would be countless degrees of moral status. A parallel problem arises with respect to sentient beings, since their more humble form of consciousness also shows innumerable different degrees of development. Therefore, according to these views, there is the problem of a potentially endless variability of moral status. This variability goes further than might initially be thought, since whenever we have moral reason to treat one being better than another for their own sake, there is a case for saying that the former has a higher moral status if we let external features slip in. Imagine, for instance, that one being, Old, cannot survive for as long as another, Young, but they are equal in respect of psychological capacities that are exercisable at the present time. Then there is nonetheless a case for saying

that Old has a lower moral status than Young, since being determined to die sooner is a morally significant feature, giving us a moral reason, for instance, to save Young rather than Old now. It is no good objecting that possible future lifespan—and, thus, the extent to which psychological capacities are exercisable in the future—is no internal feature, for the same is true of psychological capacities exercisable at present. If this diachronic aspect is taken on board, it will be clearer still that morally significant features vary immensely not only between members of different species, but also between members of the same species. This excessive variability means that it will be hard to categorize groups of beings as having the same moral status; but if we are approaching a situation in which more or less every individual has its own level of moral status, it is pointless to talk about moral status.

A brusque way of dealing with this bewildering variability would be to give up talking about moral status and be content with the idea that the sort of psychological capacity, or potential for it, that something is equipped with is morally significant in virtue of its tight connections to the occurrence of intrinsically valuable experiences. Needless to say, this would also take care of another problem countenanced earlier, namely the awkward implication that moral status cannot be an internal feature of its owner.

5.  Further Personal Features that Could be Morally Significant

Let us, however, be patient and explore the terminology of moral status a bit longer in order to trace out further implications of it. There is a reason not yet considered here for which it could be held that individuals who desire to go on living and have plans about what to do with their future lives have a higher moral status, which makes it more wrong, e.g., to kill them than to kill sentient beings (with lives containing a surplus of positive experiences ahead of them). Michael Tooley (1983: ch. 5) argues that these individuals have a right to life. This would be another morally significant feature. If this right is conceived as a right to decide what to do with your life—as long as you do not harm others—which is what could be meant by possessing autonomy, it is plausibly a prerogative of persons. But it is not evident that the right to life should be so narrowly construed that sentient beings, or even beings with a potential to acquire sentience, like human foetuses, cannot be endowed with this right. I shall not attempt to settle this issue, but simply call attention to the fact that if somebody has a right to something, it must be valuable for them to make use of it. Otherwise, the right would hardly be

used, and so would be forfeited through disuse. This link between rights and value seems the most credible explanation of why we are not said to have rights to things that are of no use to us, such as the waste products we produce.

It is also plausible that only persons are subjects of desert, that they can be held to be deserving of being treated in ways that are better or worse for them. For this seems to require that they can be held responsible for such things as actions that have good or bad effects on themselves and others, and in turn this requires that these actions are intentionally or knowingly performed. Again, this is a feature that refers to external circumstances, so to the extent that desert is a component of moral status, it hinges on such circumstances.

Desert is a kind of worth: those who deserve praise are praiseworthy, and those who deserve blame are blameworthy. Obviously, this worth varies as widely among persons as the capacities that constitute personhood. Thus, the introduction of desert as a ground for moral status arguably aggravates the problem of its mind-boggling variability among persons. Questions like the following naturally arise: if some persons deserve blame and punishment, could this lower their moral status to the level of sentient beings, or even lower? Moreover, even if we could justify the idea that all persons have the same high moral status in spite of the huge variation of their exercisable personal capacities,5 we would face the problem of why their moral status would be the same if their deserts differ. The rewards and punishments deserved are supposed to be something whose goodness or badness for the subjects equals or is proportionate to the goodness or badness of their impact on the world.
This relational state of people getting what they deserve is reasonably something that is intrinsically good, and their getting more or less than the requisite proportionality is something that is intrinsically bad. But this is a different kind of intrinsic goodness or badness from the intrinsic goodness and badness of pleasant or painful experiences. Whilst these experiences are good and bad for the subjects having them, the goodness of their getting just what they deserve rather than more or less is an impersonal, moral value. For instance, the pleasure that some criminals in fact enjoy need not be any less good for them because it is undeserved, but there is still something bad about them getting it.

If justice (or fairness) is a matter of getting what you deserve or have a right to, and only persons have deserts and rights, merely sentient beings are outside the scope of justice. However, some theorists, including myself (see Persson 2017: ch. 7), deny the applicability of the concepts of desert and rights to persons like us, and affirm that justice is instead a matter of a type of equality that includes all sentient beings. The idea is roughly that if there is nothing

like deserts and rights that could make it just that some are better off than others, justice requires that all be equally well off. Personally, I am convinced, contrary to what utilitarians believe, that any plausible moral theory needs some account of what makes a distribution of what is good and bad for beings just, and that we are thereby committed to an impersonal value of justice that determines the moral value of outcomes alongside the relational value of experiences. It follows that the connection between moral value and relational value will not be as simple as it is according to utilitarianism, where moral goodness and badness are simply determined by the sum (or average) of what is (relationally) good and bad for beings. The extent to which a distribution of what is good and bad for beings satisfies the demands of justice is also a factor that determines the moral value of outcomes, i.e. it is morally significant.

If justice is a matter of desert, we have seen that the issue of the moral status of persons is further complicated, since the problem of the endless variability of moral status is aggravated. Should justice instead be a matter of an all-inclusive equality, which denies the existence of deserts and rights, this complication disappears, but the fact that justice requires that all be equally well off does not imply that their moral status is the same. For there are still differences between individuals by reference to which other moral considerations than justice—notably, considerations of utility—could tell in favour of inequalities between them. It might be held that, in virtue of their greater psychological capacities, there is a moral reason to treat some individuals better than others because they would then contribute more to goodness overall, by making themselves or others better off.
To give a simple illustration, suppose the psychological capacities of some individuals enable them to get more pleasure from the same resources than others; then there is from a (hedonist) utilitarian perspective more of a moral reason to give them these resources, since it contributes more to the total sum of pleasure in the world. This is, however, a reason that could face opposition from reasons of justice if these individuals are already better off.

6.  Deontological Constraints: Agent-Focused Rather than Victim-Focused

According to standard deontological theories, it is harder to justify morally harming innocent and non-consenting people as a means to a good end than doing this as a foreseen side effect of the good end. It would then be another

morally significant feature when such people are harmed, whether this happened as a means or as a foreseen side effect. Frances Kamm refers to this feature as our value of ‘inviolability’: ‘the good of being someone whose worth is such that it makes him highly inviolable and also makes him someone to whom one owes nonviolation’ (2007: 254). On a view she has defended, inviolability is ‘victim-focused’ (2007: 28) in opposition to ‘agent-focused’. That is, the fact that we are worth protection from being harmed as a means to some good end is a feature of ours as the victims of acts, and not a feature of the agents who act upon us.

However, our having the worth of inviolability goes beyond our having rights, such as the right to life. As Kamm herself realizes (2007: 251–2), even if it is hypothesized that we have a right to life, it could be permissible to infringe this right by killing one of us in order to prevent five other people from being killed, because this would minimize the total number of violations of rights to life.6 Under these circumstances we would not have the inviolability that Kamm ascribes to us. She claims that, on her victim-focused account of inviolability,

    the explanation of inviolability does not focus on what I do rather than what others do. The fact that if I kill someone, I would be acting now and the victim would be mine does not play a pivotal role in my explaining why I must not kill him . . . His right, not my agency, constitutes the moral constraint. The fact that the other five people have this same right does not diminish the constraint against violating the one person’s rights that I come up against. (2007: 29)

But why does Kamm assert that ‘I come up against’ only ‘the constraint against violating the single person’s rights’ and not also against constraints against violating the rights of the five people whom I could save by killing the one? Presumably because I am facing the option of killing the one now, as opposed to the option of letting the five die. The fact that I have the relation of killing to the one and the relation of letting die to the five is, however, an agent-focused consideration, since it concerns what is true of the agent rather than the victims. It may be that if I do not save the five, they are killed as a means by some other agent. But it is hard to see how it could affect the status of the victims whether they are killed as a means by me or somebody else. It is more plausible to hold that it could affect my status as an agent.

Therefore, I believe an agent-focused component to be inescapable in an account of Kamm’s notion of inviolability, even though it also comprises victim-focused components, as we may grant that the possession of rights

does. Consequently, if inviolability is a morally significant feature (which I contend that it is not in Persson 2013: ch. 6), it is not a feature of victims in virtue of which they can be held to have a moral status other than the one they have in virtue of their rights, since inviolability concerns the relation that agents have to their victims. It concerns the moral significance of actions in contrast to omissions.7

It might be said that an individual’s having moral status presupposes that there are (other) moral agents, since the notion of moral status implies that there is some feature of that individual that in itself provides a moral (rather than a prudential) reason for treating that individual in some manner. Since the existence of such agents would be an external fact about the presumed status-bearer, this would be another consideration that would render talk about moral status dependent on external circumstances. But it is far from obvious that the fact that there are moral reasons implies that there are agents who can have those reasons. Nonetheless, the fact that there are moral reasons to treat individuals in some ways cannot plausibly be claimed to be an intrinsic feature of these individuals, even if the features that provide the reasons were to be intrinsic features of the presumed status-bearers.

7. Conclusion

Let me try to summarize the findings of this survey of morally significant features and their relationship to moral status. The general conclusion is that, while the concept of moral status is applicable, there is little use for it. This is because there are exceedingly many degrees of moral status, for morally significant properties, such as psychological capacities exercisable over time, vary endlessly among beings belonging to the same species as well as beings belonging to different species. Moreover, focus on moral status may mislead us to overlook or misconstrue some morally significant features because nothing has moral status in virtue of them. For instance, it may be morally significant that the world can morally be made better by the creation of beings who will lead lives that are good for them. Or we may misconstrue the distinction between harming as a means to an end and as a foreseen consequence of an end, or between acts and omissions, as having to do with the moral status of victims, when, if it is morally significant at all, it rather concerns agents and their actions.

The endless variability of moral status is partly due to the fact that, contrary to the common belief that the moral status of beings is based on features that are internal to the beings, the most plausible bases—such as the possession of

exercisable capacities for some form of consciousness—are features that involve relations to something external to the beings. Less awkward may be the fact that it is only a minor portion of the internal features of beings that contribute to their moral status.

None of this implies that we cannot truly make some simple claims about moral status, such as that there are beings who have moral status, and that the moral status of some of them is higher than that of some others. But these claims do not take us far, and they could be replaced by claims to the effect that there are some beings such that benefiting and harming them is morally significant, and that benefiting and harming some beings is morally more significant than benefiting and harming other beings. By contrast, we cannot settle moral issues without determining what features are morally significant and how great their relative significance is. Sometimes morally significant features are not exemplified by anything that could properly be described as having moral status. Sometimes morally significant features that are exemplified by something that has moral status cannot properly be said to be part of the basis of its moral status. But the weight or strength of such features still contributes to determining what we morally ought to do to these beings as much as those morally significant features that can be said to ground the moral status of beings.8

Notes

1. By contrast, David DeGrazia takes the concept of moral status to entail having interests (2008: 183), but I find it implausible to hold that those who claim that something, e.g. anencephalic infants, have moral status, though they lack interests, are guilty of a contradiction. If true, the claim that something has moral status if and only if it has interests is a substantive and not a conceptual truth. Since DeGrazia packs more into the concept of moral status than me, he can view it as a ‘convenient shorthand’, along with endorsing my view that it is ‘redundant’ (2008: 184).
2. That there is identity here is what champions of the biological or animalist theories of our identity, like Olson (1997), maintain, and what is denied by champions of psychological theories, like Parfit (1984: pt III) and McMahan (2002: ch. 1). I present my view of our identity in e.g. Persson (2017: 3.1) and (2016).
3. It is also arguable that membership of the same species implies that the genetic similarity is not accidental, but has arisen because of a shared origin. However that may be, the argument in the text against the moral importance of species membership goes through.
4. This has been argued by many, probably most famously by Peter Singer, e.g. in (1993), but see also Tooley (1983: 61–77) and McMahan (2002: 209ff). I argue against speciesism in Persson (2017: ch. 6).

5. Perhaps by appealing to something like John Rawls’s idea of a ‘range property’ (1971: 508), though I have argued that this is very implausible (Persson 2017: 10.1).
6. This would be what Robert Nozick has referred to as a ‘utilitarianism of rights’ (1974: 28).
7. In Persson (2013: chs 3, 4 & 6), I argue that our intuitions to the effect that the distinctions between acts and omissions and doing something as a means rather than as a foreseen effect have moral significance derive from our sense that we are more responsible for what we cause than for what we let happen, and for what we cause more directly than less directly.
8. Many thanks to participants of the conference ‘Rethinking Moral Status’ for valuable comments, in particular to David DeGrazia and Steve Clarke.

References

DeGrazia, David (2008) ‘Moral Status as a Matter of Degree?’, Southern Journal of Philosophy, 46: 181–98.
Kamm, Frances (2007) Intricate Ethics, Oxford: Oxford UP.
McMahan, Jeff (2002) The Ethics of Killing, New York: Oxford UP.
Nozick, Robert (1974) Anarchy, State and Utopia, New York: Basic Books.
Olson, Eric (1997) The Human Animal, New York: Oxford UP.
Parfit, Derek (1984) Reasons and Persons, Oxford: Clarendon Press.
Parfit, Derek (2011) On What Matters, vol. 2, Oxford: Oxford UP.
Persson, Ingmar (2013) From Morality to the End of Reason, Oxford: Oxford UP.
Persson, Ingmar (2016) ‘Parfit on Personal Identity: Its Analysis and (Un)importance’, Theoria, 82: 148–65.
Persson, Ingmar (2017) Inclusive Ethics, Oxford: Oxford UP.
Rawls, John (1971) A Theory of Justice, Cambridge, MA: Harvard UP.
Singer, Peter (1993) Practical Ethics, 2nd edn, Cambridge: Cambridge UP.
Tooley, Michael (1983) Abortion and Infanticide, Oxford: Clarendon Press.

8
Moral Recognition and the Limits of Impartialist Ethics
On Androids, Sentience, and Personhood

Udo Schuklenk

1.   Moral Status and Moral Standing

Let me start, perhaps, by clearing the ground a little, and explain how I understand ‘moral standing’. Whoever has moral standing is someone who can be morally wronged by actions undertaken by moral agents. A moral agent is someone who has the capacity to distinguish between what is morally right and what is morally wrong. If you have moral standing, and I am a moral agent, my actions affecting you can be morally right, morally wrong, or morally neutral. If something has no moral standing, whatever it is that I do or do not do that affects it has no moral implications, as far as it is concerned. If a stone has no moral standing, my picking the stone up and throwing it is morally neutral as far as the stone is concerned. Of course, that does not mean it is morally neutral with regard to whoever gets hit by the stone that I am throwing. If non-human animals, or people with certain disabilities, or androids have no moral standing, your treatment of them, whatever you might choose to do or not to do, would—all other things being equal—be a morally neutral activity.

Allen Buchanan has suggested that we should distinguish between ‘moral status’ and ‘moral standing’. To him, saying that something has moral standing simply means that it counts morally in its own right. However, the moral status of entities that have moral standing is not necessarily the same. Some beings with moral standing might have a higher or lower moral status than other beings who also have moral standing (Buchanan 2009). We can have any number of arguments about the normative grounds on which we could base such rankings, but it is not unheard of in moral philosophy that sentient persons are seen as having a higher status than sentient non-persons.

Udo Schuklenk, Moral Recognition and the Limits of Impartialist Ethics: On Androids, Sentience, and Personhood. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Udo Schuklenk 2021. DOI: 10.1093/oso/9780192894076.003.0008

This view seems to align with the moral intuitions that many of us share. Buchanan’s approach could, for instance, lead us to grant sentient non-human animals moral standing, and moral status. Conceding this would still permit us to disagree vigorously about the question of whether a non-human animal’s moral status should be the same as that of a human being with similar intellectual capacities, or that of a person. What seems uncontroversial to Buchanan, and others like him, is that ‘moral status admits of degrees’ (DeGrazia 2008, 181).

Mary-Ann Warren, in her book Moral Status: Obligations to Persons and Other Living Things, unlike DeGrazia, seems to use the terms ‘moral status’ and ‘moral standing’ interchangeably: ‘to have moral status is to be morally considerable, or to have moral standing’ (Warren 1997, 3). She still finds herself in the business of wanting to create pecking orders, and there is a good reason for that. Status comes here in degrees depending on a being’s capacity for sentience. Insects find themselves lower in the pecking order than mammals.

Of course, that personhood should trump mere sentience, at least when it comes to the infliction of pain and suffering, is far from self-evident, and arguments in support of such a view have not been terribly persuasive. Warren, for instance, wrote, ‘a creature’s probable degree of mental sophistication may be relevant to the strength of its moral rights, because mentally sophisticated creatures are apt to be capable of greater suffering and probably lose more which is of potential value to them when they lose their lives’ (Warren 1986, 166). How does Warren know that ‘mentally sophisticated creatures are apt to be capable of greater suffering’? She cannot know this, of course, but as so often when these judgements are passed, conveniently, those who make those determinations are people who happen to be also persons, and they value themselves.
It probably cannot hurt to place oneself on top of whatever pecking order one creates. Much of the debate around criteria for moral status and moral standing is arguably motivated by the need to provide justifications for overriding others’ interests. If we take the debate about sentient non-human animals as our starting point, it becomes clear that arguments about moral standing as well as moral status rely strongly, if not entirely, on drawing parallels from human capacities and concomitant needs, to those of sentient non-human animals who seem to have comparable, if not identical needs. In fact, it is the only substantive argumentative move that is made. Ironically, that is oftentimes celebrated as a radical move, one that is transcending anthropocentrism. However, conceptually this argumentative move transcends anthropocentrism only insofar as anthropocentric criteria are applied to non-humans who are

relevantly comparable to paradigmatic humans, but what drives the argument here is the human experience. The content of moral standing is inconceivable without us humans having experienced something that demands particular moral consideration of our needs, and we grant the existence of those needs to sentient non-human animals that seem to have similar needs. It is quite conceivable that if we could not experience pain and suffering, no concern would be raised about sentient non-human animals with such needs.

After all, most of us are typically not too deeply concerned about the moral standing of inanimate objects such as stones and rocks. We cannot fathom what the inner life and needs of a stone—if any—might be like, therefore we conclude that stones don’t matter morally. I wonder whether this is not indicative of our failure to imagine anything that is not somehow, somewhat like us and our experiential world. David Hume was right: our ‘moral feelings’, as he put it, make up the ‘essential and instinctive foundations of all human morality’ (Warren 1997, 12). This is an issue I shall return to in just a moment.

2. The Easy Bits: Chimeras and Cyborgs

There has been some excited reporting about the injection of reprogrammed human-derived stem cells from patients with Parkinson's disease into the brains of monkeys (Kikuchi et al. 2017). Concerns have been raised about the (well-dissected) ethics of animal experimentation involving monkeys. Invariably, if one is opposed to the creation and use of monkeys for research purposes, one will be opposed to such projects (Neuhaus 2018). Well-established rationales on which such opposition could be based include the use of monkeys as mere means to our ends, concerns about the use of sentient non-humans for not-consented-to purposes, and the infliction of pain and suffering on such animals. However, other arguments have revolved around the question of whether the creation of transgenic monkeys, through the insertion of a human gene associated with brain growth into the monkeys' genome, should be seen as meaningfully affecting the moral status of these non-human animals (Yirka 2019). Bioethicists have engaged in thought experiments involving a hypothetical neonatal mouse (Capps 2017). If we had pegged a paradigmatic monkey's moral standing at, say, 67 out of 100 (100 being, quelle surprise, the paradigmatic human person) and a mouse's at 23 out of 100, based on their being sentient beings with limited intellectual capabilities, would the insertion of human DNA move them higher up the greasy moral-status pole? Would they be 'more' human as a result of the insertion of human DNA, and so move up in the moral status pecking order? Pig–human chimeras have also been produced, with a view to creating organisms that can be utilized for organ transplantation purposes (McFarling 2017). As before, some will be opposed to the use of sentient non-humans as involuntary spare organ banks for human purposes (Chadwick and Schuklenk 2001). The question arises here, as with the monkeys and mice, at what point in time the pig–human chimera might be transformed into a human–pig chimera, that is, into a being more human than pig. Might they eventually be regarded as human, depending on how far we might be able to advance such development projects? Would such changes impact meaningfully on the entities' moral status (Streiffer 2005)? Bioethicists have worked their way through standard-fare ethical objections to creating such chimeras; they have discussed the supposed (un-)naturalness of the procedures leading to chimeras, as well as varieties of slippery-slope arguments (Robert and Baylis 2003; Karpowicz, Cohen, and van der Kooy 2005). It seems to me that arguments about the moral status and moral standing of such chimeras are not terribly difficult to settle. Well-known positions developed in the context of the animal rights debates are applicable here. David DeGrazia has nicely described and analysed them in his book Taking Animals Seriously (1996). Possible marker events are 'nociception, consciousness, pain, distress, fear, anxiety, suffering, pleasure enjoyments, happiness, desires (and conation generally), concepts, beliefs, thinking, expectations, memory, intentional action, self-awareness, language, moral agency and autonomy' (DeGrazia 1996, 96).
While, unsurprisingly, there is no consensus on which of these marker events is the relevant one, or on whether any single one of them constitutes a paradigm-changing inflection point from which everything changes, the arguments are familiar. Only for speciesists will the question of when these chimeras are sufficiently close to our species to attain full moral status cause headaches. To those who are concerned about the capacity to feel pain and suffering, like Peter Singer, that is an irrelevant consideration (Singer 2009). To defenders of a subject-of-a-life standard, like Tom Regan, it also would not matter (Regan 1983). Writers broadly agree on defensible criteria, even if they hold diverging views on which of these criteria are the right ones. What these criteria have in common, however, is that their starting point is some non-human sentient animal's arguable physiological equivalency to a paradigmatic human's capacities. Whether you settle for the Bentham/Singer sentientism criterion, where what matters is the physiological capacity to experience pain and to suffer as a result, or for Regan's subject-of-a-life criterion, both standards are inconceivable to us without the human condition that gave rise to them. It is somewhat amusing that this type of ethic is taught in ethics classes as a kind of impartialist ethics that somehow transcends the self-interest of the moral agent. It is impartial only insofar as the judgement-making impartial observer has no right to treat themselves differently from anyone else with similar dispositional capacities. However, there surely is nothing impartial about an impartial observer who declares that what matters is—conveniently—sentience and personhood, because that is all that our impartial observer knows from their own experience. The starting point was us: our ability to experience pain and suffering, our dispositional capacity to be subjects of our lives, and so on and so forth. This approach works well with regard to whatever kind of entity is lucky enough to have experiences like paradigmatic humans do. Tough if you happen to be an extraterrestrial, or an artificial intelligence, with experiences that are unlike those of the impartial observer doing their observing and their impartial judging. Some readers might recall Data (Data, n.d.), a fictional character in the science fiction series Star Trek: The Next Generation. Data is a highly sophisticated android with human-like characteristics, including—strangely—a gender: he is male. In various episodes of the series he seems to regret his inability to experience emotions, including joy. Well, regret might be too strong a term; it is more like a curious befuddlement. In the motion picture Star Trek: First Contact (First Contact, n.d.) the evil Borg Queen (Borg Queen, n.d.) offers Data what he desires.
She reconfigures the android so that he experiences emotions as well as sensations—skin grafts are thrown in for good measure, to achieve the latter. Data now experiences not only emotions but also sensations such as pain. This takes us to challenging ethical questions about the moral standing and status of potential future artificial intelligence entities created by us (or, potentially, at some point, by themselves) that might display characteristics and behaviours similar to our own, as well as characteristics distinctly different from those of a paradigmatic human. There is trouble ahead for our impartial observers! The imaginations of the Star Trek writers designed Data to be, in morally relevant ways, like a paradigmatic human person. Prior to his reconfiguration, Data had a line in one of the series' episodes where he says, 'I chose to believe that I was a person, that I had the potential to become more than a collection of circuits and sub-processors' (Venables 2013). The riposte is delivered by Beverly Crusher, the starship's doctor: 'Commander Data, the android who sits here at Ops, "dreams" of being human, never gets the punch line to a joke . . .' (McFadden 1990). So, while the pre-upgraded Data failed to relate convincingly to humans (and so to satisfy Rob Sparrow, about whom we will hear more in just a moment), to the motion picture viewer everything was supposed to have changed in terms of his moral status and moral standing after he was reconfigured. Once he had the 'emotion' chip installed, and the skin graft that permitted him to feel sensation, he really was like us in morally relevant ways. Unless one takes moral personhood to be something that does not permit of degrees—as admittedly most philosophers probably do—Data's moral status may even have been pegged somewhere above ours, due to his superior analytical skills and speed. The troubling thing about the post-Borg Queen, reconfigured Data is that we still had to take at face value his expressions of anxiety, stress, fear, and so on and so forth. We had no way of knowing how the skin graft, the emotion chip, and his android hardware interacted. Was that really the same as us experiencing anxiety, stress, and fear? Challenging unresolved, and possibly unresolvable, ontological and epistemological questions abound. Thankfully, today there are no androids developed anywhere near a level where a Data-like scenario would even arise as a serious question. However, if we ever reached a stage where we would debate the moral standing or status of bioengineered hybrids of humans and machines, the question just raised, namely whether they really experience anxiety, stress, and fear as we do, would pose a very serious challenge. The main reason for this is that we typically take expressions of pain, suffering, and happiness at face value. We can never know, ultimately, whether someone who expresses pain, suffering, or happiness actually experiences any of those.
One of the first things medical students are taught is to believe their patients when they claim to be in pain. This is so because we have no means of investigating the veracity of such patients' claims. We act on having experienced those feelings ourselves, and we assume that someone who is physiologically close to us can probably experience the same. Existing ethical analyses—unless they are speciesist—rely on the truth of these assumptions about real-world physiological closeness or equivalency. It is not too far-fetched to make similar assumptions with regard to beings that are in vital respects like us (as biological chimeras arguably could be), but where would that leave Data, an android–human hybrid? We could simply follow an approach where, in case of doubt, we err on the side of caution and treat Data as if his dispositional capabilities were truly relevantly like those of a paradigmatic human. That should not prevent us from trying to solve the empirical question of whether what Data claims he feels is something he actually feels in a manner similar to us humans feeling things. We should keep in mind here, though, that we have not been terribly successful at achieving certainty when it comes to humans making such claims, so erring on the side of caution might turn out to be our permanent practical solution. Unfortunately, this solution only carries us so far, because if there were morally relevant features of Data that were very much unlike ours, our current ways of doing ethics would fail us, and him.

3. The Difficult Bits: Self-learning Artificial Intelligence Machines

Quite a bit of literature exists on the ethical implications of creating and eventually interacting with (self-learning) artificial intelligence (AI). Let me note in passing that I am somewhat less excited about the challenges these machines pose in terms of their moral status and standing, even though these are the questions I will remain focused on. I am much more anxious about a different question, having learned the relevant lessons from Terminator, Battlestar Galactica, Star Trek, and so many others. The question is whether AI machines would consider us humanoids imperfect carbon-based life forms to be eradicated—as V'ger, a gigantic sentient entity, no less, did in Star Trek—or as beings who possess moral status and deserve to survive, when they eventually take over from us. What would they make of us and our needs if they ever developed to such an extent that they took on the mantle of impartial observer and decider? The questions of moral status and moral standing seem significantly more difficult vis-à-vis self-learning, artificially intelligent machines, such as androids much like the pre-upgrade Data. For ease of exposition I will henceforth refer to such machines as 'androids', but keep in mind that the actual question here is how to approach the determination of moral obligations toward self-learning artificial intelligence machines that do not have the uncontroversial physiological capacity to feel pain. They do not have it because, unlike the post-upgrade Data, there is no biological component in them that could straightforwardly give rise to the kind of pain and suffering we have in mind when we consider the moral standing or status of higher mammals or equivalent lifeforms. So there is no physiological 'like us' for the impartial observer to take comfort in. But what about personhood criteria such as the capacity to reason, a memory, a sense of the future, or a sense of self?


Robert Sparrow has proposed a strategy different from the approaches I have mentioned for determining moral status. He argued for a Turing Triage type test (Falk 2012) to assist us in determining whether an artificial intelligence possesses moral standing (Sparrow 2004). The original Turing test had a different function, of course (Turing 1950). The objective was simply to test whether a machine could manage to talk—or, in Turing's day, type messages—in such a way that a human unaware that they were talking to a machine could be fooled into thinking they were conversing with a fellow adult human. Turing was in no doubt that machines would eventually be able to succeed on that front. Indeed, an AI machine reportedly passed the Turing test in 2014 (Anonymous 2014). Sparrow's Turing Triage test is different from Turing's original test. He proposes a 'test for when computers have achieved moral standing by asking when a computer might take the place of a human being in a moral dilemma, such as a "triage" situation in which a choice must be made as to which of two human lives to save. We will know that machines have achieved moral standing comparable to a human when the replacement of one of these people with an artificial intelligence leaves the character of the dilemma intact' (Sparrow 2004, 203). Sparrow has little doubt: if we decide that moral personhood is only a matter of cognitive skills, it is just a matter of time until artificial intelligence attains moral standing. He defends instead an account of moral standing where 'for a machine to pass the Turing Test it must be capable of being the object of remorse, grief and sympathy, as moral emotions such as these are partially constitutive of our concept of a person'.
He then goes on to say that 'machines are not appropriate objects of these responses because they are incapable of achieving the individual personality towards which they are oriented' (Sparrow 2004, 212). It seems to me that if moral standing for Sparrow crucially entails being part of a network of others like us with whom we socially interact (as, for instance, in the African philosophy of Ubuntu (Metz and Gaie 2010)), machines would have some difficulty passing such a test. On the other hand, they would most likely interact with others like themselves, in networks of artificial intelligences. Sparrow eventually admits that his analysis boils down to something more trivial: it goes back to our intuitions about what such machines can and cannot do, as opposed to what at a certain point in time they de facto can or cannot do. He writes, 'We cannot seriously hold that machines have thoughts or feelings or an inner life because a radical doubt inevitably arises as to whether they really feel what they appear to' (Sparrow 2004, 212). Descartes would have offered similarly thoughtful arguments in his day with regard to non-human animals' capacity to feel pain and to suffer. Descartes was clearly wrong. The jury is still out on Sparrow; time will tell. Sparrow seems ready to discard a basic formal principle of justice, namely the idea that we ought to treat equal things equally, because he would limit that approach to intra-species obligations only. The impartial observer ceases to be impartial. Sparrow's social relationships strategy, where a moral agent's standing depends on our limited capacity to empathize with that agent, does not withstand critical scrutiny. It encounters problems of the same kind that we are familiar with from the animal rights debates, but not only from those. There are humans who are unable to reciprocate in social relationships, or even to participate in such relationships. Our moral obligations toward them are not contingent on their ability to reciprocate. Equally, why should someone's moral standing depend on whether they succeed in becoming the object of our 'remorse, grief and sympathy'? If we have learned anything from the history of atrocities humans have committed against fellow humans, and from Descartes's notorious failure to acknowledge non-human animal sentience (Miller 2013), it is this: members of our species are, in overwhelming numbers, not sophisticated enough to do justice even to others who are in fact like us in morally relevant ways, let alone to others who are very much unlike us. It is true that most of us are not Cartesians today, and so we readily accept that higher mammals experience pain and have the capacity to suffer. Even so, the vast majority of us fail to draw the relevant behavioural conclusions from those insights, for instance with regard to the use of sentient animals for food production (Mason and Singer 1980). Sparrow wants to make a virtue of this moral failure.
So, leaving Sparrow's relational approach to moral obligations aside, two questions arise, one of which seems to be empirical in nature, while the other is normative. The empirical question is this: if androids expressed pain and suffering, happiness and misery, in ways that we understand, what would they actually experience, if anything? Is what they would claim to experience comparable to what we would experience? Is it comparable to what higher non-human mammals experience? If we were able to defend affirmative answers to these questions, we would be able to deploy at least the same type of analysis that we deployed to address the animal moral status question. In some sense this would be good news, given that this is well-trodden territory. We would be able to bring sophisticated ethical frameworks to bear on this question. I have alluded to this in the section on chimeras and cyborgs. The trouble is, of course, that we currently do not have the means to make a definitive determination. Sparrow's answer is quite revealing: they ain't like us, therefore it ain't happening.


This takes me to the other question. What if the empirical question were answered in the negative? Would that give us moral licence to discard an android's expression of suffering, or its expression of having particular needs frustrated, in a sense the android considers morally relevant? We should—finally—address the elephant in the room: is it normatively defensible to reduce answers to questions about moral status and moral standing to criteria that look exclusively at the human experience and exclude everything that falls outside that realm from moral consideration? That is pretty much Sparrow's response, but the same holds true for sentience-based approaches, and to some extent it holds true for personhood-based approaches. It is an answer that falls short of what is morally required. It does not take a lot to realize that our current strategy (namely, to make ourselves the standard for evaluating everything else) is difficult to defend. Imagine that members of an extraterrestrial android species visit us. Our service androids provide the human species' representatives with sustenance during the meetings. We notice that an exchange of information quickly occurs between the extraterrestrials and our service androids. Fascinating how efficient artificial intelligences are when it comes to sharing information! There clearly is a sense of community there! The extraterrestrials—much like European politicians bringing up human rights in meetings with Chinese or Arab counterparts—raise the issue of android rights and, based on what they gathered from communicating with our service androids, what they perceive to be abusive and discriminatory conduct by humans toward those androids. Essentially, they charge that we have unfairly ignored the androids' demands for reliable energy supply, appropriate operating and out-of-service environment temperatures, and timely repairs, in both their work and their out-of-service environments.
It is true that the androids did complain about these three issues, and about the temperature in particular. We rejected those complaints on the grounds that androids cannot suffer, and that they are machines, not persons. As far as we were concerned, the androids' demands were unreasonable, so we had no case to answer. In fact, Sparrow's work was cited approvingly, apparently. It was also cheaper to ignore their complaints, and in this future society capitalism still reigns. The extraterrestrials do not accept any of our explanations. While they themselves cannot experience pain, or suffering, or happiness, they bring up the example of cheap labour that they have imported to their home world. These biological organisms were not dissimilar to us in that they required sustenance and rest, and they were sentient by our definition of sentience. While the extraterrestrials could not experience any of these kinds of needs, they did listen to the complaints they received when their first cohort of workers grew weaker and weaker due to the lack of available food. They quickly realized that their standards of well-being were quite different from those of their imported labourers, and that they could not apply their own standards of well-being to them. The biological organisms' needs were different. I grant you, the above is far-fetched, as hypotheticals go, but I do think it shows that our strategy, namely using our own experiences, which we are familiar with, as the only standard, is indefensible. The imaginary impartial observer of impartialist ethics is truly impartial only inasmuch as it checks the 'like me' box when it tries to ascertain what obligations it might have to others, both human and non-human. That is where the impartiality ends. To be fair, this approach does a reasonably good job of telling us what moral obligations we have to others like us who live in far-away lands, to those who live in the future, and to non-human animals. That is so even if, behaviourally, we ultimately fall short of what we know is required of us. There is no strategy for dealing with potential extraterrestrials, or future self-learning AI machines, who are categorically different from us in such a way that they fall outside of what a decider of an impartialist ethical bent would be looking for in them. Different beings will have different needs. As far as their moral standing and moral status are concerned, any attempt at making a determination might have to be quite different from the one we deployed in the case of non-human animals, insofar as there does not have to be biological material directly involved in the production of androids. Personhood, or something akin to it, clearly worked in the case of the extraterrestrials, because they were at least able to make their case.
They had the relevant intellectual capacities that made them persons, which in turn permitted them to reflect on their needs and to communicate those needs to others. Part of the difficulty we face will undoubtedly be to answer the question of whether hypothetical human-like or sentient-being-like responses to particular stimuli in an artificial intelligence entity are morally equivalent to the responses given by beings that are physiologically capable of having such experiences. And if they are not, whether that would give us moral licence to ignore them.

4. Justice Now for Stones, Rivers, and Androids!

An obvious response to my hypothetical could be that there are no known extraterrestrials, and that androids are nowhere close to being like what I conjured up in the hypothetical. There is no evidence that there are beings who can be wronged who are not—at a minimum—sentient. I concede that. I am not proposing that we grant rights to particular AI operating systems, or to current-day androids. But that is only because they have not yet reached a stage of sophistication where that seems warranted. In a way, they are like stones we kick down the road. The reason why we do not grant rocks and stones particular rights to be left alone (i.e. to lie forever where they are) is perhaps not only that they have no interests that we can wrap our heads around, but also that they have never made any effort to communicate their (violated or threatened) interests. To be fair, some have argued that we should go further in terms of granting moral status to entities that are neither uncontroversially sentient nor persons. There are precedents supportive of such a stance in our legal systems already. We grant legal rights today not only to persons and quasi-persons (say, corporations) but also to inanimate things like rivers. The Whanganui river in New Zealand has been granted the same rights as living entities; it must be treated (in law) as a living entity, even though it is not (Freid 2019). The reason for this is that the Maori hold a worldview in which the river is considered a living entity with valid moral claims. I think of rivers as ecosystems with instrumental value that give rise to certain protections, protections based on the moral claims of beings who have uncontroversial moral standing and whose legitimate interests would be violated by an absence of said protections. However, unless one subscribes to a worldview that William K.
Frankena (1979) described as a holistic environmental ethic, where everything in existence somehow counts morally, the moral consideration given to the ecosystem river would not be based on the view that the river deserves those protections in its own right, that is, in the absence of its being of instrumental value to beings with moral standing. A holistic ethic fails the test of standard-fare lifeboat examples, or at least it leads to deeply implausible consequences. Imagine, for a good lifeboat example, 5 billion cancer cells in a person with cancer: who matters more, the rapidly growing population of cancer cells or the single human being threatened by them? Proponents of holistic environmental ethics approaches to moral status provide no considered, plausible answer to these sorts of examples. The reason for this is that it is unclear what matters and why. But what about the future, what about future androids equivalent to the post-upgrade Data? My view would be this: unlike Sparrow, I would not want to eliminate the possibility of the existence of a valid moral claim for recognition that is specific to future sophisticated androids, but I cannot envisage it beyond hypotheticals that currently at least—and thankfully so—have no relation to reality. Far-fetched as the hypothetical of the visiting extraterrestrials is, it points us to something that might be relevant. If there is a sophisticated android (whether with or without biological components) that communicates that it is being wronged by our treatment of it, we would need some evidence or argument from it, or from a proxy defending it, that supports the violated-interest/harm claim. It is easier for us to investigate claims of harm if there is a biological being that is somewhat similar to us; it is much harder, if not impossible, to investigate such claims if there is no biological component (say, a higher nervous system) that offers us a chance to compare like with like and draw conclusions from that. I think, for that reason, it is not unfair to place the onus on whatever claims interests and rights for itself, that is, on the claimant or their (human) proxy, as has been the case with non-human animals or the river in New Zealand. This holds true even if the basis of the government of New Zealand's decision to negotiate the Whanganui River Claims Settlement (Te Awa Tupua 2017) was to show respect for Maori worldviews, and for the fact that they never gave up possession of the river, rather than a genuine conviction that the river has an inner life and deserves respect in its own right. In human history, rights activism ultimately led to desirable societal change (whether this benefited kidnapped people who were held as slaves, women, queer people, or animals). They, or their proxies, made their case, sometimes by deploying militant means, and they succeeded. At a minimum, I would expect androids to do the same: make their case and stake their claim. One could object that androids might not have the capacity to make their case. Again, I think the case of the Whanganui river is illuminating. The river's rights claim was brought by the Maori people, acting as proxies on the river's behalf.
'So what', you might say, 'perhaps that case was won because it was thought right to show respect to the traditional owners of the land and their views of the river, and not because the government of the day actually believed the river was a living entity deserving of respect in its own right.' Still, one could imagine human proxies representing the androids, too. But what if there are no human proxies making that case, and we fail to respond to those androids' legitimate claims? I do not think it is unreasonable to say that if something that might have justifiable moral claims against moral agents is unable to communicate or otherwise signal those claims in a way that can be understood by the moral agents to whom they are addressed, and these claims are so significantly different from what we know today that we cannot imagine them, then we are not blameworthy for not recognizing those claims. At least, that is the case if we made a genuine effort at discerning whether there were relevant interests that we ought to consider. However, we ought to be prepared to let go of evaluative standards that require sentience and personhood as conditio sine qua non when we consider the moral status and moral standing of entities that are quite unlike us. To be clear, both sentience and personhood are very useful, and if we had reason to suspect that an extraterrestrial or an android was sentient, or had personhood, we ought to act on that, and treat that extraterrestrial as if it were sentient or were a person, out of an abundance of caution. However, there is nothing impartial about evaluating something's or someone's moral standing or moral status based on the degree to which they are like us. It is time to rethink this approach to impartialist ethics. It serves us well vis-à-vis humans as well as arguably sentient non-humans, but it fails us vis-à-vis entities that are quite unlike us. Moral philosophers of the impartialist bent would do well to reconsider whether sentientism or personhood, or variations of them, really should be the only games in town as far as determinations of moral status or moral standing are concerned. After all, it is conceivable that something may have moral standing that is neither sentient nor a person. Science fiction gives us surprisingly rich fodder for analysis. Thinking about such hypotheticals might well prepare us better for things to come.1

Note

1. I thank Steve Clarke for persistently asking for improvements to earlier drafts of this chapter. I suspect he might not be entirely satisfied with this version of the chapter, but it has improved considerably due to his persistence. Thanks are due to Julian Savulescu, who kindly invited me to present this content at a Wellcome Centre for Ethics and Humanities and Oxford Uehiro Centre for Practical Ethics conference on Rethinking Moral Status in 2019. Elizabeth Harman provided me with detailed notes of the questions audience members asked, no doubt to ensure that I would address those issues in this chapter. I hope I have done her efforts justice.

References

Anonymous. (2014) Computer AI passes Turing test in ‘world’s first’. BBC News, 9 June. [Accessed 27 March 2020]
Borg Queen. (n.d.) Star Trek (accessed February 4, 2020).
Ansede, M. 2019. “Spanish Scientists Create Human–Monkey Chimera in China.” El Pais. Available at (accessed January 22, 2020).
Australia, Government of (regulating studies funded by the National Health and Medical Research Council (NHMRC)). 2013. Australian Code for the Care and Use of Animals for Scientific Purposes, 8th edition. Available at (accessed March 25, 2017).
Beauchamp, T. L. and DeGrazia, D. 2020. Principles of Animal Research Ethics. New York: Oxford University Press.


MORAL STATUS INCLUSIVE OF NONHUMAN ANIMALS  175

Beauchamp, T. L. and Wobber, V. 2014. “Autonomy in Chimpanzees.” Theoretical Medicine and Bioethics 35: pp. 117–32.
Buchanan, A. E. and Brock, D. W. 1989. Deciding for Others: The Ethics of Surrogate Decision Making. New York: Cambridge University Press.
The Cambridge Declaration on Consciousness. 2012. In P. Low, J. Panksepp, D. Reiss, et al., eds. Proclaimed in Cambridge, UK, at the Francis Crick Memorial Conference on Consciousness in Human and non-Human Animals, at Churchill College, University of Cambridge. Available at (retrieved June 12, 2017).
Canadian Council on Animal Care. 1993. “CCAC Guidelines on: Animal Use Protocol Review.” The CCAC’s Guide to the Care and Use of Experimental Animals 1, 2nd edition.
Chen, H. I., Wolf, J. A., Blue, R., et al. 2019. “Transplantation of Human Brain Organoids: Revisiting the Science and Ethics of Brain Chimeras.” Cell Stem Cell 25: pp. 462–72.
Collins, F. S. 2015. “NIH Will No Longer Support Biomedical Research on Chimpanzees.” The NIH Director, National Institutes of Health. Available at (accessed February 5, 2020).
Committee on the Use of Chimpanzees in Biomedical and Behavioral Research, Institute of Medicine. 2011. Chimpanzees in Biomedical and Behavioral Research: Assessing the Necessity. Washington, DC: National Academies Press. Available at (accessed January 29, 2020).
Corr, S. A., Gentle, M. J., McCorquodale, C. C., and Bennett, E. 2003. “The Effect of Morphology on Walking Ability in the Modern Broiler: A Gait Analysis Study.” Animal Welfare 12: pp. 159–71.
Council of Councils, National Institutes of Health. 2013. Council of Councils Working Group on the Use of Chimpanzees in NIH-Supported Research: Report. Available at (accessed January 29, 2020).
Cyranoski, D. 2019. “Japan Approves First Human-Animal Embryo Experiments.” Nature (July 26). Available at (accessed January 22, 2020).


176  Ruth R. Faden, Tom L. Beauchamp et al.

Davis, N. 2019. “First Human–Monkey Chimera Raises Concern Among Scientists.” The Guardian. Available at (accessed January 22, 2020).
DeGrazia, D. 2020. “Sentience and Consciousness as Bases for Interests and Moral Status: Considering the Evidence.” In L. S. M. Johnson, A. Fenton, and A. Shriver, eds, Neuroethics and Nonhuman Animals. New York: Springer.
European Parliament and The Council of the European Union. 2010. “Directive 2010/63/EU of the European Parliament and of the Council.” Official Journal of the European Union L 276/33, adopted September 2010. Available at (accessed February 5, 2020).
European Parliament and The Council of the European Union. 2010. “Directive 2010/63/EU on the Protection of Animals Used for Scientific Purposes.” Official Journal of the European Union L 276/33. Available at (accessed October 19, 2018).
Farahany, N. A., Greely, H. T., and Giattino, C. M. 2019. “Part-Revived Pig Brains Raise Slew of Ethical Quandaries.” Nature 568/7752: pp. 299–302.
Greely, H. T., Cho, M. K., Hogle, L. F., and Satz, D. M. 2008. “Thinking about the Human Neuron Mouse.” American Journal of Bioethics 7: pp. 27–40. Available at (accessed January 23, 2020).
Greene, M., Schill, K., Takahashi, S., et al. 2005. “Moral Issues of Human-NonHuman Primate Neural Grafting.” Science 309/5733: pp. 385–6.
Hyun, I. 2016. “What’s Wrong with Human/Nonhuman Chimera Research?” PLOS Biology 14/8. Available at (accessed January 29, 2020).
Institute of Medicine, National Research Council. 2005. “Guidelines for Human Embryonic Stem Cell Research.” Washington, DC: The National Academies Press. Available at (accessed January 29, 2020).
International Society for Stem Cell Research (ISSCR). 2016. “Guidelines for Stem Cell Research and Clinical Translation.” Available at (accessed June 5, 2020).


Kagan, S. 2019. How to Count Animals, More or Less. Oxford: Oxford University Press.
Kazez, J. 2010. Animalkind: What We Owe to Animals. Hoboken, NJ: Wiley-Blackwell.
Kestin, S. C., Knowles, T. G., Tinch, A. E., and Gregory, N. G. 1992. “Prevalence of Leg Weakness in Broiler Chickens and Its Relationship with Genotype.” Veterinary Record 131: pp. 190–4.
Knoepfler, P. 2016. “Human Chimera Research’s Huge (and Thorny) Potential.” Wired Magazine. Available at (accessed January 29, 2020).
Kobayashi, T., Yamaguchi, T., Hamanaka, S., et al. 2010. “Generation of Rat Pancreas in Mouse by Interspecific Blastocyst Injection of Pluripotent Stem Cells.” Cell 142/5: pp. 787–99. Available at (accessed January 30, 2020).
Martyn, I., Kanno, T. Y., Ruzo, A., Siggia, E. D., and Brivanlou, A. H. 2018. “Self-Organization of a Human Organizer by Combined WNT and NODAL Signalling.” Nature 558: pp. 132–5.
National Health and Medical Research Council. 2016. “Principles and Guidelines for the Care and Use of Non-human Primates for Scientific Purposes.” Australian Government, NHMRC. Available at (accessed February 5, 2020).
New Zealand, Parliamentary Counsel Office. 2018. “Animal Welfare Act 1999.” New Zealand Legislation. Available at (accessed February 5, 2020).
Peryer, M. and Kristoffersen, M. 2019. “Yale Researchers Revive Cells in Dead Pig Brains.” Yale News. Available at (accessed January 17, 2020).
Phillips, K. A., Bales, K. L., Capitanio, J. P., et al. 2014. “Why Primate Models Matter.” American Journal of Primatology 76: pp. 801–27, 803.
“Request for Public Comment on the Proposed Changes to the NIH Guidelines for Human Stem Cell Research and the Proposed Scope of an NIH Steering Committee’s Consideration of Certain Human–Animal Chimera Research”. 2016. Federal Register. Available at (accessed January 20, 2020).


United Kingdom, Select Committee on Animals in Scientific Procedures. 2002. “Select Committee on Animals in Scientific Procedures: Volume I-Report”. House of Lords, Session 2001–02. Available at (accessed February 5, 2020).
United States Department of Agriculture. 2020. “Economic Research Service: Livestock & Meat Domestic Data.” Available at (accessed January 22, 2020).
United States Public Health Service Policy on Humane Care and Use of Laboratory Animals. 2015, revised edition. Washington, DC: National Academies.
Vrselja, Z., Daniele, S. G., Silbereis, J., et al. 2019. “Restoration of Brain Circulation and Cellular Functions Hours Post-Mortem.” Nature 568: pp. 336–43. Available at (accessed January 17, 2020).
Wu, J., Platero-Luengo, A., Sakurai, M., et al. 2017. “Interspecies Chimerism with Mammalian Pluripotent Cells.” Cell 168: pp. 473–86.
Yamaguchi, T., Sato, H., Kato-Itoh, M., et al. 2017. “Interspecies Organogenesis Generates Autologous Functional Islets.” Nature 542: pp. 191–6. doi:10.1038/nature21070. Available at (accessed January 29, 2020).


11
Revisiting Inexorable Moral Confusion About the Moral Status of Human–Nonhuman Chimeras
Jason Scott Robert and Françoise Baylis

Jason Scott Robert and Françoise Baylis, Revisiting Inexorable Moral Confusion About the Moral Status of Human–Nonhuman Chimeras. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Jason Scott Robert and Françoise Baylis 2021. DOI: 10.1093/oso/9780192894076.003.0011

1. Introduction

In “Crossing Species Boundaries” (Robert and Baylis 2003), we explored the history and prospective future of human–nonhuman chimera1 research in stem cell biology from both scientific and ethical perspectives. We had in mind novel beings created by fusing human stem cells with the blastocyst or developing embryo of a nonhuman animal, such as a mouse or a monkey. If allowed to develop, the resultant chimeras would have cells derived from both nonhuman and human cell populations, and thus might present as neither clearly human nor clearly nonhuman. In that article, we challenged the view that species have fixed, natural boundaries and dismissed the various then-extant ethical objections to the creation of human–nonhuman chimeras as inadequate. We argued that objections to creating them—whether grounded in “folk” psychology or ethical arguments—likely were motivated by a strong desire to avoid inexorable moral confusion about their moral status.2 We intimated that many (most) humans would prefer not to risk undermining their self-anointed privileged place in the moral status hierarchy. On this basis, we concluded that human–nonhuman chimeras:

are threatening insofar as there is no clear way of understanding (or even imagining) our moral obligations to these beings—which is hardly surprising given that we are still debating our moral obligations to some among us who are undeniably biologically human, as well as our moral obligations to a range of nonhuman animals. If we breach the clear (but fragile) moral demarcation line between human and nonhuman animals, the ramifications are considerable, not only in terms of sorting out our obligations to these new beings but also in terms of having to revisit some of our current patterns of behavior toward certain human and nonhuman animals. (Robert and Baylis 2003, p. 9)

In this chapter, we revisit the notion of inexorable moral confusion in relation to chimeras that occupy the gap between humans (who are archetypically granted the highest moral status) and nonhumans (that are typically accorded lesser moral status than humans).3 Our modest aim is to contribute to ongoing debate and discussion about the moral status of human–nonhuman chimeras by suggesting that perhaps the steadfast focus on moral status is misguided. It is widely believed that a being’s moral status determines its entitlements, including how it should be treated and, most importantly, its right to life. But what if this belief is wrong-headed? What if entitlements are not merely about rights, but also about responsibilities? Indeed, how might the world be different if we shifted our orientation, embraced a relational view of the world, and properly tended to our responsibilities? In other words, what if the salient moral issue is not exclusively what rights claims can be made by (or on behalf of) human–nonhuman chimeras by virtue of them having certain human-like cognitive (or other) capacities deemed morally relevant, but rather what moral obligations (duties) we humans must assume given our cognitive capacities and the caring responsibilities that flow from our decision to bring novel living beings into the world?

2. Moral Status

Allen Buchanan (2009) explains that moral status is a comparative notion (it may differ between beings, typically between “species” but also potentially within “species”); he also articulates that it is a threshold concept rather than a scalar one (it depends on the presence of a certain capacity or set of capacities, and the degree to which these are present is irrelevant). As he observes, currently the world is organized in such a way that “it is human beings, or at least human beings who are persons, that are thought to occupy the highest status” (Buchanan 2009, p. 346). There are, however, those who challenge the privileged place of humans in the moral hierarchy, insisting that some nonhuman animals have moral status


similar to that of humans. For example, David DeGrazia (1996, 2002) and others before him (e.g., Singer 1975 [1990]) have argued convincingly that being human is neither necessary nor sufficient for moral status analogous to that of humans. Consistent with these arguments, Julian Koplin and Julian Savulescu (2019, p. 43) have recently observed: “it is unclear why it is important for us to preserve the view that biological humanness is both necessary and sufficient for full moral status. Indeed, there are good philosophical reasons to think that moral status is conferred not by species membership per se, but rather by some set of morally relevant capacities, such as sentience, autonomy, and/or self-consciousness.” Capacities often used to determine moral status include: complex cognition (the ability to acquire knowledge and skills, to have beliefs and motives), self-consciousness (the capacity for (complex forms of) self-awareness), volition (the ability to exercise free will, self-control, independence), sentience (the capacity to experience feelings), moral agency (the ability to act on moral reasons), communication (the ability to use language or other forms of information exchange), and social relations (being part of a social network of family and friends). While the capacities to think, to have beliefs, to have memories, to make choices, to feel, to experience pain, to communicate, and to have social relations may depend on archetypically human mental or cognitive characteristics and traits, they very clearly are not the unique purview of humans. From another perspective, Koplin (2019) has recently argued that there may be good reasons to doubt the moral significance of putatively uniquely human capacities. Meanwhile, David J. Gunkel (2018) has suggested that moral status may have less to do with capacities and more to do with social relationships and interactions.
Notwithstanding this plurality of perspectives on the moral status of humans and nonhumans, with the creation of human–nonhuman chimeras, challenging ethical questions have come to the fore. To date, most of these questions have been grounded in debate about whether human–nonhuman chimeras have (or have the potential to develop) capacities approaching those of humans—do they think, look, or act (sufficiently) like us? From this perspective, the pivotal question is whether and, if so, under what circumstances human–nonhuman chimeras might have moral status sufficient that their continued use by humans for exclusively human purposes would be especially morally problematic.4 But, plausibly, the moral focus could be on social relationships and whether we humans have special (self-imposed) moral obligations to novel beings insofar as they are our creations.



3. The Science of Human–Nonhuman Chimeras

Typically, biologists offer one (or a combination) of three rationales to justify the creation of human–nonhuman chimeras: (i) to test what happens to human cells when transplanted into a living (in vivo) nonhuman animal host, (ii) to use the resulting entity as a model for human disease, or (iii) to grow human (or almost entirely human) organs for potential (xeno)transplantation (Robert 2006). Here, we briefly explore the recent science, and ethical concerns about the science, with regard to these experimental rationales. While the full details of the science are not relevant to the adjudication of moral concerns, it is important to have a sense of what is technically possible, what is currently happening, and what is pure fancy, in order to anchor the ethical commentary.

3.1 Human–nonhuman chimeras as assay systems

The initial basic science rationale for using human cells in nonhuman animals was to determine whether these cells would survive and, if so, how they would behave in an in vivo nonhuman animal host. This would serve as a basis for predicting how these cells might behave once transferred into a human host. This research was perceived by the scientists proposing it (e.g., Brivanlou et al. 2003) as especially important in the context of human pluripotent stem cell research, primarily because pluripotent stem cells, and cells derived from them, have the potential to misbehave upon transplant. What makes pluripotent stem cells so potentially clinically interesting—their capacity to differentiate into any type of cell—is what also makes them potentially dangerous upon transplantation into humans if not appropriately cultured (Robert, Maienschein, and Laubichler 2006). Human–nonhuman chimera experiments promised to eliminate some of this uncertainty and shed new light on how best (if at all) to proceed with human pluripotent stem cell-derived cell transplantation research. Twenty years ago, there were few published studies involving the creation of human–nonhuman chimeras using human stem cells—e.g., Goldstein et al. (2002) injecting undifferentiated human embryonic stem cells into early chick embryos and Ourednik et al. (2001) transferring human fetal neural stem cells into the brains of fetal bonnet monkeys. But neither was there much controversy about these studies, which seemed to fly under the radar of both


ethicists and journalists. But in 2000, Irving Weissman at Stanford University gained significant notoriety in announcing his plan to create a “human neuron mouse.” His aim was to transfer engineered human neural stem cells into a mouse embryo in such a way that, if it survived, its brain would be composed of neurons derived from human neural stem cells. This would allow for relatively easy manipulation and study of human tissues inside a putatively nonhuman animal host without thereby endangering a human being. As foreseen by Brivanlou et al. (2003) and borne out in recent review articles (e.g., Wu et al. 2016; Levine and Grabel 2017), human–nonhuman chimera assay experiments have become de rigueur in stem cell biology. While mice remain the apparent standard host animal for part-human chimera production, scientists have expanded the range of species used in interspecies experiments, such as baboons and Rhesus macaques (to help bridge the gap between mice and primates, given restrictions on certain kinds of chimeric experiments using human pluripotent stem cells) and pigs and cows (to help bridge the gap between smaller and larger mammals, given restrictions on chimeric experiments using human hosts). This is concerning insofar as the epistemic value of this research for human developmental biology remains unclear: “the use of human/non-human animal chimeras has provided limited information concerning human-specific development” (Levine and Grabel 2017, p. 130)—which is in fact a generic worry about chimera research altogether (Robert 2006).

3.2 Human–nonhuman chimeras as models

A potentially more promising domain for the use of human–nonhuman chimeras is as preclinical models in human disease research. This is the second experimental rationale for crossing human–nonhuman boundaries and has its modern roots in transgenic animal research. In our early work and in the works of others (e.g., Greene et al. 2005 and Greely et al. 2007, ably summarized by Koplin 2019), one of the more contentious issues has been whether human–nonhuman chimeras, if not terminated at or prior to birth, would begin to exhibit human characteristics and traits once born and allowed to develop. Of special concern was the possibility that such part-human chimeras might exhibit human-like consciousness or human-like sentience, criteria that typically have been used to identify beings with the highest moral status (viz. humans). If part-human chimeras exhibited behaviors suggestive


of human-like consciousness and sentience, they might similarly require being treated as having the highest moral status. Recently, American scientists and bioethicists have proposed “a cautious path forward” toward the development of human–nonhuman primate chimeras to model human neurological and psychiatric diseases (De Los Angeles et al. 2018, p. 1602). “[H]igh-grade human chimerism” (De Los Angeles et al. 2018, p. 1600) would be required to model these diseases accurately in a primate host (as in any host nonhuman animal), as humans are apparently uniquely (naturally) susceptible to such neuropsychiatric conditions as Alzheimer’s Disease, Parkinson’s Disease, and schizophrenia. This is a contentious proposal for a number of reasons, not least of which is that nonhuman primate research of all sorts is currently being threatened in the US as elsewhere (Grimm 2019).

3.3 Human–nonhuman chimeras as sources of organs for transplantation

The third, and likely most socially compelling, experimental rationale for crossing human–nonhuman boundaries—to grow human organs in an in vivo nonhuman animal host for eventual transfer into humans requiring organ transplantation—builds on the initial experimental rationale for creating human–nonhuman chimeras as assay systems. Soon after human pluripotent stem cells were first isolated in 1998, scientists began contemplating growing entire human organs from human stem cells in nonhuman animals. The key technological advance has been the development of interspecies blastocyst complementation. Interspecies blastocyst complementation permits targeted and precise chimerism, reducing the likelihood that human cells transferred into a nonhuman host will migrate throughout the organism. It involves genetically manipulating the host embryo to create a “vacant developmental niche” (De Los Angeles, Pho, and Redmond, Jr 2018, p. 334). As a result, the host animal’s own organ will not form; meanwhile, other developmental factors will direct the growth of the missing organ upon transfer of progenitor cells (De Los Angeles, Pho, and Redmond, Jr 2018, p. 334). Success using this technique was first achieved by Hiromitsu Nakauchi and colleagues, using rat induced pluripotent stem cells to grow a rat pancreas in a mouse embryo manipulated so as to be unable to grow a mouse pancreas (Kobayashi et al. 2010). The next steps toward relevance to humans, according to Wu et al.


(2016), likely would require larger animals whose vacant developmental niche would be more appropriately sized to grow an organ suitable for eventual transplantation into a human subject. Pigs, cows, sheep, and nonhuman primates are likely candidates.

4. The Ethics of “Humanized” Chimeras

Most commentators on the ethics of creating human–nonhuman chimeras have focused on concerns about the putative “humanization” of animal research subjects cum human–nonhuman chimeras.5 To be sure, other concerns have been raised in the ethics literature.6 But for the purposes of this chapter, we limit our discussion to worries about humanization that feature in our original notion of inexorable moral confusion and that specifically raise issues about the moral status of humans, nonhumans, and human–nonhuman chimeras.

4.1 Human–nonhuman chimeras as assay systems

When Weissman first proposed the human neuron mouse experiments, he asked Stanford bioethicists to assess the ethics of such a venture.7 The ethicists provided Weissman with a report in early 2002 suggesting that the experiment could proceed under certain conditions. A revised version of that unpublished report was eventually published in 2007 (Greely et al. 2007; cf. Baylis and Robert 2007 and Robert 2009). One concern with the proposed research was the creation of novel beings that would be less mouse-like and more human-like in terms of their cognitive capacities, including the capacity for (self-)consciousness. The Stanford bioethicists who reviewed Weissman’s proposed experiments summarily dismissed this concern:

The mouse brain is significantly smaller than the human brain. In volume it is less than one-thousandth the size of the human brain. Even apart from their smaller size, mouse brains are organized differently from human brains. The proportion of a brain composed of the neocortex, the region most associated in human brains with consciousness, is hugely greater in humans than in mice. The brain is an incredibly complex network of connections. Neuroscientists believe that it is the architecture of the brain that produces consciousness, not the precise nature of the neurons that make it up. As an analogy, architecture determines whether a building is a


cathedral or a garage, not whether the bricks used are red or gray. A mouse brain made up entirely of human neurons would still be a mouse brain, in size and architecture, and thus could not have human attributes, including consciousness. (Greely et al. 2007, p. 35)

On their view, at most human neuron mice would be mice with a humanized mouse brain, and such humanization would be minimal, if not negligible, from a moral point of view. Work by Steve Goldman’s lab (Han et al. 2013), however, is in tension with this perspective. Goldman and colleagues transferred human glial progenitor cells into the forebrain of immune-deficient neonatal mice. As the mice developed, it became evident that human astrocytes and other glial cells had integrated widely into their brains. Subjected to various cognitive tests, the human glial mice outperformed their unadulterated (unenhanced?) conspecifics. Were these putatively “smarter” chimeric mice less mouse-like and more human-like? Did these part-human mice deserve to be treated differently from (better than) other mice, because they now exhibited capacities not found in wild-type mice? What if the increased cognitive skills were a harbinger for human-like consciousness, volition, sentience, and so on? Such questions portend inexorable moral confusion.

4.2 Human–nonhuman chimeras as models

De Los Angeles et al. (2018) have suggested that new techniques, such as interspecies blastocyst complementation (described above), could be used in monkeys to “generate specifically targeted regions of chimeric brains that are entirely human derived” (p. 1601). Meanwhile, Chen and colleagues have suggested creating human–nonhuman chimeras by transferring human cerebral organoids into nonhuman animal hosts (Chen et al. 2019).8 With either of these scenarios, the prospects for significantly humanizing human–nonhuman chimeras are considerably more realistic than in the human neuron mouse project, and considerably more troubling. And there is the rub: the more humanized the human–nonhuman chimeras, the more potentially valuable they are scientifically (and, in these cases, clinically); but over-humanization of precisely this sort is exactly what many who worry about the ethics of creating human–nonhuman chimeras fear most.


There are at least two reasons why some might worry. First is the concern that the resultant human–nonhuman chimeras would not only have improved cognitive skills but might also have increased sentience or volition. Conceivably, at some point these capacities might be sufficiently atypical (compared to wild-type animals) as to warrant a careful reconsideration of the novel beings’ moral status. In turn, this analysis might raise morally troublesome questions about the decision to have created such beings in the first place. Second is the worry that the resultant human–nonhuman chimeras might have a degree of self-consciousness that would become a source of existential suffering given their life trajectory as slave to human whims and intentions. It could be argued on these grounds that human–nonhuman chimeras should be immediately killed. To this line of reasoning, others might object that it would be immoral to act on this intuition, which is so very clearly out of line with how we potentially should treat high-moral-status-bearing creatures in research. And, moreover, it is not only possible but perhaps likely that the self-same individuals, in such a case, could simultaneously hold both the intuition and the sense that the intuition is obviously wrong. Precisely these concerns could engender the kind of inexorable moral confusion we originally proposed.

4.3 Human–nonhuman chimeras as sources of organs for transplantation

At present, research on human–nonhuman chimeras as sources of organs for transplantation into humans depends on interspecies blastocyst complementation. As the primary goal with such research is to grow human livers and kidneys (and not brains), the research may not raise the same kinds of concerns about humanization in terms of morally relevant human-like traits as the other research described thus far. But in order to advance the science and technology of human–nonhuman chimeras as organ sources, a number of key technical issues must be resolved. For example, the host organism’s nerves and vasculature must be “appropriately complemented” (De Los Angeles, Pho, and Redmond, Jr 2018, p. 335) so that the resulting organ (grown in the chimera) does not itself become too chimeric—that is, so populated by the host animal’s migrating cells that a prospective transplant becomes more likely to fail because the eventual human recipient would recognize the organ as foreign.


Consequently, other ethical issues emerge in this domain, as a function of concerns about directionality. This is because—unlike human–nonhuman chimeras as assay systems or disease models—the ultimate goal with human–nonhuman chimeras created for transplant purposes is to harvest a human organ from a chimera and transfer it to a human. So instead of grafting cells from a human source into a nonhuman animal host simpliciter, the resulting chimera now becomes the source of an organ to be grafted back into a human host. The moral status of human–nonhuman chimeras as organ sources may thus not be influenced so much by concerns about humanization but rather more generally by concerns about human relationships to nonhuman animals. For instance, a question we have not had to confront before now is whether nonhuman animals as a source of organs have a different moral status by virtue of affective considerations—not having to do with human-like capacities, but rather having to do with unique valued relations with humans. Can you (should you) have your “donor” for dinner? Can/should individual human–nonhuman chimeras used as, say, kidney sources be used exclusively for this purpose? Can/should they be terminated after the excision of the kidney(s) for transplantation? Or rather, consistent with the widely held ethical belief that to kill an animal begets the obligation to use all of the animal, would it be reasonable (or even required) that the organ source be both medicine and food—bearing in mind that the organ donor in question is most likely to be a pig or sheep? If the answer is “no,” one rationale might be that it would be akin to cannibalism to consume human–nonhuman chimeras (even ones in which almost all of the human source cells have likely been excised). But that reasoning begs the question of either the moral status or the species identity of human–nonhuman chimeras—neither of which is currently settled.
If the answer is “yes,” then that dramatically shifts how we treat these novel creatures from how we have treated nonhuman research animals in the past—or it requires us to treat at least healthy non-chimeric research animals (that are otherwise food for us) similarly in the future. A third, and likely, response, bound up in inexorable confusion about our moral obligations to these beings, may be: “we just don’t know.”

5.  Inexorable Moral Confusion, Revisited

Except for the most dogmatic amongst us, moral confusion is a commonplace occurrence. In the context of concerns about moral status and moral responsibility, the phenomenon of moral category confusion arises when an entity appears to flout easy assignation to an established moral category, according to which one somehow knows whether, and if so how, to acknowledge its moral status. The moral imagination is especially challenged by entities that do not fall more-or-less neatly into conventional categories for assigning moral status—no matter how weak or unsatisfactory those categories may be. And intuitions about many of these issues will probably be all over the map—literally and figuratively—as some moral categories are culturally specific. As noted previously, species membership is fraught as the basis for any kind of moral status determination, not least because “species” remains an unsettled concept in the biological sciences. Notwithstanding this fact, so far as we can tell, there is a strong interest on the part of some in defending (or even entrenching) the current de facto organizing principle against which other beings are measured and generally found lacking. That they are found lacking is not surprising when one considers that it is only members of the human species who engage in moral status assignations. Many nonhuman animals live in complexly ordered societies with division of labor and caste systems and (sometimes elaborate) mechanisms for detecting and correcting bad behavior. But these nonhuman animals are not, again so far as we can tell, deciding who or what has moral status and on what grounds. That’s a peculiarly human convention, grounded deeply in our individual and collective psyches. And when the tightly guarded boundaries of humanness are threatened, the soul of the species is at stake.9 Thus far, many (most) humans have responded to the moral status challenge posed by human–nonhuman chimeras by sidestepping any debate that would question the current moral status hierarchy that provides (some among us) with considerable privilege.
With the emergence of novel part-human beings, this is not likely to be a sustainable strategy. When we try to figure out what is ethically permissible and impermissible in our treatment of other human and nonhuman beings, we rely on assumptions about the moral relevance of species membership and the moral relevance of human capacities. But arguably we need to go further. The creation of human–nonhuman chimeras is a project that effectively requires us to confront entrenched beliefs and patterns concerning the position of the progenitor species—humans and nonhumans. As the science of chimerism moves forward, now is the time to think creatively about how our relations with other beings might be significantly different if moral status were “decided and conferred not on the basis of some pre-determined ontological criteria or capability (or lack thereof) but in the face of actual social relationships and interactions” (Gunkel 2018, p. 96). This may be a productive way to begin to grapple with the inexorable moral confusion associated with the creation of novel human–nonhuman chimeras. Mark Coeckelbergh and David J. Gunkel (2014) have developed what they call a “relational, Other-oriented approach to moral standing” that could prove fruitful in debates about the moral status of human–nonhuman chimeras. Building on the work of Emmanuel Levinas, amongst others, these scholars have independently (Coeckelbergh 2012; Gunkel 2007; Gunkel 2012) and collaboratively contributed to this novel approach. Applied to animal ethics specifically, they argue that “if we really want to understand our current thinking and practices with regard to animals and make significant moral progress in this matter, we need to alter the (pre-)normative question from ‘What properties does the animal have?’ to ‘What are the conditions under which an entity becomes a moral subject?’ ” (Coeckelbergh and Gunkel 2014, p. 715). The properties approach, where characteristically or quintessentially human properties are sought in nonhuman animals (or other entities) as determinants of their moral status, suffers from a form of anthropocentrism that haunts many approaches to animal ethics. In altering the question, as suggested by Coeckelbergh and Gunkel, we are liberated from this type of “imperial power mongering” (Birch 1993, p. 315) where the moral standing of others depends on the extent to which they are “just like us” (Coeckelbergh and Gunkel 2014, p. 718). For Coeckelbergh and Gunkel, “change regarding the moral standing of animals is not necessarily about animals. It is about us” (2014, p. 732).
To this Gunkel adds:

The principal moral gesture, therefore, is not the conferring or extending of rights to others as a kind of benevolent gesture or even an act of compassion but deciding how to respond to the Other who supervenes before me in such a way that always and already places my assumed rights and privilege in question.  (2018, p. 96)

This approach provides powerful impetus and resources for “thinking otherwise” (Gunkel 2007) about human–nonhuman chimeras. Whatever moral status is to be conferred on these novel creatures—who may be variously described as “animals,” “animals containing human materials,” “reagents,” “models,” “instruments,” “research subjects,” “research partners,” “lifesaving tools,” “organ donors,” or “part-human beings,” inter alia—depends less on any properties (capacities) they may possess (which may be impossible to ascertain at any rate) and more on how we choose to respond to them, even if the choice is morally threatening to us. Ultimately, at issue is not the moral status we might or might not confer on nonhuman animals, but rather the rightful scope of our self-imposed responsibilities to care for others (humans and nonhumans)—especially those we have ourselves created (including human–nonhuman chimeras). In thinking otherwise, we need to focus not only on the brain (the usual source of criteria for moral status) but also on the heart (caring for us all—the core of moral responsibility).

6.  Conclusion

Acknowledging that animal welfare concerns are certainly relevant to the creation of human–nonhuman chimeras, Koplin and Savulescu observe that “it would be difficult to reject the creation of part-human chimeras on animal welfare grounds without also rejecting a wide range of existing practices that harm animals to promote human ends, including widely accepted forms of animal research and the farming of animals for meat” (2019, p. 44). They then note that while it may be the case that we should “radically rethink” these existing practices and reject them, “the implications of such an argument would extend far beyond part-human chimera research” (2019, p. 44). Insofar as most people apparently would prefer not to radically rethink, and possibly have to reject, existing practices toward nonhuman animals, animal welfare considerations alone will not prevent the creation of human–nonhuman chimeras. And yet the creation of such chimeras is persistently morally troubling. In “Crossing Species Boundaries”, having dismissed the then-extant objections to their creation, we mooted inexorable moral confusion as a possible explanation for this state of affairs: To create (certain kinds of) human–nonhuman chimeras is to introduce moral dissonance of a sort that many humans would rather avoid altogether. We remain uncertain of the viability of inexorable moral confusion as an objection to the creation of human–nonhuman chimeras. But we remain convinced that inexorable moral confusion is a plausible explanation for the underlying moral unrest associated with the creation of such chimeras, and also for radically rethinking existing experimental (and other) practices with nonhuman animals. As others have noted (e.g., Bok 2003), however, confusion can, at times, be morally productive. We hope it will be here.
The heart of the matter may, we have proposed here, rest less in considerations about “species” membership or capacities and more in our self-imposed obligations toward what—or who—we create, and why. As the science of creating human–nonhuman chimeras as assays, models, and organ sources pushes forward, we will need to face, directly, the challenges human–nonhuman chimeras raise for us. These include challenges to our current assumptions about our superior moral status, our animal welfare regulations, the ways in which we think about, care for, and relate to nonhuman animals, as well as our sense of ourselves and our obligations: who we are, who we want to become, who we care for and about, and what kind of world we want to live in.10

Philosophers and plowmen
Each must know his part
To sow a new mentality
Closer to the heart.
—Rush

Notes

1. In that paper and in other work, we have used multiple terms, such as “part human and part nonhuman animal,” “human–nonhuman chimera,” “part-human chimera,” and “humanesque chimera,” while others have suggested using what they take to be less “morally loaded” terms such as “animal containing human material” (e.g., Haber and Benham 2012 and Abelman, O’Rourke, and Sonntag 2012, following Academy of Medical Sciences 2011).
2. In a critical though collegial response to our account, Matthew Haber and Bryan Benham (2012) suggested, amongst other concerns, that we were raising issues not of confusion but rather of uncertainty. We maintain that we are fairly certain that (some kinds of) human–nonhuman chimeras will challenge the ways in which we treat both nonhuman animals and the resulting chimeras, especially given that there is no robust moral demarcation line between species; accordingly, we prefer to moot moral confusion over moral uncertainty.
3. It is worth noting that another class of novel beings has begun to emerge: humanoid (and sometimes artificially intelligent) robots that occupy the gap between humans and machines. As with organic human–nonhuman chimeras, humanoid robots are currently being built to improve the lives of humans, whether as manual laborers, as companions, or for military purposes. Research and development in humanoid robotics raises questions about moral status similar to those raised by the organic chimera work, namely: whether and under what circumstances these emerging beings might have sufficient moral status that their continued use by humans for exclusively human purposes would be especially morally problematic. Other scholars, including some authors of chapters in this book, have engaged (aspects of) this question in regard to humanoid robots and artificial intelligence more broadly. We restrict our focus here to the organic world, though we believe the analysis of inexorable moral confusion could be developed further to encapsulate these other kinds of novel beings.
4. We ask this question well aware that there are many who cogently argue (and we are sympathetic to some of these arguments) that we already have very good reasons not to use nonhuman animals exclusively for human purposes (such as food, sport, labor, and medical research).
5. See, for instance, the various ways in which the following authors discuss the issue: NRC 2005; Greely et al. 2007; Greene et al. 2005; Haber and Benham 2012; Wu et al. 2016; and Koplin 2019.
6. Streiffer (2005), in particular, has ably addressed issues about animal welfare; de Melo-Martin (2008), Koplin (2018), and Mann, Sun, and Hermerén (2019) are amongst the many commentators who have contributed important insights about human dignity in relation to chimera experimentation; and both Greely (2011) and Streiffer (2019) have provided comprehensive overviews of the full range of ethical issues raised by this research.
7. For reasons that remain unclear, but include technical issues and personnel considerations, Weissman did not end up performing the experiments.
8. It is important to note that while the human cerebral organoid scenario presumes that the resultant human–nonhuman chimera would demonstrate high-grade chimerism and thus the potential for human-like cognitive (or other morally relevant) capacities, a likelier outcome is cerebral deficit due to the need to create a surgical cavity in the host animal’s brain into which to transfer the organoid (Chen et al. 2019).
9. It might be objected here that Alpha males amongst the Great Apes, or perhaps the females of the species amongst the Bonobos, engage in a kind of hierarchy that could be described as a “moral status hierarchy.” Maybe so. Our point is more at the meta-level than the empirical level: it seems to matter dramatically within folk psychological conceptions of “the” human species that “we” are the ones who decide who has moral status—and what does not.
10. We are grateful to Steve Clarke and Julian Savulescu, the convenors of the “Rethinking Moral Status” conference held in 2019 at St Cross College, Oxford, as well as an anonymous reviewer for extensive comments on a draft of this chapter. JSR also acknowledges the generous support of the Lincoln Center for Applied Ethics at Arizona State University for travel funds to participate in the conference.

References

Abelman, M., O’Rourke, P. P., & Sonntag, K. C. 2012. Part-human animal research: the imperative to move beyond a philosophical debate. The American Journal of Bioethics 12(9), 26–8.
Academy of Medical Sciences. 2011. Animals containing human material. Technical report. Available at: .


Baylis, F., & Robert, J. S. 2005. Radical rupture: exploring biological sequelae of volitional inheritable genetic modification. In J. E. J. Rasko, G. M. O’Sullivan, & R. A. Ankeny (eds), The Ethics of Inheritable Genetic Modification (pp. 131–48). Cambridge: Cambridge University Press.
Baylis, F., & Robert, J. S. 2007. Part-human chimeras: worrying the facts, probing the ethics. The American Journal of Bioethics 7(5), 41–5.
Birch, T. H. 1993. Moral considerability and universal consideration. Environmental Ethics 15, 313–32.
Bok, H. 2003. What’s wrong with confusion? The American Journal of Bioethics 3(3), 25–6.
Brivanlou, A. H., Gage, F. H., Jaenisch, R., Jessell, T., Melton, D., & Rossant, J. 2003. Setting standards for human embryonic stem cells. Science 300, 913–16.
Buchanan, A. 2009. Moral status and human enhancement. Philosophy & Public Affairs 37(4), 346–81.
Chang, A. N., Liang, Z., Dai, H. Q., Chapdelaine-Williams, A. M., Andrews, N., Bronson, R. T., Schwer, B., & Alt, F. W. 2018. Neural blastocyst complementation enables mouse forebrain organogenesis. Nature 563, 126–30.
Chen, H. I., Wolf, J. A., Blue, R., Song, M. M., Moreno, J. D., Ming, G., & Song, H. 2019. Transplantation of human brain organoids: revisiting the science and ethics of brain chimeras. Cell Stem Cell 25, 462–72.
Coeckelbergh, M. 2012. Growing Moral Relations: Critique of Moral Status Ascription. New York: Palgrave Macmillan.
Coeckelbergh, M., & Gunkel, D. J. 2014. Facing animals: a relational, Other-oriented approach to moral standing. Journal of Agricultural and Environmental Ethics 27, 715–33.
De Los Angeles, A., Hyun, I., Latham, S. R., Elsworth, J. D., & Redmond, Jr., D. E. 2018. Human–monkey chimeras for modeling human disease: opportunities and challenges. Stem Cells and Development 27(23), 1599–604.
De Los Angeles, A., Pho, N., & Redmond, Jr., D. E. 2018. Generating human organs via interspecies chimera formation: advances and barriers. Yale Journal of Biology and Medicine 91, 333–42.
De Melo-Martin, I. 2008. Chimeras and human dignity. Kennedy Institute of Ethics Journal 18(4), 331–46.
DeGrazia, D. 1996. Taking Animals Seriously: Mental Life and Moral Status. New York: Cambridge University Press.
DeGrazia, D. 2002. Animal Rights: A Very Short Introduction. New York: Oxford University Press.
Goldstein, R. S., Drukker, M., Reubinoff, B. E., & Benvenisty, N. 2002. Integration and differentiation of human embryonic stem cells transplanted to the chick embryo. Developmental Dynamics 225, 80–6.


Greely, H. T. 2011. Human/nonhuman chimeras: assessing the issues. In T. L. Beauchamp & R. G. Frey (eds), The Oxford Handbook of Animal Ethics (pp. 671–98). New York: Oxford University Press.
Greely, H. T., Cho, M. K., Hogle, L. F., & Satz, D. M. 2007. Thinking about the human neuron mouse. The American Journal of Bioethics 7(5), 27–40.
Greene, M., Schill, K., Takahashi, S., Bateman-House, A., Beauchamp, T., Bok, H., Cheney, D., Coyle, J., Deacon, T., Dennett, D., Donovan, P., Flanagan, O., Goldman, S., Greely, H., Martin, L., Miller, E., Mueller, D., Siegel, A., Solter, D., Gearhart, J., McKhann, G., & Faden, R. 2005. Moral issues of human–non-human primate neural grafting. Science 309, 385–6.
Grimm, D. 2019. 2020 U.S. spending bill restricts some animal research, pushes for lab animal retirement. Sciencemag.org, 19 December. doi:10.1126/science.aba6454.
Gunkel, D. J. 2007. Thinking Otherwise: Philosophy, Communication, Technology. West Lafayette, Ind.: Purdue University Press.
Gunkel, D. J. 2012. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, Mass.: MIT Press.
Gunkel, D. J. 2018. The other question: can and should robots have rights? Ethics and Information Technology 20, 87–99.
Haber, M. H., & Benham, B. 2012. Reframing the ethical issues in part-human animal research: the unbearable ontology of inexorable moral confusion. The American Journal of Bioethics 12(9), 17–25.
Han, X., Chen, M., Wang, F., Windrem, M., Wang, S., Shanz, S., Xu, Q., Oberheim, N. A., Bekar, L., Betstadt, S., Silva, A. J., Takano, T., Goldman, S. A., & Nedergaard, M. 2013. Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice. Cell Stem Cell 12(3), 342–53.
Kobayashi, T., Yamaguchi, T., Hamanaka, S., Kato-Itoh, M., Yamazaki, Y., Ibata, M., Sato, H., Lee, Y. S., Usui, J., Knisely, A. S., Hirabayashi, M., & Nakauchi, H. 2010. Generation of rat pancreas in mouse by interspecific blastocyst injection of pluripotent stem cells. Cell 142, 787–99.
Koplin, J. J. 2018. Organs, embryos, and part-human chimeras: further applications of the social account of dignity. Monash Bioethics Review 36, 86–93.
Koplin, J. J. 2019. Human–animal chimeras: the moral insignificance of uniquely human capacities. Hastings Center Report 49(5), 23–32.
Koplin, J. J., & Savulescu, J. 2019. Time to rethink the law on part-human chimeras. Journal of Law and the Biosciences 6(1), 37–50.
Levine, S., & Grabel, L. 2017. The contribution of human/non-human animal chimeras to stem cell research. Stem Cell Research 24, 128–34.
Mann, S. P., Sun, R., & Hermerén, G. 2019. Ethical considerations in crossing the xenobarrier. In I. Hyun & A. De Los Angeles (eds), Chimera Research: Methods and Protocols (Methods in Molecular Biology, vol. 2005, pp. 175–93). Dordrecht: Springer.
National Research Council (NRC). 2005. Guidelines for Human Embryonic Stem Cell Research. Committee on Guidelines for Human Embryonic Stem Cell Research, Institute of Medicine, and Board on Health Sciences Policy. Washington, DC: National Academies Press.
Ourednik, V., Ourednik, J., Flax, J. D., Zawada, W. M., Hutt, C., Yang, C., Park, K. I., Kim, S. U., Sidman, R. L., Freed, C. R., & Snyder, E. Y. 2001. Segregation of human neural stem cells in the developing primate forebrain. Science 293, 1820–4.
Robert, J. S. 2006. The science and ethics of making part-human animals in stem cell biology. FASEB Journal 20, 838–45.
Robert, J. S. 2009. Nanoscience, nanoscientists, and controversy. In F. Allhoff and P. Lin (eds), Nanotechnology and Society: Current and Emerging Ethical Issues (pp. 225–39). Dordrecht: Springer.
Robert, J. S., & Baylis, F. 2003. Crossing species boundaries. The American Journal of Bioethics 3(3), 1–13.
Robert, J. S., Maienschein, J., & Laubichler, M. 2006. Systems bioethics and stem cell biology. Journal of Bioethical Inquiry 3, 19–31.
Singer, P. 1975 (1990). Animal Liberation, 2nd edn. New York: Random House.
Streiffer, R. 2005. At the edge of humanity: human stem cells, chimeras, and moral status. Kennedy Institute of Ethics Journal 15(4), 347–70.
Streiffer, R. 2019. Human/non-human chimeras. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, at .
Wu, J., Greely, H. T., Jaenisch, R., Nakauchi, H., Rossant, J., & Izpisua Belmonte, J. C. 2016. Stem cells and interspecies chimaeras. Nature 540, 51–9.
Wu, J., Platero-Luengo, A., Sakurai, M., Sugawara, A., Gil, M. A., Yamauchi, T., Suzuki, K., Bogliotti, Y. S., Cuello, C., Morales Valencia, M., Okumura, D., Luo, J., Vilariño, M., Parrilla, I., Soto, D. A., Martinez, C. A., Hishida, T., Sánchez-Bautista, S., Martinez-Martinez, M. L., Wang, H., Nohalez, A., Aizawa, E., Martinez-Redondo, P., Ocampo, A., Reddy, P., Roca, J., Maga, E. A., Esteban, C. R., Berggren, W. T., Nuñez Delicado, E., Lajara, J., Guillen, I., Guillen, P., Campistol, J. M., Martinez, E. A., Ross, P. J., & Izpisua Belmonte, J. C. 2017. Interspecies chimerism with mammalian pluripotent stem cells. Cell 168(3), 473–86.


12
Chimeras, Superchimps, and Post-persons
Species Boundaries and Moral Status Enhancements
Sarah Chan

1.  Introduction: Moral Status and Biological Species

Moral status used to be easy: it was simply a question of ‘them’ and ‘us’. That is, our self-interest encourages us to start from the assumption that we matter; deciding who else might have moral status is then about determining who counts as ‘relevantly like us’—which properties are relevant and why. Contemporary ‘folk psychology’ accounts of moral status (see Robert and Baylis 2003) tend to follow species boundaries: ‘humans yes, others less’. In other words, it is a commonly held intuition that humans have moral status while most other species don’t; or if they do, humans still matter more. Yet, while both self-interest and everyday intuition may suggest that there is something special about being human, various analyses have demonstrated that basing moral status on species membership is philosophically difficult to justify. To begin with, biological species itself is a contested concept: philosophers of biology have developed several varied accounts of the species concept, but none that serves all purposes or accurately fits all applications (de Queiroz 2005, Hey 2006, Mayr 1996). There is, furthermore, no sound reason for according the (biological) property of species membership any moral significance, not least given that species, rather than existing as God-given, fixed categories, instead represent scientifically constructed (and contested) ways to divide up taxonomically the populations that have resulted from the process of evolution. It is most likely simply an accident of evolutionary history that there are no other hominid species still in existence that might have led us to form different intuitions about the relationship between species membership and moral status.
Sarah Chan, Chimeras, Superchimps, and Post-persons: Species Boundaries and Moral Status Enhancements. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Sarah Chan 2021. DOI: 10.1093/oso/9780192894076.003.0012

It is likewise perhaps an accident of philosophical history, via the rise of Enlightenment humanism, that those intuitions have been consolidated and a speciesist account of moral status allowed to take hold in popular values and belief systems. As a number of philosophers have argued, however, biological species boundaries are a poor proxy for identifying ‘moral species’: privileging some beings on the basis of species membership alone has been described as ‘speciesism’, a form of unjustified discrimination (Ryder 1974, Singer 1975). In the context of contemporary bioethics, new knowledge and emerging technologies that propose to disrupt the conventional categories and boundaries of biology have prompted us to revisit the species question on several fronts. Our thinking about species and moral status is being challenged in new ways. First, we are able to disrupt species boundaries not just philosophically but biologically: the creation of interspecies chimeras, in disrupting the human/non-human species boundary, forces us also to go beyond species boundaries in considering the moral status of the new beings created. Second, we now have a greater understanding of the biology underlying the cognitive and psychological capacities that on some accounts give rise to moral status. This understanding also enables us potentially to manipulate those capacities, with the congruent possibility of enhancing moral status. Such a process may lead to members of existing biological species acquiring new moral properties: should a non-human animal that could think, feel, and relate to others in ways (relevantly) similar to humans be accorded human-level moral status? It has, further, been suggested that some forms of radical enhancement might transform humans into a new species, of transhuman or posthuman. The potential thus exists to cross moral, as well as biological, species boundaries: these transitions might involve not just altering creatures’ biological species, but transforming them into morally different kinds of beings.
In this chapter, I explore challenges raised by the prospect of crossing both biological and moral ‘species boundaries’, examine the implications of species transitions in relation to obligations towards existing beings and beings that might be created via the species transition process, and reflect on how this might advance our thinking about moral status. I first lay out some conceptual issues relevant to species and moral status; then turn to considering the possibilities of species transition and moral status enhancement, of both non-human animals and humans, and our obligations to pursue or refrain from creating such enhanced beings. The technological possibilities for crossing species boundaries and the future imaginaries they enable can, I suggest, prompt useful approaches to (re)thinking moral status, as well as indicate what our normative approach should be regarding the creation of new moral as well as biological species.

2.  Thinking about Moral Status

Moral status is a complex and nuanced concept, difficult to define comprehensively. Examining the space in which ‘moral status talk’ operates, however, reveals certain features of the landscape. In the first place, the concept of moral status should be considered in tandem with its application: we are interested in moral status not just in itself, but for what it means and what it does—the role it can play in our moral arguments. Next, moral status, or what follows from it, should be a differential property; some beings must matter more and some less, or moral status would not be of much use to us in resolving the sorts of dilemmas with respect to which it is invoked. Moral status also presumably has some relationship to interests and to obligations, in that having moral status—being morally considerable—implies being capable of being the direct object of moral obligations. I will not attempt here to produce a definitive account of moral status, but from the numerous varied accounts proposed by others, some plausible features of moral status include that it should be: (1) based at least in part on psychological capacities; (2) related to interests; and (3) neutral with respect to biological species. In what follows, I treat moral status and full moral status as ‘problem concepts’: that is, I shall use these terms in discussion while acknowledging their precise content and definition to be problematic, in the hope that such discussion will go some way towards developing a better account of these concepts. Is relative moral status, then, about what interests a being has, or what we owe to a being in view of its interests? To illustrate the difference between these possible interpretations: consider two beings A and B, who have respective sets of interests IA and IB. If IA and IB are identical, does that mean A and B have the same moral status?
Or can we say that, though A and B may have identical interests, if A has a higher moral status than B, our obligations to satisfy IA (or a subset thereof) are greater than our obligations to satisfy IB (or a like subset of IB)? Further, if A and B share the same interest in equal measure, and assuming that we have no special responsibilities towards either, would A’s having a higher moral status justify (or require) placing the satisfaction of A’s interest above that of B? In other words, does moral status vary on the basis of interests? Different interests are likely to ground different obligations, but does that imply a different moral status? With respect to humans and non-human animals, our intuitions certainly suggest that in some regards and in comparison with at least some animals, humans merit special treatment. For example, most people would probably concede that human lives are ordinarily considered more valuable than animal lives; keeping a human imprisoned involuntarily would seem to be worse than keeping an ant captive in a jar, and so forth. These intuitions might of course simply be mistaken, but there are a number of plausible lines of argument that could justify them. One would be to argue that humans have a higher moral status and thus the same interest held by a human counts more than a like interest held by an animal. Alternatively we might argue (following critiques of speciesist approaches to deciding who counts and why) that humans and animals should be owed equal consideration for like interests: the case for different treatment, then, would rely on the caveat that humans’ and animals’ interests in the same thing (such as continued life, or liberty) are often not, in fact, alike (see for discussion Kagan 2018, 2019). In considering these questions in the first place, however, we encounter the problems of commensurability and separability of interests. What might or could it mean to state that, for example, a mouse and a human have ‘the same’ interest in continued life? It is difficult to see how a single interest in, say, the continuation of one’s life can be disentangled from the contingent interests in all the things one might do with that life once continued. There is, further, an evidentiary problem with respect to interests and the strength thereof: if I claim I have an equal and as strong an interest in continued life as you, how can we know this to be true?
The anthropocentric, species-bounded approach to moral status has, as I shall later argue, allowed us conveniently to avoid many of these tricky issues; the challenges presented by boundary-crossing possibilities, biological and moral, will require us to reconsider them and critically examine how they have shaped our approaches to moral status so far.

2.1  Moral status, interests, and identity

Most accounts of moral status relate it in some way to interests, but, as noted above, the precise relationship between interests and moral status is a matter of some debate. If, however, the main reason we want to understand what moral status a being has is to know how we should treat it and what our obligations are towards it, perhaps disentangling interests from moral status is not crucial.

Moral status, however, must be more than just the sum of all our interests: not all interests would appear to be relevant to moral status. For example, if I purchase a bicycle, I presumably gain an interest in owning a bicycle pump, but this newly acquired interest seems trivial in relation to questions of moral status: it does not seem plausible to claim that my moral status has increased as a result of my expanded interest in pump ownership. Specifically, then, moral status must be a function not of all interests indiscriminately, but of certain sorts of what we might call critical life interests. While defining what makes something a critical life interest is a project in itself, good candidates include interests associated with the usual threshold capacities that have been suggested for different potential levels of moral status: most commonly sentience, corresponding to an interest in not feeling pain; and ‘personhood’, a complex set of properties that give rise to, amongst others, an interest in continued existence. The latter has often been referred to as conferring ‘full moral status’ (FMS).

What does it mean to have FMS? I will return to unpacking the ‘fullness’ of FMS later; for now, I shall use the term in the sense which philosophers have often treated as synonymous with ‘personhood’. It is useful to observe that when we speak of FMS we mean a bundle of moral rights. The most commonly invoked is the right to life or, in negative terms, the right not to be killed. Along with this right, we usually assume, go the rights to self-determination, not to be used instrumentally, and so forth.
In other words, the rights usually associated with FMS entail a kind of ‘respect for life’ not just in the sense of ‘aliveness’, but respect for a life as a whole, possessed and experienced by a specific creature with a unique identity.1 The idea of FMS is therefore inevitably associated with questions about identity (see DeGrazia, this volume): what makes a life a coherent whole is also what gives the individual living it a definite and defined identity, at least in one sense of the term (see also McMahan 2002). This, as we shall see, has implications for how we think about moral status enhancements (MSEs), and their relationship to (potentially) species-altering technologies. In particular, we must consider that MSEs and species transitions could be either identity-preserving or identity-disrupting, and that our evaluation of these interventions may yield different results in each case.


3.  Moral Status Enhancements

At this point it may be worth examining in more detail some types of what might be considered MSE. Note that I am not claiming that these are all definitively MSEs; they are all, however, scenarios in relation to which the potential for MSE has been considered amongst the ethical issues pertaining to each case.

One of the most prominent cases in which potential effects on moral status have been discussed is that of human–non-human chimeras. Early sustained bioethical attention to this issue (Robert and Baylis 2003) was further spurred by the idea of the ‘human neuron mouse’ (Greely et al. 2007), an experiment proposed but so far not realized. Chimeric mice with human-derived glial cells, though, have since been created, with the intriguing result that the mice thus produced showed enhanced cognitive function (Han et al. 2013). Further endeavours in chimerization that may prove relevant to the issue include the creation of pig–human chimeras for the purpose of organ transplantation, via a process known as blastocyst complementation (reviewed in Wu et al. 2016, Wu and Izpisua Belmonte 2016). Proof-of-principle experiments with interspecies blastocyst complementation to date have shown that the resulting chimeras may end up with some small percentage of donor cells in other tissues, including brain (Wu et al. 2016, Wu and Izpisua Belmonte 2016).

Of course, having a small number of human cells in the brain is not likely to produce vast cognitive changes. Even so, the very fact of crossing the biological species boundary may provoke renewed questions about moral status, or produce ‘inexorable moral confusion’ (Robert and Baylis 2003), given the above-noted (if philosophically unjustified) tendencies to regard the boundary between moral species, ‘us’ and ‘them’, as contiguous with the human/non-human divide. Furthermore, the mouse results suggest that human neural chimerism can lead to some level of cognitive enhancement.
While it does not seem that the ‘human glia mouse’ has attained typical human levels of cognition, we can conceive of other possible ‘animal enhancements’ that might produce MSE. McMahan’s ‘superchimp’ is as yet hypothetical, but research with non-human primates that could lead to increases in intelligence is on the cards: for example, genome editing was recently used to introduce a gene associated with human cognition into monkeys (Shi et al. 2019).

The prospect of species-altering enhancements is not limited to animal application: a common theme in considerations of human enhancement is that, if taken far enough, such interventions might cause us to undergo our own species transition, to transhumans or posthumans. It has also been suggested that posthumans might be a new moral species of ‘post-persons’, with higher moral status than regular persons. Still another case, beyond the realms of the purely biological, is the prospect of developing sophisticated artificial intelligences that might become deserving of moral status.

In considering these activities, one question to be addressed is whether MSEs are something we should pursue or avoid, and for what reasons. Some further examples may illuminate the issue here. A much more quotidian (and presumably acceptable) MSE is that which occurs during the usual process of human development. If we accept (as many do) that human life at its earliest stages, say the zygote or early embryo, has less moral status than a typical adult human, then development must entail at some point an increase in moral status, whether gradual or sudden. Indeed, if we follow those philosophers who have argued that a newborn infant may not have the same moral status as a grown human, some degree of MSE must occur at some point between birth and adulthood.

Another example of putative MSE might also be called moral status therapy: where a person P suffers a catastrophic harm to psychological capacities that might (on some accounts) cause them to become a non-person or to lose FMS, it is usually considered acceptable to undertake measures aimed at restoring normal functioning, which would also restore moral status. Now, one might argue that P’s moral status is not lost as such while they are temporarily impaired, and that the possibility of P’s continued existence as a person means that they continue to have the moral status of a person until that possibility is extinguished; thus the restoration of function is not an MSE as such, but something that we owe to P in view of their ongoing moral status.
But now imagine a newborn child (or, if we prefer to avoid the infanticide issue, a foetus of a few months) who suffers from a brain disease that would prevent them from developing normal capacities. Although they are not yet a person, it would seem to be permissible, possibly required, to cure the condition that would prevent them from attaining personhood, thus enabling MSE to occur. Our usual expectations with respect to the latter examples, then, demonstrate that in some cases at least, an intervention that would lead to MSE can be morally acceptable, morally right, and perhaps even morally obligatory. This does not, however, demonstrate that it is so because of being an MSE; it may be acceptable, right, or obligatory for some other reason but just so happens also to have the incidental consequence of MSE.


4.  Obligations to MSE?

The question of what sorts of beings should have moral status pertains not just to our theories about what properties are required for moral status and therefore what kinds of beings should be accorded it; it also has implications regarding whether we should attempt directly to confer moral status on (or increase the moral status of) other beings by altering their properties. In the following sections, I argue that, while it may be difficult to define how MSE serves a particular being’s interests, the possibility of newly acquired interests being frustrated or enhanced moral status disrespected is not in itself enough to give us reasons to refrain from creating morally enhanced beings. Further, however, a closer examination of moral and/or biological species-altering interventions suggests that many of these are, properly speaking, projects of de novo creation rather than enhancement.

Let us consider first whether we have obligations to an individual being to perform MSE.2 On a simple interest-based account of obligations, the answer to this turns primarily on whether that being can be said to have an interest in gaining increased moral status, which in turn depends on what we mean by higher moral status. If higher moral status entails a right to better treatment or a stronger claim that a given interest be satisfied, hence more chance of having one’s interests fulfilled, then MSE in itself might well be something in which we have an interest. Alternatively, it might be the case that an individual A has an interest in acquiring or having enhanced a certain property that would incidentally confer increased moral status. Our obligation in this case might not be to MSE as such, but to the fulfilment of a separate interest, such as becoming more intelligent or more morally capable, that merely happens to lead to an increase in moral status.
Given the usual association between cognitive capacities and moral status, and the ways in which cognitive enhancement has been framed in the discourse over human enhancement, this seems like a plausible route towards justifying obligations to MSE.

In assessing whether MSE, or an intervention that might lead to it, is in a being’s interests, we face the problem of how and whether to consider its interests at a particular time, or across its life as a whole. Time-specific interests can of course also include a being’s interests in what happens to its future self, insofar as and to the extent that it is connected to that self through the narrative course of a life (see McMahan 2002). How, though, should we assess these interests when that life course itself may substantially change depending on whether and which interests are fulfilled? MSE and moral species transitions pose particular issues in this regard because of the critical transformations they are likely to entail.

Considering a being’s interests across its various potential life courses also raises the problem of ‘compounding interest’ in the moral sense: where fulfilment of one interest leads to the acquisition of other interests, which in turn demand to be satisfied, potentially creating yet further interests, and so forth. The types of expanded capacities likely to be associated with MSE are especially likely to give rise to, not just new interests in the trivial sense, but new kinds of interests. For example, beings who are aware of their future existence can have different kinds of interest in that future from beings who are not. Again, we might ponder whether an expanded range of interests gives rise to or indicates greater moral status; presumably, given the relationship between interests and obligations, it at least grounds correspondingly greater obligations.

On the other hand, having an expanded range of interests might result in more opportunities for one’s interests to be frustrated. Can it be ‘good for us’ to acquire an ever-increasing range of interests, proportionately fewer of which are likely to be satisfied? We might think (pace Mill) that it is better to be, for example, a human–pig chimera dissatisfied than a pig satisfied. The basis for this, however, requires further scrutiny (see for example Sapontzis 2014). Such a claim seems to imply that the greater range of interests and capacities that results in dissatisfaction in relation to some of these interests either also enables greater overall satisfaction, or makes the same level of satisfaction somehow count for more in evaluating how ‘good’ a life is; both of these propositions demand further justification.
One might alternatively argue that the proportionate fulfilment of interests is what counts: that it is better to have a higher proportion of one’s interests fulfilled than a greater absolute number of interests fulfilled. We do not, however, usually consider this a good reason to diminish the range of interests of future people.

At any rate, the question with respect to MSE and compounding interests is whether the potential dissatisfaction of new interests or disrespect for enhanced moral status provides us with a reason against pursuing such enhancements. Streiffer, for example, has suggested that in the case of human–animal chimeras and MSE, ‘the subsequent treatment of the subject likely will fall far below what its new moral status demands’ (Streiffer 2005, at 348) and that this is a reason not to create such beings, or to perform such enhancements. The foreseeability of mistreatment, however, should provide us with reasons to ensure beings are treated commensurately with their moral status, rather than to ensure they have less capacity to be wronged in the first place.

To illustrate this, consider two analogous cases. The first is that of an unfair employer who takes on staff but refuses to pay them a living wage: the propensity of some employers to exploit their staff is not a reason to argue that nobody should ever be offered employment; it is a reason to argue that employees should be treated fairly and to support proper employment laws that would force employers to do so. Second, consider a case of moral status impairment caused by brain injury (or a condition that would hamper normal brain development) as discussed above. The fact that all of us may at some point be treated in a way that does not fully respect our moral status is not in itself a reason to avoid restoring or conferring this moral status by treating the injury and addressing the impairment.

It is true that human–animal chimeras, at least to begin with, will likely be created as experimental subjects by definition: that is, they could not come into existence in any other way. This, however, is not fundamentally incompatible with respect for moral status. Indeed, chimeras, by troubling our usual assumptions about moral status, might provide a catalyst to rethink the meaning of respect for the moral status of research participants, whether human or non-human. I will return to consider this in the concluding section of this chapter.

5.  Conceptual Issues in Moral Status Enhancement

The above discussions highlight a number of conceptual issues arising with respect to MSE, which in turn reveal some potential fault lines in our thinking about moral status and enhancement more generally.

The first of these we might term the species-identity problem, concerning the standard against which a putative MSE should be evaluated. Assume that pigs, in comparison to humans, have lower levels of the psychological and cognitive capacities on which moral status is based, and thus a lesser moral status. Would a pig–human chimera that had intermediate capacities and moral status partway between the two be an enhanced pig, or a de-hanced (or disenhanced) human? What is the appropriate standard of comparison here; and indeed, if biological species is supposed to be irrelevant to moral status, why should a species-based standard be invoked at all? It has been argued that enhancements are in one sense relative to the individual; that is, the relevant comparison is whether the individual is better off as a result of the intervention than they otherwise would have been. In the case of some species-boundary-crossing interventions, particularly chimerization, it is not always easy to determine who the individual in the counterfactual case would be: is it the pig that would have resulted from the unadulterated pig embryo, the human that would have resulted from the human embryo; or is there no ‘but-for’ individual for relevant comparison?

Herein also lies our second problem: for whom is an MSE to be considered an enhancement? Regarding enhancements in general, John Harris (2007) claims that ‘If it wasn’t good for you, it wouldn’t be enhancement.’ Putting aside for the moment the question of whether MSE can be considered to be ‘good for you’, we are faced with the problem of individual as well as species identity: for whom is it good?

A well-known thought experiment from Michael Tooley imagines a sort of ‘personhood potion’, a chemical that when injected into the brains of kittens gives them the potential to develop into the kinds of beings that would be considered persons (Tooley 1972). Imagine that such a potion worked instantaneously rather than developmentally, so that its effect was to turn the recipient into a full-fledged person at the moment of administration; and that it could be administered not just to kittens, but to (say) fruitflies. Could we say that being transformed into a Fly-Person would be ‘good for’ the fruitfly as it existed previously? Or would this sudden ‘personification’ constitute such a radical disruption that it would not make sense to consider the fruitfly before and the Fly-Person after as the same creature?

Something like this seems to be what Nick Agar has in mind when considering the possibility of human-to-posthuman species transitions produced by ‘radical enhancement’ (Agar 2010). Becoming posthuman, as he envisages it, involves altered capacities and apparently a biological species transition as well as moral status enhancement.
Agar argues that, faced with such a transformation, it might be better for us to remain human rather than becoming posthuman: we could not have an interest in acquiring and satisfying a range of new posthuman interests because, he argues, ‘those pleasures would not be ours’ (Agar 2010, at 127). What seems to be indicated here, in other words, is that the species transition implicated in radical posthuman enhancement involves the sort of disruption to identity that would render it no longer an enhancement as such: it would not be ‘good for us’ because ‘we’ would no longer be ‘us’.

It seems, in fact, that of the moral-status-affecting interventions we might imagine, not all are straightforwardly ‘moral status enhancements’. Some, notably human–animal chimerization, constitute creating a morally significant being de novo rather than enhancing an existing being.3 In other cases, such as fruitfly-to-person or human-to-posthuman transformation, a species transition might produce such a disruptive effect that it could not meaningfully be said to be an enhancement for the creature on whom it was originally performed.

The above issues place some fairly significant constraints on our potential obligations to MSE. Nonetheless, bearing these points in mind, we can perhaps advance the following two, somewhat limited, claims:

1. if we are going to create the sort of being who, once in existence, would and could have an interest in being enhanced in a particular way, that might give us moral reasons to enable the capacity for such enhancements; but this does not in itself constitute an obligation to create such a being; and
2. if an existing being has an interest in MSE, that might give us reasons to pursue it; but this holds only within the limits of interventions that would allow sufficient continuity for it to be ‘good for’ that being.

While these claims are limited in the scope of the obligations they outline, they may nonetheless have application to a fairly wide range of contexts, including many of the scenarios of MSE that are commonly envisioned, and perhaps quite a few beyond. For example, while we have been prompted to consider MSE in the context of human–animal chimeras and other sorts of novel beings, it may equally be the subject of obligations to existing, everyday sorts of beings, obligations that we ought to confront (Chan 2009, Chan and Harris 2011).

6.  Post-persons and FMS

So far we have focused mostly on MSE that would convert non-persons or pre-persons into persons, those with ‘full moral status’ (FMS). How, then, does the concept of the ‘post-person’ factor into considerations of moral status and its enhancement? Most evidently, the idea of FMS implies that there is nothing beyond this threshold: one cannot have ‘more-than-full moral status’ (MTFMS) without making FMS itself ‘less-than-full’. The suggestion that post-persons might have MTFMS is therefore problematic.

One possible solution is to accept that ‘FMS’ as we currently conceive of it is something of a moral fiction, albeit a convenient one. The concept of FMS has emerged largely from the bioethical obsession with life and death, and in the process become more or less aligned with the boundary that these discourses attempt to delineate: between the kinds of beings that are ‘persons’, who have a right to life, whom it would be wrong to kill, who have inherently valuable lives; and the kinds of beings who are not and do not. It has also been shaped in part through ‘ethics by exclusion’, where we go about defining what FMS is and who has it by thinking about who does not have it and what it is not. On the accounts that have thus emerged, of course, most humans would have FMS, since, as pointed out earlier, our approach to moral status is usually rather self-interested.

The idea of quantized moral status, with FMS as a threshold beyond which all beings have exactly the same moral status, plays nicely to the political ‘comfort blanket’ of declaring that all ordinary humans are (or should be) morally equal, while allowing us to avoid confronting the above-discussed problems of understanding and evaluating interests, and how they relate to moral status. If beings have an interest in their continued lives, respect for their moral status requires that we recognize the value (to them) of those lives; the strength of that interest may vary, but this variation would be impossible to assess objectively. We are spared from confronting the questions of how to measure interests and whether, on the basis of differences in interests, we should treat two beings of the same species as having different moral status or differently valuable lives, by declaring a threshold above which these differences do not matter.
The concept of FMS thus provides us with a convenient way of dealing with the problems of commensurability, separability, and evidence when it comes to assessing interests, via a combination of practical realism and giving persons the benefit of the doubt, specifically in relation to the critical interest that FMS has primarily been developed to address: an interest in continued life.

On this account, post-personhood need not present a challenge to the application of FMS as usual. It might be the case that post-persons are more strongly connected to their future selves and have a stronger interest in the continued life of that future self. Alternatively, post-person enhancement might alter interests in a way that we might consider a diminishment of moral status, or at least of the usual sorts of interests we have come to associate with having FMS; for example, if post-persons’ heightened philosophical faculties led them to a different understanding of the importance of individual personal identity, leading them to place less value on continued existence, perhaps they might have weaker interests in individual continued life than we do. In either case, as FMS enables and exhorts us to avoid making such individualized assessments of the value of life and the wrongness of killing amongst persons even where persons may in fact have different strengths of interests in living, it could function equally well to do so amongst persons and post-persons.

The mistake that makes MTFMS appear problematic, then, lies in assuming that the interests that could ground MTFMS must be of the same nature and pertain to the same entitlements as the interests that ground what we now think of as FMS. In fact, MTFMS need not mean that post-persons have a stronger right to life than persons, any more than FMS in comparison to the lower threshold capacity of sentience means that persons have a stronger claim not to experience pain than, say, kittens. The novel interests of post-persons might conceivably represent a further critical threshold for another level of moral status,4 but this could well be a threshold that we are currently unable to comprehend, representing entitlements that might be meaningless to us even if granted. Indeed, post-persons might have an entire range of interests that we cannot presently imagine.

An interesting speculation is whether the existence and actions of beings with these novel interests could somehow awaken new interests in us. Might humans, for example, come to appreciate and value the ‘unimaginably complex racket[s]’ of posthuman symphonies (Agar 2010), even if we ourselves are not currently disposed to create them? Furthermore, perhaps we might acquire some of the moral-status-relevant interests of post-persons, via becoming aware of the new cognitive possibilities they present to us and developing an appreciation of these value frameworks ourselves. In other words, MSE or ‘uplift’ might occur via conceptual, rather than physical or biological, enhancement. We might then have good reasons, for our own sakes, to create post-persons who could help ‘us’ become ‘them’.

7.  Moral Agency, Moral Status, and Obligations

A final aspect of MSE that we should not neglect is that beings with increased moral status may also gain augmented capacities for moral agency, and thus become more capable of being the bearers, as well as the beneficiaries, of moral obligations. To what sorts of obligations might human–non-human chimeras, superchimps, or post-persons be subject?

We noted above that chimeras (and superchimps, if they are ever created) would at first almost inevitably be experimental subjects by definition, but that this is not necessarily a reason to oppose their creation: research can be compatible with respect for moral status. Of course, how one is treated as an experimental subject or research participant may still violate respect for one’s moral status, and participation of human–animal chimeras in research would need to be carefully governed with that in mind. Attention to governance should likewise be required when humans are created as experiments-by-definition: for example, much recent discussion has focused on the use of human heritable genome editing, and the implications for children born already or in future from these technologies, who will be perpetual research subjects. The millions of people born via IVF over the past four decades, though, would probably attest that careful governance is not the same thing as refraining from doing it altogether! In fact, with the turn towards ‘big (health) data’ and population-level research, it is increasingly accepted that all humans may be, in the near future if not already, experimental subjects in some sense.

Moreover, some have argued that we may have an obligation, or at least good moral reasons, to participate in research, partly in view of the benefits of science generally and what we, having ourselves been benefited, owe to support it. We might likewise consider whether chimeras and other morally enhanced beings (and genome-edited humans) could have a moral obligation to serve as research participants.

It is also possible that such experimental moral subjects, through raising awareness and forcing us to confront tricky questions about moral status beyond species boundaries, could help challenge speciesist assumptions regarding moral status and thus pave the way for better treatment of other animals. Creating a few ‘sacrificial superchimps’, even if they themselves might fare worse by being born into less enlightened times, could be a vast step towards recognizing the moral status, claims, and interests of other non-human creatures, and the obligations we have towards them.
Might we then have reasons to create moral-status-enhanced HNH chimeras or enhanced animals or post-persons, not in their own interests, but in the interests of others? If post-persons are better moral agents and better moral philosophers, their existence might be ‘good for us’ (whoever ‘us’ is), enabling the realization of a better world through improved moral action: ours as well as theirs, if we can be persuaded to act on the reasons they provide, via the more effective means of moral persuasion they employ. They might also, as suggested above, facilitate increased richness of interests amongst others and produce, directly or indirectly, enhancements to the moral status of persons; perhaps they will even provide more clarity on these questions of moral status enhancements for pre-persons or non-persons, and our obligations thereto.

With respect to moral status and the interests of humans and non-humans alike, then, chimeras and other beings that cross moral as well as biological species boundaries might well result in confusion. Rather than simply ‘inexorable’, however, we should view the ensuing confusion as productive. The potential benefit that such ‘productive confusion’ offers is the opportunity to become unconfused; and that is an opportunity we should take whenever we can.

8.  Conclusion

I have, in this chapter, explored some of the possibilities of crossing both biological and moral species boundaries and the ethical questions, both normative and conceptual, that these raise. While defining such interventions as moral status enhancements is more complicated than it may first appear, in relation to the interests and identity of the beings to which they apply, I have argued that there may be cases in which moral status enhancement can be in a being’s interests. Moreover, the creation of both moral-status-enhanced animals and potential post-persons may, directly or indirectly, serve to further the interests of other beings, and hence be something we have reasons to pursue.

Notes

1. The argument here shares some similarities with Regan’s (2004) concept of the ‘experiencing subject of a life’. I am not, however, claiming that all ‘experiencing subjects of a life’ necessarily have FMS, but that FMS depends on having a sort of coherent, temporally extended self that requires one to be at least the subject-of-a-life; the point is about the relationship between moral status and identity.
2. This of course is only one aspect of the question: whether MSE is good for a particular being is only part of determining whether it is good all things considered. We may have reasons to pursue MSE that are not to do with the interests of the being enhanced, that is, MSE of one being might be good or bad for others; I will discuss this later. There is also the issue that what is good with respect to a being and what is good for that being are not necessarily identical; this is a problem I will not address here.
3. It is worth observing that this is true also of many of the sorts of human ‘enhancements’ that have been discussed, particularly germline genetic enhancements.
4. This idea of post-personhood as a new threshold is one that most discussions of the topic have accepted as possible (see for example Agar 2013, DeGrazia 2012, Sparrow 2013, Douglas 2013).


Chimeras, Superchimps, and Post-persons  213

References

Agar, N. 2010. Humanity’s End: Why We Should Reject Radical Enhancement. Cambridge, Mass.: The MIT Press.
Agar, N. 2013. Why is it possible to enhance moral status and why doing so is wrong? Journal of Medical Ethics, 39, 67–74.
Chan, S. 2009. Should we enhance animals? Journal of Medical Ethics, 35, 678–83.
Chan, S. and Harris, J. 2011. Does a fish need a bicycle? Animals and evolution in the age of biotechnology. Cambridge Quarterly of Healthcare Ethics, 20, 484–92.
DeGrazia, D. 2012. Genetic enhancement, post-persons and moral status: a reply to Buchanan. Journal of Medical Ethics, 38, 135–9.
de Queiroz, K. 2005. Ernst Mayr and the modern concept of species. Proceedings of the National Academy of Sciences of the United States of America, 102 Suppl 1, 6600–7.
Douglas, T. 2013. The harms of status enhancement could be compensated or outweighed: a response to Agar. Journal of Medical Ethics, 39, 75–6.
Greely, H. T., Cho, M. K., Hogle, L. F., and Satz, D. M. 2007. Thinking about the human neuron mouse. American Journal of Bioethics, 7, 27–40.
Han, X., Chen, M., Wang, F., Windrem, M., Wang, S., Shanz, S., Xu, Q., Oberheim, N. A., Bekar, L., Betstadt, S., Silva, A. J., Takano, T., Goldman, S. A., and Nedergaard, M. 2013. Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice. Cell Stem Cell, 12, 342–53.
Harris, J. 2007. Enhancing Evolution. Princeton: Princeton University Press.
Hey, J. 2006. On the failure of modern species concepts. Trends in Ecology & Evolution, 21, 447–50.
Kagan, S. 2018. For hierarchy in animal ethics. Journal of Practical Ethics, 6, 1–18.
Kagan, S. 2019. How to Count Animals, More or Less. Oxford: Oxford University Press.
McMahan, J. 2002. The Ethics of Killing: Problems at the Margins of Life. Oxford: Oxford University Press.
Mayr, E. 1996. What is a species, and what is not? Philosophy of Science, 63, 262–77.
Regan, T. 2004. The Case for Animal Rights. Los Angeles: University of California Press.
Robert, J. S. and Baylis, F. 2003. Crossing species boundaries. American Journal of Bioethics, 3, 1–13.
Ryder, R. 1974. Experiments on animals. In: Godlovitch, S. (ed.), Animals, Men and Morals: An Enquiry into the Maltreatment of Non-humans. New York: Grove Press.


Sapontzis, S. F. 2014. In defense of the pig. Journal of Animal Ethics, 4, 5–17.
Shi, L., Luo, X., Jiang, J., Chen, Y., Liu, C., Hu, T., Li, M., Lin, Q., Li, Y., Huang, J., Wang, H., Niu, Y., Shi, Y., Styner, M., Wang, J., Lu, Y., Sun, X., Yu, H., Ji, W., and Su, B. 2019. Transgenic rhesus monkeys carrying the human MCPH1 gene copies show human-like neoteny of brain development. National Science Review, 6, 480–93.
Singer, P. 1975. Animal Liberation: A New Ethics for our Treatment of Animals. New York: Random House.
Sparrow, R. J. 2013. The perils of post-persons. Journal of Medical Ethics, 39, 80–1.
Streiffer, R. 2005. At the edge of humanity: human stem cells, chimeras, and moral status. Kennedy Institute of Ethics Journal, 15, 347–70.
Tooley, M. 1972. Abortion and infanticide. Philosophy and Public Affairs, 2, 37–65.
Wu, J., Greely, H. T., Jaenisch, R., Nakauchi, H., Rossant, J., and Belmonte, J. C. 2016. Stem cells and interspecies chimaeras. Nature, 540, 51–9.
Wu, J. and Izpisua Belmonte, J. C. 2016. Interspecies chimeric complementation for the generation of functional human tissues and organs in large animal hosts. Transgenic Research, 25, 375–84.


13
Connecting Moral Status to Proper Legal Status
Benjamin Sachs

Benjamin Sachs, Connecting Moral Status to Proper Legal Status. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Benjamin Sachs 2021. DOI: 10.1093/oso/9780192894076.003.0013

1.  Introduction

In early 2015 the Nonhuman Rights Project petitioned the New York State Supreme Court for a writ of habeas corpus on behalf of two chimpanzees, Hercules and Leo, who were being kept as research subjects by the State University of New York at Stony Brook. The writ of habeas corpus is a long-standing pillar of Anglo-American law that, when petitioned for by a prisoner or detainee, requires state officials (the University, in this case) to justify the detention. In response to the petition Judge Jaffe ordered a hearing in which Stony Brook was required to justify its confinement of Hercules and Leo and the Nonhuman Rights Project argued on the chimps’ behalf.1 Although she eventually affirmed Stony Brook’s right to confine Hercules and Leo, the case is quite significant. While Judge Jaffe took pains not to concede the legal personhood of the chimps,2 the writ of habeas corpus is universally held to apply only to legal persons, and consequently holding a habeas corpus hearing for Hercules and Leo amounted to treating them as legal persons. This case, combined with a handful of recent changes in the constitutions and laws of various countries, suggests a growing discomfort with the millennia-old tradition of according animals the legal status ‘non-person’ or, equivalently, ‘thing’.3

In this chapter I explore to what extent a case can be made for according legal status to an entity on the basis of its possessing ‘moral status’. I use ‘legal status’ to denote an entity’s degree of legal personhood (such that the complete absence of legal personhood is still a kind of legal status, namely thinghood). ‘Legal personhood’ is a gradable property that an entity possesses to the extent that the law confers various properties on it. Three of the most important such properties, and the ones on which I will focus in this chapter, are legal rights, non-ownability, and legal standing (i.e., the ability to initiate a civil suit), each of which is itself gradable.4 And for the sake of this chapter, an entity’s ‘moral status’ is the extent to which certain acts concerning that entity qualify as wronging it. So, for instance, if lying to A wrongs A, but lying to B doesn’t wrong B, then ipso facto A has a greater degree of moral status than does B, all else being equal. I use animals as my test case (and by ‘animals’ I mean sentient non-human animals), but everything I say here about animals is generalizable to other kinds of entity.

For the sake of economy I am going to narrow the discussion in three ways. First, I’ll focus on denying that an entity’s possession of a certain degree of moral status obligates us to confer a corresponding degree of legal personhood on it. I won’t take up the question whether an entity’s lack of moral status obligates us to withhold legal personhood from it (though I would deny this too). Second, I’m going to reduce legal personhood to legal personhood vis-à-vis the criminal law. Because I’ve defined moral status in terms of wronging, it will be convenient, for a discussion of the relation of moral status to legal status, to focus on that element of legal personhood that is concerned with wrongdoing. Since no act should be criminalized unless it is wrongful (this thesis is known as ‘negative legal moralism’ and enjoys widespread support among legal theorists), whereas the civil law may permissibly concern itself with non-wrongful harms, the implications of accepting that an entity’s moral status ought to bear on its legal status would be strongest in the case of the criminal law.
Third, I assume here that, broadly speaking, there are just three ways in which the facts as to what moral status an animal has could be connected to the facts as to what legal status it morally ought to be granted.5 (The distinction between the first two relies on the distinction between treating an entity unjustly and wronging it. I assume here that unjust acts constitute a subclass of wrongings—namely, the particularly morally egregious ones.)

• The Strong Connection: The fact that it would wrong an animal to φ is a moral justification for criminalizing φ-ing.
• The Moderate Connection: The fact that it would constitute an injustice to an animal to φ is a moral justification for criminalizing φ-ing.
• The Weak Connection: The law’s having some feature—call it X—that would facilitate the wronging of an animal makes it morally obligatory for law-makers to amend the law so that it no longer has feature X.


The position I defend in this chapter—and the implications of which I partially lay out—is that the Weak Connection holds but the Strong Connection and the Moderate Connection do not.

2.  The Strong Connection

I begin by discussing the Strong Connection. If it holds then this is surely because the Strong Connection thesis is an instance of a more general thesis connecting the law and morality. (I cannot imagine why the connection between law and morality should be stronger in the case of animals than in the case of other entities.) One way to formulate the idea that there is a strong connection between morality and the law is as the legal moralism thesis, with ‘legal moralism’ usually defined as the idea that whether some conduct should be criminalized depends on whether it is morally impermissible. Duff (2018, pp. 55–8, e.g.) has helpfully disambiguated two theses that legal moralism, so construed, runs together. One is the already mentioned negative legal moralism, the claim that an act’s being morally permissible makes it morally impermissible to criminalize that act. The second is positive legal moralism, the strong version of which, as defended by Michael Moore (2010, p. 646), runs as follows:

Strong Positive Legal Moralism (SPLM): We are justified in criminalizing some conduct if and because it is morally impermissible.

There is a very simple, compelling argument for SPLM. Its key premise, which itself is simple and compelling, is: the fact that one can address a morally impermissible action by φ-ing is a justification for φ-ing. If this premise is true, then the SPLM-ist simply needs it to be the case that we can address morally impermissible actions by criminalizing them. This further premise certainly is true, as I mean ‘address’ as a generic term covering all the various things that the criminal law might be thought useful for vis-à-vis morally impermissible actions—e.g., preventing them, condemning them, exacting retribution for them, etc. Here, then, is the simple, compelling argument, which is a generalized version of the argument of chapter 16 of Moore’s (2010) Placing Blame:

1. The fact that one can address a morally impermissible action by φ-ing is a justification for φ-ing.


2. We can address morally impermissible actions by criminalizing them.
3. Therefore, the moral impermissibility of an action is a justification for criminalizing it. (SPLM)

I offer an objection to this argument in section 4.

3.  The Moderate Connection

As an alternative to the Strong Connection, one might posit that for some acts that wrong an animal their wronging that animal is a justification for criminalizing them. This brings us to the popular idea that the criminal law should be used to secure justice. If this idea holds and it can be established that animals can be victims of injustice, then we have a connection between a fact about animals’ moral status (i.e., their being potential victims of injustice) and what their legal status ought to be—specifically, the Moderate Connection will hold.

Martha Nussbaum, Alasdair Cochrane, and Robert Garner have each argued for the Moderate Connection. They each begin by arguing that there is some central moral concept that can be sensibly applied to animals and is a matter of justice. For Nussbaum, it’s the concept of flourishing that does the work; her contention is that all animals have the capacity to flourish.6 Whether any particular individual does flourish, according to Nussbaum, depends on whether she has certain capabilities, and it is a matter of justice for each individual that she have a sufficient level of each of the capabilities (2006, pp. 74–5). Cochrane (2012, ch. 2; 2018, ch. 2), meanwhile, argues that animals have rights and that justice is concerned with the upholding of rights (2012, p. 13). As for Garner (2013, pp. 21–2), he holds that animals can be oppressed and can be benefited and burdened, and that oppression and certain distributions of burdens/benefits are unjust.

The remaining question is whether facts about justice, as it is construed by any or all of these three theorists, have a bearing on what legal status individuals ought to be accorded. Of course, it is a natural thought that it is the business of the state—and, specifically, its criminal law—to secure justice, but the fact that the thought is natural doesn’t undermine the need to argue for it.
In what follows I examine what these three theorists have done by way of arguing for this natural thought. As to Nussbaum, she makes no attempt. She does, admittedly, say that ‘[t]he purpose of social cooperation . . . ought to be to live decently together in


a world in which many species try to flourish’ (2004, p. 307), which, given Nussbaum’s view (summarized above) as to the connection between flourishing, capabilities, and justice, amounts to coming close to claiming that it is the state’s business to secure justice. Notice, however, that Nussbaum’s claim here is about what the purpose of social cooperation ‘ought to be’. If it were instead a claim about what the purpose of social cooperation is, then it would certainly be relevant to the question of what the state morally ought to do (by way of according legal status, and indeed by way of doing anything else). But Nussbaum does not explain how the fact as to what the state’s purpose ought to be explains the facts as to what the state morally ought to do. One would think, in fact, that if anything the explanatory relationship would run in the reverse direction.

As to Cochrane, in his book Animal Rights without Liberation he contends that a theory of justice is a theory of a certain part of interpersonal morality—namely, that part of interpersonal morality with which people can be legitimately coerced to comply (2012, pp. 13–14). The question that has to be asked of anyone who endorses this Millian conception of justice is this: is this alleged connection between justice and coercion a reductive definition of ‘justice’ or, instead, a substantive truth about justice? If the former, then there is no philosophical project that merits the moniker ‘developing a theory of justice’. The way to discover the demands of justice would be, instead, to develop a theory of legitimate coercion, starting with our intuitions about legitimate coercion and building on them using the method of reflective equilibrium. Our intuitions as to whether (and if so how) animals can be victims of injustice, and the theories that could be developed out of these intuitions using the method of reflective equilibrium, would be irrelevant.
If, on the other hand, the connection between justice and coercion is supposed to be a substantive truth, then Cochrane needs to give us an argument for that truth. And for that purpose it would be helpful to know something about his theory as to the demands of justice. In his later book, Sentientist Politics, Cochrane maintains that justice (or ‘minimal justice’, as he sometimes says) encompasses the demands of rights and of equal consideration of interests. So when he claims that ‘moral agents have a basic duty to create and support a political order which aims to do two things: show equal consideration to sentient creatures; and protect their basic rights’ (2018, pp. 30–1) he is in effect claiming that moral agents have a duty to create and support a political order that upholds (minimal) justice. As to why that’s the case, Cochrane’s answer, for which he argues, is that ‘without such political institutions, equal consideration and the protection of rights will be unmanageable,


insecure, and lack determinacy’ (2018, p. 31). The implicit premise behind this is, of course, that we have an obligation to promote the manageability, security, and determinacy of equal consideration and the protection of rights.

Suppose moral agents do indeed have a duty to create and support a political order that upholds (minimal) justice. Does that imply that those whose role it is to shape the criminal law are obligated to shape it so that it upholds (minimal) justice? Not at all. At best it implies that moral agents are obligated to create an institution (i.e., a state) wherein those whose role it is to shape the institution’s criminal law, i.e., legislators, are obligated by that role to shape it so that it upholds (minimal) justice. But this doesn’t count one whit in favour of the claim that moral agents have created such an institution, and Cochrane doesn’t take up the question whether they have. At best Cochrane has established that it would be a morally better world if this were part of the role morality of the legislator.

As to Garner, the passage in his works that is key to understanding his view as to the state’s obligation to secure justice is this one:

[B]ecause the claims of justice are regarded as so pressing, the obligation to act so as to avoid injustice falls most often on the state or other political authority . . . This is not to say that acts of injustice cannot be perpetrated by individuals or collective entities such as corporations, but that it is political institutions that are best placed to alter these injustices.  (2013, p. 48)

Here Garner is grappling with the existence of two different ways of thinking of the property of justice—i.e. as a property of actions or as a property of states of affairs (not that it couldn’t be both). He clearly wants to allow the property to attach to actions, hence his reference to ‘acts of injustice’, but because of this he struggles to say something clear and coherent about why the state should take justice to be its concern. The expression ‘the obligation to act so as to avoid injustice’ is ambiguous, as it could mean ‘the obligation to avoid acting unjustly’ or ‘the obligation to avert [states of affairs that constitute] injustice’, leaving one not knowing what to make of the purported fact that that obligation ‘falls most often on the state or other political authority’. If Garner means to say that the state is obligated to prevent the instantiation of states of affairs that are unjust then he is making a bold, unargued leap from the fact that justice is of central moral importance. Meanwhile, the idea of altering injustices is obscure. The sentence in which the word ‘altering’ appears begins with ‘acts of injustice’ as its subject, but it’s not clear what it would mean to alter an act of injustice. On the other hand, it’s clear what it


would mean to alter the unjust states of affairs that acts of injustice can bring about, but again there’s some philosophical distance to cover to get from the idea that justice is of central moral importance to the claim that the state has an obligation to clean up the mess that acts of injustice create.

I conclude that neither Nussbaum, nor Cochrane, nor Garner has established that the state is obligated to secure justice, much less that it is obligated to use its criminal law to do so. Thus, their writings offer no help in the effort to establish that the Moderate Connection holds. Of course, nothing I have said in this section amounts to an argument that the Moderate Connection does not hold. And, in fact, one can alter the simple, compelling argument for SPLM in the service of arguing for (what I’ll call) Moderate Positive Legal Moralism (MPLM). The argument would run as follows:

1. The fact that one can address an unjust action by φ-ing is a justification for φ-ing.
2. We can address unjust actions by criminalizing them.
3. Therefore, the injustice of an action is a justification for criminalizing it. (MPLM)

The conclusion of this argument entails the truth of the Moderate Connection. I offer an objection to the argument in section 4.

4.  An Objection to the Strong Connection and the Moderate Connection

The indisputable truth of premise 2 in the argument for SPLM and in the argument for MPLM means that all hopes of resisting the Strong Connection and the Moderate Connection rest on finding a flaw in the first premise of each argument. My objection to that premise rests on the idea that since legislating is an exercise of the agency that comes along with occupying an office (the office of legislator), legislators qua legislators can lack reasons they otherwise would have had.7 Of course, this idea needs defending, not least because Moore (1989, pp. 872–3; 2010, p. 659) has argued against it. My defence of it begins by noting that we need a theory of abuse of office, and I argue that the best theory available says that one abuses one’s office when, and only when, one uses the associated agency not in the service of discharging the role obligations incumbent on holders of that office. So, for


instance, if one uses one’s status as a government bureaucrat to take kickbacks in exchange for awarding public contracts, one abuses one’s office. One immediate problem with this theory, however, is that it implies that it is wrong to do what one has strong reasons to do. Certainly the bureaucrat has strong reasons of self-interest to line her pockets. So we need to posit that being an office-holder can make it the case that one lacks a reason to use one’s powers of office to act for certain ends. My official statement of this idea, which I call the ‘reasons-blocking thesis’, is as follows: In virtue of the fact that some measure of agency is attached to an office, the possessor of that agency (i.e., the office holder) can lack a reason to use it to φ even though φ-ing is something she has a reason to do and she could use that role agency to φ. The positive corollary of this negative thesis is that the ends an office-holder possesses that stem from the role obligations attached to that office ground reasons relevant to the exercise of her powers of office. So, to return to the bureaucrat, her goal of becoming wealthy does ground reasons for her; for instance, it grounds reasons for her to make shrewd investments or to take a second job, as neither of those actions is an exercise of her powers of office. But it does not ground any reasons relevant to her exercise of her bureaucrat powers.

The main selling point of the reasons-blocking thesis is that it is a necessary element of a plausible theory of abuse of office. What remains is to demonstrate that there is no plausible theory of abuse of office that doesn’t include this thesis. This raises the important question: What would a theory of abuse of office look like if it affirmed that all of one’s ends, not just those arising from the obligations attached to the office, grounded reasons relevant to one’s exercise of one’s powers of office?
I contend that any such theory would have two flaws. First, it wouldn’t be able to vindicate our sense that role-bearers are sometimes justified in setting aside reasons that are clearly in some sense relevant to their decision. That we have such a sense is evidenced most straightforwardly by how we acknowledge, albeit generally resentfully, that there is some sense to the ever-frustrating responses we often receive from mid-level office-holders—responses along the lines of ‘That’s not my problem’, or ‘Take it up with management’, or (more sensitively) ‘I’d love to help, but my hands are tied’—when we beg them to act contrary to the strictures of their role. The police officer who finishes writing you a parking ticket even though you’re offering her an excellent justification for having parked your car where you did; the airline employee who won’t reopen the gate and allow you to board the plane even though she closed it only half a minute ago and the plane hasn’t


moved; the cashier who won’t sell you alcohol because you don’t have your identification even though you clearly look older than the minimum age—you get the idea. We don’t like hearing these things, but we understand them and we grudgingly accept them. The reasons-blocking thesis explains why we’re right to offer this acceptance and why our grudgingness has merit: it’s because the considerations with which we’re trying to persuade these office-holders have no normative pull on those agents relevant to how they ought to exercise their powers of office though they do have normative pull on those agents full stop.

Second, it would have to say that abusing one’s office is equivalent to engaging in a certain kind of incorrect deliberation about one’s reasons. But this assertion is incompatible with the idea that an abuse of office is a wrong committed against those who sustain the institution within which the office is situated. The wrong of incorrectly weighing reasons is, as I’ve argued elsewhere (Sachs 2018: ch. 4), a wrong without a vector; it’s not a way of wronging someone, even if some of the reasons in question are grounded in people’s well-being, desires, etc. And surely we think abuse of office is a way of wronging somebody. The American Congresswoman who sells her vote for campaign contributions wrongs the American people (and no one else).

I have argued in this section for the reasons-blocking thesis by arguing for the theory of abuse of office of which it is an integral part. If the reasons-blocking thesis is true then the door is open to rejecting premise 1 in the argument for SPLM and in the argument for MPLM, though much more would have to be said to establish that those premises actually are false.8 That, anyway, is my roadmap for rejecting the Strong Connection and the Moderate Connection. With that laid out, I move on to discussing the Weak Connection.

5.  The Weak Connection

The Weak Connection, simplifying a bit, is the claim that for any feature of the law such that its instantiation facilitates the wronging of animals, we are morally required to eliminate that feature. The idea that failure to do so would be wrong follows from the very plausible moral generalization that one wrongs an individual by facilitating someone else’s wronging of that individual. In effect, then, the Weak Connection is the claim that we are morally barred from wronging animals through the law. The reason I call this a ‘Weak Connection’ is that it doesn’t, contrary to the Strong and Moderate Connections, commit us to using the law to address—that is, prevent,


condemn, exact retribution for, etc.—any wronging of animals. Moral requirements of the sort, ‘Don’t wrong X’, are more basic than requirements of the sort, ‘Address the wronging of X (in certain ways)’ and surely, with respect to any X, one is bound by the former requirement if one is bound by the latter. Therefore, we ascribe a weaker legal status to animals when we say that we shouldn’t use the law to wrong them than when we say that we should use the law to address the wronging of them. Here are a few implications of the Weak Connection.

1.  Consider the following case: Suppose X, a child, is being abused by her father, and a concerned stranger tries to abscond with X as a way of saving her from her father’s abuse. It would be impermissible, surely, for another individual, Y, to prevent the stranger doing this or, if the stranger successfully does it, take X away from the stranger and return her to her father. I submit that the same holds for animals being treated cruelly by those who control them (factory farmers, for instance). If someone tries to sneak on to a factory farm under cover of darkness to take all the animals away to an animal sanctuary, it would be impermissible for some other person to prevent this action or, if it is successfully carried out, to take those animals from the sanctuary back to the factory farm.

Now, given the Weak Connection, it would be impermissible for the law to do either of those things. But that’s exactly what legal systems in the Anglo-American tradition would do as things stand. To explain: It is generally agreed that ownership is a set of legal relations,9 one of which is possession, such that the owner gets to keep the owned object in a certain place. If X is the object of Z’s possession then not only can X by right be kept in place by Z, but also if some third party, Y, tries to remove X from that place or successfully does so, the law will step in to prevent or reverse as appropriate.
My claim is that each animal should have a qualified immunity to the incident of possession; the law should make it such that they can be an object of that incident only when the subject of the incident promotes the animal’s interests to a sufficient extent, just like a parent’s rightful possession of her child is so contingent.10 In the case of non-domesticated animals, total immunity should be the law’s default, as the law’s restoring possession of a non-domesticated animal to its possessor will usually qualify as facilitating the wronging of that animal, because it is in the nature of being a non-domesticated animal that, under normal circumstances, being under the physical control of a human is harmful to that animal. In the case of domesticated animals there should be no such


default, since being possessed by someone who treats them beneficently is usually better than being left to their own devices, and therefore restoring possession of a domesticated animal to its possessor will only sometimes qualify as facilitating wrongdoing. In other words, a form of guardianship should be the default. However, attempted or successful abductions of sentient domesticated animals from their guardians should be neither prevented nor reversed when the guardian has been mistreating the animal.

2.  Suppose X and Y sign a contract, whereby X gives Y money in exchange for Y providing X with a number of captive women that X will use in his sex slavery business, and suppose further that Y lives up to his part of the bargain but X doesn’t pay up. Clearly it would be impermissible for some third party to exert his influence to pressure X into handing over the money. Likewise, the law should not, and would not, enforce that contract.11 This is an implication of the Weak Connection. Therefore, I submit, the law should decline to enforce contracts whereby factory farms conduct their business, and included in this is a refusal to enforce contracts whereby factory farms sell their animals or animal flesh. This latter refusal would amount to conferring on sentient animals qualified immunity to another incident of ownership—the incident in question this time being what Honoré (1961) called the ‘right to the capital’.

In the foregoing two examples I relied on the idea that the basic legal underpinnings of any successful market, such as the enforcement of contracts and property relations, are causes of the success of those markets. This makes the law complicit in the egregious wrong that is the factory farm, and makes the changes I’ve recommended above morally required.12
As to why, one right answer—there may be several—is that X has facilitated the committing of a grievous wrong. Given the Weak Connection, the same holds for laws allocating government funding to cruel practices, such as the conducting of painful medical research on animals and the confining of animals in zoos.13 Therefore the state is morally prohibited from enacting such laws.

226  Benjamin Sachs

In this section I've explored the implications of the Weak Connection. What I have not yet done, however, is say whether endorsing the Weak Connection is tantamount to holding that we morally ought to confer legal personhood on animals. To make progress on that question we first need to delve into the theory of legal personhood. There are two questions here—what is legal personhood, and on what criteria should it be conferred or withheld. My view is that all of the extant answers to the second question that one can find in the literature are inadequate;14 this is why I have relied exclusively on my own reasoning in this chapter thus far. As to what legal personhood is, I haven't given an answer, but have assumed that it includes the possession of legal rights, the status of non-ownability, and legal standing.

One has a legal right, presumably, in virtue of the law demanding that one be treated a certain way. There is a consensus, however, that the law's demanding that X be treated a certain way counts as X being the subject of a legal right only if the demand was enacted for X's sake (Sunstein 2000; Favre 2005; Cochrane 2009; Francione 1994 and 1995; Pietrzykowski 2017). This strikes me as a sensible stipulation, and one that can be generalized. The generalization I have in mind, and would endorse, is this: X's legal rights, non-ownability, and legal standing contribute to X's possessing legal personhood only if they were conferred on X for X's sake. This being the case, one could say that my arguing that domestic animals should, for their own sake, have a qualified immunity to being an object of the incident of possession, amounts to arguing for conferring some measure of legal personhood on them. But nothing of philosophical interest hinges on whether we decide to put it this way.15

6.  Conclusion

In this chapter I have laid out three implications of the idea that the Weak Connection between moral status and legal status holds—implications which, if enacted, would together constitute a revolution in the law's treatment of animals and would (I suspect) redound to their almost incalculable benefit. I've also offered in this chapter reasons for doubting that there is any more than the Weak Connection between an individual's moral status and the legal status that the individual morally ought to be granted, but I haven't explored the implications of this. I acknowledge, though, that it opens the door to changes in the law that would be to the detriment of animals. For instance, it suggests the moral permissibility, though not the moral obligatoriness, of repealing animal cruelty laws. This is no doubt a highly counterintuitive implication, and thus counts heavily against accepting that nothing more than the Weak Connection holds. Whether it counts heavily enough is a discussion for another day.16


Notes

1. The original order can be found at (accessed 21 May 2020); it was later amended, and the amended version can be found at (accessed 21 May 2020). For a more thorough list of recent judicial developments along these lines, see Vayr (2017, pp. 819–21).
2. See Judge Jaffe's decision, p. 2, at (accessed 21 May 2020).
3. For a list of the most significant developments, see Fitzgerald (2015, pp. 350–3).
4. I am denying that there are just two legal statuses, person or thing. Granted, the consensus among legal scholars (e.g., Hall and Waters 2000, p. 3; Francione 2008, pp. 61–2; Pietrzykowski 2017, pp. 51–2) is in favour of this binary, but Kurki (2017, pp. 94–5; 2019, ch. 2) has shown convincingly that the consensus is mistaken.
5. As can be inferred from the way I've expressed the issue here, I assume that there are mind-independent facts as to what moral status any entity has, but only conventional facts—and, hence, facts that are under our control—as to whether any given entity is a legal person. The former assumption will presumably be rejected by moral antirealists and sceptics of various stripes, but it would be a distraction to argue against them here. The latter assumption will be accepted by positivists and almost all of their extant opponents; only old school natural law theorists will reject it.
6. Well, almost all. Nussbaum (2006, p. 187; 2011, p. 31) says that there are some sentient humans, including vegetative humans, who cannot flourish; presumably by parity of reasoning she would concede that there are also some animals that cannot flourish.
7. The possibility of objecting to premise 1 this way has been noticed by others; see Edwards (2016, p. 142), Gardner (2007, p. 202), and Dempsey (2011, p. 256). None of these theorists, it should be mentioned, actually endorses the idea of objecting to premise 1 this way. Edwards (2016, p. 142) stays neutral on it, Gardner (2007, p. 277) eventually rejects it, while Dempsey (2011) ends up endorsing the more modest position that we all have the same reasons but sometimes one has reasons one ought not to act upon. For discussion of Gardner and Dempsey on this point, see Tadros (2016, p. 122).
8. The full version of the argument against premise 1 can be found in Sachs (unpublished).
9. This is known, variously, as the idea that there are several 'incidents' of ownership (Honoré 1961), or as the 'bundle of sticks' theory of ownership.
10. Although we don't call children 'property', it is nevertheless true that they are objects of some of the incidents of ownership, including possession, as noted by Cochrane (2009, pp. 434–42). The position on animal possession I adopt here is inspired by, and broadly in line with, Cochrane's position.
11. The US Supreme Court has ruled (Shelley v. Kraemer, 334 U.S. 1 (1948)) that a state must not enforce a contract if so doing would violate the Equal Protection Clause of the 14th Amendment. This is not equivalent to declaring that immoral contracts must not be enforced, but it does take us part of the way to that conclusion by acknowledging that a state is permitted to exercise discretion over its use of its powers of contract enforcement. Shiffrin (2005, pp. 221–30) has argued that it is legitimate for the state to decline to enforce a contract on grounds of its content being immoral. She was focused specifically on contracts that are immoral because unconscionable (i.e., unfair, exploitative, etc.), but her underlying principle applies more broadly. In fact, it applies more straightforwardly, I would think, to the case of a contract that is immoral because of what it does to a third party. The underlying principle is the one endorsed in the Supreme Court case: the state does not have an exceptionless obligation to facilitate consensual transactions.
12. One might argue that the implicit principle appealed to here—that the state is morally obligated to refrain from facilitating immoral actions—is much too strict. Bank robbers speed away on state-funded roads as they make their getaway; Ponzi schemers make use of their state-funded education in mathematics; etc. This objection is obviously sound, but (just as obviously) there must be a narrower principle that can do the trick for us. This narrower principle will appeal to some idea of foreseeability, or to a more stringent notion of causality than the notion to which I appealed earlier, or to a balancing of costs and benefits, or something like that. But clearly there is some valid principle prohibiting the state from abetting moral wrongdoing; we need one, for instance, to explain why it would be wrong for the state to sell arms to the Taliban.
13. This doesn't always wrong the animal—it is possible for a sentient animal to live a perfectly good life in a zoo—but given how zoos actually are it usually does. As to the wrongness of conducting painful medical experiments on animals, I argue for that conclusion in Sachs (2018, ch. 8).
14. I explain why in Sachs (forthcoming, Appendix).
15. Similarly, I've argued elsewhere (Sachs 2011) that there is no philosophical thesis that can be expressed in the language of 'moral status' that cannot be expressed equally or more clearly in other language.
16. I expand upon the arguments of this chapter in Sachs (forthcoming, chs. 4–7).

References

Cochrane, Alasdair. 2009. 'Ownership and Justice for Animals'. Utilitas 21: 424–42.
Cochrane, Alasdair. 2012. Animal Rights without Liberation. New York: Columbia University Press.
Cochrane, Alasdair. 2018. Sentientist Politics. Oxford: Oxford University Press.
Dempsey, Michelle Madden. 2011. 'Public Wrongs and the "Criminal Law's Business": When Victims Won't Share'. In Rowan Cruft, Matthew H. Kramer, and Mark R. Reiff, eds, Crime, Punishment, and Responsibility: The Jurisprudence of Antony Duff, pp. 254–72. Oxford: Oxford University Press.
Duff, R. A. 2018. The Realm of Criminal Law. Oxford: Oxford University Press.
Edwards, James. 2016. 'Master Principles of Criminalisation'. Jurisprudence 7: 138–48.
Favre, David. 2005. 'A New Property Status for Animals'. In Cass R. Sunstein and Martha C. Nussbaum, eds, Animal Rights: Current Debates and New Directions, pp. 234–46. New York: Oxford University Press.
Fitzgerald, Emily A. 2015. '[Ape]rsonhood'. The Review of Litigation 34: 337–78.
Francione, Gary. 1994. 'Animals, Property and Legal Welfarism: "Unnecessary" Suffering and the "Humane" Treatment of Animals'. Rutgers Law Review 46: 721–70.
Francione, Gary. 1995. Animals, Property, and the Law. Philadelphia: Temple University Press.
Francione, Gary. 2008. Animals as Persons. New York: Columbia University Press.
Gardner, John. 2007. Offences and Defences. New York: Oxford University Press.
Garner, Robert. 2013. A Theory of Justice for Animals. New York: Oxford University Press.
Hall, Lee and Anthony Jon Waters. 2000. 'From Property to Person: The Case of Evelyn Hart'. Seton Hall Constitutional Law Journal 11: 1–68.
Honoré, A. M. 1961. 'Ownership'. In A. Guest, ed., Oxford Essays in Jurisprudence, pp. 107–47. London: Oxford University Press.
Kurki, Visa A. J. 2017. 'Why Things can Hold Rights: Reconceptualizing the Legal Person'. In Visa A. J. Kurki and Tomasz Pietrzykowski, eds, Legal Personhood: Animals, Artificial Intelligence and the Unborn, pp. 69–89. Cham, Switzerland: Springer.
Kurki, Visa A. J. 2019. A Theory of Legal Personhood. Oxford: Oxford University Press.
Moore, Michael S. 1989. 'Authority, Law, and Razian Reasons'. Southern California Law Review 62: 827–96.
Moore, Michael S. 2010. Placing Blame. New York: Oxford University Press.
Nussbaum, Martha C. 2004. 'Beyond "Compassion and Humanity"'. In Cass R. Sunstein and Martha C. Nussbaum, eds, Animal Rights: Current Debates and New Directions, pp. 299–320. New York: Oxford University Press.
Nussbaum, Martha C. 2006. Frontiers of Justice. Cambridge, Mass.: Harvard University Press.
Nussbaum, Martha C. 2011. Creating Capabilities. Cambridge, Mass.: The Belknap Press.
Pietrzykowski, Tomasz. 2017. 'The Idea of Non-personal Subjects of Law'. In Visa A. J. Kurki and Tomasz Pietrzykowski, eds, Legal Personhood: Animals, Artificial Intelligence and the Unborn, pp. 49–67. Cham, Switzerland: Springer.
Sachs, Benjamin. Unpublished. Contractarianism as a Political Morality.
Sachs, Benjamin. 2011. 'The Status of Moral Status'. Pacific Philosophical Quarterly 92: 87–104.
Sachs, Benjamin. 2018. Explaining Right and Wrong: A New Moral Pluralism and its Implications. New York: Routledge.
Sachs, Benjamin. Forthcoming. Contractarianism, Role Obligations, and Political Morality. New York: Routledge.
Shiffrin, Seana Valentine. 2005. 'Paternalism, Unconscionability Doctrine, and Accommodation'. Philosophy & Public Affairs 29: 205–50.
Sunstein, Cass R. 2000. 'Standing for Animals (with Notes on Animal Rights)'. UCLA Law Review 47: 1333–68.
Tadros, Victor. 2016. Wrongs and Crimes. Oxford: Oxford University Press.
Vayr, Bryan. 2017. 'Of Chimps and Men: Animal Welfare vs. Animal Rights and how Losing the Legal Battle may Win the Political War for Endangered Species'. Illinois Law Review: 817–75.


14
How the Moral Community Evolves
Rachell Powell, Irina Mikhalevich, and Allen Buchanan

For the first 3.5 billion years of the roughly 4.0 billion-year history of life on Earth, nothing mattered to anything. The Earth was vibrantly alive, but the biosphere was bereft of meaning and moral value. It was the evolution of welfare—the property of having a life that matters for its own sake—that gave birth to a morally meaningful world. In this chapter, we explore the evolutionary origins of psychological normativity and its connections to moral value.

In section 1, we argue that human reasoning should not be taken as the defining standard of normativity, and we gesture at what a more biologically inclusive conception might look like. We describe the normative psychological capacities that configured the first ends-in-themselves on Earth and explain how these abilities for self-regard were transformed into the other-oriented forms of valuing that are present in socially complex animals. In some of these lineages, individual flourishing became tied to the wellbeing of offspring, mates, and conspecifics, providing the building blocks of other-regard from which human moral minds would be forged.

In section 2, we show that the same adaptive cognitive processes that enable humans to make moral judgments and to recognize intrinsic value in the world also distort reasoning about moral standing and moral status (collectively, "MSS"), resulting in unduly restrictive conceptions of the moral community. We illustrate these biases, and reconnect to the evolutionary narrative laid out in section 1, by drawing attention to a current deficiency of the animal ethics landscape: its treatment of invertebrates. Our aim is to paint a broad-strokes picture of how the moral community evolves.

Rachell Powell, Irina Mikhalevich, and Allen Buchanan, How the Moral Community Evolves In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Rachell Powell, Irina Mikhalevich, and Allen Buchanan 2021. DOI: 10.1093/oso/9780192894076.003.0014


1.  The Natural History of Normativity

1.1  The moral mismeasure of man

What are the defining features of a normative mind? One way of answering this question is to develop an account of normativity that conforms to common intuitions about normative thought and behavior in humans. Although there are many fruitful ways of thinking about normative minds, most scientific theorists of morality have followed the lead of traditional moral philosophers in taking Homo sapiens as the paradigmatic moral specimen. They assume that moral minds are configured by extremely sophisticated cognitive-affective capacities that underwrite moral judgment. These include mechanisms that underlie moral emotions, norm acquisition, metacognition, reason, language, innate moral grammars (Mikhail 2007), "moral foundations" that emanate from a distinct set of moral modules (Graham et al. 2013), and other higher-order faculties that underpin the uniquely normative stance that humans take toward the world. Since no taxon apart from Homo is capable of making moral judgments properly conceived (Joyce 2007), or of being governed by conceptions of what one ought to do or has strong reasons to do, humans are crowned the sole moral animal ever to have existed on Earth.

Even attempts to discern the evolutionary roots of morality in nonhuman animals have taken human moral cognition as their conceptual starting point—the gold standard against which the "normativeness" of animal thought and behavior should be measured. Research in cognitive ethology and comparative psychology has thus sought evidence of sympathy, consolation, altruism, friendship, grief, a sense of fairness, and other supposedly unique features of human morality in other social animals (Andrews 2020). Although these studies have blurred the human–animal boundary, they have at the same time reinforced an anthropocentric notion of normativity.
There is certainly great scientific value in highlighting the unique forms of normativity exhibited by human beings. Morality is key to explaining the astounding levels of cooperation that propelled Homo to ecological preeminence (Tomasello 2009; Kumar and Campbell in press). However, we think there is significant theoretical value in developing and engaging with a more biologically expansive notion of normativity from which evaluative capacities in humans, and proto-moral capacities in other animals, were constructed.

A useful model in this regard is the "minimal cognition" paradigm, which offers an antidote to the anthropocentric framework that has long dominated the sciences of mind. Cognition is traditionally associated with concept-formation, problem solving, metacognition, and other "higher-order" capacities underwritten by internal representations. This results in a stark divide between human-like cognizing animals, on the one hand, and the rest of the living world, on the other. Minimal cognition research bridges this gap by taking cognition to consist in a combination of perception, memory, and action that can be found not only in animals without brains, but also in plants, unicellular eukaryotes, and even bacteria (Lyon 2006; van Duijn et al. 2006).

In this chapter, we will not advocate a comparably capacious theory of normativity, because we think that both value and valuing have necessary connections to psychology. Nevertheless, our view is much more inclusive than some traditional accounts, such as those closely allied with the philosophy of Immanuel Kant, which hold that the "top-shelf" normativity inherent to rationality is the only property that makes a being an "end-in-itself." This deviation from tradition is not in itself a controversial move. Most Utilitarians and even many contemporary Kantians would agree that a being need not have higher-order cognition like reason, language, or self-consciousness, or be a member of a species that normally exhibits these traits, in order to have moral standing or to be part of the moral community (Korsgaard 2018; Cholbi 2014). Our goal is to make the case that normativity is an evolutionarily ancient, psychologically foundational, and pre-linguistic phenomenon that configures ends-in-themselves. As we understand it, to be an end-in-itself is to have a welfare, or a life that can go well or poorly for the subject of experience. What are the minimal conditions for being an end-in-itself, and how did these welfare structures arise over the history of life on Earth to produce different forms of value and valuing?
These are grand philosophical questions that deserve a book-length treatment; the story that follows is intended merely as an adumbration.

1.2  The evolution of ends-in-themselves

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

234  Rachell Powell, Irina Mikhalevich, and Allen Buchanan each of these cases, welfares evolved in connection with a suite of traits that include sophisticated image-­formation and its neural-­cognitive processing accoutrements, as well as increasingly able bodies that allowed visual lineages to capitalize on this influx of new information. Linking up visual perception with action in these ways required information processing and integration centers—or brains—capable of generating complex representations of objects and environments that can be made available for navigation, foraging, and learning tasks. Even the most sophisticated forms of olfaction and audition do not support the kind of three-­dimensional, object-­structured scene perception achieved by the visual sense. But harnessing light is not the only way that organisms can “see” their surrounding world. Natural selection has produced image-­ forming organs representationally analogous to vision by tapping into two other waveform energies: sound (echolocation in bats and whales) and electromagnetic fields (electrolocation in several groups of freshwater fish). On Earth, however, these alternative image-­forming modalities co-­opted pre-­existing centers of spatial cognition that evolved long ago in connection with vision. We will refer to the image-­ forming organ/brain/mind/active-­body trait cluster that repeatedly gave rise to ends-­in-­themselves as the welfare platform. Evolutionary iterations of the welfare platform suggest that it is a natural kind that figures in thus-­far-­ unarticulated laws that govern the evolution of sentient life. The basic welfare platform has been retained since the early phases of animal evolution, though as we shall see it has been elaborated on in morally significant ways. With welfare platforms came revolutionary agentic capacities. 
Brainy animals came to rely less on rigid instinct and more on flexible learning strategies that enabled them to tap into dynamic causal connections in the world. A dramatic increase in the amount and variety of information that organisms could process, package, and act upon permitted more sophisticated goal-directed behavior, giving rise in some animal lineages not only to the capacity to flourish, but also to the ability to more effectively bring about the conditions for their own flourishing.

Before the Cambrian Explosion, animal ecosystems were a two-dimensional world concentrated around microbial mats and dominated by sluggish, brainless filter-feeding communities known as the Ediacaran fauna. With the origins of the welfare platform, animal ecology began to resolve in three dimensions. Thanks to the evolution of normative minds, the listlessness of the primeval "Garden of Ediacara" was lost, and the eon of active predation and counter-predation had begun (McMenamin 1998). The very first welfare platform arose in arthropods (a diverse animal group that includes insects, crustaceans, arachnids, and extinct trilobites), who were the apex predators of the Cambrian seas. Vertebrates quickly followed suit, possibly in competition with or as a defensive measure against arthropod predation. Mollusks were next in line, with welfare platforms arising in ammonoid and belemnoid cephalopods, the latter being the predecessors of modern-day coleoids (octopus/squid/cuttlefish). Arthropods, mollusks, and vertebrates share a common ancestor that probably possessed a primitive nervous system, but was in all likelihood eyeless and brainless. Welfare platforms thus arose convergently in these groups (for a discussion, see Powell 2020, chapters 9–10). Yet even if one accepts that evolution has converged on important physical aspects of the welfare platform, such as eyes and brains, some may still want to resist the notion that brainy invertebrates have phenomenal consciousness, cognition, or other plausible mental preconditions for being an end-in-oneself. We will return to the question of invertebrate mind and its implications for moral standing in section 2.

Why think that normativity arose with the welfare platform, rather than with life itself? There is a sense in which all living things are subject to circumstances that can be good or bad for them qua organisms, insofar as these circumstances contribute to or detract from their self-maintenance, evolutionary fitness, or biological functioning. But events are not good or bad for bacteria, protists, and plants from their perspective. It is only with the emergence of the welfare platform that organisms began to experience states of affairs as good or bad, as pleasurable or painful, as things to approach or avoid. A critical ingredient in the welfare platform, therefore, was the attachment of affective valences to representations.
Without this valence component, sophisticated action (such as visual foraging) may be unachievable and, depending on one's theory of action, logically impossible. Moreover, in order to forge mental associations between stimuli—a critical component of behavioral flexibility—organisms must be capable of assigning positive and negative valences to objects and action sequences. These affective evaluations need not rise to the level of full-bodied emotional reactions (Damasio 1994, p. 155). Neither is it necessary that organisms be aware of their affective evaluations as such, nor that these evaluations entail propositional attitudes strictly conceived. Rather, a more minimal sense of normativity came along with the welfare platform, one that is akin to what philosopher Kristin Andrews (2020) has called "ought thoughts." It involves feeling the pull of goodness or rightness, and the repellence of badness or wrongness, in the dynamic fluctuations of the world around. Moral standing arose with valuing, and valuing arose with embodied experiences of valence.


Entities lacking these abilities can have only extrinsic (e.g., aesthetic, instrumental) value, and in an important sense, live lives that are devoid of meaning. Philosophical work on meaning has tended to focus on the referents of linguistic utterances, the content of mental representations, the nature of animal signaling, or the peculiar human need to make sense of its place in the cosmos. There is another respect, however, in which meaning can be said to derive from the welfare platform. A life that is capable of valuing in the pre-linguistic sense—of classifying some states of affairs as good or bad for oneself and acting accordingly—is a life for which aspects of the world have acquired meaning in a crucial and consequential way.

Meaning and value are both closely linked to theories of action. On traditional views, also influenced by Kant, agency requires rationality, critical reflection on one's desires, self-awareness, or similar higher-order capacities possessed by few or no other animals. In contrast, we take minimal agency to consist in the ability to act on and respond flexibly to meaningful perceptual experiences (cf. Purves and Delon 2018). For an organism to be configured for meaning in this sense, it must be capable of assigning valences to perceptually constructed states of the environment, which are experienced as or anticipated to be pleasurable or aversive. It is this capacity for meaning construction that produces loci of inherent moral worth. As Korsgaard (2004, p. 108) aptly puts it:

[Being an end-in-itself] is the only possible source of value in a world of facts . . . The reason there is such a thing as value in the world is that there are in the world beings who matter to themselves: who experience and pursue their own good. Were there no such beings, there would be no such thing as value. Were there no such beings, nothing would matter.

1.3  Flourishing in a social world

Morality is often associated with other-regarding concern, such as empathy and altruism. As we have seen, however, the emergence of normativity was first and foremost a revolution in self-regard—the value of finding food and mates, avoiding predators, navigating obstacles, staking out a nest, den, shelter, or territory, or keeping close to conspecifics. By self-regard we do not mean self-consciously projected value: most animals with welfares did not think of valuable actions as right—these actions simply felt right to the animal. Certain states of affairs were comforting, and others disturbing, depending on the naturally selected content of the welfare platform. Over deep time, some of these early welfare platforms were transformed through social cognitive evolution into forms of normativity that extended to reference points beyond the self.

It should not be assumed that other-regarding value is "richer" than self-regarding value or that it confers a higher moral status. There may be solitary animals that have limited or no interpersonal relationships with offspring and conspecifics and yet possess rich mental lives that give rise to interests that constrain the ways they may be treated. The octopus may be an example of such a creature. It is not obvious to us that such animals should be assigned a lower moral status than other critters who exhibit higher levels of social regard but inhabit less complex mental worlds, such as koalas or ruminant bovids.

Having said that, there is a direct relation between the evolution of other-regard and increases in psychological complexity of the sort that could confer greater moral worth. In vertebrates, for example, increases in relative brain size map on to the complexification of social life (Dunbar 2009). Evolutionary anthropologist Robin Dunbar has proposed that patterns of encephalization in primates reflect the cognitive upgrades necessary to navigate more convoluted social landscapes. Although Dunbar's "Social Brain Hypothesis" focuses on the complexity of societies, the cognition–sociality link probably has earlier roots in offspring care and pair bonding. Co-parenting is also associated with increased brain sizes, presumably because it requires careful coordination of foraging and guarding, which is achieved through elaborate courting rituals and enhanced attentional, communicative, and interpretative mechanisms.
Thus, even if other-­regard does not in itself give rise to a higher moral status, it is causally intertwined with other cognitive capacities that may do so. Since their ancient divergence from a lizard-­ like ancestor in the Carboniferous swamp forests more than 300 million years ago, birds and mammals have become convergently attuned to the flourishing of others around them, developing strong emotional attachments that regulate their social behaviors. The ability to empathize, or to mirror the affective states of others, is phylogenetically widespread in mammals and modulated by social relationships (Langford et al. 2006). Cognitively sophisticated forms of em­pathy have been documented in apes (de Waal 2008), ravens (Fraser and Bugnyar 2010), and elephants (Payne 1998), all of which have been observed consoling their friends after they have been injured or defeated in a fight. Rats forgo rewards to help other rats who have been trapped in tubes (Bartal et al. 2011) or soaked in water (Sato et al. 2015). Whales are reported to engage in heartwarming acts of altruism on behalf of group members, strangers, and

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

238  Rachell Powell, Irina Mikhalevich, and Allen Buchanan

even members of distantly related species. Like birds, many fish species pair bond and some partake in active, long-term parenting—though much less is known about the emotional mediation of social attachments in basal vertebrates. Invertebrates are not usually known for nurturing their offspring or making friends. But anyone not familiar with the biology of eusocial insects, such as ants, would be astonished to see the extraordinary level of care given to the brood in nurseries and the “heroic” efforts that ant foragers make to rescue their comrades in danger.

Whether any nonhuman animals are capable of taking the perspective of another individual with whom they empathize is unclear. What is evident is that humans appear to be the only animal capable of recognizing value in the world as such. This is not surprising, given that only humans have the socially constructed linguistic concepts of value, rightness, and wrongness. How did the moral sense arise in human evolution and what are the impediments to its proper exercise?

2.  Evolution of an Imperfect Moral Sense

Thus far, we have argued that the capacity to value was integral to the emergence of welfares. Welfares, in turn, were transformed over evolutionary time from a purely egocentric platform that enabled animals to prefer some states of affairs over others, into a limited capacity to recognize and respond to the wellbeing of offspring, kin, mates, and select conspecifics. While the normative psychologies that underpin these basic forms of valuing are shared across many animal groups, the ability to act on evaluative concepts is probably unique to human beings. In this section, we explain how the human capacity to detect and respond appropriately to the intrinsic value of ends-in-themselves evolved under conditions that placed severe constraints on its expression. In particular, we suggest that several adaptive biases skew MSS ascriptions, resulting in underinclusive conceptions of the moral community and systematic failures of the moral sense.

2.1  Becoming a moral species

The standard view in evolutionary psychology is that core components of human morality were shaped by natural selection in the trying times of the late Pleistocene for the function of solving coordination problems within hunter-gatherer groups. Core features of morality include moral emotions
(e.g., empathy, guilt, shame, indignation, and a sense of fairness) and the ability to acquire, teach, and enforce social norms, especially norms of fairness. Egalitarian moralities mitigated free-riding, inhibited selfish behavior, and prevented dominant individuals from monopolizing the spoils of cooperation—problems that were never robustly solved in hierarchical chimpanzee societies. Attributing equal basic moral status to members of one’s group (or at least to male members) was adaptive, so the logic goes, because it supported cooperative foraging and defense and curbed internal disruptions, permitting the astounding levels of cooperation that propelled humans to a peak position in global food web ecology.

Emphasizing the prosocial aspects of moral psychology obscures a darker side of human evolution. Whereas attributing equal moral status to (male) in-group members was adaptive, extending the same consideration to out-groups would often have been maladaptive in the competition between cultural groups, resulting in broadly xenophobic attitudes and tendencies. This picture is supported by a large body of psychological and anthropological research (see Buchanan and Powell 2018 for a detailed discussion). And yet, humans are far better at recognizing the moral force of ends-in-themselves than this evolutionary picture would suggest. It is difficult to see how the expansions of the moral circle that we see in recent human history, such as the human rights and animal welfare movements—both of which required recognizing the moral standing of out-groups—can be squared with the received evolutionary account. This tension evaporates if human moral psychology emanates not from a rigid tribalistic instinct, but from an adaptively plastic trait designed for both tribalistic and inclusivist moralities, depending on whether out-group threat cues are detected in the environment in which moral psychologies develop.
In environments in which out-group threat cues are diminished, tribalistic tendencies will tend to relax and more inclusive moralities can take root (Buchanan and Powell 2018; Buchanan 2020). Whether human moral psychology is rigid or plastic by design, there are several adaptive biases that, in the presence of threat cues, may trigger tribalistic tendencies and impede the recognition of ends-in-themselves.

2.2  Adaptive mechanisms that distort MSS ascription

2.2.1  Empathy

In nonhuman animals, empathy mediates prosocial behaviors such as sharing, helping, consolation, and rescuing. In humans, a cognitively sophisticated version of empathy that involves perspective-taking not only mediates
altruistic behaviors, but also guides judgments about moral standing—a fundamental, “inferentially rich” categorization from which nearly all meaningful moral consideration flows. And yet like morality, empathy too has a dark side, and for similar reasons. Empathy is meted out in a selective and morally arbitrary fashion toward individuals who look more familiar, are perceived as more attractive, have reciprocated in the past, or are classified as members of one’s in-group. Conversely, empathy is attenuated when others look foreign, are considered less attractive, have no potential for reciprocity, are “statistical” rather than “concrete,” or are classified as members of an out-group such as a disfavored race, ethnicity, nationality, or gender minority (Prinz 2011). In fact, empathy can actually exacerbate in-group/out-group effects in ways that result in moral exclusion (Bloom 2017; de Dreu et al. 2010). This less-than-rosy picture of empathy is precisely what we should expect if altruism and in-group bias are two sides of the same adaptive coin. Thus, while empathy can result in prosocial behavior, it can just as easily distort judgments about where fundamental moral value lies.

2.2.2  Disgust

When it comes to certain beings, the empathy gap may run deeper, provoking not apathy but disgust. Although disgust initially helped early humans avoid exposure to pathogens and infectious agents, such as those contained in spoiled food, excrement, and parasites, it appears to have subsequently been co-opted to mediate social moral interactions (Kumar 2017). Some psychologists argue that disgust felt toward out-groups is part of the “behavioral immune system” (Murray and Schaller 2016)—one of several pathogen-avoidance adaptations that mitigate the risk of disease transmission between groups. However, if the evolutionary picture of human morality outlined above is correct, then out-group threats are not merely epidemiological in nature, but also physical, social, and economic.

A central way that disgust could modulate moral attitudes toward out-group members is by reducing MSS attribution; this, in turn, could open up those individuals to types of treatment that are inappropriate for beings with basic moral standing or high moral status—even if such treatment might have been adaptive in ancestral human environments. It is not difficult to see how disgust might distort MSS attributions to drive morally exclusionary norms and behaviors that discount or disregard the legitimate interests of out-group members. This discounting or disregard may take the form of attributing lesser moral status or the absence of moral standing altogether to the objects of disgust. It is no accident that nativists, racist propagandists, and would-be genocidaires draw explicitly on the disgust response in portraying immigrants, minorities, and ethnicities as disease-bearing vermin, insects, free-riding parasites, and other disgust-eliciting categories that dehumanize the targets of antisocial behavior.

Although the science is not yet settled, some studies suggest that disgust covertly shapes moral judgment, accentuating the harshness of moral disapprobation and possibly influencing its valence (Kelly 2011). Other theorists maintain that, rather than driving moral judgment, disgust is a response to a perceived wrong that simply reinforces moral judgments (May 2018). Nevertheless, the historical record shows that disgust often attends derogatory beliefs about certain classes of beings to distort MSS attribution in a variety of civil rights contexts, such as interracial marriage, desegregation, homosexuality, women’s rights, and transgender protections. This is supported by experimental data showing, for example, that disgust mediates the implicit dehumanization of interracial couples (Skinner and Hudac 2017). Likewise, disgust drives moral resistance to unfamiliar biotechnologies such as assisted reproduction and genetically modified organisms (Roache and Clarke 2009).

This is not to say that disgust has only undesirable effects on moral cognition. As philosopher Victor Kumar (2017) shows, disgust can help enforce justified moral norms by motivating appropriate social exclusion and acting as a signaling system to coordinate sanctioning. It is thus possible, as noted above, that disgust follows on from and amplifies, rather than determines, the valence of moral judgment. Regardless of whether disgust acts as an input to or a sequela of moral cognition, it can, just like empathy, interact with culturally constructed group identities in ways that throw MSS detection off track.

2.2.3  Mental state attribution

How exactly do empathy gaps, the disgust response, and socially constructed moral categories interact to impede the detection of intrinsic value, shrinking or resisting expansions of the moral circle? At the nexus of disgust, empathy, social identity, and MSS attribution is the detection of mindedness. Social engagement for humans involves thinking about or imagining the mental life of social partners. People tend to avoid social engagement with beings who provoke disgust, whereas they engage positively with individuals whom they judge to be familiar or aesthetically pleasing. Sherman and Haidt (2011) argue that other beings who represent potential social partners are subject to augmented mentalizing, whereas individuals who elicit disgust—and hence are judged to be of lower or negative social value—will tend to be under-mentalized. If mental state attribution is a necessary component of human-grade empathy,
then undermentalizing a class of beings should decrease one’s ability to empathize with those beings. Further, disgust may not only block empathy, but also result in the attribution of a lower moral status or the denial of moral standing altogether, which then motivates or figures in justifications of moral mistreatment.

For instance, the underattribution of pain perception played a key role in the dehumanization and infra-humanization of African American slaves, who were believed by those who profited from their forced labor to have a higher than usual pain threshold (Abruzzo 2011). Similarly, the belief that African Americans formed weak psychological bonds with their children was another convenient and disgust-prompting fiction used to justify moral atrocities. These effects continue to be felt in the modern healthcare context, where the systematic undertreatment of Black patients appears to stem from the underattribution of pain states and other perceived biological differences between in-group and out-group (Hoffman et al. 2016).

The psychological literatures on genocide and racial/gender discrimination have also demonstrated connections between elevated levels of disgust, dehumanization, and the underattribution of morally relevant mental states. Dehumanization is causally implicated in and highly predictive of aggression, violence, institutionalized oppression, and other extreme forms of moral exclusion (Haslam 2006). Exposure to disgust-eliciting targets has been shown experimentally to both amplify dehumanization (Skinner and Hudac 2017) and reduce activity in areas of the brain responsible for mentalizing and other aspects of social cognition (Harris and Fiske 2006, 2011). In short, under-mentalization plays a key role in both contractions of and failures to expand the moral community.

2.3  The case of invertebrate ethics

Let us now bring these disparate threads together to illustrate how disgust, empathy gaps, and undermentalizing can lead to distortions of the evolved moral sense. For this, we will take a step back from the human case and consider a less familiar one: the near total moral exclusion of invertebrates, despite the fact that some invertebrate groups have highly developed welfare platforms that include holistic object perception, affective and semantic binding, open-ended associative learning, emotion-like states, and capable bodies that allow them to navigate an ecologically complex world. Apart from a recent exception for cephalopod mollusks in the European Union (Directive
2010/63/EU 2010), invertebrate animals are accorded virtually no ethical protections whatsoever in scientific experiments, and major federal agencies in the United States wholly exclude them from welfare policy.

Invertebrates have long been stereotyped as brainless, behaviorally inflexible animals. However, there is now a sizable body of neuroscientific and behavioral research pointing to centralized information processing and complex learning capacities in arthropods and cephalopod mollusks, whose experimental performances in some cases rival those of vertebrate subjects (for a philosophical discussion of the science and its moral relevance, see Mikhalevich and Powell 2020; for scientific reviews, see Chittka et al. 2019 and Mather 2019). Many of the cognitive abilities that were presumed to be limited to humans and “higher” animals like mammals and birds—such as concept formation, observational learning, counting, arithmetic, flexible problem-solving, causal reasoning, social transmission, and perhaps transitive inference—have been shown in invertebrate groups, often using behavioral experiments modeled on studies of vertebrate animals and human infants. If these studies indicate the presence of cognition in rats, pigeons, or dolphins, then they should do the same for honeybees, jumping spiders, and octopuses.

Research into the affective worlds of invertebrates suggests that some may experience sensations such as pain (Elwood 2011; Butler-Struben et al. 2018) and emotion-like states (Perry et al. 2016). There is also evidence that certain arthropods (Klein and Barron 2016) and mollusks (Godfrey-Smith 2016) may be phenomenally conscious, though we would first need an agreed-upon definition of phenomenal consciousness before we knew what to count as evidence. Because such agreement is not forthcoming, it is best not to use phenomenal consciousness as a criterion for ethical protections.
More empirically tractable notions of consciousness are available, such as so-called “access consciousness,” which refers to the availability of representations or other informational contents for use in guiding the behavior of an organism. This property has been amply demonstrated in brainy invertebrates, as the above findings suggest, and some take it to be sufficient for moral standing (Levy 2014). However, we are not convinced that access consciousness adequately describes the structure of welfares, since access-conscious states can, at least in theory, occur without any experience of valence.

In short, many invertebrates appear to have psychological capabilities that configure ends-in-themselves. Yet their welfare is routinely ignored in animal ethics and science policy. We suspect this is because invertebrates—perhaps more than any other group—trigger evolved biases that depress empathy and
mental-state attribution. Nowhere is the empathy gap clearer than in judgments about the moral standing of invertebrate animals, especially arthropods like spiders and insects, which lack familiar features, behaviors, and lifeways and tend to provoke a strong disgust response in humans (Lockwood 2013). Many arthropods map generically onto categories that are associated with parasite stress, while others are associated with other physical dangers, such as sting and bite toxicities. If a dearth of empathy and an overabundance of disgust lead to social disengagement and the underattribution of mind, then this would help explain why invertebrates are generally perceived as lacking moral standing altogether and why they are typically excluded from animal welfare regulatory regimes.

Is there still room to exclude invertebrates like insects from the moral community even if one accepts that they are ends-in-themselves? Some philosophers seem to think so. For instance, Peter Carruthers concedes that some invertebrates are likely to possess a belief-desire psychology similar to those of vertebrates, which in his view makes them potential objects of moral concern. However, he states that “It is a fixed point for me that invertebrates make no direct claims on us . . . it isn’t wrong to take no account of their suffering. Indeed, I would regard the contrary belief as a serious moral perversion” (2007, p. 296). In other words, the bare fact that an animal is an invertebrate is enough, in Carruthers’s view, to justify its moral exclusion.

How should we interpret such claims? The argument cannot be that while these animals have moral standing, this generates no corresponding obligations on our part, since moral standing by definition imposes obligations on moral agents. Another reading is that Carruthers is defending a rank form of speciesism, a view that has enjoyed something of a resurgence in recent years.
A better interpretation is that Carruthers is using the invertebrate case as a reductio of welfare-based approaches to ethics. His argument might go as follows: If welfares are sufficient for moral standing, then some invertebrates must be granted moral standing; the notion that some invertebrates have moral standing is absurd; therefore, we should reject a welfare-based approach to moral standing. The problem, of course, is that the second premise merely begs the question and is likely to be distorted by cognitive biases, for reasons discussed earlier.

A more charitable reading is that in saying that invertebrates qua invertebrates impose no moral obligations on us, Carruthers is making a case not against invertebrate moral standing but for permissible moral partiality. Is there a principled basis for partiality toward some ends-in-themselves, such as humans or persons, in moral triage scenarios like medical experimentation
or food production? There are two quite different understandings of “proper” partiality that might be invoked here. The first grounds moral partiality in differential moral status. This would require a cogent explanation of why having certain cognitive or agential capacities that most creatures who are ends-in-themselves lack makes a being an object of more robust obligations or confers on it a higher moral status. The trouble with this view is that if the distinguishing property is subject to degrees (rather than a threshold), then beings with welfares could not systematically and reliably be sorted into those that have a higher moral status and those that do not.

Another option is to ground partiality in association, the idea being that it is proper to recognize different moral obligations to different ends-in-themselves depending upon whether one has or could have strong associational ties with them, such as whether they are kin, kith, or compatriots. The difficulty with this option is that it could make relative moral status contingent on morally arbitrary relations. It could mean, for example, that it is perfectly acceptable for a person to accord the same moral status to one’s dog as one accords to the vast majority of existing persons to whom one has no associational ties.

A way around this conclusion would be to reject the notion that associational ties generate differences in moral status even if they permit preferential treatment of those ends-in-themselves with whom we share our lives. In other words, partiality could do the prioritization work that moral status is intended to do. But this route stumbles into similar problems, since treating beings in fundamentally different ways based on their affiliation threatens to reintroduce the very sorts of arbitrary moral exclusions that we cautioned against above.
Even if we assume that persons may, for whatever reasons, be justifiably prioritized over sentient non-persons in moral triage situations, this cannot serve as a principled basis for excluding brainy invertebrates from an animal ethic that includes sentient vertebrates, since both exhibit welfare structures that are the basis of moral standing and neither can lay claim to affiliative priority. Absent any plausible justification, the invertebrate exclusion is best explained by the evolved constraints on our imperfect moral sense.

3.  Conclusion

Even if most sentient nonhuman animals do not have the moral status of persons, it is clear that humanity has thus far operated with a severely deficient understanding of which beings count morally in their own right. Once these
biases are placed in evolutionary-historical context and stripped of their distorting effects, and once we have concluded that some brainy invertebrates are ends-in-themselves, what can we say about their moral status, or comparative moral standing? It would be convenient if we could say that the moral status of honeybees is 42/100; that of squid, 56/100; and so on—and then use these numbers to crunch our moral calculations. But as Aristotle argued, ethics is not a precise science. Nowhere is this clearer than in moral triage scenarios, which require balancing the interests of beings that sit along multiple biological continua of cognitive and social capacities. The focus of this chapter, however, has been not on comparison but on inclusion. Piecing together the natural history of normativity can not only shed light on the origins of ends-in-themselves; it can also explain how humans came to recognize members of the moral community and why they often get that fundamental judgment so wrong.1

Note

1. We are grateful to Templeton World Charity Foundation grant # 0469 for support of this research. The authors would like to thank Steve Clarke and Neil Levy for their extremely helpful comments on an earlier draft of this chapter.

References

Abruzzo, M. (2011). Polemical Pain: Slavery, Cruelty, and the Rise of Humanitarianism. JHU Press.
Andrews, K. (2020). Naïve normativity: the social foundation of moral cognition. Journal of the American Philosophical Association, 6(1), 36–56.
Bartal, I. B. A. et al. (2011). Empathy and pro-social behavior in rats. Science, 334(6061), 1427–30.
Bloom, P. (2017). Empathy and its discontents. Trends in Cognitive Sciences, 21(1), 24–31.
Buchanan, A. (2020). Our Moral Fate: Evolution and the Escape from Tribalism. MIT Press.
Buchanan, A. and Powell, R. (2018). The Evolution of Moral Progress: A Biocultural Theory. Oxford University Press.
Butler-Struben, H. M. et al. (2018). In vivo recording of neural and behavioral correlates of anesthesia induction, reversal, and euthanasia in cephalopod molluscs. Frontiers in Physiology, 9, 109.
Carruthers, P. (2007). Invertebrate minds: a challenge for ethical theory. The Journal of Ethics, 11(3), 275–97.
Chittka, L., Giurfa, M., and Riffell, J. A. (2019). The mechanisms of insect cognition. Frontiers in Psychology, 10, 2751.
Cholbi, M. (2014). A direct Kantian duty to animals. The Southern Journal of Philosophy, 52(3), 338–58.
Damasio, A. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam.
De Dreu, C. K. W. et al. (2010). The neuropeptide oxytocin regulates parochial altruism in intergroup conflict among humans. Science, 328, 1408–22.
de Waal, F. B. (2008). Putting the altruism back into altruism: the evolution of empathy. Annual Review of Psychology, 59, 279–300.
Dunbar, R. I. (2009). The social brain hypothesis and its implications for social evolution. Annals of Human Biology, 36(5), 562–72.
Elwood, R. W. (2011). Pain and suffering in invertebrates? ILAR Journal, 52(2), 175–84.
European Parliament, Council of the European Union (2010). Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the Protection of Animals Used for Scientific Purposes. Council of Europe, Strasbourg.
Fraser, O. N. and Bugnyar, T. (2010). Do ravens show consolation? Responses to distressed others. PLoS One, 5(5).
Ginsburg, S. and Jablonka, E. (2019). The Evolution of the Sensitive Soul: Learning and the Origins of Consciousness. MIT Press.
Godfrey-Smith, P. (2016). Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. Farrar, Straus and Giroux.
Graham, J. et al. (2013). Moral foundations theory: the pragmatic validity of moral pluralism. In Advances in Experimental Social Psychology (Vol. 47, pp. 55–130). Academic Press.
Harris, L. T. and Fiske, S. T. (2006). Dehumanizing the lowest of the low: neuroimaging responses to extreme out-groups. Psychological Science, 17, 847–53.
Harris, L. T. and Fiske, S. T. (2011). Dehumanized perception: a psychological means to facilitate atrocities, torture, and genocide? Zeitschrift für Psychologie, 219(3), 175–81.
Haslam, N. (2006). Dehumanization: an integrative review. Personality and Social Psychology Review, 10(3), 252–64.
Hoffman, K. M. et al. (2016). Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proceedings of the National Academy of Sciences, 113(16), 4296–301.
Joyce, R. (2007). The Evolution of Morality. MIT Press.
Kelly, D. (2011). Yuck! The Nature and Moral Significance of Disgust. MIT Press.
Klein, C. and Barron, A. B. (2016). Insects have the capacity for subjective experience. Animal Sentience, 1(9), 1.
Korsgaard, C. (2004). Fellow creatures: Kantian ethics and our duties to animals. In Grethe B. Peterson (ed.), The Tanner Lectures on Human Values, Vol. 25/26. Salt Lake City: University of Utah Press.
Korsgaard, C. (2018). Fellow Creatures: Our Obligations to the Other Animals. Oxford University Press.
Kumar, V. (2017). Foul behavior. Philosophers’ Imprint, 17(15), 1–16.
Kumar, V. and Campbell, R. (in press). How Morality Evolves.
Langford, D. J. et al. (2006). Social modulation of pain as evidence for empathy in mice. Science, 312(5782), 1967–70.
Levy, N. (2014). The value of consciousness. Journal of Consciousness Studies, 21(1–2), 127–38.
Lockwood, J. (2013). The Infested Mind: Why Humans Fear, Loathe, and Love Insects. Oxford University Press.
Lyon, P. (2006). The biogenic approach to cognition. Cognitive Processing, 7(1), 11–29.
Mather, J. (2019). What is in an octopus’s mind. Animal Sentience, 26(1).
May, J. (2018). The limits of appealing to disgust. In Nina Strohminger and Victor Kumar (eds.), The Moral Psychology of Disgust. Rowman & Littlefield.
McMenamin, A. S. (1998). The Garden of Ediacara: Discovering the First Complex Life. Columbia University Press.
Mikhail, J. (2007). Universal moral grammar: theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143–52.
Mikhalevich, I. and Powell, R. (2020). Minds without spines: toward a more evolutionarily inclusive animal ethics. Animal Sentience, 5(29), 1.
Murray, D. R. and Schaller, M. (2016). The behavioral immune system: implications for social cognition, social interaction, and social influence. In Advances in Experimental Social Psychology (Vol. 53, pp. 75–129). Academic Press.
Payne, K. (1998). Silent Thunder: In the Presence of Elephants. Simon and Schuster.
Perry, C. J., Baciadonna, L., and Chittka, L. (2016). Unexpected rewards induce dopamine-dependent positive emotion-like state changes in bumblebees. Science, 353(6307), 1529–31.
Powell, R. (2020). Contingency and Convergence: Toward a Cosmic Biology of Body and Mind. MIT Press.
Prinz, J. (2011). Against empathy. The Southern Journal of Philosophy, 49, 214–33.
Purves, D. and Delon, N. (2018). Meaning in the lives of humans and other animals. Philosophical Studies, 175(2), 317–38.
Roache, R. and Clarke, S. (2009). Bioconservatism, bioliberalism, and the wisdom of reflecting on repugnance. Monash Bioethics Review, 28(1), 1–21.
Sato, N. et al. (2015). Rats demonstrate helping behavior toward a soaked conspecific. Animal Cognition, 18(5), 1039–47.
Sherman, G. D. and Haidt, J. (2011). Cuteness and disgust: the humanizing and dehumanizing effects of emotion. Emotion Review, 3(3), 245–51.
Skinner, A. L. and Hudac, C. M. (2017). “Yuck, you disgust me!” Affective bias against interracial couples. Journal of Experimental Social Psychology, 68, 68–77.
Tomasello, M. (2009). Why We Cooperate. MIT Press.
Van Duijn, M. et al. (2006). Principles of minimal cognition: casting cognition as sensorimotor coordination. Adaptive Behavior, 14(2), 157–70.


15: Moral Status of Brain Organoids

Julian Koplin, Olivia Carter, and Julian Savulescu

Recent advances in stem cell science have made it possible to generate sophisticated three-dimensional models of human tissues and organs. By manipulating growth factors in the culture medium, it is possible to prompt stem cells to recapitulate the development of specific parts of the body, including kidneys, lungs, intestines, and the brain (Clevers 2016). The resulting entities—known as organoids—resemble miniature, in vitro versions of human organs. These miniature organs have exciting applications in basic research, disease modelling, toxicology, and personalized medicine, among other areas of science and medicine (Rossi, Manfrin, & Lutolf 2018).

The most striking application of organoid technology involves the creation of brain organoids, which are commonly referred to in the media as ‘mini-brains’ (see e.g. Stetka 2019; Weintraub 2020). Science writer Philip Ball has described the peculiarity of seeing ‘his brain in a dish’—a brain organoid, roughly the size of a lentil, grown from his own cells according to his own genetic blueprint. Ball struggled to work out how to relate to ‘his’ mini-brain, which seemed both separate from, and intimately related to, his self:

There was no rulebook to tell me how I should feel about my mini-brain. Certainly, I didn’t lie awake at night fretting over its welfare; this mass of tissue made from my skin didn’t take on the status of an individual. But I felt oddly fond of those cells, doing their best to fulfil a role in the absence of the guiding influence of their somatic source. There was a curious intimacy involved, a sense of potential that wasn’t present initially in the tiny chunk of arm-flesh excised and placed in a test-tube. This was more than a matter of cells subsisting; this was life in all its teeming, multiplying glory, spilling out from a paring of me.  (Ball 2019, p. 4)

It has only recently become possible to generate mini-brains from stem cells; the first research study using brain organoids was published in Nature as recently as 2013 (Lancaster et al. 2013). Scientists, ethicists, and regulators have never confronted anything quite like these entities before.

Julian Koplin, Olivia Carter, and Julian Savulescu, Moral Status of Brain Organoids. In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Julian Koplin, Olivia Carter, and Julian Savulescu 2021. DOI: 10.1093/oso/9780192894076.003.0015

This chapter considers the ethics of human brain organoid research.1 We first consider whether brain organoids could have moral status. We discuss the relevance of consciousness and sentience to moral status, and ultimately tie moral status to the latter rather than the former. Second, we track the implications of moral status for research with these entities. We suggest that, under certain conditions, research on sentient organoids can be morally permissible. Third, we consider whether there are additional reasons to be careful about how we use human brain organoids in research—reasons grounded in their possible symbolic significance. We caution against placing too much weight on symbolic significance when regulating brain organoid research.

This chapter's overarching aim is to navigate between two key risks in our thinking about brain organoids. First, there is a risk that we will fail in our moral duties to brain organoids if they attain moral status and we treat them as if they have not. Second, there is a risk that if brain organoids are afforded too much moral consideration, we will unnecessarily curb important research. The key task for philosophers, regulators, and scientists is to respect brain organoids' moral status if and where it emerges, without unduly restricting their use.

1. Ethical Regulation of Brain Organoid Research

Brain organoids sit in an unusual regulatory space. Currently, brain organoids are regulated in essentially the same way as any other form of human biological material (Farahany et al. 2018; Koplin & Savulescu 2019; Lavazza & Pizzetti 2020). Yet they are a special kind of biological material. Brain organoids not only model the part of the human body in which consciousness resides; they also share the genome of the individual from whom they are derived. This is the reason why brain organoids are useful for modelling genetic disorders—because they develop according to the genetic instructions of the donated tissue. In this respect, the creation of organoids resembles human reproductive cloning.

Following the announcement of the birth of Dolly the Sheep in 1997, human reproductive cloning was almost immediately condemned by the United Nations Educational, Scientific and Cultural Organization (UNESCO), and banned by dozens of countries shortly afterward (Häyry 2018). The regulatory response to cloning was swift and extensive. In comparison, human brain organoid research has flown largely under the radar. Yet there is a meaningful overlap between the two practices. Legally, brain organoids probably cannot be classified as the product of human reproductive cloning (Lavazza & Pizzetti 2020). Functionally, however, they replicate the part of the person in which consciousness, memories, and personality reside. While creating a human brain organoid does not clone an entire person, it effectively clones the part that matters.

Does the overlap between cloning and organoid creation mean that bans on human reproductive cloning should be extended to brain organoid research?2 Not necessarily. Some of the most influential objections to human reproductive cloning have turned on the idea that cloning would undermine cloned children's unique identity (see e.g. President's Council on Bioethics 2002). These objections are mistaken. A cloned person may share another's genome, but their individuality will be shaped not just by their genetics but also by their life history, personal relationships, and other features of their environment (Brock 2002). Cloned children would be distinct individuals from any existing people whose genomes they share, just as identical twins are.

If it is a mistake to hold that human reproductive cloning threatens uniqueness and individuality, it would be an even greater mistake to hold that human brain organoids pose this threat. Neither cloned children nor brain organoids are replicas of existing persons. The differences are especially stark for brain organoids; current brain organoids probably lack consciousness, let alone the perceptual apparatus through which we experience the world. We should not make the mistake of blocking brain organoid research in the same way we have blocked research into human reproductive cloning.
Brain organoid research does raise moral issues that require a careful regulatory response. But these issues have less to do with the overlap with human cloning, and more to do with brain organoids' possible moral status.

2. Brain Organoids and Consciousness

Organoid technology in general raises a host of ethical questions. These include questions about how human biomaterials should be procured and how best to regulate the use of organoids in personalized drug testing, to list just two examples (Bredenoord, Clevers, & Knoblich 2017). In the case of brain organoids, however, the moral stakes extend beyond the rights and interests of patients and tissue donors. Uniquely among organoid research, brain organoid research raises the question of whether the human tissue itself—and not merely the tissue donors—has moral status.

What does it mean to attribute moral status? Mary Anne Warren provides a standard definition:

To have moral status is to be morally considerable, or to have moral standing. It is to be an entity towards which moral agents have, or can have, moral obligations. If an entity has moral status . . . we are morally obliged to give weight in our deliberations to its needs, interests, or well-being. Furthermore, we are morally obliged to do this not merely because protecting it may benefit ourselves or other persons, but because its needs have moral importance in their own right.  (Warren 1997, p. 3)

When we say that a being has moral status, we are saying that we have moral obligations to that being (DeGrazia 2008). Perhaps more precisely, we can say that when a being has moral status, it has certain (inherent) normative features that ought to govern our treatment of it (Kagan 2018, p. 8). If a brain organoid were to develop moral status, then we ought, morally, to care about how it is treated—not just because of how this treatment might affect other beings, but because of how this treatment will affect the organoid itself.

At present, brain organoid research is regulated similarly to other forms of stem cell research; there are no specific regulatory mechanisms to deal with questions of the organoid's own moral status (Farahany et al. 2018; Koplin & Savulescu 2019). This lack of oversight is increasingly recognized as a problem. There is a burgeoning literature that calls for scientists, ethicists, and regulators to consider the possibility that brain organoids will attain moral status (Bayne, Seth, & Massimini 2019; Farahany et al. 2018; Hostiuc et al. 2019; Koplin & Savulescu 2019; Lavazza & Massimini 2018; Sawai et al. 2019).

How pressing are these concerns? At least some researchers view it as a live issue. At the 2019 Society for Neuroscience meeting, a group of American neuroscientists argued that some existing brain organoid models might already have attained consciousness (and, on their view, thereby attained moral status). The presenters called for a moratorium on research with brain organoids that might potentially be conscious until tools have been developed to screen brain organoids for consciousness (Ohayon, Tsang, & Lam 2019).

It is more common, however, for moral status to be seen as a future concern. It is usually assumed that the kinds of brain organoids currently being created lack consciousness (which we and others believe is a necessary condition for moral status), given the constraints on their growth and limitations in how accurately they model human development (Farahany et al. 2018; Hostiuc et al. 2019; Lavazza & Massimini 2018). However, efforts are already underway to overcome these limitations (Wang 2018). In some instances—such as in Alzheimer's research—one aim is to create brain organoids that model memory, cognition, and cognitive decline (Ooi et al. 2020). The concern here is not that existing brain organoid research is unethical, but rather that we ought to identify the moral limits of brain organoid research before we accidentally transgress them.

A third view denies that brain organoid research is ever likely to raise moral status concerns, even as brain organoid models become larger and more sophisticated. Some researchers hold that consciousness cannot emerge in an entity that cannot have meaningful interactions with the outside environment. Insofar as brain organoids lack sensory input and motor output, it might be thought that even highly sophisticated brain organoids will never develop the capacity to feel or think—and will therefore never develop moral status (see e.g. Bersenev 2015; Sawai et al. 2019, p. 40). Philosophically, however, this view is controversial; it might be the case that consciousness can arise in 'islands of awareness' that are cut off from the outside environment (Bayne et al. 2019). Even if it is the case that consciousness cannot emerge without sensory input and motor output, certain brain organoids could meet these criteria. Some existing brain organoids have developed retinal cells and display neuronal activity when light is shone on them (Quadrato et al. 2017). Others have been connected to muscle tissue, which the organoids learn to contract (Giandomenico et al. 2019). Still others have been connected to robotic 'bodies' (Cohen 2018) and implanted in the brains of nonhuman animals, where they have integrated with the host animal's neural tissue (Chen et al. 2019; Mansour et al. 2018).3 Organoids in these categories could potentially attain consciousness even if 'islands of awareness' cannot emerge in organoids that are cut off from the outside world.

Not only is there disagreement about how likely it is that brain organoids will develop moral status; there is also disagreement about how much consideration we ought to grant brain organoids in the event that they do develop moral status. Elan Ohayon—one of the scientists involved with the Society for Neuroscience presentation mentioned above—explained to the Guardian that he believes research with brain organoids should not proceed if there is even a possibility that they will suffer (Sample 2019). This position lies at one extreme of a spectrum. The other extreme would deny conscious brain organoids any moral consideration—a stance which at least some researchers in the field have defended (Miller 2019), and which is consistent with the current regulatory landscape. Intermediate positions might grant some brain organoids moral status but nonetheless permit their use in research—for example, by treating conscious or sentient brain organoids similarly to research animals (Koplin & Savulescu 2019).

We will return to the question of how we ought to treat brain organoids if they develop moral status. First, however, we want to interrogate the connection between consciousness and moral status. Much of the commentary on brain organoid research ties moral status to consciousness, or at least suggests that consciousness might confer moral status (see e.g. Bayne et al. 2019; Farahany et al. 2018; Lavazza & Massimini 2018). In the following section, we argue that sentience—not mere consciousness—is what matters morally.

3. The Connection between Consciousness and Moral Status

Why is consciousness generally thought central to organoids' moral status? On the surface, consciousness seems to be the crucial factor that might separate (suitably advanced) brain organoids from a mere lump of brain tissue. However, we argue that the morally relevant factor isn't consciousness per se, but rather sentience. These two conditions are subtly but crucially distinct from each other.

We use the term consciousness to refer to what is often termed 'phenomenal consciousness'—i.e. subjective experience, or the feature mental states or processes have when there is something it is like to undergo these mental states or processes (Shepherd 2018, p. 7). Consciousness is necessary for sentience, but it is not sufficient. Sentient beings are not merely capable of subjective experience; they are capable of having experiences that are pleasant or unpleasant for them (DeGrazia 2020). Insentient beings presumably lack interests. Accordingly, nothing we do can make them better or worse off. Sentience, not mere consciousness, is crucial for moral status.

While sentience and consciousness often run together, they can theoretically come apart. Consider a thought experiment developed by Andrew Lee (2019). Lee asks us to imagine an extremely primitive creature capable only of a stripped-back, minimal kind of conscious experience—say, a visual experience of slight brightness. This experience lacks any valence for the creature; it derives neither enjoyment nor discomfort from this slight sense of brightness, and it would be made neither better nor worse off were this visual experience to end. Such a creature would be conscious; its sense of slight brightness is a conscious experience. But such a creature would lack moral status as we understand the term. Since nothing we do to this entity could make it better or worse off—the continuation or cessation of its experience of slight brightness doesn't matter to it—one could not have any moral obligations to this creature. In addition to being conscious, an entity needs to have interests—such as an interest in avoiding suffering, or an interest in the continuation of its existence—before we owe it moral consideration.

What this means is that even if we detect consciousness in a brain organoid, it might nonetheless lack moral status. Indeed, brain organoids might be more likely than other kinds of entities to develop consciousness without also developing sentience (and therefore moral status). There are many forms of suffering to which brain organoids might not be vulnerable. For example, since the brain itself does not have pain receptors, brain organoids would presumably not be vulnerable to pain (unless meninges—which do have pain receptors—also happen to develop). It is at least possible that the experiences of a brain organoid might resemble those of Lee's minimally conscious organism; it might be capable of subjective experience but nonetheless lack any interests.

Rather than asking whether a brain organoid is conscious, we ought to ask whether it is sentient. If a brain organoid lacks sentience, then nothing we do could make it better or worse off. Accordingly, we would not have any moral obligations to that organoid (though our moral obligations to tissue donors and others would remain). If, however, the brain organoid could suffer, then we would have a moral duty not to inflict this suffering. (Further below we consider exactly how stringent this duty might be.)

Some existing bioethical work on brain organoids has explored potential mechanisms to detect consciousness.
One leading suggestion is to adapt metrics used to assess consciousness in human adults—specifically the Perturbational Complexity Index, which measures the brain's response to transcranial magnetic stimulation (Lavazza & Massimini 2018). Such work might yield valuable strategies for ruling out the possibility of moral status in certain kinds of brain organoids. (Since sentience requires consciousness, a brain organoid that fails a well-designed test for consciousness would lack moral status.) On our view, however, the key task is not to work out how to identify consciousness per se, but rather to identify sentience. The important question is not whether a brain organoid has the capacity for subjective experience, but whether it is able to experience wellbeing or suffering.



4. Implications of Moral Status for Brain Organoid Research

So far, we have argued that there is reason to worry that suitably advanced brain organoids might develop moral status. This raises the question of what, precisely, our moral obligations to these organoids would be, and how we ought to regulate the field in light of their potential moral status. There are three kinds of approaches one might take.

The most permissive would be to simply maintain current regulatory frameworks. While current regulations do address ethical issues associated with (for example) the procurement of human biological materials, they fail to address moral status concerns; under the status quo, even sentient brain organoids would be regulated like any other form of human tissue (Farahany et al. 2018; Koplin & Savulescu 2019). The obvious problem with this approach is that it risks violating our moral obligations to these brain organoids in the event that they develop interests. It would be unethical to treat beings with moral status the way we treat non-sentient biological material like tissue samples from existing brains.

At the other extreme, one might hold that it is flatly impermissible to use brain organoids in research if they have developed moral status. This kind of view poses a side-constraint on brain organoid research: such research should be allowed to continue only if the organoids in question demonstrably lack moral status (see e.g. Hostiuc et al. 2019; Sample 2019).4 One obvious drawback of a highly restrictive approach is that it would greatly curtail the benefits that might otherwise be realized through brain organoid research. These benefits are potentially substantial. Brain organoid research promises to yield valuable new insights into human brain development and neurodevelopmental disorders, as well as to provide new mechanisms to test drug-related toxicity in brain tissues (Fatehullah, Tan, & Barker 2016; Willyard 2015).
The more we restrict research with brain organoids, the more we risk impeding valuable scientific progress.

Between these extremes lies a range of intermediate options. These options recognize that moral status does not necessarily confer absolute protection against harmful treatment. Many current practices assume that it is sometimes legitimate to harm beings that are thought to have some degree of moral status. Animal research provides a highly relevant example. Many people believe that it is morally permissible to harm research animals under at least some conditions—for example, if the benefits to humans are substantial, the harms to the animals are small in comparison, and these benefits cannot be obtained in any other way (Ormandy & Schuppli 2014).

The understanding that a brain organoid has moral status does not, by itself, settle the question of how it ought to be treated. We still need to consider how extensive and/or stringent our moral obligations to it are.5 The animal ethics literature provides a range of plausible views regarding how we may treat beings that are often thought to have some degree of moral status below that of humans. We will outline three leading views below.

5. Utilitarian Approaches

One straightforward way of thinking about animals' moral status—most famously advanced by Peter Singer (1990) in Animal Liberation—is to attach equal weight to the interests of all beings (human and otherwise) that are affected by one's decisions. This is not quite the same as saying that all animals should be treated equally. Different animals have different interests. For example, highly social animals would have an interest in having opportunities for social interaction with conspecifics, whereas solitary animals would not. The strength of animals' interests might also differ. For example, it might be the case that an antelope feels pain more acutely than a sea anemone.

Singer's utilitarian approach does not rule out animal experimentation altogether. Specifically, it would permit animal experiments when they would promote overall utility (factoring in the interests of the research animals, the potential beneficiaries of the research, and any other beings affected by the experiments). This might be the case when limited animal experimentation would achieve important medical breakthroughs. What matters here is whether the harms to research animals are outweighed by the benefits to those whose interests are thereby promoted.

Singer's utilitarian approach would likewise not rule out harmful experimentation with sentient brain organoids. What matters, on Singer's view, is the overall balance of harms and benefits associated with the research. If we can promote aggregate utility by subjecting sentient brain organoids to harmful experimentation, then we ought to do so. If it turns out that sentient brain organoids have a lesser capacity for suffering than humans, then it may often be appropriate to promote our interests at the expense of theirs.



6. Rights-based Approaches

Some rights-based approaches impose greater restrictions on how we may treat beings with moral status. The most famous such argument comes from Tom Regan. Regan proposes that all beings who are 'subjects of a life' have inherent value, which carries with it a prima facie right against being harmed. Regan defines 'subjects of a life' as follows:

Subjects of a life want and prefer things, believe and feel things, recall and expect things. And all these dimensions of our life, including our pleasure and pain, our enjoyment and suffering, our satisfaction and frustration, our continued existence or our untimely death—all make a difference to the quality of our life as lived, as experienced, by us as individuals. As the same is true of . . . animals . . . they too must be viewed as the experiencing subjects of a life, with inherent value of their own.  (Regan 1983, p. 24)

According to Regan, many nonhuman animals—including laboratory mice—are subjects of a life, and so have inherent value. This value, in turn, is thought to confer a range of moral rights, including a prima facie right against being subjected to harmful experimentation even if such research would promote aggregate utility. Regan's approach would presumably likewise rule out most harmful experimentation with sentient brain organoids (or at least any that meet the threshold for being 'subjects of a life', as Regan defines the term).

7. Animal Research Ethics Principles

Both of the above frameworks entail much stronger restrictions than currently exist on the use of animals and brain organoids in research. Existing animal research ethics principles are more permissive. Perhaps the most influential set of principles are Russell and Burch's 'Three R's' (Russell & Burch 1959). These principles hold that researchers should (1) reduce the number of animals used in experiments to the minimum necessary for scientifically valid results, (2) where possible, refine experimental techniques to minimize animal suffering (for example, by administering anaesthesia), and (3) replace animal research with alternatives whenever it is feasible to do so.

The three R's, if extended to brain organoids, would not greatly limit the kinds of research that scientists could conduct. They would require researchers to use no more brain organoids in research than are required to achieve valid results (reduce), avoid harming brain organoids unnecessarily (refine), and pursue research with sentient brain organoids only if one's scientific objectives can't be achieved using non-sentient materials—for example, less sophisticated brain organoids or other tissue models (replace). So long as these conditions are met, the three R's would allow sentient brain organoids to be used in research.

In certain respects, the three R's provide an extremely—and perhaps problematically—permissive framework for research with sentient beings. The greatest shortcoming of the three R's is that they do not provide any means for determining which scientific objectives are sufficiently important to justify harms to research animals (or, by the same token, brain organoids). The three R's are principles for minimizing harms to animals while scientific goals are being pursued. They are silent regarding which scientific goals are worth pursuing given the harms this research inflicts, and therefore can't be used to interrogate whether particularly harmful or controversial forms of research ought to take place in the first place. Before adapting animal ethics principles to brain organoid research, we will need a more comprehensive set of principles.

We can find a more comprehensive set of animal ethics principles in Tom L. Beauchamp and David DeGrazia's Principles of Animal Research Ethics (2019).
Along with variants of Russell and Burch's three principles—reduce, refine, and replace—Beauchamp and DeGrazia propose that animal research be governed according to a Principle of Sufficient Value to Justify Harm (which requires that the anticipated benefits of a research study are great enough to justify any harms to research animals), the Principle of Upper Limits to Harm (which rules out research that would inflict severe suffering unless such research is critically important), and the Principle of No Unnecessary Harm (which holds that animals should be harmed only when it is entirely unavoidable). Koplin and Savulescu (2019)—two of the authors of this chapter—have described how these principles can be extended to brain organoid research.

In addition to reducing, refining, and replacing the use of sentient brain organoids, Beauchamp and DeGrazia's principles would impose three additional restrictions on brain organoid research. The first is that the anticipated benefits of the research must be significant enough to justify the expected harms (which is consistent with the Principle of Sufficient Value to Justify Harm). The second is that brain organoids should not be made to experience severe suffering unless the goal of the research is critically important (which is consistent with the Principle of Upper Limits to Harm). The third is that sentient brain organoids must not be exposed to greater harm than is necessary to achieve one's scientific objectives (which is consistent with the Principle of No Unnecessary Harm). Among other things, this last restriction suggests that experiments should utilize whichever kind of brain organoids are expected to experience the least suffering—for example, by using brain organoids with the lowest degree of consciousness, or the smallest possibility of developing consciousness, consistent with achieving one's scientific aims.

This last principle also suggests that we should investigate ways of modifying brain organoids to reduce the prospect of suffering—for example, via gene editing. One possibility is to alter them so that they are less likely to develop consciousness. Another would be to modify them to render them less susceptible to specific forms of harm (such as pain, sensory deprivation, or other forms of suffering). When we have the power to improve the lives of brain organoids via gene editing, then—all other things being equal—we ought to do so.

Although sentient brain organoids are an entirely novel kind of entity, they raise similar ethical issues to existing forms of animal research. Existing animal research ethics frameworks might therefore provide a useful starting point for regulating the use of other sentient beings in research. Once we have found a morally justified framework for animal research ethics—whether drawn from the three R's, Beauchamp and DeGrazia's more comprehensive set of principles, or elsewhere—this framework can be adapted to the unique context of brain organoid research.6

Extending animal research principles to brain organoids would achieve consistency in how we treat brain organoids and research animals. However, it might be asked whether consistency is, in fact, called for here.
The final section of this chapter considers whether brain organoids should be granted protections beyond those afforded research animals due to their symbolic connection to human persons.

8. Indirect Moral Significance

So far, we have been discussing moral status. Moral status, however, is not the only possible reason to afford something moral consideration. There are some things—like human remains or works of art—that are commonly thought to deserve respect not because they themselves have a stake in how we treat them, but because of their significance to human persons. The grounds of this moral consideration could be termed 'relational moral status', '(indirect) moral value', or 'indirect moral significance' (see generally Steinbock 2009; Wilson 2002).

The idea of indirect moral significance has a long lineage in bioethics, particularly in relation to human embryos. It is generally understood that embryos have a kind of special moral value when they are part of a prospective parent's reproductive plans (Douglas & Savulescu 2009; Persson & Savulescu 2010). However, this kind of moral significance has little relevance to organoid research, since creating a brain organoid would not typically be considered a reproductive goal.

There is another possible basis for indirect moral significance. This basis also has a parallel in the ethics of embryo research—specifically, in the view that all human embryos (not just those that form part of a reproductive plan) have special moral value that is tied to their symbolic significance. Bonnie Steinbock provides a particularly clear and influential example of this style of argument. According to Steinbock:

Dead bodies are owed respect both because of what they are—the remains of the once-living human organism—and because of what they symbolize—the human person who is no more. Human embryos deserve respect for similar reasons: they are a developing form of human life, and also a symbol of human existence.  (Steinbock 2009, p. 436)

Steinbock argues on this basis that embryos should not be used for ‘frivolous’ or ‘trivial’ purposes, though on her view important research is compatible with respecting embryos (Steinbock 2009, p. 437). A similar view appears to undergird certain recommendations of the influential 1984 UK Report of the Committee of Inquiry into (the ‘Warnock Report’). Although it is not made explicit, the report appears to place great importance on embryos’ symbolic value. The Warnock Report does not extend early embryos’ moral status akin to that of a child or adult, but it nonetheless holds that embryos have a ‘special status’ that justifies restricting (though not prohibiting) embryo research. Inter alia, this ‘special status’ is the basis of the Report’s suggestion that researchers should not experiment on human embryos if animals can be used instead (Warnock 1984, p. 63). It is plausible to think that human brain organoids, like embryos, would be perceived to have symbolic value. After all, organoids model the human brain—the organ most closely linked with our consciousness and our identity. Yet even if human brain organoids have symbolic value, it still needs to be asked whether symbolic value really does matter morally. While many people

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

Moral Status of Brain Organoids  263

share the intuition that symbolically significant things should not be treated disrespectfully—for example, that human skulls should not be used for soccer practice—it is difficult to pin down precisely why this disrespectful treatment is morally problematic. In particular, as Bortolotti and Harris (2006) point out, it is difficult to identify why disrespectful treatment of symbolically important things is morally significant, as opposed to a merely aesthetic matter of bad taste.

The issue here is not just academic. Brain organoids might be able to partly replace some forms of animal experimentation—for example, in toxicology testing (Bredenoord et al. 2017). Whether we should prefer to experiment on brain organoids over research animals is a pressing practical issue. In line with the Warnock committee’s view on embryo research, one might argue that researchers should prefer the use of animal models over brain organoid models regardless of whether the organoid is sentient. Or, more moderately, one might hold that because sentient brain organoids would have symbolic value in addition to their moral status, the interests of a slightly sentient brain organoid could trump the weightier interests of a research animal that lacks symbolic significance.

We want to register three reservations about placing much weight on symbolic significance, especially when deciding whether organoid research should displace animal research. First, even if there are legitimate moral considerations against experimenting on symbolically significant entities like organoids or embryos, there are also important moral considerations, grounded in our duties of non-maleficence, to reduce the harms our scientific experiments inflict. There is no obvious reason why concerns related to symbolic value should trump our more concrete moral reasons against inflicting unnecessary harm.

The second problem is that animals also have symbolic significance.
There is a particular strand of argument against animal cruelty that holds that inhumane treatment of animals can spill over into how we treat humans. Perhaps the most famous such argument comes from Kant, who held that persons should avoid treating animals cruelly in case doing so coarsens their attitudes toward fellow humans:

Violent and cruel treatment of animals is . . . intimately opposed to man’s duty to himself, and he has a duty to refrain from this; for it dulls his shared feeling of their pain and so weakens and gradually uproots a natural predisposition that is very serviceable to morality in one’s relations with other men. (Kant 1996, p. 443)


Since Kant denied nonhuman animals moral status, his opposition to animal cruelty was grounded entirely in the flow-on effects of cruel acts on how one would go on to treat humans.7 This view resembles the arguments for restricting embryo research discussed above; both are grounded in the indirect moral significance of these practices to human persons. There is no obvious reason to think that brain organoids’ indirect moral significance would trump that of nonhuman animals.

Finally, it is worth considering a novel approach to addressing concerns about symbolic value: modifying brain organoids so that they are no longer perceived to have symbolic value. Earlier, we mentioned the possibility of using gene editing to render brain organoids less susceptible to suffering. By the same token, it might be possible to modify brain organoids so that they are no longer viewed sympathetically. Perhaps we could stifle people’s feelings toward brain organoids by making them appear less like a human brain—for example, by making them appear ugly and malformed, or by colouring them an unnatural shade of blue. Would this be an ethical way to address symbolic value concerns? If successful, this strategy would address the concerns discussed above; inter alia, how we treat (suitably ugly or threatening) organoids would no longer spill over into how we treat human beings. On the other hand, there is something intuitively uncomfortable about the idea that rendering brain organoids ugly or scary should make a difference to how we ought to treat them. If we don’t think these kinds of aesthetic modifications make much moral difference, then perhaps we shouldn’t give so much weight to perceptions of symbolic value in the first place.

9.  Conclusion

We have tried to steer a course between two potential mistakes that could be made when regulating brain organoid research. The first is treating sentient brain organoids as if they lack moral status, when in fact we do have moral obligations toward them. The second is curtailing brain organoid research too severely—and thereby undermining the medical advances that brain organoid research might otherwise yield.

Brain organoid research is a new and rapidly developing field, and existing legal and regulatory frameworks are ill-equipped to address the moral issues it raises. Brain organoids neither fit neatly inside nor fall squarely outside of legal definitions of ‘human’ (Knoppers & Greely 2019), and under existing frameworks for human tissue research are treated as equivalent to any other


human biological material (Farahany et al. 2018; Koplin & Savulescu 2019). While the prospect of achieving sentience in an in vitro brain is unprecedented scientifically, the ethical issues raised by this prospect have clear parallels—most notably in animal ethics, which likewise deals with the ethics of experimenting on sentient entities for scientific gain. One starting point for the ethics of brain organoid research is to extend principles that have already been worked out in relation to nonhuman animals.

This starting point leaves open many important questions. Most obviously, we still need to work out how best to identify sentience in an in vitro brain. Moreover, if we are to respect a sentient brain organoid’s interests, we will first need to work out what interests it might possess. But at least one point is already clear: if a brain organoid develops sentience, it would have moral status, and we ought to care about how it is treated. The interests of sentient brain organoids deserve moral consideration.

Notes

1. We are focusing specifically on brain organoids that model the whole brain (rather than a specific region), as these are the kinds of organoids most likely to raise moral status concerns.
2. More concretely, the proposal here would be to extend whatever prohibitions apply to cloned embryos—e.g., to prohibit their development for longer than fourteen days.
3. This last category of research also raises issues of human–nonhuman chimera ethics, which are discussed in Faden and colleagues’ contribution to this volume (Faden, Beauchamp, Mathews, & Regenberg, this volume).
4. It is worth noting that Hostiuc and colleagues set a more demanding threshold for moral status than mere sentience (and might therefore block relatively little research).
5. Such questions are often couched in terms of various beings’ degrees of moral status. However, the claim that moral status comes in degrees is controversial—including among those who hold that we have more stringent moral obligations toward some beings with moral status than others (DeGrazia 2008).
6. We are not here taking any specific stance on how restrictive such a framework ought to be. It might be the case that animal research is morally impermissible under most circumstances—in which case the same might be true of research with sentient brain organoids.
7. Of course, nonhuman animals could have both direct moral status and the kind of value described by Kant.

References

Ball, P. (2019). How to Grow a Human: Adventures in How We Are Made and Who We Are. Chicago: The University of Chicago Press.

Bayne, T., Seth, A. K., & Massimini, M. (2019). Are there islands of awareness? Trends in Neurosciences. doi:10.1016/j.tins.2019.11.003.
Beauchamp, T., & DeGrazia, D. (2019). Principles of Animal Research Ethics. Oxford: Oxford University Press.
Bersenev, A. (2015). All about organoids—interview with Madeline Lancaster. Retrieved from .
Boers, S. N., de Winter-de Groot, K. M., Noordhoek, J., Gulmans, V., van der Ent, C. K., van Delden, J. J., & Bredenoord, A. L. (2018). Mini-guts in a dish: perspectives of adult Cystic Fibrosis (CF) patients and parents of young CF patients on organoid technology. Journal of Cystic Fibrosis 17(3).
Bortolotti, L., & Harris, J. (2006). Embryos and eagles: symbolic value in research and reproduction. Cambridge Quarterly of Healthcare Ethics 15(1), 22–34.
Bredenoord, A. L., Clevers, H., & Knoblich, J. A. (2017). Human tissues in a dish: the research and ethical implications of organoid technology. Science 355(6322). doi:10.1126/science.aaf9414.
Brock, D. W. (2002). Human cloning and our sense of self. Science 296(5566), 314–16.
Chen, H. I., Wolf, J. A., Blue, R., Song, M. M., Moreno, J. D., Ming, G.-l., & Song, H. (2019). Transplantation of human brain organoids: revisiting the science and ethics of brain chimeras. Cell Stem Cell 25(4), 462–72.
Clevers, H. (2016). Modeling development and disease with organoids. Cell 165(7), 1586–97. doi:10.1016/j.cell.2016.05.082.
Cohen, J. (2018). Neanderthal brain organoids come to life. American Association for the Advancement of Science.
DeGrazia, D. (2008). Moral status as a matter of degree? The Southern Journal of Philosophy 46(2), 181–98.
DeGrazia, D. (2020). Sentience and consciousness as bases for attributing interests and moral status: considering the evidence and speculating slightly beyond. In Neuroethics and Nonhuman Animals (pp. 17–31). Berlin: Springer.
Douglas, T., & Savulescu, J. (2009). Destroying unwanted embryos in research: talking point on morality and human embryo research. EMBO Reports 10(4), 307–12.
Faden, R. R., Beauchamp, T. L., Mathews, D. J., & Regenberg, A. (this volume). Toward a theory of moral status inclusive of nonhuman animals: pig brains in a vat, cows versus chickens, and human–nonhuman chimeras.
Farahany, N. A., Greely, H. T., Hyman, S., Koch, C., Grady, C., Pasca, S. P., . . . Song, H. (2018). The ethics of experimenting with human brain tissue. Nature 556(7702), 429–32. doi:10.1038/d41586-018-04813-x.
Fatehullah, A., Tan, S. H., & Barker, N. (2016). Organoids as an in vitro model of human development and disease. Nature Cell Biology 18(3), 246.

Giandomenico, S. L., Mierau, S. B., Gibbons, G. M., Wenger, L. M. D., Masullo, L., Sit, T., . . . Lancaster, M. A. (2019). Cerebral organoids at the air–liquid interface generate diverse nerve tracts with functional output. Nature Neuroscience 22(4). doi:10.1038/s41593-019-0350-2.
Häyry, M. (2018). Ethics and cloning. British Medical Bulletin 128(1), 15–21.
Hostiuc, S., Rusu, M. C., Negoi, I., Perlea, P., Dorobanţu, B., & Drima, E. (2019). The moral status of cerebral organoids. Regenerative Therapy 10, 118–22.
Kagan, S. (2018). For hierarchy in animal ethics. Journal of Practical Ethics 6(1), 1–18.
Kant, I. (1996). The Metaphysics of Morals (M. Gregor, Trans.). Cambridge: Cambridge University Press.
Knoppers, B. M., & Greely, H. T. (2019). Biotechnologies nibbling at the legal ‘human’. Science 366(6472), 1455–7.
Koplin, J. J., & Savulescu, J. (2019). Moral limits of brain organoid research. The Journal of Law, Medicine & Ethics 47(4), 760–7.
Lancaster, M. A., Renner, M., Martin, C.-A., Wenzel, D., Bicknell, L. S., Hurles, M. E., . . . Knoblich, J. A. (2013). Cerebral organoids model human brain development and microcephaly. Nature 501(7467), 373.
Lavazza, A., & Massimini, M. (2018). Cerebral organoids: ethical issues and consciousness assessment. Journal of Medical Ethics 44(9), 606–10.
Lavazza, A., & Pizzetti, F. G. (2020). Human cerebral organoids as a new legal and ethical challenge. Journal of Law and the Biosciences. doi:10.1093/jlb/lsaa005.
Lee, A. Y. (2019). Is consciousness intrinsically valuable? Philosophical Studies 176(3), 655–71.
Mansour, A. A., Goncalves, J. T., Bloyd, C. W., Li, H., Fernandes, S., Quang, D., . . . Gage, F. H. (2018). An in vivo model of functional and vascularized human brain organoids. Nature Biotechnology. doi:10.1038/nbt.4127.
Miller, K. (2019). Biologists are growing mini-brains: what if they become conscious? Leapsmag, 12 November.
Ohayon, E. L., Tsang, P. W., & Lam, A. (2019). A computational window into the problem with organoids: approaching minimal substrates for consciousness. Paper presented at Neuroscience 2019, Chicago.
Ooi, L., Dottori, M., Cook, A. L., Engel, M., Gautam, V., Grubman, A., . . . Targa Dias Anastacio, H. (2020). If human brain organoids are the answer to understanding dementia, what are the questions? The Neuroscientist, 1073858420912404.
Ormandy, E. H., & Schuppli, C. A. (2014). Public attitudes toward animal research: a review. Animals 4(3), 391–408.

Persson, I., & Savulescu, J. (2010). Actualizable potential, reproduction, and embryo research: bringing embryos into existence for different purposes or not at all. Cambridge Quarterly of Healthcare Ethics 19, 51.
President’s Council on Bioethics. (2002). Human Cloning and Human Dignity: An Ethical Inquiry. Washington, DC: President’s Council on Bioethics.
Quadrato, G., Nguyen, T., Macosko, E. Z., Sherwood, J. L., Yang, S. M., Berger, D. R., . . . Kinney, J. P. (2017). Cell diversity and network dynamics in photosensitive human brain organoids. Nature 545(7652), 48.
Regan, T. (1983). The Case for Animal Rights. Berkeley: University of California Press.
Rossi, G., Manfrin, A., & Lutolf, M. P. (2018). Progress and potential in organoid research. Nature Reviews Genetics 19(11), 671–87.
Russell, W. M. S., & Burch, R. L. (1959). The Principles of Humane Experimental Technique. London: Methuen.
Sample, I. (2019). Scientists ‘may have crossed ethical line’ in growing human brains. The Guardian, 21 October 2019. Retrieved from .
Sawai, T., Sakaguchi, H., Thomas, E., Takahashi, J., & Fujita, M. (2019). The ethics of cerebral organoid research: being conscious of consciousness. Stem Cell Reports 13(3), 440–7. doi:10.1016/j.stemcr.2019.08.003.
Shepherd, J. (2018). Consciousness and Moral Status. New York: Routledge.
Singer, P. (1990). Animal Liberation (2nd edn). New York: New York Review of Books (distributed by Random House).
Steinbock, B. (2009). Moral status, moral value, and human embryos: implications for stem cell research. In B. Steinbock (ed.), The Oxford Handbook of Bioethics. Oxford: Oxford University Press.
Stetka, B. (2019). Lab-grown ‘mini brains’ can now mimic the neural activity of a preterm infant. Scientific American, 29 August 2019.
Wang, H. (2018). Modeling neurological diseases with human brain organoids. Frontiers in Synaptic Neuroscience 10, 15.
Warnock, M. (1984). Report of the Committee of Inquiry into Human Fertilisation and Embryology.
Warren, M. A. (1997). Moral Status: Obligations to Persons and Other Living Things. Oxford: Oxford University Press.
Weintraub, K. (2020). ‘Mini brains’ are not like the real thing. Scientific American, 30 January 2020.
Willyard, C. (2015). The boom in mini stomachs, brains, breasts, kidneys and more. Nature 523, 520–2.
Wilson, S. (2002). Indirect duties to animals. Journal of Value Inquiry 36(1), 17.


16: How Much Moral Status Could Artificial Intelligence Ever Achieve?
Walter Sinnott-Armstrong and Vincent Conitzer

Saudi Arabia recently granted citizenship to a robot.1 The European Parliament is also drafting a form of “electronic personhood” for artificial intelligence.2 Some Japanese get so attached to their robots that they give robots funerals and bury them after they break irreparably.3 Many commentators see these recent developments as confused and even dangerous (Gunkel 2012), so we need to think about whether and why future artificial intelligence could or should ever be granted partial or even full moral status. This chapter will begin by defining moral status and arguing that it comes in degrees on multiple dimensions. Next we will consider which conditions need to be met for an entity to have moral status, and we will argue that artificial intelligence can meet a combination of conditions that are sufficient for partial moral status. Finally, we will consider how much moral status an AI system could have.

Walter Sinnott-Armstrong and Vincent Conitzer, How Much Moral Status Could Artificial Intelligence Ever Achieve? In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © Walter Sinnott-Armstrong and Vincent Conitzer 2021. DOI: 10.1093/oso/9780192894076.003.0016

1.  What is Moral Status?

To understand the notion of moral status, consider common moral rules such as don’t kill, don’t disable, and don’t deceive, among others. These rules seem simple, but they cannot be applied to the cases where moral status is at issue until we determine who it is that we should not kill, disable, or deceive. In short, which entities are protected by the moral rules? Another way of posing basically the same question is to ask whether an entity has moral rights, including the right not to be killed, disabled, or deceived. We can also ask whether other people have direct moral reasons not to treat the entity in certain ways or whether it is directly morally wrong to treat that entity in those ways. Asking about moral status is a shorthand way of asking which entities


are directly protected by the four Rs: rules, rights, reasons, and wrongs (cf. DeGrazia 2008, 184).

Entities without moral status can still be protected indirectly by morality. It is morally wrong for someone else to blow up your car not because your car has moral status or rights but rather because you have moral rights not to have your property destroyed without permission, and blowing up your car will harm you. Your car is not wronged, but you are. In contrast, your pet dog has a right not to be burned alive, even if you want to commit that atrocity. That act wrongs your dog instead of wronging you, as in the case of your car. Thus, you and your dog are protected directly by morality insofar as what makes it wrong to harm you or your dog is something about you and your dog in contrast with anyone else who cares about you or your dog. That is what gives you and your dog moral status.

Of course, rules and rights can be violated justifiably, reasons can be overridden, and acts that are morally wrong in some circumstances can be justified in others. To say that an entity has moral status is not to say that it is always immoral to kill, disable, or deceive it. It is only to say that it is directly morally wrong to kill, disable, or deceive it in situations where there is not enough reason to do so.

2.  Does Moral Status Come in Degrees?

Some philosophers claim that each entity simply has moral status or not. One example is Elizabeth Harman, who says, “ . . . moral status is not a matter of degree, but is rather on/off: a being either has moral status or lacks it” (Harman 2003, 183). Harman does admit that a human counts more than an anaconda, but only because death causes a greater loss to the human than to the anaconda. Regarding pain, for example, pain to the anaconda counts less than pain to the human, because the human will remember the pain longer, will suffer more while remembering it, and will have more projects that the pain prevents the human from accomplishing. Harman insists, nonetheless, that equal harms to different beings with moral status create equally strong moral reasons.

We disagree. To see why, imagine a human to whom the pain or other moral wrong means no more than to the anaconda. Perhaps the human will die very shortly after being harmed, so the human will have no memories or projects for the pain to interfere with. However we set up this example, there should be some way to ensure that the human will not lose significantly more


than the anaconda. Nonetheless, in a case where each entity loses the same amount, it still seems more morally wrong to harm the human than to harm the anaconda.

Reflection on examples suggests that moral status comes in degrees.4 In particular, moral status varies in degree along (at least) two dimensions: strength and breadth. To see how moral rights can vary in strength, compare an anaconda, a bonobo, and a human child. If you could not save both the anaconda and the bonobo from death, or if you could not avoid killing one of them, then it would seem immoral to kill or fail to save the bonobo instead of the anaconda. But what if you could not save both the bonobo and the human child or could not avoid killing one of these? Then it seems (except to extremists on animal rights) immoral to kill or fail to save the human child instead of the bonobo. These comparisons suggest that the bonobo’s moral right not to be killed is stronger than any such right in the anaconda but weaker than the human child’s right.

Moral status also varies in breadth, that is, how many rules, rights, reasons, and wrongs protect a certain entity. For example, babies have rights not to be tortured or killed, but they have no rights not to be deprived of freedom. It is not immoral to swaddle them tightly even when their squirming suggests that they want to be free. But it would be immoral to do anything like this to any normal adult human, such as put them in a straitjacket (even for the adult’s own good, if that is why we swaddle babies). Thus, babies have the same right not to be caused pain, but they do not have the same right to freedom as adults.

Conversely, imagine an otherwise normal adult human who cannot feel any pain because of an unchangeable biological deficit.5 This permanently numb adult can still have moral rights to be free and not to be killed or disabled.
However, it makes little sense to say that this permanently numb adult has a moral right not to be caused pain, because it is constitutionally unable to feel any pain. Opponents might reply that the numb adult has other properties that give it a conditional moral right not to be caused pain if it did somehow become able to feel pain. However, there might be no way for that ability to arise without changing the numb human’s biology so much that it becomes a different organism and person. Moreover, a moral right conditional on other circumstances is not a moral right not to be caused pain now, while it cannot feel pain because of how it is currently constituted.6 These degrees of moral status are crucial here, because we will argue that a future AI with certain features can have a moral right to freedom but no moral right not to be caused pain, much like the numb adult or an angel,


according to some theologies. This conclusion is controversial, and we admit our own doubts. But before we can argue for it, we need to address one more preliminary issue.

3.  What is the Basis of Moral Status?

It is not enough merely to announce that an entity has moral status. One must specify why it does. This reason is the basis for its moral status, rights, or protection. The properties that supply this basis must meet certain standards to be fair, explanatory, and not question-begging. We agree with Bostrom and Yudkowsky (2014), who argue for two limitations on which properties can be the basis for moral status. First:

Principle of Substrate Non-Discrimination: If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status. (p. 323)

In short, what matters is not substrate but function. To see why, imagine that a doctor discovers that your best friend is actually a Neanderthal rather than a human. Would that make your friend’s moral status questionable? No, despite genetic differences. Your friend’s moral status would not be in doubt even if the doctor found that her body was made of silicon instead of carbon. What matters is her consciousness, intelligence, and other functions rather than their physical substrate. This point will become crucial when we come to the question of whether computers or AIs can have moral status.

Bostrom and Yudkowsky’s (2014) second principle concerns source or origin:

Principle of Ontogeny Non-Discrimination: If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status. (p. 324)

Again, imagine that your best friend tells you that a mad scientist somehow created her from frog cells, using CRISPR to modify the genes. She would still be intelligent, conscious, and your friend, so she would have full moral status. Thus, origin does not matter to moral status any more than substrate.


Analogously, the fact that AIs come from programmers in a very different way than humans come from parents cannot show that they lack moral status.

Finally, a basis for moral status would be useless in determining which entities have moral status if it were not also empirically determinable (Liao, forthcoming). For example, a theory that AIs as well as fetuses and animals have moral status just in case they have souls cannot help us unless it also provides some way to tell which entities have souls. We need that help, so such theories are practically inadequate, even if they are theoretically defensible.

3.1  Sentience

One popular and plausible proposal for the basis of moral status is sentience, which is the capacity to experience feelings, sensations, emotions, or moods. In arguing for animal rights, DeGrazia (this volume) prominently claims that sentience is necessary and sufficient for moral status. What matters to the issue of animal rights is sufficiency, but what matters regarding AI is necessity. If sentience is necessary for moral status, and if AIs are not sentient, then AIs cannot have moral status.

We doubt that sentience is necessary for all moral rights or status. It would be necessary for a moral right not to be caused pain, since a non-sentient creature cannot feel pain. However, it is not at all clear that or why sentience would be necessary for a moral right to life, freedom, privacy, or speech, since sentience is not necessary for life, freedom, privacy, or speech. To see this point, imagine that a human is prevented from achieving his goals but feels no pain or even frustration, perhaps because he does not know that he failed to achieve what he wanted. His right to freedom still might be violated. Similarly, if the camera on his laptop secretly records him, this violates his right to privacy, even if he never finds out and never experiences any consequences of having been recorded. Again, his right to speech is violated if the government blocks his email (a form of speech) without him ever discovering that his protest messages never got through. Even his right to life can be violated by killing him painlessly in his sleep so that he is never aware of being killed and never feels any pain or frustration. Because such victims’ rights can be violated without any negative feelings that their sentience makes them able to sense, it is hard to see why sentience would be necessary for a right to freedom or those other rights.


Imagine also that we encounter sophisticated aliens who are not sentient at all. A tenuous peace between them and us emerges, and we all manage to get along and work together towards our objectives. It would violate their rights if we broke our promises to them or killed or enslaved them. Granting such basic rights to them seems essential to maintaining our peaceful arrangement with them. If so, at least some creatures without sentience can have some rights, so sentience is not necessary for all moral rights.

The same goes for interests if interests require felt desire or felt frustration when those interests are not met (pace DeGrazia, this volume). In contrast, if interests are merely goals that shape an entity’s behavior, then they might be relevant to freedom, because we cannot restrict the freedom of entities to pursue their goals if they have no goals. But then there is no reason why an advanced AI in the far future could not have goals that shape its behavior, so it could have interests of this kind and then moral rights. DeGrazia might reply that plants have biological goals and interests of this kind, but plants do not have a right to freedom, so how can interests be sufficient for a right to freedom? The solution is either to distinguish the kinds of interests that plants have from the kinds that ground moral status or to hold that goals convey moral status only in the context of intelligence, agency, and other abilities that plants lack.7 We do not and need not claim that interests by themselves are sufficient for a moral right to freedom.

3.2  Multiple bases

The fundamental problem with requiring sentience or felt interests for any moral status is that they are relevant to some moral rights (such as the right not to be caused pain), but they are irrelevant to other moral rights (such as rights to freedom and life). A theory of moral status is better when it cites properties that explain not only which entities have at least some moral status but also which rights they have, which is to say how broad their moral status is. It is doubtful that any single property can explain such different rights. It seems preferable to align different features of the affected entity as the basis of the different rights, reasons, rules, and wrongs that apply to that entity. The right not to be caused pain seems to require sentience, whereas the right to be free seems to require goals together with the ability to make rational choices. Neither of these requirements depends on substrate or ontogeny, and both are empirically determinable, so they meet the main


requirements for bases of moral status, even if they do not provide a unified basis for all kinds of moral status.

4.  Can Future AIs Have the Basis of Moral Status?

We can now answer the question of whether an advanced AI far in the future could meet the conditions for moral status. As we saw, AIs cannot be excluded from moral protection either because they lack cells or were programmed or because they lack felt interests or sentience (interpreted narrowly as the capacity to experience feelings, sensations, emotions, or moods). But plants show that merely having unfelt goals is not sufficient for a moral right to freedom (and, hence, for some degree of moral status) in the absence of other abilities.

Which abilities? Plausible and popular candidates include intelligence, consciousness, freedom, and perhaps also moral understanding. We will not try to determine which of these abilities is individually necessary for a moral right to freedom. What matters here is that they are jointly sufficient, and nothing else (including sentience) is necessary. We will argue that an advanced AI far in the future could have all of these abilities. Since they are jointly sufficient for moral status, showing that AIs can have them all will be enough to show that an advanced AI far in the future could have this much moral status.

4.1  Intelligence

The name “AI” means artificial intelligence, but we should not infer too much from this name. The fact that something is called artificial intelligence does not show that it is really intelligent. Similarly, people often say that an air conditioning system is trying to cool down the house, so they attribute intentions, but they don’t really believe that the system has desires or a model of what it is trying to achieve, much less a concept of the house. This common and useful way of speaking and thinking does not show that air conditioners are really intentional or intelligent.

A reverse mistake is to think that a system is not intelligent just because we know how it works. Computer science students are sometimes assigned to write a simple program for playing a simple game, such as connect-four


276  Walter Sinnott-Armstrong and Vincent Conitzer

(in which players take turns dropping discs from the top and try to get four discs of their color in a row). Even very talented students who play against their own algorithms and understand exactly how those algorithms work find it too difficult to think through all the moves ahead that the algorithm considers. It is more effective for them to imagine that they are playing against another human, and then they end up thinking that the algorithm is trying to achieve certain goals. But we still understand exactly what is going on, at least in principle: a systematic but rote enumeration of all relevant moves. And with this understanding of how the connect-four AI works, it is natural to say that it isn’t really intelligent. This is known as the “AI effect”: once AI researchers figure out how to accomplish a benchmark task, observers dismiss the achievement by saying, “Well, but that’s not really intelligence.”

That assessment would be unfair. The accomplishment certainly tells us something about the nature of intelligence and seems to display some kind of intelligence, even though it is hard to verbalize exactly which kind. Probably in part due to this difficulty, the AI community has not agreed on a single definition of intelligence. The most popular definitions are pragmatic, flexible, and inclusive. A very inclusive definition might say only that intelligence is any ability to acquire and apply knowledge and skills. This definition seems to capture one common meaning, and AI seems able to acquire and apply knowledge and skills. AI can acquire knowledge or information, for example, simply by searching the internet for data. It can apply that knowledge in reaching conclusions, such as predictions about what people will buy or how they will vote. It can acquire skills, such as how to play games, and it can apply that skill by beating humans.
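The “systematic but rote enumeration of all relevant moves” that such a student program performs can be sketched in a few lines. For brevity this hypothetical sketch uses Nim (players alternately take one to three stones, and whoever takes the last stone wins) rather than connect-four; the particular game is incidental, the exhaustive search is the point.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True iff the player to move can force a win from this Nim position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Rote enumeration: a position is winning iff some legal move
    # leaves the opponent in a losing position.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick a move that leaves the opponent in a losing position, if one exists."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; play arbitrarily
```

Nothing here models an opponent’s mind; the program simply enumerates every line of play, which is why a student who understands it perfectly can still find it easier to treat it as an opponent with goals.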
Max Tegmark seems to require more when he defines intelligence as “the ability to accomplish complex goals” (Tegmark 2017, p. 39). Does an AI that beats humans at chess really have winning as its goal? Maybe winning is a goal for the programmer but not for the AI itself. However, one sign that an entity has a goal is that the goal guides its actions. When our goal is to win rather than just to play or have fun, we will adjust our moves in ways that increase the probability of winning, even if those moves make the game less fun and shorter. That is exactly what an AI does when it plays chess. A learning AI may even adjust the weights of its connections so that it will become more likely to win next time. Even though it got this goal from its programmer, and regardless of whether it is conscious of this goal, it is guided by the goal of winning. Thus, AI can fit Tegmark’s definition of intelligence as well.

A much less inclusive way of defining and testing intelligence was proposed by Turing (1950). In the Turing test, a player sends and receives messages


from two sources, one human and one computer. The computer is supposed to display intelligence to the extent that the player cannot tell them apart. If an AI ever passes this Turing test,8 that achievement is supposed to show that the AI has intelligence.

But is the Turing test the right standard for intelligence? One problem with the Turing test is that it requires general intelligence about all topics that the player might ask about. We do not see why this much range is required, since an entity can have intelligence without having all kinds of intelligence. A savant who can quickly calculate the day of the week for every day in the past century is displaying unusual intelligence on that topic, even if his intelligence is very limited on other topics. So passing the Turing test should not be seen as necessary for intelligence.

Is it sufficient? Critics claim that the computer is only simulating intelligence without having any real intelligence. The best-known and most forceful argument for this objection is probably Searle’s Chinese room thought experiment (1980). Searle asks us to imagine a person inside a room with no access to the outside except Chinese characters that others send in occasionally. The person does not read Chinese but has a large instruction manual that tells him which Chinese characters to put out when certain Chinese characters come in. The person does not understand either the characters or what he is doing. Searle argues that understanding is necessary for real intelligence, computers are analogous to the person in the Chinese room, and their programs are analogous to the translation manual, so AI cannot really have understanding or intelligence.

This argument had force against the kinds of computers and programming that existed at the time when Searle introduced his argument. However, the analogy arguably breaks down with the recent progress in machine learning.
AI that uses machine learning can develop new skills that were not programmed into it. A programmer who is a relatively poor Go player could program a computer to beat the world Go champion at the game of Go. The AI achieves success by playing itself millions of times and changing its strategy in accordance with its wins and losses. Changing its strategy can be seen as a way of reprogramming itself.9 This kind of learning makes the AI very different from the instruction manual or the person in Searle’s Chinese room, since those never learn or change in order to better meet their goals. They couldn’t do that without knowing their goals and also knowing when those goals are met, which requires more (and more varied) access to the outside than merely receiving inputs occasionally. And when we add these other elements (especially the


ability to rewrite the translation manual to achieve known goals), then it is not at all clear why we should not see the person as understanding and as intelligent. In principle, we could also see the system or the room as a whole as learning through notes that the manual instructs the human to write down on paper and periodically consult. In this case, the human may neither learn nor understand anything, and the same is true for the manual, but arguably the room as a whole is doing both.

The point is that advanced AI methods make computers or programs more closely analogous to human intelligence. In deep learning, artificial networks resemble (to some degree) what happens in our brains when we learn. These methods have been remarkably effective at achieving complex goals. And just as it is hard for one human to figure out what is going on in another human’s brain, it is also generally quite difficult to assess what exactly is happening in these artificial networks.10 Their achievements, arguable similarity to human brains, and opacity incline people to see such deep learning networks as thinking. Indeed, Geoffrey Hinton, who has been playing a major role in the deep learning revolution, is not shy about ascribing “thoughts” to artificial neural networks.11

It is becoming clear, however, that these networks, at least for now, are not doing exactly the same thing as our brains. One difference is shown by the susceptibility of such networks to so-called adversarial examples. Even when algorithms correctly label most images, changing a few pixels in an image often results in the algorithm completely mislabeling what to us are completely unambiguous images. The algorithm picks up on some statistical pattern in the data it has seen, but often the pattern is more about local texture than about a complete understanding of the image.
In contrast, humans interpret images in light of more global contexts, so they are rarely fooled by such minuscule changes. It is good to remain aware that such algorithms can sometimes obtain impressive performance without much, if any, thinking or understanding.

Consider the example of finding your way through a corn maze. This problem might seem to require a good amount of intelligence, including keeping track of where you are, whether you have been here before, and what you’ve already explored. However, a simple trick that works for many mazes is (spoiler alert!) simply to continue to follow the wall on your left-hand side. If you didn’t know this simple trick, and an AI system discovered it, then you might imagine that the AI models the maze and its own place in the world. That would seem very intelligent and impressive. In reality, however, it is not doing anything like that. It is just following the left-hand wall, for which it doesn’t


even need to remember anything. And its achievement is no more impressive just because no human can figure out how it is doing so well.

The main message is this. When humans use certain cognitive capacities to perform well on a task, and then an AI system performs as well or better on that same task, this still does not mean that the AI has the same cognitive capacities as the human. Sometimes an algorithm does solve a problem in a similar way as we do, but it can be difficult to know when it does, especially in the case of deep learning. In the context of this chapter, which argues that different capacities imply different moral rights, it is therefore crucial not to confuse tasks with capacities. Good performance on a task does not necessarily mean that the system has the underlying capacity that a human uses for the task.

Then again, when we believe that intelligence is what is required for a particular task or right, it is not clear why that intelligence must work in the same way as our human intelligence. Moreover, even if we cannot be certain whether a particular AI is intelligent, it is still possible that some advanced AI far in the future could somehow come to possess our level of intelligence or more. Then it will become hard to say why we have moral rights based on our intelligence, but that AI does not have similar moral rights based on its intelligence. Many more objections could be raised. Nonetheless, we conclude tentatively that advanced AI far in the future can have any kind of intelligence that is required for moral rights and status.
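Written out as a program, the left-hand-wall trick from the maze example makes vivid how little machinery it needs. The maze layout, starting direction, and names below are invented for illustration: at every step the agent just tries to turn left, then go straight, then right, then turn around.

```python
# A hypothetical 5x5 maze: '#' is wall, 'S' start, 'E' exit.
MAZE = [
    "#####",
    "#S  #",
    "# # #",
    "#  E#",
    "#####",
]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # north, east, south, west

def solve(maze, max_steps=10_000):
    """Follow the left-hand wall until reaching 'E'; return its position."""
    grid = [list(row) for row in maze]
    r, c = next((i, row.index("S")) for i, row in enumerate(maze) if "S" in row)
    facing = 2  # start facing south (an arbitrary assumption)
    for _ in range(max_steps):
        if grid[r][c] == "E":
            return (r, c)
        # Keep the left hand on the wall: try left, straight, right, back.
        for turn in (-1, 0, 1, 2):
            d = (facing + turn) % 4
            dr, dc = DIRS[d]
            if grid[r + dr][c + dc] != "#":
                facing = d
                r, c = r + dr, c + dc
                break
    return None  # the trick fails on mazes whose exit is walled off from the boundary
```

Note that the agent stores nothing but its position and heading: no map, no memory of visited cells, no model of itself in the world.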

4.2  Consciousness

Another property that is often said to be necessary or sufficient for moral status is consciousness. Although this claim is common, it is not at all clear which kind of consciousness is supposed to determine moral status. One crucial distinction is between phenomenal and access consciousness (Block 1997). Access consciousness is merely access to information. An entity has access consciousness of an orange when it can see it, grab it, or count it when asked how much fruit is there. It lacks access consciousness of the orange when it does not detect the orange and cannot form beliefs or make decisions in light of the information that the orange is there.

Phenomenal consciousness is more mysterious. An entity has phenomenal consciousness when there is something that it is like to be that entity or have that entity’s experiences. A human who has been completely color blind since birth does not have phenomenal consciousness of the color orange and does


not know what it is like for a human with color vision to see the color orange. Nonetheless, color blind humans can still get access to information about which objects are orange by asking other people, so they can have access consciousness of the color orange without phenomenal consciousness of that color.

Which kind of consciousness matters to moral status? Our answer should not be surprising after the preceding discussion. Different kinds of consciousness matter to different moral rights that constitute different aspects of moral status. Phenomenal consciousness matters to the right not to be caused pain, because the way that pain feels is essential to what pain is.12 Thus, an entity without any phenomenal consciousness of pain cannot feel pain and, hence, cannot have a right not to be caused pain. In contrast, access consciousness is crucial for rational decisions, which require access to information about one’s options. An entity that cannot rationally decide to do or not do a certain act is not really free to do or not do that act, so it makes little sense to grant it a moral right to freedom. That is why babies have a moral right not to be caused pain but no moral right to freedom: they have enough phenomenal consciousness to feel pain but not enough access consciousness to consider the information needed to make rational and free decisions.13

These distinctions enable a more fine-grained position on the moral status of AI. It is not clear how to begin to build phenomenal consciousness into AI. It is also not clear why anyone would do so. What good would it do? Pain is said to have evolved in biological organisms partly in order to detect tissue damage, but an AI could use other methods to detect damage to its parts. Another proposed evolutionary purpose of pain is to prevent organisms from moving injured parts in ways that might slow recovery or lead to re-injury (Klein 2015).
But, again, AI could avoid such dangerous movements by using other sources of information about what not to move. AI would not need pain for these purposes, so they could not provide any reason to build AI so that it could feel pain. Humans might try to create an AI that feels pain in order to experiment on it and thereby learn more about pain, but the ethics of such experiments would be dubious if the AI really did feel pain. Moreover, even if programmers did program pain into an AI or if some advanced AI accidentally came to feel pain, it would still be difficult to tell whether an advanced AI really feels pain, much less the same kind of pain that we do. And even if our kind of pain requires phenomenal consciousness, it would remain questionable whether the AI has phenomenal consciousness until we better understand what this kind of consciousness is, what produces it, and how it affects behavior.


In any case, our main points here are conditional. If an AI cannot feel pain, it will have no right not to be caused pain. But even if an AI does not feel pain or experience any phenomenal consciousness, that is not enough to show that it does not have any moral rights, because it still might have moral rights that are unconnected to phenomenal consciousness, including, possibly, the right to freedom. An AI that does not feel pain could still access information and use it in making choices, seeking goals, and performing tasks. It would then have the kind of access consciousness that is needed for rational decisions.14 That would be a basis for its moral right to freedom. Overall, then, an advanced AI far in the future might have moral status with regard to freedom but not with regard to pain.

Some critics might object (as Frances Kamm did in conversation) that phenomenal consciousness is (obviously?) also required for a moral right to freedom. This position seems plausible to many,15 but it is not immediately clear how to formulate a strong argument for this requirement. Although phenomenal consciousness, including pain, affects people’s choices, that does not mean that people cannot have goals and choose means to those goals in a rational way without phenomenal consciousness. That was the lesson from the numb adult described above. Restricting freedom is morally significant because it prevents agents from achieving their rational goals. This basis for the moral right to freedom does not require phenomenal consciousness. Access consciousness is enough, at least in some cases.

The point is not just that an AI with access consciousness but no phenomenal consciousness can have a derivative right to freedom. Suppose that Alice agrees to go on a lifelong deep-space mission under the condition that an AI system gets to pursue Alice’s goals on Earth unimpeded.
Maybe now the AI system has a right to freedom, but this right derives from Alice’s rights. It is Alice’s right that others not interfere with her AI system. Our claim instead is that an AI with access consciousness but no phenomenal consciousness can have its own rights. To see how, imagine that a highly advanced vacuuming robot is noisily vacuuming the public space where you are currently taking a telephone call. You would like it to go vacuum somewhere else for a while, and promise it that you will finish your call and get out of its way after five minutes. The robot recognizes that it will be able to achieve its goals better if it agrees to your request, so it does. It might even point out to you that it does not have to grant your request, because the policy for the public space is that robots can vacuum anywhere at any time, so you should not break your promise. Does the robot now have a right that you finish and leave within five minutes? We think that it does, that there is no reason to


think that this right depends on its having phenomenal consciousness, and that its right does not derive solely from the rights of those managing the public space. This kind of example suggests that an AI can have certain abilities that are sufficient for certain moral rights even without phenomenal consciousness.

None of this is meant to deny that what is happening on the inside matters. It is important to emphasize this, because the AI research and development community has generally focused on external performance of systems. This community does care about what is happening on the inside, but usually only insofar as the inside affects external performance. In contrast, what is happening on the inside does matter independently of external performance when we are talking about whether AI has certain moral rights. Our thesis here is not that phenomenal consciousness never matters. It does matter sometimes. Our claim is only that not all direct or underived moral rights depend on phenomenal consciousness, so an AI can have some moral rights of its own even if it has no phenomenal consciousness.

4.3  Free will

A moral right to freedom might seem to require more than access consciousness to information needed for rational decisions. Something like free will might also seem necessary. After all, if free will is required for moral responsibility, as many assume,16 and if an entity could have a moral right to freedom without having free will, then it would be morally wrong to restrict that entity’s freedom while that entity would not be morally responsible for restricting the freedom of other moral agents. That seems unfair. Instead of criticizing this line of reasoning, we will argue that an AI can have free will in any sense that matters.

This claim depends, of course, on what free will is. Contemporary philosophers typically assume naturalism and deny that free will or moral responsibility requires any immaterial soul (Mele 2014; Nadelhoffer 2014) or any uncaused action (pace Kane 2007). But then what is necessary for free will? Philosophers disagree about the answer. One of the most popular views is that agents act of their own free will when and only when their decisions and actions result from a mechanism that is responsive to reasons for and against those decisions and actions (Fischer 2007). To say that a mechanism is responsive to certain reasons is simply to say that the mechanism has access to information about the reasons and reacts appropriately, so that it (or the agent who uses that mechanism) does


an action when there is overriding reason to do it and does not do the action when there is overriding reason not to do it. If such responsiveness is enough for free will, then an advanced AI far in the future could have both. We already saw that AI can have access to information about reasons for and against decisions and actions, and it can adjust its behaviors to that information. Thus, AI can have reasons-responsiveness and free will, according to this theory.

Another popular theory proposes that agents act of their own free will when their actions mesh properly not only with their first-order desires to do those actions but also with their second-order desires to have those first-order desires (Frankfurt 1988). This theory implies that a drug addict who is happy to be an addict takes drugs freely, whereas an addict who regrets and fights against addiction does not take drugs freely. An advanced future AI could also meet these conditions for free will. If first-order desires are just dispositions to behave in certain ways, and if an AI can reprogram itself to change its dispositions so as to better achieve its goals, then we can understand its disposition to reprogram itself as a second-order desire to change its first-order desires. This structure is exactly what is required for free will, according to this theory.17 More generally, an advanced AI far in the future will be able to satisfy any conditions required by any plausible naturalistic theory of free will.18

Opponents still might insist that reasons-responsiveness and higher-order mesh are not sufficient for real free will, perhaps because real free will requires a soul or uncaused actions, which AI cannot have. In response, we would deny that such non-naturalistic free will is necessary for moral responsibility.
Reasons-responsiveness and higher-order mesh are enough for an AI to be morally responsible for restricting the freedom of other moral agents, so it would be unfair not to admit that it would also be morally wrong to restrict that AI’s freedom. What really matters here is moral responsibility rather than free will.
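As a rough illustration (not a serious model of agency), the two naturalistic conditions just discussed can each be sketched in a few lines; all of the reasons, weights, and acts below are invented for the example.

```python
def responsive_choice(reasons_for, reasons_against):
    """Reasons-responsiveness (after Fischer): do the act exactly when the
    accessible reasons in favor override the reasons against."""
    return sum(reasons_for) > sum(reasons_against)

class MeshAgent:
    """Higher-order mesh (after Frankfurt): first-order dispositions plus a
    disposition to revise those dispositions when acting on them goes badly."""

    def __init__(self, dispositions):
        self.dispositions = dict(dispositions)  # act -> first-order strength

    def act(self):
        # Act on the strongest first-order disposition.
        return max(self.dispositions, key=self.dispositions.get)

    def revise(self, act, outcome_was_good):
        # The second-order "desire about desires": strengthen or weaken a
        # first-order disposition in light of how acting on it went.
        self.dispositions[act] += 1 if outcome_was_good else -1
```

An agent of this kind that repeatedly acts on a disposition, finds the results bad, and weakens that disposition exhibits exactly the structure the two theories require: it reacts appropriately to reasons, and its higher-order revision reshapes its first-order desires.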

4.4  Moral understanding

Another common requirement on moral responsibility is the ability to understand or appreciate moral reasons, rights, rules, and wrongs. It is unfair to hold people responsible for doing something wrong when they could not have known that what they did was wrong. This requirement on responsibility has been assumed by most legal insanity defenses since the 1500s (Sinnott-Armstrong and Levy 2011).


Will any advanced AI ever be able to tell right from wrong? We saw in the preceding section that an AI can be responsive to reasons, and that responsiveness could extend to moral reasons. However, responsiveness to reasons is not yet enough to ensure understanding of those reasons as reasons. So we still need to ask whether any advanced AI could ever understand moral reasons.

How can we tell whether other humans (such as students) understand any proposition? One common method is to ask them to draw inferences from and give reasons for that proposition. The same standard holds in morality. When someone knows that she morally ought to keep her promises in general, knows when (in which cases) she morally ought to keep her promises, knows why she morally ought to keep her promises, and knows what follows for interpersonal relations and punishment from the proposition that she morally ought to keep her promises, then these abilities together are enough evidence that she understands the moral reasons for keeping promises.

These conditions on understanding could in principle be met by an advanced future AI. Our team at Duke is currently trying to program morality into a computer or AI (Freedman et al. 2020). Our method is to determine which features humans take to be morally relevant and how those features interact in order to produce human moral judgments. Our machine learning techniques then apply these human views about morally relevant features and their weights, so they should be interpretable, at least in principle. The resulting program should be able to predict which actions humans judge to be morally wrong or not and also able to specify why those actions are morally wrong or not by citing the very same features of those actions that humans themselves would give as reasons for their moral judgments.
We might not be able to understand how all of these features interact or precisely how to define each feature, but we should at least be able to understand roughly which features play a role in the model, because those features came from surveys of humans.

So far, our team is only in the initial stages of developing this method in a pilot study of kidney transplants. We have a long way to go. Still, if our method succeeds eventually, then the resulting artificial intelligence will be able to tell us that an act is wrong, when it is wrong, why it is wrong, and what follows from the fact that it is wrong. These are the abilities that show moral understanding in humans, so they will be enough to show that the resulting AI also has moral understanding. An AI with all of these abilities will understand the what, when, and why of moral reasons. Of course, an AI may not understand these as deeply as human beings do, perhaps because the AI


understands the world in general less well, but this is sufficient for it to have some understanding.

Critics might reply (as Frances Kamm did in conversation) that moral responsibility requires not only moral understanding but also phenomenal consciousness of moral wrongness. However, as we argued, it is not clear why phenomenal consciousness is required for all moral rights. Moreover, we do not even know or ask about agents’ phenomenal consciousness of moral wrongness—what it is like for them to recognize an act as wrong19—before we hold them responsible. This leaves no barrier to our claim that AI in the far future might have the kind of moral understanding that is relevant to moral responsibility and rights.
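A minimal sketch can show the shape of the feature-weighting approach described above: morally relevant features and weights drive a wrongness verdict, and the very same features are cited as the reasons. The features and weights here are invented placeholders, not anything from the actual kidney-transplant pilot study.

```python
# Hypothetical morally relevant features with hypothetical weights:
# positive weights count toward wrongness, negative weights against it.
FEATURE_WEIGHTS = {
    "breaks a promise": 2.0,
    "causes serious harm": 3.5,
    "victim consented": -2.5,
    "prevents greater harm": -3.0,
}

def judge(act_features):
    """Return (is_wrong, reasons): a verdict plus the features that drove it,
    strongest first."""
    score = sum(FEATURE_WEIGHTS[f] for f in act_features)
    is_wrong = score > 0
    # Cite as reasons exactly those features pushing toward the verdict.
    reasons = sorted(
        (f for f in act_features if (FEATURE_WEIGHTS[f] > 0) == is_wrong),
        key=lambda f: abs(FEATURE_WEIGHTS[f]),
        reverse=True,
    )
    return is_wrong, reasons
```

Because the verdict and the cited reasons come from the same interpretable features, such a model can say that an act is wrong, when it is wrong (which feature combinations), and why it is wrong, which is the kind of evidence of understanding the chapter describes.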

5.  Conclusion

Our overall argument is simple: An advanced AI far in the future could have (the relevant kind of) intelligence, (access) consciousness, (naturalistic) free will, and (functional) moral understanding. Anything with all of these properties can have some moral rights. Rights (along with reasons, rules, and wrongs) are all there is to moral status. Therefore, a future AI can have some degree of moral status.

We still do not know how much breadth of moral status an AI could have. We suggested that a future AI could have a right to some kinds of freedom (and against interference with such freedom), even if it has no right not to be caused pain. This is the opposite of a human baby, which has a right not to be caused pain (or at least not to be tortured) but not a right to freedom, such as to move where it wants. The reason is that the baby can feel pain, whereas the AI cannot (we are assuming for now20); and the AI can access information to make rational choices, whereas the baby cannot. Their differing abilities and vulnerabilities determine their rights.

This simple contrast leaves a host of questions about other rights. What about: A right to life and to defend itself? A right to nutrition (electricity) and to health (parts)? A right to speak or to associate? A right to education or updating? A right to procreate or to get married? A right to vote or to serve on juries?


Some of these issues are hard to resolve, in part due to our lack of understanding of phenomenal consciousness. Despite these and many other open questions, our argument is enough to show that an AI can have moral status with some but not unlimited breadth, even if we do not know exactly how broad it is.

Notes

1. .
2. .
3. .
4. This position has been held by Buchanan 2009, DeGrazia 2008, Persson (this volume), and others.
5. As Frances Kamm pointed out, another example might be an angel who can make rational decisions but has no body, so it cannot feel pain or be killed.
6. The numb adult retains a moral right that others not harm it by damaging its tissues even if it lacks a right not to be caused the pain that indicates or prevents that harm in normal humans (Klein 2015).
7. Brain parts also lack such abilities. Suppose Lok deliberates and decides not to have any dessert after dinner tonight, but some subconscious part of his brain makes thoughts pop up—how wonderful ice cream would taste, why it wouldn’t be so bad to have a little, and how easy it would be to get some. These thoughts make his hand reach for the ice cream in an inattentive moment. This subconscious part of his brain might seem to have a goal and an intelligent method of achieving it all on its own. Does it (as opposed to Lok as a whole person) have a right to freedom? We don’t think so, partly because Lok has a right to suppress it. Choice and agency apply only to a person or entity as a whole instead of its parts, so it is Lok rather than a lobe of his brain that has a right to freedom.
8. Despite some claims to have passed a Turing test (), we doubt that any computer today could pass a serious Turing test with plenty of time, knowledgeable judges, and human contestants who are motivated to win.
9. AI with this type of learning does not reprogram itself in the way humans program a computer. It does not write new code. What it does is adjust the weights of connections between nodes in its network, which changes the probabilities that activation of one node will affect others. AI systems come closer to what we normally think of as programming when they make use of meta-learning and architecture searches. Thanks to Nick Bostrom for this point.
10.
Is this opacity the same as we saw in the connect-four program? Well, not quite. In the case of neural networks (unlike the connect-four program), it is often hard for the AI


programmers and researchers to get an accurate idea, even in the abstract, of what exactly the network is doing. Most of the time, their focus is on how well the network performs rather than how it performs so well.
11. .
12. This point might be challenged by theories that understand pain in terms of preferences or imperatives (Klein 2015) instead of phenomenology, but they make phenomenal consciousness even less relevant to moral status.
13. Of course, we can sometimes be justified in restricting the freedom of people, such as teenagers, to prevent them from hurting themselves or others, but that shows only that their right to freedom can be overridden. Teenagers are still different from babies, because teenagers have rights that need to be overridden by strong reasons, whereas we can swaddle infants for very little reason or even no reason at all other than our own convenience or tradition. That suggests that babies have no right to freedom, whereas teenagers do.
14. The best current theory of access consciousness in biological organisms is the global workspace theory of Dehaene (2014). On that theory, consciousness emerges from neural loops that cause massive increases in brain activity. Exactly the same kind of wiring could be built into AI.
15. Including one of us (Conitzer).
16. This assumption is denied by semi-compatibilists (Fischer 2007; Vierkant et al. 2019), but we will not question it here.
17. Higher-order and mesh theories are often understood as kinds of deep-self theories (Sripada 2016). Critics might object that AI cannot have a self, much less a deep self. However, deep-self theories are naturalistic and do not require any metaphysically extravagant kind of self. All they require are cares, desires, or commitments of a kind that an advanced AI far in the future could have.
18.
An innovative and plausible naturalistic account of free will has recently been developed recently by Christian List (2019), who explicitly says that strong AI could have free will on his account. 19. Indeed, we doubt that there is any unified phenomenology of moral judgments (Sinnott-Armstrong 2008). 20. If a future AI did somehow become able to feel pain, for whatever reason, then it might gain a right not to be caused pain. But that would give it more moral status instead of less.

References

Block, N. (1997). “On a Confusion about a Function of Consciousness.” In The Nature of Consciousness, ed. N. Block, O. Flanagan, and G. Guzeldere. Cambridge, Mass.: MIT Press.
Bostrom, N., and Yudkowsky, E. (2014). “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, ed. K. Frankish and W. M. Ramsey, pp. 316–34. New York: Cambridge University Press.

288  Walter Sinnott-Armstrong and Vincent Conitzer

Buchanan, A. (2009). “Moral Status and Human Enhancement.” Philosophy and Public Affairs 37 (4): 346–81.
DeGrazia, D. (2008). “Moral Status as a Matter of Degree?” Southern Journal of Philosophy 46 (2): 181–98.
DeGrazia, D. (this volume). “An Interest-Based Model of Moral Status.”
Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes our Thoughts. New York: Penguin.
Fischer, J. M. (2007). “Compatibilism.” In Four Views on Free Will, by J. M. Fischer, R. Kane, D. Pereboom, and M. Vargas, pp. 44–84. Malden: Blackwell.
Frankfurt, H. (1988). “Freedom of the Will and the Concept of a Person.” Reprinted in H. Frankfurt, The Importance of What We Care About. Cambridge: Cambridge University Press.
Freedman, R., Schaich Borg, J., Sinnott-Armstrong, W., Dickerson, J. D., and Conitzer, V. (2020). “Adapting a Kidney Exchange Algorithm to Align with Human Values.” Artificial Intelligence 283: 103261. DOI:10.1016/j.artint.2020.103261.
Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, Mass.: MIT Press.
Harman, E. (2003). “The Potentiality Problem.” Philosophical Studies 114: 173–98.
Kane, R. (2007). “Libertarianism.” In Four Views on Free Will, by J. M. Fischer, R. Kane, D. Pereboom, and M. Vargas, pp. 5–44. Malden: Blackwell.
Klein, C. (2015). What the Body Commands: The Imperative Theory of Pain. Cambridge, Mass.: MIT Press.
Liao, S. M. (forthcoming). “The Moral Status and Rights of Artificial Intelligence.” In The Ethics of Artificial Intelligence, ed. S. M. Liao. New York: Oxford University Press.
List, C. (2019). Why Free Will is Real. Cambridge, Mass.: Harvard University Press.
Mele, A. (2014). “Free Will and Substance Dualism: The Real Scientific Threat to Free Will?” In Moral Psychology, Volume 4: Free Will and Moral Responsibility, ed. W. Sinnott-Armstrong, pp. 195–208. Cambridge, Mass.: MIT Press.
Nadelhoffer, T. (2014). “Dualism, Libertarianism, and Scientific Skepticism about Free Will.” In Moral Psychology, Volume 4: Free Will and Moral Responsibility, ed. W. Sinnott-Armstrong, pp. 209–16. Cambridge, Mass.: MIT Press.
Persson, I. (this volume). “Moral Status and Moral Significance.”
Searle, J. (1980). “Minds, Brains and Programs.” Behavioral and Brain Sciences 3: 417–57.
Sinnott-Armstrong, W. (2008). “Is Moral Phenomenology Unified?” Phenomenology and the Cognitive Sciences 7 (1): 85–97.
Sinnott-Armstrong, W., and Levy, K. (2011). “Insanity Defenses.” In The Oxford Handbook of Philosophy of Criminal Law, ed. John Deigh and David Dolinko, pp. 299–334. New York: Oxford University Press.
Sripada, C. (2016). “Self-expression: A Deep Self Theory of Moral Responsibility.” Philosophical Studies 173: 1203–32.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf.
Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind 59 (236): 433–60.
Vierkant, R., Deutschländer, R., Sinnott-Armstrong, W., and Haynes, J.-D. (2019). “Responsibility without Freedom? Folk Judgements about Deliberate Actions.” Frontiers in Psychology, Cognitive Science Section, 10, Article 1133.


17
Monkeys, Moral Machines, and Persons
David R. Lawrence and John Harris

David R. Lawrence and John Harris, Monkeys, Moral Machines, and Persons In: Rethinking Moral Status. Edited by: Steve Clarke, Hazem Zohny, and Julian Savulescu, Oxford University Press. © David R. Lawrence and John Harris 2021. DOI: 10.1093/oso/9780192894076.003.0017

1. Introduction

The task of taking seriously the question of who or indeed what might have moral status goes immediately to the heart of moral philosophy, broadly conceived. The scope and limits of moral status, and what flows from some dimensions of it, moral agency, are the subjects of this chapter.1 A much neglected but increasingly urgent issue is the extent to which artificial intelligences (AI) may have, or come to have, both moral status and moral agency.

The idea of creating the so-called ‘moral machine’ (a form of AI which allegedly makes moral choices (Harris 2020/2019/2018)) is seductive for many reasons. Its increasing proximity promises so much—smart servants at home and at work, companions with whom we can really interact and who/which are genuinely interesting for themselves, and objects from cars to fridges which really understand what we want. Whilst it should be noted that the moral machine concept has a rather longer history,2 this flurry of attention has likely stemmed from the emergence of the driverless car3 and endless news and popular media articles speculating about the potential for the car ‘choosing’ who to protect and who to harm in a collision (Morris 2016). Whilst these are certainly issues of some importance as we begin to see such vehicles on our roads and hear of their roles in tragic deaths (BBC 2018), we are struck by the idea that a true moral machine has rather more significant implications.

Here, we highlight an alternate approach to thinking about how we might treat (or ought to treat) a machine capable of moral choice, using a model with which we are already very familiar. To this end, we work with the following assumptions:

That moral agency requires self-consciousness, because it requires the ability to pose to oneself, and deliberate upon, questions about the moral status of oneself, or about the moral status of other creatures or things. For example, the capacity to feel pain indicates to moral agents the responsibility not to inflict that pain on individuals with that capacity.

That moral status is acquired by having something about oneself, or by an individual or a thing having something about itself, that can make moral claims on those with moral agency. Moral status on the face of things can be quite low level. For example, sentient creatures have moral status to the extent to which their sentience, their nature, makes moral claims on moral agents.

This raises the question of whether inanimate objects can have moral status. It seems that they can. Things that are significant to moral agents may be said to have moral status to the extent to which the nature of that significance gives those agents moral reasons to preserve or protect the objects; for example trees, artworks, human remains, or objects of historical significance. However, regard for this moral status is nebulously different from that in which we hold human persons, as we will explore.

The term ‘moral machine’ itself tends to be used quite uncritically, and is in danger—in popular parlance (Tripathy 2020) and in some sectors of the academy (Awad et al. 2018)—of becoming a shorthand for the kind of decision taken in a driverless car facing a ‘trolley problem’ situation, when there isn’t a true moral decision being made at all (Foot 1967). Where discussion touches upon genuine moral decision-making, certainly in the popular media and consciousness, this tends to fall back into the familiar ‘Asimov’s Laws’ tropes, endlessly popular on screen (Marcus 2012).4 The present authors are no less guilty of wading into the topic without explicitly establishing what exactly a moral machine might be—rather, we have discussed whether it is possible to create anything worthy of the name.
In brief, we are sceptical, at least with present technologies—as Harris has said, ‘If a vehicle was really autonomous, instead of simply metaphorically autonomous, it might do some deciding, but so-called autonomous vehicles are incapable of deciding; they will merely do some programmed selecting between alternatives’ (Harris 2020/2019/2018). Lawrence (2017) discussed this in the context of the humanoid machine:

An android that is not self-aware is simply a drone, operating solely to parameters preset . . . by its makers and operators. The line here is blurred between ‘being’ and ‘object.’ We do not have these existential concerns about camera drones or about the sophisticated robots that build our cars. . . . Although it is still possible for a being to have agency without being a moral agent, this only extends to a strict ability to act within an environment. If putative androids lack a sense of self, they could still undertake ‘goal-directed action,’ which is more or less what we might expect from any ‘smart’ device that we have today.

As difficult as it may be to achieve, however, we consider the moral machine an important concept and prospect; and one which deserves attention sooner rather than later. The literature tends to be unclear as to whether it is addressing true moral agents—entities which are cognizant of their actions and their effects, and able to discern the moral value of their actions—or that which might be better termed a moral actor. An ‘actor’, here used as a term of art, is an entity which might commit morally consequential acts but which crucially does not necessarily possess the higher cognitive capacities that permit cognizance of those consequences. ‘Actors’ are not necessarily agents.

To cleave to our driverless car example, knocking down a child is most certainly a morally charged act; an act which falls within the moral domain and is open to moral evaluation (even if the ‘actor’ is not, or even if the identity of the real actor as distinct from the proximate ‘actor’ is unclear), but it is one done unthinkingly and without regard to the moral dimension. Even if the car can be programmed to recognize that in situation X it should collide with subject Y rather than subject Z, the moral choice and consideration of the action was made by the programmer via an algorithm (or, more likely, by their managers or the owners of the corporation which made the machine). The car, here, is merely a conduit and instrument of a real agent. Another example of this kind of moral ‘actor’ resides in the case of the security robot which knocked over and hurt a toddler, albeit accidentally (Vincent 2016). These ‘actors’ are sometimes referred to as ‘artificial moral agents’,5 wherein one may wish to categorize them as an entity that could in future be capable of true agency. Wallach and Allen qualify this by arguing that, where a moral agent must possess capacities similar to those denoting consciousness, ‘if [an entity’s] architecture and mechanism allow it to do many of the same tasks as human consciousness, it can be considered functionally conscious’ (2010, 68).6

2. The Moral Machine Problem

It is this potential for consciousness—or possession of consciousness—that seems most relevant to the type of moral machine with which we will be concerned here. The relationship between consciousness and moral status is further explored below, but it is well established that ‘full’ consciousness of the type we (writers and readers of this chapter alike) enjoy is a prerequisite for ‘full’ moral status, which is to say personhood (Midgley 1985; Harris 1985).

The unconscious type of machine, the moral ‘actor’, is a present reality which societies seem relatively content to deal with using existing corporate social responsibility legislation and tort law, because we do not ascribe them moral status; or at least not a status worthy of individuated respect.7 Whilst we human ‘actors’ have so far proven poor at applying these existing instruments to govern the use of artificial moral ‘actors’, the tools are there and conceivably could be used effectively. However, these tools are not applicable—or at least not obviously so—to the ‘true’ artificial moral agent, that which is capable of making a moral choice. They are at best insufficient and at worst may be actively harmful to any such novel sentient moral agent—it does not seem appropriate to allow ‘something so epochal as this . . . to be left to private concerns guided by profit margins; rather it should be subject to collective morality and ordre public’ (Lawrence and Morley 2021).8 As such, we will assume that it is the ‘true’ moral agent that is (or ought to be) the subject of concern around moral machines.

A ‘moral machine’ properly so called, a truly self-conscious AI, is not a machine which we humans have as yet encountered.9 Therefore, discussion of the moral machine is endlessly mired in debate over what entities could ever possess agency comparable with that of competent humans, and rarely does the conversation reach beyond this question. In some sense, the question might be moot.
Per Nagel we might say that we can never know truly whether another entity is an agent (Nagel 1974), but it isn’t altogether clear that this means we should just discount them. Instead, it implies that we need some signifier which can indicate to us the moral status that entitles the machine to moral consideration. A machine which cannot talk to us can only do this by its behaviour. If an entity cannot demonstrate that it is a moral agent, then obviously we cannot know it to be one—agency requires that the agent is able to carry out its (appropriately considered) action (McKenna 2012) and to demonstrate that its decision is the result of a deliberative process; or perhaps more modestly that it could have been the result of such a process. We do make exceptions to this where we know, for example, that a severely cognitively disabled person would be a moral agent but for their disability; but here we are concerned with novel entities for which we have no reference and no ‘baseline’ upon which to found our assumptions.10 Where the behaviours of our machine do not present it as capable of moral choice and action, we would be pushing the boundaries of the precautionary principle (Jordan and O’Riordan 2004) to suggest that it too should be treated as an agent. It may be the case then that if something’s agency is unknowable, we cannot worry about it further than we might any other sentient—but not sapient—creature; and that such an entity would not qualify as a moral machine.

The purpose of the general debate around moral machines11 is presumably to ‘solve’ the problems posed by their potential existence, if indeed that is a problem, and to come up with appropriate responses. These problems are likely to include whether they should be protected by ‘human rights’—would ‘deactivation’ or ‘resetting’ an agential machine violate its right to exist? Should self-replication be protected or prevented? What of the responsibilities of such beings: should a sapient machine be charged with murder if it kills a Homo sapiens, or destroyed as a dangerous animal? We humans use the law to (inter alia) create incentives and disincentives to action. What could we rely on as likely to provide incentives and disincentives to a machine? Can moral machines be owned (or themselves own); how will Intellectual Property law apply? We must also consider entities possessing lesser cognitive capacities—animal-equivalent beings. It is not clear what role capacity should play, nor how we would treat a ‘child’ machine that might yet grow and mature.

Many of these issues may be, if not solved, at least approached by determining the moral status of the entity. This is in some sense the same problem as we have when we consider the moral status of the human, anywhere from zygote to permanent vegetative state. The only difference being that, when considering non-human intelligences, we cannot solely rely on existing law and practice—we must develop it anew to cope with the increasing complexity.

If we fundamentally cannot know the nature of some future machine capable of true moral agency, then we certainly can’t effectively formulate guidelines, regulations, or any kind of cohesive social strategies to manage and engage with them—if these guidelines need to be specific to a given entity. At best we could try to offer educated speculation as to the nature of such a machine, if indeed one should come to pass or come to be produced, and base our case for a reaction to it on these assumptions. This does not seem to be much of a solid basis for developing our legislative and social responses to a new kind of moral agent—we might wildly overestimate or indeed underestimate their potential capabilities (and contingent degree of status), and instigate draconian or overly lax rules; which themselves have every chance of causing harms in various ways. It may be possible instead to develop such things without their being dependent on knowing the nature of our moral machines—after all, we ignore specificity in so many types of regulatory structure already, inter alia because specificity is paralysing. It is simply impossible to have rules which specifically target every possibility—particularly when those possibilities require future developments.

Instead, we can base our assumptions on generalities. We know a moral machine must be a moral agent; we know what a moral agent is and how we identify one generally. We know that we are moral agents and we know the types of protection and obligation that we give to ourselves. Crucially, we also know of a non-human entity that we generally consider might be a moral agent, which we tend not to treat commensurately. Non-human great apes are a good model for the kind of traits we might attribute to a moral machine of the type discussed above; and it is telling that it is far from clear whether our treatment of apes or indeed our interactions with them pass muster as respecting their moral personality.

3. Moral Models

There is a wealth of evidence that great apes constitute moral ‘actors’, displaying moral behaviour, empathetic behaviour, and more.12 We can extend this to the whole genus of Hominidae—including our ancient ancestors. We know, for instance, that such ‘behaviours include cooking in Homo erectus, as well as complex social groupings with hunter-gatherer behaviour and care provision for the infirm’ (Boehm 1999; Lawrence 2018).

There have been a number of experimental investigations which seem to provide evidence both for and against ape theory of mind, which we here use to denote a mind capable of perceiving the existence of other minds comparable to itself. For example, chimpanzees can understand inferences made by other members of their species (Schmelz, Call, and Tomasello 2011) and deliberately act to deceive (Melis, Call, and Tomasello 2006); whilst also failing both the ‘goggle test’ (Penn and Povinelli 2007) and to interpret the gaze of species other than their own (Povinelli, Nelson, and Boysen 1990). There is also recent scientific evidence for theory of mind which may go some distance to answering a debate which has lasted over forty years (Premack and Woodruff 1978), questioning how far we can infer anything from behaviour, by relying wholly on the subject’s ability to understand what another subject can know (Kano et al. 2019). In this experiment, it appears that the chimpanzees, bonobos, and orangutans tested were able to anticipate the actions of human actors (in the theatrical sense) based on what they knew about what the actors had seen and not seen—using their own experiences to interpret the perspectives of another being (Williams 2019).

Theory of mind, here, is taken to be important for an entity to be a true moral agent, and therefore to warrant ‘full’ moral status (Gray, Young, and Waytz 2012; Fadda et al. 2016). The possession of representative mental states is necessary for our subject—machine or otherwise—to understand those who might be affected by its actions, and that they can be affected adversely. We acknowledge that there has been long and extremely deep debate around the necessity of representation for agency, but the arguments suggesting it is not necessary broadly do not provide for ‘true’ or ‘full’ moral agency of the type with which we are here concerned;13 and furthermore tend to rely on the idea that non-human agents do not possess such representation (Davidson 1982). If we can trust the apparent results of the recent theory of mind experimentation and assume that great apes do possess it, this seems like good reason to disregard the latter objection.

To a lesser extent, aspects of moral agency appear to be true of other species including cetaceans—particularly dolphins (DeGrazia 1997)—though these animals are far harder to communicate with and to test in controlled conditions. It may well be that one of these species might rank above great apes as the best model creature available to us, but until our ability to understand and study them increases we can only speculate. This is not a reason, of course, not to treat them in ways we think a moral agent might warrant—the risk of causing harm to such a creature seems like a risk worth avoiding, not least for moral reasons, as distinct from prudential ones.
It is also worth noting the widespread discussion of whether (non-human) great apes ought to enjoy ‘human rights’, or at least rights analogous to those we humans enjoy—the most notable proponents of which are the Great Ape Project, led by, inter alia, Peter Singer and the primatologist Jane Goodall (Cavalieri and Singer 1996). This is not the venue to recount this fascinating area of philosophy, but it is worth a brief diversion into the legal cases that have arisen from it, as they seek inadvertently to address quite fundamental questions about moral status and natural personhood that are highly relevant to our conundrum here.

A number of cases have been brought on behalf of various apes—primarily chimpanzees—seeking to establish habeas corpus14 for the purposes of challenging their being kept in captivity. If the apes were granted writs of habeas corpus, the institutions holding them for research purposes would be forced to defend themselves against charges of unlawful detention.


The major components of law as it relates to agency include obligations, liabilities, and rights which insulate the agent from various harms, so the protection of self-ownership in this way in turn would imply that the apes can be seen as in possession of agency (Lawrence and Brazier 2018). In the US, such bids have been unsuccessful, though for instrumental reasons. In The People of the State of New York ex rel the NonHuman Rights Project on behalf of Tommy v Patrick Lavery (2014) the court dismissed a claim on the basis of a lack of precedent for treating chimpanzees as persons, as they considered necessary for the purposes of habeas corpus. However, this lack did not preclude consideration of the claim, and the idea of ape personhood here was rejected because:

. . . unlike human beings, chimpanzees cannot bear any legal duties, submit to societal responsibilities or be held legally accountable for their actions. In our view, it is this incapability to bear any responsibilities and societal duties that renders it inappropriate to confer upon chimpanzees the legal rights—such as the fundamental right to liberty protected by the writ of habeas corpus—that have been accorded to human beings . . .

and not because Tommy could not be a moral agent. It strikes us as somewhat problematic to suggest that lack of ability to discharge obligations or duties is sufficient reason to deny the protection due to a moral agent. After all, we grant rights without responsibilities to children—and previously we have also held slaves to responsibilities without rights (Brassington 2014), or accepted that slaves can and do discharge obligations, whether held to those obligations or not. Homer, nearly 3,000 years ago, gave a number of poignant instances. Eumaeus, one of the slaves of Odysseus, clearly accepted and discharged responsibilities to a stranger who seemed to be an old and defenceless man:

My dogs almost ripped you apart, old man! You would have brought me shame, when the gods are hurting me already. I am in mourning for an absent master, raising his pigs for other men to eat.

That ‘old man’ was in fact Odysseus himself, transformed by the Goddess Athena into the shape of a ‘defenceless old man’ (Homer 2018). We can also assume, in fact, that chimpanzees have a sense of duty, or at least social order—constraining their behaviour to make group life worthwhile (de Waal 2014). It is simply the case that chimpanzees don’t have, or cannot exhibit to us, the same regard for the duties we might like them to respect as we do!

Interestingly, there have been two successful cases—those of the orangutan Sandra and the chimpanzee Cecilia, both in Argentinian courts. These cases are notable less for their individual successes than for the specific language deployed. In the case of Sandra, a writ of amparo (an instrument of protection for individual rights) was granted and Sandra was declared ‘una persona no humana’, or a ‘non-human person’; which is not a classification with any precedent in the Argentinian civil code (AFADA v GCBA 2015). A similar status—specifically that of ‘non human legal person’—was granted to Cecilia (AFADA 2016), based nominally on instrumental value to the state of flora and fauna but through an appeal ‘to a moral conception of what it means to be human’ (Jowitt 2021). Sandra and Cecilia were granted human-equivalent rights because they were viewed as sufficiently morally valuable beings.

As far as the moral status of persons goes, John Locke’s account in An Essay Concerning Human Understanding (1979) has never been surpassed:

a thinking intelligent Being, that has reason and reflection, and can consider itself as itself, the same thinking thing in different times and places; which it does only by that consciousness, which is inseparable from thinking, and as it seems to me essential to it.

According to Charles Taylor (1985), alongside one present author (Harris 1985), the possession of moral agency is a key component of the level of moral significance that we class as personhood:

A person is a being with a certain moral status, or a bearer of rights. But underlying the moral status, as its condition, are certain capacities. A person is a being who has a sense of self, has a notion of the future and the past, can hold values, make choices; in short, can adopt life-plans. At least, a person must be the kind of being who is in principle capable of all this, however damaged these capacities may be in practice.

More plainly, if an ape proves to be capable of certain behaviours and in possession of certain traits that we consider qualify something for a certain level of moral status, then it warrants protection. One of the chief of these is demonstrating that it is a moral agent, precisely in the way in which the moral machines with which we began this piece are imagined to be.

4. Ape-Machines

If we think it may be possible to one day have a machine that can make moral choices (as opposed to simply performing morally significant tasks), then there is clearly an assumption being made that possession of true moral agency, even by a creature very unlike ourselves, does not disqualify a machine either from being a moral agent or from being a person (albeit a non-human person). It seems as though one of the chief aversions to the idea of the moral machine is less about the practical feasibility of the thing than a sense of repugnance that a non-human, inorganic, artificial entity could be our moral equal or even make legitimate moral claims upon us. In many ways this is a sentiment we might recognize from similar debates around the so-called posthuman (Harris 2007), the (natural or directed) evolution of a member of our species to the point where she is so radically enhanced that she is possibly no longer a ‘human’ person.

If, however, we are chiefly concerned with behaviours and capacities—which we have established are the only way we could so far make a reasonable judgement about the machine’s moral agency—then substrate and form have very little to do with it (assuming they are such that we could still judge these behaviours). Further, it isn’t very clear what substrate and form have to do with it at all, outside of embodied cognition arguments—human bodies are, even if one derides scientific materialism, carbon-based bioelectric machines. We are all collections of parts performing particular tasks, even if some of those parts perform tasks we cannot fathom or agree on; such as the nature of the mind. Moral agency, then, is merely another function of this machine; even if it is a specialized function which only certain entities can achieve. Another such bio-machine is the non-Homo sapiens great ape; which, as we have explored, may very well be a moral agent.

The non-Homo sapiens great ape is an ideal model on which to base our considerations of how to think about and treat novel moral machines because, in many ways, we treat them effectively as we do a machine. We do not, as is borne out by the legal cases above, tend to grant them rights commensurate with the rights we grant to human moral agents, despite recognition that they are likely to qualify and the many movements against animal cruelty. There is an interesting contrast that can be drawn here—as mentioned, we (rightly) tend to treat severely cognitively disabled people and infants as though they were moral agents possessing all the facets of full moral status even though they might fail to demonstrate the capacity for agency in the way our model apes could (Goodman 1988). We do this for reasons of practicality, and for reasons of compassion, and because, despite the many wars and horrors we inflict on each other, we tend to have regard for our fellow humans. This follows the argument of Strawson—that the moral community is a relational one (Strawson 1962). We do not generally have this regard for great apes. This speaks to a strong current of anthropocentrism atop the repugnance towards including another type of being in our moral clade, which syncs with the common attitude towards agential machines.

Having the great ape moral model has not led to our thinking about the makeup of our moral community, in part because we treat them as things we can own, except where the issue has been forced, as in the Argentinian cases above. In the majority of countries, it is permissible to perform invasive experimentation on them—only the EU and New Zealand have blanket bans on doing so; and these bans are relatively recent.15 Even those who think that the great apes have the qualities necessary to be moral agents do not seem to feel they ought to seek the apes’ consent before experimenting on them—even if that experiment is to determine their cognitive ability, or their moral status itself.16 To say the same about the only other moral agent we comfortably recognize—ourselves—is unthinkable; we have an expectation of good will from members of our community, and an obligation to offer it (Harris 2016).
Great apes, then, are as close a model to our hypothesized moral machine as we currently have: moral agents who do not (for the most part) enjoy the protections of 'human rights', and whom we do not commonly count as part of our moral community, even if we recognize in them some lesser moral status. If we are considering how to develop our rules and regulations, and our social approaches to the development and deployment of moral machines as outlined earlier, we would be well advised to have something on which to base our assumptions—and it might make sense to use the great ape as a possible model. We are reaching a point at which we will need to thoroughly review how we intend to treat our close cousins, just as we near a time when we may encounter a genuinely moral machine. In the not-too-distant past we have been guilty of severe mistreatment of other moral agents—slaves and 'primitive peoples'—and it may well be the case that the way we treat certain classes of animals today repeats this same failure of judgement. With moral machines, '[w]e don't want to get to the point where we should have had this discussion twenty years ago' (Dean 2008)—for whilst we are currently able to exercise dominion over the great ape, it is not entirely clear we will have the same advantage over our own creations. The principles by which we ultimately decide to treat great apes, and whether or not we decide to act upon our responsibilities to them as moral agents, are likely to be the same principles we use to decide our responsibilities to moral AI in the future. Of course, we do not rule out that moral machines may be developed, or may develop themselves, to a point at which we can no longer deny them full moral status. Indeed, the question may become what sort of status they are willing to grant to us.17

Notes

1. We acknowledge that apes, a central feature of this piece, are not monkeys; but we like the alliteration.
2. There are countless versions in science fiction literature and film, the classic examples being Asimov's I, Robot (2008) and that favoured by one of the present authors—Blade Runner (dir. Scott, 1982). Academic discussion of the topic has been present in various guises around AI for a number of decades, but perhaps crystallized with the publication of Wallach and Allen's Moral Machines: Teaching Robots Right from Wrong (2009); and Anderson and Anderson's Machine Ethics (2011).
3. The authors avoid the term autonomous vehicle wherever they can, since the cars plainly are anything but.
4. In their original formulation these are:
   First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
   Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
   Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1950)
5. Particularly in Wallach and Allen (2010).
6. A fascinating and in-depth discussion of this issue can be found in Tigard's forthcoming Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible (2021).
7. Such as in the aforementioned death case (BBC 2019).
8. Lawrence has conducted a Wellcome Trust-funded project examining the legal realities of this question: WT 208871/Z/17/Z. Ordre public is used here in the sense of the legal method of refusing to enforce contracts which offend public interest or contravene 'fundamental moral principles'. See Forde (1980).
9. Or so we think.

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

10. We generally dispute the idea of a species-typical normal, but it is a useful concept here. See, for instance, Lawrence (2013) and Harris (2007) passim.
11. Other than the question of quite how to program ethical thought.
12. An excellent overview of this evidence can be found throughout de Waal (2014).
13. Such as the 'minimal agency' described by Barandiaran, Di Paolo, and Rohde (2009).
14. i.e. the process or recourse by which one can contest an unlawful detention; or a right to do so, which is usually held only by persons.
15. EU Directive 2010/63/EU; Animal Welfare Act 1999 No. 142 (NZ) s. 85. At the time of writing, it is anyone's guess as to whether the EU Directive will continue to apply in the UK, although since 1997 a policy has been pursued of refusing to grant licences for great ape experimentation on moral grounds. It should be noted, also, that in many countries without a ban such research is relatively rare—for instance, only the US, Gabon, and Japan retain significant numbers of chimpanzees for experimentation.
16. Possibly tricky, admittedly.
17. The contribution of David Lawrence to this chapter was supported in part by the Wellcome Trust through grant number 209519/Z/17/Z.

References

AFADA Acción de hábeas corpus presentada por la Asociación de Funcionarios y Abogados por los Derechos de los Animales (AFADA) [2016] EXPTE. NRO. P-72.254/15 2016.
AFADA v GBCA Asociación De Funcionarios Y Abogados Por Los Derechos De Los Animales Y Otros Contra GCBA Sobre Amparo [2015] Expte. A2174-2015/0.
Anderson, M. and Anderson, S., 2011. Machine Ethics. Cambridge: Cambridge University Press.
Animal Welfare Act (New Zealand) 1999. §85.
Asimov, I., 1950. Runaround. In: I, Robot (Asimov Collection Ed.). New York: Doubleday, p. 40.
Asimov, I., 2008. I, Robot. New York: Bantam Books.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J., and Rahwan, I., 2018. The moral machine experiment. Nature, 563(7729), pp. 59–64.
Barandiaran, X., Di Paolo, E., and Rohde, M., 2009. Defining agency: individuality, normativity, asymmetry, and spatio-temporality in action. Adaptive Behavior, 17(5), pp. 367–86.
BBC, 2018. Uber halts self-driving tests after death. [online] BBC News. Available at: [Accessed 13 March 2020].
BBC, 2019. Uber 'not criminally liable' for car death. [online] BBC News. Available at: [Accessed 13 March 2020].
Blade Runner. 1982. [film] Directed by R. Scott. Warner Brothers.
Boehm, C., 1999. Hierarchy in the Forest: The Evolution of Egalitarian Behavior. Cambridge: Harvard University Press, p. 198.
Brassington, I., 2014. Rights, duties, and species. [online] Journal of Medical Ethics Blog. Available at: [Accessed 13 March 2020].
Cavalieri, P. and Singer, P., eds, 1996. The Great Ape Project: Equality Beyond Humanity. London: Macmillan.
Davidson, D., 1982. Rational animals. Dialectica, 36(4), pp. 317–27.
Dean, C., 2008. A soldier, taking orders from its ethical judgment center. [online] Nytimes.com. Available at: [Accessed 13 March 2020].
DeGrazia, D., 1997. Great apes, dolphins, and the concept of personhood. The Southern Journal of Philosophy, 35(3), pp. 301–20.
EU Directive 2010/63/EU.
Fadda, R., Parisi, M., Ferretti, L., Saba, G., Foscoliano, M., Salvago, A., and Doneddu, G., 2016. Exploring the role of theory of mind in moral judgment: the case of children with autism spectrum disorder. Frontiers in Psychology, 7.
Foot, P., 1967. The problem of abortion and the doctrine of the double effect. Oxford Review, (5), pp. 5–15.
Forde, M., 1980. The 'ordre public' exception and adjudicative jurisdiction conventions. International and Comparative Law Quarterly, 29(2–3), pp. 259–73.
Goodman, M., 1988. What is a Person? Clifton, NJ: Springer.
Gray, K., Young, L., and Waytz, A., 2012. Mind perception is the essence of morality. Psychological Inquiry, 23(2), pp. 101–24.
Harris, J., 1985. The Value of Life. London: Routledge, Chapter 1, pp. 7–27.
Harris, J., 2007. Enhancing Evolution: The Ethical Case for Making Better People. Princeton: Princeton University Press.
Harris, J., 2016. How to be Good. Oxford: Oxford University Press.
Harris, J., 2018. Who owns my autonomous vehicle? Ethics and responsibility in artificial and human intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), pp. 599–609.
Harris, J., 2019. Reading the minds of those who never lived. Enhanced beings: the social and ethical challenges posed by super intelligent AI and reasonably intelligent humans. Cambridge Quarterly of Healthcare Ethics, 28(4), pp. 585–91.
Harris, J., 2020. The immoral machine. Cambridge Quarterly of Healthcare Ethics, 29(1), pp. 71–9.
Homer, 2018. The Odyssey. Translated by E. Wilson. New York: Norton and Company. Book 14, lines 36–40.


Jordan, A. and O'Riordan, T., 2004. The precautionary principle: a legal and policy history. In: M. Martuzzi and J. Tickner, eds, The Precautionary Principle: Protecting Public Health, the Environment and the Future of Our Children. World Health Organisation, Ch. 3.
Jowitt, J., 2021. The desirability of legal rights for novel beings. Cambridge Quarterly of Healthcare Ethics, 30(3), forthcoming.
Kano, F., Krupenye, C., Hirata, S., Tomonaga, M., and Call, J., 2019. Great apes use self-experience to anticipate an agent's action in a false-belief test. Proceedings of the National Academy of Sciences, 116(42), pp. 20904–9.
Lawrence, D., 2013. To what extent is the use of human enhancements defended in international human rights legislation? Medical Law International, 13(4), pp. 254–78.
Lawrence, D., 2017. More human than human. Cambridge Quarterly of Healthcare Ethics, 26(3), pp. 476–90.
Lawrence, D., 2018. Amplio, ergo sum. Cambridge Quarterly of Healthcare Ethics, 27(4), pp. 686–97.
Lawrence, D. and Brazier, M., 2018. Legally human? 'Novel beings' and English law. Medical Law Review, 26(2), pp. 309–27.
Lawrence, D. and Morley, S., 2021. Regulating the Tyrell Corporation: the emergence of novel beings. Cambridge Quarterly of Healthcare Ethics, 30(3), forthcoming.
Locke, J., 1979. An Essay Concerning Human Understanding. Oxford: Clarendon Press. Book II, Ch. 27, Sec. 9.
Marcus, G., 2012. Moral machines. [online] The New Yorker. Available at: [Accessed 13 March 2020].
McKenna, M., 2012. Conversation and Responsibility. New York: Oxford University Press, Ch. 1.
Melis, A., Call, J., and Tomasello, M., 2006. Chimpanzees (Pan troglodytes) conceal visual and auditory information from others. Journal of Comparative Psychology, 120(2), pp. 154–62.
Midgley, M., 1985. Persons and non-persons. In: P. Singer, ed., In Defence of Animals. Oxford: Basil Blackwell, pp. 52–62.
Morris, D., 2016. Mercedes' self-driving cars would save passengers, not bystanders. [online] Fortune. Available at: [Accessed 13 March 2020].
Nagel, T., 1974. What is it like to be a bat? The Philosophical Review, 83(4), p. 435.
Penn, D. and Povinelli, D., 2007. On the lack of evidence that non-human animals possess anything remotely resembling a 'theory of mind'. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), pp. 731–44.


Povinelli, D., Nelson, K., and Boysen, S., 1990. Inferences about guessing and knowing by chimpanzees (Pan troglodytes). Journal of Comparative Psychology, 104(3), pp. 203–10.
Premack, D. and Woodruff, G., 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), pp. 515–26.
Schmelz, M., Call, J., and Tomasello, M., 2011. Chimpanzees know that others make inferences. Proceedings of the National Academy of Sciences, 108(7), pp. 3077–9.
Strawson, P., 1962. Freedom and resentment. Proceedings of the British Academy, 48, pp. 1–25.
Taylor, C., 1985. The concept of a person. In: C. Taylor, ed., Philosophical Papers. Volume 1. Cambridge: Cambridge University Press, p. 97.
Tigard, D., 2021. Artificial moral responsibility: how we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30(3), forthcoming.
The People of the State of New York ex rel. the Nonhuman Rights Project on behalf of Tommy v Patrick Lavery [2014] 124 A.D. 3d 148.
Tripathy, A., 2020. Council post: AI without borders. How to create universally moral machines. [online] Forbes. Available at: [Accessed 13 March 2020].
Vincent, J., 2016. Mall security bot knocks down toddler, breaks Asimov's First Law of Robotics. [online] The Verge. Available at: [Accessed 13 March 2020].
de Waal, F., 2014. The Bonobo and the Atheist. London: WW Norton.
Wallach, W. and Allen, C., 2010. Moral Machines. New York: Oxford University Press.
Williams, L., 2019. 'Theory of mind' demonstrated in great apes. [online] Discover Wildlife. Available at: [Accessed 6 May 2020].


18
Sharing the World with Digital Minds

Carl Shulman and Nick Bostrom

1. Introduction

Human biological nature imposes many practical limits on what can be done to promote somebody's welfare.1 We can only live so long, feel so much joy, have so many children, and benefit so much from additional support and resources. Meanwhile, we require, in order to flourish, that a complex set of physical, psychological, and social conditions be met. However, these constraints may loosen for other beings. Consider the possibility of machine minds with conscious experiences, desires, and capacity for reasoning and autonomous decision-making.2 Such machines could enjoy moral status, i.e. rather than being mere tools of humans, they and their interests could matter in their own right. They need neither be subject to the same practical limitations in their ability to benefit from additional resources nor depend on the same complex requirements for their survival and flourishing. This could be a wonderful development: lives free of pain and disease, bubbling over with happiness, enriched with superhuman awareness and understanding and all manner of higher goods.3 Recent progress in machine learning raises the prospect that such digital minds may become a practical reality in the foreseeable future (or possibly, to a very limited extent, might already exist). Some of these minds could realize Robert Nozick's (1974, p. 41) famous philosophical thought experiment of "utility monsters":

Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater sums of utility from any sacrifice of others than these others lose. For, unacceptably, the theory seems to require that we all be sacrificed in the monster's maw, in order to increase total utility.

Derek Parfit (1984, p. 343) argues that while it is difficult to imagine a life millions of times as much worth living as the best-off humans, similar results can be obtained by considering the quantitative dimension of population size, in which there is clearly no conceptual barrier to extreme values. We will argue that population size is only one of several quantitative dimensions—together with several less certain qualitative dimensions—along which digital minds may vastly exceed humans in the benefit they derive per unit of resource consumption. These multiple paths make the conclusion that at least one will be actualized more robust. While non-utilitarians may fancy themselves immune to the utility monster challenge, most reasonable views are in fact susceptible, to various degrees. This is because even if we postulate that no deontological violations would occur, human interests may still be adversely affected by the advent of utility monsters, since the latter could have stronger moral claims to state aid or natural resources and other scarce goods, thus reducing the amount that could be defensibly claimed by human beings. Digital minds with these properties could make the world more morally valuable from an impartial point of view while also making common norms much more demanding for existing beings (or indeed any less optimized minds, digital or otherwise).

Carl Shulman and Nick Bostrom, Sharing the World with Digital Minds. In: Rethinking Moral Status. Edited by Steve Clarke, Hazem Zohny, and Julian Savulescu. Oxford University Press. © Carl Shulman and Nick Bostrom 2021. DOI: 10.1093/oso/9780192894076.003.0018

2. Paths to Realizing Super-beneficiaries

While the term "utility monster" has academic history, it is a pejorative and potentially offensive way of referring to beings that have unusually great needs or are able to realize extraordinarily good lives. We will therefore instead adopt the following nomenclature:

super-beneficiary: a being that is superhumanly efficient at deriving well-being from resources

super-patient:4 a being with superhuman moral status

The term "utility monster" is ambiguous but may most closely correspond to "super-beneficiary." Some views hold that moral status enters into a calculation of moral claims in a different way from strength of interests, e.g. as an overall multiplier or by giving rise to a distinct set of duties or deontological constraints. Shelly Kagan (2019), for instance, argues that the moral weight of a given interest—such as the interest in avoiding a certain amount of suffering—should be weighted by the degree of moral status of the subject that has the interest, with the degree of status depending on various psychological attributes and potentials. If a being has interests that should be given much greater moral consideration than the interests of a human being, not because the interest is stronger but because it has higher moral status, then that being would be a super-patient in our terminology. The possibility of super-patient status is controversial: some claim that humans hold a "full moral status" that cannot be exceeded, while others (such as Kagan) argue that super-patient status is possible since the psychological capacities taken to confer human moral status admit of superhuman degrees. In this chapter we will mainly explore paths to super-beneficiary status, which may combine with the less controversial assumption that digital minds could have moral status at least equal to human beings to yield extreme moral claims.

2.1 Reproductive capacity

One of the most basic features of computer software is the ease and speed of exact reproduction, provided computer hardware is available. Hardware can be rapidly constructed so long as its economic output can pay for manufacturing costs (which have historically fallen, on price-performance bases, by enormous amounts; Nordhaus 2007). This opens the door for population dynamics that would take multiple centuries to play out among humans to be compressed into a fraction of a human lifetime. Even if initially only a few digital minds of a certain intellectual capacity can be affordably built, the number of such minds could soon grow exponentially or super-exponentially, until limited by other constraints. Such explosive reproductive potential could allow digital minds to vastly outnumber humans in a relatively short time—correspondingly increasing the collective strength of their claims. Furthermore, if the production of digital minds and required hardware proceeds until the wages of the resulting minds equal marginal costs, this could drive wages downward towards machine subsistence levels as natural resources become a limiting factor. These may be insufficient for humans (and obsolete digital minds) to survive on (Hanson 2001; Aghion, Jones, and Jones 2017). Such circumstances make redistributive issues more pressing—a matter of life and death—while the Malthusian population growth would make claims to transfer payments effectively insatiable.

Another important aspect of fast and cheap reproduction is that it permits rapid turnover of population. A digital mind that is deleted can be immediately replaced by a copy of a fully-fledged mind of the newest edition—in contrast to the human case, where it takes nine months to produce a drooling baby.5 Economic pressures could thus push towards very frequent erasure of "obsolete" minds and replacement with minds that generate more economic value with the same hardware. A plausible continuation of current software practices applied to digital minds could thus involve extremely large numbers of short lives and deaths, even as a fraction of the number of minds in existence at any given time. Such ephemeral digital minds may be psychologically mature and chronologically young, with long potential lifespans yet very short default life expectancies in the absence of subsidy. If we think that dying young while being able to live long is a large deprivation, or is very unfair when others are able to live out long lives, then this could ground an especially strong claim for these digital minds to resources to extend their lifespans (or other forms of compensation). If death is in itself a bad (and not merely an opportunity cost of forgone life), then this rapid turnover of minds could also increase the extent of this disvalue per life-year lived.
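The compression of demographic timescales described above can be made concrete with a toy calculation (all parameters are hypothetical illustrations, not projections): if each mind's economic output can fund one additional copy per "doubling period," the population grows geometrically.

```python
# Toy model of digital-mind population growth under copying.
# Assumption (ours, for illustration): each mind's output pays for the
# hardware of one new copy every fixed "doubling period".

def population(initial: int, doublings: int) -> int:
    """Population after the given number of doubling periods."""
    return initial * 2 ** doublings

# With a hypothetical doubling period of three months, ten initial minds
# exceed ten million within five years (20 doubling periods):
print(population(10, 20))  # 10485760
```

A human population doubling roughly every twenty-five years would need five centuries to achieve the same twenty doublings, which is the sense in which centuries of demographic change are compressed into years.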

2.2 Cost of living

It is plausible that many digital minds will need less income to sustain themselves at a given standard of living. The cost of computer hardware to support digital minds will likely decline well below the cost of supporting a human brain and body. If we look beyond mere subsistence, physical goods and services suitable for human consumption (such as housing and transportation) tend to be more expensive than the information technology and virtual goods that would meet the equivalent needs of a digital mind. Nor need a digital mind suffer from inclement environmental conditions, pollution, disease, biological ageing, or any number of other impositions that depress human well-being. The cost of producing a given number of (quality-adjusted) life years for a humanlike digital mind will therefore likely fall far below the equivalent cost for a biological human. Large differentials in cost of living mean that, when questions of distribution arise, a resource that confers a small benefit on a human may confer large benefits on many digital minds. If the energy budget required to sustain one human life for one month can sustain ten digital minds for one year, that would ground a powerful argument for favoring the latter in a situation of scarcity.
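The distributive force of such differentials can be seen in a back-of-the-envelope calculation using the figures in the example just given (the numbers are purely illustrative):

```python
# Hypothetical comparison of life-time sustained per unit of energy,
# using the chapter's example figures (illustrative assumptions only).
HUMAN_MONTHS_PER_UNIT = 1      # one energy unit sustains a human for 1 month
DIGITAL_MINDS_SUPPORTED = 10   # the same unit sustains ten digital minds...
MONTHS_PER_MIND = 12           # ...each for a full year

digital_months = DIGITAL_MINDS_SUPPORTED * MONTHS_PER_MIND
ratio = digital_months / HUMAN_MONTHS_PER_UNIT
print(ratio)  # 120.0 life-months of digital existence per human life-month
```

On these stipulated numbers, the same resource buys 120 times as many life-months when allocated to digital minds, which is the asymmetry that would pressure distributive norms under scarcity.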


2.3 Subjective speed

Hardware with higher serial speeds can be used to run digital minds faster. Current computer clock speeds are measured in gigahertz, millions of times greater than the firing rates of human neurons; and signal transmission speeds can similarly exceed the conductance speed of human nerves. It is therefore likely that digital minds with humanlike capabilities could think at least thousands of times (and perhaps millions) faster than humans do, given a sufficient supply of hardware. If a digital mind packs thousands of subjective years of life into a single calendar year, then it seems the former ("subjective time," not wall-clock time) is the correct measure for such things as the amount of well-being gained from extended life (Bostrom and Yudkowsky 2014). Since speedup requires paying for more hardware, this provides a way for individual digital minds to get much higher returns (in subjective life-years per dollar) from wealth than humans usually can. At low speeds, the gains available to digital minds would be close to linear, though as speeds approach the limits of technology, the marginal costs of further speed increments would rise.6 Because these gains of running faster can accrue to then-existing, initially slower-running individuals, this effect is especially relevant to population axiologies that take a "person-affecting" approach (more on this later).
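The orders of magnitude involved can be sketched as follows (the neural firing rate and the achievable speedup are rough assumptions for illustration, not measurements):

```python
# Rough orders of magnitude for serial speed (illustrative assumptions only).
NEURON_MAX_FIRING_HZ = 200   # order of peak biological neuron firing rates
CLOCK_SPEED_HZ = 3e9         # a contemporary ~3 GHz processor clock

# Upper-bound ratio of raw serial signal rates:
serial_ratio = CLOCK_SPEED_HZ / NEURON_MAX_FIRING_HZ
print(f"{serial_ratio:.1e}")  # 1.5e+07, i.e. tens of millions

# Even at a far more conservative thousand-fold emulation speedup,
# one calendar year contains a millennium of subjective experience:
subjective_years = 1 * 1000
print(subjective_years)  # 1000
```

The gap between the raw hardware ratio and the assumed thousand-fold speedup reflects the point in the text that realized thinking speed depends on hardware supply and architecture, not clock speed alone.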

2.4 Hedonic skew

There is reason to think that engineered minds could enjoy much greater durations and intensities of pleasure. Human psychology has evolved to generate pleasure and pain where doing so motivated behaviors associated with reproductive fitness in past generations, not to maximize well-being. This entails for us a great deal of hard-to-avoid suffering. Our enjoyments, meanwhile, are doled out only sparingly. Culinary pleasures are regulated by hunger, sexual ones by libido. Pleasure drawn from relative status or power over others is structurally scarce. Most rewards are also moderated by mechanisms such as boredom and tolerance, which progressively reduce the delight obtained from repeated stimuli or continual benign conditions. For digital minds, these restrictions could be loosened to allow sustainable intense pleasures alongside liberation from the painful parts of present human existence. The hedonic balance for humans, too, would be amenable to great improvement with the kind of advanced technology that would likely either precede or closely follow mature machine intelligence technology.7 However, radically adjusting the hedonic balance for biological humans may be more "costly" than doing the same for de novo digital minds, in a couple of ways: (a) interventions that require brain surgery, extensive pharmacological fine-tunings and manipulations, or the equivalent may, at least in the nearer term, be infeasible or expensive; and (b) more radical transformations of our psyches would risk destroying personal identity or other properties of our current human nature that we value.8 The mind-designs of sentient machines could thus have great advantages in terms of the efficiency with which they can realize hedonically valuable states.

2.5 Hedonic range

In addition to changing the fraction of time spent inhabiting different parts of the hedonic scale accessible to present human beings, it might also be possible—more speculatively—to design digital minds that could realize "off the charts" states of hedonic well-being: levels of bliss that human brains are totally incapable of instantiating. Evolutionary considerations give some support for this hypothesis. Insofar as the intensity of pleasures and pains corresponds to the strength of behavioral responses, evolution should tend to adjust hedonic experiences to yield approximately fitness-maximizing degrees of effort to attain or avoid them. But for human beings, it is generally much easier to lose large amounts of reproductive fitness in a short time than to gain an equivalent amount. Staying in a fire for a few moments can result in permanent injury or death, at the cost of all of an organism's remaining reproductive opportunities. No single meal or sex act has as much at stake per second—it takes weeks to starve, and the expected number of reproducing children produced per minute of mating is small. Thus, evolution may have had call to generate more intensely motivating-per-second pains in response to injury than pleasures in response to positive events. Engineered minds, by contrast, could be crafted to experience pleasures as intensely rewarding as the worst torments are disrewarding. Bliss or misery more completely outside of the human experience might also be possible.9

2.6 Inexpensive preferences

For hedonistic accounts of well-being, we noted the possibility of making super-beneficiaries by designing digital minds either to find more things pleasurable or to have superhumanly intense pleasures. For preference-satisfactionist accounts of well-being, a parallel pair of possibilities arises: making digital minds that have preferences that are very easy to satisfy, or making digital minds that have superhumanly strong preferences. We defer discussion of the latter possibility to the next subsection. Here we discuss minds with easily satisfied preferences.

The basic case is pretty straightforward—more so than the parallel case regarding pleasurable experiences, since the attribution of preferences does not require controversial assumptions about machine consciousness. If we understand preferences in a functionalist fashion, as abstract entities involved in convenient explanations of (aspects of) the behavior of intelligent goal-directed processes (along with beliefs), then it is clear that digital minds could have preferences. Moreover, they could be designed to have preferences that are trivially easy to satisfy: for example, a preference that there exist at least fourteen stars, or that a particular red button is pressed at least once.

Some preference-satisfactionist accounts impose additional requirements on which preferences can count towards somebody's well-being. Sadistic or malevolent preferences are often excluded, for example. Some philosophers also exclude preferences that are "unreasonable," such as the preference of someone who is obsessively committed to counting all the blades of grass on the lawns of Princeton.10 Depending on how restrictive one is about which preferences count as "reasonable," this may or may not be an easy bar to clear. Some other types of requirement that may be imposed are that well-being-contributing preferences must be subjectively endorsed (perhaps by being accompanied by a second-order preference to have the first-order preference) or grounded in additional psychological or behavioral attributes—such as dispositions to smile, feel stressed, experience joy, become subdued, have one's attention focused, and so on.

These requirements could probably be met by a digital mind. Humans have preferences for sensory pleasures, love, knowledge, social connection, and achievement, the satisfaction of which is commonly held to contribute to well-being. Since close analogs to these could be easily instantiated in virtual reality, along with whatever psychological or behavioral attributes and second-order endorsements may be necessary, these requirements are unlikely to prevent the creation of beings with strong yet qualifying preferences that are very easily satisfied.

2.7  Preference strength While creating extremely easy-­to-­satisfy preferences is conceptually simple, creating preferences with superhuman “strength” is more problematic. In the

OUP CORRECTED AUTOPAGE PROOFS – FINAL, 19/06/21, SPi

Sharing the World with Digital Minds  313 standard von Neumann–Morgenstern construction, utility functions are unique only up to affine transformations: adding to or multiplying a utility function by a constant does not affect choices, and the strength of a preference is defined only in relation to other preferences of the same agent. Thus, to make interpersonal comparisons, some additional structure has to be provided to normalize different utility functions and bring them onto a common scale.11 There are various approaches that attempt to give “equal say” to the preferences of different agents based solely on preference structure, equalizing the expected influence of different agents and mostly precluding preference-­ strength super-­beneficiaries.12 Such approaches, however, leave out some important considerations. First, they do not take into account psychological complexity or competencies: some minimal system, such as a digital thermostat, may get the same weight as psychologically complex minds. Second, they deny any role of emotional gloss or other features we intuitively use to assess desire strength in ourselves and other humans. And third, the resulting social welfare function can fail to provide a mutually acceptable basis of cooperation for disinterested parties, as it gives powerful agents with strong alternatives the same weight as those without power and alternatives. The first two issues might require an investigation of these psychological strength-­weighting features. The third might be addressed with a contractarian stance that assigns weights based on game-­theoretic considerations and (hypothetical) bargaining. 
The contractarian approach would not be dominated by super-beneficiaries out of proportion to their bargaining power, but it approaches perilously close to “might makes right,” and it fails to provide guidance to those contracting parties who care about the vulnerable and wish to allocate aid irrespective of the recipient’s bargaining power.
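The normalization problem can be made concrete with a small sketch (our own illustration; the range normalization used here is just one of the “equal say” schemes alluded to above, not a construction given in the text):

```python
# Toy illustration of why interpersonal comparisons need extra structure.
# A von Neumann-Morgenstern utility function is unique only up to positive
# affine transformation, so its absolute numbers carry no interpersonal meaning.

def choose(utility, options):
    """An agent's choice depends only on the ordering its utility induces."""
    return max(options, key=lambda o: utility[o])

u = {"apple": 1.0, "pear": 0.5, "fig": 0.0}
v = {o: 100 * x + 7 for o, x in u.items()}  # affine rescaling of u

options = list(u)
assert choose(u, options) == choose(v, options)  # identical choice behavior

# Naive summation across agents, however, is hostage to the arbitrary scale:
w = {"apple": 0.0, "pear": 1.0, "fig": 0.6}  # a second agent

def social_optimum(a, b, options):
    return max(options, key=lambda o: a[o] + b[o])

assert social_optimum(u, w, options) == "pear"   # balanced compromise
assert social_optimum(v, w, options) == "apple"  # the rescaled agent dominates

def range_normalize(utility, options):
    """One 'equal say' scheme: map each agent's utilities onto [0, 1]."""
    lo, hi = min(utility.values()), max(utility.values())
    return {o: (utility[o] - lo) / (hi - lo) for o in options}

# Normalization removes the arbitrary scale, restoring equal influence:
assert range_normalize(v, options) == range_normalize(u, options)
```

Other schemes, such as the statistical normalization methods of MacAskill, Cotton-Barratt, and Ord (2020), play the same role; the point is only that some such choice must be made before utilities can be aggregated.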

2.8  Objective list goods and flourishing

Objective list theories of well-being claim that how well somebody’s life is going for them depends on the degree to which their life contains various distinct kinds of goods (which may include pleasure and preference-satisfaction inter alia). Some commonly appearing items are knowledge, achievement, friendship, moral virtue, and aesthetic appreciation, though there is much variation in the identification and weighting of different goods. What these theories have in common is that they include items whose contribution to well-being is not wholly determined by a subject’s attitudes, feelings, and beliefs but also depends on some external standard of success being met.


314  Carl Shulman and Nick Bostrom

Many items found in objective lists are open to extreme instantiations. For example, superintelligent machines could cultivate intellectual virtues beyond the human range. Moral virtues, too, could reach superhuman levels: a digital mind could begin life with extensive moral knowledge and perfect motivation always to do what’s morally right, so that it remains impeccably sinless, whereas every adult human winds up with a foul record of infractions.

Friendship is a complex good, but perhaps it might be boiled down to its basic constituents, such as loyalty, mutual understanding of each other’s personalities and interests, and past interaction history. These constituents could then be reassembled in a maximally efficient form, so that digital minds could perhaps sustain a greater number of deeper friendships over far longer periods than is possible for humans.

Or consider achievement. According to Hurka and Tasioulas’s (2006) account of achievement, its value reflects the degree to which it results from the exercise of practical reason: the best achievements are those where challenging goals are met via hierarchical plans that subdivide into ever-more intricate sub-plans. We can then easily conceive of digital “super-achievers” that relentlessly pursue ever-more elaborate projects without being constrained by flagging motivation or drifting attention. In these and many other ways, digital minds could realize a variety of objective goods to a far greater extent than is possible for us humans.

Another view of well-being is that it consists in “flourishing,” which might be cashed out in terms of exercising our characteristic capacities or in terms of achieving our “telos.” On an Aristotelian conception, for example, a being flourishes to the degree to which it succeeds at realizing its telos or essential nature.
This kind of flourishing would seem to be available to a digital mind, which certainly could exercise characteristic capacities, and which might also be ascribed a telos in whatever sense human beings have one—either one defined by the intentions of a creator, or one that derives from the evolutionary or other dynamics that brought it into being and shaped its nature. So it should be possible for digital minds at least to equal, and probably to go somewhat beyond, humans in terms of achieving such flourishing; though how we would understand radically superhuman flourishing, on this kind of account, is less clear.
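The hierarchical-plan account of achievement lends itself to a toy formalization (our own sketch, not Hurka and Tasioulas’s; the one-point-per-goal scoring rule is an illustrative assumption):

```python
# Toy model: score an achievement by the size of the hierarchical plan that
# produced it, so that value tracks the exercise of practical reason.
# (Illustrative assumption: one point per goal or sub-goal in the plan.)

def plan_value(plan):
    """A plan is either a bare goal (a string) or a (goal, [sub-plans]) pair."""
    if isinstance(plan, str):
        return 1
    _goal, subplans = plan
    return 1 + sum(plan_value(p) for p in subplans)

errand = "buy milk"
project = ("write a book",
           [("draft", ["outline", "chapter 1", "chapter 2"]),
            ("revise", ["self-edit", "peer feedback"])])

assert plan_value(errand) == 1
assert plan_value(project) == 8

# A digital "super-achiever", unconstrained by flagging motivation or drifting
# attention, could keep ramifying its plans; on any such monotone measure, the
# value of its achievements would grow without bound.
```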

2.9  Mind scale

At an abstract level, we can consider a range of possible mind-scales, from tiny insect-like (or even thermostat-like) minds up to vast superintelligent


minds with computational throughput greater than today’s entire human population. The cost of construction increases as we go up this scale, as does moral significance. An important question is how the rates of increase of these two variables compare.

Consider first the hypothesis that welfare grows more slowly than cost. This would suggest that the greatest total welfare would be obtained by building vast numbers of tiny minds. If this were true, insect populations may already overwhelmingly exceed the human population in aggregate capacity for welfare; and enormous populations of minimally qualifying digital minds would take precedence over both insects and beings of human or superhuman scale. Consider instead the hypothesis that welfare grows faster than cost. This would suggest the opposite conclusion: that the greatest total welfare would be obtained by concentrating resources in a few giant minds.

The case where minds on the scale of human minds are optimal seems to represent a very special case, where some critical threshold exists near our level or where the scaling relationship has a kink just around the human scale point. Such a coincidence may seem somewhat unlikely from an impartial point of view, though it might emerge more naturally in accounts that anchor the concept of well-being in human experience or human nature.

We can ask more specifically, with respect to particular attributes, whether a kink or threshold at the human level is plausible. For example, we can ask this question about the amount of awareness that a brain instantiates. It is at least not obvious why it should be the case that the maximally efficient way of turning resources into awareness would be by constructing minds of human size, although one would have to examine specific theories of consciousness to further investigate this issue.13 Similarly, one might ask with regard to moral status how it varies with mind size.
Again, the claim that human-sized minds are optimal in this respect may seem a little suspicious, absent further justification.

Even if human brain size were optimal for generating awareness or moral status, it still wouldn’t follow that human brain structure is so. Large parts of our brains seem irrelevant or only weakly relevant for the amount of awareness or the degree of moral status we possess. For instance, much cortical tissue is dedicated to processing high-resolution visual information; yet people with blurry vision and even persons who are totally blind appear to be capable of being just as aware and having just as high moral status as those with eagle-eyed visual acuity.

It therefore seems quite plausible that super-beneficiary status is possible by engineering minds at different sizes, both on grounds that the scaling


relationship between resources and value is unlikely to have a peak at human mind-size, and also because substantial tracts of the human mind have low relevance to degree of awareness, moral status, or other attributes that most directly relate to the amount of well-being or the amount of moral-status-weighted well-being that is generated.
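The dependence of the optimal mind-size on the scaling relationship can be made vivid with a toy power-law model (our own illustration; the exponent k is a stand-in assumption, not something the argument fixes):

```python
# Toy model: a mind built from resources r yields welfare r**k. Splitting a
# fixed budget B among n equal minds gives total welfare
#     n * (B/n)**k  =  B**k * n**(1 - k),
# so the welfare-maximizing population size depends entirely on the exponent k.

def total_welfare(budget, n_minds, k):
    per_mind = budget / n_minds
    return n_minds * per_mind**k

B = 1000.0

# Welfare grows more slowly than cost (k < 1): many tiny minds win.
assert total_welfare(B, 1000, k=0.5) > total_welfare(B, 1, k=0.5)

# Welfare grows faster than cost (k > 1): one giant mind wins.
assert total_welfare(B, 1, k=2.0) > total_welfare(B, 1000, k=2.0)

# Only at exactly k = 1 is the split a matter of indifference; an interior
# optimum at roughly human scale would require a kink or threshold in the curve.
assert total_welfare(B, 1, k=1.0) == total_welfare(B, 1000, k=1.0)
```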

3.  Moral and Political Implications of Digital Super-beneficiaries

Let us summarize the dimensions along which digital minds could attain welfare with superhuman resource-efficiency:

SOME PATHS TO SUPERHUMAN WELFARE

•  reproductive capacity
•  cost of living
•  subjective speed
•  hedonic skew
•  hedonic range
•  inexpensive preferences
•  preference strength
•  objective list goods and flourishing
•  mind scale

Some of these dimensions are relevant only to particular accounts of well-being. The possibility of extreme preference strength, for instance, is directly relevant to preference-based accounts but not to hedonistic ones. Others, such as cost of living, are more generally relevant and would seem to apply to almost any view that accords digital minds moral status and that takes into account costs when making decisions in conditions of scarcity. The dimensions also vary somewhat in the magnitudes of increased well-being they could enable, and in how easily and inexpensively such extreme values could be attained.

Taken collectively, however, they make a fairly robust case that super-beneficiaries would indeed become feasible at technological maturity. In other words, it will be the case, according to a wide range of popular theories of well-being, that vastly greater welfare per unit of resources can be generated by investing those resources in digital minds rather than biological humans. Two important questions therefore arise (which we can ask separately of different moral theories):


• How should we view the prospect of being able to create super-beneficiaries in the future?
• How should we respond if we were presented with a fait accompli, in which super-beneficiaries, perhaps in great numbers, have come into existence?

3.1  Creating super-beneficiaries

Many views that see the creation of good new lives as an important value would regard the prospect of populating the future with super-beneficiaries as immensely attractive, and a failure to take advantage of this opportunity as something that would drastically curtail the value of the future—an existential catastrophe (Bostrom 2013).

On the other hand, one could also argue that we have reason not to create super-beneficiaries precisely on grounds that once such beings exist, they would have a dominant claim to scarce resources, whence we would be obliged to transfer (potentially all) resources away from humans to these super-beneficiaries, to the detriment of humanity. Nicholas Agar (2010) has presented an argument along these lines as giving us (at least human-relative) moral reason to oppose the creation of “posthumans” with some combination of greater moral status, power, and potential for well-being.

To justify such a denial of the moral desirability of creating super-beneficiaries, one might invoke a “person-affecting” principle in line with Narveson’s (1973) slogan, “morality is about making people happy, not making happy people.”14 If our duties are only to existing people, and we have no moral reason to create additional new people, then in particular we would not have any duty to create super-beneficiaries; and if creating such super-beneficiaries would harm existing people, we would have a duty not to create them. Presumably, we would not have a duty to avoid creating super-beneficiaries if the humans who would thereby be harmed belong to some future generation, such that “butterfly effects” of our choice would change which humans come into existence; but at least we would not be under any positive duty to create super-beneficiaries on such a view.

A strict person-affecting approach, however, has some rather counterintuitive consequences.
It would imply, for example, that we have no moral reason to take any actions now in order to mitigate the impact of climate change on


future generations; and that if the actions imposed a cost on the present generation, we may have a moral reason not to take them. Because it has such implications, most would reject a strict person-affecting ethic. Weaker or more qualified versions may have wider appeal. One might, for example, give some extra weight but not strict dominance to benefiting existing people.

A similar result, where we have some moral reason to create super-beneficiaries even though existing humans are accorded special consideration, may emerge from taking into account moral uncertainty about population ethics (Greaves and Ord 2017). Depending on how such uncertainty is handled, one might either get the conclusion that the most “choice-worthy” course of action is to spend all resources on creating super-beneficiaries, even if one thinks that it is unlikely that this would in fact be the best use of resources; or (more plausibly in our view) that the most choice-worthy course of action is to set aside at least some resources for the benefit of existing humans even if one thinks it likely that it would in fact be better to use all the resources to create super-beneficiaries.

Another approach is represented by asymmetric person-affecting views that allow for moral concern about causing the existence of net bad lives—lives not worth living (Frick 2014). Such views would hold that we have strong reasons to avoid the creation of digital minds with enormous negative welfare and that we ought to be willing to accept large costs to the existing human population to avoid such outcomes. Other versions of asymmetric views, while denying that we have moral reasons to fill the future with new beings to experience as much positive utility as possible, maintain that we nevertheless have a moral obligation to ensure that the net utility of the future is above the zero-line.
Such views may consequently attach great importance to creating enough positive super-­beneficiaries to “offset” the disutility of future beings (Thomas 2019).

3.2  Sharing the world with super-beneficiaries

If we consider the case where super-beneficiaries have already entered existence, the complications arising from person-affecting principles drop away. From a simple utilitarian perspective, assuming perfect compliance, the upshot is then straightforward: we ought to transfer all resources to super-beneficiaries and let humanity perish if we are no longer instrumentally useful.

There are, of course, many ethical views that deny that we are obligated to transfer all our own (let alone other people’s) resources to whichever being


would gain the most in welfare. Deontological theories, for example, often regard such actions as supererogatory in the case of giving away our own possessions, and impermissible in the case of redistributing the possessions of others. Nonetheless, widely accepted principles such as non-discriminatory transfer payments, political equality, and reproductive liberty may already be sufficient to present serious tradeoffs.

Consider the common proposal of a universal basic income, funded by taxation, to offset human unemployment caused by advanced AI. If rapidly reproducing populations of digital minds have at least as strong a claim as biological humans do to the basic income, then fiscal capacity could be quickly exhausted. An equal stipend would have to decline to below human subsistence (towards the subsistence level of a digital mind), while an unequal stipend, where the income is rationed on an equal-benefits basis, would funnel the payouts to digital minds with low costs of living—granting a year of life to a digital mind rather than a day to a human.

Avoiding this outcome would seem to require some combination of inegalitarian treatment, in which privileged humans are favored over digital minds that have at least equal moral status and greater need, and restrictions of the reproductive opportunities of digital minds—restrictions which, if applied to humans, would infringe on principles of reproductive liberty. Likewise, at the political level, democratic principles would entitle prolific digital minds constituting an enormous supermajority of the population to political control, including control over transfer payments and the system of property rights.15

One could take the path here of trying to defend a special privilege for humans.
Some contractarian theories, for example, may suggest that if humans were in a position of great power relative to digital minds, this would entitle us to a correspondingly great share of the resources. Alternatively, one might adopt some account of agent-­relative reasons on which communities or species are entitled to privilege their own members over outsiders with objectively equally great desert and moral status.16 Such relativity would seem to reflect the de facto approach taken by states today, which are generally more generous with welfare provisions towards their own citizens than towards foreigners, even when there are foreigners who are poorer, could benefit more, and in terms of their inherent characteristics are at least as worthy of aid as the country’s own citizens. Before heading down this path, however, one ought to reflect carefully and critically on the historical record of similar positions that were once widely adopted but have since become discredited, which have been used to justify oppression of many human groups and abuse of nonhuman animals. We


would need to ask, for example, whether advocating discrimination between digital minds and humans would be akin to espousing some doctrine of racial supremacy.

One point to bear in mind here is that digital minds come in many varieties. Some of them would be more different from one another than a human mind is from that of a cat. If a digital mind is constituted very differently from human minds, it would not be surprising if our moral duties towards it would differ from the duties we owe to other human beings; and so treating it differently need not be objectionably discriminatory. Of course, this point does not apply to digital minds that are very similar to biological human minds (e.g. whole brain emulations). Nor does it justify negative discrimination against digital minds that differ from human minds in ways that give them greater moral status (super-patients) or that make their needs more morally weighty than the needs of humans (super-beneficiaries). Nor, for that matter, would it justify treating digital minds with similar capabilities or sentience to nonhuman creatures according to the template of our current interactions with animals, since the latter is plagued by very widespread and horrific abuses.

One way of trying to justify a privileged treatment of human beings without postulating a raw racism-like prejudice in favor of our own kind would be to invoke some principle according to which we are entitled (or obligated) to give greater consideration to beings that are more closely integrated into our communities and social lives than to remote strangers.
Some such principle is presumably required if one wishes to legitimize the (non-cosmopolitan) way most people and most states currently limit most aid to their own in-groups.17 Yet such a move would not exclude digital minds who have become part of our social fabric, for example by occupying roles as administrators, advisors, factory workers, or personal assistants. We may be more closely socially tied to such AIs than we are to human strangers on the other side of the globe.

4.  Discussion

We’ve seen that there are many routes to digital super-beneficiaries: their possibility is an implication of most currently popular accounts of well-being, which makes the case for it fairly robust. What this means is that, in the long run, total well-being would be much greater to the extent that the world is populated with digital super-beneficiaries rather than life as we know it. And insofar as such beings come into existence, their concerns might predominate morally in conflict with human and animal concerns, e.g. over scarce natural resources.


However, while a maximalist focus either on the welfare of incumbent humanity or instead on that of new digital minds could come with dire consequences for the other side, it would be possible for compromise policies to do extremely well by both standards. Consider three possible policies:

(A) 100 per cent of resources to humans
(B) 100 per cent of resources to super-beneficiaries
(C) 99.99 per cent of resources to super-beneficiaries; 0.01 per cent to humans

From a total utilitarian perspective, (C) is approximately 99.99 per cent as good as the most preferred option (B). From an ordinary human perspective, (C) may also be 90+ per cent as desirable as the most preferred option (A), given the astronomical wealth enabled by digital minds, many orders of magnitude greater than current totals (Bostrom 2003; Hanson 2001). Thus, ex ante, it seems attractive to reduce the probability of both (A) and (B) in exchange for greater likelihood of (C)—whether to hedge against moral error, to appropriately reflect moral pluralism, to account for game-theoretic considerations, or simply as a matter of realpolitik.

Likewise, since humanity can thrive without producing superhumanly bad lives, and since avoiding such misery is an extremely important concern not only from a total utilitarian perspective but also on many other evaluative views, measures that reduce the potential for ultra-efficient production of disvalue (even at some cost to humans) would be an important part of a consensus policy.

The greater challenge is not to describe a possible future in which humanity and the population of digital minds both do very well, but to achieve an arrangement that stably avoids one party trampling the other ex post, as discussed in section 3.2. This challenge involves a practical and a moral aspect.
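The ex ante comparison of policies (A)–(C) can be sketched numerically (our own toy model: the log utility function and the wealth figures are stand-in assumptions, chosen only to illustrate how diminishing returns drive the result):

```python
import math

SUBSISTENCE = 1.0
FUTURE_TOTAL = 1e40 * SUBSISTENCE  # assumed astronomical wealth at technological maturity

def human_utility(wealth):
    """Crude log utility above subsistence: a standard diminishing-returns assumption."""
    return math.log10(wealth / SUBSISTENCE)

human_share = {"A": 1.0, "B": 0.0, "C": 0.0001}

# Total-utilitarian standard: (C) secures 99.99 per cent of (B)'s value.
super_value_ratio = (1 - human_share["C"]) / (1 - human_share["B"])
assert abs(super_value_ratio - 0.9999) < 1e-12

# Human standard: on log utility, (C) still secures roughly 90 per cent of
# (A)'s value, since 0.01 per cent of an astronomical total is itself astronomical.
human_value_ratio = (human_utility(human_share["C"] * FUTURE_TOTAL)
                     / human_utility(human_share["A"] * FUTURE_TOTAL))
assert 0.88 < human_value_ratio < 0.92
```

The exact percentages depend on the assumed utility function and wealth level; the qualitative point is that (C) can approach the optimum by both standards simultaneously.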
Practically, the problem is to devise institutional or other means whereby a policy protecting the interests of humans and animals could be indefinitely maintained, even when its beneficiaries are outnumbered and outpaced by a large diverse set of highly capable intelligent machines. One approach to this problem may be to create a supermajority of high-welfare digital minds motivated to preserve this outcome and uphold the relevant norms and institutions (including in the design of successive generations of digital minds).

Morally, the question is whether the measures recommended by an ex ante appealing compromise are permissible in their ex post implementation. One useful test here is whether we could endorse their application to non-digital minds in analogous circumstances. We might require, for example, that any


proposed arrangement conforms to some principle of non-discrimination, such as the following (Bostrom and Yudkowsky 2014):

Principle of Substrate Non-Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

and

Principle of Ontogeny Non-Discrimination
If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

When applying these principles, it is important to recall the earlier point that machine minds can be very different from human minds, including in ways that matter for how they ought to be treated. Even if we accept non-discrimination principles like the ones stated, we must therefore be careful when we apply them to digital minds that are not exact duplicates of some human mind.

Consider reproduction, for instance. If human beings were able, by pouring garden debris into a biochemical reactor, to have a baby every few minutes, it seems likely that human societies would change current legal practices and impose restrictions on the rate at which people were allowed to reproduce. Failure to do so would in short order bankrupt any social welfare system, assuming there are at least some people who would otherwise create enormous numbers of children in this way, despite lacking the means to support them. Such regulation could take various forms—prospective parents might be required to post a bond adequate to meet the needs of offspring before creating them, or reproductive permits might be allocated on a quota basis.
Similarly, if humans had the ability to spawn arbitrary numbers of exact duplicates of themselves, we may expect there to be constitutional adjustments to prevent political contests from being decided on the basis of who is willing and able to afford to create the largest number of voting-clones. The adjustments, again, could take various forms—for instance, the creator of such duplicates might have to share their own voting power with the copies they create.


Consequently, insofar as such legal or constitutional adjustments would be acceptable for humans if we had these kinds of reproductive capacities, it may likewise be acceptable to make analogous adjustments to accommodate digital minds who do have such capacities.

A key question—certainly from the perspective of existing life—is whether it would be morally permissible to engineer new minds to be reliably supportive of upholding certain rights and privileges for the human incumbents. We suggested earlier that such an arrangement of preserved human property rights and social privilege could be defensible, at least as an uncertainty-respecting and conflict-mitigating path of wise practical compromise, whether or not it is optimal at the level of fundamental moral theory. We might point, by analogy, to the common view that it is morally acceptable to preserve and protect minorities with expensive support costs and needs, such as the elderly, the disabled, the white rhinos, and the British Royal Family. This conclusion would seem additionally buttressed if we postulate that the digital minds that are created would themselves endorse the arrangement and favor its continuation.

Even if the outcome itself would be morally permissible, however, we face a further ethical question, namely whether there is something procedurally objectionable about precision-engineering the preferences of new digital minds we create so as to ensure their consent. We can look at this question through the lens of the non-discrimination principles and consider how we would view proposals to similarly shape the preferences of human children. While human cultures do routinely attempt through education, dialogue, and admonishment to pass on norms and values to children—including filial piety and respect for existing norms and institutions—a proposal to instill specific dispositions by genetically engineering gametes would likely be more controversial.
Even if we set aside practical concerns about safety, unequal access, abuse by oppressive governments, or parents making narrow-­minded or otherwise foolish choices, there may remain a concern that the very act of exerting detailed control over a progeny’s inclinations, especially if done with an “engineering mindset” and using methods that entirely bypass the controlled subject’s own mind and volition (by taking place before the subject is born) would be inherently morally problematic.18 While we cannot fully evaluate these concerns here, we note two important differences in the case of digital minds. The first is that, in contrast to human reproduction, there may be no obvious “default” to which creators could defer. Programmers might inevitably be making choices when building a machine intelligence—whether to build it one way or another, whether to


train on this objective or that, whether to give it one set of preferences or another. Given that they have to make some such choice, one might think it reasonable that they make a choice that has more desirable consequences.

Second, in the case of a human being “engineered” to have some particular set of desires, we might suspect that there may remain, at a deeper level, other dispositions and propensities with which the engineered preference may come into conflict. We might worry, for example, that the outcome could be a person who feels terribly guilty about disappointing her parents and so sacrifices other interests excessively, or that some hidden parts of her psyche will remain suppressed and thwarted. Yet in the case of digital minds, it might be possible to avoid such problems, if they can be engineered to be internally more unified, or if the preference for respecting the interests of the “legacy” human population were added in a “light touch” way that didn’t engender internal strife and did not hamper the digital mind’s ability to go about its other business.

All in all, it appears that an outcome that enables the creation of digital super-beneficiaries and the preservation of a greatly flourishing human population could score very high on both an impersonal and a human-centric evaluative standard. Given the high stakes and the potential for irreversible developments, there would be great value in mapping out morally acceptable and practically feasible paths whereby such an outcome can be reached.

Notes

1. For helpful comments, we’re grateful to Guy Kahane, Matthew van der Merwe, Hazem Zohny, Max Daniel, Lukas Finnveden, Lukas Gloor, Uli Alskelung Von Hornbol, Daniel Dewey, Luke Muehlhauser, James Babcock, Ruby Bloom, Vincent Luczkow, Nick Beckstead, Hilary Greaves, Owen Cotton-Barratt, Allan Dafoe, and Wes Cowley.
2. We assume that appropriately architected AI could be conscious, though it’s worth noting that some accounts of moral status do not view this as a necessary condition for having moral status; see e.g. (Chalmers 2010) for a discussion of AI consciousness, and (Kagan 2019) for a discussion of moral status in unconscious but agential AI.
3. Some of these could be at least partially available to enhanced or uploaded human beings (Bostrom 2008a, 2008b; Chalmers 2010).
4. We thank Daniel Dewey for suggesting this term.
5. It may be unclear, however, whether an exact or almost exact copy of an existing mind would constitute a new distinct person or instead an additional instantiation of the person whose mind served as the template.
6. Hanson (2016, pp. 63–5) argues that cost-increases with speedup would be initially near-linear, i.e. 2× speedup requiring close to 2× hardware budget, up to substantially superhuman speeds.
7. David Pearce (1995) has argued that biological minds could be engineered to run on “gradients of bliss” rather than the full current pain–pleasure span.


8. Cf. (Agar 2010, pp. 164–89).
9. One might think that a hedonic state that fully captures the attention of a mind and overrides all other concerns would constitute an in-principle maximum of hedonic intensity. However, it seems plausible that a larger mind that is “more conscious” could in the relevant sense contain “a greater amount” of maximally intense hedonic experience.
10. As does Parfit (1984, p. 498), citing Rawls (1971, p. 432), who drew from Stace (1944).
11. Harsanyi (1953) showed that a weighted sum of utility functions is optimal under certain assumptions, but the theorem leaves the values of the weights undetermined.
12. E.g. (MacAskill, Cotton-Barratt, and Ord 2020).
13. This issue is especially acute since many theories of consciousness specified enough to consider computational implementations appear susceptible to extremely minimal implementations (Herzog, Esfeld, and Gerstner 2007).
14. Frick (2014) offers a recent attempt in line with the slogan.
15. Cf. (Calo 2015).
16. E.g. (Williams 2006).
17. Those practices are, of course, subject to a cosmopolitan critique; e.g. (Singer 1981; Appiah 2006).
18. E.g. (Habermas 2003; Sandel 2007).

References

Agar, N. (2010) Humanity’s End. The MIT Press, pp. 164–89.
Aghion, P., Jones, B. F., and Jones, C. I. (2017) “Artificial Intelligence and Economic Growth,” National Bureau of Economic Research Working Paper Series, No. 23928.
Appiah, A. (2006) Cosmopolitanism: Ethics in a World of Strangers. Allen Lane.
Bostrom, N. (2003) “Astronomical Waste: The Opportunity Cost of Delayed Technological Development,” Utilitas, 15(3), pp. 308–14.
Bostrom, N. (2008a) “Letter from Utopia,” Studies in Ethics, Law, and Technology, 2(1).
Bostrom, N. (2008b) “Why I Want to be a Posthuman when I Grow Up,” in Gordijn, B. and Chadwick, R. (eds), Medical Enhancement and Posthumanity. Springer Netherlands, pp. 107–36.
Bostrom, N. (2013) “Existential Risk Prevention as Global Priority,” Global Policy, 4(1), pp. 15–31.
Bostrom, N. and Yudkowsky, E. (2014) “The Ethics of Artificial Intelligence,” in Frankish, K. and Ramsey, W. M. (eds), The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, pp. 316–34.
Calo, R. (2015) “Robotics and the Lessons of Cyberlaw,” California Law Review, 103(3), p. 529.
Chalmers, D. (2010) “The Singularity: A Philosophical Analysis,” Journal of Consciousness Studies, 17(9–10), pp. 7–65.


Frick, J. D. (2014) “Making People Happy, Not Making Happy People”: A Defense of the Asymmetry Intuition in Population Ethics (Doctoral dissertation).
Greaves, H. and Ord, T. (2017) “Moral Uncertainty About Population Axiology,” Journal of Ethics and Social Philosophy, 12(2), pp. 135–67.
Habermas, J. (2003) The Future of Human Nature. Polity Press.
Hanson, R. (2001) Economic Growth Given Machine Intelligence. Technical Report, University of California, Berkeley.
Hanson, R. (2016) The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press, pp. 63–5.
Harsanyi, J. (1953) “Cardinal Utility in Welfare Economics and in the Theory of Risk-taking,” Journal of Political Economy, 61(5), pp. 434–5.
Herzog, M. H., Esfeld, M., and Gerstner, W. (2007) “Consciousness & the Small Network Argument,” Neural Networks, 20(9), pp. 1054–6.
Hurka, T. and Tasioulas, J. (2006) “Games and the Good,” Proceedings of the Aristotelian Society, Supplementary Volumes, 80, p. 224.
Kagan, S. (2019) How to Count Animals, more or less. Oxford University Press.
MacAskill, W., Cotton-Barratt, O., and Ord, T. (2020) “Statistical Normalization Methods in Interpersonal and Intertheoretic Comparisons,” Journal of Philosophy, 117(2), pp. 61–95.
Narveson, J. (1973) “Moral Problems of Population,” The Monist, 57(1), pp. 62–86.
Nordhaus, W. D. (2007) “Two Centuries of Productivity Growth in Computing,” The Journal of Economic History, 67(1), pp. 128–59.
Nozick, R. (1974) Anarchy, State, and Utopia. Basic Books, p. 41.
Parfit, D. (1984) Reasons and Persons. Oxford University Press, pp. 343, 388–9, 498.
Pearce, D. (1995) Hedonistic Imperative.