Precautionary Reasoning in Environmental and Public Health Policy (The International Library of Bioethics, 86) 3030707903, 9783030707903

This book fills a gap in the literature on the Precautionary Principle by placing the principle within the wider context …


English · Pages 353 [349] · Year 2021


Table of contents:
Acknowledgments
Contents
Abbreviations
List of Figures
List of Tables
1 Precautionary Reasoning
1.1 The Precautionary Principle
1.2 Foundations of Precautionary Reasoning
1.3 The Plan for This Book
References
2 Precautionary Reasoning and Decision Theory
2.1 Overview of Decision Theory
2.2 Decision-Making Under Ignorance
2.3 The Problem of Value Uncertainty
2.4 The Problem of Implausible Outcomes
2.5 Transitioning from Decision-Making Under Ignorance to Decision-Making Under Risk
2.6 Interpretations of Probability
2.7 Decision-Making Under Risk
2.8 Problems with Expected Utility Theory
2.9 Social Choice Theory
2.10 Reflections on Democracy
2.11 Conclusion
References
3 Precautionary Reasoning and Moral Theory
3.1 What Are Moral Theories?
3.2 Utilitarianism
3.3 Kantianism
3.4 Virtue Ethics
3.5 Natural Law
3.6 Natural Rights
3.7 John Rawls’ Theory of Justice
3.8 Environmental Ethics
3.9 Feminist Ethics
3.10 Conclusion
References
4 The Precautionary Principle
4.1 Some Background on the Precautionary Principle
4.2 Definitions of the Precautionary Principle
4.3 Criticism #1: The Precautionary Principle Is Vague
4.4 Criticism #2: The Precautionary Principle Is Incoherent
4.5 Reasonableness of Precautionary Measures
4.6 Criticism #3: The Precautionary Principle Is Opposed to Science, Technology, and Economic Development
4.7 Defining the Precautionary Principle
4.7.1 The Precautionary Principle, Decision Theory, and Moral Theory
4.8 Other Interpretations of the Precautionary Principle
4.9 Applying the Precautionary Principle
4.9.1 Case 1: Changing Jobs
4.9.2 Case 2: Autonomous Vehicles
4.10 Usefulness of the Precautionary Principle
4.11 Objections and Replies
4.12 Conclusion
References
5 Precautionary Reasoning and the Precautionary Principle
5.1 Foundations of Precautionary Reasoning Redux
5.2 Individual Decisions
5.3 Decisions for Others
5.4 Social Choices
5.5 Arguments for Democracy
5.6 Problems with Democracy
5.7 Public, Stakeholder, and Community Engagement
5.8 Choosing Decision-Making Rules
5.9 Conclusion
References
6 Chemical Regulation
6.1 Pharmaceuticals
6.2 Dietary Supplements
6.3 Alcohol and Tobacco
6.4 Pesticides
6.5 Toxic Substances
6.6 Air and Water Pollution
6.7 Chemicals in the Workplace
6.8 Precautionary Reasoning and Chemical Regulation
6.9 Regulation of Toxic Substances
6.10 Regulation of Drugs
6.11 Regulation of Electronic Cigarettes
6.12 Protecting Susceptible Populations from Chemical Risks
6.13 Expected Utility Theory and Chemical Regulation
6.14 Conclusion
References
7 Genetic Engineering
7.1 DNA, RNA, Genes, and Proteins
7.2 Genes and Reproduction
7.3 Genotypes and Phenotypes
7.4 Genetic Engineering
7.5 Applications of Genetic Engineering
7.6 Regulation of Genetic Engineering
7.7 Two Overarching Objections to Genetic Engineering
7.8 Applying the Precautionary Principle to Genetic Engineering
7.9 Genetic Engineering of Microbes
7.10 Genetic Engineering of Plants
7.11 Genetic Engineering of Animals
7.12 Genetic Engineering of Human Beings
7.13 Somatic Genetic Engineering
7.14 Germline Genetic Engineering
7.15 Benefits of Germline Genetic Engineering
7.16 Risks of Germline Genetic Engineering
7.17 Germline Genetic Engineering and the Precautionary Principle
7.18 Germline Genetic Engineering for Preventing Monogenic Disorders
7.19 Germline Genetic Engineering for Preventing Polygenic Disorders
7.20 Germline Genetic Engineering for Enhancement
7.21 Conclusion
References
8 Dual Use Research in the Biomedical Sciences
8.1 A Brief History of Biowarfare and Bioterrorism
8.2 Dual Use Research
8.3 Legal Issues Concerning Publication of Dual Use Research
8.4 Ethical Dilemmas Concerning Dual Use Research
8.5 Evaluating the Risks and Benefits of Dual Use Research
8.6 Applying the Precautionary Principle to Dual Use Research
8.7 Conclusion
References
9 Public Health Emergencies
9.1 Public Health Emergencies
9.2 Ethical and Policy Issues Related to Emergency Preparedness and Response
9.3 Were Lockdowns a Reasonable Response to the COVID-19 Pandemic?
9.4 Testing and Approving Medical Products Used in Public Health Emergencies
9.5 Allocation of Scarce Medical Resources
9.6 Disaster Preparedness
9.7 Conclusion
References
10 Conclusion
10.1 Summary of Key Arguments and Conclusions
10.2 Applications
10.3 Limitations and Further Research
10.4 Final Thoughts
References
Bibliography
Index


The International Library of Bioethics 86

David B. Resnik

Precautionary Reasoning in Environmental and Public Health Policy

The International Library of Bioethics

Founding Editors: David C. Thomasma, David N. Weisstub, Thomasine Kimbrough Kushner

Volume 86

Series Editor: Dennis R. Cooley, North Dakota State University, History, Philosophy, & Religious Studies, Fargo, ND, USA

Advisory Editor: David N. Weisstub, Faculty of Medicine, University of Montreal, Montréal, QC, Canada

Editorial Board: Terry Carney, Faculty of Law Building, University of Sydney, Sydney, Australia; Marcus Düwell, Philosophy Faculty of Humanities, Universiteit Utrecht, Utrecht, The Netherlands; Søren Holm, Centre for Social Ethics and Policy, The University of Manchester, Manchester, UK; Gerrit Kimsma, Radboud UMC, Nijmegen, Gelderland, The Netherlands; David Novak, University of Toronto, Toronto, ON, Canada; Daniel P. Sulmasy, Edmund D. Pellegrino Center for Clinical, Washington, DC, USA

The International Library of Bioethics (formerly known as the International Library of Ethics, Law and the New Medicine) comprises volumes with an international and interdisciplinary focus on foundational and applied issues in bioethics. With this renewal of a successful series we aim to meet the challenge of our time: how to direct biotechnology to human and other living things’ ends, how to deal with changed values in the areas of religion, society, and culture, and how to formulate a new way of thinking, a new bioethics. The International Library of Bioethics focuses on the role of bioethics against the background of increasing globalization and interdependency of the world’s cultures and governments, with mutual influencing occurring throughout the world in all fields. The series will continue to focus on perennial issues of aging, mental health, preventive medicine, medical research issues, end of life, biolaw, and other areas of bioethics, while expanding into other current and future topics. We welcome book proposals representing the broad interest of this series’ interdisciplinary and international focus. We especially encourage proposals addressing aspects of changes in biological and medical research and clinical health care, health policy, medical and biotechnology, and other applied ethical areas involving living things, with an emphasis on those interventions and alterations that force us to re-examine foundational issues.

More information about this series at http://www.springer.com/series/16538

David B. Resnik

Precautionary Reasoning in Environmental and Public Health Policy

David B. Resnik National Institutes of Health National Institute of Environmental Health Sciences Research Triangle Park, NC, USA

ISSN 2662-9186    ISSN 2662-9194 (electronic)
The International Library of Bioethics
ISBN 978-3-030-70790-3    ISBN 978-3-030-70791-0 (eBook)
https://doi.org/10.1007/978-3-030-70791-0

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021

All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

This book is dedicated to my parents, Michael David Resnik and Janet Depping Resnik.

Acknowledgments

For helpful discussions, comments, and resources I am grateful to Paul Doetsch, Kevin C. Elliott, Christine Flowers, Ramin Karbasi, Sheldon Krimsky, Christian Munthe, Les Reinlib, Michael D. Resnik, Daniel Steel, Jackie Stillwell, and two anonymous reviewers. Research for this book was sponsored by the Intramural Program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or US government.


Abbreviations

AIDS: Acquired Immunodeficiency Syndrome
AU: Act Utilitarianism
BPA: Bisphenol A
BSL: Biosafety Laboratory
Bt: Bacillus thuringiensis
BWC: Biological Weapons Convention
CAR: Chimeric Antigen Receptor
CBD: Convention on Biodiversity
CDC: Centers for Disease Control and Prevention
CI: Categorical Imperative
COVID-19: Coronavirus Disease of 2019
CRISPR: Clustered Regularly Interspaced Short Palindromic Repeats
CSA: Controlled Substances Act
DDT: Dichlorodiphenyltrichloroethane
DHHS: Department of Health and Human Services
DNA: Deoxyribonucleic Acid
DSHEA: Dietary Supplement Health and Education Act
DSMB: Data and Safety Monitoring Board
ECA: European Chemicals Agency
ED: Emergency Department
ENM: Engineered Nanomaterial
EPA: Environmental Protection Agency
EU: European Union
EUA: Emergency Use Authorization
EUT: Expected Utility Theory
FBI: Federal Bureau of Investigation
FDA: Food and Drug Administration
FIFRA: Federal Insecticide, Fungicide, and Rodenticide Act
FOIA: Freedom of Information Act
FQPA: Food Quality Protection Act
GDP: Gross Domestic Product
GGE: Germline Genetic Engineering
GM: Genetically Modified
GMO: Genetically Modified Organism
HFEA: Human Fertilisation and Embryology Authority
HIV: Human Immunodeficiency Virus
IACUC: Institutional Animal Care and Use Committee
IPCC: Intergovernmental Panel on Climate Change
IRB: Institutional Review Board
LAI: Laboratory Acquired Infection
MERS: Middle East Respiratory Syndrome
NIH: National Institutes of Health
NRC: National Research Council
NSABB: National Science Advisory Board for Biosecurity
NASEM: National Academies of Sciences, Engineering, and Medicine
NSAID: Non-Steroidal Anti-Inflammatory Drug
NTP: National Toxicology Program
OSHA: Occupational Safety and Health Administration
p: Probability
PIGT: Preimplantation Genetic Testing
PNAS: Proceedings of the National Academy of Sciences
PNGT: Prenatal Genetic Testing
PP: Precautionary Principle
PPM: Parts Per Million
PPP: Potential Pandemic Pathogen
RAC: Recombinant DNA Advisory Committee
RCT: Randomized Controlled Trial
REACH: Registration, Evaluation, Authorisation and Restriction of Chemicals
RNA: Ribonucleic Acid
RU: Rule Utilitarianism
SARS: Severe Acute Respiratory Syndrome
SCA: Sickle Cell Anemia
SGE: Somatic Genetic Engineering
SWF: Social Welfare Function
TIBA: 2,3,5-Triiodobenzoic Acid
TSCA: Toxic Substance Control Act
UN: United Nations
UNESCO: United Nations Educational, Scientific, and Cultural Organization
USAMRIID: United States Army Medical Research Institute of Infectious Diseases
USDA: US Department of Agriculture
WTP: Willingness to Pay

List of Figures

Fig. 4.1: Decision tree for applying the precautionary principle
Fig. 7.1: Deoxyribonucleic acid. National Human Genome Research Institute, public domain, https://www.genome.gov/genetics-glossary/Deoxyribonucleic-Acid
Fig. 7.2: DNA replication. National Human Genome Research Institute, public domain, https://www.genome.gov/genetics-glossary/DNA-Replication
Fig. 7.3: DNA transcription and translation. Copyright 2017 by Terese Winslow, U.S. government has certain rights, used with permission, https://www.cancer.gov/publications/dictionaries/cancer-terms/def/translation
Fig. 7.4: Mitosis. National Human Genome Research Institute, public domain, https://www.genome.gov/sites/default/files/tg/en/illustration/mitosis.jpg
Fig. 7.5: Meiosis. National Human Genome Research Institute, public domain, https://www.genome.gov/sites/default/files/tg/en/illustration/meiosis.jpg
Fig. 7.6: Research and clinical applications of stem cells. Copyright 2008 by Terese Winslow, U.S. government has certain rights, used with permission, https://stemcells.nih.gov/research/promise.htm
Fig. 7.7: Sickle cell disease inheritance. Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/ncbddd/sicklecell/traits.html
Fig. 7.8: Genetic modification of bacteria to produce insulin. National Library of Medicine, public domain, https://www.nlm.nih.gov/exhibition/fromdnatobeer/exhibition-interactive/recombinant-DNA/recombinant-dna-technology-alternative.html
Fig. 7.9: Strategies for creating transgenic mice. Tratar et al. (2018), Creative Commons license
Fig. 7.10: Using CRISPR to edit a gene. Costa et al. (2017), Creative Commons license
Fig. 7.11: CAR T cell therapy. Copyright 2017 by Terese Winslow, U.S. government has certain rights, used with permission, https://www.cancer.gov/publications/dictionaries/cancer-terms/def/car-t-cell-therapy
Fig. 7.12: Gene editing with gene drive. GM Watch (2019), Creative Commons license
Fig. 7.13: Bioengineered food label. U.S. Department of Agriculture, public domain, https://www.ams.usda.gov/rules-regulations/be/consumers
Fig. 8.1: Smallpox lesions on the torso of a patient in Bangladesh in 1973. Source: James Hicks, Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/smallpox/clinicians/clinical-disease.html#one
Fig. 8.2: Electron micrograph image of spores from the Sterne strain of Bacillus anthracis bacteria. Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/vaccines/vpd/anthrax/photos.html
Fig. 8.3: A depiction of a generic influenza virus, showing its RNA inside a protein-covered shell. Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/flu/images/virus/fluvirus-antigentic-characterization-medium.jpg
Fig. 9.1: SARS-CoV-2 structure. Singh (2020), Creative Commons license
Fig. 9.2: 1918 flu pandemic: the Oakland Municipal Auditorium in use as a temporary hospital. The photograph depicts volunteer nurses from the American Red Cross tending influenza sufferers in the Oakland Auditorium, Oakland, California, during the influenza pandemic of 1918. Wikimedia Commons, https://commons.wikimedia.org/wiki/File:1918_flu_in_Oakland.jpg, Creative Commons license
Fig. 9.3: Stop the spread of germs. Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/coronavirus/2019-ncov/travelers/communication-resources.html

List of Tables

Table 2.1: Taking an umbrella to work
Table 2.2: Decision matrix for investments: maximin
Table 2.3: Regret matrix for investments
Table 2.4: Expanded regret matrix for investments
Table 2.5: Optimism-pessimism matrix for investments (optimistic)
Table 2.6: Optimism-pessimism matrix for investments (pessimistic)
Table 2.7: Decision matrix for investments: principle of indifference
Table 2.8: Decision matrix for investments with expected utilities
Table 2.9: Decision matrix for investments with expected utilities
Table 2.10: Decision matrix for cancer treatment with expected utilities
Table 2.11: Illustration of Condorcet’s voting paradox
Table 2.12: Illustration of unanimity
Table 2.13: Illustration of irrelevance of independent alternatives
Table 2.14: Illustration of dictatorship
Table 2.15: Positional vote counting
Table 2.16: Voting with cardinal utilities
Table 2.17: Voting with cardinal utilities
Table 3.1: Distributions of wealth in three societies
Table 3.2: List of moral values
Table 4.1: Standards of evidence
Table 4.2: Decision matrix for investments
Table 4.3: Decision matrix for investments with expected utilities
Table 4.4: Criteria for reasonableness of precautionary measures
Table 5.1: Traveling by car vs. train
Table 5.2: Traveling by car vs. train
Table 5.3: Approaches to decision-making for others
Table 5.4: Problems with democracy
Table 5.5: Considerations for using the precautionary principle to make decisions
Table 6.1: Drug schedules under the Controlled Substances Act
Table 6.2: Types of government protection from chemical risks
Table 7.1: Biosafety levels, based on information available at Public Health Emergency (2015)
Table 7.2: Key distinctions in human genetic engineering
Table 7.3: Decision matrix for using GGE to prevent a serious, monogenic disorder
Table 8.1: Select agents and toxins list, https://www.selectagents.gov/SelectAgentsandToxinsList.html
Table 9.1: Top 25 global causes of death in 2017 (data from Ritchie and Roser 2019b)

Chapter 1

Precautionary Reasoning

Every day we make decisions involving risks, benefits, and precautions. We engage in what I call precautionary reasoning in a variety of decision-making contexts, including lifestyle choices (e.g. smoking tobacco, riding motorcycles, eating excessively), financial decisions (e.g. investing money, loaning money, purchasing goods), health care choices (e.g. seeking medical treatment, taking preventative measures), and public policy decisions (e.g. approving drugs, developing new technologies, enacting environmental or public health protections). Precautionary decisions range from the mundane (e.g. whether to take an umbrella to work) to the profound (e.g. whether to permit human genome editing), and they may impact anywhere from a few people to the entire world. Since the mid-twentieth century, philosophers, economists, psychologists, political scientists, and legal theorists have written a great deal about the ethical, legal, and policy issues related to assessing and managing risks. In this book, I will develop a comprehensive framework for thinking about risks that draws insights from different scientific and humanistic disciplines and has applications for environmental and public health policy. The framework will help us gain a better understanding of precautionary reasoning in general as well as a form of precautionary reasoning known as the precautionary principle (PP).

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_1

1.1 The Precautionary Principle

While phrases like "better safe than sorry" and "an ounce of prevention is worth a pound of cure" have been part of our commonsense thinking about risks for centuries, the PP was developed as an alternative to evidence-based approaches, such as risk management and cost/benefit analysis, used by governments to make policy decisions involving public health and the environment. The key insight of the PP is that we may need to take action to deal with possible harms even when scientific evidence concerning those harms is lacking or inconclusive. The PP has generated a great deal of controversy since its origins in Swedish and German environmental legal scholarship in the 1970s (Sandin 2004; Sunstein 2005). In the last thirty years, the PP has been incorporated into numerous international treaties and declarations and governmental policies (European Commission 2000; Kriebel et al. 2001; Sunstein 2005). In 1992, an influential statement of the PP appeared as Principle 15 of the Rio Declaration on Environment and Development:

    In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (United Nations Conference on Environment and Development 1992)

Opponents of the PP—and there are many—have argued that the principle is vague, incoherent, and fundamentally opposed to scientific, technological, and economic progress (Holm and Harris 1999; Brombacher 1999; Goklany 2001; Foster et al. 2000; Sunstein 2005; Peterson 2006, 2007; United States Chamber of Commerce 2010). Proponents of the PP have tried to answer these charges and formulate versions of the principle that stand up to critical scrutiny (Cranor 2001; Resnik 2003, 2004; Sandin 2004; Sandin et al. 2002; Munthe 2011, 2017; Steel 2015; Hartzell-Nichols 2013, 2017). In this book, I will examine the PP and other approaches to precautionary reasoning in greater depth. I will argue that while the PP remains highly contentious, it has potential value as a public policy tool because it captures some of our commonsense ideas about reasonably balancing risks and benefits in the face of uncertainty (Munthe 2011; Steel 2015; Hartzell-Nichols 2013, 2017). The PP can play an important role in public health and environmental policy within a larger framework of precautionary reasoning. The approach one should take to precautionary reasoning depends on the various conditions—scientific, technological, personal, moral, social, and political—that form the context of the decision. In some contexts, the PP may be an appropriate tool to use in decision-making; in others, it may not be. Also, we may change decision-making approaches as conditions change. For example, one could use the PP for making decisions when scientific evidence concerning a risk is inconclusive and then shift to cost/benefit analysis as evidence reaches a sufficient level of conclusiveness. On this view, the PP complements other approaches to precautionary reasoning and can provide useful guidance when these approaches fail to adequately protect society from risks.

1.2 Foundations of Precautionary Reasoning

Before I can defend these ambitious, wide-ranging claims concerning the PP and its role in public policy, I need to say a few words about precautionary reasoning in general that will play an important role in the development of my arguments and conclusions. I will rely upon these foundational points throughout the book.

First, we usually have an array of precautionary measures we can use to deal with risks,¹ including avoidance,² minimization (or reduction), and mitigation (Munthe 2011). We may decide to implement various combinations of these measures—or no measures at all—to manage the risks we face. If we judge that the benefits of an activity are not worth the risks, we may decide to avoid the activity or postpone it until conditions change.³ For example, suppose that John judges that the benefits of skydiving are not worth the risks for him, so he vows never to skydive. Jane, on the other hand, decides that the benefits of skydiving are worth the risks, and she decides to go skydiving. However, she still wants to minimize the risks of skydiving, so she gathers information on skydiving companies and chooses the one with the best safety record and highest ratings. Very often, we decide to mitigate risks when we expect that they may materialize, despite our efforts to avoid, prevent, or minimize them. For example, most automobiles include a spare tire in case another tire goes flat. The spare does not prevent flat tires, nor does it minimize this risk, but it provides drivers with a way of dealing with flat tires with minimal cost or inconvenience.

It is important to appreciate that there are often numerous ways of dealing with risks besides avoiding them, and that strategies for minimizing and mitigating risks can play a key role in social policy. All too often, it seems, discussions of controversial issues in public and environmental health focus on avoiding risks, to the exclusion of other options (Munthe 2011). Ignoring these other options can lead to policies that are unrealistic and ineffective, because it may be very difficult to avoid some risks.
For example, one might argue that discussions of climate change have focused too much on strategies for avoiding climate change by reducing human causes of global warming and have not paid enough attention to mitigating the environmental and public health effects of global warming (Intergovernmental Panel on Climate Change 2014).

For another example⁴ of different ways of dealing with risks, suppose that a pharmaceutical company has developed a new anti-coagulant medication and has submitted an application to the Food and Drug Administration (FDA), a US government agency charged with regulating drugs (Gassman et al. 2017). Data from clinical trials indicate that the drug is more effective than standard medications at controlling clotting and its associated risks, such as ischemic stroke, heart attack, deep vein thrombosis, pulmonary embolism, and death. The drug also can cause excessive bleeding, which has risks, such as bruising, anemia, and hemorrhagic stroke. However, these risks are about the same as those associated with other medications, and these risks can be minimized if levels of the drug in the bloodstream are tightly controlled so they do not exceed safe levels (Piran and Schulman 2019). The FDA reviews the evidence from clinical trials of the drug conducted by the company and weighs the drug's benefits and risks. The agency must consider the risks and benefits of approving the drug or not approving it (i.e. avoiding its risks). If the FDA decides to approve the drug (and not avoid the drug's risks), it could take measures to minimize the drug's risks, such as mandating that the label of the drug include information concerning risks, approving the drug only for use in certain categories of patients who are most likely to benefit from it, and requiring the company to conduct additional research after the drug is approved to obtain more information about the drug's risks and benefits when used in medical practice (Gassman et al. 2017). Doctors who prescribe the drug can mitigate its risks by monitoring patients closely and treating patients who develop bleeding problems with medications that counteract the drug's effects (Piran and Schulman 2019).

¹ I am using 'risk' in a very general sense here as a possible harm or adverse outcome. The risk assessment literature distinguishes between hazards and risks. A hazard is something that can cause harm, such as lightning, and a risk is the probability that something which has been identified as a hazard will cause harm, such as the chance of being struck by lightning (National Research Council 2009). While this way of defining risks has useful applications in hazard identification, risk assessment, and risk management, it is not how we ordinarily talk about risks. For example, Webster's Dictionary (2019) defines risk as "the possibility or chance of loss, danger or injury." Though I will mostly use the ordinary definition of risk in this book, I will, at various places, talk about possible harms to indicate that I am referring to harms that we do not have probability estimates for.
² We often speak of preventing harms or risks, but prevention is either avoidance or minimization. For example, we can prevent the risk of being bitten by a shark by not swimming in the ocean (i.e. avoiding the risk) or by swimming only close to shore (i.e. minimizing it).
³ Prohibiting a risk is a way of avoiding it. For example, a government agency could avoid exposing the public to a dangerous chemical by prohibiting (or banning) it. Outright bans on chemicals tend to be rare, so most government regulations that manage risks are a type of risk minimization. For example, allowing a pesticide to be used only under certain conditions would be a form of risk minimization. One can also avoid a risk by imposing a moratorium on an activity until one has more knowledge of risks and benefits. Moratoria are useful policy tools because they allow society temporarily to avoid (or postpone) risk-taking activities, such as technology development. In Chapter 7, I will discuss a moratorium as a policy option for human genome editing.
Second, we may face risks related to taking action or not taking action (Munthe 2011). For example, suppose that Wayne is considering whether to get a seasonal influenza vaccine this year. The vaccine has some minor, temporary risks, such as a mild fever and fatigue lasting about a day, and pain or bruising at the injection site. The benefit of the vaccine is that it will reduce his probability of getting the flu by 60%. If he gets the flu, he could have significant fatigue, body aches, headaches, fever, cough, and congestion, lasting one to two weeks. In this case, Wayne must choose between the risks of taking action and getting vaccinated and the risks of not taking action and possibly contracting the flu. Since one might argue that not taking action is a form of action, we could more precisely say that Wayne must choose between the risks of active action (getting vaccinated) and passive action (not getting vaccinated).

Third, precautionary reasoning is inherently normative because risk management has a moral, social, and political dimension (Moser 1990; Shrader-Frechette 1991; Hansson 2003, 2010; Munthe 2011). In this book, I will use the word 'reasonable' to capture this normative dimension of precautionary reasoning. Precautionary measures, such as avoiding, minimizing, or mitigating risks, can be judged to be reasonable or unreasonable (Munthe 2011). Taking a risk is reasonable if it is morally, or in some cases politically,⁵ acceptable to take. If taking a risk is not reasonable, then one should avoid or minimize it. A reasonable person, in this sense, is someone who makes prudent and responsible decisions related to risks and benefits, given his or her options and circumstances.⁶ Reasonableness differs from instrumental rationality insofar as instrumental rationality refers to taking effective means to obtain one's ends, which may or may not be morally or politically acceptable (Audi 2001; Gewirth 2001). Reasonableness is founded upon a normative framework for the assessment of ends or goals, while instrumental rationality is not. Instrumental rationality is value-neutral.⁷ For example, it might be rational for Kevin to use a shotgun to kill his wife's lover, if his goal is to kill his wife's lover and shooting her lover with a shotgun is an effective means of killing him. It would not be reasonable for Kevin to shoot his wife's lover, however, because killing is immoral, and it is unreasonable to have the goal of killing someone.

Though it is usually reasonable to act rationally, sometimes it may not be. For example, suppose the most effective way of feeding nursing home patients is to give them the same meal each day, which optimizes nutritional value and minimizes cost. We would not view this approach to feeding nursing home patients as reasonable because it would severely limit their dietary choices and therefore not respect their dignity and worth. For another example, consider taking a risk for one's self versus imposing a risk on someone else. We generally allow mentally competent, well-informed adults to take a variety of risks, such as smoking, drinking, and skydiving, which do not involve harm to others (Feinberg 1986). The obligation to not harm others is a widely accepted moral principle that requires us not only to avoid intentionally harming others but also to avoid imposing excessive risks of harm on them (Feinberg 1984; Beauchamp and Childress 2012). Putting these two ideas together, we could say that while it would be reasonable for Sophia to drive an automobile 150 miles per hour on a racetrack, because she is only taking a risk for herself and she has a right to do so, it would not be reasonable for her to drive an automobile at the same speed on a highway, because this would impose excessive risks on other people. While normal driving inherently imposes risks on others, we regard these risks as reasonable, given the social and economic benefits of driving, when drivers do not exceed a safe speed.

Reasonableness applies not only to individuals but also to organizations and groups. In the drug approval case mentioned above, for example, it would be reasonable for the government agency to approve the new anti-coagulant if it has carefully reviewed the evidence from clinical trials and determined that the benefits of the medication outweigh its risks, and the agency implements measures to minimize these risks, such as safety labelling. It would be unreasonable for the agency to approve the drug without carefully reviewing the evidence, determining that its benefits outweigh its risks, or taking steps to minimize the drug's risks (Gassman et al. 2017). Reasonableness is a part of group decision-making insofar as the group's decision can be viewed as fair (or unfair). For example, suppose some friends are trying to decide where to sit at a baseball game and that the stadium has protective netting only for 10% of the seats, which are behind home plate. Some members of the group want to sit behind the protective netting to avoid getting hit by a foul ball, but others want to sit on the third base side of the field to have a chance at catching a foul ball and to be closer to the bathrooms. Assuming that the friends all want to sit together, they will need to decide how to manage the risks related to their seating choice. One might argue that the group's choice would be reasonable if it results from a fair decision-making procedure, such as voting. (We shall discuss more examples of group decision-making below.)

Fourth, the decisions we make concerning risks, benefits, and precautions depend on a variety of contextual factors (or conditions) including, but not limited to (Shrader-Frechette 1991; Munthe 2011):

• The circumstances (or facts) related to the decision;
• Our available options;
• Our values, which we use to evaluate the outcomes related to the options;
• Our knowledge (or lack thereof, i.e. uncertainty) concerning outcomes, including our knowledge of probabilities or causal relationships;
• Our tolerance for risk and uncertainty;⁸
• Interpersonal and social relationships (i.e. whether we are making decisions only for ourselves or for or with other people).

For an example of how circumstances and values may impact precautionary decisions, suppose that for the last 15 minutes Greg has been experiencing shortness of breath, dizziness, nausea, and chest and shoulder pain and that his symptoms have gotten progressively worse. Greg is a 55-year-old male who is otherwise in good health. He has a wife and two grown children. He wants very much to live long enough to travel with his wife and spend some time with his grandchildren. Most people would agree that Greg should go to the nearest emergency department (ED) or call an ambulance as soon as possible to avoid dying from a heart attack or suffering other adverse health effects. This would be a reasonable precautionary measure, given his circumstances (WebMD 2019).

⁴ This is a hypothetical example based on facts. I will discuss some real examples in Chapter 6.
⁵ By 'politically acceptable' I mean 'resulting from a fair social decision-making process.' I discuss political fairness in greater depth in Chapters 3, 4, and 5.
⁶ In tort law, a reasonable person is someone who uses good judgment and does not impose unreasonable risks on others (Kionka 2015). A risk is reasonable, in this sense, if the benefits of the risk to the individual and society outweigh the burdens of the risk. The notion of reasonableness I am using in this book is similar to the notion of reasonableness found in tort law, but it goes beyond it because it includes moral considerations.
⁷ By 'value-neutral' I mean neutral with respect to moral, social, or political values. Instrumental rationality is not neutral with respect to logical values, such as consistency (Resnik 1985).
⁸ In this book I will distinguish between risk and uncertainty. A decision is made under risk when we know the probabilities of the different outcomes. A decision is made under uncertainty when we do not know the probabilities (Resnik 1987). Probabilities may be known with degrees of certainty. For example, one might argue that a probability which is based on data from scientific tests or experiments is more certain than a probability that is based on a subjective guess (Earman 1992). I will discuss these points in more depth in Chapters 2, 4, and 5.

But suppose one changes the facts a bit. Suppose that the person who is having chest pains is Gene, an 85-year-old hospice patient who is dying from colon cancer and is in intractable pain. Under these circumstances, it might not be reasonable for Gene to go to the ED or call 911. If Gene is in intractable pain and looks forward to death as an end to his suffering, then he may decide to do nothing. In both situations, a person faces a choice about how to deal with the risk of a heart attack. In the first situation, Greg wants to avoid or minimize the risk of a heart attack; in the second, Gene welcomes this outcome and considers it not to be a risk but a benefit. The possible outcomes are the same in both cases, but the decision-makers evaluate them differently based on their circumstances and values.

Now suppose that the person having the symptoms is Grace, an otherwise healthy 35-year-old woman with a history of generalized anxiety disorder. While she could be having a heart attack, it is more likely that she is not. Whether she decides to go to the ED may depend, in part, on the resources available to her. If Grace has only a modest income, no health insurance, and cannot afford to waste time or money on expensive medical tests to find out that she is not having a heart attack, then she may decide not to go to the ED immediately and to see if her symptoms resolve (Smolderen et al. 2010). We could consider other circumstances that may be relevant to her decision-making, such as practicalities and finances. If Grace decides to go to the ED, and there is more than one nearby, Grace could consider whether the available EDs differ in terms of cost and quality of care (assuming she has the time to do this). Interpersonal and social relationships could also be an important part of the context for this decision.

Thus far we have considered the decision to seek treatment for a possible heart attack from the perspective of the person having the symptoms.
But suppose we consider the decision from the perspective of a health insurer or society. A health insurer might decide not to cover the cost of care for patients who seek treatment for symptoms that are not indicative of having a heart attack, because the insurer might consider this to be a waste of resources (Smolderen et al. 2010). If society is facing a shortage of resources for emergency care, questions may arise concerning the allocation of resources to treat people who are not likely to be having heart attacks. In a public health crisis, such as an earthquake, hurricane, or mass shooting, EDs may be inundated with other patients who have greater medical needs than those who think they may be having heart attacks.

Knowledge may impact decision-making in all these possible heart attack scenarios. In Greg's case, if he is initially experiencing a few minutes of shortness of breath and dizziness, this would probably not be enough evidence of a possible heart attack to seek treatment. As his symptoms intensify, however, his confidence in the decision to seek treatment would also increase (WebMD 2019). For example, if his symptoms start out with shortness of breath and nausea, and then include chest pain, shoulder pain, and dizziness, he could then seek treatment. If his symptoms resolve, however, he could forego seeking treatment. For example, if he has shortness of breath and dizziness and some slight pain in his chest that gets better after five minutes and does not return, he might decide that there is no need to go to the ED. Knowledge could also impact Grace's decision-making. As evidence that she is

probably having a heart attack mounts, she could decide that it is worth going to the ED to seek treatment.

For another example of how knowledge may impact decision-making, suppose that Gina is hiking in the woods when she comes upon a patch of mushrooms. She loves the taste of mushrooms and is very hungry. She picks a large, brown mushroom and is deciding whether to eat it. About 3% of the 5000 types of mushrooms identified by mycologists are poisonous. The adverse effects of mushroom poisoning range from gastrointestinal distress (e.g. nausea, vomiting, pain) to organ failure and death (1% of cases) (Eren et al. 2010). If Gina knows nothing about how to identify poisonous mushrooms, she should not eat it, so that she can avoid the risk of being poisoned. It makes no sense for her to risk her health and possibly her life for the brief enjoyment of eating the mushroom. Suppose, however, that Gina has years of experience with identifying mushrooms and that she knows (with a probability of greater than 99% but not 100%) that this mushroom is safe to eat. Under these circumstances, one could argue that it would be reasonable for Gina to eat the mushroom. Under different circumstances, Gina might decide to eat the mushroom even if the probability that it is safe is lower than 99%. If Gina is lost in the woods and faces a 50% risk of starvation if she does not eat soon, she might decide to eat the mushroom if the probability that it is safe is only 90%, since it is better to take a 10% risk of poisoning than a 50% risk of starvation.

We could include interpersonal and social relationships in this example if we suppose that Gina is making the decision not for herself but for other people. Suppose that she is leading a group of mushroom-lovers on a hike through the woods and a member of the group picks a mushroom and asks her if it is safe to eat.
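Gina's lost-in-the-woods trade-off can be sketched as a simple comparison of the probability of serious harm under each option. This is a minimal sketch: the 10% risk of poisoning and 50% risk of starvation come from the example above, but reducing her reasoning to "choose the option with the lower probability of serious harm" is an illustrative simplification, not a decision rule defended in this book.

```python
# Hedged sketch of Gina's lost-in-the-woods decision. The probabilities
# come from the example in the text; the decision rule (pick the option
# with the lower probability of serious harm) is an illustrative
# simplification.

def safer_option(options):
    """Return the option whose probability of serious harm is lowest."""
    return min(options, key=lambda name: options[name])

options = {
    "eat the mushroom": 0.10,  # 10% risk of poisoning (90% likely safe)
    "do not eat": 0.50,        # 50% risk of starvation
}

print(safer_option(options))  # -> eat the mushroom
```

On these numbers, eating the mushroom is the less dangerous option, matching the conclusion in the text. A fuller treatment would weigh the severity of each outcome as well as its probability, which is what the decision-theoretic tools discussed in Chapter 2 provide.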
One might argue that in this case, she should exhibit a higher degree of precaution than she would for herself.⁹ While it might be morally acceptable for her to take a 1% risk of poisoning to enjoy eating a mushroom, it would be unacceptable for her to take this same risk for someone else, if they have not consented to it, because she has an obligation not to impose unreasonable risks on others. Perhaps she should only recommend that someone else eat a mushroom if she judges that the probability that it is safe is closer to 100%.

Matters would become a bit more complicated if Gina is helping the group decide which mushrooms to include in a pasta sauce they plan to eat for dinner and these mushroom-lovers differ in their willingness to risk poisoning for the sake of eating mushrooms: some members would accept a 1% chance of poisoning, while others might only accept a 0.1% or less chance of poisoning.¹⁰ Suppose, also, that taste is inversely related to risk: the riskier mushrooms tend to taste better than the safer ones. The group could take different approaches to decision-making. Under a democratic approach, Gina could assess the safety of the mushrooms individually

⁹ Another case that illustrates this point is regulation of smoking. It is legal in the US and many countries for adults to take the risk of smoking for themselves, but not legal for them to impose this risk on others by smoking in public.
¹⁰ Another way of putting this is that they differ with respect to their tolerance for risk, or risk-aversion (Thaler 2016). See discussion of investments below.

and the group could vote on which ones should be included in the sauce. Those who do not agree with the group's decision could refuse to eat the sauce. While democracy would seem to be the best way of making these kinds of decisions concerning risks, as we shall see in Chapters 2 and 5, it has some problems and limitations that need to be taken into account to ensure that group decisions are fair. One of these is that democracy may not always give adequate consideration to vulnerable or disenfranchised groups (Gutmann and Thompson 2004; Fishkin 2011; Resnik et al. 2018). For example, suppose that a few members of the mushroom club are more vulnerable to the toxic effects of mushrooms than others. Should the group choose to eat only the safest mushrooms to protect these members from harm even if most of the other members would prefer mushrooms that are not as safe but tastier? Or suppose that some members of the group are more assertive and persuasive than others. Should the group try to counterbalance or buffer their influence on its decision-making?

Although this example seems a bit contrived, similar sorts of issues arise in public policy decisions involving risks, such as the regulation of new technologies. Government agencies in charge of regulating technologies must consider the probabilities and uncertainties related to the risks and benefits of different options. In the anti-coagulant example discussed above, the agency considered the probabilities related to the safety and efficacy of the new medication and made a decision on behalf of the public. In so doing, it had an obligation to take the public's views concerning risk, benefit, and uncertainty into account. Disagreements about risk, benefit, and uncertainty often lead to differing attitudes toward the drug approval process. Members of the public who are wary of risks and uncertainties related to drugs may demand more evidence for approval than those who are not.
Members of the public with a higher tolerance for risk and uncertainty may focus on the benefits of drugs and may not want approval to be delayed while agencies are waiting for more evidence (Hawthorne 2005; Gassman et al. 2017; Wadman 2019). Important questions concerning justice and democracy arise when agencies are making decisions on behalf of the public, because agencies have an obligation to take the views of the public into account, including minority opinions. Agencies also have an obligation to protect vulnerable groups from harm (Whiteside 2006; Shrader-Frechette 2007; Hartzell-Nichols 2013, 2017; Resnik et al. 2018).

Another important consideration in group decision-making concerning risks is whether the group should be able to prevent some of its members from taking risks. Suppose a member of the mushroom club wants to try eating a mushroom that is likely to produce euphoria and hallucinations but could also have toxic and potentially life-threatening effects, but that most members of the club frown upon this type of risky experimentation. Should members of the club be able to prohibit him or her from eating the mushroom to protect him or her from harm? Would that be an unjustified infringement on members' freedom of choice? This type of ethical conundrum raises the sorts of issues that arise in controversies concerning the legalization of drugs, alcohol, and tobacco. I will consider this topic again in Chapter 6.

Fifth, there are a variety of rules or procedures¹¹ we can use to make precautionary decisions; Sixth, which decision-making rule or procedure one should use depends, in large part, on contextual factors (or conditions) related to the decision; and Seventh, it is reasonable to consider revising decision-making rules or procedures when contextual factors (or conditions) change. I will develop these three closely related, crucial points more fully later in the book, but for now I will illustrate them with some simple examples related to investment choices.

Investment choices involve taking financial risks to receive financial benefits. Consider three people, Todd, Tabatha, and Tony, who take different approaches to investing. Todd is not exceedingly wealthy, but he has some money to invest in the stock market. Todd decides he would rather invest his money in a stock that has a low risk and a low rate of return than in one that has a high risk and a high rate of return. When it comes to investment decisions, Todd is risk-averse. Tabatha, however, is very wealthy, and she is willing to take high risks for high rewards. Tabatha is risk-seeking. Tony takes an approach that falls somewhere in between Todd and Tabatha. Tony makes investment decisions based solely on maximizing expected financial gains over losses: Tony is risk-neutral. As this simple example illustrates, there are different approaches that one can take to making precautionary decisions relating to investments.

Now suppose circumstances change. Suppose that Todd inherits a great deal of money and decides to take greater risks on the stock market, so he changes his decision-making strategy from risk-averse to risk-neutral. Suppose, also, that Tabatha decides to become more precautionary after her wealth declines, and she decides to change her strategy from risk-seeking to risk-averse.
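The three risk profiles can be illustrated with a small expected-utility calculation over a hypothetical safe stock and risky stock. The payoffs and probabilities below are invented for this sketch, and representing risk-averse, risk-neutral, and risk-seeking attitudes with concave, linear, and convex utility functions is a standard decision-theoretic device rather than anything specified in the text.

```python
import math

# Hypothetical investments: the risky stock has the higher expected
# monetary value (illustrative numbers only).
safe = [(1.0, 1050)]               # certain: $1,000 grows to $1,050
risky = [(0.5, 2000), (0.5, 200)]  # 50/50: double the money or lose most of it

def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in lottery)

profiles = {
    "Todd (risk-averse)": math.log,            # concave utility
    "Tony (risk-neutral)": lambda x: x,        # linear: maximize expected value
    "Tabatha (risk-seeking)": lambda x: x**2,  # convex utility
}

for name, u in profiles.items():
    pick = "risky" if expected_utility(risky, u) > expected_utility(safe, u) else "safe"
    print(f"{name} prefers the {pick} stock")
# Todd prefers the safe stock; Tony and Tabatha prefer the risky one.
```

On this device, Todd's change of strategy after his inheritance amounts to swapping his utility function, which is one way of making precise the idea that contextual factors can change one's decision rule.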
As this example illustrates, contextual factors (such as one's degree of wealth) can affect the decision to change one's approach to precautionary reasoning. To show how interpersonal and social relationships can impact these decisions, suppose that Jake is a broker who is providing investment advice to Todd, Tabatha, and Tony. Since he is making precautionary decisions for other people, he has an ethical obligation to be mindful of their circumstances and investment strategies. Jake should know, for example, his clients' risk profiles before recommending different stocks (Chen 2019). The investor's risk profile would be part of the context of Jake's decision. Jake would also need to be mindful of any changes to a client's circumstances or risk profile; for example, if a client gains or loses a lot of money or becomes more risk-averse or more risk-seeking. Interpersonal and social relationships are also important when people make group decisions concerning investments. Suppose that Todd, Tabatha, Tony, Rita, and some other investors decide to pool their resources and form an investment club. Members of the club each contribute $10,000 to invest in stocks, and the group decides which stocks to buy. Members of the club could express their preferences by voting. Although different members of the club may have different risk profiles, they would agree, as part of their membership, to accept the group's decision concerning investment choices. This investment example is similar to the mushroom example discussed above, except people are taking risks with their money instead of with their health and lives.

For a final example of how one might change an approach to investing, suppose that Rita is a wealthy woman who has $100,000 she would like to invest in a local business in the community. She consults with her broker, Jake, who reviews and analyzes different investment options. Rita is risk-neutral and wants to earn the highest expected rate of return on her investments. Jake advises her that she is likely to make the highest rate of return by investing in Business A. However, suppose that after Rita receives Jake's advice, she learns that Business A engages in business practices that are unethical even if they are legal. Suppose, for example, that Business A buys goods from other companies that exploit workers in developing nations. Although Rita has never given much thought to the ethical implications of her investment choices, she decides to revise her decision-making approach and asks Jake to recommend other, ethical businesses to invest in. She decides to change her approach to investment decisions from "maximize expected returns on investments" to "maximize expected returns on ethical investments." This choice is entirely reasonable, given her change of values.

11 By "procedure" I mean a method of making a decision. For example, voting is a procedure for making group decisions.

1.3 The Plan for This Book

I will elaborate on the foregoing seven foundational points in Chapters 2 through 5 and apply them to debates about managing environmental and public health risks in Chapters 6 through 9. My plan for the book is as follows. In Chapter 2, I provide an overview of some of the key concepts, principles, and problems of formal decision theory. I will argue that while decision theory provides us with some valuable insights into rational decision-making and has useful applications for personal and social choices, it does not show us how to make reasonable decisions, because its rules and strategies are morally neutral, and for a decision to be reasonable it must take moral and social values into account. To make reasonable precautionary decisions, we need to examine theories that tell us which outcomes we ought to pursue, and how we ought to go about pursuing them. In Chapter 3, I will describe and critique several approaches to precautionary reasoning derived from prominent moral theories, including utilitarianism, Kantianism, virtue ethics, natural law and natural rights theories, environmental ethics, feminist ethics, and John Rawls' egalitarian view. I will argue that while these theories provide us with a rich array of values we can take into consideration when engaging in precautionary reasoning, they all face some substantial objections that limit their usefulness as an overall approach to decision-making involving risks, benefits, and precautions. If we do not accept a single, over-arching moral theory that resolves all value conflicts, we must deal with an assortment of incommensurable values that we must consider, weigh, and prioritize when making choices concerning risks, benefits, and precautions. To make reasonable, precautionary decisions we must come to terms with moral pluralism and uncertainty in a way that respects and appreciates competing values (Resnik 2018). In Chapter 4, I will explore the PP in greater depth and respond to its critics. I will propose a clear and coherent version of the PP and discuss the relationship between the PP and decision theory and moral theory. I will also show how the PP applies to individual and group decisions and respond to objections to my view. I will argue that while the PP should not be adopted as an all-encompassing rule for decision-making, it can play an important role in specific public policy contexts. In Chapter 5, I will use the key points made in the previous four chapters to expand upon my approach to precautionary reasoning. I will argue that while there are many different rules or procedures we can use in precautionary reasoning, including the PP as well as those based on decision theory or moral theory, which one we should use depends largely on contextual factors, such as our knowledge, values, and social circumstances. Moreover, we should consider changing rules or procedures when these factors (or conditions) change. I will also consider different types of group decision-making, including democracy and decision-making by experts. I will argue that the case for using the PP is most compelling when we face scientific (or epistemological) uncertainty concerning the possible outcomes related to different options, moral uncertainty concerning our values, or both.
In Chapters 6 through 9, I will apply my approach to precautionary reasoning to various issues in environmental and public health, including the regulation of chemicals (such as toxic substances), drugs, and electronic cigarettes; genetic modification of plants, animals, and human beings; scientific research that could be used to produce serious harm to the public, the environment, the economy, or national security (also known as "dual use" research); and public health emergencies, in particular the COVID-19 pandemic. In Chapter 10, I will summarize the key arguments and conclusions of the book.

References

Audi, R. 2001. The Architecture of Reason: The Structure and Substance of Rationality. New York, NY: Oxford University Press.
Beauchamp, T.L., and J.F. Childress. 2012. Principles of Biomedical Ethics, 7th ed. New York, NY: Oxford University Press.
Brombacher, M. 1999. The Precautionary Principle Threatens to Replace Science. Pollution Engineering (Summer): 32–34.
Chen, J. 2019. Risk Averse. Investopedia. Available at: https://www.investopedia.com/terms/r/riskaverse.asp. Accessed 18 Jan 2021.
Cranor, C. 2001. Learning From Law to Address Uncertainty in the Precautionary Principle. Science and Engineering Ethics 7: 313–326.
Earman, J. 1992. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.
Eren, S.H., Y. Demirel, S. Ugurlu, I. Korkmaz, C. Aktas, and F.M. Güven. 2010. Mushroom Poisoning: Retrospective Analysis of 294 Cases. Clinics (Sao Paulo) 65 (5): 491–496.


European Commission. 2000. Communication for the Commission on the Precautionary Principle. Available at: https://publications.europa.eu/en/publication-detail/-/publication/21676661a79f-4153-b984-aeb28f07c80a/language-en. Accessed 18 Jan 2021.
Feinberg, J. 1984. Harm to Others. New York, NY: Oxford University Press.
Feinberg, J. 1986. Harm to Self. New York, NY: Oxford University Press.
Fishkin, J.S. 2011. When the People Speak: Deliberative Democracy and Public Consultation. Oxford, UK: Oxford University Press.
Foster, K.F., P. Vecchia, and M.H. Repacholi. 2000. Science and the Precautionary Principle. Science 288 (5468): 979–981.
Gassman, A.L., C.P. Nguyen, and H.V. Joffe. 2017. FDA Regulation of Prescription Drugs. New England Journal of Medicine 376 (7): 674–682.
Gewirth, A. 2001. Rationality vs. Reasonableness. In Encyclopedia of Ethics, 2nd ed., ed. L.C. Becker and C.B. Becker, 1451–1454. New York, NY: Routledge.
Goklany, I.M. 2001. The Precautionary Principle: A Critical Appraisal of Environmental Risk Assessment. Washington, DC: Cato Institute.
Gutmann, A., and D. Thompson. 2004. Why Deliberative Democracy? Princeton, NJ: Princeton University Press.
Hansson, S. 2003. Ethical Criteria of Risk Acceptance. Erkenntnis 59 (3): 291–309.
Hansson, S. 2010. The Harmful Influence of Decision Theory on Ethics. Ethical Theory and Moral Practice 13 (5): 585–593.
Hartzell-Nichols, L. 2013. From "The" Precautionary Principle to Precautionary Principles. Ethics, Policy, and Environment 16: 308–320.
Hartzell-Nichols, L. 2017. A Climate of Risk: Precautionary Principles, Catastrophes, and Climate Change. New York, NY: Routledge.
Hawthorne, F. 2005. Inside the FDA: The Business and Politics Behind the Drugs We Take and the Food We Eat. New York, NY: Wiley.
Holm, S., and J. Harris. 1999. Precautionary Principle Stifles Discovery. Nature 400 (6743): 398.
Intergovernmental Panel on Climate Change. 2014. Climate Change 2014: Mitigation of Climate Change. Cambridge, UK: Cambridge University Press.
Kionka, E.J. 2015. Torts, 6th ed. St. Paul, MN: West Publishing.
Kriebel, D., J. Tickner, P. Epstein, J. Lemons, R. Levins, E.L. Loechler, M. Quinn, R. Rudel, T. Schettler, and M. Stoto. 2001. The Precautionary Principle in Environmental Science. Environmental Health Perspectives 109 (9): 871–876.
Moser, P.K. 1990. Rationality in Action: General Introduction. In Rationality in Action: Contemporary Approaches, ed. P.K. Moser, 1–9. Cambridge, UK: Cambridge University Press.
Munthe, C. 2011. The Price of Precaution and the Ethics of Risks. Dordrecht, Netherlands: Springer.
Munthe, C. 2017. Precaution and Ethics: Handling Risks, Uncertainties, and Knowledge Gaps in the Regulation of New Biotechnologies. Bern, Switzerland: Federal Office for Buildings and Publications and Logistics.
National Research Council. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press.
Peterson, M. 2006. The Precautionary Principle is Incoherent. Risk Analysis 26 (3): 595–601.
Peterson, M. 2007. Should the Precautionary Principle Guide Our Actions or Our Beliefs? Journal of Medical Ethics 33 (1): 5–10.
Piran, S., and S. Schulman. 2019. Treatment of Bleeding Complications in Patients on Anticoagulant Therapy. Blood 133: 425–435.
Resnik, D.B. 2003. Is the Precautionary Principle Unscientific? Studies in the History and Philosophy of Biology and the Biomedical Sciences 34 (3): 329–344.
Resnik, D.B. 2004. The Precautionary Principle and Medical Decision Making. Journal of Medicine and Philosophy 29: 281–299.
Resnik, D.B., D.R. MacDougall, and E.M. Smith. 2018. Ethical Dilemmas in Protecting Susceptible Subpopulations from Environmental Health Risks: Liberty, Utility, Fairness, and Accountability for Reasonableness. American Journal of Bioethics 18 (3): 29–41.


Resnik, D.B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
Resnik, M.D. 1985. Logic: Normative or Descriptive? The Ethics of Belief or a Branch of Psychology? Philosophy of Science 52 (2): 221–238.
Resnik, M.D. 1987. Choices: An Introduction to Decision Theory. Minneapolis, MN: University of Minnesota Press.
Sandin, P. 2004. Better Safe than Sorry: Applying Philosophical Methods to the Debate on Risk and the Precautionary Principle. Theses in Philosophy from the Royal Institute of Technology, Stockholm.
Sandin, P., M. Peterson, S.O. Hansson, C. Rudén, and A. Juthe. 2002. Five Charges Against the Precautionary Principle. Journal of Risk Research 5 (4): 287–299.
Shrader-Frechette, K.S. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley, CA: University of California Press.
Shrader-Frechette, K.S. 2007. Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health. New York, NY: Oxford University Press.
Smolderen, K.G., J.A. Spertus, B.K. Nallamothu, H.M. Krumholz, F. Tang, J.S. Ross, H.H. Ting, K.P. Alexander, S.S. Rathore, and P.S. Chan. 2010. Health Care Insurance, Financial Concerns in Accessing Care, and Delays to Hospital Presentation in Acute Myocardial Infarction. Journal of the American Medical Association 303 (14): 1392–1400.
Steel, D. 2015. Philosophy and the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Sunstein, C.R. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Thaler, R. 2016. Misbehaving: The Making of Behavioral Economics. New York, NY: WW Norton.
United Nations Conference on Economic Development. 1992. The Rio Declaration on Environment and Development. Available at: https://www.un.org/en/development/desa/population/migration/generalassembly/docs/globalcompact/A_CONF.151_26_Vol.I_Declaration.pdf. Accessed 20 Jan 2021.
Wadman, M. 2019. Sickle Cell Drug Raises Hopes and Doubts. Science 365 (6459): 1235.
WebMD. 2019. Heart Attack: What to Expect in the Emergency Room. Available at: https://www.webmd.com/heart-disease/what-to-expect-in-the-er#1. Accessed 20 Jan 2021.
Webster's Dictionary. 2019. Risks. Available at: https://www.yourdictionary.com/risk. Accessed 20 Jan 2021.
Whiteside, K. 2006. Precautionary Politics: Principle and Practice in Confronting Environmental Risk. Cambridge, MA: MIT Press.

Chapter 2

Precautionary Reasoning and Decision Theory

Decision theory is the study of how people make rational choices, where rationality is defined as taking effective means to one's goals and conforming to the rules of logic and axioms of probability theory (Resnik 1987; Peterson 2017). As noted in Chap. 1, this type of rationality, known as instrumental rationality, is different from reasonableness because reasonableness is founded on moral and social values. A decision could be rational in the decision theorist's sense but not reasonable. For example, it might be rational to jump off a tall building to commit suicide (if that is one's goal) but not reasonable to do this. Decision theory can be divided into normative and descriptive branches (Resnik 1987; Peterson 2017). Normative decision theory examines how people ought to make decisions. Normative decision theorists use analytical methods, such as linguistic and conceptual analysis, and mathematical and logical argumentation, to support their conclusions and principles. Descriptive decision theory investigates how people make decisions. Descriptive decision theorists, such as behavioral economists and psychologists, use empirical methods, such as surveys, interviews, and controlled experiments, to test their hypotheses and theories. A key question of behavioral economics is the extent to which people conform to the rules of instrumental rationality when making economic decisions (Ariely 2010; Kahneman 2011; Thaler 2016). Since my main concern in this book is to consider how individuals and groups ought to make precautionary decisions, I will focus on normative decision theory in this chapter, because normative decision theory includes rules that can guide actions and policies. However, since insights from descriptive decision theory are often relevant to normative arguments, I will periodically discuss or refer to research in behavioral economics, psychology, and other empirical sciences.
Descriptive decision theory can tell us, for example, that some normative principles may be difficult or impossible to apply in the real world because people are not likely to follow them. Descriptive decision theory can serve as a reality check for normative ideals. For example, as mentioned in Chap. 1, many people are risk-averse, but risk-aversion violates the rules of instrumental rationality (Kahneman 2011; Thaler 2016). Instrumental rationality tells us that we should prefer a 50% chance of winning $50 to a 90% chance of winning $20, because our expected payoff for the first option is $25 (0.5 × $50) and for the second it is $18 (0.9 × $20), but many people would prefer the second option. A similar phenomenon, known as loss aversion, refers to the tendency to gain more psychological satisfaction from avoiding losses than from making equivalent gains. For example, most people would gain more satisfaction from not losing $50 than from gaining $50. Risk-aversion and loss-aversion are aspects of human psychology and behavior that must be accounted for when developing a coherent and robust approach to precautionary decision-making.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_2
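As a quick check, the expected payoffs in this example can be computed with a few lines of code (a minimal sketch; the variable names are illustrative):

```python
# Expected payoff of a lottery: the sum of probability × outcome
# over all possible outcomes.
def expected_payoff(lottery):
    return sum(p * x for p, x in lottery)

# Option 1: a 50% chance of winning $50, otherwise nothing.
option1 = [(0.5, 50), (0.5, 0)]
# Option 2: a 90% chance of winning $20, otherwise nothing.
option2 = [(0.9, 20), (0.1, 0)]

print(expected_payoff(option1))  # 25.0
print(expected_payoff(option2))  # 18.0
```

Instrumental rationality favors option 1 (25 > 18), even though many people would choose option 2.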

2.1 Overview of Decision Theory

In this chapter I will give an overview of some of the key concepts and principles of decision theory. This overview will be informal insofar as I will not present the formal, mathematical proofs that decision theorists use to support their ideas. The proofs are, without doubt, essential for justifying the principles of decision theory, but one can understand and appreciate the main insights that decision theory offers without examining the mathematical proofs in detail.1 Decision theory can be divided into three parts: individual decision theory, game theory, and social choice theory. Individual decision theory examines decisions that individuals make; game theory focuses on the choices that individuals make in competitive or cooperative games2; and social choice theory considers cooperative decisions made by groups of individuals (Resnik 1987; Peterson 2017). In this book, I will focus on individual decision theory and social choice theory, since I believe that these two parts of decision theory offer the most useful guidance for public policy decision-making related to the management of risks. Individual decision theory is useful because it includes rules for decision-making that can be applied to a variety of contexts, including choices made by individuals and groups. A group, such as a business or government agency, could decide to follow rules that apply to individual decision-making. Social choice theory is useful because it examines problems related to group decision-making.

1 See Resnik (1987), Peterson (2017), and Bradley (2017) for a review of some of the formal proofs of decision theory.

2 The most famous of these competitive games is known as the Prisoner's Dilemma. In this game, two suspects are caught by the police and each must decide whether to confess to the crime and implicate their partner or remain silent. The police interrogate them in separate rooms. If one partner confesses and the other does not, the confessor will receive 0 years in prison and the other partner will receive 10. If neither one confesses, they both get 1 year in prison. If both confess, they both get 6 years. The best outcome for both prisoners is if neither one confesses, but to achieve this outcome they must cooperate. Decision theorists study the strategies and rules that one could use in games like these. See Von Neumann and Morgenstern (1944), Resnik (1987).
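The payoffs in this version of the game can be tabulated to confirm the dilemma: whatever one's partner does, confessing yields a shorter sentence, yet mutual silence minimizes the combined sentence. A minimal sketch (the dictionary layout is illustrative):

```python
# Years in prison for (A, B), given each suspect's choice; the
# payoffs follow the version of the game described in footnote 2.
years = {
    ("silent", "silent"):   (1, 1),
    ("silent", "confess"):  (10, 0),
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (6, 6),
}

# Whatever B does, A receives fewer years by confessing...
for b in ("silent", "confess"):
    assert years[("confess", b)][0] < years[("silent", b)][0]

# ...yet mutual silence minimizes the combined sentence.
total = {choices: a + b for choices, (a, b) in years.items()}
assert min(total, key=total.get) == ("silent", "silent")
print("confessing dominates; mutual silence is jointly best")
```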

Table 2.1 Taking an umbrella to work

                         It rains           It doesn't rain
Take an umbrella         I don't get wet    I don't get wet
Don't take an umbrella   I get wet          I don't get wet

Decisions consist of options (or alternatives) and possible outcomes (Resnik 1987). Outcomes are events that may or may not happen, depending on states of the world (or nature) and the choices we make. For example, suppose I am considering whether to take an umbrella to work. The two options are: take an umbrella and don't take an umbrella; the states of nature are: it rains and it doesn't rain; and the outcomes are: I get wet and I don't get wet. We can put these together in a decision matrix (Table 2.1). The value (or disvalue) that one places on an outcome is its utility. There are two different ways of assigning utilities to outcomes, one that uses an ordinal scale and one that uses a cardinal scale (Resnik 1987). An ordinal scale represents the ranking of outcomes: the most preferred outcomes have the highest ranking; the least preferred, the lowest. For example, if one prefers chocolate ice cream to vanilla and vanilla to strawberry, one could assign them the numbers 3, 2, and 1, respectively, to represent this ranking. In decision theory, preferences are assumed to be well-ordered and rational, which means that they conform to some logical and mathematical constraints.3 A cardinal scale assigns numerical quantities to outcomes. For example, if I am willing to pay $1 for a scoop of chocolate, $0.75 for vanilla, and $0.60 for strawberry, I could assign them dollar values (or utilities) of 1, 0.75, and 0.6. Both ways of evaluating outcomes may be used in decision theory, depending on our ability to evaluate outcomes. For example, one might use an ordinal scale for evaluating ice cream flavors if it is not possible to assign numeric values to these preferences, but one might use a cardinal scale to evaluate outcomes related to financial decisions because one can assign monetary values to these outcomes.

Decision theorists distinguish between three types of decisions: decisions under certainty, decisions under ignorance, and decisions under risk (Resnik 1987). Risk and ignorance are different types of epistemological uncertainty.4 A decision under certainty is one in which one knows the outcomes that will occur with absolute certainty. A decision under ignorance is one in which one does not know at all whether the outcomes will occur. A decision under risk is one in which one knows the probabilities that different outcomes will occur. For example, if I know with absolute certainty that it will rain, then deciding whether to take an umbrella would be a decision under certainty. If I have no idea whether it will rain, this would be a decision under ignorance, and if I know that there is a 50% chance that it will rain, this would be a decision under risk. While decision-making under certainty is a theoretical possibility, it rarely happens in the real world because we seldom face situations in which outcomes are known with certainty. Most of the issues we will examine in this book, such as regulation of drugs, environmental policy, and genetically modified organisms, involve outcomes that are uncertain. Accordingly, I will not say much more about decision-making under certainty.

3 Some of these constraints include: if a person prefers x to y, then it is not the case that they prefer y to x; if they prefer x to y, then they are not indifferent between x and y; if they are indifferent between x and y, they do not prefer x to y and they do not prefer y to x; if they prefer x to y and y to z, then they prefer x to z (also known as transitivity); and if they are indifferent between x and y and indifferent between y and z, then they are indifferent between x and z. See Resnik (1987).

4 See the discussion of epistemological vs. moral uncertainty below.
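The rationality constraints on preferences listed in footnote 3 can be checked mechanically. A minimal sketch for the ice cream ranking above (the set-of-pairs representation is illustrative):

```python
from itertools import permutations

# A strict preference relation over the ice cream flavors, given as
# (preferred, dispreferred) pairs: chocolate over vanilla, vanilla
# over strawberry, and chocolate over strawberry.
prefers = {("chocolate", "vanilla"), ("vanilla", "strawberry"),
           ("chocolate", "strawberry")}
flavors = {"chocolate", "vanilla", "strawberry"}

# Asymmetry: if x is preferred to y, then y is not preferred to x.
asymmetric = all((y, x) not in prefers for (x, y) in prefers)

# Transitivity: if x is preferred to y and y to z, then x to z.
transitive = all((x, z) in prefers
                 for x, y, z in permutations(flavors, 3)
                 if (x, y) in prefers and (y, z) in prefers)

print(asymmetric, transitive)  # True True
```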

2.2 Decision-Making Under Ignorance

Decision-making under ignorance is a part of decision theory that examines strategies (or rules) for making choices when the probabilities for different outcomes are not known. Suppose, for example, that we do not know if it will rain or not when we are deciding whether to take an umbrella with us to work. If we consider getting wet to be the worst possible outcome, we should take an umbrella, because that is the only option that avoids the worst outcome. This approach to decision-making under ignorance is known as the maximin rule: maximize your worst possible outcome (Resnik 1987; Peterson 2017). For an illustration of maximin that uses a cardinal scale to rank outcomes, suppose you are deciding whether to invest in stocks, bonds, or mutual funds under different possible economic conditions (i.e. states of the world), e.g. a growing economy, a stable economy, or a declining economy. Suppose, also, that these investments perform differently under different economic conditions: stocks do the best with a growing economy, but the worst with a declining economy; bonds do the worst with a growing economy, but the best with a declining one; and mutual funds do best in a stable economy. Using numbers to represent monetary payoffs, we can represent this problem with the following decision matrix (Table 2.2). According to the maximin rule, one should invest in bonds because this choice has the highest worst outcome (5). Maximin makes a great deal of sense in situations where one faces serious or even catastrophic risks and uncertain outcomes. For example, if someone asks you to play a game of Russian Roulette for $10,000 and you don't know whether there are any bullets in the pistol, you should not do it, because it is not worth risking one's life for $10,000. It would be unreasonable to do so.

Table 2.2 Decision matrix for investments: maximin

               Growing economy   Stable economy   Declining economy
Stocks         70                30               −13
Bonds          40                25               5
Mutual funds   53                45               −5
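The maximin calculation on Table 2.2 can be expressed in a few lines (a minimal sketch; the option and state labels are illustrative):

```python
# Payoffs from Table 2.2, keyed by option and state of the economy.
payoffs = {
    "stocks":       {"growing": 70, "stable": 30, "declining": -13},
    "bonds":        {"growing": 40, "stable": 25, "declining": 5},
    "mutual funds": {"growing": 53, "stable": 45, "declining": -5},
}

def maximin(payoffs):
    # Choose the option whose worst-case payoff is highest.
    return max(payoffs, key=lambda opt: min(payoffs[opt].values()))

print(maximin(payoffs))  # bonds
```

The worst cases are −13, 5, and −5, so bonds (5) is the maximin choice, as the text states.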


An objection to maximin is that it is a highly conservative, risk-averse rule that could lead one to forego important benefits or opportunities. In situations where the adverse outcomes are not as serious, it may be worth taking risks to have a chance at obtaining benefits. For example, many people would probably invest in stocks (Table 2.2) to have a chance at gaining more money. For those who are willing to take risks to obtain benefits, it may be reasonable to follow the maximax rule. According to maximax, one should choose the option with the greatest maximum outcome or opportunity (Resnik 1987). Maximax would recommend investing in stocks (Table 2.2) because this option gives one the greatest possible return on investment (70). Maximax would seem to be a reasonable approach to precautionary reasoning when the risks one faces are not serious and the benefits are significant. Investors who can afford to lose money on stocks might decide to follow maximax. However, following maximax all the time could lead one to make rash, ill-advised decisions. For example, maximax would recommend that you play Russian Roulette for $10,000 to have a chance at obtaining the best outcome, i.e. you win the money and don't die. Most people would agree that it would be unreasonable to follow maximax in this situation. Other decision rules strike a balance between avoiding risks (maximin) and seeking benefits or opportunities (maximax). According to the minimax regret rule, one should choose the option that has the lowest maximum regret or lost opportunity (Resnik 1987; Peterson 2017). To calculate the regret for an option, one subtracts the possible outcome for the option from the highest possible outcome among the different options. In the investing example (Table 2.2), in a growing economy, stocks perform best (70), so the regret for choosing stocks is 70 − 70 = 0, bonds have a regret of 70 − 40 = 30, and mutual funds have a regret of 70 − 53 = 17.
Table 2.3 includes the regrets for the different investing options. According to the minimax regret rule, one should invest in mutual funds because this option has the lowest maximum regret (17). While the minimax regret rule also makes sense for some types of decisions, such as investing in stocks, it may not be a very useful approach to precautionary reasoning when there are many different possible outcomes, because it focuses on outcomes with the lowest regret and ignores the others. In the investment example, each option has only three outcomes based on three states of the world (growing economy, stable economy, declining economy).

Table 2.3 Regret matrix for investments

               Growing economy   Stable economy   Declining economy
Stocks         70 − 70 = 0       45 − 30 = 15     5 − (−13) = 18
Bonds          70 − 40 = 30      45 − 25 = 20     5 − 5 = 0
Mutual funds   70 − 53 = 17      45 − 45 = 0      5 − (−5) = 10

Suppose that we expanded the matrix a bit and included outcomes based on a strongly growing economy and a strongly declining economy, with stocks and bonds performing best, respectively. See Table 2.4. In this example, mutual funds still have the lowest maximum regret (17), but stocks have the lowest total regret. The total regret for stocks is: 0 + 0 + 15 + 18 + 21 = 54; for bonds it is: 45 + 30 + 20 + 0 + 0 = 95; and for mutual funds it is: 17 + 17 + 0 + 10 + 17 = 61. So, if one looks beyond the highest maximum regret, it appears that stocks are the best choice. The potential weakness of the minimax regret rule becomes even more apparent as one moves beyond simplified examples and considers real world decisions that may have dozens of outcomes for each option. This critique of the minimax regret rule also applies to the maximin rule and the maximax rule, because they also focus on the worst or best outcomes, but not both.

Table 2.4 Expanded regret matrix for investments

               Strongly growing economy   Growing economy   Stable economy   Declining economy   Strongly declining economy
Stocks         80 − 80 = 0                70 − 70 = 0       45 − 30 = 15     5 − (−13) = 18      7 − (−14) = 21
Bonds          80 − 35 = 45               70 − 40 = 30      45 − 25 = 20     5 − 5 = 0           7 − 7 = 0
Mutual funds   80 − 63 = 17               70 − 53 = 17      45 − 45 = 0      5 − (−5) = 10       7 − (−10) = 17

Perhaps what is needed is a decision rule that looks at the worst and the best. The optimism-pessimism rule does just this. As we saw, maximin is a pessimistic rule, because it focuses on avoiding the worst outcomes, whereas maximax is an optimistic rule because it focuses on the best outcomes. According to the optimism-pessimism rule, one should consider the best and worst possible outcome for each option and adjust these by how optimistic or pessimistic one is about the world. We adjust the outcomes by multiplying them by a number between 0 and 1 (our optimism index). For consistency, we apply the same adjustment to each option. We multiply the highest outcome for each option by the optimism index (OI) and the lowest by 1 − OI. We then sum the adjusted highest and lowest outcomes for each option and choose the option with the highest total. In the investment example (Table 2.2), let's suppose our optimism index is 0.7. For stocks, we would multiply the highest outcome, 70, by 0.7, and the lowest, −13, by 0.3. So, the total would be: 49 − 3.9 = 45.1 (Table 2.5).

Table 2.5 Optimism-pessimism matrix for investments (optimistic)

               Growing economy   Declining economy   Total
Stocks         70 × 0.7 = 49     −13 × 0.3 = −3.9    45.1
Bonds          40 × 0.7 = 28     5 × 0.3 = 1.5       29.5
Mutual funds   53 × 0.7 = 37.1   −5 × 0.3 = −1.5     35.6
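The regret matrix in Table 2.3 and the minimax regret choice can be reproduced as follows (a minimal sketch using the Table 2.2 payoffs; the labels are illustrative):

```python
# Payoffs from Table 2.2; each row lists the payoffs for the states
# (growing, stable, declining economy), in that order.
payoffs = {
    "stocks":       [70, 30, -13],
    "bonds":        [40, 25, 5],
    "mutual funds": [53, 45, -5],
}

# Regret in a state: the best payoff attainable in that state minus
# the payoff the chosen option actually yields there.
n = 3  # number of states
best = [max(row[i] for row in payoffs.values()) for i in range(n)]
regret = {opt: [best[i] - row[i] for i in range(n)]
          for opt, row in payoffs.items()}

# Minimax regret: choose the option with the smallest maximum regret.
choice = min(regret, key=lambda opt: max(regret[opt]))
print(choice)  # mutual funds
```

The maximum regrets are 18 (stocks), 30 (bonds), and 17 (mutual funds), so mutual funds is the minimax regret choice, as in Table 2.3.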

Table 2.6 Optimism-pessimism matrix for investments (pessimistic)

               Growing economy   Declining economy   Total
Stocks         70 × 0.3 = 21     −13 × 0.7 = −9.1    11.9
Bonds          40 × 0.3 = 12     5 × 0.7 = 3.5       15.5
Mutual funds   53 × 0.3 = 15.9   −5 × 0.7 = −3.5     12.4

So, if one is somewhat optimistic (OI = 0.7), one should pick stocks. However, we obtain a very different result if we are somewhat pessimistic5 (OI = 0.3, Table 2.6). The optimism-pessimism rule could be a reasonable approach to use in precautionary decision-making when one is concerned about seeking the best outcome and avoiding the worst and one has some reason to be optimistic or pessimistic about the outcomes. For example, the optimism-pessimism rule might be a reasonable approach to betting on horse races so that one could seek high payoffs without losing too much money. However, as one can see from Tables 2.5 and 2.6, one problem with the optimism-pessimism rule is that the outcomes can vary considerably, depending on one's optimism index, which is subjective. Since different people considering the same options could make very different choices based on their degree of optimism, the optimism-pessimism rule may be difficult to apply to group decision-making (Resnik 1987). At the very least, groups would need to resolve disagreements concerning their degree of optimism. It could also pose a problem for individual decision-making if one's degree of optimism swings back and forth, such that one's decision-making becomes wildly inconsistent. A second problem with the rule is that it ignores intermediate outcomes. In Tables 2.5 and 2.6, we did not consider outcomes that occur under a stable economy because these are not the highest or lowest. A third problem with the rule is justifying one's degree of optimism. Should this be based on evidence? A gut feeling? If one holds that one's degree of optimism should be based on evidence, then the optimism-pessimism rule begins to look like a rule for adjusting outcomes by their probability, which implies that the optimism-pessimism rule is not really about making decisions under ignorance but about making decisions under risk (Peterson 2017).
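The calculations in Tables 2.5 and 2.6 can be sketched as follows (the function name is illustrative; the payoffs are those of Table 2.2):

```python
# Payoffs from Table 2.2.
payoffs = {
    "stocks":       [70, 30, -13],
    "bonds":        [40, 25, 5],
    "mutual funds": [53, 45, -5],
}

def optimism_pessimism(payoffs, oi):
    # Weight each option's best outcome by the optimism index (OI)
    # and its worst outcome by 1 - OI, then pick the highest total.
    totals = {opt: oi * max(row) + (1 - oi) * min(row)
              for opt, row in payoffs.items()}
    return max(totals, key=totals.get), totals

choice_opt, totals_opt = optimism_pessimism(payoffs, 0.7)
choice_pes, totals_pes = optimism_pessimism(payoffs, 0.3)
print(choice_opt, choice_pes)  # stocks bonds
```

With OI = 0.7 the rule recommends stocks (45.1), and with OI = 0.3 it recommends bonds (15.5), matching the two tables.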
A rule known as the principle of indifference (or the principle of insufficient reason) also begins to cross the line from decision-making under ignorance to decision-making under risk. According to this rule, one should assume that all outcomes are equally probable, absent evidence to the contrary. One then multiplies all outcomes by their probabilities to obtain expected outcomes. Probabilities are calculated by dividing 1 by the total number of possible outcomes for each option. For example, in the investment matrix (Table 2.2), there are three outcomes for each option, so the probability of each outcome is 0.33. The sum of these expected outcomes for each option is the total expected outcome. The principle of indifference instructs us

5 If one is completely pessimistic (OI = 0), the optimism-pessimism rule collapses into maximin; conversely, if one is completely optimistic (OI = 1), it collapses into maximax.


2 Precautionary Reasoning and Decision Theory

Table 2.7 Decision matrix for investments: principle of indifference

                 Growing economy       Stable economy       Declining economy       Total
Stocks           70 × 0.33 = 23.1      30 × 0.33 = 10       −13 × 0.33 = −4.29      28.81
Bonds            40 × 0.33 = 13.2      25 × 0.33 = 8.25     5 × 0.33 = 1.65         23.1
Mutual funds     53 × 0.33 = 17.49     45 × 0.33 = 15       −5 × 0.33 = −1.65       30.84

to choose the option with the highest total expected outcome. Table 2.7 applies this idea to the investment example. If one follows the principle of indifference (Table 2.7), one should invest in mutual funds, because this option has the highest total expected outcome (30.84).

Like the other rules, the principle of indifference may be reasonable to use under some circumstances. For example, if one has no reason to believe that any outcome is more or less probable than any other, then perhaps one should assume equal probability and make decisions based on this assumption. In a randomized, controlled trial (RCT) that compares two medical treatments for safety and effectiveness, researchers often assume, at the outset, that neither treatment is more likely to be safer or more effective than the other. This assumption is known as clinical equipoise. Since it is not known which treatment is better, doctors can randomly assign patients to receive either treatment without violating standards of medical ethics. Indeed, the main rationale for conducting the study is to determine which treatment is better (Resnik 2018). The main problem with using the principle of indifference in decision-making is that we often do have reasons to believe, based on personal experience, a hunch, or scientific data, that some outcomes are more likely than others (Resnik 1987). In these cases, perhaps we should follow rules for decision-making under risk instead of rules for decision-making under ignorance.

Before concluding my discussion of decision-making under ignorance, I would like to discuss two practical problems that arise when one attempts to apply rules for decision-making under ignorance to real-world decisions. As I shall argue in Chaps. 4 and 5, these problems give us reasons to use the precautionary principle as a form of decision-making.
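The principle-of-indifference calculation can be sketched in a few lines of Python, here using exact probabilities of 1/3 rather than the rounded 0.33 of Table 2.7 (so the totals differ slightly, but the ranking is the same); the names are my own:

```python
# Principle of indifference: weight every outcome equally, then pick the
# option with the highest expected outcome. Outcome values are those of
# the investment matrix (growing, stable, declining economy).
OUTCOMES = {
    "stocks": [70, 30, -13],
    "bonds": [40, 25, 5],
    "mutual funds": [53, 45, -5],
}

def indifference(outcomes):
    """Return (recommended option, expected outcomes) under equal probabilities."""
    expected = {name: sum(vals) / len(vals) for name, vals in outcomes.items()}
    return max(expected, key=expected.get), expected

# Mutual funds come out highest (31.0 with exact thirds, vs. 30.84 in Table 2.7).
```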

2.3 The Problem of Value Uncertainty

As we have seen, rules for decision-making under ignorance assume that one can assign utilities to different outcomes. Utilities could be understood as a ranking of outcomes (i.e. ordinal utilities) or an assignment of numerical values to outcomes (i.e. cardinal utilities). To make these assignments, one must have some prior understanding of what kind of outcome counts as preferable, valuable, or worthwhile. In the Russian roulette example, we assume that life is preferable to death. In the investing


examples, we assume that making more money is preferable to making less money (or losing money). In many real-world situations, however, we may need to assign utilities to different outcomes when we face moral (or value6) uncertainty. Moral uncertainty is different from epistemological (or scientific) uncertainty. We face epistemological uncertainty when we are uncertain (or unsure) about what we ought to believe, due to a lack of evidence, proof, or justification. We face moral uncertainty when we are uncertain about what we ought to do, due to a lack of knowledge about the facts pertaining to our situation, or a lack of clarity or agreement about values (Bykvist 2017; Tarsney 2018; Koplin and Wilkinson 2019). Epistemological uncertainty is uncertainty pertaining to knowledge or belief; moral uncertainty is uncertainty pertaining to action or conduct.7

For an example of moral uncertainty, recall the unethical investment scenario, discussed in Chap. 1, in which Rita could make more money by investing in an unethical company or less by investing in an ethical one. In assigning utilities to her decision outcomes, she must choose between making the most money and investing ethically. This choice could be difficult for her, due to her moral uncertainty about whether money is more important to her than ethics. Similarly, in assigning utilities to outcomes related to restricting the use of an industrial chemical, a government agency must choose between protecting public health or the environment and promoting economic growth. Moral uncertainty would arise in ranking these competing values. In Chaps. 4 and 5, I will argue that the precautionary principle can be a useful rule to deploy when we face moral uncertainty, because it does not assume that we have a ranking of utilities that reflects established values. As we shall see below, moral uncertainty is also a problem for approaches to decision-making under risk.

2.4 The Problem of Implausible Outcomes

Another practical problem arises because the decision matrices used to apply the rules involve a limited (and usually small) set of possible outcomes. But one can often imagine many different possible, extremely good or bad outcomes that could be included in these matrices. These outcomes may not be in any way plausible, but they are possible nonetheless. Including these outcomes could radically change the options recommended by the rules, but the rules of decision theory do not tell us which outcomes to include or exclude. They only tell us how to make choices, given a set of options and a ranking of outcomes. However, we usually ignore these implausible outcomes when making real-world decisions. How we decide—or should decide—to eliminate these implausible outcomes is a problem we must face when we apply the rules for decision-making under ignorance to actual choices.

To see why we need to eliminate implausible outcomes, consider the maximin rule, which instructs us to avoid the worst outcome, no matter how implausible or unrealistic it may seem. For example, suppose you are considering whether to get married and the worst possible outcome you can imagine is that your spouse turns out to be a psychopath who kills you for your money. Following maximin, you should not get married. If we thought like this all the time, we would never seek employment, go to college, get married, or take any other chance where risks and rewards are high. We would be paralyzed with fear and doubt (Sunstein 2005). Or consider the maximax rule, which instructs us to seek the best outcomes, no matter how unrealistic or fanciful they may seem. Suppose that you are deciding whether to go on a honeymoon to New York City or Riverton, Wyoming. Suppose, also, that you have read a book about people finding gold in the Riverton area, so you decide that the best possible outcome related to this decision is that you could find a gold nugget worth $1 million in Wyoming. So, you decide to honeymoon in Wyoming instead of New York City. If we thought like this all the time, we would constantly seek opportunities and go on wild goose chases. We would succumb to wishful thinking and gullibility.

The problem of implausible outcomes goes beyond maximin and maximax, however, because other rules for decision-making under ignorance are also sensitive to extreme minimum or maximum outcomes included in the decision matrix. In the stock example, suppose we think it is possible that the mutual fund manager could embezzle money and mismanage the fund, such that its worst performance could be −30. This would change the maximum regret for this option to 25, so it would no longer be recommended under the minimax regret rule. Or suppose that for stocks we include the possibility that the company we invest in grows at an incredible rate, similar to Microsoft's growth during the 1990s and 2000s. If we include 700 as a possible outcome for stocks, this would skew total outcomes under the optimism-pessimism rule and the principle of indifference.

In real-world decision-making, we do not consider these implausible outcomes. We decide that some outcomes are not worth considering, because they are too far-fetched (Steele 2006; Munthe 2011). However, ignoring some possible alternatives starts to move us beyond decision-making under ignorance toward decision-making under risk. If we were completely ignorant, we would have no legitimate basis for ruling out some alternatives. We rule out alternatives based on our knowledge of the natural or social world. In some cases, we might conclude that wild, "sky is falling" scenarios have no factual basis or contradict well-established scientific laws or theories.8 For example, before scientists activated the Large Hadron Collider (LHC) in Geneva, Switzerland, the press reported that the LHC could form microscopic black holes that could grow in size and eventually swallow the whole Earth. However, these reports were not based on a sound understanding of the science behind the LHC.

6 I use the term 'value' to indicate that uncertainty may encompass not only moral values but also other values, such as economic, aesthetic, religious, or political values.
7 It is worth mentioning that the uncertainty I am referring to here is not merely psychological but has to do with a lack of evidence or justification. Psychological uncertainty is a subjective feeling or state of mind (Chisholm 1977). Epistemological and moral uncertainty have to do with insufficient justification for belief or action (respectively).
8 Readers

may recall the European folk tale of Henny Penny, also known as Chicken Little, who warned other animals and the king that the sky is falling after an acorn landed on her head.


First, if the LHC were to create a black hole, it would decay in a fraction of a second; and second, in the brief time that a black hole exists, it would be moving at the speed of light, so it would quickly leave the Earth for the vacuum of space before it could contact any matter (Phillips 2008). In other cases, we might decide that extreme scenarios (such as our spouse turning out to be a psychopathic killer, or finding a huge gold nugget in Wyoming) have a factual basis but are highly improbable, based on the evidence we have. In both types of cases, it is our knowledge of the world that allows us to properly frame the choices we make when we lack knowledge about the probabilities of different outcomes. In these situations, we are still making decisions under ignorance, but our ignorance is not all-encompassing. I will return to this important point when I examine the PP in greater depth in Chap. 4 and when I expand upon my approach to precautionary reasoning in Chap. 5.

2.5 Transitioning from Decision-Making Under Ignorance to Decision-Making Under Risk

Before discussing rules for decision-making under risk, it will be useful to take a step back and ask ourselves a prior question: when should we adopt rules for decision-making under risk? Decision theorists might reply that I have asked an incoherent question. If one understands the definition of decision-making under risk properly, then it follows, tautologically, that one should adopt this approach when one knows the probabilities related to outcomes. Absent these probabilities, one should use rules for decision-making under ignorance. This answer points to my deeper question: when do we have enough knowledge about probabilities to justify the transition from decision-making under ignorance to decision-making under risk? This is a difficult question to answer and an important one for precautionary reasoning in general and the PP in particular.

To address this question, let's assume that it is reasonable to follow rules for decision-making under ignorance when we do not know anything about the probabilities of different outcomes, except for those outcomes we eliminate as implausible. Recalling the mushroom example from Chap. 1, if one does not know whether a mushroom is likely to be poisonous and one is not facing starvation, one should not eat it. It makes sense to avoid the worst possible outcome, i.e. death. However, if one has enough knowledge of mycology to know that the probability that the mushroom is safe to eat is 99%, it may make sense to eat it. What I am asking readers to consider is: how do we—or should we—make this transition from rules for decision-making under ignorance to rules for decision-making under risk? How much (or what type of) knowledge or evidence do we need to justify changing our decision-making approach?


2.6 Interpretations of Probability

The answer to the above question depends, in part, on what we mean by 'probability.' There are several different interpretations of probability (Peterson 2017; Hájek 2019). According to the classical (mathematical) view, the probability of an event is a fraction calculated by dividing the number of ways the event (or outcome) can occur by the total number of possible outcomes. For example, to determine the probability of flipping a coin and getting heads, we divide the number of ways of getting heads (1) by the total number of possible outcomes (heads or tails, 2) to obtain the fraction 1/2, or probability p = 0.5. The probability of rolling two dice and obtaining a 7 is 6/36, or p = 0.167.

The classical view of probability has useful applications in mathematics and statistics, because probabilities are objective, measurable, and precise. However, since the view treats all possible outcomes as equally probable, it has limited applications for real-world problems (with the notable exception of card games). In the real world, events are not equally probable. For example, a coin might be slightly heavier on one side than the other, so that it comes up heads more often than tails. When government officials are deciding whether to approve a new drug, they want to know its success rate based on clinical trials in patients who have received it. Likewise, a doctor would want to know the success rate of a drug when prescribing it for a medical condition, and a judge would want to know the success rate of a drug treatment facility when deciding whether to offer a convicted drug offender treatment as an alternative to jail. In science, engineering, technology, medicine, law, public policy, and other applied contexts, we make decisions based on probabilities derived from observations and empirical evidence. The statistical (or frequentist) view of probability provides an objective, empirical basis for probability.
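The classical calculation for the two-dice example can be checked by simple enumeration; this short sketch is my own illustration:

```python
from itertools import product

# Classical view: probability = favorable outcomes / total possible outcomes.
# Count the ways two fair dice sum to 7 among the 36 equally likely rolls.
ways = sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == 7)
p_seven = ways / 36  # 6/36, approximately 0.167
```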
According to the statistical view, the probability of an event is a function of the observed frequency of the event (Reichenbach 1949; Hájek 2019). For example, suppose that we conduct a poll of randomly selected participants (i.e. a sample of the population) from a small city (30,000 people) and ask them whether they approve of the mayor's performance (a simple yes/no question). If we poll a random sample of 100 residents and 50 approve of the mayor, then we could say the probability that a resident of the city approves of the mayor is 0.5, with a margin of error (or confidence interval) of ±0.1. So, the probability would range from 0.4 to 0.6. If we increase our sample size, we can narrow our margin of error. For example, if we increase our sample size to 1000 and 500 people approve of the mayor, the probability would be 0.5, with a margin of error of ±0.03 (a range from 0.47 to 0.53).9

The statistical view of probability has useful applications in many different areas of decision-making, including science, engineering, medicine, business, economics, insurance, weather forecasting, sports, and public policy. A problem with the view is that we may need to make decisions when we do not have enough evidence to be sufficiently confident in statistical probabilities. For example, a regulatory agency might need to warn doctors and patients about the adverse effects of a drug based on only a handful of case reports. While this small sample might suggest that adverse effects from the drug can occur, it could be biased, and one might need many more cases to be more confident in this conclusion.10 Moreover, in some cases, it may not be possible to obtain data based on observed frequencies because the phenomena we are attempting to observe are very rare. For example, if an engineer is trying to determine the probability that a nuclear reactor's containment building can withstand a direct impact from a commercial airliner, he or she cannot use observed frequencies to calculate this probability, because no one has ever observed this event.

Some defenders of the statistical view have responded to critics by arguing that the probability of an event is its observed frequency in the long run, which can be defined as an indefinitely large set of observations (Reichenbach 1949). The problem with this response is that we often need to make decisions in applied contexts, such as engineering, medicine, or public policy, based on available probabilities in the short run, so this interpretation of probability has little practical value (Hájek 2019). As the famous economist John Maynard Keynes (1923: 80) once said, "In the long run, we are all dead." Perhaps a better response is to admit that statistical probabilities are, at best, approximations that have inherent uncertainties and potential biases.11 They can still be useful, nonetheless, provided that we are aware of their limitations and make decisions accordingly.

The propensity theory of probability is an empirical alternative to the statistical view that can be useful in estimating probabilities for low-frequency events, such as the radioactive decay of a single radium atom or the collapse of a bridge.

9 This is a very simple example of a statistical test for estimating a quantity of a single variable. Other statistical tests and methods (such as t-tests, regression, correlation, and analysis of variance) are much more complex than the one used for this simple example and may involve more variables (Moore et al. 2016).
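Returning to the polling example, the quoted margins of error (±0.1 for n = 100, ±0.03 for n = 1000) can be approximated with the usual 95% normal-approximation formula; this is a sketch under that assumption, not the only way to compute a confidence interval:

```python
import math

# 95% margin of error for a yes/no proportion, normal approximation:
# z * sqrt(p * (1 - p) / n), with z = 1.96 for 95% confidence.
def margin_of_error(p_hat, n, z=1.96):
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# n = 100 gives about 0.098 (roughly +/- 0.1);
# n = 1000 gives about 0.031 (roughly +/- 0.03).
```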
According to the propensity theory, a probability is a property of a physical, chemical, biological, economic, or social system that affects its tendency to produce an outcome over time (Popper 1959; Hacking 1965; Hájek 2019). For example, if a die is weighted more on the "1" side than on the others, we could estimate that it is more likely to turn up "6" when rolled than a die that is evenly weighted. This estimate would be based on the physical properties of the die. If we roll the die numerous times, we would expect it to yield results consistent with our analysis of its physical properties. An engineer could conduct a similar type of analysis to determine whether a nuclear reactor's containment building could withstand an impact from a commercial airliner. The analysis could take the form of a mathematical model that would include numerous variables, such as the materials used in the building, the shape of the building, the materials in the commercial airliner, the speed of the airliner, the angle of impact, and so on (Baker 2017).

A key problem with the propensity theory is that we often lack enough information to estimate the probabilities of different outcomes for complex systems, such as the Earth's tectonic plates, the Earth's climate, the human body, the stock market, or

10 Biases can occur with larger samples, but smaller samples are more likely to be biased (Moore et al. 2016).
11 Other biases may be due to assumptions made in applying statistical tests and methods, including the possibility of unknown confounding variables that impact the data (Moore et al. 2016).


the macroeconomy. Very often we must make assumptions about how those systems behave under certain conditions, but these assumptions could be biased. For example, a group of scientists known as the Intergovernmental Panel on Climate Change (IPCC) has developed mathematical models that can be used to estimate changes in global surface and ocean temperatures and the impacts of those changes on sea levels, the weather, and human and non-human species (Intergovernmental Panel on Climate Change 2013, 2014). However, these models rest on numerous assumptions about solar activity, volcanic activity, cloud cover, atmospheric aerosols, ocean algal growth, ocean currents, human population growth, and human energy usage and development. Small changes to these assumptions can lead to vastly different outputs from climate change models. The IPCC's models of climate change, for example, predict that global surface temperatures will rise between 0.3 °C and 4.8 °C by 2100 (Intergovernmental Panel on Climate Change 2013). This is a huge range, and the low and high ends of this range have substantially different implications for public policy. If we believe that temperatures will rise by only 0.3 °C, then it might be reasonable for us to do very little to minimize or mitigate climate change; if we believe temperatures will rise by 4.8 °C, then it would be reasonable for us to take extensive and substantive measures to deal with climate change.

The propensity interpretation of probability, like the other interpretations, has strengths and weaknesses. The strength of the view is that it provides a way of objectively estimating probabilities when we lack enough data to calculate statistical probabilities. If an event happens infrequently (or has never happened at all), then we may need to rely on propensity approaches to estimate the probability that the event will occur.
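The weighted-die example can be simulated to show how observed long-run frequencies track a propensity; the specific weighting below is my own assumption for illustration:

```python
import random

# A die whose weighting gives the "6" face twice the chance of any other
# face, i.e. a propensity p(6) = 2/7. The long-run observed frequency
# of sixes should track this propensity.
random.seed(0)  # fixed seed so the simulation is reproducible

weights = [1, 1, 1, 1, 1, 2]  # faces 1-6; "6" favored by assumption
rolls = random.choices(range(1, 7), weights=weights, k=100_000)
freq_six = rolls.count(6) / len(rolls)  # close to 2/7 (about 0.286)
```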
However, in making such estimates, we should be aware of their potential biases and limitations and make decisions accordingly.

According to the subjective interpretation of probability, probabilities are not objective features of the world but are degrees of belief (or one's confidence) that an outcome will occur (Ramsey 1926; Hájek 2019). Probabilities are educated guesses. To minimize the potential for biased, unfounded, or irrational estimates, proponents of the subjective interpretation place some constraints on beliefs: probabilities should conform to the axioms of probability theory and to Bayes' Theorem for updating probabilities in light of new evidence (Howson and Urbach 1989; Hájek 2019).12 Since Bayes' Theorem has a great deal of relevance for decision-making under risk, it will be useful to say a few words about it here.

12 Some of these axioms include: the probability p of an event must be no less than 0 and no greater than 1; p(A) + p(~A) = 1; p(A & B) = p(A) × p(B) when A and B are independent; p(A or B) = p(A) + p(B) when A and B are mutually exclusive; if p(A) = p(B) and p(B) = p(C), then p(A) = p(C); and if p(A) > p(B) and p(B) > p(C), then p(A) > p(C). See Skyrms (1986) for an overview of the axioms of probability theory.

Probability theory includes an important distinction between independent and dependent (or conditional) probabilities. Each time you roll a die, the outcomes are independent: rolling a 6 has no bearing on the probability of rolling it again. However, some probabilities depend on events that have happened in the world. For example, if you draw a card from a deck of cards, this affects the probability of drawing other cards afterwards. A conditional probability is the probability that an event will occur, given that another event has happened (Skyrms 1986). For example, if you have a full deck of cards, the probability is 1/52 (0.0192) that the first card you draw will be the ace of spades. However, if you draw another card (e.g. the jack of spades), the probability that the next card you draw will be the ace of spades increases to 1/51. If you continue drawing cards without getting the ace of spades, its probability will increase to 1/1 (1.0). Bayes' Theorem allows us to calculate conditional probabilities, given information we have about other probabilities. It can be stated as the following equation (Skyrms 1986):

p(A/B) = [p(B/A) × p(A)] / p(B)

In this equation, p(A/B) means "the probability of A occurring, given that B has occurred"; p(B/A) means "the probability of B occurring, given that A has occurred"; p(A) means "the probability of A occurring"; and p(B) means "the probability of B occurring." For an illustration of how one could use this theorem in something other than a card game, let's consider how one could calculate the probability that a person has HIV, given that they have a positive HIV test result. Let's assume they come from a region of the world where the prevalence of HIV is high, so that the probability that the person has HIV prior to testing is 0.2. Let's also assume that the probability that they would have a positive HIV test if they in fact have HIV is high (0.95). Finally, let's assume that the probability of having a positive HIV test is 0.22. Putting this together, we get:

p(Have HIV/positive test result) = (0.95 × 0.2) / 0.22 = 0.864

If we decide to repeat the test, we can no longer assume that the person's probability of having HIV is 0.2, because we have evidence that it is 0.864. Let's again assume that the probability that they will have a positive result if they have HIV is 0.95. However, the probability that they will test positive has also gone up, because they have already tested positive. If we assume that this probability is 0.85, we get:

p(Have HIV/positive test result) = (0.95 × 0.864) / 0.85 = 0.97

So, Bayes' Theorem shows us that getting positive results from two HIV tests increases the probability that this person has HIV from 0.2 to 0.97.

Psychologists and behavioral economists who have studied how human beings actually form probability judgments have found that human beings do not follow Bayes' Theorem all the time and are prone to several biases that violate Bayesian principles (Tversky and Kahneman 1974; Ariely 2010; Kahneman 2011; Thaler 2016). One of these biases is known as the anchoring heuristic, i.e., the human tendency to stay with one's initial estimate of a probability despite new evidence that should change it. For


example, a person who grew up where poisonous snakes are common might continue to believe that poisonous snakes are a significant risk to him or her, despite moving to an area where there are no poisonous snakes. Another of these biases is the availability heuristic, i.e. the human tendency to base probability estimates on information that is readily available to the mind because it is memorable, salient, or recently acquired (Tversky and Kahneman 1974; Kahneman 2011). For example, one might judge that shark attacks at the beach are fairly common, based on a recent, graphic media report of shark attacks.

While the human tendency to violate Bayesian principles is an important fact that should be taken into account in thinking about precautionary reasoning, it does not constitute a major objection to the subjective interpretation of probability, because one might argue that subjective probabilities are judgments that rational agents should make, based on the available evidence. In other words, the subjective interpretation is a normative, not a descriptive, account of probability (Hájek 2019). It would still be the case, subjectivists could argue, that people ought to follow Bayesian principles, even if they often fail to do so.

A more significant objection to the subjective interpretation is that Bayesian updating may not compensate for the inherent biases in prior probability estimates (Resnik 1987; Earman 1992). To calculate the probability that a person has HIV, given that they have had a positive test result, we entered several prior probabilities into Bayes' equation, such as the probability that the person has HIV, the probability that they would test positive if they have HIV, and the probability that anyone would test positive. Bayes' equation then gives us the posterior probability of the person having HIV. However, if we start out with biased probabilities prior to Bayesian updating, the posterior probability for the person having HIV will reflect those biases.
Suppose we had assumed, for example, that the prior probability that the person has HIV was very low (0.001), that the prior probability of testing positive if one has HIV was 0.5, and that the prior probability of testing positive was 0.25. Then we would get:

p(Have HIV/positive test result) = (0.5 × 0.001) / 0.25 = 0.002

As one can see, the posterior probability for having HIV would still be very low (0.002). It would take numerous tests to move the probability closer to the values we obtained in the previous example, based on different prior probabilities. Extensive Bayesian updating might not be able to overcome biases in our prior probability judgments. Proponents of subjectivism have a reply to this objection. They argue that in the long run Bayesian updating will overcome biased prior probability judgments so that our posterior probabilities will reflect the evidence, rather than our initial biases. Bayesians have constructed convergence theorems to prove this point (Howson and Urbach 1989). I do not dispute these arguments. However, it might take considerable time and evidence to obtain this outcome, and we may need to make decisions based on probabilities before this happens (Earman 1992).
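The two HIV-testing calculations, and the biased-priors variant, can be sketched in a few lines of Python (the helper name is my own):

```python
# Bayes' Theorem: p(HIV | positive) = p(positive | HIV) * p(HIV) / p(positive).
def posterior(prior, p_pos_given_hiv, p_pos):
    return p_pos_given_hiv * prior / p_pos

first = posterior(0.2, 0.95, 0.22)     # about 0.864 after one positive test
second = posterior(first, 0.95, 0.85)  # about 0.97 after a second positive
biased = posterior(0.001, 0.5, 0.25)   # about 0.002: biased priors dominate
```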


While the subjective approach may be suitable for personal decision-making that does not involve the public, such as placing bets on horse races, a strong case can be made that for public decision-making one should use probabilities that are as free as possible from biases, because the stakes are much higher. A bad bet on a horse race only harms the bettor, but a bad bet on a new drug could harm thousands of people. In theory, the subjective approach can yield probabilities that are free enough from biases for public decision-making. However, we should use caution when applying this approach to such decisions and be mindful of the potential biases that may impact them.

To summarize this discussion, the four interpretations of probability provide different ways of estimating probabilities. The classical approach uses mathematics, and the other three use empirical evidence. Since each of these interpretations has potential problems and limitations, the most reasonable way of estimating probability may be to use the interpretation that provides the most objective and reliable estimate, given the available evidence. If one has the ability to obtain evidence based on a sufficiently large sample of observations of an outcome, then it may be reasonable to use the statistical interpretation for estimating the probability that the outcome will occur in the future. If one cannot obtain this sort of evidence, then it may be reasonable to estimate the probability that an outcome will occur based on a scientific analysis of the system that has the propensity to produce the event. When this type of analysis is not possible, then it may be reasonable to estimate probabilities based on subjective judgments, with Bayesian updating as one obtains new evidence. It might also be reasonable to combine these approaches when making decisions involving probabilities.
For example, one might use the statistical or propensity interpretation of probability for prior probabilities that are then updated, using Bayesian principles, as new evidence comes in. Regardless of which interpretation of probability one uses (or some combination thereof), important questions need to be addressed concerning the quality and quantity of evidence used for estimating probabilities used in decision-making. Since a great deal often turns on public policy choices, our default position should be to use rules for decision-making under ignorance until we are satisfied that we have enough evidence to use rules for decision-making under risk. But what counts as enough evidence to use a probability estimate in public decision-making? I will not give a general answer to this question in this book but will assume that scientists can help us answer it, because scientists (and engineers and technicians) are the experts at evaluating and interpreting evidence.13 For example, we rely on medical researchers to tell us when we have enough evidence from clinical trials to estimate the likely benefits and risks of approving a new drug, and aeronautics engineers to tell us when we have enough evidence concerning the design and construction of an airplane to judge that it is safe to fly.14

13 I will discuss the use of experts in public decision-making in more detail in Chaps. 4 and 5.
14 The public can and should participate in the discussion of evidence for scientific statements used in public policy (Steel 2015; Resnik and Elliott 2016). For example, the public might demand more evidence for approving a new, high-risk drug than for approving a new, low-risk medical device. However, scientific expertise would still play a crucial role in evaluating and interpreting evidence.

2 Precautionary Reasoning and Decision Theory

However, as we shall see in Chap. 5, this is not a purely scientific issue, because values can also play an important role in deciding the degree (or level) of evidence required to assign probabilities to outcomes (Douglas 2009; Steel 2015; Elliott 2017). For example, we might require more evidence to approve a new drug than to place a bet on a horse race. In Chap. 5, I will return to this important point.

Before concluding this section, it is important to note that the concept of probability may also apply to statements we make about probabilities, because we may view those statements as more (or less) probable. Evidence that supports (or confirms) a statement (such as a hypothesis, diagnosis, or theory) increases the probability that it is true; conversely, evidence that refutes (or disconfirms) a statement decreases the probability that it is true (Huber 2019).15 For example, observing water boiling at 100 °C increases the probability that the statement "water boils at 100 °C" is true, whereas observing water not boiling at 100 °C decreases the probability of this statement. We can apply this idea to statements about probabilities. For example, if we roll a die and it comes up six 95/100 times, we could say "the probability that the die will come up six is 0.95 +/- the standard error for this experiment." We could also say that "it is highly probable that [the probability that the die will come up six is 0.95 +/- the standard error for this experiment]." These are two different statements. The former makes a probability claim about events in the world; the latter makes a probability claim about a statement, i.e. that the statement is probable or probably true. This may seem like an irrelevant distinction, but it is not, since rules for decision-making under risk assume that we know the probabilities of different outcomes. However, unless we use the mathematical interpretation of probability, our knowledge of probabilities is empirical, and therefore probabilistic. Some philosophers have developed quantitative approaches to confirmation (Huber 2019). For example, using such an approach, one might be able to claim that "the probability is 0.75 that [the probability that the die will come up six is 0.95 +/- the standard error for this experiment]." However, in practice, scientists and members of the public often speak of the probability of hypotheses (or theories) in qualitative terms, such as "probable," "improbable," "highly probable," "highly improbable," and so on.

15 This is the basic idea of Bayesian confirmation theory (Howson and Urbach 1989).

2.7 Decision-Making Under Risk

Expected utility theory (EUT) is the most influential approach to making decisions under risk. It is similar to the principle of indifference, except one does not assume that the probabilities for outcomes are equal. Instead, one assigns precise, numerical



Table 2.8 Decision matrix for investments with expected utilities

              Growing economy   Stable economy    Declining economy   Overall expected utility
Stocks        70 × 0.5 = 35     30 × 0.3 = 9      −13 × 0.2 = −2.6    41.4
Bonds         40 × 0.5 = 20     25 × 0.3 = 7.5    5 × 0.2 = 1         28.5
Mutual funds  53 × 0.5 = 26.5   45 × 0.3 = 13.5   −5 × 0.2 = −1       39

Table 2.9 Decision matrix for investments with expected utilities

              Growing economy   Stable economy    Declining economy   Overall expected utility
Stocks        70 × 0.2 = 14     30 × 0.6 = 18     −13 × 0.2 = −2.6    29.4
Bonds         40 × 0.2 = 8      25 × 0.6 = 15     5 × 0.2 = 1         24
Mutual funds  53 × 0.2 = 10.6   45 × 0.6 = 27     −5 × 0.2 = −1       36.6

probabilities to the different outcomes, based on one's knowledge or evidence.16 The product of the probability of an outcome and its utility is its expected utility.17 The sum of these expected utilities for an option is its total expected utility. Utilities are values assigned to outcomes, which could be positive or negative. For example, if we measure utility in dollars, then a gain in dollars would be positive utility and a loss would be negative. Thus, according to EUT, one should choose the option with the highest total expected utility (Resnik 1987; Peterson 2017). In the investment example (Table 2.2), suppose that the probability that the economy will grow is 0.5, that it will be stable is 0.3, and that it will decline is 0.2. The overall expected utility for stocks would be: (70 × 0.5) + (30 × 0.3) + (−13 × 0.2) = 41.4. Table 2.8 presents the decision matrix for these investments under these assumptions concerning the probability of different economic conditions. So, under these probabilities, one should invest in stocks because they yield the highest expected utility. If the probabilities were different, the expected utilities and the recommended decision would also be different, as one can see from Table 2.9. One could also apply EUT to medical decision-making (Albert et al. 1988). For example, suppose that a patient with advanced colon cancer is trying to choose among

16 P = 0.92 would be a precise probability, whereas p is between 0.89 and 0.95 would not be. If probabilities lack precision, then EUT may not yield clear results. For example, in Table 2.8, if the probability of a growing economy is between 0.2 and 0.6, a stable economy is between 0.1 and 0.5, and a declining economy is between 0 and 0.4, then the expected utilities would also range and overlap, and EUT would not tell us which investment choice we should make. The degree of precision required may vary, depending on the type of decision we are making and the type or amount of evidence we have for probabilities. Decision theorists try to deal with this problem through sensitivity analysis. The basic idea here is that one attempts to determine how sensitive expected utility is to the parameters of the decision, such as probabilities and values. One may be able to show how the optimal choice would vary with respect to variations of the parameters or that some choices are stable with respect to a certain amount of variation (Evans 1984).
17 Utilities should also be precise. See discussion in Footnote 16.
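The expected-utility calculations in Tables 2.8 and 2.9, and the kind of sensitivity analysis described in footnote 16, can be sketched as follows (the payoffs and probabilities are those used in the tables; this is an illustration, not a regulatory methodology):

```python
# Expected utilities for the investment matrix, under the probability
# assignments of Table 2.8 (0.5/0.3/0.2) and Table 2.9 (0.2/0.6/0.2).

payoffs = {
    "stocks":       [70, 30, -13],   # growing, stable, declining economy
    "bonds":        [40, 25, 5],
    "mutual funds": [53, 45, -5],
}

def expected_utility(utilities, probabilities):
    return sum(u * p for u, p in zip(utilities, probabilities))

for probs in ([0.5, 0.3, 0.2], [0.2, 0.6, 0.2]):
    eus = {option: round(expected_utility(u, probs), 1)
           for option, u in payoffs.items()}
    print(probs, eus, "-> choose", max(eus, key=eus.get))

# Sensitivity analysis (footnote 16): vary p(growing), hold p(declining)
# at 0.2, and see where the recommended option changes.
for p_grow in (0.2, 0.3, 0.4, 0.5, 0.6):
    probs = [p_grow, 0.8 - p_grow, 0.2]
    eus = {o: expected_utility(u, probs) for o, u in payoffs.items()}
    print(f"p(growing) = {p_grow}: choose {max(eus, key=eus.get)}")
```

The scan shows the recommendation flipping from mutual funds to stocks as the probability of a growing economy rises, which is the sense in which the optimal choice is "sensitive" to the probability parameters.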



Table 2.10 Decision matrix for cancer treatment with expected utilities

              Aggressive cancer   Moderately aggressive cancer   Non-aggressive cancer   Overall expected utility
Chemotherapy  9 × 0.4 = 3.6       12 × 0.4 = 4.8                 15 × 0.2 = 3            11.4
New drug      9 × 0.4 = 3.6       15 × 0.4 = 6                   18 × 0.2 = 3.6          13.2
No treatment  4 × 0.4 = 1.6       6 × 0.4 = 2.4                  12 × 0.2 = 2.4          6.4

three options: a standard chemotherapy regimen, a recently approved drug (not chemotherapy), or no treatment at all (and pursuing palliative care). His oncologist has data on survival rates for patients pursuing these different options. Survival rates depend on how aggressive the cancer is. Suppose there is a 40% chance that the cancer is aggressive, a 40% chance that it is moderately aggressive, and a 20% chance that it is non-aggressive. The survival rates for these possible states of the world are: aggressive: 9 months for chemotherapy, 9 months for the new drug, and 4 months for no treatment; moderately aggressive: 12 months for chemotherapy, 15 months for the new drug, and 6 months for no treatment; and non-aggressive: 15 months for chemotherapy, 18 months for the new drug, and 12 months for no treatment. Putting these numbers together, we have the decision matrix in Table 2.10. So, assuming the patient wants to have the longest expected survival time, he or she should opt for the new drug. EUT is an influential and popular approach to decision-making under risk because it is simple, straightforward, quantitative, precise, and empirical. For these reasons, many policymakers and scientists regard it as a scientific approach to assessing and managing risks (Brombacher 1999; National Research Council 2009).18 Government agencies that regulate risks, such as the Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and Occupational Safety and Health Administration (OSHA), use decision-making strategies based on EUT. An influential approach to decision-making in business and government, known as cost-benefit analysis, applies EUT to economic decisions and measures utilities in financial terms, such as dollars (Samuelson and Nordhaus 2009; Peterson 2017).
As an approach to rational decision-making, EUT makes a great deal of sense, because individuals who use this approach to make decisions will tend to effectively achieve their ends in the long run (Briggs 2019). When one is investing in stocks, for example, it makes sense to make choices that tend to maximize one's returns on investments. However, EUT has several problems that may make it less than desirable

18 In risk assessment, one identifies and evaluates risks, based on scientific evidence. In risk management, one weighs and compares risks and expected benefits to decide upon the course of action for dealing with risks (Shrader-Frechette 1991; National Research Council 2009). For example, in the investment examples discussed in this chapter (e.g. Tables 2.8 and 2.9), the risks were financial losses and the benefits were financial gains. Risk assessment involved assigning probabilities and dollar values to these possible outcomes. Risk management involved deciding which course of action was the best to take (e.g. investing in stocks, bonds, or mutual funds), given the risk assessment.



as an approach to precautionary decision-making. EUT may be a reasonable approach to decision-making under some conditions, but not under others.

2.8 Problems with Expected Utility Theory

The first problem with EUT, which was discussed above but bears repeating here because it is so important, is that we may not have enough evidence to assign accurate and precise probabilities to different outcomes. While this is not a problem internal to the theory, it is a crucial problem for applying the theory to real-world decisions.19 We could make serious mistakes if we use EUT without enough evidence to be confident in our probability estimates. In the new drug example (above), suppose that the drug is available only in a country with lax drug regulation and oversight and that there are no published controlled clinical trials on its risks and benefits. The only evidence concerning the drug's benefits and risks comes from a small, uncontrolled study published in an obscure journal. The publication reports that 4 people have lived 20 months while taking the drug, 4 have lived 12 months, and 4 have lived only 4 months. One could conclude from this small sample (12 patients) that the drug has an expected utility of 12 months of life and decide to take it instead of standard chemotherapy. However, taking the drug could be a serious mistake if it turns out that subsequent data from a larger controlled clinical trial (300 patients) show that 50% of people taking the drug live only 6 months, 25% live 18 months, and 25% live 12 months. In a situation like this, it would have been better (i.e. more reasonable) to not include the option of taking this drug in one's expected utility calculus (due to insufficient evidence concerning probabilities) or to apply a rule for decision-making under ignorance (such as maximin) to the decision.20 As we shall see in Chaps. 4 and 5, the PP may be a reasonable alternative to EUT when we have insufficient evidence concerning the probabilities of different outcomes.
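The gap between the two evidence bases can be made concrete with a quick calculation (a sketch using the survival figures quoted above):

```python
# Why the small sample is misleading: the same expected-survival
# calculation applied to the 12-patient study and to the later
# 300-patient trial described in the text.

def expected_survival(groups):
    """groups: list of (fraction_of_patients, months_survived) pairs."""
    return sum(fraction * months for fraction, months in groups)

small_study = [(4/12, 20), (4/12, 12), (4/12, 4)]     # 12 patients
larger_trial = [(0.50, 6), (0.25, 18), (0.25, 12)]    # 300 patients

print(round(expected_survival(small_study), 1))   # expected months, small study
print(round(expected_survival(larger_trial), 1))  # expected months, larger trial
```

The small study suggests an expected survival of 12 months, while the larger trial implies 10.5 months, so a decision based on the small sample would have overestimated the drug's benefit.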
The second problem, also discussed above, is that we may not be able to obtain quantitative measurements of utilities that we can use in our calculus, due to moral (or value) uncertainty. Again, this is not a problem internal to EUT, but it is a difficulty we often encounter when we try to apply the theory to real-world problems. To use EUT for making complex personal and public policy decisions, one needs to be able to measure utilities in terms of some common, quantifiable metric or scale. In the cases we have considered thus far, the measurement was simple and straightforward: for the investment choices (Tables 2.8 and 2.9), utilities were measured in dollar amounts; for the cancer treatment choices (Table 2.10), utilities were measured in months of life. But suppose that we make a decision that involves the comparison

19 What is meant here is that this problem is not a matter of EUT's logical and mathematical foundations. Decision theorists have adequately dealt with these sorts of problems. See Resnik (1987), Bradley (2017), and Peterson (2017).
20 Maximin would recommend taking chemotherapy to avoid the worst outcome (4 months of life). Maximax would recommend taking the new drug to have a chance at obtaining the best outcome (20 months of life).



of different types of outcomes and values. For example, suppose that in the cancer treatment case, we also know that the new treatment is much more expensive than chemotherapy and we want to factor this cost into our decision-making. To do this, we would need to be able to compare costs of treatment and months of life in terms of a common metric.21 But what might this metric be? An answer given by many economists is that we can measure anything we value, including life, health, clean air and water, wilderness, and social justice, in monetary terms, such as our willingness to pay (WTP) for something (Samuelson and Nordhaus 2009).22 The value of human life, for example, could be measured in terms of WTP for life-saving treatments, or our willingness to engage in life-risking activities for money. If you will pay $20,000 for cancer treatment that extends your life by two months, then one month of life is worth $10,000 to you. If you would participate in a medical experiment that has a 1/100 risk of death for $2000, then your life is worth $200,000 to you. If you would pay an extra $500 per year on your electric bill to have 10% less sulfur dioxide in your local air, then that increase in air quality is worth $500 per year to you. If you would pay $2 more per pound for organically grown tomatoes (as opposed to tomatoes treated with industrial pesticides and fertilizers), then that is how much more those tomatoes are worth to you. There are at least two objections to the WTP approach to measuring value. The first is practical: WTP usually does not provide us with useful guidance because we often must make decisions involving risks and benefits when we lack information concerning how much we would pay for various outcomes. We often lack this information because WTP can be difficult to measure accurately and objectively and because we have not gathered the data we need, due to lack of time or resources.
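The WTP arithmetic in these examples can be made explicit; in each case the implied value is simply the payment divided by the quantity (or the risk) involved:

```python
# Making the WTP inferences from the text explicit.

# $20,000 for a treatment that extends life by two months:
value_per_month_of_life = 20_000 / 2        # $10,000 per month

# $2,000 to accept a 1/100 risk of death in an experiment (the implied
# "value of a statistical life" is compensation divided by risk):
value_of_life = 2_000 / (1 / 100)           # $200,000

print(value_per_month_of_life, value_of_life)
```

These back-of-the-envelope inferences are exactly what makes WTP attractive to economists, and exactly what the objections below call into question.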
Proponents of the WTP approach can reply to this objection by arguing that this is not a fatal flaw with using WTP as a measure of utility, and that given enough time and resources we can determine our WTP for anything. The second objection is theoretical and runs much deeper: WTP often does not provide us with useful guidance because some things have moral, social, or political value that cannot be measured in monetary terms or equated with money (Resnik 1987; Sagoff 2004; Ackerman 2008). Many people would argue that we cannot, morally, put a price tag on human life (Kant 1981). While we might be able to calculate the value of a life in terms of WTP or economic variables (such as expected income or economic productivity), this calculation would not capture the real value of human life, which is priceless.23 Even attempting to put a price tag on human life disrespects our inherent dignity and worth (Kant 1981; Hill 1992).24 One might argue that many other things that we regard as morally or socially valuable, such as love, loyalty, integrity, autonomy, natural wilderness, and social justice, cannot

21 Developing a common metric for measuring what we value is also a problem for utilitarianism, as we shall see in Chap. 3.
22 Willingness to pay is a type of expected monetary value (Resnik 1987).
23 For its regulatory decision-making, the EPA estimates that the value of a statistical human life is $7.4 million, measured in 2006 dollars (Environmental Protection Agency 2019).
24 We will discuss this point in more depth in Chap. 3.



be defined or measured in terms of WTP or other economic variables (Sagoff 2004; Ackerman 2008).25 While the WTP approach may apply to a carefully defined range of economic and business decisions, we cannot use it to make important public policy choices, because these decisions often involve values that cannot be equated with WTP. The theoretical objection to using WTP to measure value extends beyond this particular measurement tool and applies to other methods one might use for this purpose.26 If we hold that some things, such as human life, autonomy, natural wilderness, and social justice, have an inherent moral value that cannot be defined or measured in terms of a common, quantifiable property (such as WTP), then EUT will have limited applicability to public policy decisions. This is not to say that we should ignore the effects that our choices are likely to have on measurable utilities (such as economic costs and benefits), since these impacts are often relevant to ethical and public policy choices (Timmons 2012; Resnik 2018). However, measurable utilities (such as WTP) should not be the sole determining factor in public policy decisions that involve competing moral, social, or political values. For example, in deciding whether to approve a new cancer drug, a regulatory agency should consider the likely impacts of the drug on measurable outcomes such as survival, morbidity, and economic costs and benefits, but also the drug’s impact on outcomes that are not easily quantified or measured, such as quality of life and access to health care. Ideally, the agency’s decision should be based on a careful assessment of relevant quantitative and qualitative factors. We will return to this point when we examine moral approaches to precautionary reasoning in Chap. 3.

25 Readers may recall the investment example from Chap. 1 in which Rita decided that it was more important to her to invest her money in ethical businesses than to make the highest possible return on her investments.
26 Some economic theorists argue that utility can be equated with satisfaction. For example, if you derive more satisfaction from eating chocolate ice cream than eating vanilla, then eating chocolate ice cream has a greater utility for you (Samuelson and Nordhaus 2009). However, this approach faces the same types of problems that plague WTP, because we often lack information concerning how much satisfaction people obtain from different things they value and there are many things that cannot be morally equated with satisfaction of preferences. For example, I might obtain more satisfaction from eating ice cream than changing my two-year-old son's diaper, but that does not mean that eating ice cream is more valuable (morally) than changing my son's diaper. We regard some things as valuable even though they do not provide us with a great deal of satisfaction. Others argue that we can measure utility in terms of the risks we are willing to take for something (Von Neumann and Morgenstern 1944). For example, if I am willing to risk my life to save my son's life but not my dog's life, then I obtain more utility from my son than my dog. This approach also faces the same sorts of problems that plague WTP, because we often lack information about the risks people are willing to take and value is often not based on willingness to take risks. For example, one could argue that the value of social justice, privacy, natural wilderness, personal integrity, and other things cannot be equated with the risks we are willing to take for these things.



2.9 Social Choice Theory

The last part of decision theory we will consider in this chapter is social choice theory. Up to this point, the approaches to decision-making we have considered apply to choices made by individuals. Social choice theory deals with group (or collective) decision-making. Clearly, individual decision-making rules have considerable relevance for group decision-making, since members of the group may choose to follow these rules (such as maximin, EUT, etc.) when interacting with other members. Also, in some cases, institutions or organizations (e.g. government agencies, businesses) may act like individuals when they make decisions, and they may follow rules for individual decision-making when they do so. For example, a business might follow EUT when making financial decisions or a regulatory agency might follow EUT when making decisions concerning drug approvals. However, in this section we will consider issues that arise when groups composed of individuals make decisions. There are several different strategies for making group decisions. In a dictatorship, one person, such as a monarch, emperor, queen, king, or chief, makes decisions for the group. In an oligarchy, a group of individuals, such as a royal family, board of directors, or tribal elders, makes decisions for the larger group. In a democracy, members of the group jointly make decisions by voting or consensus. In direct democracy, citizens directly participate in group decisions. For example, a state referendum on raising the sales tax by 10% would be an example of direct democracy. In representative democracy, members of the group elect representatives who make decisions on their behalf. The US federal government is a form of representative democracy in which citizens vote for members of Congress and the President, who make decisions for the US. Most of the work in social choice theory has focused on democratic decision-making involving voting procedures.27 In Chap.
4, we will examine the moral arguments for democracy and political and practical problems with this form of government. For now, we will consider some theoretical problems with democratic voting. To understand some of the problems with democratic decision-making, we need to introduce several ideas. The first is the idea of social choice, which is a choice made by members of a group (or citizens). For example, a group of citizens might need to select a site for disposal of solid waste. Like individual choices, social choices have options (or alternatives) and outcomes. For the solid waste decision, some options could include different proposed sites, e.g. site A, site B, and site C. Also, like individual choices, social choices may be made under conditions of ignorance or risk. For example, the likely public health, environmental, and economic impacts of 27 Consensus is a form of democracy in which members discuss options and arrive at a decision that all agree upon, without formal voting. Consensus is a morally admirable form of decision-making (and some would say the best) because it respects each person’s views. While consensus can be achieved in small groups, it often cannot be achieved in larger ones, and it is completely unworkable in very large groups (e.g. cities, states, or nations). Also, consensus may lead to unfair results in some cases, because it allows small minorities (or even single individuals) to thwart the will of the majority.



disposing of waste in different sites may or may not be known. Citizens may assign different utilities (expressed as preferences) and probabilities (to the extent that these are known) to different outcomes. For example, citizen 1 might prefer A to B to C, citizen 2 might prefer B to C to A, and so on. Decision theorists are concerned with how to aggregate the preferences of citizens to produce a social choice that is rational and fair (Resnik 1987), in other words, a choice that represents the will of the people. The most common type of aggregation is majority rule: if the majority of citizens prefer one alternative to the other, then social preferences should reflect this ordering. One problem with majority rule is that it may not produce a winner when there are more than two alternatives. In this case, one might use another method, such as plurality rule, to determine rankings. In plurality rule, the alternative with the most votes is ranked first, the one with the second most is ranked second, and so on. To ensure a winner that a majority of the citizens favor, one could also hold runoff elections among the top two vote-getters. It is also important to consider how to aggregate citizens' probability estimates, but most theorists do not devote much attention to this problem (Resnik 1987). One might argue that probability estimates used in public policymaking should be based on objective evidence, not on the whims of democracy. For example, citizens could help decide whether to adopt a policy related to mitigating climate change (such as a carbon tax) but they would not decide the probability estimates for outcomes related to different options (such as the likely economic and environmental impacts of a carbon tax).
Citizens (or their representatives) could delegate responsibility for probability estimations to experts (such as scientists, engineers, medical professionals, or statisticians) for the purpose of making public policy decisions.28 Thus, probability estimates for social choices would not be completely decided by a democratic process. We will return to the issue of the role of experts in decision-making in Chap. 5 (see Jasanoff 1990; Pielke 2007; Resnik 2009). Returning to the problem of aggregating preferences, decision theorists study collective choice rules known as social welfare functions (SWFs). SWFs generate preference orderings for the group from the preference orderings of the citizens. Preferences must conform to various logical and mathematical constraints to be considered rational.29 One of the important questions in social choice theory is whether democratic approaches to decision-making, such as majority rule, can generate rational preference orderings for the group (Resnik 1987). Decision theorists have shown that some forms of voting lead to logical problems under certain conditions. One of these is known as the voting paradox, which was first recognized by eighteenth-century French philosopher and mathematician Nicolas de Condorcet (1743–1794). To understand this problem, suppose that we have three voters, 1, 2, and 3; and

28 Of course, citizens (or groups of citizens representing industry interests or political ideologies) might decide to involve themselves in these debates by hiring experts to acquire evidence to support their point of view (Michaels 2008). In the debate concerning climate change, for example, opponents of climate change mitigation proposals have hired experts to challenge the scientific evidence concerning the role of human activities in global warming (Hulme 2009; Giddens 2013).
29 See Footnote 3.



Table 2.11 Illustration of Condorcet's voting paradox

         1st   2nd   3rd
Voter 1  X     Y     Z
Voter 2  Y     Z     X
Voter 3  Z     X     Y
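The circular majority preferences in Table 2.11 can be checked mechanically by pairwise comparison (a minimal sketch; pairwise majority voting is the aggregation method described in the text):

```python
# Pairwise majority comparisons for the rankings in Table 2.11
# (Condorcet's voting paradox).
from itertools import combinations

# Each voter's ranking of the landfill sites, best first.
rankings = {1: "XYZ", 2: "YZX", 3: "ZXY"}

def majority_prefers(a, b):
    """True if a majority of voters rank a above b."""
    wins = sum(r.index(a) < r.index(b) for r in rankings.values())
    return wins > len(rankings) / 2

for a, b in combinations("XYZ", 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"majority prefers {winner} to {loser}")
```

The output exhibits the cycle: a majority prefers X to Y, Y to Z, and yet Z to X, so no transitive group ordering exists.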

Table 2.12 Illustration of unanimity

         1st   2nd   3rd   4th   5th
Voter 1  A     B     C     D     E
Voter 2  A     B     D     E     C
Voter 3  C     D     E     A     B

three alternative sites for placement of a landfill, i.e. sites X, Y, and Z. Suppose that 1 prefers X to Y to Z; 2 prefers Y to Z to X; and 3 prefers Z to X to Y. We can represent their collective preferences as shown in Table 2.11. The preferences for these citizens are circular, because the majority prefer X to Y, Y to Z, and Z to X. Since preference ordering must be transitive, majority rule under these conditions cannot produce a rational preference ordering for the citizens (Resnik 1987; Pacuit 2019).30 It cannot even produce a winner, because each alternative receives the same number of first-place votes. Although the voting paradox is an important problem in decision theory, it probably has limited relevance for most democratic processes, because the probability of circular preference orderings decreases as the number of voters increases (Gehrlein and Lepelley 2017). For example, if we add a 4th voter who prefers X to Y to Z, the circle and the tie are broken. Another well-known paradox was developed by American economist Kenneth Arrow (1921–2017). Arrow's paradox, also known as Arrow's impossibility theorem, states that it is impossible to generate a rational social preference ordering when there are more than two alternatives, a finite number of voters, and certain other conditions obtain.31 The first of these conditions is unanimity: if each citizen prefers one choice over another, the group ranking should reflect this preference ordering. For example, suppose we have 3 voters and 5 alternatives (Table 2.12). In this example, each citizen prefers A to B and D to E, so the group ranking should reflect this ordering. The second condition is independence of irrelevant alternatives: social preferences depend only on pairwise comparisons of individual preferences. Another way of stating this condition is that removing irrelevant alternatives should not affect social rankings (Pacuit 2019). For example, suppose we remove alternative C (Table 2.13).

30 Transitivity is the requirement that if A prefers X to Y and prefers Y to Z, then A prefers X to Z; see Footnote 3.
31 I am not going to examine Arrow's theorem in detail here. For further insight, see Resnik (1987), Pacuit (2019).



Table 2.13 Illustration of independence of irrelevant alternatives

         1st   2nd   3rd   4th
Voter 1  A     B     D     E
Voter 2  A     B     D     E
Voter 3  D     E     A     B

Table 2.14 Illustration of dictatorship

         First preference   Second preference   Third preference
Voter 1  X                  Y                   Z
Voter 2  X                  Z                   Y
Voter 3  Y                  Z                   X
Voter 4  Y                  X                   Z
Voter 5  Z                  X                   Y
Voter 6  Z                  Y                   X
Voter 7  X                  Y                   Z
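The configuration in Table 2.14 can be verified computationally: voters 1-6 split evenly over the six possible rankings of X, Y, and Z, so every pairwise comparison among them is tied 3-3, and voter 7's ballot settles each one (a sketch using pairwise majority comparisons):

```python
# A check of the pivotal-voter situation in Table 2.14: whatever ranking
# voter 7 submits, the pairwise majority ordering matches it exactly.
from itertools import permutations

fixed_voters = ["XYZ", "XZY", "YZX", "YXZ", "ZXY", "ZYX"]  # voters 1-6

def majority_prefers(a, b, voters):
    """True if more than half of the voters rank a above b."""
    return sum(r.index(a) < r.index(b) for r in voters) > len(voters) / 2

for voter7 in ("".join(p) for p in permutations("XYZ")):
    voters = fixed_voters + [voter7]
    agrees = all(
        majority_prefers(a, b, voters) == (voter7.index(a) < voter7.index(b))
        for a in "XYZ" for b in "XYZ" if a != b
    )
    print(voter7, "group ordering matches voter 7:", agrees)
```

The check prints True for all six possible ballots voter 7 might cast, which is the sense in which voter 7 "dictates" the group ordering.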

Removing alternative C should not affect the social preference ordering, since each citizen still prefers A to B and D to E. The third condition is non-dictatorship: no individual voter's preference orderings should determine the social preference orderings. Arrow was able to show that no SFW satisfies unanimity, independence of irrelevant alternatives, and non-dictatorship (Resnik 1987). An SFW that satisfies the first two conditions will be a dictatorship, as illustrated by Table 2.14. In this situation, voter 7 is pivotal because he or she can dictate the outcome of the election. If voter 7 ranks X over Y over Z, then the group's ranking will be X over Y over Z; if voter 7 ranks X over Z over Y, then the group's ranking will be X over Z over Y; and so on. Before considering responses to Arrow's paradox, it is important to realize that it does not apply to voting procedures in which there are only two alternatives, such as elections involving only two candidates for political office or a referendum requiring a yes or no vote. In these elections, majority rule can decide the outcome fairly. Thus, Arrow's paradox does not arise in many of the elections held in the US and other countries. Commentators have developed different responses to Arrow's paradox. One response is to relax the independence of irrelevant alternatives assumption and allow positional vote counting (Pacuit 2019).32 Under this method, alternatives receive points for where voters rank them, and these points are aggregated. For example, if there are four alternatives, a first choice would receive 4 points, a second 3, and so on, as in Table 2.15.

32 This is also known as Borda counting, named after the French mathematician and physicist Jean-Charles de Borda (1733–1799).
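Voter 7's pivotal position in Table 2.14 can be checked directly: among voters 1 through 6, every pairwise contest between X, Y, and Z is deadlocked 3 to 3, so under pairwise majority rule the group ordering simply copies whatever ballot voter 7 submits. The following sketch illustrates this; the ballot encoding and function names are my own, not the book's.

```python
from itertools import combinations

# Ballots from Table 2.14, listed best-to-worst.
ballots = [
    ["X", "Y", "Z"],  # Voter 1
    ["X", "Z", "Y"],  # Voter 2
    ["Y", "Z", "X"],  # Voter 3
    ["Y", "X", "Z"],  # Voter 4
    ["Z", "X", "Y"],  # Voter 5
    ["Z", "Y", "X"],  # Voter 6
    ["X", "Y", "Z"],  # Voter 7 (the pivotal voter)
]

def prefers(ballot, a, b):
    """True if this ballot ranks alternative a above alternative b."""
    return ballot.index(a) < ballot.index(b)

# Each pair is tied 3-3 among voters 1-6, so voter 7 breaks every tie
# and the group ordering matches voter 7's ballot exactly.
for a, b in combinations("XYZ", 2):
    votes_for_a = sum(prefers(v, a, b) for v in ballots[:6])
    assert votes_for_a == 3  # deadlocked without voter 7
    winner = a if prefers(ballots[6], a, b) else b
    print(f"{a} vs {b}: voter 7 decides for {winner}")
```

Changing voter 7's ballot to any other ordering of X, Y, and Z changes the group ordering to match, which is exactly what Arrow means by a dictator.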


2 Precautionary Reasoning and Decision Theory

Table 2.15 Positional vote counting

          W    X    Y    Z
Voter 1   4    2    3    1
Voter 2   4    2    1    3
Voter 3   3    4    1    2
Voter 4   3    4    2    1
Total    14   12    7    7

In Table 2.15, W wins with 14 points even though the same number of voters ranked W and X first, because W was never ranked lower than second while X was ranked third by two voters.33 A possible problem with positional vote counting is that voters could vote disingenuously to manipulate the outcome (Pacuit 2019). For example, if voter 2 wants W to win, he or she could give X a very low position so that it will get fewer points. Voter 2 might actually prefer X to the other alternatives, but he might not vote that way because he wants W to win. While vote manipulation may be a problem that could undermine fairness in smaller elections, it is probably less likely to occur in larger ones, because each individual vote will have less of an impact on the overall outcome, so voters will not be as tempted to manipulate their votes. Another response is to relax the non-dictatorship requirement (Pacuit 2019). An argument for doing this is that Arrow's non-dictatorship requirement is very different from the idea of a dictator that we find morally objectionable. Most of us associate the word 'dictator' with a despot or tyrant like Stalin, Hitler, or Mussolini. A dictator controls his or her country for his or her own purposes, which may be morally corrupt, ruthless, selfish, and cruel. But a dictator, in Arrow's sense of the word, may not be morally objectionable, provided that the person who casts the pivotal vote does not know that they have the power to control the outcome of the voting process. For example, if voter 7 in Table 2.14 does not know that her vote will determine the group's preference orderings, then she cannot intentionally manipulate the outcome. Most voting systems guarantee secrecy, so that voters cannot know how others have voted. A third response is to limit elections to two alternatives, since Arrow's problem does not arise when there are fewer than three alternatives (Pacuit 2019). To do this, one could hold a voting tournament. Thus, if there are four candidates, A, B, C, and D, A could be matched against B and C against D in the first round, and then the winners would be matched against each other. This response can be time-consuming and impractical when numerous candidates are running for office. Also, it could produce unfair results, since the outcome may depend on how candidates are paired in the tournament.

33 Positional vote counting is often used in sports for voting on awards. For example, baseball writers use a ballot to vote for the Most Valuable Player in the American League and the National League. Ballots can include ten names. First place is worth 10 points, second 9, and so on. The player with the most total points wins.
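The Borda tally behind Table 2.15 is easy to reproduce: convert each voter's ranking into points and sum them. In this sketch the ballot encoding and function name are illustrative, not taken from the text.

```python
# Borda (positional) counting: with n alternatives, a first-place ranking
# earns n points, second place n - 1, and so on; points are summed per alternative.
def borda_count(ballots):
    n = len(ballots[0])
    totals = {}
    for ballot in ballots:  # each ballot lists alternatives best-first
        for rank, alt in enumerate(ballot):
            totals[alt] = totals.get(alt, 0) + (n - rank)
    return totals

# The four ballots implicit in Table 2.15 (best-to-worst).
ballots = [
    ["W", "Y", "X", "Z"],  # Voter 1
    ["W", "Z", "X", "Y"],  # Voter 2
    ["X", "W", "Z", "Y"],  # Voter 3
    ["X", "W", "Y", "Z"],  # Voter 4
]
# W totals 14 and X totals 12, even though W and X tie on first-place rankings.
print(borda_count(ballots))
```

Voter 2's manipulation strategy from the text amounts to moving X from third to fourth place on her ballot, costing X one point without changing W's total.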

2.9 Social Choice Theory

Table 2.16 Voting with cardinal utilities

          X    Y    Z
Voter 1   7    6    5
Voter 2   7    5    6
Voter 3   5   10    6
Voter 4   6   10    5
Voter 5   4    3    5
Voter 6   3    4    5
Voter 7   4    3    2
Total    37   41   34

Table 2.17 Voting with cardinal utilities

         Chocolate   Vanilla   Strawberry
John        10          3          1
Martha       4          5          4
Ted          2          5          3
Total       16         13          8

A more radical response to Arrow's theorem is to aggregate cardinal utilities instead of ordinal ones (Sen 1970). Under this approach, numbers represent the strength of voters' preferences, as illustrated in Table 2.16. There are several problems with cardinal voting procedures, however. First, cardinal voting can produce winners that are not the first choice of most voters. In Table 2.16, Y is the winner even though Y is the first choice of only 2/7 voters, while X is the first choice of 3/7 voters. Under ordinal voting, X would be the first choice. Another problem with cardinal voting is that it requires that utilities be interpersonally comparable, which is a questionable assumption (Resnik 1987; Pacuit 2019). For example, suppose that John, Martha, and Ted have pooled their money to buy a quart of ice cream and are deciding whether to buy chocolate, vanilla, or strawberry. They assign utilities as in Table 2.17. Martha and Ted both prefer vanilla to chocolate and strawberry, but if we aggregate their personal utilities, the group will decide to buy chocolate, which is John's favorite. This outcome occurs because John has assigned chocolate a very high utility (10) compared to the other flavors. But would this be a fair decision? Should the group go with John's preference simply because he likes it so much? How can we compare the amount of utility John receives from eating chocolate ice cream to the amount that Martha and Ted get from vanilla? We will return to this important problem when we discuss utilitarianism in Chap. 3.
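The ice-cream example in Table 2.17 can be sketched as a direct summation of cardinal utilities; the variable names below are my own, while the numbers come from the table.

```python
# Cardinal-utility voting: sum each person's utility for each alternative
# and choose the alternative with the largest total.
utilities = {
    "John":   {"chocolate": 10, "vanilla": 3, "strawberry": 1},
    "Martha": {"chocolate": 4,  "vanilla": 5, "strawberry": 4},
    "Ted":    {"chocolate": 2,  "vanilla": 5, "strawberry": 3},
}

flavors = ["chocolate", "vanilla", "strawberry"]
totals = {f: sum(person[f] for person in utilities.values()) for f in flavors}
winner = max(totals, key=totals.get)
# Chocolate wins (16 vs. 13 vs. 8) even though Martha and Ted both rank vanilla first.
print(winner, totals)
```

The computation makes the fairness worry concrete: John's single intense preference outweighs the combined first choices of the other two voters, and nothing in the arithmetic tells us whether his "10" means more than their "5"s.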


2.10 Reflections on Democracy

Thus far we have examined two well-known paradoxes in social choice theory, Condorcet's voting paradox and Arrow's paradox. There are many other paradoxes relating to voting, but we need not consider them here.34 At this point, I would like to take a step back from these theoretical problems with democracy and focus on practical issues. The problems identified by social choice theorists give us reasons to believe that voting is often not a perfectly fair or rational way for groups to make decisions. This result, while important, is not all that surprising, since most of us are probably familiar with problems with the electoral process through personal experience or media reports. For a recent example, most readers will recall that in several U.S. presidential elections the winning candidate had less than 50% of the popular vote.35 In the 2016 US presidential election, there were also numerous allegations of voter fraud and tampering with electronic voting systems (Voter Fraud Facts 2019; Kaplan 2019). Beyond voting issues, there are also significant political and practical problems with democracies, such as curbing the influence of powerful citizens and of industry, consumer, and ideological groups on public debates and the electoral process, and delineating the role of experts in democracies (discussed briefly earlier). We will consider these political and practical issues in more detail in Chap. 5. While these theoretical, political, and practical problems are important to address, they do not show that we should abandon democracy. Democracy is not a perfect form of government, but it is far better than the alternatives, i.e., dictatorship or oligarchy. If we are to pursue democratic ideals—and I think we should—then we should move forward by tackling these problems so that we can improve our system of government. I will return to these issues in Chap. 5.

34 See Pacuit (2019).
35 In 2016, Donald Trump won the electoral college but lost the popular vote to Hillary Clinton; in 2000, George W. Bush won the electoral college but lost the popular vote to Al Gore; and in 1888, Benjamin Harrison won the electoral college but lost the popular vote to Grover Cleveland.

2.11 Conclusion

As we have seen in this chapter, decision theory provides us with some valuable insights into rational decision-making and has useful applications for personal and social choices. However, decision theory, by itself, does not show us how to make reasonable decisions, because its rules and strategies are morally neutral, and for a decision to be reasonable it must take moral and social values into account. Decision theory tells us how to make choices related to obtaining specific outcomes, but it does not tell us whether these outcomes are morally worthwhile. Decision theory assumes that we have already made some evaluation of the outcomes, based on our


values. To make reasonable precautionary decisions, we therefore need to examine theories that tell us which outcomes we ought to pursue, and how we ought to go about pursuing them. This will be the main topic of Chap. 3.

References

Ackerman, F. 2008. Poisoned for Pennies: The Economics and Toxics of Precaution. Washington, DC: Island Press.
Albert, D.A., R. Munson, and M.D. Resnik. 1988. Reasoning in Medicine: An Introduction to Clinical Inference. Baltimore, MD: Johns Hopkins University Press.
Ariely, D. 2010. Predictably Irrational, Revised and Expanded Edition: The Hidden Forces That Shape Our Decisions. Cambridge, MA: Harvard University Press.
Baker, M. 2017. Officials Crashed Jet into Nuclear Reactor Facility to Test Its Walls. Interesting Engineering, January 12. Available at: https://interestingengineering.com/crashed-jet-nuclear-reactor-test. Accessed 18 Jan 2021.
Bradley, R. 2017. Decision Theory with a Human Face. Cambridge, UK: Cambridge University Press.
Briggs, R.A. 2019. Normative Theories of Rational Choice: Expected Utility. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/rationality-normative-utility/. Accessed 18 Jan 2021.
Brombacher, M. 1999. The Precautionary Principle Threatens to Replace Science. Pollution Engineering (Summer): 32–34.
Bykvist, K. 2017. Moral Uncertainty. Philosophy Compass 12: e12408.
Chisholm, R. 1977. Theory of Knowledge, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
Earman, J. 1992. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.
Elliott, K.C. 2017. A Tapestry of Values: An Introduction to Values in Science. New York, NY: Oxford University Press.
Environmental Protection Agency. 2019. Mortality Risk Valuation. Available at: https://www.epa.gov/environmental-economics/mortality-risk-valuation. Accessed 18 Jan 2021.
Evans, J.R. 1984. Sensitivity Analysis in Decision Theory. Decision Sciences 15 (2): 239–247.
Gehrlein, W.V., and D. Lepelley. 2017. Probabilities of Voting Paradoxes. In Elections, Voting Rules and Paradoxical Outcomes, Studies in Choice and Welfare, 27–57. Cham, Switzerland: Springer.
Giddens, A. 2013. The Politics of Climate Change. Cambridge, UK: Polity.
Hacking, I. 1965. The Logic of Statistical Inference. Cambridge, UK: Cambridge University Press.
Hájek, A. 2019. Interpretations of Probability. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/probability-interpret/. Accessed 19 Jan 2021.
Hill Jr., T.E. 1992. Dignity and Practical Reason in Kant's Moral Theory. Ithaca, NY: Cornell University Press.
Howson, C., and P. Urbach. 1989. Scientific Reasoning: A Bayesian Approach. New York, NY: Open Court.
Huber, F. 2019. Confirmation and Induction. Internet Encyclopedia of Philosophy. Available at: https://www.iep.utm.edu/conf-ind/. Accessed 19 Jan 2021.
Hulme, M. 2009. Why We Disagree about Climate Change. Cambridge, UK: Cambridge University Press.
Intergovernmental Panel on Climate Change. 2013. Climate Change 2013: The Physical Science Basis. Cambridge, UK: Cambridge University Press.


Intergovernmental Panel on Climate Change. 2014. Climate Change 2014: Mitigation of Climate Change. Cambridge, UK: Cambridge University Press.
Jasanoff, S. 1990. The Fifth Branch: Science Advisors as Policy Makers. Cambridge, MA: Harvard University Press.
Kahneman, D. 2011. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux.
Kant, I. 1981 [1785]. Groundwork for the Metaphysics of Morals, trans. J.W. Ellington. Indianapolis, IN: Hackett.
Kaplan, F. 2019. Bring Back Paper Ballots. Slate, July 26. Available at: https://slate.com/news-and-politics/2019/07/elections-hacking-russia-senate-intelligence-committee.html. Accessed 19 Jan 2021.
Keynes, J.M. 1923. A Tract on Monetary Reform. London, UK: Macmillan.
Koplin, J.J., and D. Wilkinson. 2019. Moral Uncertainty and the Farming of Human-Pig Chimeras. Journal of Medical Ethics 45 (7): 440–446.
Michaels, D. 2008. Doubt Is Their Product: How Industry's Assault on Science Threatens Your Health. New York, NY: Oxford University Press.
Moore, D.S., G.P. McCabe, and B.A. Craig. 2016. Introduction to the Practice of Statistics, 9th ed. New York, NY: W.H. Freeman.
Munthe, C. 2011. The Price of Precaution and the Ethics of Risk. Dordrecht, Netherlands: Springer.
National Research Council. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press.
Pacuit, E. 2019. Voting Methods. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/voting-methods/#CondPara. Accessed 19 Jan 2021.
Peterson, M. 2017. An Introduction to Decision Theory, 2nd ed. Cambridge, UK: Cambridge University Press.
Phillips, T. 2008. The Day the World Didn't End. NASA Science, October 10. Available at: https://science.nasa.gov/science-news/science-at-nasa/2008/10oct_lhc. Accessed 19 Jan 2021.
Pielke, R. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge, UK: Cambridge University Press.
Popper, K. 1959. The Propensity Interpretation of Probability. British Journal for the Philosophy of Science 10: 25–42.
Ramsey, F.P. 1926. Truth and Probability. In Foundations of Mathematics and Other Essays, ed. R.B. Braithwaite, 156–198. London, UK: Kegan, Paul, Trench, Trubner, & Company.
Reichenbach, H. 1949. The Theory of Probability. Berkeley, CA: University of California Press.
Resnik, M.D. 1987. Choices: An Introduction to Decision Theory. Minneapolis, MN: University of Minnesota Press.
Resnik, D.B. 2009. Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York, NY: Oxford University Press.
Resnik, D.B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
Resnik, D.B., and K.C. Elliott. 2016. The Ethical Challenges of Socially Responsible Science. Accountability in Research 23 (1): 31–46.
Sagoff, M. 2004. Price, Principle, and the Environment. Cambridge, UK: Cambridge University Press.
Samuelson, P.A., and W.D. Nordhaus. 2009. Economics, 19th ed. New York, NY: McGraw-Hill.
Sen, A. 1970. Collective Choice and Social Welfare. San Francisco, CA: Holden-Day.
Shrader-Frechette, K.S. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley, CA: University of California Press.
Skyrms, B. 1986. Choice and Chance: An Introduction to Inductive Logic, 3rd ed. Belmont, CA: Wadsworth.
Steel, D. 2015. Philosophy and the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Steele, K. 2006. The Precautionary Principle: A New Approach to Public Decision-Making? Law, Probability and Risk 5 (1): 19–31.


Sunstein, C.R. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Tarsney, C. 2018. Moral Uncertainty for Deontologists. Ethical Theory and Moral Practice 21: 505–520.
Thaler, R. 2016. Misbehaving: The Making of Behavioral Economics. New York, NY: W.W. Norton.
Timmons, M. 2012. Moral Theory: An Introduction, 2nd ed. Lanham, MD: Rowman and Littlefield.
Tversky, A., and D. Kahneman. 1974. Judgment Under Uncertainty: Heuristics and Biases. Science 185 (4157): 1124–1131.
Von Neumann, J., and O. Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Voter Fraud Facts. 2019. Voter Fraud Statistics. Available at: http://voterfraudfacts.com/. Accessed 20 Jan 2021.

Chapter 3

Precautionary Reasoning and Moral Theory

In the previous chapter, we explored decision-theoretic approaches to precautionary reasoning and found them wanting. While decision theory offers important insights into making individual and group choices involving risks and benefits, it does not provide us with adequate guidance for precautionary reasoning, because it lacks moral content. Decision theory can tell us how to make decisions, given our values, but it cannot tell us what those values should be or how to rank them. For insight into values, we need to look beyond decision theory toward moral theory. In this chapter we will consider how some prominent moral theories deal with issues pertaining to benefits (i.e. goods) and risks (i.e. potential harms). In examining these theories, we will pose two questions. The first question is: "What types of things have intrinsic moral value, according to the theory?" Something has intrinsic value if it has value for its own sake and not merely as a means to something else (Timmons 2012). For example, one might argue that happiness is intrinsically valuable (or good) because we should seek it for its own sake, but that the value of money is only extrinsic because we should value it only for what it can help us obtain, such as food, shelter, education, health care, and so on. The second question is: "Does the theory impose any moral constraints on how we should pursue things that have intrinsic moral value?" This is also an important question to consider, because one might argue that there are moral constraints on the means we should use to pursue things that have value. For example, one might argue that we should not violate basic human rights to benefit society. Although our inquiry will focus on what the theories have to say about intrinsic moral values, it will become clear during the discussion that these theories hold that many things can have a great deal of moral value even if we do not regard their value as intrinsic. For example, health is a very important value because it helps one achieve happiness, opportunities, education, social relationships, wealth, and other things that one might value (Daniels 1984). Economic development is an important value because it can help societies achieve social welfare, public health, public education, national security, and other important social goods.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021
D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_3


3.1 What Are Moral Theories?

Before examining specific moral theories, it will be useful to say a few words about moral theories in general. A moral theory is a set of statements that provides us with guidance for action and helps us to systematize the judgments of right and wrong, good and bad, and just or unjust that we make about particular cases or situations (Timmons 2012). For example, if we judge that it is wrong to kill people in many different situations (e.g. out of anger or jealousy, or for money, revenge, or sport) but not in some other situations (e.g. in defense of one's self or others), then we could develop a theory that tells us when it is and is not wrong to kill. Moral theories, unlike scientific ones, are normative and prescriptive: that is, they tell us what we ought to do and what we ought to value or desire.1 Scientific theories, by contrast, are descriptive and explanatory (Timmons 2012). For example, a scientific theory of human behavior could tell us that most people cheat on their taxes, but a moral theory could tell us whether people should or should not cheat on their taxes. A scientific theory could tell us that most people are motivated by greed and vanity, but a moral theory would tell us whether one should be motivated by greed or vanity. Because moral theories are normative and prescriptive, we cannot test them in the same way that we can test scientific theories, e.g. by means of empirical observations or experiments. We can, however, test moral theories by determining how good they are at helping us understand our moral judgments, i.e. our moral experience.2 If a moral theory implies that we should do something that we would view as immoral (such as killing a person for their organs), then we should modify the theory so that it does not have this implication, or we should reject it. Moral theories must account for the "data" of our moral experience (Timmons 2012). However, since our moral judgments are susceptible to bias, we may also need to revise them based on the implications of our moral principles. For example, the enslavement of Africans was widely regarded as morally acceptable in the US prior to the Civil War. US citizens began to reject this moral judgment after realizing that human rights apply to all people, regardless of race or color. The process of seeking coherence between our moral theories and moral judgments is known as the method of reflective equilibrium (Rawls 1971; Daniels 1996).3 In this chapter, I will use the method of reflective equilibrium when I discuss counterexamples to moral theories.

1 A note about moral language. To say that something is morally required or obligatory is to say that we ought to do it, or that not doing it is not permitted. To say that something is morally forbidden is to say that we ought not do it, or that it is not morally permitted. If something is morally permitted and not required or forbidden, then it is morally neutral (Timmons 2012).
2 "Reflective moral judgments" are moral judgments that are free from bias or prejudice (Rawls 1971).
3 Reflective equilibrium is a state of affairs in which our moral theories and moral judgments are in complete coherence. It is the outcome of the method of reflective equilibrium (Rawls 1971).


Philosophers and theologians have developed many different moral theories. We will only discuss some of the most influential ones here.4 A useful way of classifying theories is to distinguish between teleological and deontological theories. Teleological theories, such as utilitarianism (discussed below), hold that our actions are right or wrong insofar as they promote morally worthwhile ends or goals; deontological theories (such as Kantianism, discussed below), by contrast, hold that actions are right or wrong as a matter of principle, irrespective of our ends or goals.

3.2 Utilitarianism

The first theory we will consider is utilitarianism, because it ties in neatly with several of the decision-theoretic approaches discussed in the last chapter, such as expected utility theory. The basic idea behind utilitarianism is that the morally right thing to do is to bring about the greatest balance of good over bad consequences (or outcomes) for all people in society. Early utilitarians, such as John Stuart Mill (1806–1873), defined good and bad in terms of happiness and unhappiness. According to Mill: "The creed which accepts as the foundation of morals 'utility' or 'the greatest happiness principle' holds that actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness" (Mill 1979: 7). Utilitarianism has generated considerable interest among philosophers and policymakers since it was developed in the 1800s because it is a simple and straightforward theory that can be applied to a variety of personal and social decisions, including public and environmental health policymaking choices. In addition, because utilitarianism focuses on maximizing desirable outcomes, it naturally lends itself to quantitative approaches to decision-making. For example, the FDA could decide whether to approve a new drug based on its assessment of how approving the drug will impact the overall health of the population. The EPA could follow a similar pattern of reasoning when deciding whether to strengthen an air quality standard. In a public health emergency (such as a natural disaster or pandemic), when health care resources (such as hospital beds, medical staff, and medications) are in short supply and there are many people in need of health care, health care providers often distribute resources according to a principle known as triage: the patients who can benefit most from immediate treatment are treated first, followed by patients in less immediate need and patients who are likely to die soon even if they receive treatment. Triage maximizes the overall medical benefits of limited resources (Wagner and Dahnke 2015).5

4 I will not discuss the divine command theory in this chapter because it makes controversial theological and religious assumptions. Also, many of the moral values espoused by divine command theories, such as respect for human life and respect for nature, are endorsed by secular moral theories. For further discussion of moral theories, see Pojman (2005), Timmons (2012).
5 I will discuss triage in public health emergencies in more depth in Chap. 9.


Despite its influence and usefulness, utilitarianism faces several problems that call into question its appropriateness as an overall approach to making reasonable decisions concerning benefits, risks, and precautions. One of the main problems with utilitarianism is how to define utility or good consequences. Though early utilitarians defined the good in terms of happiness, modern utilitarians have rejected this approach because there is little agreement about what constitutes happiness (Timmons 2012). Modern utilitarians equate the good with satisfaction of preferences, fulfillment of interests, or attainment of welfare (Singer 1979; Brandt 1998; Hooker 2000). Other desirable outcomes, such as individual health, public health, income, wealth, and economic development, are valuable insofar as they promote preferences, interests, or welfare. However, it may not be possible to define many of the things we view as good, such as human life, social justice, or natural wilderness, in terms of a common metric, such as satisfaction of preferences or fulfillment of interests. It may be the case that the things we view as intrinsically morally valuable cannot be easily compared or exchanged for each other; they are, so to speak, incommensurable. Accordingly, some utilitarians, such as Brink (1989), adopt a pluralistic view of the good and hold that there are many different things that we should try to maximize. However, accepting some form of pluralism requires us to deal with conflicts among intrinsic values, which undercuts one of the strengths of utilitarianism by making the theory more difficult to apply. A second problem with utilitarianism, which was mentioned in Chap. 2, is that it assumes that utilities can be interpersonally compared. Suppose, for example, that we define the good in terms of satisfaction of preferences. If I am thinking about giving away a piano I no longer use, and only three people, Jane, June, and Jake, want it, then, according to utilitarianism, I should give it to the person who will get the most satisfaction from it. But how can we make this comparison? One might argue that there is no way to decide who gets more satisfaction from the piano because satisfaction refers to subjective mental states (Hausman 1995). We run into different types of problems if we try to define utility in terms of something more objective, such as interests. While we may all share some common, measurable interests, such as interests in life, health, and shelter, we may also have interests that are idiosyncratic, such as interests in playing the ukulele or reading Batman comic books, or common interests that are difficult to define, such as interests in freedom or dignity. It is worth noting, however, that the problem of interpersonal comparisons of utility is not unique to utilitarianism, because other theories encounter this problem when dealing with issues related to the fair distribution of social goods or resources (Hausman 1995). For example, even if I am not a utilitarian, I might still want to make comparisons between Jane, June, and Jake when deciding to whom to give my piano. Unless I am planning to use a lottery to award the piano, I need to find some way of comparing these individuals in terms of a common metric. Some writers have argued, for example, that we can make interpersonal comparisons of income, welfare, wealth, or health (Sen 1970; Rawls 1971; Daniels 2008). However, these approaches are controversial as well.


Table 3.1 Distributions of wealth in three societies

             a    b    c    d    e    f    g    h   Total
Society A  100   10    5    1    1    1    1    1    120
Society B   30   20   15   13   10   10    7    5    110
Society C    4    4    4    4    4    4    4    4     32

A third problem with utilitarianism is that it fails to give adequate respect to the rights and wellbeing of individuals (Timmons 2012). For example, suppose that five people at a hospital each need an organ or they will die from chronic illnesses. A homeless person comes into the emergency department for treatment for a stab wound. He is unconscious but has five healthy organs (two kidneys, a heart, and two lungs) that happen to be a perfect match for those who need organs. According to Mill's greatest happiness principle, the doctors should kill this person to use his organs to save five lives, since this would maximize overall utility. Most people would agree that killing this person to save five lives would be morally abhorrent, and that we should abandon any theory that implies we should do so. A fourth problem with utilitarianism is that it does not provide us with an adequate account of distributive justice (Timmons 2012). Suppose we are considering which of three societies is more just (Table 3.1). Society A has the most total wealth but the largest disparities in the distribution of wealth. Society C distributes wealth equally but has less total wealth than A or B. Society B has a more even distribution than A but less total wealth. Utilitarians would say that Society A is more just than B, but many people would hold that B is more just than A, since socioeconomic goods are distributed more equitably in B than in A (Rawls 1971). Utilitarians have responded to the third and fourth critiques by distinguishing between two forms of utilitarianism: act-utilitarianism (AU) and rule-utilitarianism (RU). AU holds that when we are faced with a moral decision we should use the principle of utility to evaluate each alternative and then choose the alternative (or action) that maximizes overall utility. RU holds that when we are faced with a moral decision, we should follow rules that maximize overall utility. In RU, one applies the principle of utility to rules, not actions (Hooker 2000). Modern utilitarians, such as Brandt (1998) and Hooker (2000), hold that morality consists of a system of rules that work together to promote overall utility. Rule-utilitarians argue that their theory does not have the morally objectionable implications of AU. For example, rule-utilitarians can argue that their theory does not imply that a doctor should kill one person to save five, because a rule that required doctors to act this way would not promote overall utility, since it would have negative impacts on the doctor-patient relationship, the health care system, and our overall respect for human life. These negative consequences of adopting the rule would outweigh the good consequences of saving lives. Rule-utilitarians can also argue that there are problems with inequitable distributions of socioeconomic goods, such as social unrest and class resentment, that must be considered in assessing social rules. Rule-utilitarians could argue that it is unlikely that Society A would have


more total utility than Society B, given the negative consequences of socioeconomic inequality. While RU may be a defensible moral theory, its emphasis on following moral rules strays far from the spirit of utilitarianism, which focuses on producing good consequences. Also, RU’s support of individual rights and distributive justice may be a mile wide but an inch deep, since it depends on what sorts of rules produce the most net social good, which could vary according to social, political, and economic conditions. Social acceptance of restrictions on rights might be much higher in a collectivistic country, such as China, than in an individualistic one, such as the US. Turning to the question of how utilitarians think about benefits and risks, the answer to this question depends on what type of utilitarianism one has in mind, since different versions of this theory provide different accounts of benefits and risks. As we have seen, utilitarians have defined intrinsic goods in terms of happiness, satisfaction of preferences, or promotion of interests. Other benefits could include outcomes that promote these benefits, such as life, health, education, income, opportunities, or wealth. Risks could be defined as outcomes that are contrary to those values, such as unhappiness, death, sickness, poverty, and so on. Additionally, some versions of utilitarianism hold that there are many different intrinsic values that could be used to define benefits and risks. Concerning the question of whether there are moral constraints on the pursuit of benefits/risks, act-utilitarians would say that there are none. Act-utilitarians hold that we should make choices with an eye only toward maximizing overall benefits and minimizing overall risks. For act-utilitarians, the ends justify the means. Actutilitarians would recommend that we use expected utility theory as an approach to moral decision-making, if we have probability estimates to calculate expected utilities. 
If we do not have these estimates, they would recommend that we follow other maximizing rules, such as the principle of indifference or the minimax regret rule. Because AU focuses on maximizing overall good consequences, it would not recommend that we use maximin for decision-making.

Rule-utilitarians, however, would take a very different approach to moral decision-making concerning benefits and risks, since they would place some constraints on the means used to pursue benefits and manage risks, such as protections for individual rights and wellbeing and considerations related to distributive justice. Rule-utilitarians, for example, could argue that socioeconomically disadvantaged populations should be protected against undue risks. The exact nature of these constraints would depend on how the rules connect to overall social utility (Hooker 2000).
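The contrast between the act-utilitarian's reliance on expected utility and the maximin rule can be made concrete with a small numerical sketch (the acts, states, utilities, and probabilities below are invented purely for illustration):

```latex
% Two hypothetical acts A_1, A_2 under two equally likely states s_1, s_2:
%         s_1 (p = 0.5)   s_2 (p = 0.5)   worst case
%   A_1        10               0               0
%   A_2         4               4               4
\begin{align*}
EU(A_1) &= (0.5)(10) + (0.5)(0) = 5\\
EU(A_2) &= (0.5)(4) + (0.5)(4) = 4
\end{align*}
```

Expected utility maximization selects $A_1$ (since $5 > 4$), whereas maximin selects $A_2$, because its worst case (4) is better than $A_1$'s (0). Since act-utilitarianism tells us to maximize overall expected good, it favors $A_1$ here, which is why it would not recommend maximin.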

3.3 Kantianism The German philosopher Immanuel Kant (1724–1804) developed an approach to ethical theory that has had considerable influence in moral philosophy and public policy. Even theorists who do not consider themselves Kantians have incorporated some of his ideas into their work (Timmons 2012). As a deontologist, Kant held that morality consists in following categorical rules (or maxims) that are consistent with a general principle known as the categorical imperative (CI). Categorical imperatives differ from hypothetical ones because they are not directed at any particular goal (Kant 1981). For example, the imperative, "If you want people to trust you, you should not tell lies," is hypothetical because it is directed toward the goal of being trusted by others. The imperative, "Don't lie," however, is categorical because it is not directed at any particular goal. Utilitarian imperatives, including utilitarian rules, are all hypothetical because they are directed toward the goal of maximizing overall utility.

Kant stated several versions of the CI, which he held are equivalent. According to the universal law version, one should "act only on that maxim whereby you can at the same time will that it should become a universal law" (Kant 1981: 30). The key insight behind this version of the CI is that moral rules should be universalizable; that is, they should apply to everyone, regardless of their circumstances. A rule like "I will lie in order to avoid embarrassment" is not universalizable: if everyone followed it, truth-telling would break down because we could not trust each other to tell the truth. A rule like "Do not lie" is universalizable and can therefore serve as a moral rule.

According to the respect for humanity version of the CI, one should "act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means" (Kant 1981: 36). The key insight behind this version of the CI is that we should always treat people (including ourselves) as having intrinsic moral value (Hill 1992).6 We should respect each person's autonomy and dignity and should not lie to or manipulate people to obtain our goals.
For example, we should not kill one person in order to save five lives, because this would be treating that person as a mere means to obtaining a goal. Kant distinguished between persons (or rational agents) and things, and held that while things have a price, persons do not. It is therefore immoral, according to Kant, to treat people as if they have a price (Hill 1992).

Kant's proof of the CI makes use of a concept that he called the Kingdom of Ends (Korsgaard 1996). The Kingdom of Ends is a hypothetical (imagined) situation in which rational agents (persons) are deciding upon the rules for society. It is, so to speak, a philosophical thought experiment. The rational agents in the Kingdom of Ends have autonomy of will, which means that they are capable of formulating and following moral rules (Kant 1981). Kant argues that members of the Kingdom of Ends would agree to follow the Categorical Imperative. This hypothetical agreement among rational agents is similar to the idea of a social contract found in the work of Thomas Hobbes (1588–1679), John Locke (1632–1704), Jean-Jacques Rousseau (1712–1778), John Rawls (1921–2002), and Robert Nozick (1938–2002) (Korsgaard 1996).

6 See Hill (1992) for a discussion of the relationship between the universal law version and the respect for humanity version of the CI.

Kantianism, like utilitarianism, faces several problems that call into question its appropriateness as an overall approach to making reasonable decisions concerning benefits, risks, and precautions. One of the main objections to Kant's theory is that it sets forth an absolutist morality that yields counterintuitive results; that is, it is inconsistent with our moral experience. For example, Kant held that it is always wrong to lie, because lying, as noted earlier, is not universalizable, and because lying treats people as mere things. But what about lying to save human life? Consider the following counterexample to Kant's theory. Suppose that we are living in Poland during World War II and we are harboring Jewish people in our attic. A police officer from Nazi Germany comes to our door and asks us if we have any Jews hiding in our house. Should we tell the truth and say that we do, or should we tell a lie? Most people would hold that we should lie to the police officer in this situation to save human lives. Kant discusses this sort of example and claims that we should not lie to a would-be murderer at our door (Varden 2010). Some Kantians have argued that one can avoid this counterintuitive implication by formulating a maxim to cover this situation that would be consistent with the CI, such as, "I will lie to a would-be murderer to save human life." However, this philosophical move goes against the spirit of Kantianism by making rules situational, and it leads Kantians down a path toward making numerous exceptions to categorical rules to accommodate important ends or goals.

Another objection to Kant's view is that it does not provide us with much guidance on how to resolve conflicts of duties (Timmons 2012). Kant (1981) distinguished between perfect and imperfect duties. A perfect duty is a duty that we should always obey, regardless of our circumstances, because the negation of the duty is not universalizable.
Kant held, for example, that we have perfect duties not to lie, break promises, or harm others. An imperfect duty is a duty that we are obligated to obey, but need not follow all the time, because the negation of the duty could be universalized, even though rational agents would not will that it become a universal law. Kant held, for example, that we have an imperfect duty to help others. This duty is imperfect because its negation, "I will not help others," is universalizable, since a society could still exist even if no one helped anyone else.7 However, rational agents would not want to live in that society, because they know they would sometimes need help, so they would not will the maxim. In other words, the maxim is universalizable but not willable.

Kant held that perfect duties always trump imperfect ones when these types of duties conflict. For example, we should not lie to someone to help another person. However, when imperfect duties conflict, we must decide what to do based on our other obligations and commitments. For example, suppose I have $25 left over from my paycheck after paying my bills and I am trying to decide whether to give it to a soup kitchen or to a beggar on the street. I have an imperfect duty to help the soup kitchen and an imperfect duty to help the beggar. What should I do? Kant does not have a ready answer to this question. Utilitarianism, however, does. For example, utilitarians could say that we should give our money to the soup kitchen because it is likely to help more people.

7 This is a questionable assumption!

Turning to the two questions about moral theories that I posed at the beginning of this chapter, it should be clear from our discussion that persons have intrinsic moral value for Kantians, because we have a duty to treat persons as ends and not as mere things (Hill 1992). Persons, however, are valuable because they can have a good will. A person has a good will (or good intentions) if they do their moral duty for the sake of duty and for no other reason or motivation. Things other than a good will that we might view as good, such as happiness, health, or virtue, are not intrinsically good because they can be used for immoral purposes. For example, Kant (1981) held that happiness, health, and virtue in an evil man are not good, because they enable him to do bad things. Also, if a person has a good will but achieves a bad result,8 Kant would still say that they acted morally.

Concerning the question about moral constraints on pursuing things that have value, Kant held that categorical imperatives impose numerous constraints on how we may pursue things that have moral value. For example, it is not morally acceptable to kill one person in order to save the life of another, nor is it acceptable to break a promise to one person in order to help another. It would also not be acceptable, according to Kant, to treat people as having a monetary value (or price) for the purposes of allocating resources or making other social decisions.

Concerning the balancing of benefits and risks, it is not clear how Kantians would address this issue. While Kantians can say that we have a perfect duty not to intentionally, knowingly, or recklessly harm other people, and that we have an imperfect duty to help others, they have little to say about taking risks that have a chance of benefitting some and harming others.
Consider a regulatory agency's decision concerning a new lung cancer drug that has been shown to extend life by one year in 50% of test subjects, to shorten life by six months in 25% of test subjects, and to neither shorten nor extend life in the other 25%. The drug also has various side-effects, ranging from nausea and fatigue to liver and kidney toxicity and cardiac arrhythmia, and it is very expensive ($250,000 per year). To apply Kantian theory to this decision, one would need to formulate different maxims that could be used by agency officials, such as "Approve a new drug if it is likely to help twice as many people as it harms," "Approve a new drug if it is likely to help more people than it harms," and so on. While these maxims might be universalizable, it is not clear whether a rational agent would adopt any of them. The issue of how to balance benefits and risks would then boil down to what type of rule a rational agent would adopt for making these decisions, but this does not get us very far unless we have some prior account of what a rational agent values and how he or she would address value conflicts.
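For contrast, an act-utilitarian approaching the same drug decision could simply compute an expected value. As a rough sketch using only the survival figures from the example (and setting aside side-effects, cost, and distributional concerns):

```latex
% Expected change in lifespan per patient, using the percentages above
\mathbb{E}[\Delta\,\text{lifespan}] = (0.50)(+1\ \text{yr}) + (0.25)(-0.5\ \text{yr}) + (0.25)(0\ \text{yr}) = +0.375\ \text{yr}
```

On this crude measure the drug yields a positive expected benefit per patient, so an act-utilitarian would lean toward approval. The Kantian worry is precisely that such a calculation treats the 25% of patients made worse off as a means to the aggregate gain.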

8 For example, you stop to help a motorist fix a flat tire, but he later goes on to rob a bank.

3.4 Virtue Ethics Virtue ethics differs from utilitarianism and Kantianism insofar as it focuses on developing moral virtue and living a good life rather than following moral principles or rules (Timmons 2012). For the virtue ethicist, the most important moral question is not "what should I do?" but "what kind of a person should I be?" Moral virtues, according to the virtue ethicist, are intrinsically good traits of character. Virtue ethics is a teleological theory because our actions should be directed toward accomplishing the goal of developing virtue (Timmons 2012). We acquire virtues through imitation and practice. For example, one becomes honest by following someone else's example of how to behave and by repeatedly telling the truth. Other virtues include integrity, courage, benevolence, humility, kindness, loyalty, compassion, perseverance, patience, wisdom, and justice.

Virtue ethics traces its history to ancient Greece. Plato (427–347 BCE) and his student, Aristotle (384–322 BCE), wrote about the nature of moral virtue.9 Plato's work appears in the form of dialogues, many of which involve conversations between his teacher Socrates (470–399 BCE) and interlocutors about the nature of virtue and whether it can be taught (Frede 2017). In the Republic, Socrates provides an account of the virtue of justice by comparing justice in the state to justice in a man. Socrates argues that there are three types of citizens in the state: the ruling class, the soldier class, and the producing class (i.e., craftsmen, farmers, merchants). Justice in the state is achieved when the ruling class rules with wisdom, the soldier class protects the state with courage, and the producing class produces and consumes goods with temperance. Plato argued that the human soul has three parts that are analogous to these classes: the rational part, the spirited or emotional part, and the appetitive part.
Justice in the human soul is achieved when the rational part rules with wisdom and the other parts practice their respective virtues (Plato 1974). Plato argued that justice in the soul is intrinsically valuable, because a person who lacks justice will be destroyed by his appetites or emotions.

Because Plato's writing took the form of dialogues, his views are difficult to pin down. Aristotle, however, wrote clearly argued treatises. In one of these, the Nicomachean Ethics, Aristotle articulated an influential approach to virtue ethics (Aristotle 1985). Like Plato, Aristotle believed that rational thought was essential to virtue. Aristotle defended his approach to virtue by drawing an analogy between goodness in a man and goodness in other things. A thing is good if it performs its function well. For example, a good piano player plays the piano well, a good roof keeps the rain out well, and so on. A good human being is a person who performs uniquely human functions well. Human beings share many functions with plants and animals, such as eating, growing, reproducing, and moving. However, reasoning is unique to human beings. Thus, a good (or virtuous) person is one who is good at acting in accordance with reason.

9 The influential ancient Chinese philosopher Confucius (551–479 BCE) also wrote about virtue (Hursthouse 2016).

Virtues are character traits, acquired through imitation and practice, that fall between two extreme forms of behavior. For example, too little courage is cowardice, which is a vice, but too much courage is rashness, which is also a vice. Proper courage lies somewhere in between cowardice and rashness. Aristotle discussed a variety of virtues other than courage, including honesty, temperance, and pride. Aristotle argued that practical wisdom was a key virtue because we can use it to decide how to act in situations that may involve moral uncertainty (Aristotle 1985).10

Although most philosophers who developed moral theories after the ancient period acknowledged the importance of virtue, there was little interest in virtue ethics until it was revived by Philippa Foot (1978), Alasdair MacIntyre (1984), and other philosophers in the late twentieth century (Timmons 2012; Pojman 2005; Hursthouse 2016). Part of the appeal of virtue ethics is that it provides guidance on how one ought to live one's life and the kind of person one ought to be. However, like the previous two theories we have discussed, virtue ethics also faces several problems that call into question its appropriateness as an overall approach to making reasonable decisions concerning benefits, risks, and precautions. One of the main objections to virtue ethics is that it does not provide us with a workable procedure for making moral choices when virtues lead us in different directions (Timmons 2012). Consider, for example, the situation (described above) in which a German police officer asks us if we are hiding Jews in our attic. If we follow the virtue of honesty, we should tell the police officer that we are hiding Jewish people in our attic; if, however, we follow the virtue of benevolence, we should not.
Similarly, to make drug approval decisions like the one described earlier, we must consider the risks and benefits of different options, but virtue ethics provides us with little insight into how we should balance benefits and risks, other than to advise us to do so wisely, carefully, and honestly. Virtue ethicists have attempted to deal with the problem of moral decision-making by arguing that we can use practical wisdom to decide what to do (Hursthouse 2016). Since we develop practical wisdom by following the examples of people who have this virtue, we can consider how they would act under the circumstances that create the moral conflict. However, this suggestion simply pushes the problem back a bit further, since it presupposes that we know whose example we should follow in our circumstances. Should we follow the example of King Solomon, Abraham Lincoln, Martin Luther King Jr., or Mother Teresa? Different people who have practical wisdom might act differently, and we might disagree about whom to follow.

Turning to the two questions about moral theories that I posed at the beginning of this chapter, it should be clear from our discussion that virtue theorists hold that virtues have intrinsic moral value. However, it is not clear what sorts of constraints virtue ethicists would place on the pursuit of virtue, other than to say that we should be virtuous in our pursuit of virtue. For example, we should not be dishonest in our pursuit of benevolence, nor should we be unkind in our pursuit of honesty. Also, as we saw earlier, virtue ethics offers us little insight into how we should balance benefits and risks.

10 See discussion of moral uncertainty in Chap. 2.

3.5 Natural Law Natural law theorists argue that morality (or the moral law) should be based on human nature (or the natural law). Natural law theory has its origins in the work of Aristotle (1985), who argued that virtue is a natural human function. Natural law theorists assert that some things are naturally good or bad and that we have moral obligations to promote good things and prevent bad ones. Thus, natural law is a teleological theory because it holds that morality is directed toward promoting natural goods or avoiding natural evils.

The Italian priest and theologian St. Thomas Aquinas (1225–1274) developed an approach to ethics that has had considerable influence on philosophical and theological discussions of natural law theory. Aquinas held that there are four basic, natural human goods: life, procreation, knowledge, and social relationships (Aquinas 1988). Other goods, for example, food, shelter, happiness, and health, are worth pursuing insofar as they help us to obtain the four basic goods. Things that are contrary to these natural goods, such as death, disease, ignorance, famine, and suffering, are evil. Our duties derive from these natural goods. We have a duty not to murder, for example, because human life is naturally good, and anything that destroys it is evil. We have a duty not to lie, cheat, or steal because these actions undermine social relationships, which are a natural good (Timmons 2012).

Aquinas' approach to natural law is theological, because he held that God is the source of natural law. Understandably, Aquinas' view has had considerable influence over Christian ethics, especially Catholic ethics (Timmons 2012). However, moral philosophers known as moral naturalists have developed secular views that have much in common with Aquinas' theory. According to moral naturalists, morality is based on objective facts about the world, such as facts about human biology, psychology, or sociology, i.e., human nature (Sayre-McCord 2015; Lenman 2018).
Sociobiologists, such as E.O. Wilson (2004), argue that morality is an evolved behavior that promotes social cooperation.

Although moral naturalism has become popular in recent years, natural law theory also faces several problems that call into question its appropriateness as an overall approach to making reasonable decisions concerning benefits, risks, and precautions. One of these problems is how to deal with conflicts between fundamental values. Suppose that someone is trying to kill me. Should I kill them in self-defense? On the surface it seems that the natural law approach would say I should not, because human life is naturally good. But my life is also valuable. Aren't I allowed to defend my own life? Natural law theorists have developed a method for resolving conflicts such as these, known as the doctrine of double effect. According to this doctrine, it is morally acceptable to perform an action that has bad effects, provided that: (1) the action itself is not immoral; (2) the bad effect is not intended (but may be foreseen); and (3) the bad effect is proportionate to the good effect (Timmons 2012). Killing in self-defense is morally acceptable because defending one's life is not immoral, the death of the attacker is not intended (but may be foreseen), and the death of the attacker is proportionate to saving one's life. Killing someone who threatens to
punch you in the face would not be morally acceptable because the bad effect (death) would not be proportionate to the good effect (avoiding injury to one's face).

While the doctrine of double effect has some useful applications for moral dilemmas, it also encounters some problems (Billings 2011; Timmons 2012). One of these is determining whether a bad effect is merely foreseen and not intended. For example, if a military pilot drops a bomb on a house that shelters several well-known terrorists and he knows that several innocent civilians (perhaps more) are likely to be killed by the bomb, would we say that he did not intend to kill the civilians? When does a foreseen bad effect become intended? Another problem with the doctrine is determining whether a good effect is proportional to a bad effect. For example, if a regulatory agency decides to approve a new drug because it is likely to save 1000 lives per year, even though 1000 people are likely to die from its side-effects, would we say that the lives saved were proportionate to the lives lost? What ratio of lives saved to lives lost would be proportionate? These are the kinds of choices one must make when making decisions about risks, benefits, and precautions, but it is not clear that the doctrine of double effect offers clear answers to them.

Another objection to natural law theory is that it commits what the British philosopher G.E. Moore (1873–1958) called the naturalistic fallacy (Moore 1903). Over a century before Moore was born, the Scottish philosopher David Hume (1711–1776) argued that we cannot infer normative, ought-statements from descriptive, is-statements. For example, one cannot use the descriptive statement "most people cheat on their taxes" to support the normative statement "most people ought to cheat on their taxes." Cheating on one's taxes is right or wrong irrespective of facts about how many people do it.
Moore expanded on Hume's ideas and argued that we cannot derive moral value-statements (such as "life is good") from factual statements about the world (such as "all organisms have a strong survival instinct") (Timmons 2012; Sayre-McCord 2015). Moore (1903) argued that we cannot derive values from facts because for any naturalistic definition of a value, we can still ask, "is that good?" For example, if we define human life as good, we could still ask, "but is human life good?"

While I do not find Moore's objections to naturalism entirely convincing, because they seem to beg the question against naturalism, I agree with his basic insight that some values transcend human nature. Consider notions of justice and fairness, for example. Suppose that we are allocating scarce medical resources, such as human organs, and we must decide who will live and who will die. The assumption that human life has moral value does not tell us how to decide who shall live and who shall die, since this decision involves notions of justice and fairness, which are complex, abstract ideas that do not seem to have a clear basis in human nature (Miller 2017). Other important ethical concepts, such as human rights, also seem to go beyond what we can learn from studying human nature. While some moral values, such as life, health, friendship, and pain avoidance, are derived from human nature, others, such as justice and human rights, are not.

Looking beyond this specific critique of the natural law approach to ethics, there is a larger issue here concerning whether the concept of naturalness11 provides useful guidance for complex moral issues. Various philosophers and theologians have argued that some types of technological innovations, such as surrogate pregnancy, human cloning, and genetic engineering, are immoral because they are unnatural (Kass 1988, 1997; President's Council on Bioethics 2003). While there is substantial scientific evidence that morality is an evolved trait that has been shaped by our biology (Wilson 2004; de Waal 2009; Greene 2013), the inference from unnatural to immoral is invalid. History provides us with numerous examples of moral claims appealing to unnaturalness that were later rejected. For example, people have argued that women's suffrage, interracial marriage, and homosexuality are immoral because they are unnatural, but we now reject these claims as biased and unfounded. Furthermore, most of our technologies interfere with or change nature in some way. For example, modern medicine has extended the human lifespan beyond its natural length of 40–50 years. Rejecting new technologies because they are unnatural amounts to a moral condemnation of technology itself.12

Turning to the two questions about moral theories that I posed at the beginning of this chapter, it should be clear from our discussion that natural law theorists hold that human life, human relationships, and other things have intrinsic moral value, and that they place some moral constraints on how we pursue those values. For example, one should not intentionally kill another human being to save human life. While natural law theorists could use the doctrine of double effect to balance some benefits and risks, this doctrine has limited applicability for precautionary reasoning because it provides insufficient guidance for determining the proportionality of benefits to risks.

3.6 Natural Rights The natural rights approach to morality asserts that all human beings have some fundamental rights that impose moral constraints on the actions of others and limit the authority of the state (Timmons 2012). Rights protect interests and give individuals control over certain types of decisions. For example, the right to life protects one's interests in life and allows one to make life and death decisions. Rights also imply moral duties. For example, my right to life implies that others have a moral duty not to kill me (Wenar 2015).

Moral rights are different from legal rights. Legal rights are established by state or federal constitutions, statutes, or judicial decisions and are enforced by the state,

11 Since human beings are part of nature, one might question whether the concept of naturalness can be clearly defined: a dam built by human beings is no more "unnatural" than a dam built by beavers.

12 Even Amish communities, which reject modern technologies, still use technologies that were prevalent during the 1800s, such as horse-drawn carriages, hammers, nails, saws, wood stoves, hand-woven clothing, candles, and cooking utensils.

whereas moral rights are justified by moral theories or principles and often are not enforced by the state (Wenar 2015). Very often states also enforce widely recognized moral rights, such as property rights, but legal property rights are distinct from the moral ones.

Rights can be construed negatively or positively (Wenar 2015). A negative right is a right to be left alone. For example, a negative right to life is a right not to be killed. Positive rights are rights to be provided with something. For example, a right to health care would be a right to receive health care from others.

Two of the theories we have examined thus far, rule-utilitarianism and Kantianism, can support moral rights. Rule-utilitarians could argue that rules which grant individuals certain rights promote the overall good of society (Hooker 2000). Mill (1978), for example, was a strong defender of liberty. Mill argued that allowing people to make their own choices (within limits13) generally produces more overall good for society than restricting freedom. But, as noted earlier, utilitarian justifications for rights are tentative because they depend on social, economic, and political conditions that may change. Kantians can provide a stronger justification for rights by arguing that many of the moral rules supported by the CI imply that individuals have rights (Hill 1992). For example, the moral duty not to lie implies that other people have a right not to be lied to, and the moral duty to keep promises implies that other people have a right not to have promises broken.

Natural rights theorists regard rights as foundational. That is, rights are justified without appealing to other moral considerations or principles, such as utility or the CI. The seventeenth-century British philosopher John Locke founded the natural rights approach to moral and political theory.
Locke (1980) argued that God has endowed all human beings with rights to life, liberty,14 and property.15 Modern natural rights theorists, such as Nozick (1974) and Thomson (1992), also treat rights as foundational even though they do not claim that rights come from God. Natural rights theorists, like Kantians, hold that morality is not teleological, because rights have moral justification irrespective of the goals they serve.

Although natural rights theorists are champions of individual freedom and autonomy, they are not anarchists. Locke (1980) and Nozick (1974) appeal to the idea of a social contract to justify the state and circumscribe its authority. According to Locke, in a hypothetical time before the existence of civil society, known as the state of nature, individuals had rights to life, liberty, and property as well as the authority to enforce and protect these rights. However, individuals decided to form governments because they recognized it would be to their mutual advantage to cooperate to protect their rights. Thus, the main function of government, according to Locke, Nozick, and other natural rights theorists, is to protect individual rights (Wenar 2015). Rights, according to this view, are only negative, and do not include positive rights

13 For example, choices that place people unnecessarily at risk may be restricted.

14 Liberties recognized by modern natural rights theorists include freedom of thought, speech, action, movement, association, assembly, and religion.

15 The founding fathers of the US took inspiration from Locke's writings. The US Declaration of Independence and the US Constitution are based on Locke's ideas.

to education, health care, food, or other goods or services.16 Natural rights theory is often equated with libertarianism, because natural rights theorists and libertarians both favor limited government, minimal taxation, and free-market approaches to the economy, and oppose the use of government funds to redistribute wealth (Nozick 1974).

Natural rights theory also faces several problems that call into question its appropriateness as an overall approach to making reasonable decisions concerning benefits, risks, and precautions. One of the main problems with natural rights approaches to morality is adjudicating conflicts between individual rights. According to natural rights theorists, rights should be respected, barring compelling reasons for restricting them. In many situations, we would agree that there are compelling reasons for restricting rights. For example, most people would agree that my right to freedom of speech does not allow me to yell "fire!" in a crowded theater. Other sorts of conflicts, however, can be more difficult to resolve. For example, suppose that I own some farmland and want to start raising pigs on it. My neighbors object to this plan because they are worried that it may threaten their health by increasing their risks of respiratory and gastrointestinal illnesses. This controversy would involve a conflict between my right to use my property and my neighbors' rights not to be exposed to additional health risks. Relevant factors to consider when resolving this conflict could include the magnitude of the risk to my neighbors' health and the history of property ownership. For example, if I had been raising pigs on my farm for decades before my neighbors bought their property, my neighbors would have less of a compelling case for restricting my rights, because they should have known about this risk before buying their property. Natural rights theorists have proposed some strategies for handling conflicts of rights.
One of these is to develop a list of exceptions for each right (Oberdiek 2008). For example, the right to free speech could include exceptions for direct harm to others (such as inciting violence), libel, slander, and violations of intellectual property, trade secrets, or national security. A problem with this approach is that the lists of exceptions could become so long that rights would lose their value as a policy tool (Thomson 1992). Other theorists have developed tests for balancing and prioritizing conflicting rights (Thomson 1992). As we have seen in this chapter, balancing and priority-setting are key issues in applying other moral theories to real-world problems. Turning to the two questions about moral theories that I posed at the beginning of this chapter, it is not clear whether natural rights theorists regard anything as intrinsically valuable except rights. We should honor a person’s right to life, according to the natural rights view, not because life is intrinsically valuable, but because that person has a right to life that we should not interfere with. It should be clear, however, that natural rights theorists would place constraints on the pursuit of things that we value. They would hold that we should not violate fundamental rights in the pursuit of things we regard as having value, barring compelling reasons for doing so.16 However, this type of constraint tells us very little about how to balance risks and benefits reasonably, since one could impose significant risks on others without actually violating their rights.

16 Libertarian political theorists adopt this view.

3.7 John Rawls’ Theory of Justice

American philosopher John Rawls (1921–2002) made many important contributions to moral and political theory, such as the method of reflective equilibrium discussed earlier in this chapter (Rawls 1971). Though most of Rawls’ writings focused on political philosophy, I am including him in this chapter because of his work on distributive justice. Distributive justice has to do with the fairness of distributions of benefits (such as income, wealth, education, and health care) and burdens (such as health risks and grueling work) in society (Rawls 1971). We can think of distributive justice in terms of the outcome of the distribution itself or the procedures used to produce the outcome (Rawls 1971). A procedure that we regard as just could produce an outcome that we view as unjust. For example, if there is overwhelming evidence that a man has committed murder but he is found not guilty because some of the evidence was ruled inadmissible after the police violated legal rules while collecting it, we could still view the trial as just even if we regard the outcome as unjust. Distributive justice is an important concern for precautionary reasoning, because how risks and benefits are distributed in society matters a great deal to many people. For example, in the social choice concerning the location of a solid waste disposal site discussed in Chap. 2, different options are likely to have different effects on risk/benefit distributions, since people living near the site are likely to experience more health risks related to the site than those living further away (Resnik 2012). In selecting a disposal site, it is important to think not only about the risks and benefits of different options for society as a whole but also about how these options are likely to impact people differently.
It is also important to ensure that people who will be living near the site have the opportunity to participate in the social choice and express their views (Shrader-Frechette 2007; Resnik 2012; Resnik et al. 2018). As we saw earlier, one of the main critiques of utilitarianism is that it does not provide us with a satisfactory account of distributive justice. In the three societies example discussed earlier, act-utilitarians would say that Society A is more just than Society B, because it has more overall utility than Society B, even though it has larger disparities in wealth distribution than Society B. However, many people would regard B as more just than A. Rule-utilitarians could claim that it is unlikely that Society A would have more overall utility than Society B, since there are negative consequences of socioeconomic inequalities. However, this assumption is questionable, since it is possible that the negative consequences of socioeconomic inequalities might not be great enough to offset the overall increase in utility. Other approaches to distributive justice also have implications that most people would find morally objectionable. Natural rights theorists hold that vast differences in wealth are morally acceptable, provided that they arise from fair procedures for


generating and transferring income and wealth (Nozick 1974). Natural rights theorists do not care about how wealth is distributed if the distribution arises from procedures that reward people for merit and do not involve theft, fraud, or coercion. On the opposite end of the spectrum, egalitarians, such as Karl Marx (1818–1883), argue that socioeconomic goods should be distributed equally (Marx and Engels 2012; Lamont 2017). Egalitarians would say that Society C is the most just because it distributes wealth equally. However, many people would regard C as unjust because it has less total wealth than A or B. People are poorer in C than in A or B even though there is less socioeconomic inequality in C.

Rawls developed an approach to distributive justice that synthesizes utilitarian, egalitarian, and libertarian insights. Rawls (1971) introduces the idea of the Original Position to justify his theory of justice. The Original Position is a hypothetical time and place in which free citizens (or contractors) are choosing principles that will govern the ethical, legal, economic, social, and educational systems that organize society. The contractors are behind a Veil of Ignorance, which means that they do not know who they are in that society. They do not know whether they are rich or poor, black or white, male or female, and so on. The contractors are trying to decide upon rules for distributing primary goods in society, i.e. things that any person would need to develop their conception of the good life and to participate in society, such as rights, liberties, income, wealth, opportunities, powers of offices, and the social basis of self-respect (Rawls 1971).
Rawls argues that the contractors in the Original Position would choose two principles for distributing primary goods (or principles of justice): (1) rights and liberties are distributed equally (the equality principle); (2) socioeconomic inequalities are justified provided that: (a) there is fair equality of opportunity in society, and (b) socioeconomic differences benefit the least advantaged members of society (the difference principle) (Rawls 1971). The equality principle represents the libertarian strain in Rawls’ thought, while the difference principle can be viewed as a combination of egalitarian and utilitarian ideas. Rawls’ view implies that Society B (discussed above) would be more just than Society A and Society C, because the worst-off members do best in Society B, assuming that rights and liberties are distributed equally in these societies. Society B has differences in wealth, but these differences work to the advantage of the least-advantaged members of society. Rawls (1971) proposes that differences in income and wealth can benefit the least advantaged members of society by providing people with economic incentives for innovation and hard work, which increases economic productivity and growth. Inequalities can increase the socioeconomic pie. Although the slices are not equal, all people get bigger pieces. In societies where income and wealth are distributed equally, such as communist countries like Cuba, people lack these economic incentives, so economic productivity and growth stagnate. One of the main objections to Rawls’ theory is that it is not clear that the contractors would choose his two principles of justice (Lamont 2017). If we view Rawls’ principles of justice from the perspective of decision theory, the Veil of Ignorance ensures that each contractor in the Original Position is making a decision under ignorance. Rawls’ difference principle can be interpreted as a form of maximin reasoning


because it selects the best of the worst outcomes (Rawls 1971). If you don’t know who you are in society, according to Rawls, it makes sense to choose principles that are likely to make you better off than you would be in other societies, because you might turn out to be one of the least-advantaged members of society when the Veil of Ignorance is lifted. However, as we saw in Chap. 2, there are other, less risk-aversive, rules that might apply in this situation. For example, one could assume that it is equally likely that one will be any member of society when the veil is lifted and decide to follow the principle of indifference for distributing socioeconomic goods. The principle of indifference would favor utilitarian distributions because it would instruct you to choose principles that maximize overall social goods. Other rules for making decisions under ignorance, such as the maximax or the minimax regret rule, would imply principles of justice other than the ones Rawls endorses.

Rawls is not the only philosopher who has articulated principles of distributive justice, but he has been the most influential. Norman Daniels (1984, 2008), for example, has applied Rawlsian principles to the distribution of health and health care in society. Other noteworthy distributive justice theorists include Brian Barry, Ronald Dworkin, Catharine MacKinnon, Amartya Sen, and Michael Walzer (Lamont 2017). I will not explore all of these different accounts of distributive justice here.17 I would like to note, however, that an important insight we can glean from Rawlsian theory is that distributive justice should be concerned with the wellbeing of the worst-off or most vulnerable members of society. I will return to this point when I discuss the distribution of benefits and risks in more depth in Chaps. 4 and 5.
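The decision-theoretic contrast between maximin and the principle of indifference can be made concrete with a small numerical sketch. The societies and wealth figures below are hypothetical, chosen only to show how the two rules can pick different distributions:

```python
# Hypothetical wealth levels for the members of three societies.
# (Illustrative numbers only; not taken from the text.)
societies = {
    "A": [1, 5, 20],   # greatest total wealth, greatest inequality
    "B": [4, 6, 12],   # moderate total wealth; its worst-off member fares best
    "C": [3, 3, 3],    # perfectly equal, but the least total wealth
}

# Maximin (Rawls's difference principle, read decision-theoretically):
# pick the society whose worst-off position is best.
maximin_choice = max(societies, key=lambda s: min(societies[s]))

# Principle of indifference: assume you are equally likely to occupy any
# position, so pick the society with the highest average (expected) wealth.
# This favors the utilitarian, total-maximizing distribution.
indifference_choice = max(societies, key=lambda s: sum(societies[s]) / len(societies[s]))

print(maximin_choice)       # B (its minimum, 4, beats A's 1 and C's 3)
print(indifference_choice)  # A (its average, 26/3, is the highest)
```

A maximax chooser, by contrast, would pick A outright, since it contains the single best position (20), which is one way to see how different rules for decisions under ignorance yield different principles of justice.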
Turning to the two questions about moral theories that I posed at the beginning of this chapter, Rawls views primary goods, such as rights, liberties, income, wealth, and opportunities, as intrinsically valuable, and he would place some restrictions on the pursuit of those values. These restrictions would be imposed to ensure that distributions of goods conform to principles of justice. This is an important point to consider in precautionary reasoning, even if one does not subscribe to Rawlsian principles of justice.

17 For further discussion, see Lamont (2017).

3.8 Environmental Ethics

The moral theories we have considered thus far focus on human-centered values, principles, or rights, such as happiness, social relationships, respect for persons, justice, and so on. The environmental ethics movement started in the 1970s as a reaction to anthropocentric attitudes toward environmental problems (Attfield 2003; Cochrane 2019). According to the anthropocentrism that dominated environmental ethics and policy at that time, non-human species and the environment have value only as means to advancing human interests or goals. We should be concerned about environmental problems, such as air and water pollution, deforestation, and climate change, only to the extent that they impact health, economic development, enjoyment of nature, or other interests or goals. Philosophers, theologians, scientists, and others objected to this longstanding view by arguing that to adequately address environmental issues we need to deploy moral concepts, principles, and theories that treat other species, habitats, ecosystems, and the biosphere as having intrinsic value, not merely as things with extrinsic value that we can manipulate for our own ends (Naess 1986; Leopold 1989; Rolston 1994; Varner 1998; Singer 2009).18 Others have responded to this attack on anthropocentrism, however, by arguing that we can incorporate environmental values into traditional moral theories without abandoning anthropocentrism (Passmore 1980; Hill Jr. 1983; Norton 1987; O’Neill 1997; Gewirth 2001). For example, utilitarians could argue that policies that protect the environment and non-human species can be justified because they promote the overall good of society. Kantians could argue that the CI supports an imperfect duty to respect nature. Virtue theorists could argue that part of what it means to be a good person is to treat all living things with respect, including other species. Regardless of the stance that one takes on anthropocentrism, one must still confront difficult questions pertaining to how much priority we should give to environmental values when they conflict with human-centered ones (Attfield 2003; Resnik 2012). For example, suppose that a community is deciding whether to build a dam on a river to provide a source of drinking water for its growing population. Damming the river can help to promote human health and wellbeing but is likely to disrupt habitats and ecosystems and could threaten some species. While anthropocentrists would tend to resolve this dilemma in favor of human values and interests, they would still need to seriously consider the environmental impacts of damming up the river when deciding what to do.
Non-anthropocentrists would tend to be against building a dam, but they would not be able to ignore the pressing human need for drinkable water. The issue comes down to how to balance and prioritize these competing values, and it is not clear that environmentalist approaches to ethics do a better job of resolving conflicts among values than human-centered approaches (Resnik 2012). Turning to the two questions about moral theories that I posed at the beginning of this chapter, some environmental ethicists hold that non-human species and the environment have intrinsic moral value, while others hold that the moral value these things have is extrinsic, but important nonetheless. Environmental ethicists would place some constraints on the pursuit of things we value. For example, they would hold that we should take environmental impacts into account when making decisions that affect the environment, such as decisions about economic development or environmental regulation. Environmental ethics makes an important contribution to our thinking about benefit/risk balancing, and precautionary reasoning more generally, by calling our attention to environmental benefits and risks.

18 Various moral and religious traditions also emphasize respect for nature and other species, including Buddhism, Native American and African religions, and some Christian sects.


3.9 Feminist Ethics

Feminism, as a social and political movement, traces its origins to the seventeenth, eighteenth, and nineteenth centuries, when English and American writers and activists, such as Mary Astell (1666–1731), Mary Wollstonecraft (1759–1797), Harriet Taylor Mill (1807–1858), and Catharine Beecher (1800–1878), criticized patriarchy and the unequal and disrespectful treatment of women (Norlock 2019). The movement produced important victories for women’s rights in the US, UK, and European countries, such as the right to receive an education, the right to vote, and the right to hold political office. In the twentieth century, Charlotte Perkins Gilman (1860–1935), Simone de Beauvoir (1908–1986), Gloria Steinem, and other women made important contributions to the feminist movement, and women secured additional legal rights (Norlock 2019).

Feminist ethical theory emerged in the 1980s as an alternative to traditional, male-oriented moral theories (Norlock 2019). Philosopher/psychologist Carol Gilligan (1993) argued that women have different experiences of morality than men. Women, according to Gilligan, are more concerned with developing caring relationships than with notions of duty, rights, justice, or social utility. Other philosophers who have developed feminist moral theories include Nel Noddings, Annette Baier, and Alison Jaggar (Norlock 2019; Sander-Staudt 2019). Some have combined feminist ethics with environmental concerns to form an approach to moral thinking known as ecofeminism (Gaard and Gruen 1993). Gilligan’s work inspired an approach to moral theory known as the ethics of care because it emphasizes the importance of caring relationships (Sander-Staudt 2019). One of the main critiques of the ethics of care is that it is not really a distinct approach to moral theory (Sander-Staudt 2019). For example, Timmons (2012) classifies care ethics as a type of virtue ethics that emphasizes care as a virtue.
Others argue that the notion of care can be derived from utilitarian or Kantian frameworks (Sander-Staudt 2019). Some have argued that the view is empirically flawed because not all women view morality from the perspective of caring relationships, while many men do think of morality in terms of caring. Thus, notions of duty, rights, social utility, and care may apply to both men and women (Sander-Staudt 2019). Despite these and other criticisms, feminist approaches to ethics are an important part of contemporary moral theory that provide us with useful insights into moral values. Turning to the two questions about moral theories that I posed at the beginning of this chapter, care ethicists view caring relationships as intrinsically valuable. However, it is not clear whether care ethicists would place constraints on the development of these relationships, since constraints are often couched in terms of moral obligations or rights. Care ethicists view the development of caring relationships as a type of benefit that should be considered in risk/benefit reasoning, but it is not clear how they would balance benefits and risks.


Table 3.2 List of moral values

Individual values: Life; Procreation; Happiness; Knowledge; Virtue; Artistic creation; Dignity, liberty, and rights; Health; Wealth

Social values: Social relationships (family, friends); Social justice; Public health; Economic growth, prosperity; Work; Culture, religion

Environmental values: Animal welfare; Non-human species; Habitats; Ecosystems; Biodiversity; The biosphere

3.10 Conclusion

We have seen in this chapter that moral theories provide us with a rich array of values we can take into consideration when engaging in precautionary reasoning. Some of these are listed in Table 3.2. As one can see from the list in Table 3.2, there are many different things that people might regard as morally valuable. Sometimes these values may complement each other: for example, life and health, happiness and health, and happiness and social relationships often go together. But sometimes they may conflict: for example, promoting economic development or public health may negatively impact non-human species, habitats, ecosystems, or biodiversity; and respecting property rights may threaten habitats or ecosystems or undermine the pursuit of social justice. One of the main tasks of moral theory is to help us resolve value-conflicts (Timmons 2012). As we have seen, however, each of the theories considered in this chapter faces some substantial objections that limit its usefulness as an overall approach to resolving value-conflicts and making decisions involving risks, benefits, and precautions (Hannson 2010). If we do not accept a single, over-arching moral theory that resolves all value conflicts, we are left with an assortment of incommensurable values that we must consider, weigh, and prioritize when making choices concerning risks, benefits, and precautions. To make reasonable, precautionary decisions we therefore need to develop an approach to decision-making that addresses value pluralism (or uncertainty) in a way that respects and appreciates competing values (Resnik 2018).19 In Chaps. 4 and 5 I will argue that the Precautionary Principle can help us do this.

19 A number of theorists, including Ross (1930), Rawls (2005), and Beauchamp and Childress (2012), have developed approaches to moral reasoning that deal with value pluralism.


References

Aquinas, S.T. 1988 [1265–1275]. On Politics and Ethics, trans. P.E. Sigmund. New York: W. W. Norton.
Aristotle. 1985 [340 BCE]. Nicomachean Ethics, trans. T. Irwin. Indianapolis, IN: Hackett.
Attfield, R. 2003. Environmental Ethics. Cambridge, UK: Polity Press.
Beauchamp, T.L., and J.F. Childress. 2012. Principles of Biomedical Ethics, 7th ed. New York, NY: Oxford University Press.
Billings, J.A. 2011. Double Effect: A Useful Rule That Alone Cannot Justify Hastening Death. Journal of Medical Ethics 37 (7): 437–440.
Brandt, R. 1998. A Theory of the Good and the Right, revised ed. New York: Prometheus Books.
Brink, D.O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Cochrane, A. 2019. Environmental Ethics. Internet Encyclopedia of Philosophy. Available at: https://www.iep.utm.edu/envi-eth/. Accessed 18 Jan 2021.
Daniels, N. 1984. Just Health Care. Cambridge, UK: Cambridge University Press.
Daniels, N. 1996. Justice and Justification: Reflective Equilibrium in Theory and Practice. New York, NY: Cambridge University Press.
Daniels, N. 2008. Just Health: Meeting Health Needs Fairly. Cambridge; New York: Cambridge University Press.
de Waal, F. 2009. Primates and Philosophers: How Morality Evolved. Princeton, NJ: Princeton University Press.
Foot, P. 1978. Virtues and Vices. Oxford, UK: Blackwell.
Frede, D. 2017. Plato’s Ethics: An Overview. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/plato-ethics/. Accessed 19 Jan 2021.
Gaard, G., and L. Gruen. 1993. Ecofeminism: Toward Global Justice and Planetary Health. Society and Nature 2: 1–35.
Gewirth, A. 2001. Human Rights and Future Generations. In Environmental Ethics, ed. M. Boylan, 207–211. Upper Saddle River, NJ: Prentice Hall.
Gilligan, C. 1993. In a Different Voice: Psychological Theory and Women’s Development, 6th ed. Cambridge, MA: Harvard University Press.
Greene, J. 2013. Moral Tribes: Emotion, Reason, and the Gap between Us and Them. New York, NY: Penguin Press.
Hannson, S. 2010. The Harmful Influence of Decision Theory on Ethics. Ethical Theory and Moral Practice 13 (5): 585–593.
Hausman, D.M. 1995. The Impossibility of Interpersonal Utility Comparisons. Mind 104 (415): 473–490.
Hill Jr., T.H. 1983. Ideals of Human Excellence and Preserving Natural Environments. Environmental Ethics 5: 211–224.
Hill Jr., T.H. 1992. Dignity and Practical Reason in Kant’s Moral Theory. Ithaca, NY: Cornell University Press.
Hooker, B. 2000. Ideal Code, Real World: A Rule-consequentialist Theory of Morality. New York, NY: Oxford University Press.
Hursthouse, R. 2016. Virtue Ethics. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/ethics-virtue/. Accessed 19 Jan 2021.
Kant, I. 1981 [1785]. Groundwork for the Metaphysics of Morals, trans. J.W. Ellington. Indianapolis, IN: Hackett.
Kass, L.R. 1988. Toward a More Natural Science. New York, NY: Free Press.
Kass, L.R. 1997. The Wisdom of Repugnance. The New Republic 216 (22): 17–26.
Korsgaard, C. 1996. Creating the Kingdom of Ends. Cambridge: Cambridge University Press.
Lamont, J. 2017. Distributive Justice. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/justice-distributive/. Accessed 19 Jan 2021.


Lenman, J. 2018. Moral Naturalism. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/naturalism-moral/#WhatMoraNatu. Accessed 19 Jan 2021.
Leopold, A. 1989. A Sand County Almanac: And Sketches Here and There, Commemorative ed. Oxford, UK: Oxford University Press.
Locke, J. 1980 [1689]. Second Treatise of Government. Indianapolis, IN: Hackett.
MacIntyre, A. 1984. After Virtue. South Bend, IN: University of Notre Dame Press.
Marx, K., and F. Engels. 2012 [1848]. The Communist Manifesto: A Modern Edition. London, UK: Verso.
Mill, J.S. 1978 [1859]. On Liberty, ed. E. Rapaport. Indianapolis, IN: Hackett.
Mill, J.S. 1979 [1861]. Utilitarianism, ed. G. Sher. Indianapolis, IN: Hackett.
Miller, D. 2017. Justice. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/justice/. Accessed 19 Jan 2021.
Moore, G.E. 1903. Principia Ethica. Cambridge, UK: Cambridge University Press.
Naess, A. 1986. The Deep Ecological Movement: Some Philosophical Aspects. Philosophical Inquiry 8 (1/2): 10–31.
Norlock, K. 2019. Feminist Ethics. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/feminism-ethics/. Accessed 19 Jan 2021.
Norton, B. 1987. Why Preserve Natural Variety? Princeton, NJ: Princeton University Press.
Nozick, R. 1974. Anarchy, State, and Utopia. New York, NY: Basic Books.
O’Neill, O. 1997. Environmental Values, Anthropocentrism and Speciesism. Environmental Values 6: 127–142.
Oberdiek, J. 2008. Specifying Rights Out of Necessity. Oxford Journal of Legal Studies 28 (1): 127–146.
Passmore, J. 1980. Man’s Responsibility for Nature, 2nd ed. London, UK: Duckworth.
Plato. 1974 [380 BCE]. The Republic, trans. G.M.A. Grube. Indianapolis, IN: Hackett.
Pojman, L.P. 2005. Ethics: Discovering Right and Wrong, 5th ed. Belmont, CA: Wadsworth.
President’s Council on Bioethics. 2003. Beyond Therapy: Biotechnology and the Pursuit of Happiness. New York, NY: Harper Perennial.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Rawls, J. 2005. Political Liberalism, 2nd ed. New York: Columbia University Press.
Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press.
Resnik, D.B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
Resnik, D.B., D.R. MacDougall, and E.M. Smith. 2018. Ethical Dilemmas in Protecting Susceptible Subpopulations from Environmental Health Risks: Liberty, Utility, Fairness, and Accountability for Reasonableness. American Journal of Bioethics 18 (3): 29–41.
Rolston III, H. 1994. Environmental Ethics: Values in and Duties Toward the Natural World. In Reflecting on Nature, ed. L. Gruen and D. Jamieson, 65–84. New York: Oxford University Press.
Ross, W.D. 1930. The Right and the Good. Oxford, UK: Oxford University Press.
Sander-Staudt, M. 2019. Care Ethics. Internet Encyclopedia of Philosophy. Available at: https://www.iep.utm.edu/care-eth/. Accessed 19 Jan 2021.
Sayre-McCord, G. 2015. Moral Realism. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/moral-realism/. Accessed 19 Jan 2021.
Sen, A. 1970. Collective Choice and Social Welfare. San Francisco: Holden-Day.
Shrader-Frechette, K.S. 2007. Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health. New York, NY: Oxford University Press.
Singer, P. 1979. Practical Ethics. Cambridge, UK: Cambridge University Press.
Singer, P. 2009. Animal Liberation, reissue ed. New York, NY: Harper Perennial.
Thomson, J.J. 1992. The Realm of Rights. Cambridge, MA: Harvard University Press.
Timmons, M. 2012. Moral Theory: An Introduction, 2nd ed. Lanham, MD: Rowman and Littlefield.
Varden, H. 2010. Kant and Lying to the Murderer at the Door…One More Time: Kant’s Legal Philosophy and Lies to Murderers and Nazis. Journal of Social Philosophy 41 (4): 403–421.


Varner, G. 1998. In Nature’s Interests? Interests, Animal Rights, and Environmental Ethics. Oxford, UK: Oxford University Press.
Wagner, J., and M.D. Dahnke. 2015. Nursing Ethics and Disaster Triage: Applying Utilitarian Ethical Theory. Journal of Emergency Nursing 41 (4): 300–306.
Wenar, L. 2015. Rights. Stanford Encyclopedia of Philosophy. Available at: http://plato.stanford.edu/entries/rights/#5.2. Accessed 20 Jan 2021.
Wilson, E.O. 2004. On Human Nature, 2nd ed. Cambridge, MA: Harvard University Press.

Chapter 4

The Precautionary Principle

In the previous two chapters, I considered approaches to precautionary reasoning stemming from decision theory and moral theory. In Chap. 2, I argued that while decision theory provides us with some useful tools for making rational decisions, it does not give us sufficient guidance for making reasonable decisions because it is value neutral. To make reasonable decisions, we need decision-making frameworks that are informed by moral values (Hannson 2003, 2010). In Chap. 3, I discussed some moral values associated with different moral theories. I argued that since no single theory can satisfactorily answer all objections and resolve all value-conflicts, we must come to terms with value pluralism when engaging in precautionary reasoning. That is, we must carefully consider, appreciate, weigh, and prioritize competing values when making choices concerning risks, benefits, and precautions. In this chapter, I will consider whether the precautionary principle (PP) is a useful tool for making decisions concerning benefits and risks. In Chap. 1, I briefly described the PP but did not say a great deal about it. In this chapter I will examine the PP in greater depth, consider some of the main critiques of the principle, and develop a definition of the PP that can meet these objections. I will clarify some of the key terms used in the PP and distinguish between different versions of the principle. I will then describe a method for applying the PP and illustrate it with a couple of cases. I will conclude this chapter by arguing that the PP can play an important role in precautionary reasoning because it provides decision-makers with a reasonable way of dealing with epistemological (or scientific) and moral uncertainty concerning personal and social choices.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_4


4.1 Some Background on the Precautionary Principle

To better understand the PP and its rationale, it will be useful to reflect on why it emerged as an approach to policymaking in the 1980s (Sandin 2004). The PP was proposed by scientists, legal theorists, political activists, and policymakers who were dissatisfied with the existing forms of reasoning used by government agencies to make policy decisions concerning environmental and public health risks (European Commission 2000, 2017; Tait 2001; Sandin 2004; Munthe 2011). Many (perhaps most) government agencies in industrialized nations1 used a risk management approach for making policy decisions (and many still do). As we saw in Chap. 2, risk management is based on principles of expected utility theory (EUT). Under the risk management approach, one identifies possible risks and benefits related to a decision, assesses these risks and benefits using scientific evidence, and then makes a choice that balances risks and benefits (Shrader-Frechette 1991; European Commission 2000; National Research Council 2009). For example, to assess the risks of approving a new drug, the FDA examines scientific evidence from human and animal studies concerning the drug’s safety and efficacy and determines how approving (or not approving) the drug is likely to impact public health (Hawthorne 2005). While science can provide the FDA with evidence concerning the likely public health impacts of approving or not approving the drug, it cannot tell the agency how it ought to compare risks and benefits, because these comparisons involve value judgments that are not entirely reducible to scientific facts (Shrader-Frechette 1991).2 Risk management is therefore not a purely scientific endeavor because it combines science and moral values.
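The kind of risk/benefit balancing just described can be sketched as a bare expected utility calculation. The probabilities and utility values below are purely hypothetical placeholders, not figures from any actual regulatory decision:

```python
# A toy risk-management calculation in the style of expected utility theory (EUT).
# Each option maps to (probability, utility) pairs for its possible outcomes.
options = {
    "approve drug": [
        (0.90, 100),    # drug performs as the trials suggest: large health benefit
        (0.10, -300),   # serious adverse effects emerge later: large harm
    ],
    "do not approve": [
        (1.00, 0),      # status quo: patients rely on existing treatments
    ],
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda o: expected_utility(options[o]))
print(best)  # approve drug: EU = 0.9*100 + 0.1*(-300) = 60 > 0
```

Note where the value judgments hide: in the utility numbers themselves (how bad an adverse effect is relative to a benefit is not a scientific fact), and the calculation cannot even begin when the probabilities are unknown, which is the gap the PP was meant to address.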
Even though the risk management approach incorporates moral values into decision-making, many scientists and policymakers regard it as a scientific approach to decision-making because scientific evidence plays a prominent role in this approach (Brombacher 1999; Resnik 2001). One potential problem with the risk management approach is that it provides us with very little guidance when we lack definitive scientific evidence concerning the probabilities of different outcomes related to public and environmental health decisions, other than to advise us to do more research so that we can gain more knowledge (Ackerman 2008). However, it is possible that preventable harms related to a technology (such as an industrial chemical) or human activity (such as production of greenhouse gases) could occur while we are waiting for enough evidence to accumulate so that we have a scientific basis for making a policy decision related to that technology or activity. One might argue that it would be prudent to take action to address possible harms when scientific evidence is inconclusive (European

1 Private

corporations also use the risk management approach for making decisions related to legal and financial risks. 2 For more on the claim that values are not reducible to scientific facts, see the critique of the natural law approach to moral theory in Chap. 3. For further discussion of the relationship between values and science, see Lenman (2018) and Ridge (2019).

4.1 Some Background on the Precautionary Principle

77

Commission 2000, 2017). Thus, the risk management approach creates a potential safety gap. For example, by the late 1970s, scientists had gathered considerable evidence indicating there had been a rise in global surface temperatures since the beginning of the twentieth century and they hypothesized that human activities, such as greenhouse gas emissions and deforestation, were largely responsible for this change. During the first World Climate Conference held in 1979, scientists warned the public that it would be necessary to reduce greenhouse gas emissions to prevent catastrophic consequences of continued global warming. However, many questions remained concerning the evidence for a rise in global surface temperatures, the validity of computer models used in climate science, the role of human activities in climate change, and the likely consequences of climate change; and many politicians and policymakers argued that there was not enough evidence at that time to justify major policy changes to that could negatively impact the global economy (Hulme 2009). Others argued, however, that it was important to take effective action to prevent further global warming, even if the scientific evidence concerning climate change was incomplete or inconclusive (Hulme 2009).3 For another example of a safety gap, consider the history of regulation of toxic substances in the US. In 1976, the US Congress passed the Toxic Substances Control Act (TSCA), which granted the EPA the authority to regulate toxic substances.4 Approximately 62,000 chemicals that were in use prior to the passage of the TSCA were classified as existing chemicals and considered to be safe (Krimsky 2017). New chemicals were required to be registered with the EPA prior to manufacture and use. However, companies did not have to submit any health and safety data to the EPA concerning new chemicals, and the agency had only 90 days to determine whether a chemical was safe. 
The EPA was granted the authority to regulate chemicals once evidence emerged that they posed an unreasonable risk to public health, but companies were under no obligation to conduct research on the safety of their own chemicals. In 2016, Congress strengthened the TSCA by giving the EPA more authority to require manufacturers to provide the agency with health and safety information concerning new chemicals, and by requiring the EPA to consider impacts on susceptible populations (such as children and people with chronic illnesses) in its risk assessments (Krimsky 2017). Although the revised TSCA enhances the EPA's authority to regulate toxic substances, it still does not allow the agency to prevent harms related to these chemicals before conclusive evidence emerges that they pose an unreasonable risk to human health or the environment, because existing chemicals are presumed to be safe until proven otherwise and new chemicals do not need to undergo extensive safety testing prior to marketing and use (Krimsky 2017). For example, various toxic substances, including lead, asbestos, dioxin, and PCBs (polychlorinated biphenyls), caused harm to public health and the environment long before government agencies had sufficient evidence to regulate them (Cranor 2011).5

3 While the evidence for human-caused climate change is no longer inconclusive, many questions remain. For a discussion of the relationship between the PP and climate change issues, see McKinnon (2009), Steel (2015), and Hartzell-Nichols (2017).
4 'Toxic substances' excludes pesticides, which are regulated by the EPA under different laws, and drugs, biologics, cosmetics, and food additives, which are regulated by the FDA.

4.2 Definitions of the Precautionary Principle

There are dozens of definitions of the PP (Sandin 2004). I will not review them all here but will state a few worth mentioning. In Chap. 1, I quoted an influential, early version of the PP that appears in the UN's 1992 Rio Declaration on Environment and Development. Another influential version of the PP was developed in 1998, when 35 scientists, lawyers, and policymakers held a three-day academic conference on the PP in the Wingspread Building (headquarters of the Johnson Foundation), located in Racine, Wisconsin. The statement they drafted is known as the Wingspread Statement:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. (Science and Environmental Health Network 1998)

Some other versions worth noting include one that appears in the Cartagena Protocol on Biosafety: Lack of scientific certainty due to insufficient relevant scientific information and knowledge regarding the extent of the potential adverse effects of a living modified organism on the conservation and sustainable use of biological diversity in the Party of import, taking also into account risks to human health, shall not prevent that Party from taking a decision, as appropriate, with regard to the import of the living modified organism in question as referred to in paragraph 3 above, in order to avoid or minimize such potential adverse effects. (Cartagena Protocol on Biosafety 2000, Article 10, Section 6)

And one that appears in the European Commission’s white paper on its approach to chemical regulation: Whenever reliable scientific evidence is available that a substance may have an adverse impact on human health and the environment but there is still scientific uncertainty about the precise nature or the magnitude of the potential damage, decision-making must be based on precaution in order to prevent damage to human health and the environment. (European Commission 2000: 5)

These different versions of the PP all stress the importance of taking measures to address possible harm when we lack scientific evidence concerning the outcomes of different courses of action. These versions of the PP differ dramatically from the risk management approach to precautionary reasoning, which relies on scientific evidence concerning risks and benefits. While many commentators view the PP as a reasonable policy rooted in commonsense ideas about dealing with risks and uncertainty, others have sharply criticized it. I will now address these criticisms.

5 Regulation is very different for drugs and pesticides, which must undergo extensive health and safety testing prior to marketing approval. These chemicals are assumed to be unsafe until proven otherwise. See discussion in Chap. 6.

4.3 Criticism #1: The Precautionary Principle Is Vague

Numerous critics (e.g. Bodansky 1991; Resnik 2001; Marchant 2002; Sandin et al. 2002; Sandin 2004; Soule 2004; Peterson 2006) have argued that influential definitions of the PP, like those above, are vague because they do not clearly define key terms, such as "serious or irreversible damage," "threats of harm," "full scientific certainty," "not fully established scientifically," or "environmental degradation." I agree with these critics that it is important to seek clarity and precision when defining important ideas like the PP. However, I do not think the PP is hopelessly vague or impossible to define clearly.

Let's consider the terms related to scientific uncertainty first. As discussed in Chap. 2, scientific or epistemological uncertainty is uncertainty with respect to knowledge. Many philosophers and scientists have observed that science, unlike mathematics, does not achieve absolute certainty.6 Scientific proof is empirical and inductive, which means that hypotheses (and theories) can be confirmed as likely to be true, given the evidence, but never as true with absolute certainty. Hypotheses are subject to further testing, criticism, and refinement; and even well-established scientific theories or ideas may turn out to be false (Popper 1959; Kuhn 1970; Kitcher 1993; Haack 2003). For example, in the eighteenth and nineteenth centuries Isaac Newton's (1643–1727) laws of motion were well-confirmed, but in the twentieth century physicists demonstrated that they were false when applied to motions of subatomic particles or motions that occur at close to the speed of light (Hawking 1988). If science does not achieve certainty, then the Rio Declaration's phrase "full scientific certainty" is a misnomer.

A better way of viewing the matter is to adopt the Wingspread Statement's phrase "not fully established scientifically." However, this phrase must also be defined because it does not tell us what it means to establish something scientifically. To establish something scientifically is to provide rigorous evidence or proof for something. Evidence or proof may be based on empirical methods, such as well-designed tests or experiments; or conceptual/analytical ones, such as mathematical or statistical modelling. It is important to clarify what is meant by "scientific certainty" because evidence and proof come in degrees, and one should know how much evidence or proof is required concerning a possible harm before taking action to avoid, minimize, or mitigate that harm (Hannson 1997; Cranor 2001; Resnik 2001; Munthe 2011; Steel 2015).

Clearly, we should take action when we have strong scientific evidence that a serious harm is likely to occur unless we do something to address it. But what should we do if a harm is merely possible? Should we ignore it? As we saw in Chap. 2, rules for decision-making under ignorance do not provide a satisfactory answer to this question, because they do not rule out implausible outcomes. But it seems that we must have some way of ruling out these outcomes when making decisions or else we would always be fretting over nightmare scenarios. According to many proponents of the PP, if a harm is merely a possible harm and there is no evidence that it is likely to occur, it may be reasonable not to take any action to address it (Von Schomberg 2012; European Commission 2017).7 Very often we can imagine disastrous outcomes related to new technologies that we can afford to ignore in policymaking because the evidence for these harms is so scant. For example, in his popular science fiction book Prey, Michael Crichton (2002) envisions a world in which swarms of nano-robots wreak havoc on humanity and the environment. While this is a possible consequence of the development of nanotechnology, the evidence that this would happen is so minuscule that we should not worry about it when thinking about the risks of nanotechnology. Instead, we should focus our attention on risks of nanotechnology that have a solid basis in scientific fact and theory, such as potential toxic effects of nanomaterials on human health and the environment (Elliott 2011; Monteiro-Riviere and Tran 2014). Worrying about possible harms of nanotechnology that have little evidentiary basis will divert our time and energy from attending to the more important threats.

6 For example, we can know with certainty that 1 + 1 = 2 or that the square root of 16 = 4. We can use deductive arguments to prove mathematical theorems.

Table 4.1 Standards of evidence

  Plausibility                       S is consistent with well-established scientific facts,
                                     hypotheses, laws, models, or theories
  Weak Confirmation (credibility)    S is supported by some evidence generated by scientific
                                     methods
  Strong Confirmation (veracity)     S is supported by substantial evidence generated by
                                     scientific methods
To avoid wasting time and energy worrying about highly speculative possible harms that are not supported by any evidence, the PP should include a minimum standard of evidence for addressing possible harms (Munthe 2011; Steel 2015). This minimum standard would be less evidence than is required for scientific confirmation or proof, but more than is required for armchair speculation. One such standard is plausibility (Shapere 1966; Resnik 2001, 2004). Plausibility is a standard of proof that is weaker than confirmation but stronger than mere speculation (see Table 4.1).8 A scientific statement is plausible if it is consistent with well-established scientific facts, hypotheses, laws, models, or theories.9 For example, the hypothesis that "there is intelligent life elsewhere in the universe" is plausible, given what we know about planetary formation, the requirements for life, and biological evolution. This statement has not been confirmed because we have not made any empirical observations that support it. However, since this statement is plausible and would have significant implications for science and society if it were true, it provides impetus and direction for further research and hypothesis testing.10

Confirmation is a stronger standard of evidence than plausibility. A statement is confirmed if it is supported by evidence from scientific methods, such as tests, experiments, or mathematical or statistical modelling, which tends to show that the statement is true (Huber 2019). In general, confirmed statements are hypotheses or theories that scientists have judged to be plausible and have tested. Confirmation comes in degrees from weak to strong, depending on the quantity and quality of confirming evidence.11 A statement is weakly confirmed (or credible) if it is supported by some evidence of its truth. A statement is strongly confirmed (or veracious) if it is supported by substantial empirical evidence of its truth. For example, the statement that "there is liquid water on Mars" has been confirmed by taking radar images of the planet's subsurface. Confirmation for this statement will become stronger as scientists make additional observations that support it. The statement "Mars is the fourth planet from the sun" is strongly confirmed (or veracious, truthful) because it is supported by substantial evidence from astronomical observations.

Plausibility should not be confused with probability. As we saw in Chap. 2, we can always assign subjective probabilities to statements. However, since subjective probabilities are highly susceptible to bias and error, it is not prudent to use them as a basis for accepting statements with implications for public decision-making. A statement must be supported by enough evidence before we are justified in assigning it a probability for the purposes of making decisions that impact the public. Thus, a statement might be plausible but not probable because we do not have enough evidence to claim that it is probable.

As also noted in Chap. 2, we should not confuse the probability that a statement is true with the claims the statement makes about probabilities. In the example from Chap. 2, we rolled the die 100 times and got 95 sixes. We could say "the probability that the die will come up six is 0.95 ± the standard error for this experiment." But we could also say "it is highly probable that [the probability that the die will come up six is 0.95 ± the standard error for this experiment]." Distinguishing between the probability that a statement is true and the claims a statement makes about probabilities helps us to clarify our thinking about low probability events.

Very often in public discussions about risk and harm, people fail to pay attention to this distinction. For example, suppose that several months before the terrorist attacks on the World Trade Center on September 11, 2001, national security experts were debating whether to be concerned about the risk that someone would hijack an airplane and crash it into a skyscraper. Suppose, also, that one of these experts had dismissed this concern as unlikely but had no evidence for making this assertion. We could say that the expert's statement that "the probability that hijackers will crash an airplane into a skyscraper is very low" was, itself, not probable (or not confirmed) because it was not supported by the evidence. Other statements about low probabilities may be well-confirmed (or highly probable). For example, according to data compiled by the National Weather Service (2019), the odds of getting struck by lightning in any given year are 1/1 million. While the probability of getting struck by lightning is very low (0.000001), the statement "the odds of getting struck by lightning in any given year are 1/1 million" is highly probable because it is well-supported by the evidence. As we shall see in subsequent chapters, it is important to think about low probability, catastrophic events (such as terrorism) in a clear and coherent way, since paying too much or too little attention to these events can bias our reasoning in favor of excessive risk-avoidance or excessive risk-taking. One of the main criticisms of the PP is that it is risk averse (Sunstein 2005). Adopting a minimum evidentiary standard for possible harms addressed by the PP can help us deal with this issue by ruling out claims about possible catastrophic harms that lack evidentiary support (Resnik 2001).

Concerning the terms related to harms, the Rio Declaration version of the PP focuses on "serious or irreversible damage" to the environment.

7 The same point also applies to benefits: we should not waste time and energy pursuing benefits that are merely possible. See footnote 15 below.
8 The European Commission (2001) definition of the PP uses the phrase "reliable scientific evidence," which is stronger than plausibility.
9 Scientific communities determine the type of support required for plausibility.
10 The Search for Extraterrestrial Intelligence Institute (2019) is a private, non-profit organization dedicated to the search for extraterrestrial, intelligent life and understanding the evolution of life and intelligence in the universe.
11 Scientific communities determine the type of support required for confirmation as well as the degree of confirmation.
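As an aside, the "0.95 ± the standard error" claim in the die example above can be computed directly. The short Python sketch below shows the arithmetic; the function name is my own, and the formula is the standard binomial standard error for an observed proportion:

```python
import math

def proportion_with_se(successes: int, trials: int) -> tuple[float, float]:
    """Point estimate p and its standard error sqrt(p * (1 - p) / n)."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, se

# The die example from Chap. 2: 95 sixes in 100 rolls.
p_hat, se = proportion_with_se(95, 100)
print(f"{p_hat:.2f} ± {se:.3f}")  # 0.95 ± 0.022
```

Note that this calculation only quantifies sampling error in the estimate; as the text stresses, the credibility of the resulting probability statement still depends on the quality of the evidence behind it.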
There are some problems with this expression, however. First, why should we care whether harm is irreversible? Some types of irreversible harm are not very bad. For example, suppose that we could eradicate the malaria parasite. This would be irreversible environmental damage, but would it be a bad thing? Further, some reversible damage is very bad. For example, the damage caused by air pollution can be reversed, over time, but it still might result in significant harms to people and the environment. Second, why should we focus only on environmental harm? As we saw in Chap. 3, there are many types of things that people consider to be harmful, including death, disease, poverty, famine, war, genocide, economic declines, exploitation, human rights violations, and injustice. Should we focus on environmental harms to the exclusion of these things?

The Wingspread Statement talks about "threats of harm to human health or the environment." But again, there are problems with vagueness here. First, the definition does not tell us how serious or significant the threat must be. There are many different types of threats to human health or the environment, ranging from minor harms (such as being exposed to automobile exhaust for a brief time or chopping down a tree) to catastrophic ones (such as an influenza pandemic or serious damage to the protective ozone layer). The PP should include a minimum standard of harm (such as serious harm) to avoid applying it to minor or trivial threats (Resnik 2001; Trouwborst 2006; Munthe 2011; Wareham and Nardini 2015; Steel 2015). Second, while the Wingspread Statement addresses environmental and public health harms, there are other types of harm one might be concerned about, such as poverty, famine, war, genocide, economic declines, human rights violations, and injustice. The statements from the Cartagena Protocol and European Commission also do not address these other types of harm.

In defending the PP against the charge of vagueness, it is important to realize that any general principle for decision-making will need to be interpreted in order to apply it to specific cases. For example, to apply maximin, one must define "the worst outcome," and to apply a moral principle like "avoid harming other people," one must describe harm and say what it means to avoid harming someone. To apply the PP, we will need to interpret terms like "plausible," "serious," "harm," and "reasonable" (discussed below). The PP is no different, in this regard, from many other decision-making principles that we regard as meritorious (Munthe 2011; Beauchamp and Childress 2012).

4.4 Criticism #2: The Precautionary Principle Is Incoherent

Another important criticism of the PP is that it is incoherent; that is, it may recommend that we do completely opposite things and not tell us which option is best (Sandin et al. 2002; Harris and Holm 2002; Peterson 2006, 2007a, b).12 The PP, therefore, can lead to decisional paralysis (Munthe 2011). For example, suppose that a community is trying to decide whether to build a dam. The dam will provide much needed drinking water for the growing population in the community but could disrupt the local ecosystem and cause a species of fish to go extinct. If it builds the dam, the community will prevent a serious threat to public health; if it does not build the dam, it will prevent a serious threat to the environment. Thus, according to the PP, the community should both build the dam and not build the dam, which is contradictory. For another example, consider a country's decision whether to allow nutritionally-enhanced GM rice to be cultivated. Suppose the country has perennial problems related to famine and malnutrition that the GM crop could help to alleviate. Suppose, however, that ingestion of the crop poses some potential harm to human health (such as increased risk of liver cancer). If the country approves cultivation of the GM crop, it can prevent famine and malnutrition; if it does not approve cultivation of the crop, it can prevent liver cancer. Thus, the country should approve and not approve the crop, which is contradictory.

12 Peterson (2006, 2007a, b) takes this charge one step further and develops a formal proof that the PP is logically inconsistent. However, to prove this result, Peterson assumes an untenable version of the PP that has not been defended by PP proponents. Peterson (2007b: 306) defines the PP as follows: "If one act is more likely to give rise to a fatal outcome than another, then the latter should be preferred to the former, given that both fatal outcomes are equally undesirable." There are two problems with this definition of the PP. First, it does not address the reasonableness of taking risks (discussed below). Second, it includes probability concepts in the definition (i.e. "more likely"), which is not what most people have in mind when they define the PP. Peterson has constructed a proof against a straw man.

Table 4.2 Decision matrix for investments

                 Growing economy   Stable economy   Declining economy
  Stocks               70                30                −5
  Bonds                40                25                 5
  Mutual funds         53                45                 5

Table 4.3 Decision matrix for investments with expected utilities

                 Growing economy   Stable economy    Declining economy     Overall expected utility
  Stocks         70 × 0.5 = 35     30 × 0.3 = 9      −20 × 0.2 = −4        40
  Bonds          40 × 0.5 = 20     25 × 0.3 = 7.5    5 × 0.2 = 1           28.5
  Mutual funds   53 × 0.5 = 26.5   50 × 0.3 = 15     −7.5 × 0.2 = −1.5     40

The charge of incoherence is an important one, since it undermines the PP's ability to provide clear guidance for decision-making. Some have argued, based on this and other criticisms, that the PP should not be viewed as a principle for decision-making but as an overall approach (or framework) for thinking about risk and harm (see Peterson 2006; Munthe 2011; Steel 2015, for discussion). While I agree that the PP could still play an important role in public and environmental health policy even if it does not function as a rule for decision-making, I think it can also serve as a principle for making reasonable decisions about harms and benefits if it is carefully interpreted and applied.

In thinking about the charge of incoherence, it is important to realize that many of the decision-making rules discussed in Chap. 2 yield unclear guidance when choices are assessed as equal. For example, maximin does not tell us what to do when two different courses of action have the same highest worst-case outcome. Suppose, for example, that mutual funds and bonds both produce an outcome of 5 in a declining economy (based on the example from Chap. 2, Table 4.2). According to maximin, we should invest in bonds or mutual funds but not stocks. Although maximin is not advising us to do conflicting actions, it does not tell us which option we should choose. It does not yield a clear choice, even if it does not tell us to do conflicting things. Other rules for decision-making under ignorance and risk can also lead to indecisiveness when options have equal assessments. For example, maximax does not tell us what to do when different options have the same best possible outcome, minimax regret does not tell us what to do when different options have the same maximum regret, and EUT does not tell us what to do when different options have the same total expected utility (see Table 4.3). Likewise, as we saw in Chap. 3, moral theories may also give us unclear guidance under circumstances involving moral conflict. Act-utilitarians do not provide us with clear guidance when two or more options are likely to produce the same net utility; rule-utilitarians do not provide us with clear guidance when two or more rules (that we could follow) produce the same net utility; Kantians do not provide us with clear guidance when imperfect duties conflict; virtue theorists do not provide us with clear guidance when different virtues lead us in different directions; natural rights theorists do not provide us with clear guidance when human rights conflict.

It is important to note that the PP does not recommend conflicting options when only one option avoids serious harm. In the mushroom example (Chap. 1), the PP advises us not to eat the mushroom if we do not know whether it is poisonous and we are not starving. We do not forego an important benefit by not eating the mushroom. Another way of putting this point is to say that we do not prevent a serious harm by eating it. The decision context changes, however, if we are starving and the mushroom is our only food option. In that case, we may forego an important benefit by not eating it, and we may prevent a serious harm by eating it. Thus, inconsistency arises when two or more conflicting choices (e.g. building a dam vs. not building a dam or eating a mushroom vs. not eating a mushroom) allow us to avert serious harm.

Some have argued that the PP can avoid incoherence if the harms it addresses are very serious or even catastrophic (Sunstein 2005; Hartzell-Nichols 2012, 2013, 2017). While this suggestion may reduce the potential for inconsistency because fewer harms will qualify as "very serious" or "catastrophic," it does not eliminate this potential problem because situations could still arise in which conflicting choices may prevent very serious or catastrophic harms. Also, one needs to define the terms "very serious" or "catastrophic." Is the difference between a serious harm and a very serious harm or a catastrophe a difference in degree or in kind? Is a global pandemic catastrophic but a severe epidemic in one country not? Further, why should the PP apply only to very serious or catastrophic harms? Shouldn't we also take action to prevent harms that are not as serious?
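The tie cases discussed above for maximin and EUT can be computed directly. The Python sketch below (the variable names are my own) applies maximin to the Table 4.2 payoffs and expected utility to the Table 4.3 payoffs; neither rule singles out a unique option:

```python
# Maximin tie, using the payoffs from Table 4.2
# (columns: growing, stable, declining economy).
table_4_2 = {
    "stocks": [70, 30, -5],
    "bonds": [40, 25, 5],
    "mutual funds": [53, 45, 5],
}
worst = {opt: min(vals) for opt, vals in table_4_2.items()}
best_worst = max(worst.values())
maximin_picks = sorted(opt for opt, w in worst.items() if w == best_worst)
print(maximin_picks)  # ['bonds', 'mutual funds'] -- no unique choice

# Expected utility tie, using the payoffs from Table 4.3
# with outcome probabilities 0.5, 0.3, 0.2.
table_4_3 = {
    "stocks": [70, 30, -20],
    "bonds": [40, 25, 5],
    "mutual funds": [53, 50, -7.5],
}
probs = [0.5, 0.3, 0.2]
eu = {opt: sum(p * v for p, v in zip(probs, vals))
      for opt, vals in table_4_3.items()}
print(eu)  # stocks and mutual funds tie at 40 (up to float rounding)
```

In both cases the rule ranks the options but stops short of a decision when two options score equally, which is the kind of indecisiveness the text attributes to many decision rules, not just the PP.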
One way of enabling the PP to provide clearer guidance for decision-making is to include some balancing of benefits and harms in its application to specific cases (Munthe 2011). As we noted in Chap. 1, taking precautions may require us to forego benefits (Resnik 2001; Munthe 2011). If we do not build a dam to protect the environment, we forego the social benefit of providing drinking water for the population. A reasonable way of approaching this problem would be to carefully consider these benefits when we are deciding whether to protect the environment from the harms caused by building the dam, and to think about ways of minimizing or mitigating damage to the environment. Putting these points together, I propose that we interpret the PP as advising us to take reasonable precautionary measures to address serious harms (Resnik 2001). For example, in the mushroom case, if we are not starving and we do not know whether the mushroom is poisonous, then a reasonable precautionary measure is not to eat it, since the benefits of eating it are minimal and the risks are serious. However, if we are starving and the mushroom is our only food source, then it may not be reasonable to avoid the serious harm of mushroom poisoning, because by doing so we could starve to death.

4.5 Reasonableness of Precautionary Measures

While adding the qualification that precautionary measures must be reasonable helps to clarify the guidance provided by the PP, it still leaves an important question unanswered, namely, "what does it mean for a precautionary measure to be reasonable?" If we don't have a cogent answer to this question, then the requirement that precautionary measures should be reasonable is no more than a vague platitude that lacks substance. To address this potential objection, we need to say a bit more about what makes a precautionary measure reasonable.

As noted in Chap. 1, we can implement a variety of precautionary measures to deal with risks, including avoidance, minimization, and mitigation. As Munthe (2011) astutely observes, there are degrees of precaution, ranging from avoiding an activity that produces a risk to mitigating the risk. Deciding which (if any) precautionary measures are reasonable depends on how we balance risks and benefits. If we decide that the benefits are not worth the risks, it may be reasonable to opt for risk avoidance. If we decide that the benefits are worth the risks, it may be reasonable to opt for risk minimization or mitigation. It may also be reasonable to implement several approaches at the same time (Munthe 2011). For example, we could implement measures to mitigate risks if we are concerned that our attempts to avoid or minimize them may not be successful. It is also worth noting that sometimes it may not be possible to avoid risks, and all we can do is minimize or mitigate them. For example, we cannot avoid the risks of volcanic eruptions. All we can do is minimize these risks by not living near areas of volcanic activity, or mitigate them by preparing to evacuate people when eruptions occur.

Concerning the reasonableness of precautionary measures, I propose that they should be evaluated based on four criteria.13 These criteria are neither necessary nor sufficient conditions but desiderata.
That is, precautionary measures should satisfy most of these criteria, but they could still be reasonable if they do not satisfy all of them. Also, sometimes these criteria may conflict, and we must decide which one should have priority.14 The “more” or “most” reasonable option is the one that best satisfies the criteria.15 The criteria are as follows (see Table 4.4).

13 I did not pull these criteria out of thin air. These are similar to proposals found in European Commission (2000), Resnik (2001), Whiteside (2006), Shrader-Frechette (2007), Munthe (2011), and Hartzell-Nichols (2017). 14 I will discuss examples of such conflicts in Chap. 6 (susceptible populations) and Chap. 7 (GM crops/plants). 15 I put “more” and “most” in quotes here because I think that reasonableness judgments may not satisfy rational ordering rules, such as transitivity. If reasonableness judgments are transitive, then we should be able to make the following inference: “if option X is judged to be more reasonable than option Y and option Y is judged to be more reasonable than option Z, then option X is judged to be more reasonable than option Z.” I am not sure that we can make this type of inference because reasonableness judgments are context-dependent. We judge that an option is more or less reasonable based on the circumstances we face, our values, and the other options. Saying that something is “reasonable” is therefore like saying that it is “desirable,” “good” or “just.” We might develop a formal theory of reasonableness that requires that reasonableness judgments conform to rational ordering rules, but real-world judgments of reasonableness, made by individuals or groups, may not always conform to the dictates of the theory.


Table 4.4 Criteria for reasonableness of precautionary measures

Proportionality: Reasonable measures balance plausible risks and possible benefits proportionally

Fairness: Reasonable measures are based on a fair balancing of risks and benefits; fairness includes distributive and procedural fairness

Epistemic Responsibility: Reasonable measures comply with norms for the responsible acquisition and utilization of evidence, knowledge, and expertise

Consistency: Reasonable measures are based on a consistent rationale for decision-making

Proportionality. Reasonable measures balance plausible risks and benefits16 proportionally (European Commission 2000, 2017; Resnik 2001; Whiteside 2006). That is, the risks taken should be proportional to the benefits or opportunities that may be gained. High risks can only be justified by high benefits, while low risks may be justified by minimal benefits. While most commentators agree that this criterion is important for applying the PP,17 the concept of proportionality requires further interpretation to be applied to actual decisions, since it depends on some prior assessment of benefits and risks by decision-makers. Moreover, the concept of proportionality used here is metaphorical, not literal, because benefits and risks are not measured numerically or ranked.18 The criterion does not simply hold that benefits must be greater than risks. A more accurate way of stating this criterion is that risks are justified in relation to benefits, with the understanding that justification depends on various contextual factors, including fundamental values (such as human rights) that may be impacted by precautionary measures.19 Fairness. Reasonable measures are based on a fair (or just) balancing of plausible risks and benefits (Shrader-Frechette 2007; Resnik 2012; European Commission 2017).20 Fairness includes distributive and procedural fairness (Rawls 1971).21 Distributive fairness refers to the fairness of the outcome of benefit/risk decisions,

16 Here I am using the term ‘plausible’ as a modifier for risks and benefits. Since most people speak of “benefits” when they really mean “plausible benefits,” I will follow that convention, keeping in mind that benefits, like risks, imply a degree of uncertainty, since we are usually not absolutely certain that they will occur. 17 See European Commission (2000), Resnik (2001), Whiteside (2006).
18 I have argued in Chaps. 2 and 3 that there are often significant problems with assigning numerical values to benefits and risks when we have conflicting, incommensurable values, such as human life vs. money. 19 In Chaps. 6 and 9 I will discuss human rights issues when applying the PP to drug regulation and responses to public health emergencies. 20 The topic of fairness or justice is beyond the scope of this book, so I will not present an in-depth account of it here. I will assume, however, that it is an important consideration that should be addressed when deciding whether precautionary measures are reasonable. For a review of theories of justice, see Miller (2017). 21 See, also, the discussion of fairness in Chap. 3.


including the overall balance of risks and benefits and the distribution of benefits and risks. For example, suppose that three people contribute materials and labor toward making a pizza. Distributive fairness pertains to the fairness of how the pizza is distributed among these three people. Should the pizza be divided equally? Should the person who contributed the most materials or labor to the pizza get more pizza? Should the hungriest person get more pizza? And so on. Procedural fairness pertains to the fairness of the decision procedures used for addressing questions of distributive fairness. In the pizza example, procedural fairness would address the fairness of procedures or rules the three people agreed upon for distributing the pizza among themselves. I will discuss procedural fairness in more depth in Chap. 5 when I consider issues related to democratic decision-making concerning risks. For now, I will stress that fair procedures meaningfully involve stakeholders (i.e. affected parties) in the decision-making process, respect the plurality of values, and are publicly accountable.22 For example, a reasonable measure for dealing with an environmental and public health issue, such as building a dam, would balance public health, environmental, economic, and other risks and benefits fairly and would ensure that impacted parties (such as people living in the area, businesses, and political activists) have meaningful input into the decision. It would also provide the public with a clear and transparent rationale for the decision. Epistemic responsibility.23 Reasonable measures comply with norms for the responsible acquisition and utilization of evidence, knowledge, and expertise (European Commission 2000). Reasonable measures should be based on the best available scientific evidence and expertise, and should be revised, if necessary, as new evidence emerges and expertise develops. 
Decision-makers should acknowledge uncertainties, controversies, biases, and gaps in the scientific knowledge related to their decision, points of consensus among scientists, and areas of agreement and difference in expert opinions. Evidence and arguments used in decision-making should be presented clearly and made available to the public. Decision-makers should encourage and support additional research and scientific debate to ensure that their decisions are based on the best available evidence. Consistency. Reasonable measures are based on a consistent rationale for decision-making (Resnik 2001). That is, the rationale should treat similar cases similarly, and different cases differently. For example, if a government agency restricts the use of a chemical because its risks far outweigh its possible benefits, it should impose similar restrictions on chemicals that are assessed in the same way. Consistency is important for ensuring that policies are publicly accountable, acceptable, and justifiable (Rawls 1993). If a policy leads to inconsistent decisions, the public is likely to question its legitimacy and may try to overturn it or may ignore it. Inconsistency can also lead to confusion among those who are subject to and rely upon the policy and those who enforce it, which can undermine the policy’s effectiveness. It is important to note, however, that consistency does not preclude revision of previously 22 The Wingspread Statement of the PP refers to using democratic procedures to apply the PP and addressing the interests of affected parties. 23 For more on epistemic responsibility, see Goldman and McGrath (2014).


approved decisions. A government agency might decide, after gathering more information about a chemical that is being marketed without restrictions, that it should be restricted. Though previous decisions set a precedent for future decisions, they are not immune to revision for compelling reasons.24 Consistency applies to the entire body of decisions.25 While the requirement that precautionary measures must be reasonable allows the PP to avoid incoherence and provides clearer guidance for decision-making, it does not guarantee that applying this principle to specific cases will be straightforward, since decision-makers must grapple with issues related to the reasonableness of different precautionary measures, as well as issues related to the plausibility and seriousness of harms. The devil, as they say, is in the details. However, I would argue that it is better to use a principle for making decisions about risks and benefits that can be difficult to apply but addresses the complexities (such as scientific and moral uncertainty) inherent in actual cases than to use one that is easy to apply but ignores these complexities. Real-world decision-making involving risks, benefits, and precautions is often complicated and controversial. If it were not, then simple rules like “better safe than sorry” and “do no harm” would be adequate to the task at hand. But, as we have seen in this book, these rules seldom provide the kind of specific guidance one needs to make decisions related to environmental and public health risks.
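The desiderata-based evaluation described in this section can be loosely pictured as a screening procedure. The sketch below is purely illustrative and not part of the account itself: the four criteria are qualitative judgments reached through deliberation, so the yes/no values, option names, and function names here are all hypothetical stand-ins.

```python
# Illustrative sketch only: the four desiderata are not computable properties.
# The booleans below stand in for deliberative verdicts by decision-makers.

CRITERIA = ["proportionality", "fairness", "epistemic_responsibility", "consistency"]

def reasonableness_profile(judgments):
    """Count how many desiderata a precautionary measure satisfies.

    The criteria are desiderata, not necessary or sufficient conditions:
    a measure may still be reasonable without satisfying all of them.
    """
    return sum(judgments.get(c, False) for c in CRITERIA)

def most_reasonable(options):
    """Pick the option that best satisfies the criteria within one context.

    Reasonableness judgments are context-dependent and may not be
    transitive across decision problems, so this ranking only holds
    within a single decision context.
    """
    return max(options, key=lambda name: reasonableness_profile(options[name]))

# Hypothetical judgments about three measures for one decision context.
options = {
    "avoid":    {"proportionality": False, "fairness": True,
                 "epistemic_responsibility": True, "consistency": True},
    "minimize": {"proportionality": True, "fairness": True,
                 "epistemic_responsibility": True, "consistency": True},
    "mitigate": {"proportionality": True, "fairness": False,
                 "epistemic_responsibility": True, "consistency": False},
}

print(most_reasonable(options))  # 'minimize' satisfies all four desiderata
```

As footnote 15 cautions, a tally like this is only meaningful within a single decision context; nothing guarantees that the same option would come out "most reasonable" under different circumstances, values, or alternatives.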

4.6 Criticism #3: The Precautionary Principle Is Opposed to Science, Technology, and Economic Development

Ever since the PP was first formulated, critics have argued that it is opposed to science, technology, and economic development (Holm and Harris 1999; Brombacher 1999; Goklany 2001; Sunstein 2005). The PP, according to critics, obliges us to put the brakes on technologies or areas of scientific inquiry that could cause harm to public health, the environment or society. Since almost all new technologies and scientific ideas may cause some harm to public health, the environment, or society, adopting the PP as a rule for making public decisions related to risk would stifle scientific discovery, technological innovation, and economic development.26 If we followed the dictates of the PP, we would never have developed automobiles, airplanes, telephones, nuclear energy, computers, or the scientific ideas behind them, such as chemical bonding, aerodynamics, electromagnetism, quantum physics, or symbolic logic.27

24 The idea here is similar to a legal doctrine known as stare decisis (Kozel 2010). 25 This type of consistency is similar to the method of reflective equilibrium discussed in Chap. 4. 26 I am assuming that science and technology are key factors in economic development. There are also other important factors, such as well-functioning legal and banking systems, natural resources, capital, and labor. 27 The relationship between science and technology is complex. Although scientific theories, concepts, and principles often lead to technological advancements, technology also progresses independently of science. Technology and science are fundamentally related, but technology is not merely applied science (Radder 2019).


Before addressing this objection, we should note that it assumes that scientific and technological progress and economic development are good things. While most people in the world agree with this assumption, not all do. Some radical environmentalists argue that we need to keep science, technology, and economic development in check to prevent the rapidly expanding human population from threatening other species, ecosystems, biodiversity, natural resources, and the entire biosphere (Woodhouse 2018). Others are religiously or philosophically opposed to scientific and technological development. The Amish, for example, avoid the use of modern technology and do not educate their children past the eighth-grade level (Kraybill et al. 2018). In this book, I will assume that science and technology are neither inherently good nor inherently bad, but that they can be used for good or bad purposes and can have good or bad effects.28 Recombinant DNA techniques, for example, may be used to develop treatments for cancer or biological weapons. Automobiles can provide people with a means of transportation for work, school, shopping, sightseeing, and recreation, but they also generate emissions that harm human health and the environment. To develop and use science and technology responsibly, we need to conceive of ways of minimizing their harms and maximizing their benefits, a task that is a main theme of this book. I will assume, however, that economic development is generally a good thing, provided that policies and institutions are in place to protect public health and the environment and distribute economic benefits fairly. Economic development contributes to many beneficial outcomes such as employment; access to food, housing, and health care; education; and transportation. Economic stagnation or decline can lead to unemployment, poverty, famine, crime, and other negative outcomes (Samuelson and Nordhaus 2009).
So, if we are concerned that a policy may have negative economic impacts, it is important to realize that these concerns reflect more than a capitalistic/materialistic outlook; they are rooted in a concern for the overall wellbeing of people in society. Some versions of the PP, such as the Wingspread Statement, are in direct opposition to science, technology, and economic development, because they advise us to take precautionary actions to prevent possible threats to public health or the environment. If these precautionary actions include banning a technology or activity that could produce these harms, then the PP would be a highly risk-averse principle similar to the maximin rule for making decisions under ignorance (Hansson 1997; Gardiner 2006; Munthe 2011).29 However, including the requirements that harms are plausible and serious and that precautions are reasonable helps to dispel the charge that the PP is opposed to science, technology, and progress, because these requirements set minimal standards for risks and instruct us to consider the benefits we may forego (or

28 For further discussion of the relationship between science, technology, and human values see Radder (2019). 29 As noted in Chap. 1, banning or prohibiting something is a form of risk avoidance.


harms we may not prevent) if we implement precautionary measures (Resnik 2001; Sandin et al. 2002). Indeed, if we consider the full range of benefits and harms and precautionary measures related to decisions involving science or technology, the PP could advise us to develop new technologies and areas of scientific inquiry in order to prevent harms (Foster et al. 2000; Engelhardt and Jotterand 2004; Kaebnick et al. 2016; Hofmann 2020). For example, though some have appealed to the PP to ban GM crops (Tait 2001), one might argue that the PP requires us to develop GM crops to prevent starvation and malnutrition (Resnik 2012). Likewise, one could use the PP as a rationale for slowing down the development of geoengineering technologies (Elliott 2010), or for moving forward with them to mitigate climate change (Resnik and Vallero 2011).

4.7 Defining the Precautionary Principle

Having addressed three main critiques of the PP, I will now state a version of this principle that deals with these objections and can be fruitfully applied to situations involving precautionary reasoning. I define the PP as follows:

Precautionary Principle: In the absence of the degree30 of scientific evidence required to establish accurate31 and precise32 probabilities for outcomes related to a decision,33 take reasonable precautionary measures to avoid, minimize, or mitigate plausible and serious harms.

This definition includes the elements discussed in the previous sections, including minimal standards for harm and scientific evidence and the requirement that precautionary measures are reasonable. The definition also makes it clear that the PP applies when scientific evidence for probabilities is inconclusive, and it therefore distinguishes between the PP and EUT. As discussed above, a precautionary measure is reasonable insofar as it manages risks and benefits in accordance with standards of proportionality, fairness, consistency, and epistemic responsibility; harms include the full range of harms that people care about, such as harms to public health, the environment, the economy, and society; and harms are plausible if they are consistent with well-established scientific facts, hypotheses, laws, models, or theories. The requirement that harms are plausible distinguishes the PP from rules for decision-making under ignorance. 30 I am not committed to the word ‘degree.’ One could substitute other words here that would have the same effect, such as ‘amount’ or ‘quantity.’ The degree of evidence to establish accurate and precise probabilities is determined by the relevant scientific community. 31 By ‘accurate’ I mean tending to be true or correct. If I correctly predict whether it will rain 90% of the time, then I am an accurate forecaster. 32 By ‘precise’ I mean exact or specific. Asserting that there is a 95% chance that it will rain tonight is a precise probability estimate; saying that it is likely to rain tonight is not. To apply expected utility theory to a decision, probabilities must be precise. 33 A decision could be a choice by an individual or a group.


Commentators have distinguished between stronger and weaker versions of the PP, where stronger versions are more risk-averse than weaker ones (Munthe 2011; Steel 2015). With this in mind, one could modify my definition of the PP to generate versions of the PP which are more (or less) risk-averse. For example, we could state a risk-averse version as follows: Precautionary Principle (risk-averse): In the absence of the degree of scientific evidence required to establish accurate and precise probabilities for outcomes related to a decision, take reasonable precautionary measures to avoid plausible and serious harms.

By eliminating risk minimization and mitigation as options, we transform the PP into a risk-averse principle, since it only gives one the option of avoiding risks. If we eliminate the word ‘reasonable’ from the definition, we can state a highly risk-averse version of the PP: Precautionary Principle (highly risk-averse): In the absence of the degree of scientific evidence required to establish accurate and precise probabilities for outcomes related to a decision, take precautionary measures to avoid plausible and serious harms.

There is a problem with this principle, however, because, as discussed above, it can lead to incoherence, since it might instruct us to take conflicting actions to avoid harms (e.g. building vs. not building a dam, discussed above). So, this version fails to provide us with useful guidance. Going in the opposite direction, we could also state a risk-taking version of the PP as follows: Precautionary Principle (risk-taking): In the absence of the degree of scientific evidence required to establish accurate and precise probabilities for outcomes related to a decision, take reasonable precautionary measures to minimize or mitigate plausible and serious harms.

By removing risk avoidance as a precautionary measure, this version of the PP is committed to some form of risk-taking. Basically, it acknowledges that harms will occur and that all we should do is minimize or mitigate them. If we remove risk minimization as an option, we could state a highly risk-taking version as follows: Precautionary Principle (highly risk-taking): In the absence of the degree of scientific evidence required to establish accurate and precise probabilities for outcomes related to a decision, take reasonable precautionary measures to mitigate plausible and serious harms.

This version of the PP could provide us with useful guidance, but it appears to be an unreasonable rule, because it does not even tell us to minimize risks. Most people, even those who like to take risks to gain benefits, acknowledge that it is a good idea to try to minimize risks. It would be unreasonable to skydive, for example, without even making sure that one’s parachute is working properly. Although the word ‘reasonable’ is in the definition, it is not clear how reasonable one can be when dealing with risks when mitigation is the only option that one will consider.
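One way to see how these variants relate is to note that they differ along two simple dimensions: which precautionary measures they permit, and whether they retain the reasonableness requirement. The following sketch is my own schematic rendering of the definitions above, not anything from the text itself; the data structure and function names are hypothetical.

```python
# My schematic rendering (not Resnik's formalization): each variant of the PP
# is characterized by (allowed measures, whether precautions must be reasonable).

PP_VARIANTS = {
    "standard":           ({"avoid", "minimize", "mitigate"}, True),
    "risk-averse":        ({"avoid"}, True),
    "highly risk-averse": ({"avoid"}, False),  # drops the 'reasonable' qualifier
    "risk-taking":        ({"minimize", "mitigate"}, True),
    "highly risk-taking": ({"mitigate"}, True),
}

def permitted_measures(variant, proposed):
    """Filter a list of proposed precautionary measures by what the variant allows."""
    allowed, _requires_reasonable = PP_VARIANTS[variant]
    return sorted(allowed & set(proposed))

proposed = ["avoid", "minimize", "mitigate"]
print(permitted_measures("risk-taking", proposed))  # ['minimize', 'mitigate']
```

The schema also makes the text's diagnosis visible: the "highly risk-averse" variant is the only one that abandons the reasonableness requirement, which is precisely what exposes it to the incoherence objection.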


4.7.1 The Precautionary Principle, Decision Theory, and Moral Theory

Now that I have articulated what I believe to be a clear and coherent version of the PP, it will be useful to consider its relationship to decision theory and moral theory. The most important difference between the PP and the rules of decision theory is that the PP does not assume a ranking (or measurement) of values (or utilities). That is, one can apply the PP to a decision problem without already having a prior ranking or measurement of values, which is not the case in decision theory. Rankings of values emerge from applying the PP’s concept of reasonableness to decisions. However, since reasonableness is context-dependent, a value that is given high priority in one decision might be given a lower priority in another decision. Thus, the PP does not generate an overall ranking of values. As we saw in Chap. 2, establishing a ranking or measurement of utilities can be difficult or impossible when we face moral (or value) uncertainty, and is a significant practical problem for applying rules from decision theory to real-world problems. One of the strengths of the PP is that it addresses risks and benefits but does not assume a prior ranking or measurement of values or utilities. The PP deals with moral uncertainty because precautionary measures should be reasonable, and reasonableness is a function of the proportionality and fairness of risks and benefits. Some authors (e.g. Hansson 1997; Gardiner 2006) argue that the PP is a form of maximin, but there are important differences between the PP and maximin. First, the PP addresses plausible harms, whereas maximin addresses possible harms. To apply maximin to real-world problems, one must eliminate implausible outcomes, but the PP already does this. Second, the PP considers benefits, whereas maximin does not. If one eliminates ‘reasonable’34 from the definition of the PP, then it, like maximin, would focus only on harms.
However, as noted above, removing ‘reasonable’ from the definition of the PP can lead to incoherence, because the PP may recommend conflicting courses of action to deal with harms. This problem could be avoided if the PP instructed one to avoid the worst of all plausible harms, in which case the PP would look more like maximin. However, I am not aware of any formulations of the PP that focus only on the worst harm, so it is not clear that this interpretation of the PP would capture the main idea behind the PP, i.e. that we should take preventative action to deal with serious harms, not just the worst harm. Chisholm and Clarke (1993) argue that the PP is a version of the minimax regret rule, but I would argue that it is very different from this rule for two reasons. First, as noted in Chap. 2, regret is calculated by subtracting the outcome for a given option

34 The word ‘reasonable’ does not appear in either the Rio Declaration version of the PP or the Wingspread Statement. However, both of these versions of the PP make implicit reference to benefits, not just harms. The Rio Declaration includes the term ‘cost-effective,’ which implies some concern about economic costs and benefits, and the Wingspread Statement says the application of the PP must be ‘democratic’ and address the concerns of ‘affected parties.’ These phrases imply that some consideration of harms and benefits must occur in the application of the PP, since affected parties are likely to be concerned about how they will be harmed or benefited by decisions.


from the highest outcome among the other options. The minimax regret rule involves minimizing lost opportunities while minimizing losses. However, if there is a very high possible outcome for the decision, one might choose an option with a very bad possible outcome in order to avoid a lost opportunity. This type of reasoning goes against the main idea behind the PP, which involves minimizing losses in a reasonable way. Second, minimax regret focuses on the option with the lowest regret as an outcome. However, to make a reasonable decision one often may need to consider the wide range of options and possible outcomes. The PP does this, but the minimax regret rule does not. The PP may have more in common with decision rules that address overall benefits and harms, such as the principle of indifference and EUT (Sandin 2004). However, the PP is also different from these rules because it makes no assumptions about the probabilities or utilities of harms. The principle of indifference assumes that all outcomes are equally probable, including good and bad ones (see Chap. 2 for discussion). EUT involves the assignment of accurate and precise probabilities to different outcomes based on evidence. However, as we have seen, the purpose of the PP is to provide guidance when scientific evidence is not conclusive enough to assign accurate and precise probabilities to outcomes. So, the PP is different from EUT, including EUT’s applications in risk management and cost-benefit analysis. Indeed, as noted above, the main reason the PP was developed in the first place was to provide an alternative to the risk management and cost-benefit analysis approaches used by governments to make environmental and public health decisions. The PP is also unlike these rules insofar as it does not involve quantitative assessments of harms and benefits.
That is, the PP involves a comparison of harms and benefits, but the comparison is qualitative or holistic, rather than quantitative.35 The moral theories discussed in Chap. 3 provide broad support for the PP, although none of them include the PP as a moral principle.36 All the moral theories imply that we have duties to avoid intentionally or knowingly causing harm, although they characterize harms and benefits in different ways. Utilitarians, for example, define benefits and harms in terms of happiness, preference satisfaction or some other common value; natural law theorists define benefits and harms in terms of natural goods, such as life, health, and social relationships; and environmental ethicists relate benefits and harms to impacts on other species, habitats, ecosystems, and the biosphere. We also saw, however, that most of these major moral theories have difficulty providing useful guidance for balancing possible benefits and possible harms (or risks) reasonably. Kantians, for example, hold that we should balance benefits and risks by acting according to a maxim that could be universalized, but this does little to narrow our choices because there are many ways of balancing benefits and risks that could pass this test. Virtue theorists hold that we should use practical wisdom to balance benefits and risks, but this guidance provides us with little instruction as to

35 For examples of qualitative risk assessment, see Fletcher (2005) and Han and Weng (2011). Peterson (2007b) views the PP as qualitative rather than quantitative. 36 See Munthe (2011) for additional discussion of the relationship between the PP and moral theories.


how we should balance benefits and risks in actual cases. Natural law theorists hold that we should use the doctrine of double effect to balance benefits and risks, but there are problems with interpreting and applying this principle, such as determining whether bad effects are intended and whether bad effects are proportional to good effects. Natural rights theorists would hold that we should balance benefits and risks in a manner that respects human rights, but this guidance does not tell us much at all about what makes risks reasonable, since one could impose significant risks on others without actually violating their rights. Utilitarianism does provide us with useful guidance for balancing benefits and risks reasonably. However, as we saw in Chap. 3, utilitarianism is a controversial view that faces several significant objections. The version of utilitarianism that answers these objections fairly well is a type of rule-utilitarianism that recognizes a plurality of goods.37 It is conceivable that something like the PP would be part of a system of rules that maximize different personal, social, and environmental goods. However, I see no reason why the PP must be linked to utilitarianism. Indeed, others have argued for the PP on deontological and democratic grounds (Whiteside 2006; John 2007; Munthe 2011). The important point to draw from this discussion is that the PP is neither a rule of formal decision theory nor a principle of moral theory. Rather, the PP is best understood as a confluence of ideas from decision theory and moral theory. It is like a rule from decision theory insofar as it provides us with guidance for making decisions, and it is like a moral principle insofar as it requires us to balance and prioritize values (i.e. risks and benefits) in a reasonable way (Steele 2006).
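To make the contrast with maximin and minimax regret concrete, here is a small worked example of decision-making under ignorance, using made-up utilities of my own rather than anything from the text. It illustrates the point made above: minimax regret can favor an option with a very bad worst case in order to avoid a lost opportunity, whereas maximin never does.

```python
# Illustrative payoff table (hypothetical utilities): rows are options,
# columns are states of the world. No probabilities are assumed, so this
# is decision-making under ignorance (see Chap. 2).

payoffs = {
    "A": [0, 100],   # terrible worst case, huge upside
    "B": [10, 12],   # safe but modest in every state
}

def maximin(table):
    """Choose the option whose worst outcome is best."""
    return max(table, key=lambda o: min(table[o]))

def minimax_regret(table):
    """Choose the option whose maximum regret is smallest.

    Regret in a state = best payoff available in that state minus the
    option's payoff in that state.
    """
    n_states = len(next(iter(table.values())))
    best = [max(table[o][s] for o in table) for s in range(n_states)]
    return min(table, key=lambda o: max(best[s] - table[o][s] for s in range(n_states)))

print(maximin(payoffs))         # 'B': worst cases are 0 vs. 10
print(minimax_regret(payoffs))  # 'A': maximum regrets are 10 vs. 88
```

Here minimax regret picks option A, accepting a possible payoff of 0 to avoid missing out on 100, which is exactly the type of reasoning the text argues goes against the loss-minimizing spirit of the PP.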

4.8 Other Interpretations of the Precautionary Principle

The PP, as I understand it, is a rule (or principle) for making decisions about risks and benefits. While most commentators and organizations interpret the PP in this way, not all do. Steel (2015) and Koplin et al. (2020) distinguish between three different ways of interpreting the PP: (1) as a rule for making practical decisions; (2) as a rule for establishing the burden of proof; and (3) as a procedural rule. Commentators have proposed the second and third interpretations of the PP in response to problems with the practical decision interpretation. Since I favor the first interpretation, I will critique the other two. According to the burden of proof interpretation, the PP is an epistemological rule for making decisions about forming beliefs (or accepting statements) rather than a rule for making practical decisions (Peterson 2006, 2007a, b; Munthe 2011; Steel 2015). The PP shifts the burden of proof for beliefs or statements toward those who impose risks on others (O’Riordan et al. 2001; Sandin 2004; John 2007; Hansen et al. 2007). As noted above, under the TSCA, existing toxic substances are presumed safe until the EPA has conclusive evidence that they are not. The burden of proof is on the

37 See Brink (1989) for a defense of this type of view.


EPA to prove that a chemical is unsafe, rather than on the manufacturer to prove that it is safe. The situation is very different for pharmaceuticals in the US, where the burden of proof is on manufacturers to prove to the FDA that their drugs are safe, rather than on the FDA to prove that they are unsafe. One could argue that the US’ approach to regulating drugs is precautionary because it preemptively avoids risks, but that its approach to regulating toxic substances is risk-taking because risks are acted on after they materialize (Krimsky 2017). The European Union’s approach to chemical regulation, known as Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), is more precautionary than the US approach because it requires manufacturers to provide some evidence of safety to regulators as part of the registration process (Hansen et al. 2007).38 Clearly, many proponents of the PP have wanted to shift the burden of proof of safety to scientists, engineers, and companies that develop new technologies to protect society and the environment from harm. The Wingspread Statement (quoted above) explicitly refers to shifting the burden of proof. While placing the burden of proof on those who impose risks on society or the environment would lead to precautionary actions or policies, there are several problems with this interpretation of the PP. First, not all events involving precautions result from the development of new technologies. Many of the precautions we take are related to preparing for and responding to natural disasters.39 For example, meteors have hit the Earth numerous times during its history. Though most meteors are small pieces of rock that vaporize in the Earth’s atmosphere, some are large enough to cause significant damage to the environment or human populations.
For instance, scientists have compelling evidence that an asteroid impact in the Yucatan Peninsula is largely to blame for mass extinctions of plants and animals (including the dinosaurs) that took place 66 million years ago (Schulte et al. 2010). One could argue that it would be reasonable to take precautions to address possible devastating future impacts, such as cataloging and monitoring asteroids and other large objects in our solar system, and developing rockets that can intercept and redirect or destroy these objects (National Science & Technology Council 2018). The PP would also seem to apply to decisions related to other natural disasters, such as hurricanes, earthquakes, tsunamis, volcanic eruptions, and pandemics.

Second, the burden of proof interpretation of the PP would provide little guidance for dealing with human technologies or activities that involve many different types of risks imposed by many different actors, including corporations, governments, and perhaps millions of individuals. For example, it is not at all clear who should have the burden of proof with respect to air pollution, water pollution, or climate change, because literally billions of individuals impact these problems in one way or another. Third, the burden of proof interpretation of the PP would provide little guidance for dealing with risks related to complex human systems or activities that do not necessarily involve technologies. For example, it is not at all clear how shifting the

38 Hansen et al. (2007) argue that REACH is not precautionary enough.
39 I discuss some of these issues in Chap. 9.


burden of proof could help us manage risks related to global trade, banking and finance, disease epidemics or pandemics, or racism. Fourth, shifting the burden of proof does not involve the nuanced balancing of benefits and risks that is involved in precautionary reasoning (Munthe 2011). For example, in deciding whether to shift the burden of proof toward chemical manufacturers we should have an informed debate about whether this form of precaution is worth the costs it would impose on chemical manufacturers and consumers. Having this type of discussion is part of what it means to take reasonable precautions and should precede any decision to impose a burden of proof on manufacturers. None of the foregoing implies that shifting the burden of proof has no place in precautionary reasoning. I would argue that shifting the burden of proof is best understood as a type of precautionary measure we should consider taking in some situations. It is an application of the PP, but not the PP itself. For example, the US has decided that requiring pharmaceutical manufacturers to prove that their products are safe prior to marketing is a reasonable precaution to protect the health of the public, since we have substantial evidence that most drugs are inherently dangerous if not used properly and that the benefits to the public are worth the costs to manufacturers and consumers. Requiring chemical manufacturers to prove that their products are safe prior to marketing may or may not be a reasonable precaution, depending on the evidence we have concerning their risks to the public and the costs this would impose on manufacturers and consumers. It might be the case that it would be reasonable to impose a burden of proof on manufacturers for some types of chemicals, such as flame retardants used in clothing or furniture, but not others, such as automobile lubricants. 
According to the procedural-rule interpretation, the PP is a rule for determining the kinds of procedures we should follow in making decisions about risks, but it does not provide us with any specific guidance for practical or policy decisions about risks (Steel 2015). For example, the idea that lack of scientific evidence related to the outcomes of decisions should not be a reason for inaction in the face of serious public health or environmental risks is a procedural rule for making decisions, because it tells us how we should make decisions, but it does not tell us what types of decisions to make (Steel 2015). The idea that we should base all public and environmental health policy decisions on the best scientific evidence, not on speculation, would also be a procedural rule. Along the same lines, Steele (2006)40 argues that the PP is not a rule for making decisions but a set of guidelines for formulating decision problems and weighing evidence.

The problem with these interpretations of the PP is that they eviscerate the principle and leave us with a rule that does not provide clear guidance for making practical decisions. These interpretations of the PP would be compatible with extreme risk-avoidance, risk-taking, or something in between. Moreover, if most people understood the PP merely as a set of procedures, it probably would not have generated much controversy and I would not be writing this book. The PP has

40 Daniel Steel and Katie Steele should not be confused.


generated a great deal of controversy largely because it is a rule for avoiding, minimizing, or mitigating risks. That being said, procedural rules are an important part of the PP and its application to specific problems, but they are not the PP itself, since the PP should also provide guidance for decision-making (European Commission 2001, 2017; Munthe 2011).

4.9 Applying the Precautionary Principle

In Chaps. 6–9, I will apply my approach to precautionary reasoning to a variety of issues in environmental and public health policy. In this chapter, I will describe a method for applying the PP and illustrate it with examples. The decision tree in Fig. 4.1 outlines the method.
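Before turning to the cases, the logic of the decision tree can be sketched in code. This is only an illustrative paraphrase of the method: each parameter stands for a yes/no judgment that decision-makers must supply through deliberation, not something an algorithm can compute for them.

```python
def apply_pp(serious_and_plausible, can_avoid, reasonable_to_avoid,
             can_minimize, reasonable_minimization,
             can_mitigate, reasonable_mitigation):
    """Walk the decision tree of Fig. 4.1; every argument is a yes/no judgment."""
    if not serious_and_plausible:
        return "stop; consider other methods for dealing with the harm"
    if can_avoid and reasonable_to_avoid:
        # Avoidance is primary; minimization/mitigation stay in reserve.
        return "avoid the harm; keep minimization and mitigation in reserve"
    if can_minimize and reasonable_minimization:
        return "minimize the harm; keep mitigation in reserve"
    if not can_mitigate:
        return "stop; precaution is futile and therefore not reasonable"
    if reasonable_mitigation:
        return "implement reasonable ways of mitigating the harm"
    return "stop; precaution is not reasonable"
```

For example, a harm that is serious and plausible but can be neither avoided, minimized, nor mitigated yields the "precaution is futile" verdict; a harm that fails the seriousness-and-plausibility test drops out of the PP's scope at the first step.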

4.9.1 Case 1: Changing Jobs

Jane is a software engineer for a large company that develops computer programs for cell phones. She has a husband, Jake, and two children, Michael and Marsha, ages 12 and 8. Jake works as a manager for an office supply company. They live in Raleigh, North Carolina. Jane has been offered a job working for another, smaller company in Denver, Colorado. The job pays 20% more money (adjusted for cost of living) than her current job and has increased responsibilities. She may need to do a significant amount of travel in the new job. She and Jake love the Western US and are thrilled about the possibility of moving out there. Taking the new job presents the family with great financial, personal, educational, and cultural opportunities/benefits, but there are also some risks. Some of the opportunities/benefits include:

• More money, which increases opportunities for education, travel, and so on.
• Doing a different job, which may be interesting and challenging.
• Living in a different and beautiful part of the country, experiencing new places.
• Meeting new people.

Some of the possible harms include:

• The risk of moving their children to a new part of the country and new school system. How would they adjust? Would they make new friends? Would they like their new school? Would the school be as good?
• The risk of Jane's new job. The job offers more money, but would Jane be as happy at her new job as she is at her current one? How stable is the company? Jane's current employer is very stable. The new company is smaller and might not be as stable. Could the company run into financial trouble?


Is the possible harm serious and plausible?
- No: stop; consider other methods for dealing with the harm.
- Yes: Can the harm be avoided?
  - Yes: Is it reasonable to avoid the harm?
    - Yes: avoid the harm and consider measures to minimize or mitigate harm if avoidance does not work.
    - No: go to "Can the harm be minimized?"
  - No: Can the harm be minimized?
    - Yes: Are there reasonable ways of minimizing the harm?
      - Yes: implement reasonable ways of minimizing the harm and consider measures to mitigate the harm if minimization does not work.
      - No: go to "Can the harm be mitigated?"
    - No: Can the harm be mitigated?
      - Yes: Are there reasonable ways of mitigating the harm?
        - Yes: implement reasonable ways of mitigating the harm.
        - No: stop; precaution is not reasonable.
      - No: stop; precaution is futile and therefore not reasonable.

Fig. 4.1 Decision tree for applying the precautionary principle

• The risks for Jake’s employment opportunities. Jake is likely to find work in the Denver area, but would he like his new job as much as he likes his current one? Would his new job pay as much? • The risks of buying and selling houses. Would they be able to sell their house for the what it’s worth in a timely fashion? Would they have to pay for two mortgages for a time period? Will they find a new house that they like and can afford? Jane has at least three options: • Decline the job offer and keep her old job. • Take the new job.


• Inform her current employer that she has been offered a job at a 20% higher salary and negotiate for a salary increase at her current job.

The first option is the safest one. It has fewer risks than the other options, but it also offers fewer opportunities/benefits. The second option is the riskiest one. In the worst-case scenario, Jane could hate her new job, the company could go bankrupt, Jake could fail to find a satisfactory job, the children could hate their new schools and be unable to make new friends, and they could have trouble selling their old home and buying a new one. The second option also has the most opportunities/benefits, as noted above. The third option is fairly safe, but it also has some risks. For example, Jane's supervisor might refuse to increase her salary and might be resentful that she asked for a raise and has been thinking about changing jobs. This anger/resentment might negatively impact Jane's work with her current employer.

The PP would offer Jane and her family a useful way of approaching this decision. Jane could start by considering the most serious and plausible harms related to her decision, such as her children having trouble adjusting to a new school and environment, her husband having difficulty finding a job, and problems with buying and selling their houses. Next, she would need to consider whether these harms can be avoided. These harms can be avoided if Jane does not take the job. But would it be reasonable to not take the job?

To determine whether not taking the job would be a reasonable precaution, Jane and her family could consider whether this option would comply with the four criteria mentioned above, i.e. proportionality, fairness, consistency, and epistemic responsibility. Concerning proportionality, not taking the job would be reasonable only if Jane and her family decide that the harms avoided by not taking the job outweigh the lost opportunities/benefits.
This is perhaps the most important judgment they must make, because if they decide that the benefits of taking the job outweigh the harms, Jane should take the job. Concerning fairness, a key question would be how the benefits and harms would be distributed among the family members. Not taking the job would probably harm Jane more than the other family members, because she could become frustrated at her current job and resentful that she passed on an opportunity to advance her career. Taking the job could harm the children more than other family members, if they have trouble adjusting to their new school and environment. However, one might also argue that the harms to the children would be short-term, and in the long run they might benefit from the move, if their mother has a better job and they are able to thrive in a new place.

Concerning epistemic responsibility, Jane should make sure that she has all the relevant information before deciding to take the job, because it could significantly impact possible outcomes and her family's overall choice. For example, if the company offering her the job is financially unstable, Jane might decide that it is not worth the risk of leaving a stable work situation for one with less financial security. If the housing market is very poor for sellers, Jane and Jake might decide that she should not take the new job because they do not want to risk having trouble selling


their house. If the employment opportunities for Jake are scarce, Jane might not take the job to avoid the risk of him being unemployed for a long period of time.

Suppose, then, that Jane decides that not taking the job is an unreasonable way of avoiding harm, so she decides to take the job. Having reached this stage of her decision-making, she can still implement precautions to minimize or mitigate possible harms, such as: exploring job opportunities for Jake; learning more about school systems, neighborhoods, parks, shopping areas, recreational activities, and community and religious organizations; preparing their house to be sold; negotiating with her employer about when she needs to start her new job to give her family adequate time to prepare for the move; and discussing the move with the children to help them prepare for it. These precautions would probably meet the proportionality and fairness criteria because they can yield significant benefits for Jane and her family with minimal risks.

4.9.2 Case 2: Autonomous Vehicles

Automobile manufacturers, technology companies, and military research organizations have been working on developing self-driving (autonomous) vehicles since the 1980s. Autonomous vehicles are controlled by onboard computer systems equipped with sensors, algorithms, artificial intelligence software, global positioning systems (GPSs), and large amounts of driving data. Though most vehicles today have onboard computers for controlling different functions, these have limited capabilities. The National Highway Traffic Safety Administration has developed a scale for classifying automation in vehicles. At the lowest level (level 0, no autonomy), the driver has total control and the vehicle has none; at the highest level (level 5, full autonomy), the vehicle's computer system has total control and the driver has none. Automated vehicles fall somewhere in the middle of the scale. Many cars are now equipped with automated features, such as automated cruise control, braking, or parallel parking. These vehicles would be at level 1 or 2 on the automation scale. Though companies, such as Uber and Google, have tested level 5 vehicles under tightly controlled conditions, it may be decades before fully automated vehicles will be marketed and used widely (Mervis 2017).

Some of the opportunities/benefits of autonomous vehicles include:

• Improvements in safety. Driver error is a major factor in most accidents. Autonomous vehicles could make fewer errors than drivers, which would improve safety (Mervis 2017).
• Improvements in driving efficiency. Autonomous vehicles may be able to take more efficient routes and drive more efficiently than human drivers, which could save energy and wear and tear on vehicles (Mervis 2017).
• More convenience. People may find that autonomous vehicles are more convenient than non-autonomous ones. For example, people who are freed from driving may have time to do office work, read books, socialize, or do any number of activities while in the vehicle (Mervis 2017).
• Transportation for disabled drivers. Millions of people who cannot drive due to physical disabilities, such as blindness or paralysis, would have better access to transportation if autonomous vehicles are developed (Claypool et al. 2017). Transportation is important for employment, socialization, health care, and many other activities.
• Improvements in economic efficiency and productivity. Autonomous vehicles used for transporting cargo may perform more efficiently than human drivers, which could cut transportation costs for many goods.

Some of the possible harms include:

• Reduced safety. Driving is a very complex operation, requiring the driver to juggle many different variables, such as driving conditions, pedestrians, traffic, etc. Autonomous vehicles may make more mistakes than human drivers, especially at the earlier stages of development (Mervis 2017).41
• Increased driving. People may use vehicles more often when they are freed from the burden of driving. Increased automobile use would lead to more pollution and energy usage (Mervis 2017).
• Unequal access. Autonomous vehicles may be very expensive, especially at first. It might be the case that only wealthy people can afford them. Unequal access to this technology could exacerbate socioeconomic disparities.
• Reliance on machines, dumbing-down of humanity (Frischmann 2018). As autonomous vehicles become more prevalent, people will depend on them and forget how to drive. One might argue that to maintain our intelligence and mental vigor it is important for us to do some complex tasks without relying on machines. We have already lost many important cognitive skills as a result of relying on calculators, GPS devices, and internet search engines, for example.
Since level 1 and 2 vehicles are already legal in the US and many other countries, decision-making and policy should focus on vehicles at level 3 and above. Some possible policy options include:

• Permanently ban autonomous vehicles above level 2.
• Permit autonomous vehicles above level 2 and begin regulating their production, design, and use to promote public health and safety.
• Institute a moratorium (or temporary ban) on vehicles above level 2, which could be lifted in stages as we move toward regulation.

41 Autonomous vehicles have killed pedestrians. In March 2018, Uber's self-driving car struck and killed Elaine Herzberg, who was walking her bike across the street in Tempe, Arizona. A human driver, Rafaela Vasquez, was riding in the vehicle to prevent an accident if the car did not drive appropriately. The human driver did not react in time to prevent the accident. The National Transportation Safety Board determined that the vehicle's computer system did not recognize the pedestrian, who was jaywalking (Gonzales 2019).


The PP offers us some useful guidance for making policy choices concerning autonomous vehicles. To apply the PP to autonomous vehicles, we should consider the serious and plausible harms related to use of this technology, such as reduced safety, increased driving, and unequal access to the technology. These harms can be avoided by banning autonomous vehicles, but would it be reasonable to do so? One might argue that permanently banning autonomous vehicles above level 2 is unreasonable because it denies consumers and society important benefits, such as improvements in safety, efficiency, and convenience. This option therefore fails to meet the proportionality criterion. One might also argue that this option would be unfair, because it would impact people with physical disabilities that impair driving more than other people. Autonomous vehicles, if permitted, could open up a world of opportunities for disabled people.

If permanently banning autonomous vehicles is not a reasonable precaution, we should consider other ways of addressing harms, such as regulation. One might argue that regulation meets the proportionality criterion because it balances benefits and risks proportionally. A key problem with regulation is that we currently lack enough scientific and technical information and public input to begin developing regulations that balance benefits and risks proportionally. Consequently, we run the risk of under-regulating or over-regulating autonomous vehicles. Both outcomes are less than ideal, since under-regulation fails to prevent significant harms and over-regulation denies society important benefits. Regulation may be the best way of dealing with autonomous vehicles in the future, but not now. The most reasonable option, given the risks and benefits of autonomous vehicles and our scientific and moral uncertainty, would seem to be to institute a moratorium on vehicles above level 2, which could be lifted in stages as we move toward regulation.
This option would seem to balance benefits and risks proportionally. The moratorium could be lifted when we acquire more scientific and technical information about these vehicles and the public has had an opportunity to voice its needs and concerns. When we begin regulating autonomous vehicles above level 2, we should make sure that regulations are carefully and informatively written to maximize benefits and minimize harms.

An important consideration for the development of this technology is whether autonomous vehicles will be widely accessible. History teaches us that it is likely that these vehicles will be very expensive at first, but that costs will decline, due to improvements in product design, manufacturing efficiency, and competition among automakers. Numerous technologies, such as automobiles, radios, televisions, telephones, cell phones, and personal computers, have become widely accessible as costs have gone down.

It will also be important to ensure that all affected parties have meaningful input into policy decisions related to autonomous vehicles. For example, regulations should be based on input from transportation safety experts, automobile manufacturers, technology companies, consumer safety advocates, and the public. Regulations should reflect a consistent rationale for decision-making. Regulations should not be haphazard or driven by economic interests. Finally, regulations should be based on the best available scientific evidence and expertise. They should also be revised, if need be, based on new evidence.


4.10 Usefulness of the Precautionary Principle

As one can see from these examples, the PP can be a useful rule for making decisions about dealing with possible harms and benefits when evidence concerning the probabilities pertaining to different outcomes is lacking. In both cases described above, the decision-makers do not have the level of scientific evidence required to implement expected utility theory or other methods that involve the assignment of accurate and precise probabilities to different outcomes. While Jane knows that her children could have significant difficulty adjusting to a new school and new environment, she does not know how likely it is that this will happen. While government policymakers know that autonomous vehicles could improve traffic safety, they do not know how likely it is that this will happen. As time passes, decision-makers may acquire enough evidence to assign probabilities to different outcomes. However, very often the decisions we face are urgent and we must make them when we lack evidence. For example, Jane's job offer may expire long before she has enough evidence to apply expected utility theory to her decision. In the case of autonomous vehicles, policymakers can enact a temporary moratorium until they have more evidence.

As discussed in Chap. 2, one can also use rules for decision-making under ignorance when one lacks enough evidence to assign probabilities to different outcomes. In Jane's case, she could follow maximin and not take the job to avoid the worst outcomes; she could follow the minimax regret rule and take the job to avoid lost opportunities, depending on how she rates those opportunities; and she could also follow the optimism-pessimism rule and take the job, or not take it, depending on how optimistic or pessimistic she is. In the case of autonomous vehicles, maximin would favor a ban on autonomous vehicles to avoid the worst outcome, i.e.
reduced traffic safety; the minimax regret rule would favor regulating autonomous vehicles to avoid the lost opportunities, depending on how significant policy-makers consider those opportunities to be; and the optimism-pessimism rule would favor regulating autonomous vehicles or banning them, depending on their degree of optimism or pessimism. Other rules, such as the principle of indifference, could also be applied to these decisions. While decision theory offers a useful perspective on these choices, it does not provide enough guidance for making reasonable decisions about possible harms and benefits because it is morally neutral. To use decision theory, decision-makers must therefore draw upon moral values to assign utilities to different outcomes. The PP also relies on moral values for the assessment of outcomes (i.e. harms and benefits), but it provides more guidance for making reasonable decisions than decision theory because some moral values, such as proportionality and fairness, are part of the concept of reasonableness. Moreover, the PP is better than decision theory at coming to terms with moral uncertainty and pluralism, because it requires one to consider and weigh the wide range of values (such as public health, environmental protection, and economic development) that may be impacted by a decision. As we saw in Chap. 2, decision theory operates on these values as if they can be represented and measured via a common currency, i.e. utility.
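The three rules for decision-making under ignorance can be made concrete with a toy payoff table for the autonomous-vehicle choice. The utilities below are invented purely for illustration (they are not values the author assigns), so the recommendations depend entirely on these stipulated numbers:

```python
# Hypothetical utilities for two policy options under two possible futures.
actions = {
    "ban":      {"tech matures": 0, "tech stalls": 0},
    "regulate": {"tech matures": 5, "tech stalls": -4},
}
states = ["tech matures", "tech stalls"]

# Maximin: choose the action with the best worst-case utility.
maximin = max(actions, key=lambda a: min(actions[a][s] for s in states))

# Minimax regret: regret = best achievable in a state minus what you got;
# choose the action whose maximum regret is smallest.
best = {s: max(actions[a][s] for a in actions) for s in states}
minimax_regret = min(
    actions, key=lambda a: max(best[s] - actions[a][s] for s in states))

# Optimism-pessimism (Hurwicz): weight the best case by alpha and the
# worst case by (1 - alpha); higher alpha means greater optimism.
def hurwicz(alpha):
    return max(actions, key=lambda a: alpha * max(actions[a].values())
               + (1 - alpha) * min(actions[a].values()))
```

With these stipulated numbers, maximin picks the ban (worst case 0 versus -4), minimax regret picks regulation (maximum regret 4 versus 5), and the Hurwicz rule flips from the ban to regulation as alpha rises, mirroring the dependence on optimism and pessimism noted above.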


While I think that decision theory has some significant limitations, I do not mean to imply that it is not a useful way of approaching choices concerning possible benefits and harms, since many intelligent and thoughtful people think that it is. However, decision theory may work best when we do not face the scientific and moral uncertainty that impacts public policy choices related to the environment, public health, and the economy. For example, if you know that your most important value is life, then decision theory could be a useful tool for deciding whether to eat a mushroom that could be poisonous. You could follow maximin and not eat it. Or if you know that you are mostly concerned with economic values, then expected utility theory could be a useful tool for making investments. It may be the case, as I shall argue in the next chapter, that different decision-making rules or approaches apply to different circumstances.

One could also apply moral theories to these choices, but, as we saw in Chap. 3, most moral theories do not provide the kind of specific guidance for balancing possible harms and benefits that the PP offers. The most productive way of making these types of decisions is to use a framework that combines insights from moral theory and decision theory. The PP, I contend, does just this.

4.11 Objections and Replies

Before concluding this chapter, it will be instructive to discuss and respond to some possible objections to my account of the PP. The first objection is that my account of the PP is too weak because it does not provide us with enough protection against possible harms related to science, technology, economic development, and other activities. According to this objection, the version of the PP I have defended is an "anything goes" principle that would endorse all kinds of risks, provided that we can justify them in terms of benefits.

I agree that my version of the PP is weaker than some versions, but I maintain that it still provides us with adequate protections against possible harms because it calls our attention to these harms, includes three different strategies for dealing with them, and clearly defines how we can address possible harms reasonably. While other versions of the PP are inherently more risk-averse than my version, my version incorporates different attitudes toward risk because it includes procedural fairness as a criterion for reasonableness. People with different attitudes toward risk can participate in the public debate about risk management. Decision-makers might decide, following an informed debate, that the best course of action is to avoid a possible harm, rather than to minimize or mitigate it. Stronger versions of the PP incorporate risk-aversion into the principle itself, while my version allows attitudes toward risk to impact the debate about the reasonableness of different precautionary measures. Thus, I would characterize my version as risk-neutral rather than risk-averse or risk-seeking. I would also characterize my version of the PP as more democratic than other, risk-averse versions because it does not assume that any particular attitude toward risk must guide public policy.


The second objection is that my account of the PP may not always provide us with clear guidance because decision-makers could judge that two or more options are equally reasonable. For example, Jane and her family could view taking and not taking the job as equally reasonable, and society could view banning and not banning autonomous vehicles as equally reasonable. I agree that the PP does not provide clear guidance when decision-makers regard conflicting options as equally reasonable, but there are ways of dealing with this problem. For example, decision-makers could continue deliberating or voting until one alternative is recognized as more reasonable than the other. Moreover, indecisiveness is not unique to the PP, since most methods of making personal and policy choices may involve a degree of indecisiveness. Although my version of the PP can lead to indecisiveness, it includes a framework for overcoming this problem and arriving at decisions.

The third objection is that my version of the PP is imprecise, since it relies on qualitative assessments of possible harms and benefits. According to this objection, we should use other, more precise methods of decision-making, such as EUT or cost-benefit analysis, for making important social policy choices. I agree that my version of the PP is less precise than other rules for decision-making, but it is more precise than other versions of the PP because I have clearly defined important terms, such as 'plausibility,' 'serious harm,' and 'reasonableness.' Moreover, I would argue that the type of imprecision inherent in my version of the PP is a strength rather than a weakness because it allows decision-makers to deliberate about how best to deal with possible harms and benefits when they lack the level of scientific evidence needed to use rules that operate on quantitative assessments of harms (or risks) and benefits.
It is better to use qualitative reasoning methods for making decisions when quantitative methods are unsuitable for the situation than to use quantitative methods inappropriately. To quote Aristotle (1985: 3), “our discussion will be adequate if it fits its subject matter; we should not seek the same level of exactness in all arguments alike, any more than in the products of different crafts.”

4.12 Conclusion

In this chapter, I articulated what I believe to be a useful version of the PP, applied it to some cases, and defended it against objections. In the next chapter, I will expand upon the account of precautionary reasoning that I described briefly in Chap. 1 and show how the PP and other principles of decision-making fit into it.


References

Ackerman, F. 2008. Poisoned for Pennies: The Economics of Toxics and Precaution. Washington, DC: Island Press.
Aristotle. 1985 [340 BCE]. Nicomachean Ethics (T. Irwin, Trans.). Indianapolis, IN: Hackett.
Beauchamp, T.L., and J.F. Childress. 2012. Principles of Biomedical Ethics, 7th ed. New York, NY: Oxford University Press.
Bodansky, D. 1991. Scientific Uncertainty and the Precautionary Principle. Environment 33 (7): 43–44.
Brink, D.O. 1989. Moral Realism and the Foundations of Ethics. Cambridge: Cambridge University Press.
Brombacher, M. 1999. The Precautionary Principle Threatens to Replace Science. Pollution Engineering (Summer): 32–34.
Chisholm, A., and H. Clarke. 1993. Natural Resource Management and the Precautionary Principle. In Fair Principles for Sustainable Development: Essays on Environmental Policy and Developing Countries, ed. E. Dommon, 109–122. Cheltenham, UK: Edward Elgar.
Claypool, H., A. Bin-Nun, and J. Gerlach. 2017. Self-Driving Cars: The Impact on People with Disabilities. Ruderman Family Foundation White Paper. Available at https://rudermanfoundation.org/wp-content/uploads/2017/08/Self-Driving-Cars-The-Impact-on-People-with-Disabilities_FINAL.pdf. Accessed 18 Jan 2021.
Cranor, C. 2001. Learning from Law to Address Uncertainty in the Precautionary Principle. Science and Engineering Ethics 7: 313–326.
Cranor, C. 2011. Legally Poisoned: How the Law Puts Us at Risk from Toxicants. Cambridge, MA: Harvard University Press.
Crichton, M. 2002. Prey. New York, NY: Harper Collins.
Elliott, K.C. 2010. Geoengineering and the Precautionary Principle. International Journal of Applied Philosophy 24: 237–253.
Elliott, K.C. 2011. Nanomaterials and the Precautionary Principle. Environmental Health Perspectives 119 (6): A240.
Engelhardt, H.T., and F. Jotterand. 2004. The Precautionary Principle: A Dialectical Reconsideration. Journal of Medicine and Philosophy 29 (3): 301–312.
European Commission. 2000. Communication from the Commission on the Precautionary Principle. Available at https://publications.europa.eu/en/publication-detail/-/publication/21676661-a79f-4153-b984-aeb28f07c80a/language-en. Accessed 18 Jan 2021.
European Commission. 2017. Science for Environment Policy: The Precautionary Principle: Decision Making Under Uncertainty, Future Brief 18. Available at https://ec.europa.eu/environment/integration/research/newsalert/pdf/precautionary_principle_decision_making_under_uncertainty_FB18_en.pdf. Accessed 18 Jan 2021.
Fletcher, W.J. 2005. The Application of Qualitative Risk Assessment Methodology to Prioritize Issues for Fisheries Management. ICES Journal of Marine Science 62 (8): 1576–1587.
Foster, K.F., P. Vecchia, and M.H. Repacholi. 2000. Science and the Precautionary Principle. Science 288 (5468): 979–981.
Frischmann, M. 2018. Is Smart Technology Making Us Dumb? Scientific American, December 27, 2018. Available at https://blogs.scientificamerican.com/observations/is-smart-technology-making-us-dumb/. Accessed 19 Jan 2021.
Gardiner, S. 2006. A Core Precautionary Principle. Journal of Political Philosophy 14: 33–60.
Goklany, I.M. 2001. The Precautionary Principle: A Critical Appraisal of Environmental Risk Assessment. Washington, DC: Cato Institute.
Gonzales, R. 2019. Feds Say Self-Driving SUV Did Not Recognize Jaywalking Pedestrian in Fatal Crash. NPR, November 7, 2019. Available at https://www.npr.org/2019/11/07/777438412/feds-say-self-driving-uber-suv-did-not-recognize-jaywalking-pedestrian-in-fatal-. Accessed 19 Jan 2021.
Haack, S. 2003. Defending Science within Reason. New York, NY: Prometheus Books.

Han, Z.Y., and W.G. Weng. 2011. Comparison Study on Qualitative and Quantitative Risk Assessment Methods for Urban Natural Gas Pipeline Network. Journal of Hazardous Materials 189 (1–2): 509–518.
Hannson, D. 1997. The Limits of Precaution. Foundations of Science 2: 293–306.
Hannson, S. 2003. Ethical Criteria of Risk Acceptance. Erkenntnis 59 (3): 291–309.
Hannson, S. 2010. The Harmful Influence of Decision Theory on Ethics. Ethical Theory and Moral Practice 13 (5): 585–593.
Hansen, S.F., L. Carlsen, and J.A. Tickner. 2007. Chemicals Regulation and Precaution: Does REACH Really Incorporate the Precautionary Principle. Environmental Science & Policy 10 (5): 395–404.
Harris, J., and S. Holm. 2002. Extending Human Lifespan and the Precautionary Paradox. Journal of Medicine and Philosophy 27 (3): 355–368.
Hartzell-Nichols, L. 2012. Precaution and Solar Radiation Management. Ethics, Policy, and Environment 15: 158–171.
Hartzell-Nichols, L. 2013. From "the" Precautionary Principle to Precautionary Principles. Ethics, Policy, and Environment 16: 308–320.
Hartzell-Nichols, L. 2017. A Climate of Risk: Precautionary Principles, Catastrophes, and Climate Change. New York, NY: Routledge.
Hawking, S. 1988. A Brief History of Time. New York, NY: Bantam Books.
Hawthorne, F. 2005. Inside the FDA: The Business and Politics Behind the Drugs We Take and the Food We Eat. New York, NY: Wiley.
Hofmann, B. 2020. Progress Bias Versus Status Quo Bias in the Ethics of Emerging Science and Technology. Bioethics 34 (3): 252–263.
Holm, S., and J. Harris. 1999. Precautionary Principle Stifles Discovery. Nature 400 (6743): 398.
Huber, F. 2019. Confirmation and Induction. Internet Encyclopedia of Philosophy. Available at https://www.iep.utm.edu/conf-ind/. Accessed 19 Jan 2021.
Hulme, M. 2009. Why We Disagree About Climate Change. Cambridge, UK: Cambridge University Press.
John, S.D. 2007. How to Take Deontological Concerns Seriously in Risk-Cost-Benefit Analysis: A Re-Interpretation of the Precautionary Principle. Journal of Medical Ethics 33 (4): 221–224.
Kaebnick, G.E., E. Heitman, J.P. Collins, J.A. Delborne, W.G. Landis, K. Sawyer, L.A. Taneyhill, and D.E. Winickoff. 2016. Precaution and Governance of Emerging Technologies. Science 354 (6313): 710–711.
Kitcher, P. 1993. The Advancement of Science. New York, NY: Oxford University Press.
Koplin, J.J., C. Gyngell, and J. Savulescu. 2020. Germline Gene Editing and the Precautionary Principle. Bioethics 34 (1): 49–59.
Kozel, R.J. 2010. Stare Decisis as Judicial Doctrine. Washington and Lee Law Review 67: 411–466.
Kraybill, D.B., K. Johnson-Weiner, and S.M. Nolt. 2018. The Amish. Baltimore, MD: Johns Hopkins University Press.
Krimsky, S. 2017. The Unsteady State and Inertia of Chemical Regulation Under the US Toxic Substances Control Act. PLoS Biology 15 (12): e2002404.
Kuhn, T. 1970. The Structure of Scientific Revolutions, revised ed. Chicago, IL: University of Chicago Press.
Lenman, J. 2018. Moral Naturalism. Stanford Encyclopedia of Philosophy. Available at https://plato.stanford.edu/entries/naturalism-moral/#WhatMoraNatu. Accessed 19 Jan 2021.
Marchant, G. 2002. Biotechnology and the Precautionary Principle: Right Question, Wrong Answer. International Journal of Biotechnology 12 (1): 34–45.
McKinnon, K. 2009. Runaway Climate Change: A Justice-Based Case for Precautions. Journal of Social Philosophy 40: 187–207.
Mervis, J. 2017. Are We Going Too Fast on Driverless Cars? Science, December 14, 2017. Available at https://www.sciencemag.org/news/2017/12/are-we-going-too-fast-driverless-cars. Accessed 19 Jan 2021.

Miller, D. 2017. Justice. Stanford Encyclopedia of Philosophy. Available at https://plato.stanford.edu/entries/justice/. Accessed 19 Jan 2021.
Monteiro-Riviere, N.A., and C.L. Tran (eds.). 2014. Nanotoxicology: Progress Toward Nanomedicine, 2nd ed. Boca Raton, FL: CRC Press.
Munthe, C. 2011. The Price of Precaution and the Ethics of Risks. Dordrecht, Netherlands: Springer.
National Research Council. 2009. Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academies Press.
National Science & Technology Council. 2018. National Near-Earth Object Preparedness Strategy and Action Plan. Available at https://www.whitehouse.gov/wp-content/uploads/2018/06/National-Near-Earth-Object-Preparedness-Strategy-and-Action-Plan-23-pages-1MB.pdf. Accessed 19 Jan 2021.
National Weather Service. 2019. How Dangerous Is Lightning? Available at https://www.weather.gov/safety/lightning-odds. Accessed 19 Jan 2021.
O'Riordan, T., A. Jordan, and J. Cameron (eds.). 2001. Reinterpreting the Precautionary Principle. London, UK: Cameron May.
Peterson, M. 2006. The Precautionary Principle Is Incoherent. Risk Analysis 26 (3): 595–601.
Peterson, M. 2007a. Should the Precautionary Principle Guide Our Actions or Our Beliefs? Journal of Medical Ethics 33 (1): 5–10.
Peterson, M. 2007b. The Precautionary Principle Should Not Be Used as a Basis for Decision-Making. Talking Point on the Precautionary Principle. EMBO Reports 8 (4): 305–308.
Popper, K. 1959. The Logic of Scientific Discovery. London, UK: Hutchinson.
Radder, H. 2019. From Commodification to the Common Good: Reconstructing Science, Technology, and Society. Pittsburgh, PA: University of Pittsburgh Press.
Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
Resnik, D.B. 2001. DNA Patents and Human Dignity. Journal of Law, Medicine, and Ethics 29 (2): 153–165.
Resnik, D.B. 2004. The Precautionary Principle and Medical Decision Making. Journal of Medicine and Philosophy 29: 281–299.
Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press.
Resnik, D.B., and D.A. Vallero. 2011. Geoengineering: An Idea Whose Time Has Come? Journal of Earth Science and Climate Change S1: 001.
Ridge, M. 2019. Moral Non-Naturalism. Stanford Encyclopedia of Philosophy. Available at https://plato.stanford.edu/entries/moral-non-naturalism/#Int. Accessed 2 July 2020.
Samuelson, P.A., and W.D. Nordhaus. 2009. Economics, 19th ed. New York: McGraw-Hill.
Sandin, P. 2004. Better Safe Than Sorry: Applying Philosophical Methods to the Debate on Risk and the Precautionary Principle. Stockholm: Theses in Philosophy from the Royal Institute of Technology.
Sandin, P., M. Peterson, S.O. Hansson, C. Rudén, and A. Juthe. 2002. Five Charges Against the Precautionary Principle. Journal of Risk Research 5 (4): 287–299.
Schulte, P., L. Alegret, I. Arenillas, J.A. Arz, P.J. Barton, P.R. Bown, T.J. Bralower, G.L. Christeson, P. Claeys, C.S. Cockell, G.S. Collins, A. Deutsch, T.J. Goldin, K. Goto, J.M. Grajales-Nishimura, R.A. Grieve, S.P. Gulick, K.R. Johnson, W. Kiessling, C. Koeberl, D.A. Kring, K.G. MacLeod, T. Matsui, J. Melosh, A. Montanari, J.V. Morgan, C.R. Neal, D.J. Nichols, R.D. Norris, E. Pierazzo, G. Ravizza, M. Rebolledo-Vieyra, W.U. Reimold, E. Robin, T. Salge, R.P. Speijer, A.R. Sweet, J. Urrutia-Fucugauchi, V. Vajda, M.T. Whalen, and P.S. Willumsen. 2010. The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous-Paleogene Boundary. Science 327 (5970): 1214–1218.
Science and Environmental Health Network. 1998. Wingspread Statement on the Precautionary Principle. Available at http://www.who.int/ifcs/documents/forums/forum5/wingspread.doc. Accessed 19 Jan 2021.
Search for Extraterrestrial Intelligence Institute. 2019. About. Available at https://www.seti.org/about. Accessed 19 Jan 2021.

Shapere, D. 1966. Plausibility and Justification in the Development of Science. Journal of Philosophy 63 (20): 611–662.
Shrader-Frechette, K.S. 1991. Risk and Rationality: Philosophical Foundations for Populist Reforms. Berkeley, CA: University of California Press.
Shrader-Frechette, K.S. 2007. Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health. New York, NY: Oxford University Press.
Soule, E. 2004. The Precautionary Principle and the Regulation of U.S. Food and Drug Safety. Journal of Medicine and Philosophy 29 (3): 333–350.
Steel, D. 2015. Philosophy and the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Steele, K. 2006. The Precautionary Principle: A New Approach to Public Decision-Making? Law, Probability and Risk 5 (1): 19–31.
Sunstein, C.R. 2005. Laws of Fear: Beyond the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Tait, J. 2001. More Faust than Frankenstein: The European Debate About the Precautionary Principle and Risk Regulation for Genetically Modified Crops. Journal of Risk Research 4 (2): 175–189.
Trouwborst, A. 2006. Precautionary Rights and Duties of States. Leiden: Martinus Nijhoff.
Von Schomberg, R. 2012. The Precautionary Principle: Its Use Within Hard and Soft Law. European Journal of Risk Regulation 2: 147–156.
Wareham, C., and C. Nardini. 2015. Policy on Synthetic Biology: Deliberation, Probability, and the Precautionary Paradox. Bioethics 29 (2): 118–125.
Whiteside, K. 2006. Precautionary Politics: Principle and Practice in Confronting Environmental Risk. Cambridge, MA: MIT Press.
Woodhouse, K.M. 2018. The Ecocentrists: A History of Radical Environmentalism. New York, NY: Columbia University Press.

Chapter 5

Precautionary Reasoning and the Precautionary Principle

In the first four chapters of this book, I have taken the reader on a tour of decision theory and moral theory and examined, critiqued, and defended the precautionary principle (PP). In the first chapter, I made seven key points that form the basis of my approach to precautionary reasoning. In this chapter, I will develop my approach in more detail. First, I will briefly review these key points.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_5

5.1 Foundations of Precautionary Reasoning Redux

Point #1. We usually have an array of precautionary measures we can use to deal with risks, including avoidance, minimization, and mitigation. I discussed and illustrated this point in many different cases in Chapters 1, 2, 3, and 4 and included it in the definition of the PP.

Point #2. We may face risks related to taking action or not taking action. I discussed and illustrated this point in many different cases in Chapters 1, 2, 3, and 4.

Point #3. Precautionary reasoning is inherently normative because risk management has a moral, social, and political dimension. I discussed and illustrated this point in many different cases in Chapters 1, 2, 3, and 4. I argued in Chapter 2 that decision theory is not, by itself, an adequate approach to precautionary reasoning because it is morally neutral. In Chapter 3 I discussed an array of moral values one may use to guide decision-making. I discussed group decision-making in Chapters 3 and 4.

Point #4. The decisions we make concerning risks, benefits, and precautions depend on a variety of contextual factors, including, but not limited to:

• The circumstances (or facts) related to the decision;
• Our available options;
• Our values, which we use to evaluate outcomes related to the options;

• Our knowledge (or lack thereof, i.e. uncertainty) concerning outcomes, including our knowledge of probabilities or causal relationships;
• Our tolerance for risk and uncertainty;
• Interpersonal and social relationships (i.e. whether we are making decisions only for ourselves or for or with other people).

I discussed and illustrated this point in many different cases in Chapters 1, 2, 3, and 4. As we saw in Chapter 2, decisions can be defined in terms of circumstances, options, and evaluations. Our knowledge concerning possible outcomes plays an important role in distinguishing between decisions under ignorance and decisions under risk and in defining and applying the PP. As we saw in Chapter 4, tolerance for risk and uncertainty can play an important role in how we judge the reasonableness of risks. Several of the cases discussed in the previous chapters involved interpersonal and social relationships.

Point #5. There are a variety of rules and procedures we can use to make precautionary decisions. I discussed some of these different rules and principles for decision-making in Chapters 2, 3, and 4, including the PP and rules based on decision theory and moral theory.

Point #6. Which decision-making rule or procedure one should use depends, in large part, on contextual factors related to the decision; and

Point #7. It is reasonable to consider revising decision-making rules or procedures when contextual factors change.

I elaborated on these last two points in Chapter 2 when discussing the transition from decision-making under ignorance to decision-making under risk. I also discussed them in relation to the PP in Chapter 4.

In this chapter, I will develop my approach to precautionary reasoning in greater depth, drawing upon these key points. My main thesis in this chapter is that the PP complements other types of rules and procedures one may use in precautionary reasoning (Tait 2001).
The applicability and usefulness of all rules and procedures, including the PP, depend on contextual factors. When these factors change, it may be appropriate to change our rules and procedures for decision-making.

At the outset, it will be useful to reflect on the crucial point that deciding which decision rule or procedure to use is, itself, a decision (Resnik 1987). Decision theory, for example, describes rules and procedures for decision-making, but it does not tell us which rule or procedure we should use. Decision-makers must decide whether to use maximin when facing decisions under ignorance, or whether to use majority rule for making group decisions. Decision-makers may also need to decide which moral rules to follow (such as the categorical imperative or the doctrine of double effect) or values to promote (such as social utility, virtue, economic development, or environmental protection) when making decisions. There is a potential for an infinite regress here if we think that we must appeal to a rule or procedure to help us decide which rule or procedure to use for making decisions. To avoid a regress, we must therefore make a choice at some point that is not based on acceptance of other rules or procedures.

5.2 Individual Decisions

I will start my analysis by considering some rules and procedures one might use to make precautionary decisions for one's self. These choices are simpler than group decisions because they involve only the circumstances, values, and knowledge of one person, whereas group decisions involve the circumstances, values, and knowledge of many people. Of course, these decisions are not made in a social vacuum, since they may affect other people, but they still fall under the decision-making purview of one person. The insights developed in this section will be applicable to group decision-making.

We will start by considering decisions under certainty. As noted in Chapter 2, a decision under certainty is one in which you know what the outcomes will be with certainty. These decisions are rare, but for the sake of discussion, let's assume that they do happen. When you know the outcomes of different choices with certainty, the only question you face is how to evaluate the outcomes. For example, suppose that I want to travel to Washington, DC, which is 250 miles away, and I only care about the time it will take to complete the trip. If I know that it will take 4.5 h to travel to Washington, DC by car and 6 h by train, my decision is simple: I should travel by car. However, suppose that I also care about minimizing my carbon footprint.1 The amount of carbon produced by traveling by car is 0.07 metric tons and by train it is 0.02 metric tons.2 Now the choice is not so simple because I have conflicting values, i.e. minimizing my travel time vs. minimizing my carbon footprint. I can make it even more complex if I also consider the cost of the trip, i.e. $125 by car vs. $75 by train (including parking), and the convenience I derive from traveling by car vs. train. Suppose I find it more convenient to travel by car, because I can stop for food, use the car when I arrive at my destination, depart whenever I want, and so on. I give the car a convenience score of 0.75 and the train a score of 0.5 (see Table 5.1).

Table 5.1 Traveling by car vs. train

       Time    Carbon footprint   Cost   Convenience
Car    4.5 h   0.07 metric tons   $125   0.75
Train  6 h     0.02 metric tons   $75    0.5

As one can see from Table 5.1, this is a complex decision, even if we assume that I know the outcomes of traveling by car vs. train with certainty. The decision is complex not because there is epistemological or scientific uncertainty, but because there is moral or value uncertainty,3 and I must decide how to rank (or prioritize) my values. If I decide that minimizing time and maximizing convenience are the most important considerations, I should travel by car; if I decide that minimizing my cost and my carbon footprint are the most important considerations, I should travel by train. If I cannot rank these values, then I cannot apply rules from decision theory to my decision, because these rules require me to assign utilities to different outcomes derived from a preference ordering or numeric assessment, which is based on how I prioritize my values. So, before I even decide whether to use decision theory, I must have an idea of what my values are and how to rank them (Hannson 2003, 2010).

Once I know my values and how to rank them, I can use the tools that decision theory provides. The first complexity I encounter here concerns my knowledge of the outcomes of different choices. My decision concerning traveling by car vs. train to Washington, DC would be even more complex if I do not know the probabilities for some of the outcomes. Suppose, for example, that it could take me between 4 and 7 h to drive to Washington, DC and 5.75–6.3 h to take the train, but I do not know the probabilities for this range of estimates (Table 5.2).

Table 5.2 Traveling by car vs. train (possible travel times)

Car    4 h      4.5 h    5.5 h   6.5 h    7.0 h
Train  5.75 h   5.85 h   6.0 h   6.15 h   6.3 h

If I focus only on time and consider 7 h by car to be the worst outcome, then I could follow maximin and take the train, or I could follow maximax and travel by car. If I follow the minimax regret rule, I would take the car, because taking the train has the highest regret (5.75 – 4 = 1.75). I could also follow the principle of indifference and take the car, because the average time by car is 5.5 h, compared to 6.01 h by train, or I could follow the optimism-pessimism rule and take the car or train, depending on my level of optimism or pessimism. Which rule should I follow? Decision theory does not have an answer to this question. To answer it, I must reflect on my values again. If I really hate spending too much time in the car, then I may decide to follow maximin; if I really enjoy the convenience of traveling by car, then I may decide to follow maximax or the minimax regret rule.

1 A carbon footprint is the amount of carbon dioxide produced by an activity. Atmospheric carbon dioxide contributes to global warming (Intergovernmental Panel on Climate Change 2013).
2 These numbers are approximations. See Carbon Footprint Calculator (2019).
3 See the discussion of moral or value uncertainty in Chapter 2.
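The decision rules just discussed can be made concrete with a short sketch. This is a minimal illustration, not from the text: it assumes the five time estimates for car and train can be paired into five scenarios by position (an assumption needed to compute regret), and, since shorter times are better here, it implements maximin as "minimize the worst-case time" and maximax as "minimize the best-case time."

```python
# Sketch of four rules for decision-making under ignorance, applied to the
# car vs. train travel times. Assumption (not stated in the text): the five
# estimates for each option form five paired scenarios, indexed by position.

car = [4.0, 4.5, 5.5, 6.5, 7.0]       # possible travel times by car (hours)
train = [5.75, 5.85, 6.0, 6.15, 6.3]  # possible travel times by train (hours)
options = {"car": car, "train": train}

def maximin(options):
    # Pick the option whose worst outcome is least bad.
    return min(options, key=lambda name: max(options[name]))

def maximax(options):
    # Pick the option whose best outcome is best.
    return min(options, key=lambda name: min(options[name]))

def minimax_regret(options):
    # Regret in a scenario = your time minus the best time in that scenario;
    # pick the option whose maximum regret across scenarios is smallest.
    def max_regret(name):
        return max(options[name][i] - min(o[i] for o in options.values())
                   for i in range(len(options[name])))
    return min(options, key=max_regret)

def indifference(options):
    # Principle of indifference: treat every scenario as equally likely.
    return min(options, key=lambda name: sum(options[name]) / len(options[name]))

print(maximin(options))         # train (worst case: 6.3 h vs. 7.0 h)
print(maximax(options))         # car (best case: 4.0 h vs. 5.75 h)
print(minimax_regret(options))  # car (max regret: 0.7 h vs. 1.75 h for train)
print(indifference(options))    # car (average: 5.5 h vs. 6.01 h)
```

Matching the discussion above, maximin favors the train while the other three rules favor the car, which is exactly why the choice of rule, and hence one's values, does the real work.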
My degree of risk-aversion could also impact my decision-making. If I am risk-averse, for example, I might follow maximin to avoid the worst outcome, or assume a pessimistic outlook when using the optimism-pessimism rule. The important point here is that I need to draw upon my values not only to rank outcomes, but also to decide which decision rule to use. Thus, although decision theory and moral theory provide distinct ways of making decisions, they intersect in significant ways (Hannson 2003, 2010).

Values also come into play when deciding whether to apply rules for decision-making under risk, such as expected utility theory (EUT), to my choice. In Chapter 2 I argued that a key question in making the transition from decision-making under ignorance to decision-making under risk is deciding whether one has enough evidence to assign accurate and precise probabilities to different outcomes. This is not a purely scientific issue, because values can play an important role in deciding the degree (or level) of evidence required to assign probabilities to outcomes (Douglas 2009; Steel 2015; Elliott 2017). For example, it might be reasonable to require a higher degree of evidence for a decision involving the approval of a new drug than for a decision involving the placement of roadside billboards, because more is at stake (i.e. human lives) in the drug approval decision. The degree of evidence needed for assigning probabilities related to important policy decisions is partly a function of the consequences of making a mistake (Douglas 2009). I may require a higher degree of evidence for a decision that could result in a serious mistake than for a decision that will not result in a serious mistake. My degree of risk-aversion could impact my decision-making at this point as well. If I am risk-averse, I might require a higher level of evidence for making decisions under risk than if I am risk-taking or risk-neutral.

If I am not satisfied that I have enough evidence to apply rules for decision-making under risk to my choice, then I could continue to use rules for decision-making under ignorance or I could use the PP. If I am uncertain about my values or how to rank them, then I may decide to use the PP instead of other rules for decision-making under ignorance, because I can use the PP without assuming that I have a ranking or measurement of values. Value uncertainty is also an important concern in making the transition from decision-making under ignorance to decision-making under risk, since EUT and its offshoot, cost-benefit analysis, both require one to use a common metric to evaluate different outcomes. If I still have unresolved value conflicts (e.g. minimizing my travel time vs. minimizing my carbon footprint), then I cannot use EUT because I cannot evaluate outcomes in terms of a common metric. As we saw in Chapter 2, one of the problems with EUT is that it assumes that values can be defined and compared in terms of a common metric, such as utility or willingness to pay.
If I have moral objections to doing this, for example, if I do not think that I can assign a utility or dollar value to human life, then I should not use EUT for my decision, and I should consider some other approach, such as the PP. Thus, the PP can serve as a useful approach to individual decision-making in situations where there is epistemological or scientific uncertainty because evidence for probabilities related to outcomes is lacking, where there is moral uncertainty related to unresolved (and perhaps unresolvable) value conflicts, or where there is both epistemological and moral uncertainty.
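To make the contrast concrete, here is a hypothetical sketch of what EUT requires before it can be applied to the travel example: a probability for each outcome and a single common metric for valuing outcomes. The probabilities and the linear "fewer hours, higher utility" scale below are illustrative assumptions invented for this sketch; the text assigns neither.

```python
# Illustrative expected utility calculation for the car vs. train choice.
# Both the probabilities and the utility function are assumptions; if the
# conflict between time and other values were unresolved, no single utility
# scale would be available and EUT could not be applied.

car_times = [4.0, 4.5, 5.5, 6.5, 7.0]
train_times = [5.75, 5.85, 6.0, 6.15, 6.3]
car_probs = [0.1, 0.3, 0.3, 0.2, 0.1]    # hypothetical; must sum to 1
train_probs = [0.2, 0.2, 0.2, 0.2, 0.2]  # hypothetical; must sum to 1

def utility(hours):
    # A common metric across outcomes: less travel time, higher utility.
    return -hours

def expected_utility(times, probs):
    return sum(p * utility(t) for t, p in zip(times, probs))

eu_car = expected_utility(car_times, car_probs)        # -5.4
eu_train = expected_utility(train_times, train_probs)  # -6.01
print("car" if eu_car > eu_train else "train")         # car
```

Under these made-up numbers EUT favors the car; with different probability assignments or a utility function that also weighed the carbon footprint, the verdict could flip, which is the sense in which the evidential and value questions discussed above come before the calculation.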

5.3 Decisions for Others

Suppose that I am not making a decision for myself, but for someone else, such as my child, spouse, patient, or client. In this situation, I would be making decisions concerning risks, benefits, and precautions (and the rules for making decisions) not from my perspective, but from another person's perspective. The process for considering the different decision-making rules would be basically the same as it would be for individual decision-making, except I would be making these decisions based on the other person's values (see Table 5.3 for a summary).

Table 5.3 Approaches to decision-making for others

Expressed preferences
  Moral value: Respecting autonomy
  Difficulty: The individual's expressed preferences may not be known; the individual may not have the mental capacity to coherently express preferences

Substituted judgment
  Moral value: Respecting autonomy
  Difficulty: The individual's values may not be known; the individual may not have the mental capacity to form a coherent set of values

Best interests
  Moral value: Protecting individuals from harm; doing good for individuals and society
  Difficulty: Disagreements about what would be in the individual's best interests

In thinking about these decisions, it is important to consider my obligation to honor the person's right to self-determination or autonomy, which has wide support among different moral theories (Beauchamp and Childress 2012). I should try to honor the person's autonomy, unless they do not have the ability to make autonomous decisions, because they are a child, or they have a mental disability or disease that compromises cognition and judgment (Buchanan and Brock 1990). Honoring someone's autonomy implies that I should make decisions for them according to their values, not my own.

Let's first consider a situation in which the person is a competent adult who has clearly expressed their preferences concerning some types of medical decisions.4 Suppose that I am a physician treating Jenna James, an 85-year-old woman who has had a stroke that has caused severe brain damage. It is highly likely (p = 0.98) that Ms. James will never regain consciousness. Her breathing is supported artificially by a ventilator. She is also receiving fluids, nutrition, and medications through an intravenous catheter. When she was competent, Ms. James signed a living will indicating that she would not want to be kept alive artificially if she is unable to make medical decisions and is terminally and incurably ill or permanently unconscious. Most people would agree that I should follow the dictates of Ms. James' living will (i.e. her expressed preferences) and take her off artificial life-support, since this would honor her right to make autonomous choices concerning her medical care (Buchanan and Brock 1990).

Suppose, however, that Ms. James does not have a living will or any other legal document expressing her treatment preferences in this type of situation. However, her husband and son have indicated, based on discussions they have had with her about life and death, that she would not want to be kept alive in this situation. Ms. James has been a very active woman throughout her life and did not believe in wasting time, energy, or medical resources. Her husband said that she lived her life to the fullest and was ready to die "when her time comes." Most people would agree that I should take Ms. James off artificial life-support because this decision would accord with her values and respect her right to autonomy. In so doing, I would be making the decision that Ms. James would make if she were able to decide, i.e. a substituted judgment (Buchanan and Brock 1990). If my knowledge of the other person's values is incomplete but not totally deficient, I could still follow the substituted judgment approach to the best of my ability. For example, suppose that Ms. James' husband and son cannot recall any conversations they have had with her about end-of-life decisions, but they believe, based on how she lived her life, that she would not want to be kept alive artificially. One could argue that it would be ethical for me to take Ms. James off artificial life-support, even if my knowledge of her values is incomplete.

Matters can become more complex if I know nothing about the person's values or how they would resolve conflicts. Suppose that Ms. James has no living will, and no family members or friends who can give me an indication of her values. In this situation, the obligation to honor Ms. James' autonomy no longer applies, because I do not know her values or preferences. Most ethicists would argue that I should make a decision that promotes the person's best interests when the obligation to respect autonomy does not apply (Buchanan and Brock 1990). But what are Ms. James' best interests? To continue living? To die? While most people would probably want to be allowed to die in this situation, others might want to live as long as possible. Since people often disagree about values (see discussion in Chapter 3), making a decision in someone's best interests can be controversial. Decision-makers who make precautionary decisions for someone else based on the best interests approach must be careful to follow a commonly accepted understanding of best interests and not to impose their own, unique understanding of best interests on that person. It would be wise not to take risks that most people would not take, for example. In many cases, however, acting in a person's best interests will not be morally controversial because there is widespread consensus about values.

4 Though I am using a medical example, the thought process would apply to making financial or other decisions for another person.
Suppose that I am a physician working at an emergency department and an ambulance brings an unconscious patient in for treatment. The patient is a 25-year-old male who has been in an automobile accident and is bleeding profusely. No family members or friends who may know the patient's values are present. To save the patient's life, I need to stop the bleeding and give him a blood transfusion. Most people would agree that I should do my best to save the patient's life, since this is in his best medical interests (Beauchamp and Childress 2012). If I happen to know that the patient is a Jehovah's Witness and is religiously opposed to blood transfusions, then I should respect his values and not give him a blood transfusion. Absent knowledge that would indicate that the patient would not want a transfusion, I should go ahead and give him one.

The best interests standard also applies to making decisions for people who are not capable of making their own decisions, such as children and adults with mental disabilities that undermine judgment and decision-making (Buchanan and Brock 1990).5 The best interests standard applies most clearly when a child is too young to make responsible decisions, or when a mentally disabled adult has never had the ability to make responsible decisions. Matters become more complex, however, when the child or adult approaches the threshold for making responsible decisions. For example, suppose that a 16-year-old, highly intelligent girl has terminal bone cancer and wants to refuse an experimental treatment, which will cause considerable pain and suffering but could save her life (20% chance of a cure). The girl has a 90% chance of dying within six months to a year if she does not receive the treatment. The girl's parents and her doctor want her to try the experimental treatment because they think it is in her best interests. Should we honor the girl's autonomy or act in her best interests? This is not an easy question to answer (Buchanan and Brock 1990). On the one hand, she is a mature adolescent, and it is her life and her body. She is the one who will have to undergo treatment and/or face death. On the other hand, she may not be fully capable of appreciating the seriousness of refusing to try the experimental treatment and the importance of taking a chance at being cured.

5 Bioethicists distinguish between competence, which is a legal concept, and decision-making capacity, which is an ethical or clinical concept (Buchanan and Brock 1990). Individuals are considered to be legally competent to make medical decisions when they reach adulthood (e.g. 18 in most countries), but many adults are not capable of making medical decisions, due to mental disability or illness.

5.4 Social Choices

As discussed in Chapters 1 and 2, groups also make decisions concerning risks, benefits, and precautions. Many important decisions relating to public and environmental health, such as the regulation of drugs, pesticides, industrial chemicals, and genetically modified organisms, involve group decisions. In Chapter 2, I distinguished between three forms of decision-making at the group level: dictatorship, oligarchy, and democracy. There can also be combinations of these forms of decision-making. Members of a democratic society could delegate decision-making authority to a single person or a group of people for certain types of decisions. In the US, the president has the authority to command the military forces, to make treaties, and to appoint federal judges (United States Constitution 1789, Article II), and the Federal Open Market Committee has the authority to set the interest rate (known as the Federal Funds Rate) that the Federal Reserve charges banks to borrow money (Federal Reserve 2019). Many different types of groups make decisions, including private businesses, churches, and philanthropic, community, and professional organizations. Some of these are dictatorships or oligarchies, while others are democracies. For thousands of years, dictatorships and oligarchies were the most common forms of government on the planet. Since the late 1700s, however, nations have become increasingly democratic. Today, 57% of the world’s countries are democratic (Desilver 2019).

5.5 Arguments for Democracy

There are several moral and political arguments for democracy. First, democracy respects human dignity and autonomy by giving individuals a voice in decisions that affect them. Democracy is government by consent (Rawls 2005; Christiano 2001; Fabienne 2017). Second, democracy upholds political equality by treating


the preferences of different citizens as having the same weight in decision-making. In a democracy, each citizen gets one vote (Christiano 2001). Third, democracy promotes procedural justice (or fairness) by ensuring that people have meaningful input into decisions that affect them (Rawls 2005; Gutmann and Thompson 2004; Fishkin 2011). Fourth, democracy promotes social utility by helping to ensure that citizens accept government decisions (Fabienne 2017). Forms of government that do not respect citizens’ opinions, such as dictatorships, often lead to widespread dissatisfaction with the government, political unrest, and social instability.

5.6 Problems with Democracy

There are, however, some well-known practical and political problems with democracy that need to be dealt with for societies to use it as a reasonable form of decision-making for social choices (see Table 5.4 for a summary). The first problem, as we saw in Chapter 2, is that democratic voting procedures can generate various paradoxes under certain conditions. Fortunately, these problems are, for the most part, theoretical in nature and rarely undermine real-world voting involving large populations of voters. The second problem is that democracy in its purest form, i.e. direct democracy, is unworkable in larger societies because it is not possible for each citizen to vote on every decision made by the government (Christiano 2001). Because direct democracy is not workable in large groups, most democratic societies are representative democracies; that is, citizens vote for representatives who make decisions on their behalf. That being said, representative democracies sometimes make decisions by direct forms of democracy, such as referenda on particular issues (e.g. the UK’s 2016 referendum on leaving the European Union). Another way that democratic societies deal with this practical problem is by delegating decision-making authority to government agencies that administer laws (Rosen 1998). Many important decisions relating to environmental and public health risks are made by government agencies. These decisions may be made by agency leaders, government workers, or expert advisory committees. With the exception of agency leaders, most of these individuals are non-elected officials (Rosen 1998). For example, in the US the FDA regulates drugs, medical devices, biologics, food additives, nutritional supplements, cosmetics, and tobacco products (Food and Drug Administration 2019a). The FDA’s regulatory authority is based on laws passed by the US Congress and signed by the President, including the Pure Food and Drug Act and the Federal Food, Drug, and Cosmetic Act, as well as amendments to these laws (Hawthorne 2005). The FDA makes decisions concerning the products it regulates based on input from expert advisory committees and agency scientists. Similarly, the EPA regulates pesticides and toxic chemicals and establishes standards for air and water quality (Resnik 2012). The EPA’s regulatory authority is based on several laws, including the Clean Air Act, the Clean Water Act, the Toxic Substances Control Act, and the Federal Insecticide, Fungicide, and Rodenticide Act (Resnik 2012). Like the FDA, the EPA also makes regulatory decisions based on input from expert advisory committees and agency scientists (Resnik 2012). While the decisions made by government bureaucracies are far removed from the choices that citizens make, they can still be publicly accountable. In the US, Congress and the President oversee the activities of government agencies, and the judicial system can review agency actions to ensure that they fall within the bounds of their statutory authority (Rosen 1998). Additionally, government agencies are required by law to solicit input from the public when they engage in rulemaking (i.e. when they make new regulations or change existing ones). However, there are still important issues concerning the extent of the public’s input into these decisions and its ability to hold agencies accountable (Rosen 1998).

Table 5.4 Problems with democracy

Problem | Solution
Voting paradoxes | Use voting procedures that produce clear winners
Direct democracy is unworkable in large societies | Representative democracy; delegation of authority to government agencies
Uninformed, ignorant voters | Public education; information sharing; public engagement
Dominance by powerful individuals or groups | Campaign finance laws; deliberative democracy
Protecting human rights | Laws that safeguard individual rights and liberties
One might argue that government bureaucracies that are not accountable to the public pose an existential threat to democracy, and that societies that rely too heavily on experts to make decisions are in danger of becoming technocracies, i.e. societies ruled by an elite group of scientific and technological experts (Runciman 2018). Another well-known problem is the tendency for democracies to produce uninformed decisions by uneducated people (Gutmann and Thompson 2004). Plato (1974) distrusted democracies because he believed that they inevitably produce ill-considered choices made by ignorant and irrational people. Plato held that the only way to ensure that social choices are reasonable and well-informed is to entrust these decisions to an intellectually and socially superior class of citizens known as philosopher-kings. Although most people would dismiss Plato’s views as elitist and arrogant, the concerns he raised persist to this day, as witnessed by widespread objections among intellectuals to the UK’s popular vote to leave the European Union (Runciman 2018). Many argued that a decision as important as leaving the European Union should not be entrusted to the popular will (Runciman 2018). This third problem with democracy cannot be avoided if we accept the egalitarian idea that the vote of someone with a 4th-grade education who gets his information from talk radio should count as much as the vote of someone with a doctoral degree who gets her information from scientific journals and the New York Times. The problem can be alleviated somewhat by promoting the free flow of information in


society and providing public support for research and education, so that citizens have the resources and skills they need to make wise choices (Dewey 1916). Scientists, scholars, educators, journalists, and others can play an important role in helping the public to make wise choices by sharing information and expert opinion with the public and helping laypeople understand complex scientific, technical, and social problems (Pielke 2007; Resnik 2009; Resnik and Elliott 2016). (I will return to this important point below when I discuss public and community engagement.) A fourth problem with democracy is that powerful individuals or organizations may disproportionately impact public debate, voting, and government decision-making, which can undermine the equality and fairness of the process (Gutmann and Thompson 2004). These individuals and organizations can influence the process in a variety of ways, such as by making contributions to political campaigns, lobbying Congress, sponsoring advertisements in the media, and initiating letter-writing drives. As a result of this imbalance of power, those with less power and influence may be marginalized from deliberations, and social decisions may not reflect the will of the people. In the US, for example, the National Rifle Association has substantially influenced the gun control debate and has succeeded in thwarting popular gun control legislation. This problem is not easily avoided, due to the imbalances of wealth, income, education, and status in society. As long as these imbalances exist, it is likely that individuals or organizations will use their power to promote their interests. One way of addressing this problem is to enact laws that restrict financial contributions to political campaigns.
However, campaign finance laws have a limited impact because individuals and organizations often find ways of circumventing the laws, courts have restricted the laws in order to protect free speech, and the laws do not address other forms of influence, such as lobbying, advertising, and so on (Brown 2016). Another way of minimizing the problem is to implement policies and procedures that promote broad civic engagement in the democratic process, otherwise known as deliberative democracy (Gutmann and Thompson 2004; Gastil and Levine 2005; Fishkin 2011). The goal of deliberative democracy is to ensure that all citizens have the opportunity for meaningful involvement in government decisions. Deliberative democracy seeks to go beyond the superficial policy discussions that often occur in the media and engage citizens in well-informed and thoughtful debates about social decisions (Gutmann and Thompson 2004). Deliberative democracy can also address the problem of ignorance (mentioned above) by promoting educated and informed decision-making. Some ways of promoting deliberative democracy include town hall meetings, open forums, information sharing on traditional and social media, outreach to socioeconomically disadvantaged and minority groups, and public and community engagement (discussed in more detail below). While deliberative democracy can play an important role in promoting justice, equity, and civic engagement, it is not practical to use it for every government decision, because there are so many of them. Moreover, many government decisions, such as matters involving personnel, internal agency operations, or disbursement of funds, have a limited public interest or impact. Deliberative democracy should be used to address important government decisions that many people care about. As we shall


see below, public and community engagement can play an important role in decisions concerning environmental and public health risks, benefits, and precautions. A fifth problem with democracies is how to protect the rights of individuals and minority groups. A majority of citizens or their representatives could adopt laws or policies that treat individuals or groups unfairly. For example, in the US the Jim Crow laws enforced racial discrimination and segregation following the Civil War. Many of these laws remained in effect until the Civil Rights movement of the 1960s (History.com 2019). To protect individuals and groups from the tyranny of the majority, most democracies have laws that safeguard some basic rights and liberties. For example, the Bill of Rights in the United States Constitution grants all citizens rights to due process under the law, rights to vote, rights against self-incrimination or unreasonable searches and seizures, and freedom of the press, speech, peaceful assembly, association, and religion (United States Constitution 1789). The Civil Rights Act of 1964 protects citizens from discrimination based on race, color, sex, religion, or national origin.

5.7 Public, Stakeholder, and Community Engagement

Now that we have considered some of the problems and issues that may arise in democracies, let’s consider how democratic societies may make decisions concerning risks, benefits, and precautions. As noted above, many of these decisions are delegated to government agencies charged with protecting public health and safety or the environment. In the US, for example, the public does not vote on issues related to food and drug regulation, chemical safety, air and water quality, workplace safety, and so on. Important decisions concerning these topics are made by regulatory agencies, such as the FDA, the EPA, and the Occupational Safety and Health Administration (OSHA). Although these agencies are overseen by elected officials (e.g. Congress and the President) and are required to solicit public input concerning rulemaking, the connection between the will of the people and agency decisions is usually indirect and attenuated. Given the bureaucratic structure of most democratic governments, one might argue that it makes little sense to talk about public decision-making concerning risks, benefits, and precautions. Although various sectors of the public, such as scientists, medical and public health professionals, industry organizations, and political interest groups, inform or influence these decisions, the public at large does not. Most democratic societies do not vote on whether to ban dangerous chemicals, approve new drugs, or strengthen air quality standards. Most ordinary citizens impact these decisions only indirectly. For example, citizens can express their views concerning environmental and public health regulation by voting for political candidates who subscribe to their philosophies (e.g. pro-environment, pro-business, etc.).
While I agree that the public at large is usually far removed from important government decisions concerning risks, benefits, and precautions, I think it still makes some sense to bring democratic principles to bear on these decisions. One could argue


that agency leaders should make decisions based on the input they receive from different sectors of the public as well as their understanding of the general public’s views on the issue. This way of proceeding allows for public decision-making even if it does not involve voting by the public. One might object that this way of making decisions might allow powerful organizations (such as industry groups or political interest groups) to disproportionately influence agency deliberations. These organizations have the resources to participate in public meetings, submit public comments, lobby agency leaders, and so on. The decisions agencies make are therefore more likely to reflect the interests of industry or a powerful political group, rather than the will of the people. I recognize that this is an important problem that societies must overcome to promote democratic decision-making concerning risks, benefits, and precautions. As discussed earlier, I believe that deliberative democracy can play a key role in ensuring that the decisions made by government agencies are democratic and fair. Agencies can implement deliberative democracy by conducting extensive public (and in some cases community or stakeholder) engagement prior to making decisions (National Academies of Sciences, Engineering, and Medicine 2016a, b, 2017a, b; Kaebnick et al. 2016; Stirling et al. 2018; Resnik 2018, 2019).6 The National Academies of Sciences, Engineering, and Medicine (2016a) defines engagement as:

    Seeking and facilitating the sharing and exchange of knowledge, perspectives, and preferences between or among groups who often have differences in expertise, power, and values. (p. 131)

Engagement is ongoing dialogue among policymakers, scientists, and the public that involves (1) communicating scientific or technical information to the public; (2) soliciting public knowledge and opinion; and (3) seeking mutual understanding of values, worldviews, and concerns. Engagement should occur at different times and venues during the decision-making process, using different formats. The public should be engaged early in the decision-making process, and the public’s views should make a difference to the outcome. While engagement can build public trust in and acceptance of science, technology, and the government, the main goal of engaging the public is to make just, fair, and democratic decisions.7 The National Academies of Sciences, Engineering, and Medicine (2016a) distinguishes between public, community, and stakeholder engagement. The public includes all members of society who contribute to democratic decision-making pertaining to a government decision; stakeholders include individuals or groups with professional, personal, or other interests related to a decision; and communities include individuals who live near an area likely to be directly impacted by the decision. For example, suppose that OSHA is considering revising its standards for exposure to toluene in the workplace.8 The agency should not only engage all members of society concerning this issue, but it should also make a special effort to engage stakeholders, such as people who are exposed to toluene at work and companies that engage in manufacturing or other work that uses toluene. Or suppose that a company wants to conduct a field trial of genetically modified mosquitoes to help prevent dengue fever. Agencies that are responsible for approving the field trial should engage not only the public but also all people who live near the proposed release site, i.e. the community (Resnik 2018, 2019; Neuhaus 2018). Engagement can provide agencies with valuable information pertaining to the decisions they make related to environmental or public health risks, such as:

• What are the risks and benefits that the public (stakeholders, or communities) cares about?
• How does the public weigh these risks and benefits?
• What does the public think about different ways of dealing with these risks, e.g. avoidance, minimization, mitigation?
• What are the underlying values that shape the public’s views about benefits, risks, and precautions?
• To what extent is public opinion divided or polarized on this issue?
• Are there reasonable compromises that can be reached?

6 The literature on engagement addresses scientists’ and technologists’ responsibilities to engage the public concerning new discoveries and innovations. While I think that these obligations are important, I am focusing here on the obligations of government agencies.
7 For more on public engagement, see National Academies of Sciences, Engineering, and Medicine (2017a).

5.8 Choosing Decision-Making Rules

As discussed in Chapter 4, government agencies often use decision-making frameworks based on expected utility theory (EUT) for making choices involving environmental or public health risks, benefits, and precautions. However, they are usually not legally bound to this framework. In the US, for example, to approve a new drug the FDA must determine that the benefits of the drug for the intended population outweigh its known and potential risks (Food and Drug Administration 2019a, 2019b). The FDA generally uses a decision-making approach based on EUT to assess and manage risks, but there is no reason why it could not use the PP. Likewise, the EPA establishes and revises national air quality standards based on its assessment of public health and environmental risks and benefits. The EPA’s decision-making approach is also based on EUT, but it could also use the PP. For many other decisions related to environmental or public health risks, such as decisions related to genetically modified organisms, oil or natural gas production, nanotechnology, or land management, the role of EUT in decision-making may not be clearly established (Resnik 2012).

8 Toluene is an aromatic hydrocarbon compound used in paint thinners and as a solvent. The Occupational Safety and Health Administration has developed limits for exposure to toluene in the workplace (Resnik 2012). Exposure to toluene can cause irritation of the eyes and nose, exhaustion, headache, weakness, anxiety, and insomnia. Exposure to high levels of toluene can cause liver and kidney damage, loss of consciousness, respiratory depression, and death.


Thus, I would like to suggest that it is often an open question, legally, morally, and politically, as to whether the government should use EUT, the PP, or some other decision-making rule or procedure for making policy choices concerning risks, benefits, and precautions. If we believe that this question should be addressed democratically, then the public should have significant input into deciding which rules should be followed. That is, the public should be allowed to help decide not only how the government should manage risks, benefits, and precautions, but also how the government should go about making these decisions. The case for using EUT (or a related approach, such as cost-benefit analysis) will be most compelling when scientific and moral uncertainty are both low. For example, suppose that the decision pertains to the approval of a new drug for treating a type of lymphoma and there is ample evidence from clinical trials and pre-clinical studies concerning the medical benefits and risks of the drug, such as its side effects and impacts on mortality, morbidity, and quality of life. Suppose that the only benefits and risks that people care about are medical ones and there are no significant environmental impacts or economic costs to consider. Suppose, also, that we can measure these effects of approving or not approving the drug (good and bad) and estimate their probabilities with a reasonable degree of confidence. In a situation like this, one could argue that we should make the decision that maximizes expected net medical benefit. The situation becomes more complex, however, if we do not have enough evidence to make accurate and precise estimates of the probabilities of different outcomes related to our decision or we have strong disagreements about values, such that we cannot evaluate outcomes in terms of a common metric like utility or economic costs.
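The EUT calculation described above can be made concrete with a worked example. The numbers here are purely illustrative (they are not drawn from any actual drug review): suppose that approving the lymphoma drug leads to remission with probability 0.6 (utility 100) and to serious side effects without remission with probability 0.4 (utility −20), while not approving leaves patients on standard care with a certain utility of 10. Then:

```latex
\begin{aligned}
EU(\text{approve}) &= 0.6 \times 100 + 0.4 \times (-20) = 52,\\
EU(\text{not approve}) &= 1.0 \times 10 = 10.
\end{aligned}
```

Since 52 > 10, EUT recommends approval. Note that this recommendation is only as trustworthy as the probability and utility estimates that feed into it, which is precisely what high scientific or moral uncertainty undermines.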
Suppose that we have reason to believe that the drug will save lives but that it could have environmental impacts when excreted from the body or disposed of improperly, because it does not biodegrade quickly and may bioaccumulate in animal tissues.9 Though cancer patient groups are lobbying for approval of the drug, environmental advocates are lobbying against approval. Under these circumstances, it might make sense not to use EUT for making decisions concerning the drug, but to use the PP, due to our lack of scientific evidence and conflicting values that cannot be reduced to a common metric (Ackerman 2008). If circumstances change, however, it might be reasonable to switch from the PP to EUT. For example, suppose we obtain evidence that the drug is not likely to have significant environmental impacts and that it is likely to significantly benefit human health. We could use moral theories to make our decision if we know the outcomes of different options with near certainty, and the only questions we face pertain to resolving conflicts among competing values (e.g. public health vs. the environment). However, since most moral theories are not very good at dealing with epistemological uncertainty (Hannson 2003), it will usually be the case that moral theories will inform our choices by providing us with the values used in decision-making, but they will not serve as our primary decision-making tools.

9 The environmental impact of pharmaceuticals is a growing problem. See Owens (2015).


Table 5.5 Considerations for using the precautionary principle to make decisions

Scientific uncertainty | Moral uncertainty: Low | Moral uncertainty: High
Low | Use expected utility theory or its offshoots | Use the precautionary principle or moral theories
High | Use rules for making decisions under ignorance or the precautionary principle | Use the precautionary principle

Making decisions about what decision-making rules to use to manage risks is not something that government agencies ordinarily do, since they generally follow regulations and agency precedent and tradition. It is also not a topic that is likely to be on the minds of stakeholders and other members of the public who provide input into agency decisions. Nevertheless, I would like to suggest that it is a topic that is important to consider, since the type of decision-making rule one uses can have a substantial impact on the decision that is ultimately made. Table 5.5 lists some considerations to take into account when deciding whether to use the PP for personal or social choices. The terms ‘low’ and ‘high’ in Table 5.5 indicate degrees of uncertainty. One could use related words, such as ‘significant,’ ‘insignificant,’ ‘substantial,’ or ‘insubstantial’ instead. Also, there may be a wide range of degrees between low and high that impact the choice of decision rules. ‘Low’ and ‘high’ represent relative points on the scale where one may choose to follow a different rule.
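The claim that the choice of decision rule can change the outcome can be illustrated with a small, hypothetical payoff matrix (the numbers are mine, for illustration only). Suppose that approving a product yields utility 100 if it proves safe and −50 if it proves harmful, while banning it yields 10 either way. If we have a well-supported probability of 0.9 that the product is safe, EUT recommends approval:

```latex
\begin{aligned}
EU(\text{approve}) &= 0.9 \times 100 + 0.1 \times (-50) = 85,\\
EU(\text{ban}) &= 10.
\end{aligned}
```

But if the probabilities cannot be reliably estimated (decision-making under ignorance), a precautionary rule such as maximin compares worst cases instead: the worst case of approval is −50 and the worst case of banning is 10, so maximin recommends banning. The same payoffs thus yield opposite recommendations under the two rules, which is why the choice of rule matters.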

5.9 Conclusion

In this chapter I have examined and discussed contextual factors relevant to choosing the rules or procedures one uses to make decisions concerning risks, benefits, and precautions. I have considered this issue from the perspective of making decisions for oneself (individual decision-making), for others, and with others (social decision-making). Two of the most important contextual factors to consider are epistemological or scientific uncertainty concerning outcomes related to the decision, and moral (or value) uncertainty related to the evaluation of outcomes. The case for using the PP is most compelling when scientific uncertainty is high, moral uncertainty is high, or both are high (Ackerman 2008). Decision-makers could use the PP when they face a high degree of scientific and/or moral uncertainty and then switch to EUT when uncertainties are reduced. In the remaining chapters of this book, I will apply the decision-making framework developed in the first five chapters to environmental and public health policy choices.


References Ackerman, F. 2008. Poisoned for Pennies: The Economics and Toxics of Precaution. Washington, DC: Island Press. Beauchamp, T.L., and J.F. Childress. 2012. Principles of Biomedical Ethics, 7th ed. New York, NY: Oxford University Press. Brown, H. 2016. Pay-to-Play Politics: How Money Defines the American Democracy. Santa Barbara, CA: Praeger. Buchanan, A.E., and D.W. Brock. 1990. Deciding for Others: The Ethics of Surrogate DecisionMaking. Cambridge, UK: Cambridge University Press. Carbon Footprint Calculator. 2019. Available at: https://calculator.carbonfootprint.com/calculator. aspx. Accessed 18 Jan 2021. Christiano, T. 2001. Democracy. In Encyclopedia of Ethics, 2nd ed., ed. L.C. Becker and C.B. Becker CB, 385–389. New York, NY: Routledge. Desilver, D. 2019. Despite Global Concerns About Democracy, More Than Half of Countries Are Democratic. Pew Research Center, May 14. Available at: https://www.pewresearch.org/fact-tank/ 2019/05/14/more-than-half-of-countries-are-democratic/. Accessed 18 Jan 2021. Dewey, J. 1916. Democracy and Education. New York, NY: The Free Press. Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press. Elliott, K.C. 2017. A Tapestry of Values: An Introduction to Values in Science. New York, NY: Oxford University Press. Fabienne, P. 2017. Political Legitimacy. In Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/legitimacy/. Accessed 18 Jan 2021. Federal Reserve. 2019. Federal Open Market Committee. Available at: https://www.federalreserve. gov/monetarypolicy/fomc.htm. Accessed 6 Dec 2019. Fishkin, J.S. 2011. When the People Speak: Deliberative Democracy and Public Consultation. Oxford, UK: Oxford University Press. Food and Drug Administration. 2019a. What Does FDA Regulate? Available at: https://www.fda. gov/about-fda/fda-basics/what-does-fda-regulate. Accessed 19 Jan 2021. Food and Drug Administration. 2019b. 
Development and Approval Process/Drugs. Available at: https://www.fda.gov/drugs/development-approval-process-drugs#FDA. Accessed 19 Jan 2021. Food and Drug Administration. 2019c. Bisphenol A (BPA): Use in Food Contact Application. Available at: https://www.fda.gov/food/food-additives-petitions/bisphenol-bpa-use-food-contact-app lication#summary. Accessed 19 January 2021. Food and Drug Administration. 2019d. Milestones in U.S. Drug Law History. Available at: https://www.fda.gov/about-fda/fdas-evolving-regulatory-powers/milestones-us-foodand-drug-law-history. Accessed 19 Jan 2021. Food and Drug Administration. 2019e. The Facts on the FDA’s New Tobacco Rule. Available at: https://www.fda.gov/consumers/consumer-updates/facts-fdas-new-tobacco-rule. Accessed 19 Jan 2021. Gastil, J., and P. Levine. 2005. The Deliberative Democracy Handbook: Strategies for Effective Civic Engagement in the Twenty-First Century. San Francisco, CA: Jossey-Bass. Gutmann, A., and D. Thompson. 2004. Why Deliberative Democracy? Princeton, NJ: Princeton University Press. Hannson, S. 2003. Ethical Criteria of Risk Acceptance. Erkenntnis 59 (3): 291–309. Hannson, S. 2010. The Harmful Influence of Decision Theory on Ethics. Ethical Theory and Moral Practice 13 (5): 585–593. Hawthorne, F. 2005. Inside the FDA: The Business and Politics Behind the Drugs We Take and the Food We Eat. New York, NY: Wiley. History.com. 2019. Jim Crow laws. Available at: https://www.history.com/topics/early-20th-cen tury-us/jim-crow-laws. Accessed 19 January 2021.


Intergovernmental Panel on Climate Change. 2013. Climate Change 2013: The Physical Science Basis. Cambridge, UK: Cambridge University Press.
Kaebnick, G.E., E. Heitman, J.P. Collins, J.A. Delborne, W.G. Landis, K. Sawyer, L.A. Taneyhill, and D.E. Winickoff. 2016. Precaution and Governance of Emerging Technologies. Science 354 (6313): 710–711.
National Academies of Sciences, Engineering, and Medicine. 2016a. Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. Washington, DC: National Academies Press.
National Academies of Sciences, Engineering, and Medicine. 2016b. Genetically Engineered Crops: Experiences and Prospects. Washington, DC: National Academies Press.
National Academies of Sciences, Engineering, and Medicine. 2017a. Communicating Science Effectively: A Research Agenda. Washington, DC: National Academies Press.
National Academies of Sciences, Engineering, and Medicine. 2017b. Human Genome Editing: Science, Ethics, and Governance. Washington, DC: National Academies Press.
Neuhaus, C.P. 2018. Community Engagement and Field Trials of Genetically Modified Insects and Animals. Hastings Center Report 48 (1): 25–36.
Owens, B. 2015. Pharmaceuticals in the Environment: A Growing Problem. The Pharmaceutical Journal, February 19. Available at: https://www.pharmaceutical-journal.com/news-and-analysis/features/pharmaceuticals-in-the-environment-a-growing-problem/20067898.article?firstPass=false. Accessed 19 Jan 2021.
Pielke, R. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge, UK: Cambridge University Press.
Plato. 1974 [380 BCE]. The Republic, trans. G.M.A. Grube. Indianapolis, IN: Hackett.
Rawls, J. 2005. Political Liberalism, 2nd ed. New York: Columbia University Press.
Resnik, D.B. 2009. Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York: Oxford University Press.
Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press.
Resnik, D.B. 2018. Ethics of Community Engagement in Field Trials of Genetically Modified Mosquitoes. Developing World Bioethics 18 (2): 135–143.
Resnik, D.B. 2019. Two Unresolved Issues in Community Engagement for Field Trials of Genetically Modified Mosquitoes. Pathogens and Global Health 113 (5): 238–245.
Resnik, D.B., and K.C. Elliott. 2016. The Ethical Challenges of Socially Responsible Science. Accountability in Research 23 (1): 31–46.
Resnik, M.D. 1987. Choices: An Introduction to Decision Theory. Minneapolis, MN: University of Minnesota Press.
Rosen, B. 1998. Holding Government Bureaucracies Accountable, 3rd ed. Westport, CT: Praeger Publishing Group.
Runciman, D. 2018. Why Replacing Politicians with Experts Is a Reckless Idea. The Guardian, May 1. Available at: https://www.theguardian.com/news/2018/may/01/why-replacing-politicians-with-experts-is-a-reckless-idea. Accessed 19 Jan 2021.
Steel, D. 2015. Philosophy and the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Stirling, A., K.R. Hayes, and J. Delborne. 2018. Towards Inclusive Social Appraisal: Risk, Participation and Democracy in Governance of Synthetic Biology. BMC Proceedings 12 (Suppl 8): 15.
Tait, J. 2001. More Faust Than Frankenstein: The European Debate About the Precautionary Principle and Risk Regulation for Genetically Modified Crops. Journal of Risk Research 4 (2): 175–189.
United States Constitution. 1789. Available at: https://constitutioncenter.org/media/files/constitution.pdf. Accessed 12 Mar 2021.

Chapter 6

Chemical Regulation

Every day we are exposed to thousands of chemicals (and other substances) through contact with the food we eat, the medications we take, the consumer products we use, the air we breathe, the water we drink, and the dust we touch. Most of these chemicals1 are naturally occurring, but many are man-made. Most of the chemical exposures we encounter in our daily lives are benign, but some can be toxic, carcinogenic, or even deadly at certain exposure levels. Most modern societies have enacted various laws and regulations to protect the public and the environment from harmful chemical exposures, but these laws and regulations do not cover every possible exposure, nor do they always reflect our most up-to-date knowledge concerning chemical risks and safety.2 The laws and regulations are constantly being revised as we learn more about chemical risks. A key ethical and policy issue related to chemical regulation is: how safe is safe enough? (Cranor 2011; Resnik and Elliott 2015; Resnik 2018b). The primary means by which we promote safety is regulation, but too little regulation can place the public and the environment at an unreasonable risk of harm, and too much can stifle technological and industrial innovation and interfere with economic development. In this chapter, I will highlight some ethical and policy dilemmas related to our current system of chemical regulation and argue that the PP can lend some valuable insights into the issues we face.3 By way of an introduction to the issues, I will describe some different types of chemical regulation with some illustrative cases.4

1 There are some interesting scientific and philosophical issues concerning the difference between chemicals, foods, chemical mixtures, and other substances. The Cambridge Dictionary (2020) defines a chemical as "any basic substance that is used in or produced by a reaction involving changes to atoms or molecules." Thus, foods, plants, animals, and people contain chemicals but are not chemicals. However, some things regulated like chemicals do not neatly fit this definition. For example, we regulate tobacco products, marijuana, alcohol, dietary supplements, nanomaterials, asbestos, biologics, and particulate matter as if they were chemicals, but they are complex substances that contain chemicals.

2 Safety differs from risk in that it includes practices that minimize risk. For example, candles pose a risk of fire, but this risk can be minimized if they are contained safely.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_6

6.1 Pharmaceuticals

In the US, the FDA has the legal authority to regulate drugs, biologics,5 medical devices, food additives, cosmetics, dietary supplements, and tobacco products (Food and Drug Administration 2019a).6 As mentioned in Chapter 4, the FDA approves drugs for marketing based on evidence of safety and efficacy from clinical trials and pre-clinical studies (Gassman et al. 2017). This form of regulation is highly protective because it allows society to avoid the risks of drugs unless there is evidence that those risks are worth taking.

However, drug regulation in the US was not always highly protective. Prior to the 1840s, the US had no national drug laws. The first national drug law, the Drug Importation Act, was passed in 1848 to prevent the importation of adulterated drugs into the US. The Food and Drugs Act, passed in 1906, prohibited the sale of adulterated or misbranded drugs in the US. The Federal Food, Drug, and Cosmetic Act, passed in 1938, required that new drugs be shown to be safe before they could be marketed, and the Kefauver-Harris Drug Amendments to this law, adopted in 1962, required that new drugs also be shown to be effective (Food and Drug Administration 2019d).

During the 1960s, the FDA developed a framework for drug testing that it applies to all new drug applications.7 Prior to conducting studies with human subjects, drug manufacturers must conduct pre-clinical animal experiments and chemical studies to provide evidence of drug safety and potential efficacy in humans. If the FDA determines that a drug is safe enough to test in humans, it will allow the manufacturer to conduct Phase I clinical trials. Phase I trials are small studies, composed of about 50 subjects or fewer, which are designed to obtain information on safety, dosing, and pharmacology (Resnik 2007a). If a drug completes Phase I successfully, the FDA will allow the manufacturer to begin Phase II testing. Phase II clinical trials are larger studies composed of about 300 patients who have the disease or condition being treated by the drug. In Phase II studies, patients are randomly assigned to the experimental group or the control group. The control group may receive a placebo (if there is no effective therapy for the condition) or a standard treatment. Phase II studies are designed to gather data on safety and efficacy and to determine whether the experimental (or investigational) drug is superior to the control medication. Because Phase II studies involve randomization and a control group, they are known as randomized controlled trials (RCTs). RCTs are regarded as the gold standard for medical research because they are designed to minimize bias due to selection of patients and confounding due to uncontrolled variables (Perucca and Wiebe 2016). If a drug passes Phase II testing successfully, the FDA will allow the manufacturer to begin Phase III studies. Phase III clinical trials have the same basic study design as Phase II trials, but they are larger, with up to 3000 patients who have the disease or condition being treated by the drug (Resnik 2018a). After Phase III testing has been completed, the manufacturer reviews all the data and submits it to the FDA, which then conducts its own review of the data (Resnik 2007a). As mentioned in Chapter 5, the FDA uses expert panels to make recommendations related to drug safety and approval. Decisions made by these panels are based on a review of the evidence of safety and efficacy provided by drug companies and independent sources (Gassman et al. 2017).

3 Tort liability also constitutes an important form of legal oversight of the risks of chemicals, because it provides financial incentives for manufacturers to take measures to protect the public and the environment from harms related to their products. While I will discuss some lawsuits against chemical manufacturers in this chapter, I will focus on statutory laws and regulations, not tort law. For more on tort liability related to chemicals, see Cranor (2011).

4 This chapter will not examine all types of regulation of chemicals. For example, I am not examining regulation of cosmetics (such as lipstick, mascara, or skin moisturizer) or food additives (such as artificial sweeteners, dyes, or preservatives).

5 A biologic is a product of a biological process. Biologics include complex chemicals, such as proteins, hormones, blood components, and antibodies, as well as cells and tissues (Food and Drug Administration 2020a).

6 Other countries have similar laws and regulations. For example, the European Medicines Agency regulates and oversees medical products for countries that belong to the European Union, and the Medicines and Healthcare Products Regulatory Agency regulates and oversees medical products in the UK.

7 This framework also applies to biologics and medical devices.
Ethics committees known as institutional review boards (IRBs) review and oversee clinical trials to protect the rights and welfare of patients, and data safety and monitoring boards (DSMBs) review and analyze data gathered during clinical trials to protect patients from harm (Resnik 2018a).

During the mid-1980s, patients with HIV/AIDS and their advocates argued that the FDA's drug approval process was too slow and restrictive, and that people were dying while waiting for potentially life-saving medications. Laws passed since that time have expanded access to experimental drugs for patients with life-threatening diseases and streamlined the review process8 for these drugs (Darrow et al. 2014, 2015; Food and Drug Administration 2019c).9

There are three types of expanded access to experimental drugs. Under the first type, also known as emergency use, the FDA can allow a patient to have access to an unapproved, experimental drug, provided that the patient faces a life-threatening medical emergency for which there are no available alternatives. Although this type of expanded access is usually granted on a per-patient basis, the FDA can also grant it to large groups of patients when a public health emergency has been declared, such as occurred during the COVID-19 pandemic.10 Under the second type, also known as compassionate use, an experimental drug that has completed Phase I testing can be made available to patients who are suffering from a life-threatening condition and do not have access to the drug through participation in a clinical trial. Under the third type, an experimental drug that has completed clinical trials but not yet received FDA approval can be made available to patients who are suffering from a life-threatening condition. In all three types of expanded access, patients receive experimental drugs as part of their medical treatment, not because they are participating in research (Darrow et al. 2015).

The FDA approves drugs under a labelling, which describes the approved use of the drug to treat or prevent a disease or condition in a specified population and includes information about administration, dosing, safety, risks, and benefits. The labelling is a way of minimizing and mitigating the risks of the drug and maximizing its benefits. However, once a drug is approved, physicians may prescribe it without following the labelling. For example, they could prescribe the drug for a population (such as children or pregnant women) not included on the label (Gassman et al. 2017). Off-label prescribing is a common practice that can benefit patients but may also place them at undue risk.11 In the US, off-label prescribing is overseen by professional boards rather than government agencies. Professional boards can impose sanctions on health care providers who prescribe drugs irresponsibly. Malpractice law also helps to control off-label prescribing, since health care professionals who prescribe drugs off-label may face the threat of litigation if they do so irresponsibly (Wittich et al. 2012). The FDA also minimizes risks by monitoring safety data after a drug is on the market and by inspecting and auditing drug manufacturing for quality control.

8 For example, the FDA could put a drug on fast-track review or base its decisions on surrogate endpoints for efficacy, such as biomarkers of disease, rather than "hard" endpoints, such as mortality or morbidity. To evaluate an HIV/AIDS drug, for example, the FDA could review data pertaining to the amount of virus in the blood (viral load) or white blood cell counts instead of mortality or morbidity.

9 In Chapter 9, I will consider drug approval issues related to the novel coronavirus (COVID-19) pandemic of 2020.
The FDA reviews data from studies conducted after a drug is on the market, such as clinical trials comparing the drug to competing drugs. In some cases, the FDA may require manufacturers to conduct additional studies, known as Phase IV clinical trials, after the drug has been approved for marketing. The FDA also reviews data related to adverse drug reactions submitted by health care professionals, patients, or manufacturers to its Safety Information and Adverse Event Reporting Program, known as MedWatch (Resnik 2007a). The FDA may issue a warning concerning a drug, change the labelling of a drug, or withdraw approval of a drug to protect the public's health (Gassman et al. 2017).

Drug testing and development is a lengthy and expensive process. According to industry data, it takes, on average, 10–12 years to obtain FDA approval after a drug is first discovered. Companies may chemically screen thousands of drugs to find one that shows enough promise to be worth submitting for approval. Less than 12% of new drugs that go through clinical trials are ultimately approved by the FDA. According to industry estimates, it costs, on average, $2.6 billion to develop a new drug and obtain FDA approval (Medicine.net 1999; Pharmaceutical Research and Manufacturing Association 2015).12

The thalidomide tragedy illustrates how the US drug regulation system can function well to protect the public from harm. The German pharmaceutical company Chemie-Grunenthal developed and marketed thalidomide in the late 1950s as a non-addictive sedative medication. It was soon discovered, however, that the drug had anti-emetic properties that would make it an effective treatment for morning sickness in pregnant women (Vargesson 2015). Thalidomide soon became one of the world's top-selling drugs. Chemie-Grunenthal marketed the drug in 46 different countries under various names and gave physicians free samples to distribute to their pregnant patients. Although the company insisted that the drug was completely safe, reports began to surface that it could cause peripheral neuropathy in adult patients and severe birth defects (such as missing or deformed limbs) in babies exposed to the drug in utero (Vargesson 2015). In 1961, German physician Widukind Lenz and Australian physician William McBride published papers providing evidence that thalidomide had caused over 10,000 birth defects. The drug was subsequently withdrawn from markets worldwide. Although Chemie-Grunenthal had submitted an application to the FDA in 1957, thalidomide was never sold in the US because a physician working for the FDA, Frances Kelsey, refused to approve the drug, due to concerns she had about the drug's safety. President John F. Kennedy awarded Kelsey the President's Award for Distinguished Federal Civilian Service for her role in protecting the American public from the thalidomide tragedy (Vargesson 2015). While thalidomide is an example of the drug regulation system functioning well, the system is not perfect.

10 During the COVID-19 pandemic, the FDA granted emergency use authorizations (EUAs) to tests, treatments, and vaccines (Food and Drug Administration 2021a).

11 In the US, about 20% of prescriptions are off-label (Wittich et al. 2012).
In 1999, the FDA approved a non-steroidal anti-inflammatory drug (NSAID) manufactured by Merck known as Vioxx (rofecoxib), based on data from 5,400 subjects in eight clinical trials (Prakash and Valentine 2007). Merck promoted Vioxx as a medication to relieve pain and inflammation due to arthritis and other conditions with fewer side effects (such as bleeding or gastrointestinal problems) than other NSAIDs (Resnik 2007b). In 1998, Merck launched the VIGOR study, which compared Vioxx to naproxen, another NSAID marketed under the trade name Aleve. In 2000, Merck reported the results of the VIGOR study to the FDA, which showed that patients taking Vioxx had five times more cardiovascular events (such as a heart attack or stroke) than those taking naproxen. Later that year, Merck published the results of the VIGOR study in the New England Journal of Medicine (Bombardier et al. 2000). However, the published study underreported safety data and downplayed the cardiovascular risks of Vioxx (Curfman et al. 2006). Eleven of the 12 VIGOR study investigators had financial relationships with Merck (Resnik 2007b). In 2001, the FDA warned Merck that it had misrepresented the cardiovascular risks of Vioxx in its marketing campaigns, and in 2002 the FDA issued a black box warning about cardiovascular risks to be included in Vioxx's labelling (Resnik 2007b). From 2002 to 2004, additional studies were published documenting Vioxx's cardiovascular risks. In 2004, Merck withdrew the drug from the market, due to liability and safety concerns (Resnik 2007b). Over 13,000 lawsuits were filed against the company. According to a study published in the Lancet, an estimated 88,000 out of 20 million patients who took Vioxx had a heart attack, and 38,000 died (Prakash and Valentine 2007).13

The US drug safety system has several shortcomings that compromise its ability to protect the public's health.14 First, companies often suppress data related to the risks of their drugs. This happened in the Vioxx case, as noted above, but it has happened in other prominent cases, such as the suppression of data on suicide risks associated with some anti-depressant medications (Resnik 2007b). To address this problem, regulatory agencies (including the FDA) and professional journals require that companies register clinical trials in a public database, such as ClinicalTrials.gov. To register a clinical trial, companies must provide some basic information about the study, such as experimental design, methods, interventions, investigational drugs, endpoints (such as measures of safety or efficacy), inclusion criteria, and research sites. Companies also must report the outcomes of their clinical trials (Zarin et al. 2011). However, since companies are not required to report the data for their studies, it is possible for them to suppress data that could be useful in evaluating their studies. Moreover, companies do not always comply with clinical trial registration requirements, and organizations rarely enforce them (Viergever et al. 2014).

Second, clinical trials do not always uncover all the risks related to taking a medication (Oakie 2005). A drug may be approved on the basis of data from several clinical trials involving as few as 500 participants (Strom 2006). However, the risks of a drug may not be fully understood until tens of thousands of patients have taken the medication, because people respond differently to the same medication (Oakie 2005; Strom 2006). Also, some risks may develop only after taking a medication for several years or more, and most clinical trials last only one to two years (Resnik 2007a). While long-term studies could help researchers to learn about these risks, companies have no financial incentive to conduct long-term drug studies, and government agencies have limited funding to support such research (Strom 2006; Resnik 2007a).

Third, although regulatory agencies can require companies to conduct postmarketing studies to obtain more information on drug safety and efficacy, companies often fail to do so, and agencies often do not enforce this requirement when they impose it (Strom 2006).

Fourth, the MedWatch program is an imperfect system for acquiring drug safety data because it involves voluntary reporting by health professionals, who may not have the time or the motivation to fill out adverse event reports. Additionally, health professionals, patients, and manufacturers may not recognize that some adverse outcomes that patients experience are related to taking a medication. For example, if an elderly patient develops dementia or muscle weakness while taking a medication, a health professional may attribute this to the patient's age or chronic diseases, rather than to a drug he or she is taking (Strom 2006; Resnik 2007a).

13 This study simply reported aggregate data and did not demonstrate that the drug caused 88,000 heart attacks or 38,000 deaths.

14 These problems also occur in other countries.
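The Lancet figures quoted above are aggregate estimates, and the caution in footnote 13 about causation applies. Still, the per-patient rates they imply can be checked with a few lines of arithmetic. A sketch only: it assumes, as one reading of the text, that the 38,000 deaths occurred among the 88,000 estimated heart attacks.

```python
# Rates implied by the aggregate Vioxx estimates quoted in the text.
# Assumption (one reading of the figures): the 38,000 deaths occurred
# among the 88,000 estimated heart attacks. Aggregates like these do
# not establish causation.
patients = 20_000_000   # estimated US patients who took Vioxx
heart_attacks = 88_000  # estimated heart attacks among them
deaths = 38_000         # estimated deaths

attack_rate = heart_attacks / patients   # 0.0044, i.e. 0.44%
fatal_share = deaths / heart_attacks     # ~0.43, i.e. about 43%

print(f"implied heart attack rate: {attack_rate:.2%}")
print(f"implied fatal share:       {fatal_share:.0%}")
```

On these estimates, roughly 4 to 5 patients per 1,000 had a heart attack: a risk small enough at the individual level to be hard to detect in a 5,400-subject approval dataset, yet large enough to produce tens of thousands of events across 20 million users.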


Table 6.1 Drug schedules under the Controlled Substances Act

Schedule I. Substances with no currently accepted medical use, a lack of accepted medical safety, and a high potential for abuse. Examples: heroin, lysergic acid diethylamide (LSD), marijuana, peyote, methaqualone, 3,4-methylenedioxymethamphetamine ("Ecstasy").

Schedule II. Substances with a high potential for abuse, which may lead to severe psychological or physical dependence. Examples: hydromorphone, methadone, meperidine, oxycodone, fentanyl, morphine, opium, codeine, hydrocodone, amphetamine, methamphetamine, methylphenidate.

Schedule III. Substances with a lower potential for abuse than those in Schedule II, which may lead to moderate or low physical dependence or high psychological dependence. Examples: products containing not more than 90 milligrams of codeine per dosage unit, buprenorphine, benzphetamine, phendimetrazine, ketamine, anabolic steroids.

Schedule IV. Substances with a lower potential for abuse than those in Schedule III. Examples: alprazolam, carisoprodol, clonazepam, clorazepate, diazepam, lorazepam, midazolam, temazepam, triazolam.

Schedule V. Substances with a lower potential for abuse than those in Schedule IV. Examples: cough preparations containing not more than 200 milligrams of codeine per 100 milliliters or per 100 grams, ezogabine.

The US also has federal laws that limit access to drugs to protect the health and safety of the public. A federal law known as the Controlled Substances Act (CSA) categorizes certain types of drugs, known as controlled substances, according to five schedules, based on risk of abuse or misuse (see Table 6.1) (United States Department of Justice 2020). Controlled substances must be purchased at and dispensed by a pharmacy with a prescription from a health care professional who has the authority to write the prescription. Professionals with this authority include physicians, physician assistants, nurse practitioners, and veterinarians. Drugs that are not classified as controlled substances (e.g., over-the-counter medications) can be purchased without a prescription. Drugs classified as Schedule I, such as heroin and marijuana, have a high risk of abuse and no medical purpose, according to the CSA, and are illegal (United States Department of Justice 2020). US states have their own laws to control access to drugs. While most of these laws mirror US federal laws, they differ significantly with respect to marijuana. Marijuana may be legally purchased and used by adults in 12 states and purchased and used for medical purposes in 22 states (National Organization for the Reform of Marijuana Laws 2020).


6.2 Dietary Supplements

Dietary supplements include vitamins, minerals, herbal medications, and other substances that are not classified as foods, food additives, or drugs (Resnik 2018b). The dietary supplement industry has grown exponentially since the 1990s. More than 90,000 dietary supplement products are sold in the US, and over half of US adults regularly consume dietary supplements (Resnik 2018b). People take dietary supplements for various reasons, such as improving their physical or mental health, losing weight, or enhancing athletic or sexual performance.

For many years, the US did not regulate dietary supplements. In 1976, the FDA attempted to regulate dietary supplements as drugs, but supplement manufacturers won a lawsuit in which they claimed the FDA did not have the statutory authority to do so. Concerned about the health risks of dietary supplements, Congress passed the Dietary Supplement Health and Education Act (DSHEA) in 1994.15 Congress amended this law in 1997 and 2006 (Resnik 2018b). The DSHEA gives the FDA the authority to regulate dietary supplements. The FDA can take action (such as banning products or restricting use) against supplement manufacturers if it has credible evidence that their products pose a significant risk to human health or safety when used as directed, or that their products have been adulterated or include inadequate safety information (Resnik 2018b).

While the FDA has considerable authority to regulate dietary supplements once they are on the market, its pre-market regulatory authority is more limited. To obtain marketing approval from the FDA, manufacturers of new dietary supplements must provide the agency with evidence that their products contain only ingredients that are already in the food supply and have not been altered chemically, or that are reasonably expected to be safe (Resnik 2018b). Manufacturers do not have to conduct clinical trials of their products or demonstrate efficacy. Products that were on the market before the passage of the DSHEA are not covered by these requirements; they are grandfathered in (Resnik 2018b).

Dietary supplements occupy a tenuous position between foods and drugs and pose difficult dilemmas for policymakers and the public. While many supplements are harmless and some, such as Vitamin D, Vitamin B6, Vitamin B12, and folic acid, offer demonstrable health benefits, others can pose significant risks to public health and safety (Gershwin et al. 2010; Cohen and Bass 2019). Some supplement products can damage the liver or kidneys, interact with medications, induce toxicity at high doses, or increase the risk of cancer. Supplements may be contaminated with drugs, and some supplements include compounds that would be classified as drugs if marketed in a purified form (Gershwin et al. 2010; Resnik 2018b).16 In the US, adverse effects of supplement use lead to over 20,000 emergency room visits and 2,000 hospitalizations per year (Geller et al. 2015). While some have argued for strengthening dietary supplement regulations to protect the public's health (Starr 2015; Cohen and Bass 2019), others have argued that the current regulations are strong enough to protect the public, provided they are appropriately applied and enforced (Abdel-Rahman et al. 2011). A key issue in reform proposals is whether to require manufacturers to conduct more pre- and post-market testing of their products, since testing could be prohibitively expensive, especially for smaller companies (Resnik 2018b).

15 Other countries have also enacted laws to regulate dietary supplements (Resnik 2018b).

16 For example, red yeast rice contains monacolin K, which is chemically similar to lovastatin, a drug the FDA has approved to lower blood cholesterol levels (Resnik 2018b).

6.3 Alcohol and Tobacco The are no US federal laws to control access to alcohol. The 18th amendment to the Constitution, which was ratified in 1919, prohibited the sale, production, importation, and transportation of alcohol. However, prohibition of alcohol proved to be a public policy disaster, as it led to the growth of a huge, illegal alcohol industry, controlled by criminal gangs. Consequently, the amendment was repealed in 1933. States have laws and regulations concerning the sale, possession, distribution, and importation of alcohol, as well as laws that prohibit driving under the influence of alcohol (Alcohol.org 2020). Alcohol use can have short-term adverse effects, such as addiction, abuse, and intoxication; and long-term use increases the risk of high blood pressure, liver damage, and cancer of the mouth, throat, esophagus, and stomach (Centers for Disease Control and Prevention 2020a). However, most people believe that adults who want to consume alcoholic beverages should be allowed to take these risks. Tobacco use has been linked to many different diseases since the 1960s. Smoking tobacco increases the risk of various types of cancer (especially lung cancer), heart disease, stroke, diabetes, and chronic obstructive pulmonary disease. An estimated 480,000 die each year in the US due to smoking-related illnesses. Chewing tobacco increases the risk of cancer of the mouth, tongue, and throat. Tobacco contains nicotine, which is a highly addictive chemical. Tobacco users have a difficult time quitting, due to physical and psychological effects of nicotine withdrawal. Exposure to second-hand smoke contributes to about 41,000 deaths each year in the US among non-smokers (Centers for Disease Control and Prevention 2020b). For many years, the US federal government did not regulate tobacco products, other than to impose restrictions on tobacco advertising and the age for purchasing tobacco. 
Most people recognized that using tobacco is bad for one's health, but they also believed that adults should be free to take this risk. Tobacco growers and cigarette manufacturers put pressure on government officials not to regulate tobacco. The federal government subsidized tobacco growers through quotas and price supports (Milov 2019). However, as evidence concerning the health risks of tobacco use and secondhand exposure continued to mount, public opinion shifted toward more government control over tobacco (Milov 2019). States and municipalities enacted various laws


that prohibit smoking in public places, such as restaurants, bars, parks, and government buildings. These laws were enacted to reduce exposure to second-hand smoke, which also has harmful effects on the respiratory system (Milov 2019).17 In the realm of tort law, tobacco companies settled lawsuits brought by smokers who claimed they were harmed by smoking and by states seeking to recoup the health care costs of smoking-related illnesses. In one of the largest legal settlements in history, the five largest tobacco companies agreed to pay states over $200 billion (Milov 2019).

In 2009, Congress passed legislation giving the FDA authority to regulate tobacco products, including cigarettes, cigars, pipe tobacco, nicotine gels, and electronic cigarettes (e-cigarettes). The law imposes age restrictions on purchasing tobacco products and requires tobacco products to include warning labels about health risks. The law gives the FDA the authority to regulate ingredients in tobacco products, including nicotine and flavorings, and allows the FDA to require tobacco products not on the market as of February 15, 2007 to meet public health standards (Food and Drug Administration 2019c). Since cigarettes were on the market as of February 15, 2007, they were grandfathered in, but e-cigarettes were not, because they did not come on the market until after that date. In August 2016, the FDA announced it would regulate e-cigarettes. The FDA announced age restriction rules for e-cigarettes and began inspecting e-cigarette manufacturers, including vape shops that make liquids used in e-cigarettes. The agency also stated that e-cigarette manufacturers must submit applications to the agency in order to market their products legally. Applications must include information concerning ingredients and health benefits and risks. Manufacturers were required to submit their applications to the FDA no later than May 12, 2020.
Manufacturers may market their products for one year while the FDA reviews their applications (Sharpless 2019). E-cigarettes have sharply divided the public health community. Some argue that e-cigarettes should be allowed on the market because they are a safer alternative to smoking: while e-cigarettes are not harmless, giving smokers the option of using them may reduce the harms associated with smoking. Others argue that e-cigarettes should be banned because they have health risks, such as potential carcinogenicity and adverse impacts on the pulmonary system (see Lee et al. 2018a; Tang et al. 2019), which are not well understood at this time, and because they may not be effective at helping smokers quit. Moreover, allowing the marketing of e-cigarettes may encourage adolescents to take up the nicotine habit and increase the social acceptance of vaping or smoking (Fairchild et al. 2018, 2019). However, a recent survey found that while about 20% of US adolescents vape regularly, only about 11% smoke regularly, a substantial drop from about 33% who smoked regularly in the early 2000s (Newport 2018). This survey suggests that adolescent nicotine users are vaping instead of smoking, which could significantly reduce smoking-related harms.

17 The ethical and policy issues pertaining to second-hand smoke are different from those related to first-hand smoke because exposure to second-hand smoke is usually not a voluntary choice, whereas smoking is (Resnik 2012).


6.4 Pesticides

The US has had pesticide laws since 1910, but these laws did little to protect the public or the environment from the risks of pesticides, because their main purpose was to protect consumers from inaccurate pesticide labelling. The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), adopted in 1947, required accurate labeling of pesticides. Although the law required registration of pesticides with the Department of Agriculture, it did not regulate pesticide use (Resnik 2012). Beginning in 1972, Congress passed several amendments to FIFRA to strengthen the law.

Under the current system, pesticide manufacturers must obtain approval (known as registration) from the EPA to market their products. To obtain approval, manufacturers must submit data to the EPA concerning the public health and environmental impacts of their products. Data are obtained from chemical studies and animal experiments. The label on a pesticide includes information about approved uses and safety information. The EPA has the authority to revise or cancel its approval, based on new data related to the safety of a pesticide. The EPA also has the authority, under the Federal Food, Drug, and Cosmetic Act, to establish acceptable exposure levels for pesticides on foods. Acceptable exposure levels are determined by dividing the exposure level that produces no adverse effects in laboratory animals by two safety factors of ten: one to account for variation between animals and humans, and one to account for variation among humans. In 1996, Congress passed the Food Quality Protection Act (FQPA) to provide additional protection from pesticide exposure risks for children. The FQPA added a third safety factor of ten to account for variation between children and adults. Consequently, the acceptable exposure level for pesticides on foods is 1/1000 the level at which no adverse effects are observed in laboratory animals (Resnik 2012).
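The safety-factor arithmetic described above can be sketched in a few lines of code (a minimal illustration; the function name and the example no-observed-adverse-effect level are hypothetical, not drawn from any regulatory document):

```python
# Sketch of the acceptable-exposure-level arithmetic described above.
# The NOAEL value used here is hypothetical; real values come from animal studies.

def acceptable_exposure_level(noael, protect_children=True):
    """Divide the no-observed-adverse-effect level (NOAEL) by a factor of 10
    for animal-to-human variation, 10 for human-to-human variation, and
    (under the FQPA) 10 more for child-to-adult variation."""
    level = noael / 10 / 10  # interspecies and intraspecies safety factors
    if protect_children:
        level /= 10          # additional FQPA safety factor
    return level

noael = 100.0  # hypothetical NOAEL, in mg per kg body weight per day
print(acceptable_exposure_level(noael))                          # 0.1 (1/1000 of NOAEL)
print(acceptable_exposure_level(noael, protect_children=False))  # 1.0 (1/100 of NOAEL)
```

The point of the sketch is simply that each source of uncertainty multiplies the margin of safety: two factors of ten yield 1/100 of the NOAEL, and the FQPA's third factor yields 1/1000.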
The US took steps to strengthen its pesticide laws in response to a growing awareness in the 1960s of the adverse public health and environmental effects of pesticides, especially DDT. DDT (dichlorodiphenyltrichloroethane) was developed by Austrian chemists in 1873 but was not used extensively until the 1940s, when Swiss chemist Paul Müller discovered that it was an effective insecticide. Müller won the Nobel Prize in Physiology or Medicine for demonstrating that DDT can kill a variety of insect pests, including mosquitoes, lice, and houseflies (Resnik 2012). Mosquito control has been the most important public health use of DDT, because mosquito-borne diseases, such as malaria, dengue, West Nile virus, chikungunya, yellow fever, and encephalitis, have been a major public health problem throughout human history (Resnik 2019a).

In the late 1950s and early 1960s, Rachel Carson (1907–1964), a biologist for the US Fish and Wildlife Service, documented DDT's detrimental effects on predatory birds, such as eagles, hawks, and falcons. Carson argued that DDT was killing thousands of these birds by causing egg shells to rupture before chicks are mature enough to hatch. Carson alerted the world to her concerns in her highly acclaimed, popular book, Silent Spring, published in 1962 (Resnik 2012). Carson's book helped to launch the modern environmental movement. Soon, other scientists began to investigate the


public health and environmental risks posed by DDT. It is now known that DDT is a persistent organic pollutant, which means that it does not degrade quickly in the environment and that it accumulates in the tissues of organisms higher up the food chain, such as predatory birds. DDT is also an endocrine-disrupting compound, which means that it interferes with hormone systems in the body. DDT is toxic to crayfish, shrimp, and some species of fish, and may pose health risks to human beings. Over a hundred countries have banned DDT, but some nations, mostly in Africa, still use the chemical to control mosquitoes (Resnik 2012).

More recently, glyphosate, the active ingredient in Monsanto's herbicide Roundup™, has been mired in controversy. Glyphosate is an herbicide that has been used to control weeds since the 1970s. The chemical is highly toxic to many plants because it interferes with the shikimic acid pathway, a series of chemical reactions that growing plants use to synthesize amino acids (Davoren and Schiestl 2018). As little as 10 micrograms of glyphosate can kill a plant growing in the wild (Resnik 2012). Monsanto has genetically engineered some types of crops (known as "Roundup ready" crops) to resist the effects of glyphosate, so that farmers can use the herbicide to kill weeds without harming their crops (Resnik 2012). For many years, scientists and environmental regulators did not regard glyphosate as a significant threat to human health. In recent years, however, evidence has emerged that glyphosate may increase the risk of some types of cancer. The evidence of carcinogenicity is inconclusive at present, since some studies have shown that glyphosate does not increase the risk of cancer (Davoren and Schiestl 2018).
In 2015, the World Health Organization's International Agency for Research on Cancer classified glyphosate as probably carcinogenic to humans, but other agencies, including the EPA, the European Food Safety Authority, and the European Chemicals Agency, have not drawn this conclusion (Davoren and Schiestl 2018; Environmental Protection Agency 2019b). The National Toxicology Program (2020)18 is currently conducting studies and reviewing data on the carcinogenic19 and toxicological effects of glyphosate. Over 40,000 people have sued Bayer AG, which recently acquired Roundup™'s manufacturer, Monsanto (Croft 2019).

6.5 Toxic Substances

Toxic substances are chemicals not covered by other types of chemical regulations and include molecules used in consumer products, manufacturing, industry, housing, telecommunications, electric power generation, and transportation. As mentioned in Chapter 4, the EPA regulates toxic substances under the TSCA. The US' regulation of toxic substances is much less protective than its regulation of drugs or pesticides,

18 The NTP is an organization funded by the US Department of Health and Human Services that evaluates the risks of potentially hazardous chemicals and other environmental agents.
19 A carcinogen is a substance (such as asbestos) or a physical or chemical process (such as ionizing radiation) that causes cancer.


because toxic substances do not need to undergo extensive safety testing prior to their use in commerce or industry. Although the EPA has the discretionary authority to require manufacturers to submit health and safety data prior to marketing or using new chemicals, TSCA does not require that all new chemicals meet health or safety standards. This is very different from the regulation of new drugs or pesticides, which must meet health and safety standards to receive marketing approval. Also, 62,000 chemicals that were on the market were grandfathered in when TSCA became law. The primary means by which the EPA protects the public and the environment from harms related to toxic substances is by restricting their use once evidence emerges that they pose an unreasonable risk to public health. However, taking action on chemicals only after they have been on the market for a while creates a safety gap, because it may take many years for evidence concerning public health risks to emerge (Krimsky 2017; Johnson et al. 2020).

The European Union (EU) has a system of regulation of toxic substances, known as REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals), which is more protective than the US' system (European Chemicals Agency 2020). REACH applies to all chemical substances and chemical mixtures and places the burden of proof for safety on companies. Companies are required to register the chemicals they manufacture or import in quantities totaling more than one metric ton per year with the European Chemicals Agency (ECA). They must provide information about their chemicals to the ECA, assess their potential risks, and identify methods of managing these risks. They must also inform users about risk management methods and safe uses. If the risks of a chemical cannot be managed adequately, the ECA can restrict its use or require that a safer chemical be substituted for it (European Chemicals Agency 2020).
The history of asbestos illustrates some of the risks posed by chemicals used in industry. Asbestos is a naturally occurring silicate mineral composed of thin fibers. It has been used since ancient times for its fire-retardant properties. Asbestos use became much more widespread in the 1800s, when companies mined and manufactured asbestos to sell it as a commercial product. By 1910, worldwide annual production of asbestos was 100,000 metric tons per year. Asbestos production peaked in 1977 at 4.8 million metric tons annually (Asbestos.com 2019). Asbestos has been used not only as a fire-retardant but also as an insulator for steam engines, turbines, boilers, ovens, pipes, and electrical generators, and as a building material in floors, ceilings, roofs, and walls. Asbestos has also been used in clothing, brake linings, and cement.

The adverse health impacts of asbestos were apparent early on. Greek and Roman scholars and scientists observed that slaves who mined asbestos developed lung diseases. By the 1890s, doctors had observed that asbestos factory workers had developed lung diseases. In 1906, Dr. Montague Murray at London's Charing Cross Hospital concluded, based on an autopsy, that one of his patients had died from large amounts of asbestos fibers that had accumulated in his lungs from working at an asbestos factory for 14 years (Asbestos.com 2019). By the 1930s, evidence emerged linking asbestos exposure to mesothelioma, a cancer of the lining of the lungs and chest cavity. From 1999 to 2015, there were 45,221 deaths due to mesothelioma in the US. Each year,


about 3,000 people die from the disease in the US. The latency period for developing mesothelioma after first exposure to asbestos is 20–71 years (Mazurek et al. 2017). Asbestos exposure has also been linked to cancer of the larynx and esophagus and to lung cancer. An estimated 40,000 Americans die each year as a result of asbestos exposure (Landrigan and Lemen 2019). Asbestos use has declined since the late 1970s, and 17 countries have enacted complete or partial bans on the product. In 1989, the EPA banned asbestos, but a federal court nullified the ban because the EPA had not demonstrated that a ban was the least burdensome way of controlling asbestos exposure (Landrigan and Lemen 2019). However, the EPA has taken steps to restrict the use of asbestos and may now have authority to ban it under amendments to the TSCA (Landrigan and Lemen 2019). Although newer buildings do not contain asbestos, many older buildings do, and removing the substance can be a complex and costly process.

Looking beyond asbestos, other flame retardants pose difficult ethical and policy questions for society, since flame retardants can play an important role in protecting people from imminent harm (i.e., injury or death from exposure to fire), but they may also create long-term health risks (Shaw et al. 2010). For example, some flame retardants, such as polyfluoroalkyl substances and polybrominated diphenyl ethers, can produce adverse effects on the endocrine, reproductive, nervous, and immune systems; interfere with child development; or increase the risk of cancer (Shaw et al. 2010; Environmental Protection Agency 2020). While scientists and manufacturers continue to search for and develop safer flame retardants, it may be the case that no flame retardant is perfectly safe, and policymakers will need to make difficult choices concerning short-term and long-term benefits and risks.

Bisphenol A (BPA) is another interesting case study in chemical regulation.
BPA is a carbon compound that has been used since the 1950s to manufacture plastics and resins. One of the important industrial uses of BPA is to strengthen plastics so that they do not break easily when deformed (Vogel 2009). BPA occurs in various products made of or containing plastic, such as water bottles, compact discs, toys, medical devices, water supply pipes, kitchen appliances, dental sealants, and food can linings (Resnik and Elliott 2015). BPA is one of the world's most common industrial chemicals: each year manufacturers produce about eight billion pounds of it. Human exposure to BPA is ubiquitous: the chemical has been found in blood, urine, saliva, breast milk, and amniotic fluid. Most exposure to BPA occurs when the chemical leaches from products containing it, such as food or beverage containers (Resnik and Elliott 2015).

In the 1970s, evidence from animal studies suggested that BPA might be a human carcinogen. However, a report from the NTP in 1982 concluded that BPA is not a carcinogen. In 1988, the EPA took the first steps to regulate BPA by establishing a safe dose of 50 micrograms per kilogram of body weight per day (Vogel 2009). This dose was based on dividing by 1000 the dose at which no adverse effects are observed in laboratory animals. In the 1990s, evidence began to emerge that BPA could negatively impact human health because it is an endocrine-disrupting compound that interferes with the body's estrogen system. Scientists also determined that, like hormones, BPA could affect the body even at very low doses. This idea was controversial because most


toxicologists assumed at that time that toxicity is proportional to dose and that below a low dose threshold toxicity is negligible.20 Studies of the effects of BPA in laboratory animals suggested that it could increase the risk of various human diseases, including prostate and breast cancer, type 2 diabetes, obesity, attention deficit disorder, early onset puberty in females, low sperm count, and urogenital abnormalities in male babies (Resnik and Elliott 2015). In 2007, an expert panel convened by the National Institute of Environmental Health Sciences (NIEHS) concluded, based on a review of several hundred animal studies, that BPA exposure is associated with "organizational changes in the prostate, breast, testis, mammary glands, body size, brain structure and chemistry, and behavior of laboratory animals" (vom Saal et al. 2007: 134). In 2008, the NTP again reviewed the evidence concerning the potential health risks of BPA and concluded that BPA could have adverse effects on the brain and prostate gland in fetuses, infants, and children at exposure levels less than the EPA's safe dose (Resnik and Elliott 2015). However, that same year the FDA concluded that BPA is safe at current levels in foods (Food and Drug Administration 2019b).

By 2010, Canada and European countries began to restrict the use of BPA in baby products (such as bottles and cups), and most manufacturers voluntarily stopped making baby products containing BPA. In 2012, the FDA said it would no longer approve the use of BPA in baby products, based on lack of manufacturer interest in making such products (Food and Drug Administration 2019b). Currently, manufacturers are making plastic products labelled as "BPA free" in response to consumer demands to avoid BPA exposure. However, many manufacturers are using alternatives to BPA (such as bisphenols S and F) that are chemically similar to BPA and may therefore pose the same risks to human health (Rochester and Bolden 2015).
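The contrast between the traditional threshold assumption and the low-dose concern raised by endocrine disruptors can be illustrated with a short sketch (the functions and parameters below are purely hypothetical and are not fitted to any BPA data):

```python
# Illustrative dose-response models (hypothetical parameters, not BPA data).

def threshold_response(dose, threshold=1.0, slope=2.0):
    """Traditional assumption: no effect below a threshold dose,
    then an effect proportional to dose above it."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def no_threshold_response(dose, slope=2.0):
    """No-threshold assumption: any dose, however small, has a
    proportional effect -- closer to how hormone-like chemicals
    are thought to behave at low doses."""
    return slope * dose

low_dose = 0.5  # below the assumed threshold
print(threshold_response(low_dose))     # 0.0 -> negligible under the traditional view
print(no_threshold_response(low_dose))  # 1.0 -> a nonzero effect at the same low dose
```

The regulatory significance is that a "safe dose" derived under the first model offers no margin of safety if the second model is closer to the truth.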
Engineered nanomaterials (ENMs) are an emerging challenge for chemical regulation. ENMs are materials that are typically between 1 and 100 nanometers (nm) in diameter or length. Nanomaterials are larger than atoms but smaller than the smallest materials we can see with a light microscope, such as red blood cells or cellular organelles. Some nanomaterials, such as volcanic dust or fire ash, occur naturally, while others, such as carbon nanotubes or nanosilver particles, are created by manufacturing processes. Due to their small size, nanomaterials are influenced by quantum mechanical effects and have unique chemical and physical properties that allow them to have useful applications in medicine, industry, manufacturing, transportation, telecommunications, energy production, and waste treatment (Resnik 2019b). For the last decade, toxicologists have been studying the impact of exposure to ENMs on cells, tissues, laboratory animals, and human beings, but the risks of ENMs for public and environmental health are not well understood at present (Oberdörster et al. 2005; Savolainen et al. 2010). Animal experiments and tissue culture studies have shown that carbon nanotubes are potential carcinogens and can induce inflammation, pulmonary fibrosis, and genotoxicity when inhaled, and that

20 This idea was first proposed by Swiss physician and alchemist Paracelsus (1493–1541), the father of toxicology. Paracelsus observed that almost any substance can be toxic at a certain dose and that deadly poisons may be safe at low doses (Grandjean 2016). For example, drinking too much water in a short period of time can be toxic, and ingesting a small quantity of arsenic (e.g., less than 10 parts of arsenic per billion parts of water) is not harmful.


other types of nanomaterials, such as titanium dioxide and nickel, can induce immune responses and oxidative stress (Savolainen et al. 2010; Pietroiusti et al. 2018). Of more general concern is that some nanomaterials can persist in the environment, accumulate in animal tissues, cross the blood-brain barrier, or penetrate cellular membranes (Savolainen et al. 2010). Nanomaterials pose a unique challenge for regulation because they are highly heterogeneous, with size being their only common feature (Resnik 2019b). Some ENMs may be toxic or carcinogenic, while others may be benign. Although existing laws and regulations apply to ENMs, some have argued that we need new regulations developed specifically for ENMs, given the potential threat they pose to public health and the environment and the difficulties with applying existing laws to them (Marchant and Sylvester 2006).

6.6 Air and Water Pollution

Since the industrial revolution, scientists, physicians, and public health officials have understood that air pollution, principally from the combustion of fossil fuels, such as coal and petroleum products, has negative impacts on public health and the environment. Long-term exposure to air pollution increases the risk of lung cancer, chronic obstructive pulmonary disease, and heart disease, and exacerbates asthma (Resnik 2012). Short-term exposure to high levels of certain pollutants can cause acute health problems, such as carbon monoxide poisoning, asphyxiation, and respiratory distress. For example, in December 1952, a dense cloud of air pollution known as the London Fog settled over London, England for five days. The pollution, which contained particulate matter, sulfur dioxide, and ozone, resulted mostly from the burning of coal to heat homes and provide power for factories. As many as 12,000 people died from exposure to the London Fog (Resnik 2012). On December 2, 1984, highly toxic gases that leaked from a pesticide plant in Bhopal, India killed as many as 15,000 people and injured hundreds of thousands. Several thousand died during the initial exposure to the gases, while others perished years later, after suffering from lung illnesses (Taylor 2014).

The US has enacted numerous federal and state laws since the 1970s to control air pollution. The most important of these is the Clean Air Act (CAA), which authorizes the EPA to set national ambient air quality standards for a variety of pollutants, including particulate matter, ozone, carbon monoxide, sulfur oxides, nitrogen oxides, lead, and volatile organic compounds (Resnik 2012). The CAA applies to stationary sources of pollution, such as power plants and factories, as well as mobile sources, such as automobiles. The EPA sets standards to protect the health of the general population as well as vulnerable groups, such as children and asthmatics (Resnik 2012; Resnik et al. 2018).
While most developed nations have also enacted laws to control air pollution, it is still a significant public health and environmental problem in many places in the world, such as parts of India and China (BBC News 2019; Guy 2019).


Water pollution also poses a significant threat to public health and the environment. Throughout history, contaminants in the water, including chemicals and microorganisms, have sickened and killed millions of people. Lack of safe drinking water is one of the most significant public health problems in developing nations. According to the United Nations (2019), 2.2 billion people lack access to safe drinking water. Dysentery, cholera, diarrhea, typhoid fever, and many other water-borne diseases kill millions of people each year, mostly in developing nations. About 300,000 children die each year from diarrheal diseases associated with poor drinking water (Resnik 2012). Industrial pollutants, fertilizers, phosphates, wastewater treatment products, plastics, agricultural waste, and heavy metals pose public health and environmental risks in developing and developed nations alike. Though most pollutants result from human activities, some, such as arsenic and microorganisms, are naturally occurring (Resnik 2012).

The US and many other countries have enacted laws to control water pollution. The Safe Drinking Water Act and the Clean Water Act authorize the EPA to set and enforce national water quality standards. The EPA sets acceptable levels for a variety of contaminants in public water systems, swimming pools, lakes, and watersheds. For most contaminants, the acceptable level is measured in parts per million (PPM). For example, the acceptable level of arsenic is 0.010 PPM (Environmental Protection Agency 2019a). However, for some contaminants, such as known carcinogens and lead, the acceptable level is 0 PPM (Resnik 2012). States also have their own water quality regulations and work with the EPA to protect the water supply (Resnik 2012). It is also important to note that toxic substances and pesticide regulations can help to control water pollution related to toxic substances and pesticides entering streams, rivers, ponds, and lakes (Johnson et al. 2020).
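The unit arithmetic behind such standards is straightforward. The sketch below (with a hypothetical helper function and an illustrative limit of 10 parts per billion, i.e. 0.010 PPM, matching the arsenic figure mentioned in this chapter) checks a measured concentration against a limit expressed in PPM:

```python
# Checking a measured contaminant level against a regulatory limit.
# The limit and sample values are illustrative, not authoritative.

PPB_PER_PPM = 1000  # 1 part per million = 1000 parts per billion

def exceeds_limit(measured_ppb, limit_ppm):
    """Return True if a measurement in parts per billion exceeds
    a limit expressed in parts per million."""
    return measured_ppb > limit_ppm * PPB_PER_PPM

arsenic_limit_ppm = 0.010  # illustrative limit: 10 parts per billion
print(exceeds_limit(8, arsenic_limit_ppm))   # False: 8 ppb is under the 10 ppb limit
print(exceeds_limit(15, arsenic_limit_ppm))  # True: 15 ppb exceeds it
```

Keeping units explicit matters here: a limit of 0.010 PPM and a limit of 10 ppb are the same number expressed at different scales, and confusing the two changes the standard by a factor of 1000.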

6.7 Chemicals in the Workplace

It is also important to mention that people may be exposed to hazardous chemicals in the workplace, such as lead and other heavy metals, volatile organic compounds, particulate matter, carbon monoxide, sulfur oxides, formaldehyde, petroleum products, and nanomaterials. In the US, the Occupational Safety and Health Administration (OSHA) sets national standards for exposures to chemicals in the workplace, and states have their own regulations (Resnik 2012). There are also laws that deal with safety issues related to specific industries, such as mining (Resnik 2012). While it is important to protect workers' health, policymakers cannot ignore practical and economic realities, since preventing risky chemical exposures may be unfeasible or prohibitively expensive (Resnik 2019c). Accordingly, government agencies usually set exposure levels that are "safe enough," given costs and other practical considerations.


6.8 Precautionary Reasoning and Chemical Regulation

As one can see from the preceding discussion, there are many different types of chemicals that societies regulate and many different forms of chemical regulation. The central ethical and policy question is: how much regulation is required to reasonably protect public health and the environment from the risks of chemicals? As we have seen, regulatory frameworks range from highly protective ones, such as drug laws and regulations, to minimally protective ones, such as toxic substance laws and regulations. While highly protective frameworks offer the most protection from harm, they also incur the greatest costs and inconvenience. Applying drug testing standards to all chemicals would threaten industrial and technical innovation and could seriously damage the economy (Hartung and Rovida 2009). Some types of regulatory decisions, such as banning BPA, glyphosate, or certain types of flame retardants, would pose difficult challenges for industry and society because there may not be low-risk substitutes for these chemicals (Resnik and Elliott 2015). Restrictive laws and regulations can also significantly interfere with the freedom to take risks.21 As noted above, patients with life-threatening illnesses have argued that the FDA's regulation of new drugs is too restrictive (Hawthorne 2005; Fountzilas et al. 2018). Additionally, it is lawful in many countries to buy, sell, or use alcohol or tobacco because people believe that citizens should be allowed to risk their own health by using these products. The key, then, is to find a reasonable balance among competing values, such as protecting the public and the environment from harm, benefitting individuals, society, or the economy, and respecting autonomy.

In thinking about these issues, it will be useful to consider the various ways the government can protect the public and the environment from the risks of chemicals.
A list of these forms of protection, from the least protective to the most protective (roughly), is given in Table 6.2. Most of the laws and regulations pertaining to chemical safety discussed earlier in this chapter include research and education, labeling and registration, post-market safety review, and regulatory action. One of the biggest policy issues facing most governments is whether to require manufacturers to conduct some form of pre-market or post-market testing (or both) for chemicals (Cranor 2011; Krimsky 2017). Other important issues are whether to ban, restrict, or tax certain chemicals and whether to craft laws or regulations to protect susceptible (or vulnerable) populations (such as children or asthmatics) from risks (Cranor 2011; Resnik et al. 2018). These issues involve social choices. In thinking about these choices, we can distinguish between four different levels of social decision-making:

• Choosing political leaders;
• Drafting and enacting statutory laws;
• Developing regulations that implement laws;
• Applying regulations to specific cases.

21 Finding low-risk substitutes for toxic chemicals can be a difficult problem. Often the substitutes are just as risky as the chemicals they replace (Gold and Wagner 2020).


Table 6.2 Types of government protection from chemical risks

Research and Education
• Supporting/conducting research on the public health and environmental effects of chemicals
• Supporting/conducting public education on the benefits and risks of chemicals

Labeling and Advertising
• Requiring chemical labels and advertisements to inform users about risks and safe use
• Restricting advertisements of some types of chemicals (such as tobacco products) to avoid targeting of children
• Monitoring advertisements for accuracy and truthfulness

Registration
• Requiring manufacturers to register their products with regulatory agencies

Pre-Market Research
• Requiring manufacturers to submit safety or efficacy data from scientific research to receive approval for use or marketing of chemicals; scientific research could include chemical analyses; cell/tissue, animal, or human studies; or environmental impact assessment

Post-Market Research and Safety Review
• Inspecting and auditing chemical manufacturing for quality, integrity, and consistency
• Requiring manufacturers to conduct safety studies of their products, including newly approved chemicals and grandfathered chemicals
• Reviewing scientific data on chemical risk and safety to address emerging issues and concerns and taking regulatory action, when necessary, to protect public health and the environment
• Monitoring exposure levels of chemicals in the workplace, air, water, and other environments

Regulatory Action
• Establishing acceptable exposure levels
• Restricting use
• Banning, including temporary bans or moratoria

Taxation
• Taxing chemicals (such as alcohol or tobacco) to discourage use or to internalize22 the economic, public health, or environmental costs of chemicals

22 A cost is externalized if it is generated by a producer or consumer but paid for by society (Samuelson and Nordhaus 2009). For example, if a company pollutes a river and does not pay for the cleanup, this would be an externalized cost. Making the company pay for the cleanup would be a way of internalizing the cost of the pollution.
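The internalization idea in footnote 22 can be shown with a toy calculation. The figures below are hypothetical, not from the text; the sketch simply shows that a per-unit tax equal to the external cost makes the producer's cost equal to the true social cost.

```python
# Toy illustration of internalizing an externalized cost (hypothetical numbers).
# A factory's private cost per unit is 10; each unit also imposes 4 in
# cleanup costs on society (the externalized cost).
private_cost = 10.0
external_cost = 4.0

# Without a tax, the producer sees only the private cost.
producer_cost_untaxed = private_cost

# A per-unit tax equal to the external cost "internalizes" it: the
# producer's cost now matches the true social cost of production.
tax = external_cost
producer_cost_taxed = private_cost + tax
social_cost = private_cost + external_cost

assert producer_cost_taxed == social_cost  # the cost is now internalized
print(producer_cost_taxed)  # 14.0
```

The same logic underlies the taxation row of Table 6.2: taxing a chemical at (roughly) its external cost makes producers and consumers face the full cost of using it.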


6 Chemical Regulation

In Chapter 5, I argued that decisions related to choosing political leaders should be made by democratic processes (such as voting), because democracy is a form of government that promotes respect for human dignity and autonomy, political equality, and justice. I also defended representative democracy as a form of government, because direct democracy is unworkable in larger societies. In a representative democracy, citizens can express their preference related to chemical safety by voting for political candidates who represent their views. Ultimately, decisions made by voters concerning political candidates would lead to the enactment of laws dealing with chemical risks, benefits, and precautions. I also argued in Chapter 5 that decisions concerning the development and application of regulations should be made by government agencies, with meaningful input from citizens, so that their views are adequately represented. What role, if any, could the PP play in guiding these decisions? In Chapter 5, I argued that the rationale for using the PP in social decision-making is strongest when we face a high degree of scientific or moral uncertainty (or both) concerning our choices. I also argued that the rationale for using expected utility theory (or an offshoot, such as cost-benefit analysis) is strongest when scientific and moral uncertainty are both low. In theory, the PP could provide guidance for decision-making at any of the levels described above. For example, citizens could use the PP to guide their voting, politicians could use the PP for drafting legislation, and government officials could use the PP for developing or applying regulations. In reality, however, citizens are not likely to use the PP or other rules of decision-making discussed in this book (such as maximin or expected utility theory) when voting for political candidates. 
Citizens may cast votes based on a variety of factors that are not relevant to the focus of this book, such as the candidates’ personality, moral character, race, gender, or religion. They could also cast votes based on how candidates agree (or disagree) with their values and policy positions. For example, voters could support candidates who favor environmental protection, public health, industrial innovation, economic development, and so on. If candidates have declared their support for types of chemical regulation policies, voters can make decisions based on whether they agree (or disagree) with those policies. While I think the PP could be useful in voting decisions, it is not my aim in this book to examine the finer details of voting psychology. Rather, I would like to focus on the normative question of how decision-makers (including voters and government officials) should make choices concerning chemical regulation. The perspective I will take, in this chapter and the remaining ones, is that of an ideal decision-maker, who is trying to choose what he or she sincerely believes is the best policy for society, not the perspective of a person or politician who is trying to promote his or her interests. With this perspective in mind, let’s consider some chemical regulation issues where the PP may provide some useful guidance and insight for decision-making.

6.9 Regulation of Toxic Substances


One of the key issues in the regulation of toxic substances is whether to require manufacturers to conduct pre-market research on their chemicals to obtain approval. As noted earlier, the US recently revised the TSCA so that the EPA has the discretionary authority to require manufacturers to submit safety data to obtain approval of new chemicals, but the TSCA does not require that all new toxic substances undergo pre-market testing or analysis. The EU’s REACH regulations, by contrast, require pre-market testing of new toxic substances and place the burden of proof for safety on manufacturers. However, the testing is minimal and does not rise to the level of proof one would need to obtain approval for a new pesticide or drug in the US or EU. Moreover, testing is required only if manufacturers will produce more than one metric ton of the new chemical per year. Although both approaches grant agencies extensive authority to take regulatory action on chemicals post-market, neither approach requires companies to conduct safety research post-market. Commentators have claimed that the EU’s approach to chemical regulation is more precautionary than the US’s approach, but this characterization is an oversimplification (Hansen et al. 2007; Krimsky 2017). A more accurate way of characterizing it is to say that the EU’s approach is better at avoiding chemical risks because it requires all new chemicals to pass some minimal safety standards to obtain approval. However, even this characterization may be inaccurate because the EPA could use its discretionary authority to require extensive research on chemicals thought to pose risks to public health or the environment. For example, if a new chemical is structurally similar to a known carcinogen or mutagen23 or persistent organic pollutant,24 the EPA could require the manufacturer to conduct extensive testing to receive approval.
However, since the EPA’s authority is discretionary, it could also decline to require manufacturers to conduct research on new chemicals. A great deal depends on how the EPA interprets and applies the TSCA. Another important issue in toxic substances regulation is minimizing risks post-market. The EPA and European agencies both have extensive authority to minimize chemical risks post-market, which can be achieved in several ways, such as: requiring manufacturers to conduct post-market safety testing of chemicals, sponsoring research on chemical risks, inspecting and auditing chemical production for quality control, or restricting the use of chemicals based on data pertaining to health and safety. The PP would seem to apply to making decisions about laws and regulations concerning toxic substances, because the evidence concerning the outcomes of different policy options is inconclusive and the value disagreement concerning risk and benefit issues is high (Ackerman 2008). It is difficult to accurately and precisely estimate probabilities concerning the impacts of different legal/regulatory frameworks on public health, the environment, or the economy. Also, making trade-offs

23 A mutagen is a substance (such as asbestos) or a physical or chemical process (such as ionizing radiation) that causes genetic mutations. While most genetic mutations have no significant impact on the organism, others can cause cancer or other diseases.
24 A persistent organic pollutant is an organic compound, such as DDT, that does not degrade quickly in the environment (Resnik 2012).


between competing values (such as promoting public health and safety, protecting the environment, and stimulating economic growth) is controversial. If we apply the PP to toxic substance risk issues, it would recommend that we take reasonable precautions to minimize these risks. Reasonable precautions could include forms of risk avoidance, minimization, or mitigation. What counts as reasonable would be a function of the criteria discussed in Chapter 4: proportionality, fairness, epistemic responsibility, and consistency. Proportionality would advise us to balance benefits and risks proportionally. The balance of benefits and risks would depend on the strength of the laws/regulations. Highly protective laws concerning toxic substances would minimize risks to public health and the environment but would also negatively impact industry and the economy. Permissive laws/regulations could benefit industry and the economy but increase risks to public health and society.25 Reasonable precautions would seem to fall somewhere between these two extremes. Reasonable precautions could include pre-market measures (such as pre-market safety testing), post-market ones (such as post-market research, surveillance, and monitoring), and others (such as research, education, and product labelling). Pre-market research requirements are likely to be the most controversial form of legal/regulatory control because these requirements could have the most significant public health, environmental, or economic impacts. As noted above, lack of pre-market testing can create a safety gap, because chemicals are regulated once evidence of harm emerges but not beforehand. One could argue that a reasonable balance of benefits and risks would include some form of pre-market safety research which is more stringent than mere registration but less stringent than the testing required for new drugs or pesticides.
As we have seen in this chapter, pre-market research could include chemical or tissue studies or animal or human experiments, or environmental impact assessments. Requiring manufacturers to submit data from all these types of research in order to obtain approval for their chemicals would maximize safety but greatly increase costs to industry and the economy.26 One could argue that pre-market research on toxic substances does not need to be as thorough as research on drugs or pesticides because exposures to drugs or pesticides are usually more invasive (i.e. invasive of the body) and extensive (in quantity and duration) than exposures to toxic substances. One could argue that a reasonable balance of risks and benefits would be to require some basic safety research for all chemicals that will be produced in greater than a specified quantity (such as one metric ton), with additional testing for chemicals thought to pose special public health or environmental risks, such as substances which are chemically similar to known mutagens or carcinogens.27

25 It is worth noting that adverse health effects of chemicals can have negative economic consequences, such as increased health care costs and decreased worker productivity.
26 Testing toxic substances on human beings raises ethical issues as well, because the volunteers could be exposed to risks without receiving any off-setting benefits, such as medical treatment (see Resnik 2018a).
27 This proposal would combine US and EU approaches to pre-market research.


Concerning fairness, one could argue that to promote distributive fairness laws/regulations should include some extra protections for susceptible populations, such as children or adults with chronic illnesses, so that they do not bear an unfair burden of chemical risks. However, as noted earlier, determining the appropriate level of protection is likely to be controversial, because of the potential economic impacts of such protections. As noted in Chapter 4, the TSCA includes a provision that requires the EPA to consider impacts on susceptible populations in its risk assessment. To satisfy procedural fairness, laws/regulations should be developed with meaningful input from the public as well as stakeholders (such as public health or environmental groups and industry) or directly impacted communities (such as people living or working near sources of toxic substances). Meaningful input can be acquired by the engagement processes described in Chapter 5. Concerning epistemic responsibility, applications of regulations to specific cases should be based on up-to-date scientific evidence, knowledge, and expertise. To promote epistemically responsible decision-making, the government could support research on chemical safety and require companies to conduct post-market safety studies, where warranted.28 Regulatory decisions may need to be revised in light of emerging evidence or knowledge concerning risks and safe uses. For example, if evidence emerges that a chemical poses a significant risk to public health or the environment, a regulatory agency could take action to restrict the use of the chemical or even ban it. Concerning consistency, laws/regulations should treat similar cases similarly and different cases differently. For example, if two chemicals, A and B, have the same risk/benefit profile, they should be regulated in the same way, e.g. restrictions imposed on A should also apply to B.

6.10 Regulation of Drugs

Policies related to approval of new drugs are firmly established in most countries, and there is little dispute that companies must conduct extensive research involving chemical, animal, and human studies prior to receiving marketing approval for new drugs. However, there are some lingering issues related to the amount of evidence needed to make some types of drugs available to the public. As noted earlier, the FDA’s expanded access policy lowers the threshold of evidence needed to make drugs available to patients who are suffering from life-threatening illnesses for which no other effective treatments are available. While drugs ordinarily must go through three phases of testing prior to marketing, drugs used to treat patients with life-threatening illnesses for which there are no effective treatments can be made available after completing Phase I testing or with no testing at all. In responding to the COVID-19

28 Post-market research would be warranted if there is evidence that a chemical could produce harms that need to be studied more carefully. For example, if evidence suggests that an approved chemical is a possible human carcinogen, additional research would be warranted.


pandemic, for example, the FDA granted EUAs to treatments and vaccines before they had completed Phase III clinical trials (Food and Drug Administration 2021a). Before considering these issues, we should put drug regulation into ethical perspective. The main argument for controlling access to drugs is to protect the public’s health. Policies related to testing, approving, labelling, and prescribing have been enacted to protect people from the harmful effects of drugs and to promote their welfare. These policies restrict individual freedom and autonomy for the sake of the public good (Gostin 2007; Schüklenk and Lowry 2009). Many of the issues29 related to drug regulation reflect the inherent tension between individual rights and the common good. Historically, the main argument for expanded access policies has been that they promote patients’ rights to try potentially life-saving medications.30 Since the 1980s, patients have argued that they should have the right to try potentially life-saving experimental drugs, and that the government does not have the moral authority to restrict this right. Patients have argued that the right to try experimental drugs is based on broader moral and legal rights to bodily freedom and autonomy. If competent adults should be allowed to smoke, drink, skydive, and take other risks, they should also be allowed to try experimental drugs, especially drugs that are potentially life-saving (Robertson 2006; Leonard 2009; Schüklenk and Lowry 2009). In an important US federal court case, Abigail Alliance for Better Access to Developmental Drugs v. von Eschenbach (2007), the plaintiffs argued that the FDA’s policy of not making potentially life-saving drugs that had completed Phase I testing available to patients was unconstitutional because it violated the right to due process. There are several arguments for restricting access to experimental drugs, however.
From a public health perspective, rigorous testing of drugs is important to promote the public’s health. Patients who try experimental drugs that have not been thoroughly tested may suffer adverse effects, including death. The benefits patients receive from these drugs (if any) may not offset harms. Indeed, the entire system of drug regulation and oversight rests on the assumption that the state has the moral and legal authority to restrict access to drugs to protect the public’s health (Leonard 2009; Darrow et al. 2015; Fountzilas et al. 2018). Additionally, providing experimental drugs to patients who are not participating in clinical trials may undermine RCT recruitment because patients may not want to participate in medical research if they can get access to drugs off-study (Leonard 2009; Darrow et al. 2015; Fountzilas et al. 2018). Finally, drug manufacturers may not want to make their drugs available to the public until they have completed clinical testing and received regulatory approval, due to liability concerns and the potential for negative publicity (Leonard 2009; Darrow et al. 2015). Concerning the Abigail Alliance case, the courts ultimately sided with the defendant

29 These include issues related to the legality of recreational drugs, such as marijuana, and the regulation of medical drugs. I will not examine all these issues here.
30 I use the word ‘historically’ here because during the COVID-19 pandemic promoting public health has been a key consideration in making experimental treatments available to the public. I will return to this issue in Chapter 9.


(von Eschenbach, who was the FDA commissioner). The courts weighed the interests in this case and decided that the state’s interest in promoting public health outweighed the plaintiffs’ interest in access to drugs. During the Ebola epidemic in West Africa from 2014 to 2016, in which 11,325 out of 28,600 infected people died (Centers for Disease Control and Prevention 2019a), public health advocates argued that patients should receive access to unproven therapies. The World Health Organization declared the Ebola epidemic to be a public health emergency and said that unproven treatments which have shown some promise in the laboratory and in animal models should be made available to Ebola patients (Calain 2018). During the Ebola outbreak, clinical trials of treatments and vaccines took place while patients received unproven therapies without participating in a study. The FDA approved an Ebola vaccine in 2019 but has not approved any drugs to treat the disease (Centers for Disease Control and Prevention 2019b). During the COVID-19 pandemic, the FDA granted EUAs to drugs (such as remdesivir) and vaccines (such as RNA vaccines) that had not completed clinical testing (Food and Drug Administration 2021a, b, c). The issue of access to potentially life-saving medical treatments does not lend itself to a satisfactory resolution by means of expected utility theory or related decision-making rules, due to the inconclusiveness of scientific evidence concerning the outcomes of different policy options and substantial disagreement about values (e.g. patients’ rights, public health). The PP, however, can provide a useful approach to these issues. The PP would advise us to take reasonable precautionary measures to avoid, minimize, or mitigate the risks of experimental drugs. Two key considerations in thinking about these issues are the proportionality of risks to benefits and the fairness of the distribution of risks and benefits.
Some of the risks and benefits related to access to experimental medications include: (1) risks to patients of taking experimental drugs that have not completed clinical testing; (2) risks to clinical research and public health of allowing patients to have access to drugs before they have completed clinical testing (or the benefits of restricting access); (3) benefits to patients of having access to experimental drugs that have not completed clinical testing (or the risks of not having access); (4) benefits to public health of making experimental drugs available that have not completed clinical testing; and (5) risks and benefits to pharmaceutical companies which are making drugs available to patients before they have completed clinical testing. If we only consider potential risks and benefits to patients, then proportionality and fairness would support making experimental drugs available to patients with life-threatening diseases, provided that drug manufacturers agree to this option. If I am dying from an illness that has no treatment, it may be reasonable for me to take an experimental medication that could save my life, even though it could kill me (Schüklenk and Lowry 2009). It would not be reasonable to try a drug that has no chance of helping me and can only hurt me, even if I am dying. Thus, there must be at least credible evidence that the medication is safe and effective enough to be


worth taking.31 Furthermore, limiting access to drugs unfairly prevents dying patients from trying potentially life-saving drugs in order to benefit other (future) patients. A policy of restricting access may sacrifice the interests of these patients for the good of future patients. However, proportionality and fairness would favor restricting access to life-saving, experimental medications if we also consider benefits and risks to society. Although restricting access to experimental medications until testing is complete denies current patients important and potentially life-saving benefits, it is likely to benefit far more future patients by ensuring that the drugs are safe and effective (London and Kimmelman 2020). Moreover, one might argue that it is fair to restrict access to some people in order to benefit others, given the significant public health benefits of promoting research and ensuring drug safety. That being said, the balance may shift from restriction to access if thousands or even hundreds of thousands of patients are seeking access to a potentially life-saving experimental treatment, as may occur in a pandemic or severe epidemic. One way of handling these complex trade-offs among risks and benefits is to pursue policies that provide access without compromising scientific rigor. A reasonable option is to conduct RCTs while making treatments that have not completed clinical testing available to people off-study, which happened in the Ebola epidemic and in the COVID-19 pandemic (National Academies of Sciences, Engineering and Medicine 2017; Food and Drug Administration 2021b). In the COVID-19 pandemic, the FDA made vaccines available to the public through an EUA, based on preliminary data from Phase III testing (Food and Drug Administration 2021b). The clinical trials continued while the vaccine was made available. Another reasonable option is to modify clinical trial designs, without compromising rigor, to accelerate research (Calain 2018).
For example, researchers could accelerate research by conducting small studies that look for large effects, such as survival. Smaller studies generally take less time to conduct than larger studies and can achieve statistical significance if the effect size is large enough. A third option is to enroll patients in non-randomized, uncontrolled trials and evaluate safety and efficacy by comparing outcomes for these patients to outcomes for patients who receive other types of treatment or no treatment at all. Although RCTs are the gold standard for medical research, it may be possible to obtain useful data from studies that fall short of this standard if they include methods for minimizing bias. An uncontrolled study could provide evidence of a possible treatment effect that could be investigated further by an RCT (Perucca and Wiebe 2016). However, drug approval decisions should ultimately be based on data from RCTs. While pursuing these three options, it is important to ensure that RCT recruitment is not undermined. In most cases, patients will be willing to enroll in RCTs even when they have other options, because they want access to therapies that could save their lives. The FDA’s decisions to make COVID-19 vaccines available through EUAs did not significantly impact clinical trial recruitment because tens of thousands of patients were already enrolled in Phase III RCTs.

31 See discussion of plausibility, credibility, and veracity in Chapter 4.
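The claim that small trials can detect large effects follows from the standard normal-approximation sample-size formula for a two-arm comparison of means, n ≈ 2(z₁₋α/₂ + z₁₋β)²/d² per group, where d is the standardized effect size. The following sketch (not from the text; the effect sizes are illustrative) shows how sharply the required enrollment falls as the effect grows:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for a two-arm trial comparing means,
    using the normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(n)

# A dramatic effect (d = 1.0, e.g. a large survival benefit) needs only a
# small trial; a modest effect (d = 0.2) needs a far larger one at the
# same significance level and power.
print(n_per_group(1.0))  # 16 patients per group
print(n_per_group(0.2))  # 393 patients per group
```

At conventional α = 0.05 and 80% power, the required enrollment differs by roughly a factor of 25 between these two effect sizes, which is why trials looking for survival-sized effects can be completed quickly in an emergency.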


Moving beyond the issue of access to life-saving drugs, the PP would lend some insight into the amount of evidence needed to approve other drugs or medical products, such as biologics or medical devices. The PP implies that the amount of evidence needed to approve medical products or make them available to the public is partly a function of the potential benefits and harms of approval for patients and society (i.e. proportionality). When the benefits to patients are very high (e.g. potentially life-saving), a medical product may be approved or made available with minimal testing even though the potential harms may be great as well, as has happened with HIV/AIDS, Ebola, and COVID-19 medications. When the potential harms of a medical product are very low, it could also be made available with minimal testing, if it has potential benefits for patients and the impacts on society of approval are minimal. For example, the amount of evidence needed to obtain FDA approval of non-significant risk medical devices (such as daily wear contact lenses or dentures) is much lower than the amount of evidence needed to obtain approval of significant risk devices (such as pacemakers or surgical lasers), because non-significant risk devices do not create significant risks for patients (Food and Drug Administration 2006). More extensive testing could be required for products where risks are significant but potential benefits are not very high. For example, extensive testing could be required for a new medication to treat hypertension, since the medication may involve significant risks and the benefits of the medication are not likely to be very compelling, given that there are already effective treatments for hypertension on the market. Issues concerning the amount of evidence needed for approval or access frequently arise in the regulation of many different types of medical products other than drugs.
For example, in recent years controversies have arisen concerning the approval of and access to genetic tests (Evans and Watson 2015), fecal matter transplants (Smith et al. 2014), and stem cells (Sipp 2018). I will not examine these issues in this book but would like to suggest that the PP may offer a useful perspective on these issues as well. Before concluding this section, I would like to mention some important issues related to minimizing and mitigating the risks of drugs. As noted previously, there are several shortcomings in the US’s drug safety system that compromise its ability to protect the public’s health. The PP would support reasonable measures to improve the drug safety system, such as:
• Requiring more manufacturers to conduct long-term studies of the health effects of their drugs;
• Sponsoring more independent long-term studies of the health effects of drugs;
• Encouraging and incentivizing health care professionals to report adverse drug effects to the MedWatch program;
• Encouraging medical boards to more closely monitor off-label prescribing;
• Overseeing medical advertisements to ensure that the information conveyed is truthful, accurate, and understandable to the public;
• Taking additional steps to promote integrity in clinical research, such as developing and enforcing rules pertaining to data fabrication/falsification, conflict of


interest, and authorship; conducting independent audits of research records; and enforcing requirements for clinical trial registration.

6.11 Regulation of Electronic Cigarettes

The PP can also be applied to regulation of e-cigarettes. Since e-cigarettes have been on the market for only about a dozen years, scientific evidence concerning the benefits and risks of e-cigs is inconclusive and incomplete. Although health researchers are beginning to identify and describe some of the short-term adverse effects of e-cigarettes, it may take decades to fully understand the risks of these products. Policy decisions must be made, however, despite considerable scientific uncertainty concerning the risks and potential benefits of e-cigarettes. To apply the PP to this case, we should consider which precautionary measures would be reasonable. As in the previous cases, reasonableness would be a function of proportionality, fairness, epistemic responsibility, and consistency. There are four basic policy options for dealing with the risks of e-cigarettes: taking no action; banning them; regulating them; and taxing them. Regulation and taxation could be pursued at the same time. To evaluate the reasonableness of these options, we should consider the potential benefits of e-cigarettes, such as helping smokers to stop using tobacco altogether or switch from regular cigarettes to e-cigarettes; and the risks of electronic cigarettes, such as increasing nicotine use and addiction among adolescents, and short-term and long-term adverse health effects among people who vape. In thinking about these benefits and risks, it is important not to underestimate the potential positive impact e-cigarettes could have on public health. As noted earlier, smoking takes an enormous toll on human health. Although e-cigarettes are not harmless, it is plausible to assume at this point that they are much less harmful than regular cigarettes, since they do not expose people who vape to the particulate matter and hundreds of carcinogens in tobacco smoke.
If e-cigarettes were only marginally effective at helping smokers to quit smoking, they could still potentially save thousands of lives per year and substantially reduce the social and economic costs of smoking. Though they are far from harmless, e-cigarettes reduce the harm of smoking (Fairchild et al. 2018, 2019). Banning e-cigarettes would require society to forego an important and potentially substantial benefit. Thus, one could argue that banning e-cigarettes would be an unreasonable precaution because it would not balance risks and benefits proportionally. Banning e-cigarettes would also violate the consistency condition, since regular cigarettes are legal, and we have substantial evidence of their health risks. Doing nothing would also be an unreasonable precaution because it would not adequately address the risks of vaping. Some form of regulation (possibly combined with taxation) would seem to be the most reasonable policy approach for addressing the risks of e-cigarettes. Though regulation seems to be the most reasonable option, there are numerous questions concerning regulation that need to be addressed. The first of these deals


with the level of evidence that is required for approval of e-cigarettes. Requiring manufacturers to submit the type of evidence required for approval of new drugs would be prohibitively expensive for most companies, especially smaller ones, and would be tantamount to banning e-cigarettes. Some form of evidence for safety beyond mere chemical testing would seem to be reasonable, such as evidence from human tissue or animal studies or observational studies of human subjects who vape.32 Manufacturers could also be required to conduct additional research on products that receive marketing approval. Another important question would be whether to regulate the chemicals used in e-cigarettes, such as nicotine or various flavors. Limits on the levels of nicotine contained in e-cigarettes would seem to be a reasonable way of minimizing the addictive effects of these products, and limits on flavorings could help reduce the risk of use by adolescents, who tend to prefer fruity and sweet flavors, rather than the flavors associated with regular cigarettes (Leventhal et al. 2019). Restrictions on marketing would also seem to be a reasonable type of regulation to prevent companies from marketing to adolescents,33 and government inspections and audits of manufacturers for quality control could help to reduce the risks of e-cigarettes. Taxation could generate revenue that could be used to support research on the health effects of vaping and government programs related to e-cigarette regulation. Epistemic responsibility would be paramount in e-cigarette regulation because policies should reflect up-to-date research on the health effects of these products and could be modified accordingly. For example, government agencies could impose new restrictions based on emerging data pertaining to risks or relax some restrictions if the data support such a policy.

6.12 Protecting Susceptible Populations from Chemical Risks

As we have seen several times in this book, considerations of distributive and procedural fairness have an important bearing on the reasonableness of risks. These issues become paramount in chemical regulation when an exposure adversely impacts some members of the population more than others (Resnik et al. 2018). For example, exposure levels of ozone that do not significantly impact healthy people may produce negative health effects in people with lung disease, such as asthma or emphysema (Resnik et al. 2018). Children are more adversely impacted by exposure to lead, pesticides, air pollutants, and other chemicals than adults, due to the sensitivity of their developing brains and bodies (Miller and Marty 2017). As noted earlier in this chapter, pesticide, toxic substance, and air pollution regulations include protections for susceptible (or vulnerable) populations. To decide whether an exposure level for the entire population is reasonable, regulators must consider the proportionality of risks and benefits as well as their distribution. Some cases may involve ethically challenging trade-offs between fairness and proportionality. For example, strengthening air quality standards for ozone to protect asthmatics from health risks may negatively impact the overall economy (Resnik et al. 2018). The PP can be a useful tool for dealing with these issues because it addresses proportionality and fairness. Fairness would require that the risks of chemicals are distributed fairly and that fair procedures are used for making distribution decisions.

32 Controlled experiments with human subjects would be ethically controversial because participants would be exposed to a potentially harmful substance without the prospect of direct medical benefit. These would be similar, in some ways, to pesticide testing on healthy subjects. See Resnik and Portier (2005), Resnik (2018a).
33 As noted earlier, age restrictions for purchasing tobacco products are already in place.

6.13 Expected Utility Theory and Chemical Regulation

So far in this chapter, I have focused on the usefulness of the PP in making decisions concerning chemical regulation and have downplayed the importance of expected utility theory (EUT) and its offshoots, such as cost-benefit analysis and risk-benefit assessment. However, I do not mean to imply that EUT does not or cannot play an important role in these decisions. Instead, I would like to reemphasize key points, made in Chapters 1, 4, and 5: that the PP complements other decision-making rules and that the rule we should choose depends on the contextual factors (or conditions) related to the decision. EUT can be a very valuable decision-making tool when we have reliable scientific evidence concerning the probabilities related to the outcomes of different options and substantial moral agreement concerning the desirability of those outcomes. For example, if a regulatory agency is deciding whether to approve a new drug, and it has reliable evidence concerning the health benefits and risks34 of approving or not approving the drug, and there is substantial moral agreement about the importance of these benefits and risks, then the agency could make the decision that maximizes health benefits and minimizes health risks. Or suppose that a regulatory agency has compelling evidence that a type of restriction on the use of a particular chemical is likely to save 300 lives per year, a reliable estimate of the economic costs of the restriction, and a sound and morally acceptable methodology for estimating the economic value of these lives.35 Under these circumstances, the agency could use EUT to decide whether to apply that restriction to the chemical.

34 Benefits and risks could include: effects on mortality (e.g. death, survival, survival time), morbidity (e.g. alleviation of disease, symptoms, side effects), and impact on quality of life.
35 This is likely to be a controversial assumption. See discussions of estimating the economic value of human life in Chapters 2 and 3.
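The 300-lives example above can be sketched numerically. All figures in the sketch below (the value per statistical life and the compliance cost) are hypothetical assumptions for illustration, not estimates from the text, and `expected_utility` is a helper defined here.

```python
# Hypothetical sketch of an expected-utility comparison for a chemical
# restriction. All numbers are illustrative assumptions, not data from
# the text.

def expected_utility(outcomes):
    """Expected utility: sum of probability * utility over the outcomes."""
    return sum(p * u for p, u in outcomes)

VSL = 10_000_000        # assumed value of a statistical life, in dollars
LIVES_SAVED = 300       # assumed annual lives saved by the restriction
COMPLIANCE_COST = 2e9   # assumed annual economic cost of compliance

# Option A: impose the restriction. For simplicity the estimates are
# treated as certain; probabilities could instead be spread over
# optimistic and pessimistic scenarios.
eu_restrict = expected_utility([(1.0, LIVES_SAVED * VSL - COMPLIANCE_COST)])

# Option B: no restriction (no lives saved, no compliance cost).
eu_no_restrict = expected_utility([(1.0, 0.0)])

choice = "restrict" if eu_restrict > eu_no_restrict else "do not restrict"
print(choice, eu_restrict - eu_no_restrict)  # restrict 1000000000.0
```

Under these assumed figures the restriction's expected utility exceeds that of inaction, so EUT would recommend it; with different (equally defensible) values for a statistical life or higher compliance costs, the ranking could reverse, which is one reason the valuation methodology matters morally.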

6.14 Conclusion

In this chapter, I have described some different types of chemical regulation and argued that the PP can be a useful rule for making chemical regulation decisions when we face scientific uncertainty concerning the likely outcomes of different options, moral uncertainty concerning the preferability of different outcomes, or both. The PP complements other decision-making rules, and it may be reasonable to change rules as conditions change. Although regulators often use decision frameworks based on expected utility in decision-making, they could use the PP if conditions warrant. In this chapter, I have shown how the PP can be applied to controversies concerning the regulation of toxic substances, drugs, and electronic cigarettes, and issues concerning protecting vulnerable populations from chemical risks. In theory, one could also apply the PP to the regulation of other chemicals, such as pesticides,36 nanomaterials,37 dietary supplements, and recreational drugs, but I will not develop the argument for that assertion here.

References

Abdel-Rahman, A., N. Anyangwe, L. Carlacci, S. Casper, R.P. Danam, E. Enongene, G. Erives, D. Fabricant, R. Gudi, C.J. Hilmas, F. Hines, P. Howard, D. Levy, Y. Lin, R.J. Moore, E. Pfeiler, T.S. Thurmond, S. Turujman, and N.J. Walker. 2011. The Safety and Regulation of Natural Products Used as Foods and Food Ingredients. Toxicological Science 123 (2): 333–348.
Abigail Alliance for Better Access to Developmental Drugs v. von Eschenbach. 2007. 495 F.3d 695 (DC Circuit 2007).
Ackerman, F. 2008. Poisoned for Pennies: The Economics and Toxics of Precaution. Washington, DC: Island Press.
Alcohol.org. 2020. Alcohol Laws and Regulations. Available at: https://www.alcohol.org/laws/. Accessed 17 Jan 2020.
Asbestos.com. 2019. The History of Asbestos. Available at: https://www.asbestos.com/asbestos/history/. Accessed 18 Jan 2021.
BBC News. 2019. India Air Pollution at ‘Unbearable Levels’, Delhi Minister Says. BBC News, November 4. Available at: https://www.bbc.com/news/world-asia-india-50280390. Accessed 18 Jan 2021.
Bombardier, C., L. Laine, A. Reicin, D. Shapiro, R. Burgos-Vargas, B. Davis, R. Day, M.B. Ferraz, C.J. Hawkey, M.C. Hochberg, T.K. Kvien, T.J. Schnitzer, and VIGOR Study Group. 2000. Comparison of Upper Gastrointestinal Toxicity of Rofecoxib and Naproxen in Patients with Rheumatoid Arthritis. New England Journal of Medicine 343 (21): 1520–1528.
Calain, P. 2018. The Ebola Clinical Trials: A Precedent for Research Ethics in Disasters. Journal of Medical Ethics 44 (1): 3–8.
Cambridge Dictionary. 2020. Chemical. Available at: https://dictionary.cambridge.org/us/dictionary/english/chemical. Accessed 18 Jan 2021.

36 See Resnik (2012).
37 See Elliott (2011).


Centers for Disease Control and Prevention. 2019a. 2014–2016 Ebola Outbreak in West Africa. Available at: https://www.cdc.gov/vhf/ebola/history/2014-2016-outbreak/index.html. Accessed 18 Jan 2021.
Centers for Disease Control and Prevention. 2019b. Ebola. Available at: https://www.cdc.gov/vhf/ebola/index.html. Accessed 18 Jan 2021.
Centers for Disease Control and Prevention. 2020a. Alcohol Use and Your Health. Available at: https://www.cdc.gov/alcohol/fact-sheets/alcohol-use.htm. Accessed 18 Jan 2021.
Centers for Disease Control and Prevention. 2020b. Health Effects of Cigarette Smoking. Available at: https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/. Accessed 18 Jan 2021.
Cohen, P.A., and S. Bass. 2019. Injecting Safety into Supplements: Modernizing Dietary Supplement Law. New England Journal of Medicine 381 (25): 2387–2389.
Cranor, C. 2011. Legally Poisoned: How the Law Puts Us at Risk from Toxicants. Cambridge, MA: Harvard University Press.
Croft, A. 2019. As Roundup Lawsuits Pile Up by the Thousands, Bayer Remains Defiant. Fortune, October 30. Available at: https://fortune.com/2019/10/30/roundup-lawsuits-bayer-defiant/. Accessed 18 Jan 2021.
Curfman, G.D., S. Morrissey, and J.M. Drazen. 2006. Expression of Concern Reaffirmed. New England Journal of Medicine 354 (11): 1193.
Darrow, J.J., A. Sarpatwari, J. Avorn, and A.S. Kesselheim. 2015. Practical, Legal, and Ethical Issues in Expanded Access to Investigational Drugs. New England Journal of Medicine 372 (3): 279–286.
Darrow, J.J., J. Avorn, and A.S. Kesselheim. 2014. New FDA Breakthrough-Drug Category–Implications for Patients. New England Journal of Medicine 370 (13): 1252–1258.
Davoren, M.J., and R.H. Schiestl. 2018. Glyphosate-Based Herbicides and Cancer Risk: A Post-IARC Decision Review of Potential Mechanisms, Policy and Avenues of Research. Carcinogenesis 39 (10): 1207–1215.
Elliott, K.C. 2011. Nanomaterials and the Precautionary Principle. Environmental Health Perspectives 119 (6): A240.
Environmental Protection Agency. 2019a. Drinking Water Requirements for States and Public Water Systems. Available at: https://www.epa.gov/dwreginfo/chemical-contaminant-rules. Accessed 18 Jan 2021.
Environmental Protection Agency. 2019b. EPA Takes Next Step in Review Process for Herbicide Glyphosate, Reaffirms no Risk to Public Health. Press Release, April 30. Available at: https://www.epa.gov/newsreleases/epa-takes-next-step-review-process-herbicide-glyphosate-reaffirms-no-risk-public-health. Accessed 18 Jan 2021.
Environmental Protection Agency. 2020. Basic Information on PFAS. Available at: https://www.epa.gov/pfas/basic-information-pfas. Accessed 18 Jan 2021.
European Chemicals Agency. 2020. Understanding REACH. Available at: https://echa.europa.eu/regulations/reach/understanding-reach. Accessed 18 Jan 2021.
Evans, J.P., and M.S. Watson. 2015. Genetic Testing and FDA Regulation: Overregulation Threatens the Emergence of Genomic Medicine. Journal of the American Medical Association 313 (7): 669–670.
Fairchild, A.L., C. Healton, J. Curran, D. Abrams, and R. Bayer. 2019. Evidence, Alarm, and the Debate Over e-Cigarettes. Science 366 (6471): 1318–1320.
Fairchild, A.L., J.S. Lee, R. Bayer, and J. Curran. 2018. E-Cigarettes and the Harm-Reduction Continuum. New England Journal of Medicine 378 (3): 216–219.
Food and Drug Administration. 2006. Information Sheet Guidance for IRBs, Clinical Investigators, and Sponsors: Significant Risk and Nonsignificant Risk Medical Device Studies. Available at: https://www.fda.gov/media/75459/download. Accessed 19 Jan 2021.
Food and Drug Administration. 2019a. What Does FDA Regulate? Available at: https://www.fda.gov/about-fda/fda-basics/what-does-fda-regulate. Accessed 19 Jan 2021.
Food and Drug Administration. 2019b. Bisphenol A (BPA): Use in Food Contact Application. Available at: https://www.fda.gov/food/food-additives-petitions/bisphenol-bpa-use-food-contact-application#summary. Accessed 19 Jan 2021.


Food and Drug Administration. 2019c. Milestones in U.S. Drug Law History. Available at: https://www.fda.gov/about-fda/fdas-evolving-regulatory-powers/milestones-us-food-and-drug-law-history. Accessed 19 Jan 2021.
Food and Drug Administration. 2020a. What Are “Biologics”—Questions and Answers. Available at: https://www.fda.gov/about-fda/center-biologics-evaluation-and-research-cber/what-are-biologics-questions-and-answers. Accessed 19 Jan 2021.
Food and Drug Administration. 2021a. Emergency Use Authorization. Available at: https://www.fda.gov/emergency-preparedness-and-response/mcm-legal-regulatory-and-policy-framework/emergency-use-authorization. Accessed 8 Jan 2021.
Food and Drug Administration. 2021b. COVID-19 Vaccines. Available at: https://www.fda.gov/emergency-preparedness-and-response/coronavirus-disease-2019-covid-19/covid-19-vaccines. Accessed 8 Jan 2021.
Food and Drug Administration. 2021c. COVID-19 Update: FDA Broadens Emergency Use Authorization for Veklury (Remdesivir) to Include All Hospitalized Patients for Treatment of COVID-19. Available at: https://www.fda.gov/news-events/press-announcements/covid-19-update-fda-broadens-emergency-use-authorization-veklury-remdesivir-include-all-hospitalized. Accessed 9 Jan 2021.
Fountzilas, E., R. Said, and A.M. Tsimberidou. 2018. Expanded Access to Investigational Drugs: Balancing Patient Safety with Potential Therapeutic Benefits. Expert Opinion in Investigational Drugs 27 (2): 155–162.
Gassman, A.L., C.P. Nguyen, and H.V. Joffe. 2017. FDA Regulation of Prescription Drugs. New England Journal of Medicine 376 (7): 674–682.
Geller, A.I., N. Shehab, N.J. Weidle, M.C. Lovegrove, B.J. Wolpert, B.B. Timbo, R.P. Mozersky, and D.S. Budnitz. 2015. Emergency Department Visits for Adverse Events Related to Dietary Supplements. New England Journal of Medicine 373 (16): 1531–1540.
Gershwin, M.E., A.T. Borchers, C.L. Keen, S. Hendler, F. Hagie, and M.R. Greenwood. 2010. Public Safety and Dietary Supplementation. Annals of the New York Academy of Sciences 1190: 104–117.
Gold, S.C., and W.E. Wagner. 2020. Filling Gaps in Science Exposes Gaps in Chemical Regulation. Science 368 (6495): 1066–1068.
Goozner, M. 2004. The $800 Million Pill: The Truth Behind the Costs of New Drugs. Berkeley, CA: University of California Press.
Gostin, L.O. 2007. General Justifications for Public Health Regulation. Public Health 121 (11): 829–834.
Grandjean, P. 2016. Paracelsus Revisited: The Dose Concept in a Complex World. Basic Clinical Pharmacology and Toxicology 119 (2): 126–132.
Guy, J. 2019. China Has Saved Hundreds of Thousands of Lives by Reducing Air Pollution, Study Says. CNN, November 19. Available at: https://www.cnn.com/2019/11/19/asia/china-air-pollution-study-scli-intl-scn/index.html. Accessed 19 Jan 2021.
Hartung, T., and C. Rovida. 2009. Chemical Regulators Have Overreached. Nature 460 (7259): 1080–1081.
Hawthorne, F. 2005. Inside the FDA: The Business and Politics Behind the Drugs We Take and the Food We Eat. New York, NY: Wiley.
Johnson, A.C., X. Jin, H. Nakada, and J.P. Sumpter. 2020. Learning from the Past and Considering the Future of Chemicals in the Environment. Science 367 (6476): 384–387.
Krimsky, S. 2017. The Unsteady State and Inertia of Chemical Regulation Under the US Toxic Substances Control Act. PLoS Biology 15 (12): e2002404.
Landrigan, P.J., and R.A. Lemen. 2019. A Most Reckless Proposal—A Plan to Continue Asbestos Use in the United States. Journal of the American Medical Association 381 (7): 598–600.
Lee, H.W., S.H. Park, M.W. Weng, H.T. Wang, W.C. Huang, H. Lepor, X.R. Wu, L.C. Chen, and M.S. Tang. 2018a. E-Cigarette Smoke Damages DNA and Reduces Repair Activity in Mouse Lung, Heart, and Bladder as Well as in Human Lung and Bladder Cells. Proceedings of the National Academy of Sciences of the United States of America 115 (7): E1560–E1569.


Lee, Y.G., X. Garza-Gomez, and R.M. Lee. 2018b. Ultimate Costs of the Disaster: Seven Years After the Deepwater Horizon Oil Spill. Journal of Corporate Accounting and Finance 29 (1): 69–79.
Leonard, E.W. 2009. Right to Experimental Treatment: FDA New Drug Approval, Constitutional Rights, and the Public’s Health. Journal of Law, Medicine and Ethics 37 (2): 269–279.
Leventhal, A.M., R. Miech, J. Barrington-Trimis, L.D. Johnston, P.M. O’Malley, and M.E. Patrick. 2019. Flavors of e-Cigarettes Used by Youths in the United States. Journal of the American Medical Association 322 (21): 2132–2134.
London, A.J., and J. Kimmelman. 2020. Against Pandemic Research Exceptionalism. Science 368 (6490): 476–477.
Marchant, G.E., and D.J. Sylvester. 2006. Transnational Models for Regulation of Nanotechnology. Journal of Law, Medicine and Ethics 34 (4): 714–725.
Mazurek, J.M., G. Syamlal, J.M. Wood, S.A. Hendricks, and A. Weston. 2017. Malignant Mesothelioma Mortality—United States, 1999–2015. Morbidity and Mortality Weekly Report 66: 214–218.
Medicine.net. 1999. Drug Approvals: From Invention to Market, a 12-Year Trip. Available at: https://www.medicinenet.com/script/main/art.asp?articlekey=9877.
Medicines and Healthcare Products Regulatory Agency. 2020. Available at: https://www.gov.uk/government/organisations/medicines-and-healthcare-products-regulatory-agency. Accessed 19 Jan 2021.
Milov, S. 2019. The Cigarette: A Political History. Cambridge, MA: Harvard University Press.
National Academies of Sciences, Engineering, and Medicine. 2017. Integrating Clinical Research into Epidemic Response: The Ebola Experience. Washington, DC: National Academies Press.
National Organization for the Reform of Marijuana Laws. 2020. State Laws. Available at: https://norml.org/laws. Accessed 19 Jan 2021.
National Toxicology Program. 2020. Glyphosate and Glyphosate Formulations. Available at: https://ntp.niehs.nih.gov/whatwestudy/topics/glyphosate/index.html?utm_source=direct&utm_medium=prod&utm_campaign=ntpgolinks&utm_term=glyphosate. Accessed 19 Jan 2021.
Newport, F. 2018. Young People Adopt Vaping as Their Smoking Rate Plummets. Gallup, July 26. Available at: https://news.gallup.com/poll/237818/young-people-adopt-vaping-smoking-rate-plummets.aspx. Accessed 19 Jan 2021.
Oakie, S. 2005. Safety in Numbers: Monitoring Risk in Approved Drugs. New England Journal of Medicine 352 (12): 1173–1176.
Oberdörster, G., E. Oberdörster, and J. Oberdörster. 2005. Nanotoxicology: An Emerging Discipline Evolving from Studies of Ultrafine Particles. Environmental Health Perspectives 113 (7): 823–839.
Perucca, A., and S. Wiebe. 2016. Not all That Glitters Is Gold: A Guide to the Critical Interpretation of Drug Trials in Epilepsy. Epilepsia Open 1 (1–2): 9–21.
Pharmaceutical Research and Manufacturing Association. 2015. Biopharmaceutical Research and Development: The Process Behind the New Medicines. Available at: http://phrma-docs.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf. Accessed 19 Jan 2021.
Pietroiusti, A., H. Stockmann-Juvala, F. Lucaroni, and K. Savolainen. 2018. Nanomaterial Exposure, Toxicity, and Impact on Human Health. Wiley Interdisciplinary Review of Nanomedicine and Nanobiotechnology 10 (5): e1513.
Prakash, S., and V. Valentine. 2007. Timeline: The Rise and Fall of Vioxx. National Public Radio, November 10. Available at: https://www.npr.org/2007/11/10/5470430/timeline-the-rise-and-fall-of-vioxx. Accessed 19 Jan 2021.
Resnik, D.B. 2007a. Beyond Post-Marketing Research and MedWatch: Long-Term Studies of Drug Safety. Drug Design, Development and Therapy 1: 1–5.
Resnik, D.B. 2007b. The Price of Truth: How Money Affects the Norms of Science. New York, NY: Oxford University Press.
Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press.


Resnik, D.B. 2018a. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer.
Resnik, D.B. 2018b. Proportionality in Public Health Regulation: The Case of Dietary Supplements. Food Ethics 2 (1): 1–16.
Resnik, D.B. 2019a. Two Unresolved Issues in Community Engagement for Field Trials of Genetically Modified Mosquitoes. Pathogens and Global Health 113 (5): 238–245.
Resnik, D.B. 2019b. How Should Engineered Nanomaterials Be Regulated for Public and Environmental Health? AMA Journal of Ethics 21 (4): E363–369.
Resnik, D.B. 2019c. Occupational Health and the Built Environment: Ethical Issues. In Oxford Handbook of Public Health Ethics, ed. A.C. Mastroianni, J.P. Kahn, and N.E. Kass, 718–727. New York, NY: Oxford University Press.
Resnik, D.B., and C. Portier. 2005. Pesticide Testing on Human Subjects: Weighing Benefits and Risks. Environmental Health Perspectives 113 (7): 813–817.
Resnik, D.B., D.R. MacDougall, and E.M. Smith. 2018. Ethical Dilemmas in Protecting Susceptible Subpopulations from Environmental Health Risks: Liberty, Utility, Fairness, and Accountability for Reasonableness. American Journal of Bioethics 18 (3): 29–41.
Resnik, D.B., and K.C. Elliott. 2015. Bisphenol A and Risk Management Ethics. Bioethics 29 (3): 182–189.
Robertson, J.A. 2006. Controversial Medical Treatment and the Right to Health Care. Hastings Center Report 36 (6): 15–20.
Rochester, J.R., and A.L. Bolden. 2015. Bisphenol S and F: A Systematic Review and Comparison of the Hormonal Activity of Bisphenol A Substitutes. Environmental Health Perspectives 123 (7): 643–650.
Samuelson, P.A., and W.D. Nordhaus. 2009. Economics, 19th ed. New York: McGraw-Hill.
Savolainen, K., H. Alenius, H. Norppa, L. Pylkkänen, T. Tuomi, and G. Kasper. 2010. Risk Assessment of Engineered Nanomaterials and Nanotechnologies—A Review. Toxicology 269 (2–3): 92–104.
Schüklenk, U., and C. Lowry. 2009. Terminal Illness and Access to Phase 1 Experimental Agents, Surgeries and Devices: Reviewing the Ethical Arguments. British Medical Bulletin 89: 7–22.
Sharpless, N. 2019. How the FDA Is Regulating Electronic Cigarettes (Food and Drug Administration Website). Available at: https://www.fda.gov/news-events/fda-voices-perspectives-fda-leadership-and-experts/how-fda-regulating-e-cigarettes. Accessed 19 Jan 2021.
Shaw, S.D., A. Blum, R. Weber, K. Kannan, D. Rich, D. Lucas, C.P. Koshland, D. Dobraca, S. Hanson, and L.S. Birnbaum. 2010. Halogenated Flame Retardants: Do the Fire Safety Benefits Justify the Risks? Review of Environmental Health 25 (4): 261–305.
Sipp, D. 2018. Challenges in the Regulation of Autologous Stem Cell Interventions in the United States. Perspectives in Biology and Medicine 61 (1): 25–41.
Smith, M.B., C. Kelly, and E.J. Alm. 2014. Policy: How to Regulate Faecal Transplants. Nature 506 (7488): 290–291.
Starr, R.R. 2015. Too Little, Too Late: Ineffective Regulation of Dietary Supplements in the United States. American Journal of Public Health 105 (3): 478–485.
Strom, B. 2006. How the US Drug Safety System Should Be Changed. Journal of the American Medical Association 295 (17): 2072–2075.
Tang, M.S., X.R. Wu, H.W. Lee, Y. Xia, F.M. Deng, A.L. Moreira, L.C. Chen, W.C. Huang, and H. Lepor. 2019. Electronic-Cigarette Smoke Induces Lung Adenocarcinoma and Bladder Urothelial Hyperplasia in Mice. Proceedings of the National Academy of Sciences of the United States of America 116 (43): 21727–21731.
Taylor, A. 2014. Bhopal: The World’s Worst Industrial Disaster, 30 Years Later. The Atlantic Monthly, December 2. Available at: https://www.theatlantic.com/photo/2014/12/bhopal-the-worlds-worst-industrial-disaster-30-years-later/100864/. Accessed 19 Jan 2021.
United States Department of Justice. 2020. Controlled Substance Schedules. Available at: https://www.deadiversion.usdoj.gov/schedules/. Accessed 20 Jan 2021.


Vargesson, N. 2015. Thalidomide-Induced Teratogenesis: History and Mechanisms. Birth Defects Research Part C: Embryo Today 105 (2): 140–156.
Viergever, R.F., G. Karam, A. Reis, and D. Ghersi. 2014. The Quality of Registration of Clinical Trials: Still a Problem. PLoS One 9 (1): e84727.
Vogel, S.A. 2009. The Politics of Plastics: The Making and Unmaking of Bisphenol A “Safety”. American Journal of Public Health 99 (Suppl 3): S559–S566.
vom Saal, F.S., B.T. Akingbemi, S.M. Belcher, L.S. Birnbaum, D.A. Crain, M. Eriksen, F. Farabollini, L.J. Guillette Jr., R. Hauser, J.J. Heindel, S.M. Ho, P.A. Hunt, T. Iguchi, S. Jobling, J. Kanno, R.A. Keri, K.E. Knudsen, H. Laufer, G.A. LeBlanc, M. Marcus, J.A. McLachlan, J.P. Myers, A. Nadal, R.R. Newbold, N. Olea, G.S. Prins, C.A. Richter, B.S. Rubin, C. Sonnenschein, A.M. Soto, C.E. Talsness, J.G. Vandenbergh, L.N. Vandenberg, D.R. Walser-Kuntz, C.S. Watson, W.V. Welshons, Y. Wetherill, and R.T. Zoeller. 2007. Chapel Hill Bisphenol A Expert Panel Consensus Statement: Integration of Mechanisms, Effects in Animals and Potential to Impact Human Health at Current Levels of Exposure. Reproductive Toxicology 24 (2): 131–138.
Wittich, C.M., C.M. Burkle, and W.L. Lanier. 2012. Ten Common Questions (and Their Answers) About Off-Label Drug Use. Mayo Clinic Proceedings 87 (10): 982–990.
Zarin, D.A., T. Tse, R.J. Williams, R.M. Califf, and N.C. Ide. 2011. The ClinicalTrials.gov Results Database—Update and Key Issues. New England Journal of Medicine 364 (9): 852–860.

Chapter 7

Genetic Engineering

In this chapter I will apply the PP to ethical and policy issues related to genetic engineering of microbes, plants, animals, and human beings. I will argue that the PP can provide some useful insights into these issues, due to the scientific and moral uncertainty surrounding the consequences of genetic engineering for public health, the environment, society, and patients. Before I consider these issues, I will provide some background concerning genetics and genetic engineering.1

7.1 DNA, RNA, Genes, and Proteins

Most organisms encode their genetic information in DNA (deoxyribonucleic acid),2 a helically-shaped, self-replicating, double-stranded polymer3 composed of the nucleic acid base pairs adenine (A) and thymine (T), and cytosine (C) and guanine (G), which are attached to a sugar and phosphate backbone (Alberts et al. 2015). See Fig. 7.1. DNA replicates when an enzyme called helicase unwinds the helix and the strands come apart. Another enzyme, DNA polymerase, attaches complementary nucleic acids to each half so that two identical copies are formed (Alberts et al. 2015).4 See Fig. 7.2.

Fig. 7.1 Deoxyribonucleic acid, National Human Genome Research Institute, public domain, https://www.genome.gov/genetics-glossary/Deoxyribonucleic-Acid

1 By “genetic engineering” I mean technologies that involve direct modification or alteration of the genomes of cells or organisms. Changes brought about by genetic engineering might or might not be inheritable, depending on the type of change and the organism. Modification of the genomes of somatic cells in humans (discussed below) does not normally result in inheritable genetic changes, but modification of human germ cells, sperm, eggs, or embryos does (Resnik et al. 1999). Modification of bacterial genomes always results in inheritable genetic changes because bacteria are unicellular organisms. Ooplasm transfer, nuclear transfer, and reproductive cloning in human beings raise important ethical and social issues, but these procedures are not genetic engineering, according to my definition, because their purpose is not to modify genomes, even though they involve the manipulation of genetic material. Synthetic biology uses genetic engineering methods to design cells, organisms, and biological systems that do not already exist in the natural world (Biotechnology Innovation Organization 2020b).
2 Some viruses encode their genetic information in RNA (ribonucleic acid).
3 A polymer is a large molecule.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_7

4 James Watson (1928–) and Francis Crick (1916–2004) won the Nobel Prize in Physiology or Medicine in 1962 for discovering the structure of DNA. Although their model was confirmed by Rosalind Franklin’s x-ray crystallography data, Watson and Crick did not name Franklin as an author on the paper that described their model of the structure of DNA. Franklin (1920–1958) was also not awarded the Nobel Prize for her contribution, because she died of ovarian cancer in 1958, and the Nobel Prize is not awarded posthumously (Maddox 2003).


Fig. 7.2 DNA replication, National Human Genome Research Institute, public domain, https:// www.genome.gov/genetics-glossary/DNA-Replication

Because DNA is a self-replicating molecule, it can function as a mechanism of inheritance between different generations of cells and organisms. When DNA replicates, a copy is passed on to the next generation. Organisms store their DNA in chromosomes, which help protect DNA from damage due to radiation or chemicals from outside the cell. Different species have different numbers of chromosomes. Bacteria, for example, have only one chromosome, fruit flies have four pairs of chromosomes, and human beings have 23 pairs of chromosomes (Ridley 2000). Most of an organism’s DNA is located in the cell nucleus, but eukaryotic organisms also have DNA in their mitochondria, which are organelles that perform metabolic functions for the cell (Alberts et al. 2015).5 The genome of an organism includes all its nuclear and mitochondrial DNA. The human genome consists of approximately three billion nucleic acid base pairs, or nucleotides (Ridley 2000). DNA also provides the information for making proteins, which are polymers of amino acids connected by peptide bonds. Proteins perform a range of functions in the cell and are components of many important biomolecules and cell structures, including chromosomes, enzymes, receptors, transmitters, hormones, antibodies, microtubules, membranes, clotting factors, hemoglobin, actin, myosin, and keratin (Alberts et al. 2015). A gene is a sequence of DNA that includes information for making a protein. The process of making proteins from genetic information is known as gene expression. There are about 21,000 genes in the human genome, representing only about 1% of the genome. The remaining 99% of the genome, known as non-coding DNA, is not well understood. Some of it includes DNA sequences that promote, enhance, or inhibit the expression of genes, as well as sequences from viruses that were incorporated into the genome as the result of infections (Alberts et al. 2015).
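The complementary pairing that makes DNA self-replicating (each strand serves as a template, with A pairing with T and C with G) can be sketched as a toy illustration; the snippet below is an assumption-free restatement of the pairing rule, not bioinformatics software.

```python
# Sketch of complementary base pairing during DNA replication: each
# strand serves as a template, and pairing A with T and C with G
# yields a new partner strand. Toy illustration only.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired complement of a DNA strand."""
    return "".join(PAIR[base] for base in strand)

original = "ATGCCT"          # hypothetical six-base sequence
new_strand = complement(original)
print(new_strand)            # TACGGA

# Complementing the new strand recovers the original sequence, which
# is why each daughter helix carries the same genetic information.
print(complement(new_strand) == original)  # True
```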

5 Because mitochondria have their own DNA, scientists have speculated that mitochondria were at one time independent organisms that became incorporated into primordial, unicellular organisms (Alberts et al. 2015).


Gene expression includes DNA transcription, DNA translation, and protein processing. Sequences of three nucleotides, known as codons, code for twenty different amino acids or signal the cell to stop adding amino acids to the protein chain. For example, the sequence TTT codes for the amino acid phenylalanine, AAA codes for lysine, TGG codes for tryptophan, and TAA is a stop codon (Alberts et al. 2015). During transcription, a complementary copy of a gene is made from m-RNA (messenger-ribonucleic acid). The nucleotides in the m-RNA are the same as those in DNA, except uracil (U) is substituted for thymine. For example, the DNA codon GCT would be copied as the m-RNA codon GCU. The m-RNA splices out portions of the gene, known as introns, that are not expressed as proteins. The portions of the gene that are expressed are called exons (Alberts et al. 2015). During translation, m-RNA exits the cell nucleus and enters the cytoplasm, where it attaches to a ribosome, which moves along the m-RNA sequence stringing together amino acids. t-RNA (transfer-RNA) forms anti-codons from m-RNA codons, which it uses to guide amino acids from the cytoplasm to the ribosome. The process ends when the ribosome reaches a stop codon. Proteins undergo additional processing and modification in the cytoplasm and endoplasmic reticulum after the initial sequence of amino acids is generated. The same gene may provide information for making more than one protein (Alberts et al. 2015). See Fig. 7.3.

Fig. 7.3 DNA transcription and translation, Copyright 2017 by Terese Winslow, U.S. government has certain rights, used with permission, https://www.cancer.gov/publications/dictionaries/cancer-terms/def/translation

Epigenetics also plays an important role in gene expression. Epigenetics refers to modifications of DNA that affect gene expression. For example, adding a methyl group to a gene (also known as methylation) or modifying the histones that wrap around and protect DNA can prevent the gene from being expressed. RNA interference can inhibit gene expression by neutralizing m-RNA molecules (Alberts et al. 2015). DNA also contains sequences, known as promoters, that promote gene expression.
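The simplified picture of transcription and translation described above (m-RNA as a U-for-T copy of the gene’s DNA codons, read three bases at a time until a stop signal) can be sketched in code. The snippet below is an illustrative toy, not bioinformatics software, and its codon table covers only the handful of codons mentioned in the text rather than the full 64-codon genetic code.

```python
# Sketch of the simplified transcription rule described above: the mRNA
# has the same sequence as the gene's DNA codons, with uracil (U) in
# place of thymine (T). The codon table is a tiny illustrative subset
# of the real genetic code.

def transcribe(dna: str) -> str:
    """Copy a DNA sequence into mRNA by substituting U for T."""
    return dna.replace("T", "U")

CODON_TABLE = {
    "UUU": "phenylalanine",  # DNA TTT
    "AAA": "lysine",         # DNA AAA
    "UGG": "tryptophan",     # DNA TGG
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # stop codons
}

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time until a stop codon appears."""
    amino_acids = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "unknown")
        if residue == "STOP":
            break
        amino_acids.append(residue)
    return amino_acids

mrna = transcribe("TTTAAATAA")   # hypothetical nine-base gene fragment
print(mrna)                      # UUUAAAUAA
print(translate(mrna))           # ['phenylalanine', 'lysine']
```

Running the sketch on the nine-base fragment yields a two-residue chain, since the third codon (UAA) signals the ribosome to stop, mirroring the stop-codon behavior described in the text.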

7.2 Genes and Reproduction

As noted above, DNA is passed on to future generations of cells and organisms. In prokaryotic6 organisms (such as bacteria), each daughter cell receives a copy of the genome as a result of a type of cell division known as binary fission. The process begins when DNA replicates and then consolidates into chromosome(s), which then move to different sides of the cell. The cell membrane then begins to cleave, and two distinct cells emerge (Urry et al. 2016). In higher, multicellular organisms, such as flowering plants and mammals, reproduction of cells within the organism occurs by a process known as mitosis. Mitosis begins when DNA replicates inside the nucleus and then consolidates into distinct chromosomes.7 The nuclear membrane begins to break down, and a microtubule structure known as the mitotic spindle attaches to centromeres on the chromosomes and pulls them to opposite sides of the cell. The cell membrane then begins to cleave, and two distinct cells emerge. If the organism or cell is haploid (has only one set of chromosomes), then each daughter cell receives one set of chromosomes. If the cell (or organism) is diploid (has two sets of chromosomes), then each daughter cell receives two sets (Alberts et al. 2015).8 See Fig. 7.4.

For organisms that reproduce sexually, a form of cell division known as meiosis produces haploid gametes (such as sperm and eggs) from diploid germ cells.9 The chromosome pairs segregate independently, so each daughter cell has a 50% chance of receiving either member of a chromosome pair. Also, genes can move between homologous chromosomes through a process known as crossing over. Although crossing over is usually random, some linked genes move together. During sexual reproduction, the gametes fuse together so that the resulting zygote is diploid. In many species, one pair of chromosomes carries genes related to sexual characteristics. In human beings, females normally have two X chromosomes and males have an X and a Y chromosome. Human eggs have a copy of an X chromosome, and sperm have either an X or a Y. The fusion of the sperm and egg during fertilization can produce a female (XX) or male (XY) (Alberts et al. 2015). See Fig. 7.5.

6 Prokaryotes are single-celled organisms with no distinct cell nucleus or organelles.
7 Mitochondria replicate independently of the cell.
8 Most higher life forms, including most plants, mammals, and human beings, are diploid (Alberts et al. 2015).
9 Many species of plants and animals that reproduce sexually can also propagate asexually. Growing a new plant from a cutting is a form of asexual propagation.
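The 50% segregation odds described in this section imply that n chromosome pairs can yield 2**n distinct gametes, which a short simulation illustrates (an illustrative sketch; the chromosome labels are made up):

```python
import itertools
import random

# Toy model of independent assortment in meiosis: for each homologous
# chromosome pair, a gamete receives either the maternal (m) or the
# paternal (p) copy with equal probability.

# Three chromosome pairs, each with a maternal and a paternal copy.
pairs = [("m1", "p1"), ("m2", "p2"), ("m3", "p3")]

def make_gamete(pairs):
    """Pick one chromosome from each homologous pair at random."""
    return tuple(random.choice(pair) for pair in pairs)

# With n pairs there are 2**n equally likely chromosome combinations
# (2**23, or about 8.4 million, for the 23 pairs in a human cell).
combinations = set(itertools.product(*pairs))
print(len(combinations))   # 8 combinations for 3 pairs
print(make_gamete(pairs))  # e.g. ('m1', 'p2', 'p3')
```

Crossing over, which the text describes next, multiplies this variety further by recombining genes within each pair; the sketch above ignores it.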


Fig. 7.4 Mitosis, National Human Genome Research Institute, public domain, https://www.genome.gov/sites/default/files/tg/en/illustration/mitosis.jpg

After the egg is fertilized, the zygote divides and cells in the developing embryo (or blastocyst) begin to differentiate into different types of tissues. The cells at this stage of development are pluripotent stem cells, which means that they can differentiate into all of the tissue layers that give rise to different tissue types (e.g. muscle, epithelial, connective, and nervous tissue in mammals10), which can form organs and organ systems (e.g. brain, heart, lungs, kidneys, skin, and gastrointestinal tract in mammals). Embryonic stem cells also have the potential to form new organisms. When a human embryo splits, the two halves can each become new organisms (i.e. identical twins). Adult stem cells are multipotent, meaning that they can only differentiate into specific cell types. For example, bone marrow stem cells can differentiate into red blood cells, platelets, or immune system cells. Researchers have recently developed methods for inducing adult somatic cells to become pluripotent stem cells, a discovery that has important uses in scientific research in cytology, embryology, development,

10 Plant stem cells can also generate different tissue types.


Fig. 7.5 Meiosis, National Human Genome Research Institute, public domain, https://www.genome.gov/sites/default/files/tg/en/illustration/meiosis.jpg

and toxicology, and clinical applications in organ and tissue engineering, tissue transplantation, and regenerative medicine. See Fig. 7.6. In addition to playing an important role in development and growth, stem cells can renew tissues by replenishing dead cells. For example, bone marrow stem cells continue to supply red blood cells for the entire life of the organism (National Institutes of Health 2020a).


Fig. 7.6 Research and clinical applications of stem cells, Copyright 2008 by Terese Winslow, U.S. government has certain rights, used with permission, https://stemcells.nih.gov/research/promise.htm

7.3 Genotypes and Phenotypes

Long before scientists discovered the structure of DNA, they had theorized that inheritable biological units, known as genes, can produce physiological and behavioral characteristics (or traits). The Austrian monk Gregor Mendel (1822–1884) proposed laws of inheritance based on his experiments with breeding peas. Mendel studied different characteristics of these plants, such as seed shape, pod shape, flower color, and height. Mendel observed that breeding of these plants produced distinct characteristics, rather than a blending of characteristics. He also observed that breeding produced regular ratios of traits in the offspring. For example, if he bred two hybrid plants that produce round seeds, the offspring would have a 3:1 ratio of round to wrinkled seeds. If he bred the wrinkled-seeded plants, all the offspring would be wrinkled. Mendel also did experiments in which he bred plants with two different characteristics, such as tall plants with round seeds and short plants with wrinkled seeds. He observed that these experiments produced ratios of 9:3:3:1 (tall and round, tall and wrinkled, short and round, short and wrinkled). From these and other experiments, Mendel proposed some laws of inheritance, which can be stated as follows (Ridley 2000):

1. Traits are distinct (round vs. wrinkled seeds, tall vs. short, etc.) rather than blended.
2. For each trait, there are factors (or genes) that produce the trait.
3. There are different types of factors (or alleles).
4. Offspring inherit an allele from each parent.
5. Alleles segregate independently in reproduction (discussed earlier).
6. Some alleles are dominant, and others are recessive, such that a trait associated with a recessive allele is produced only when an offspring has two copies of that allele.
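Mendel's 3:1 ratio can be recovered by enumerating the equally likely allele pairings in a cross, as in this Python sketch (an illustration, not from the text; 'R' and 'r' stand for the dominant round-seed and recessive wrinkled-seed alleles):

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Count offspring genotypes from all equally likely allele pairings."""
    return Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

def phenotype(genotype):
    """A trait shows its dominant form if any allele is uppercase."""
    return "dominant" if any(a.isupper() for a in genotype) else "recessive"

# Cross two hybrid round-seeded plants (Rr x Rr).
offspring = cross("Rr", "Rr")
print(offspring)  # Counter({'Rr': 2, 'RR': 1, 'rr': 1})
print(Counter(phenotype(g) for g in offspring.elements()))
# Counter({'dominant': 3, 'recessive': 1}) -- Mendel's 3:1 ratio
```

Because alleles for different traits assort independently, combining two such crosses multiplies the ratios: (3:1) × (3:1) gives the 9:3:3:1 dihybrid ratio Mendel observed.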

To illustrate Mendel’s ideas, consider the genetics of Sickle Cell Anemia (SCA). This disease follows an autosomal recessive pattern, meaning that the allele is recessive, and offspring have the disease only when they are homozygous for the disease allele (i.e. have two copies of the allele). People who are heterozygous for the SCA allele (i.e. have only one copy of the disease allele) or homozygous for the normal allele (i.e. have two copies of the normal allele) do not develop SCA. See Fig. 7.7. The disease occurs because the SCA allele is mutated so that it does not code for a normal hemoglobin protein. Individuals with only one copy of the SCA allele (i.e. carriers) can still make normal hemoglobin, so they do not manifest the disease. People with SCA have anemia, fatigue, infections, delayed growth, and severe pain episodes (National Heart, Lung, and Blood Institute 2020). It is likely that natural selection has not

Fig. 7.7 Sickle cell disease inheritance, Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/ncbddd/sicklecell/traits.html


eliminated the SCA allele from the human gene pool because it gives heterozygotes some type of adaptive advantage, such as resistance to malaria (Resnik et al. 1999). While Mendel’s theory of inheritance was a major advance in our understanding of genetics, we now know that the relationship between genotypes and phenotypes (traits) is much more complex than he envisioned (Ridley 2000; Urry et al. 2016). First, many traits and diseases are polygenic (or multi-factorial), meaning that they are caused by dozens or even hundreds of genes. Second, alleles are not always dominant or recessive. Third, because traits are polygenic and alleles are not always dominant or recessive, breeding often produces blended characteristics rather than distinct ones. Fourth, epigenetics, development, and the external environment play a key role in the genesis of different traits. For example, a boy who has genes that predispose him to grow to six feet tall will not do so if he has a poor diet. When the human genome was sequenced and mapped in the early part of the twenty-first century, many scientists predicted that this achievement would revolutionize medicine by enhancing our understanding of the genetic and molecular basis of disease. However, it may take many decades (or even centuries) to fully identify, describe, and understand the thousands of genomic, epigenomic, developmental, and environmental processes and interactions that lead from genotypes to phenotypes (Shendure et al. 2019).
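The contrast between single-gene and polygenic traits can be illustrated with a short simulation (a toy sketch with assumed, equal-effect loci; real polygenic architectures are far messier):

```python
import random
from collections import Counter

# Toy model: a trait's value is the sum of small, equal contributions
# from n independent loci, each with two alleles (contributing 0 or 1)
# inherited at random. The loci and effect sizes are assumptions made
# for illustration only.

def polygenic_trait(n_genes):
    """Sum the contributions of both alleles at each of n loci."""
    return sum(random.choice((0, 1)) + random.choice((0, 1))
               for _ in range(n_genes))

random.seed(42)
one_gene = Counter(polygenic_trait(1) for _ in range(10_000))
many_genes = Counter(polygenic_trait(50) for _ in range(10_000))
print(sorted(one_gene))  # [0, 1, 2] -- a few distinct classes
print(len(many_genes))   # many distinct values, clustered near 50
```

With one locus the trait falls into a few discrete classes, as in Mendel's peas; with fifty loci the values blend into a continuous-looking distribution, which is the "blending" point made above.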

7.4 Genetic Engineering

For thousands of years, human beings have modified animal and plant species by breeding organisms with desired characteristics. Genes that produced these characteristics were passed on to future generations by a form of artificial selection known as selective breeding (Urry et al. 2016). In the early 1970s, Paul Berg and other researchers discovered how to use biotechnological methods (known as recombinant DNA technology) to directly manipulate the genes of bacteria and other microbes to create genetically modified organisms (GMOs).11 One of the first of these methods was the use of plasmids to insert DNA into bacteria. Plasmids are small strands of DNA that are independent of the chromosome. When bacteria exchange plasmids, the recipient incorporates the DNA from the donor into its genome. For example, an antibiotic resistance gene could be transferred from one bacterium to another. Scientists discovered how to use enzymes to splice DNA sequences into plasmids, so that they could genetically manipulate bacterial genomes (Alberts et al. 2015). An early important application of this technique involved inserting a gene that codes for human insulin into a bacterial genome so that it would produce insulin (Baeshen et al. 2014). See Fig. 7.8. Researchers working in this emerging field quickly realized the scientific and social significance of this new technology and the potential risks to laboratory

11 Paul Berg won a share of the 1980 Nobel Prize in Chemistry for his development of recombinant DNA techniques; Walter Gilbert and Frederick Sanger shared the prize for their work on DNA sequencing (NobelPrize.org 2021).


Fig. 7.8 Genetic modification of bacteria to produce insulin, National Library of Medicine, public domain, https://www.nlm.nih.gov/exhibition/fromdnatobeer/exhibition-interactive/recombinant-DNA/recombinant-dna-technology-alternative.html

workers and the public from accidental releases of GM bacteria. Many people feared that scientists would create a “superbug” that would eradicate the human race. Scientists working with recombinant DNA understood the significance of the risks that their research posed for society, and they agreed to a voluntary moratorium on this research until they could develop methods and procedures for managing these risks. In February 1975, top researchers in molecular biology and microbiology met at Asilomar, CA to consider the issues surrounding recombinant DNA and to develop biosafety standards and best practices. They agreed that they should only work with organisms that cannot survive outside the laboratory and to use DNA vectors that can only grow in specific hosts. They agreed that recombinant DNA research should move forward cautiously. In 1974, the National Institutes of Health (NIH) established the Recombinant DNA Advisory Committee (RAC) to provide guidance and oversight for NIH-funded research. In 1976, the RAC published guidelines and recommendations for conducting recombinant DNA research. Among these was the requirement that institutions establish institutional biosafety committees (IBCs) to review and oversee recombinant DNA research and other potentially hazardous biological experiments (Resnik 2012). In the four decades since the advent of genetic engineering, researchers have made tremendous progress in the field. They have developed other types of vectors for transferring DNA, such as viruses and artificial chromosomes, as well as techniques for injecting DNA directly into somatic cells, germ cells, and embryos. Scientists have also discovered how to delete or replace DNA (Alberts et al. 2015). Scientists were also able to use genetic engineering to produce transgenic plants and animals. See Fig. 7.9. However, scientists still faced two technical problems: inefficiency and inaccuracy (Resnik et al. 1999).
Early methods were inefficient because DNA transferred to a cell might not be incorporated into the genome, so it might take hundreds of attempts to achieve successful results. Early methods were inaccurate because DNA might


Fig. 7.9 Strategies for creating transgenic mice, Tratar et al. (2018), Creative Commons License

insert into a random place in the genome, which could mean that the DNA sequence would not be expressed or could cause unintended mutations or deletions, known as off-target effects, that could have adverse impacts on the organism or subsequent generations. Scientists made progress in dealing with the inaccuracy problem by discovering proteins that bind to and cut specific DNA sequences, which made it possible to target genes for replacement or deletion (Resnik et al. 1999). However, these new methods were still inefficient, expensive, and time consuming, because scientists had to construct specific proteins for each targeted sequence (Alberts et al. 2015; National Human Genome Research Institute 2017). In 2012, Jennifer Doudna, Emmanuelle Charpentier, and their collaborators made a discovery that would transform the genetic engineering field when they realized that they could adapt a DNA-editing technique that bacteria use to defend themselves against invading viruses. The technique is known as clustered regularly interspaced short palindromic repeats (CRISPR) (Alberts et al. 2015; National Human Genome


Research Institute 2017).12 CRISPR is a family of DNA sequences found in bacteria and other prokaryotes. To use CRISPR to delete a gene, researchers create an RNA sequence that matches the part of the genome they want to edit and attach this sequence to a Cas9 (or Cas12) protein to form the CRISPR complex. The Cas9 protein cuts both strands of the DNA at the place that corresponds to the RNA sequence. The DNA then repairs itself at that spot by a process known as non-homologous end joining, and the targeted part of the genome is removed. The technique can also be used to add or replace a gene if donor DNA is incorporated into the CRISPR complex (Hou and Zhang 2019). Although CRISPR is much more efficient and accurate than previously developed gene editing methods, it can still produce off-target effects. Additional research may help to improve the accuracy of this tool. See Fig. 7.10.
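The cut-and-repair logic can be caricatured with string operations (a toy sketch; real guide design, PAM sites, and repair pathways are far more involved, and the sequences below are made up):

```python
# Toy model of CRISPR-style editing: a guide sequence locates its
# matching site in the genome, the site is cut out, and the repair
# either deletes the target or splices in donor DNA.

def crispr_edit(genome, guide, donor=""):
    """Remove the site matching the guide and rejoin the strands,
    optionally splicing in donor DNA in its place."""
    site = genome.find(guide)
    if site == -1:
        return genome  # no matching site, no edit (or an off-target miss)
    return genome[:site] + donor + genome[site + len(guide):]

genome = "ATGGCCTTACGGATCC"
print(crispr_edit(genome, "TTACG"))          # ATGGCCGATCC (deletion)
print(crispr_edit(genome, "TTACG", "AAAA"))  # ATGGCCAAAAGATCC (replacement)
```

The off-target effects mentioned in the text correspond to the guide matching (or partially matching) an unintended site, a failure mode this exact-match sketch cannot exhibit.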

7.5 Applications of Genetic Engineering

Genetic engineering has had many important applications in the biomedical sciences, pharmaceutical manufacturing, medicine, and agriculture (Urry et al. 2016). In the biomedical sciences, researchers have applied genetic engineering methods to microbes to gain a better understanding of genomes, chromosomes, gene expression, cell functioning, cell signaling, cell structure, programmed cell death, immunity, and many other basic biological phenomena. Researchers have deleted (knocked out) or added genes to laboratory mice to better understand cellular, physiological, developmental, and pathological processes in mammals; and they have created transgenic mice to serve as models for various human diseases, such as obesity, kidney disease, heart disease, Parkinson’s disease, cystic fibrosis, and cancer (Simmons 2008; Doyle et al. 2012). In pharmaceutical manufacturing, scientists have used genetically modified microbes (e.g. bacteria and yeast) and laboratory animals to manufacture biological substances that cannot be easily synthesized chemically. In addition to insulin, other biologic drugs include human growth hormone, clotting factors, botulinum toxin (Botox™), and monoclonal antibodies, such as Humira™ (adalimumab) and Herceptin™ (trastuzumab) (Lee 2018).13 The annual global market for biologic drugs is estimated to be over $300 billion (The Business Research Company 2019). Millions of people around the world benefit from biologic drugs. In medicine, clinical researchers have used gene editing to replace defective genes in patients with genetic diseases and to modify human immune cells to fight cancer

12 Doudna and Charpentier won the Nobel Prize in Chemistry in 2020 for the discovery of CRISPR gene editing (Ledford and Callaway 2020).
13 Laboratory animals are used to produce monoclonal antibodies. An antigen is introduced into the animal, which produces antibodies in its lymphocyte cells.
These cells are cultured and then antibodies are isolated. Since these antibodies would be rejected by the human immune system, the cells are genetically modified so that they produce antibodies with a human protein component, or humanized antibodies. The genetically modified cells are then cultured and humanized antibodies are isolated for production (GenScript 2020).


Fig. 7.10 Using CRISPR to edit a gene, Costa et al. (2017), Creative Commons License

(Kumar et al. 2016). In a process known as gene therapy, doctors use a vector (such as a virus) to transfer genes to the somatic cells14 of a patient with a disease resulting from a genetic defect. There are two types of gene therapy: in vivo gene therapy, in which a gene is transferred into cells in a patient’s body; and ex vivo gene therapy, in which a gene is transferred into cells outside of the patient’s body. Gene therapy has been used successfully to treat patients with diseases that result from monogenic defects (Collins and Thrasher 2015). For example, ex vivo gene therapy has been used to treat patients with Severe Combined Immunodeficiency Disease (SCID), which occurs when a person has a mutation that prevents them from producing

14 Somatic cells are cells other than the reproductive or germ cells, such as skin, nerve, muscle, liver

or bone marrow cells.


adenosine deaminase, an enzyme which plays an important role in immune system functioning. Researchers have treated this disease by isolating T-cells from patients and transferring a functional copy of the adenosine deaminase gene into the cells. The cells are then grown in culture and infused into the patient (Mamcarz et al. 2019). In a type of gene therapy cancer treatment known as chimeric antigen receptor (CAR) T-cell therapy, doctors remove T-cells from the patient’s body and transfer a gene to them so that they express a CAR, which binds to an antigen on the patient’s cancer cells. The modified T-cells are then infused into the patient, where they bind to and kill cancer cells (Miliotou and Papadopoulou 2018). See Fig. 7.11. Researchers have also genetically modified pigs to be immunologically compatible with human beings so that they can serve as a source of organs and tissues for transplantation (Hryhorowicz et al. 2017). Although this type of xenotransplantation is still under development, if it is perfected it could help reduce the shortage of organs and tissues needed for donation. In 2018, the first genetically engineered children were born in China. He Jiankui, who was then a biophysics professor at the Southern University of Science and Technology in Shenzhen, China, announced the birth of gene-edited, twin baby

Fig. 7.11 CAR T cell therapy, Copyright 2017 by Terese Winslow, U.S. government has certain rights, used with permission, https://www.cancer.gov/publications/dictionaries/cancer-terms/def/car-t-cell-therapy


girls through an Associated Press interview and several YouTube videos in November 2018. He had conducted his research in secret but decided to reveal it to the public after a story published in the MIT Technology Review exposed his work. In his public announcements, He said that he used CRISPR technology to modify a gene that codes for a receptor on the surface of white blood cells that the human immunodeficiency virus (HIV) uses to infect the immune system. The babies were conceived in vitro, and gene editing was performed shortly after the embryos were formed. The goal of the genetic modification, according to He, was to provide the girls with immunity to HIV, which is a highly stigmatized disease in China. He had previously worked with researchers who had successfully edited the same gene in monkeys, but they had no idea that he would apply their technique to humans (Cohen 2019a). Following this stunning announcement, researchers and bioethicists around the world condemned He’s experiments as unethical, and China tightened its rules on human genome editing (Normile 2018, 2019; Wang and Yang 2019). The Southern University of Science and Technology terminated He’s employment shortly after the announcement. On December 30, 2019, a Chinese court convicted He of illegal medical practice and sentenced him to three years in prison (Cyranoski 2020). That same day, a Chinese news agency confirmed that a third gene-edited baby had been born as a result of He’s experiments (House 2019). The identities of the gene-edited babies remain a secret to protect their privacy. Scientists have not confirmed whether He’s experiments were successful (Wang and Yang 2019).
Researchers are concerned that the gene editing may produce off-target effects or other mutations that could compromise the health of the girls or subsequent generations, and that their health might be harmed if the receptor is not functioning properly and is needed to fight disease (Normile 2018; Wang and Yang 2019). (The ethical issues pertaining to human genetic engineering will be discussed in more depth below.) In agriculture, scientists have developed many different genetically modified (GM) crops, including soybeans, corn, cotton, sorghum, potatoes, apples, tomatoes, sugar beets, alfalfa, papaya, rice, and squash (GMO Answers 2020a; Zhang et al. 2016). GM crops have been developed that resist diseases, droughts, insects,15 and herbicides.16 Crops have also been developed with enhanced shelf life, nutritional value,17 salt tolerance, growth, and yield (Resnik 2012). Twenty-four countries around the world permit the planting of GM crops, and 43 allow the importation of GM crops. Seventeen million farmers around the world plant GM crops, for a total of 190 million hectares. Since 1996, increased agricultural productivity from GM crops has generated $186 billion. The top GM crop producing countries include the US, Brazil, Argentina, Canada, India, and China. The largest growth of GM crop production has been in developing countries, which now account for 53% of GM crop cultivation worldwide

15 Monsanto has developed GM crops (known as Bt crops) that produce Bacillus thuringiensis toxins, which are deadly to insects. Farmers were already using these toxins as pesticides before Bt crops were developed (Resnik 2012).
16 Monsanto has developed GM crops (known as “Roundup Ready” crops) that are immune to the effects of glyphosate, the active ingredient in the widely-used herbicide Roundup™. Farmers can control weeds without damaging their crops by spraying their crops with Roundup (Resnik 2012).
17 Golden rice, for example, contains more beta carotene than normal rice (McDivitt 2019).


(Conrow 2018). In 2014, GM crops constituted 93% of the soybean acreage planted in the US and 90% of the corn acreage (Fernandez-Cornejo et al. 2014). GM foods are consumed by people and are a major part of animal feed. GM crops, according to many scientists, play an important role in fighting world hunger and malnutrition (Nobel Prize Winners 2016). The human population, which is currently at 7.8 billion people, is expected to grow to 10.9 billion people by 2100 and then level off (Cilluffo and Ruiz 2019). In order to meet the food demands of this growing population without increasing deforestation, agricultural productivity will need to increase by about 30%, and GM crops can help to increase productivity (Zhang et al. 2016). GM crops benefit the environment by reducing the use of pesticides and chemical fertilizers (National Academies of Sciences, Engineering, and Medicine 2016b; Zhang et al. 2016). GM crops have been controversial since they were first introduced in 1996. The public has expressed its concerns about the environmental risks of GM crops and the health risks of “Frankenfoods.” The European Union (EU) banned GM foods and crops from 1998 to 2007 (Resnik 2012). Thirty-eight countries around the world ban the cultivation of GM crops, including 19 members of the EU, and 9 countries ban both the cultivation and importation of GM crops (Genetic Literacy Project 2020). (The ethical issues pertaining to genetic engineering of plants and animals will be discussed in more depth below.) Researchers are also developing GM animals as potential food sources (Biotechnology Innovation Organization 2020b). Researchers have developed GM pigs, cows, and chickens that resist diseases, have enhanced nutritional value, and grow more rapidly than non-GM livestock, and they have developed GM sheep, goats, and cows that produce milk with human proteins (Forabosco et al. 2013). GM salmon developed by AquaBounty Technologies grow twice as fast as normal salmon.
The FDA approved AquaBounty’s salmon in 2015, making it the first GMO animal to be approved for human consumption (Forabosco et al. 2013; GMO Answers 2020b). The salmon has been sold in Canada but has not yet been sold in the US. So far, consumers have not shown a great deal of interest in GM meat, but that could change if GM meats taste the same as non-GM meats and have enhanced nutritional value (Cossins 2015; Geib 2018). In the public health arena, scientists have genetically engineered mosquitoes to help control dengue and malaria, two serious diseases that take an enormous toll on human health and life, especially in the developing world.18 Scientists working for Oxitec have introduced a mutation into male mosquitoes from the species Aedes aegypti (which carries the dengue virus) that causes offspring to die prematurely unless they receive the antibiotic tetracycline. Field trials of these mosquitoes in Brazil have reduced Aedes aegypti populations by 80 to 95% and dengue fever cases by more than 90% (Resnik 2018b). In 2016, Oxitec proposed field trials of its GM mosquitoes in Key Haven, Florida. Although the FDA, the state of Florida, and the

18 In 2018, 228 million people worldwide contracted malaria and 405,000 people died from the disease (World Health Organization 2020a). About 390 million people contract the dengue virus each year and about 4000 die from the disease (World Health Organization 2020b).


Fig. 7.12 Gene editing with gene drive, GM Watch (2019), Creative Commons License

local mosquito control board approved the field trials, Key Haven voters rejected them, so Oxitec is looking for other US sites (Resnik 2019b).19 Scientists have also genetically engineered mosquitoes from the Anopheles genus (which carries malaria) so that they do not transmit malaria to humans. The gene that interferes with malaria transmission is attached to a gene drive mechanism. Gene drives are naturally occurring DNA sequences that bias Mendelian inheritance. A gene linked to a gene drive mechanism could have a 90% chance of being inherited and become highly prevalent in a population in only a few generations (National Academies of Sciences, Engineering, and Medicine 2016a). This method for combatting malaria has been tested in the laboratory but not in the field (Resnik 2019a). See Fig. 7.12. Other noteworthy developments in genetic engineering include GM bacteria designed to effectively treat wastewater (Treatment Solutions 2017), GM yeast that can produce ethanol (Biofuels International 2018), GM plants that can be efficiently converted into biofuels (Genetic Literacy Project 2018), GM trees that resist herbivores better than non-GM trees, and thereby grow more effectively (Hjältén and Axelsson 2015), GM grasses designed to grow and spread quickly and resist herbicides (Main 2017), and GM aquarium fish that glow in the dark (Food and Drug Administration 2020a).
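The speed at which a drive allele can spread can be sketched with a simple recursion (an illustration using the 90% transmission figure mentioned above; it assumes random mating and ignores fitness costs and other complications of real population genetics):

```python
# Deterministic sketch of gene drive spread: under random mating,
# homozygous carriers always transmit the drive allele, and
# heterozygotes transmit it with probability `bias` (0.5 would be
# ordinary Mendelian inheritance).

def next_freq(p, bias=0.9):
    """Drive-allele frequency in the next generation: contributions
    from homozygous (p*p) and heterozygous (2*p*(1-p)) parents."""
    return p * p + 2 * p * (1 - p) * bias

p = 0.01  # the drive allele starts out rare
for generation in range(1, 11):
    p = next_freq(p)
    print(generation, round(p, 3))
# With bias=0.5 the frequency would stay at 1% indefinitely; with a
# 90% bias it exceeds 90% within ten generations.
```

This is the sense in which a gene drive can make a linked gene "highly prevalent in a population in only a few generations."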

7.6 Regulation of Genetic Engineering

Most countries have a patchwork of different laws and regulations that apply to different types of genetic engineering (Resnik 2012; Kuzma 2016). In the US,

19 Oxitec has also genetically engineered diamondback moths (Plutella xylostella) to control their populations. Diamondback moths are destructive pests that feed on cauliflower, cabbage, broccoli, and canola (Campbell 2020a).


biologic drugs, cells (e.g. cells used in gene therapy), tissues (e.g. tissues for transplantation), and other medical treatments produced by genetic engineering are regulated by the FDA. Manufacturers must submit safety and efficacy data to the FDA to obtain marketing approval (see discussion of the FDA approval process in Chapter 6). GM crops are regulated by the EPA, FDA, and US Department of Agriculture (USDA). GM crops designed to produce an insecticide as a form of protection are regulated by the EPA under pesticide laws.20 Crops that are designed to produce food for human or animal consumption are regulated by the FDA under food protection laws. If the FDA determines that a GM food is substantially equivalent to a non-GM food, then it considers the food to be safe, and the manufacturer can market it without submitting safety data to the FDA (Resnik 2012). What makes a GM food “substantially equivalent” to a non-GM food is subject to interpretation. Thus far, the FDA has made substantial equivalence determinations based on nutritional content and safety (such as the potential to trigger allergies) and has found that all GM foods submitted to the agency are substantially equivalent to non-GM foods (Porterfield and Entine 2018). The USDA regulates GM crops (or other organisms) that are plant pests or potential plant pests (United States Department of Agriculture 2020). The FDA also regulates genetically modified meat and milk for human consumption under food protection laws. The only GM meat the FDA has approved thus far is AquaBounty’s salmon. The FDA also applies the substantial equivalence test to GM meat (Food and Drug Administration 2020a). Regulation of GM meat or milk may soon switch from the FDA to the USDA (Geib 2018). GM mosquitoes have posed a challenge for US regulators. The FDA initially claimed regulatory authority over Oxitec’s mosquitoes, because it determined they were a medical product designed to prevent a disease (i.e.
dengue), but it later withdrew this determination when it decided the mosquitoes fell under the EPA’s authority to regulate pesticides, because the mosquitoes are designed to control mosquito populations. The FDA will regulate mosquitoes that are designed to prevent diseases, such as malaria-resistant mosquitoes (Food and Drug Administration 2020b). Concerning human genetic engineering, as noted above, the FDA regulates human gene therapy. The FDA has also asserted regulatory authority over human cloning, somatic cell nuclear transfer, ooplasm transfer, and genome modification (or editing) intended to produce a child (Food and Drug Administration 2020c). Thus far, the FDA has not approved any of these procedures. It remains to be seen what the FDA would do if someone were to propose or conduct genome editing to produce a child in the US as a form of therapy. It is also worth noting that institutional review boards oversee clinical trials involving human genetic engineering, and medical boards oversee medical treatments involving human genetic engineering. IBCs (discussed above) also provide some form of oversight over human genetic engineering experiments. Some GMOs may evade US regulation. For example, microbes that produce biofuels or other consumer products not intended as medications or foods; plants that do not produce pesticides, are not plant pests, and are not intended for human consumption (such as some types of cotton, grass, and trees); and animals that are

20 E.g.

Bt crops. See Footnote 12.


7 Genetic Engineering

not intended as sources of food, tissues, organs, or biologic drugs (e.g. pets) may not be regulated under current US laws. Concerning non-US regulations, some countries, like the US, have a patchwork of laws, while others have comprehensive laws that cover all forms of genetic engineering (Kuzma 2016). EU countries follow the EU’s policies on GMOs, which include requirements for dealing with environmental and public health risk assessment and labelling and traceability of GM products (European Commission 2020). Also, as noted above, many countries ban the cultivation or importation of GM crops. With regard to human genetic engineering, most countries regulate gene therapy under their medical product laws. The UK regulates all assisted reproductive technologies, including reproductive cloning, nuclear transfer, ooplasm transfer, and genome editing, through the Human Fertilisation and Embryology Authority (HFEA). The HFEA’s regulatory authority is broader than the FDA’s because it has jurisdiction over fertility clinics and many different procedures designed to assist human reproduction, including in vitro fertilization and sperm and egg donation (Human Fertilisation and Embryology Authority 2020). Dozens of countries have banned human reproductive cloning and human reproductive genome editing (United Nations Educational, Scientific, and Cultural Organization 2020). Beyond national laws and regulations, there are also some international treaties pertaining to genetic engineering. The Cartagena Protocol on Biosafety (quoted in Chapter 4), which is part of the Convention on Biological Diversity (CBD), is an international agreement for the safe handling, transportation, and use of GMOs. While the Cartagena Protocol establishes a precautionary risk management framework for GMOs (see Chapter 4), it does not ban or restrict GMOs. A total of 172 countries have signed the Cartagena Protocol; the US and Russia, notably, have not (Convention on Biological Diversity 2020). 
The CBD addresses topics related to GMOs, such as the conservation and sustainability of biodiversity and the equitable sharing of the benefits of genetic resources. 196 countries, including the US and Russia, have signed the CBD (Convention on Biological Diversity 2020). The Biological Weapons Convention (BWC), signed by 197 countries, bans the development and stockpiling of biological agents, toxins, and delivery systems used for non-peaceful purposes (Arms Control Association 2018). The use of genetic engineering to develop biological weapons would violate the BWC. (I will discuss biological weapons in more depth in Chapter 8.)

7.7 Two Overarching Objections to Genetic Engineering

Before considering how the PP can be applied to issues related to the genetic engineering of microbes, plants, animals, and humans, I will consider—and dismiss—two overarching objections to genetic engineering that have had considerable influence. It is important to examine these objections at the outset of our discussion because, if these objections are convincing, assessment of genetic engineering based on a weighing of risks and benefits would be morally irrelevant, because genetic engineering would be immoral as a matter of principle.

The “playing God” objection. People often articulate vaguely formulated objections to GMOs by accusing genetic engineers of “playing God” (Boone 1988; Reiss and Straughan 1996). Some people frame this objection from a religious viewpoint and claim that genetic engineers are superseding God’s creative authority. This version of the “playing God” objection rests on the premise that God designed and created the universe with a plan or purpose in mind. Because man’s place in the universe is to take care of God’s creation but not to change it, we do not have the right to change living things by means of genetic engineering or to “play God” with genetic technology. There are two problems with this type of argument. First, it assumes a religious viewpoint that many people may not accept. Second, even if one accepts the idea that God designed and created the universe, one might not agree that human beings have no right to change the living world. Many religions acknowledge that human beings have been altering the living world for thousands of years and view human beings as co-creators with God (Cole-Turner 1997). We can and should change the world for good purposes, such as preventing or treating disease, overcoming hunger and malnutrition, relieving pain and suffering, and so on. Genetic engineering, according to many religious traditions, is morally acceptable, provided that it is done wisely and responsibly (Reiss and Straughan 1996; Cole-Turner 1997; Mitchell et al. 2007). Others frame their objections to GMOs in secular terms. According to one version of the secular “playing God” objection, it is immoral to change living things because they have an inherent value, dignity, or integrity that should not be altered. 
Some have applied this argument to genetic engineering of plants and animals (see Rollin 1995), while others have applied it to human beings (President’s Council on Bioethics 2002; Annas et al. 2002; Lanphier et al. 2015). In response to this objection, one might argue that some things that occur in the natural world, such as disease, malnutrition, and starvation, are not inherently good, and that we are therefore justified in changing the natural world to counteract these things and promote human well-being.21 For example, osteogenesis imperfecta is a genetic disease caused by mutations in genes that code for collagen, an essential component of bones and tissues. People with this disease have brittle bones, short stature, hearing loss, and loose joints (Kids Health 2018). Most people would agree that there is nothing inherently good about being born with this disease and that doctors and scientists would be justified in taking steps to prevent or treat it, including by means of genetic engineering. Concerning human dignity, one might argue that our dignity resides in our cognitive and emotive capabilities, such as sentience, intelligence, language, reasoning, and moral judgement, rather than in our genomic characteristics (Resnik 2001, 2007; National Academies of Sciences, Engineering, and Medicine 2017; Beriain 2018). A slightly different version of the secular “playing God” objection holds that we should not change living things because we are profoundly ignorant about how they work. Cells, organisms, species, habitats, and ecosystems have a natural order and stability resulting from millions of years of evolution. Our genetic engineering experiments will disrupt the natural order of things and inevitably have adverse outcomes that we cannot predict or control. Attempting to genetically engineer organisms, despite our ignorance, would be a form of hubris akin to “playing God.” Therefore, we should leave living things as they are and refrain from genetic engineering (Rifkin 1983). The main problem with this objection is that it assumes a level of ignorance that no longer exists. Since the discovery of the structure of DNA in 1953, scientists have learned a great deal about the molecular and genetic basis of life. While I agree that we need to be keenly aware of our ignorance when making decisions related to altering the living world, we have enough knowledge and understanding of life to move forward, cautiously, with genetic engineering (National Academies of Sciences, Engineering, and Medicine 2017). As I shall argue below in more depth, we must always be mindful of our ignorance as we take reasonable precautions to address the risks of genetic engineering.

The slippery slope objection. The slippery slope objection holds that we should not perform genetic engineering at all, or should not perform certain types of genetic engineering (such as human genetic engineering22), because this will lead us down a slippery slope toward dangerous and immoral forms of genetic engineering (Resnik 1993; Walton 2017). The slope leads from benign and/or useful applications of genetic engineering, such as modifying bacteria to produce biologic drugs or modifying human immune cells to fight cancer, to more dangerous and morally repugnant forms of genetic engineering, such as GM “superbugs,” invasive GM plants, and GM “supersoldiers.” Once we open the Pandora’s box of genetic engineering, we will be unable to prevent misuses of this technology, and terrible things will inevitably happen.

21 These are the sorts of problems encountered by the natural law approaches to morality, discussed in Chapter 3.
While there are different interpretations of the slippery slope objection, it is best understood as an empirical, consequentialist argument (Resnik 1993). The slippery slope objection is essentially claiming that if we develop a technology (such as genetic engineering), then very bad things are likely to happen. Proponents of this argument usually claim that these bad things will happen because we will be unable to control the technology once it is developed. As we continue to use the technology, we will continue to lose control over it, and we will use it inappropriately. In some ways the slippery slope objection is similar to the maximin decision rule discussed in Chapter 2, because it is advising us to avoid the worst possible outcomes. However, the argument is different from maximin because it is claiming not just that these bad things could happen, but that they are likely to happen. The most straightforward critique of the slippery slope objection is to reject the assumption that terrible things are likely to happen as a result of developing genetic engineering technology, because we can implement laws, regulations, and policies to control genetic engineering (Resnik 1993). We can prevent global pandemics caused by GMOs by implementing biosafety measures to prevent accidental releases of GM microbes or control releases when they happen; we can prevent GMOs from destroying the natural world by controlling the introduction of GM plants and animals into the environment; and we can prevent the creation of “supersoldiers,” a sub-human species, or a genetic caste system by regulating the use of human genome editing. Even though the slippery slope argument is not a compelling reason to refrain from genetic engineering (in general or of particular types), it plays a useful role in the discussion of this technology by calling our attention to some risks that we should try to avoid or minimize. We will consider these risks when we apply the PP to genetic engineering issues.

22 Most defenders of the slippery slope argument in genetics only apply it to using genome editing in humans, but it could be applied to other applications of genetic engineering.

7.8 Applying the Precautionary Principle to Genetic Engineering

Having examined two overarching objections to genetic engineering, I will now consider how the PP can be applied to different types of genetic engineering. At the outset, I will observe that decision-making related to genetic engineering is a paradigm case for using the PP, due to the scientific uncertainty related to the consequences of genetic engineering and the moral uncertainty concerning those consequences (Tait 2001; Resnik 2012; Kelle 2013; Wareham and Nardini 2015; Kaebnick et al. 2016; Koplin et al. 2020). As we shall see below, these uncertainties arise in many different types of genetic engineering.

7.9 Genetic Engineering of Microbes

As noted earlier in this chapter, GM microbes have already produced important benefits for science and society and are expected to produce many more. To apply the PP to decisions concerning the genetic engineering of microbes, we need to consider the options for risk management and the conditions for reasonableness of risks. The main risks of genetically engineering microbes are (1) accidental release of microbes from the laboratory (biosafety risks); and (2) development of biological weapons (biosecurity risks).23 I will consider the biosecurity risks in more depth in Chapter 8. These risks, though real, are probably limited to certain types of dangerous microbiology experiments (National Research Council 2004). The biosafety risks related to GM microbes are well understood as a result of over four decades of recombinant DNA research (Kimman et al. 2008; Coelho and García 2015). Scientists have developed methods for containing microbes and protecting workers and the public, including protective clothing and equipment,

23 I am assuming that GM microbes will not be intentionally released into the environment, which would create risks not discussed here. Scientists have developed GM microbes to clean up oil spills but have not deployed them yet, mostly due to regulatory issues. In nature, microbes already play an important role in cleaning up oil spills (Ezezika and Singer 2010).


safe and secure facilities, biosafety procedures and protocols, and hazard assessment. Biosafety experts have developed four classifications of biosafety laboratories (BSLs), ranging from BSL level 1 to BSL level 4, which implement these different methods (Centers for Disease Control and Prevention and National Institutes of Health 2009). See Table 7.1. Accidental contamination from GM microbes poses risks to laboratory workers as well as members of the community if infected workers spread pathogens to people outside the laboratory (Coelho and García 2015). It is difficult to estimate the risks of contracting a laboratory-acquired infection (LAI), due to lack of data, nonuniform reporting standards, and variations in laboratory biosafety conditions and practices (Kimman et al. 2008).

Table 7.1 Biosafety levels, based on information available at Public Health Emergency (2015)

Biosafety Level 1
Description of agents: Infectious agents or toxins not known to consistently cause disease in healthy adults.
Example: Escherichia coli.
Safety requirements (sample): Standard microbiological safety practices; surfaces must be easily cleaned and can withstand basic chemicals.

Biosafety Level 2
Description of agents: Moderate-risk infectious agents or toxins that pose a risk of infection if accidentally inhaled, swallowed, or exposed to skin.
Example: Influenza virus, HIV, Lyme disease.
Safety requirements (sample): Hand-washing sinks; eye-washing stations; access to equipment that can decontaminate laboratory waste, including an incinerator, an autoclave, and/or another method.

Biosafety Level 3
Description of agents: Infectious agents or toxins that may be transmitted through the air and cause potentially lethal infection through inhalation exposure.
Example: Tuberculosis.
Safety requirements (sample): Experiments performed in biosafety cabinets; laboratories are designed to be easily decontaminated and must use controlled air flow, sealed windows and wall surfaces, and filtered ventilation systems.

Biosafety Level 4
Description of agents: Infectious agents or toxins that pose a high risk of aerosol-transmitted laboratory infections and life-threatening disease for which no vaccine or therapy is available.
Example: Ebola virus.
Safety requirements (sample): Laboratories are located in safe, isolated zones within a larger building or are housed in a separate, dedicated building; work may be done in a biosafety cabinet, or laboratory personnel wear full-body, air-supplied suits and go through a series of procedures designed to fully decontaminate them before leaving the building.

Estimates of the LAI rate, based on data from US and European labs, range from 0.000057 per person per year (Henkel et al. 2012) to 0.001 per person per year (United States Department of Homeland Security 2008). However, we do not have any estimates of the LAI rate for research related to GMOs, since there are only a few reported cases of LAIs related to GMOs (Kimman et al. 2008). Since genetic engineering tends to be conducted in very safe labs (BSL 3 or 4), the rate of LAIs related to genetic engineering is probably much lower than the overall LAI rate, which includes infections due to many different pathogens in many different laboratories (e.g. clinical, academic, commercial). Although LAI rates appear to be declining as a result of improvements in biosafety (Kimman et al. 2008), there are concerns about biosafety lapses in some countries. For example, in March 2004, eight people developed LAIs and one person died as a result of biosafety lapses in a Chinese laboratory conducting research on the severe acute respiratory syndrome (SARS) virus (Normile 2004). Estimates of the risks of transmission of an LAI from the laboratory to the community (or onward transmission) are based not on data but on mathematical modeling. Merler et al. (2013), for example, developed a mathematical model to estimate the risk of onward transmission. The risk ranges from 5 to 15%, depending on the pathogen’s R0 (or reproduction rate)24 and the probability that the pathogen will produce clinical symptoms. Very often people who are infected with pathogens experience no ill effects, so they do not take adequate precautions to prevent infecting others. In the COVID-19 pandemic, for example, the majority of people who tested positive were asymptomatic (Nogrady 2020). 
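The published figures above can be combined in a rough back-of-the-envelope sketch. The workforce size below is purely hypothetical (it does not come from the book or its sources), and the constant-rate model ignores differences between labs; the point is only to show how widely expected outcomes diverge across the published rate estimates.

```python
# Illustrative sketch: expected laboratory-acquired infections (LAIs) per year
# under the two published per-person-per-year rate estimates cited in the text.
# The workforce size is a hypothetical assumption, not a figure from the book.

LOW_RATE = 0.000057   # Henkel et al. (2012) estimate, per person per year
HIGH_RATE = 0.001     # US Department of Homeland Security (2008) estimate

def expected_lais(workers: int, rate: float) -> float:
    """Expected LAIs per year under a simple constant-rate model."""
    return workers * rate

def expected_onward_events(lais: float, p_onward: float) -> float:
    """Expected LAIs that spread beyond the lab, given an onward-transmission
    probability (Merler et al. 2013 estimate 5-15%, depending on R0 and the
    probability of clinical symptoms)."""
    return lais * p_onward

workers = 10_000  # hypothetical national biosafety-lab workforce (assumption)
for rate in (LOW_RATE, HIGH_RATE):
    lais = expected_lais(workers, rate)
    onward = expected_onward_events(lais, 0.15)  # pessimistic end of 5-15%
    print(f"rate={rate}: expected LAIs/yr={lais:.2f}, onward events/yr={onward:.3f}")
```

Because the two published rates differ by a factor of roughly 18, the expected annual infections for the same hypothetical workforce range from under one to ten, which helps explain why precise probability estimates are unavailable.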
Although there is some evidence related to the risk of biocontamination from genetic engineering of microbes, there is not enough to make accurate and precise probability estimates of the consequences of different policy options. Given this level of scientific uncertainty, it would not be reasonable to use expected utility theory (EUT) for decision-making, and a precautionary approach is warranted. To apply the PP to genetic engineering of microbes, we should first consider the three basic options: risk avoidance, risk minimization, and risk mitigation. To avoid the risks of the genetic engineering of microbes, we would need to completely ban this technology. This would seem to be an unreasonable and draconian option at this point in time,25 given the tremendous potential benefits of GM microbes for science and society and our demonstrated ability to control the risks. Banning GM microbes would therefore violate the proportionality condition of the PP. A ban would also be an unrealistic option, since people have been genetically engineering microbes for four decades and probably will continue to do so, even if this technology is outlawed.

24 The reproduction rate is how many people an infected person infects on average: R0 = 1 means that an infected person infects one more person on average; R0 = 2 means an infected person infects two people on average.
25 It is worth noting, however, that a voluntary moratorium was a reasonable option when this technology was emerging in the 1970s.


A black market for GM microbes and the products of GM microbes could emerge if this technology is banned.26 Assuming that a ban would violate the proportionality condition, we can consider how the other conditions would apply to risk minimization and mitigation. Concerning the fairness condition, most of the biosafety risks of GM microbes fall on laboratory workers, rather than the public at large. Although the public derives great benefits from GM microbes, one could argue that it is fair for workers to bear a greater share of the risks because they benefit economically from their employment and they have chosen to work at a job that exposes them to these types of risks (Resnik 2012). Workers should have meaningful input into the formulation and implementation of policies and procedures designed to protect them from risks. People living near BSL laboratories should also have meaningful input into these policies and procedures because they bear a greater risk from accidental contamination than other members of the public (Moritz 2020). Concerning the consistency condition, it would be inconsistent to ban the genetic engineering of microbes while allowing riskier research that poses biosafety risks, such as research on dangerous pathogens and toxins, to move forward. The most consistent approach is to manage laboratory research according to the degree of risk, as exemplified by the BSL classifications. Concerning epistemic responsibility, this condition would support continuing research on biosafety containment methods, procedures, protocols, equipment, and facilities. Biosafety measures used in genetic engineering should reflect the most up-to-date research on how to protect laboratory workers and the public. 
Taking the foregoing points into account, the most reasonable approach to dealing with the risks of the genetic engineering of microbes is to minimize and mitigate the risks by enacting scientifically informed laws, regulations, and policies, and by implementing rigorous oversight of scientists, staff, pathogens, and laboratories (Reiss and Straughan 1996). As noted above, genetic engineering researchers and government officials began to develop this type of approach in the mid-1970s and have been refining and improving it ever since. While the genetic engineering of microbes is not perfectly safe, it seems that the risks are well managed at this point.
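The earlier claim that EUT falters without precise probabilities can be illustrated with a toy decision problem. All of the utilities and probabilities below are hypothetical numbers chosen for illustration (they do not come from the book or its sources); the point is that when the accident probability is only known to lie somewhere in a range, the EUT ranking of options can flip within that range, so EUT by itself cannot tell us what to do.

```python
# Toy illustration of why interval-valued probabilities undermine expected
# utility theory (EUT). Two hypothetical policy options with stipulated
# utilities; the probability of a serious accident is only known to lie
# somewhere between 0.001 and 0.05 (all numbers are assumptions).

def expected_utility(p_accident: float, u_accident: float, u_ok: float) -> float:
    """Standard expected utility of an option with two possible outcomes."""
    return p_accident * u_accident + (1 - p_accident) * u_ok

# Option A: permit research with standard oversight (hypothetical utilities).
U_A = dict(u_accident=-1000.0, u_ok=100.0)
# Option B: permit research with costly extra containment; the containment
# blunts the worst case but reduces the benefit (hypothetical utilities).
U_B = dict(u_accident=-200.0, u_ok=60.0)

for p in (0.001, 0.05):  # endpoints of the assumed probability interval
    eu_a = expected_utility(p, **U_A)
    eu_b = expected_utility(p, **U_B)
    best = "A" if eu_a > eu_b else "B"
    print(f"p={p}: EU(A)={eu_a:.1f}, EU(B)={eu_b:.1f} -> prefer {best}")
```

With these stipulated numbers, option A maximizes expected utility at the low end of the interval and option B maximizes it at the high end, so the EUT recommendation depends entirely on a probability we cannot estimate. Precautionary conditions such as proportionality and consistency do not require that missing number.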

7.10 Genetic Engineering of Plants

GM plants have produced many important benefits for science and society and are expected to produce many more. To apply the PP to decisions concerning the genetic engineering of plants, we need to consider the options for risk management and the conditions for reasonableness of risks. The main risks of plant genetic engineering are: (1) risks to public health due to consuming GM foods; (2) risks to the environment

26 As noted in Chapter 6, a black market for alcohol emerged during the Prohibition era in the US (1919–1933). The desire to avoid creating a black market for a product is relevant to regulatory actions that involve prohibitions.


from invasive GM plants or transfer of genes from GM plants to other plants; and (3) social and cultural risks related to the transformation of agriculture and food production (Resnik 2012). Let’s consider the health risks first. As discussed earlier, there has been considerable public opposition to GM crops and foods, especially in Europe (Resnik 2012; Lucht 2015; Cornish 2018). One of the main reasons why people oppose GM foods is that they are concerned about the health risks of these products (Fagan et al. 2014; Lucht 2015; Pew Research Center 2016; Cornish 2018). Most scientists, however, regard GM foods as being as safe as non-GM foods (Lucht 2015; National Academies of Sciences, Engineering, and Medicine 2016b; Cornish 2018). On June 29, 2016, 109 Nobel Prize-winning scientists signed a petition to leaders of the environmental group Greenpeace,27 the United Nations, and governments around the world urging them to stop their opposition to GM crops and foods (Nobel Prize Winners 2016).28 The National Academies of Sciences, Engineering, and Medicine (NASEM) (2016b) recently reviewed evidence from thousands of studies on the health effects of GM foods and concluded that GM foods are just as safe as non-GM foods. The NASEM based its findings on evidence from three types of research: (1) short-term, controlled experiments on the effects of feeding GM foods to laboratory animals; (2) long-term studies of the effects of feeding GM foods to livestock; and (3) long-term, epidemiological studies of the effects of consuming GM foods on human health (National Academies of Sciences, Engineering, and Medicine 2016b).29 Despite substantial evidence supporting the safety of GM foods, it is important to note that there are some shortcomings and gaps in the scientific literature (Domingo 2016). 
First, controlled laboratory animal experiments typically last only 90 days, while human exposures to GM foods may occur over decades.30 Thus, these animal experiments may not accurately model human exposures and risks (Resnik 2012). Second, long-term studies of the effects of eating GM foods on human health, such as the epidemiological studies mentioned above, are not controlled experiments, so the data produced by these studies may be affected by confounding variables that researchers have not adequately controlled for, such as demographic, lifestyle, and genetic factors. Third, some GM foods may contain chemicals, such as soy or tree nut proteins, that trigger allergies in some people (Zhang et al. 2016). Although the

27 As a side note, members of Greenpeace broke into a research farm in Australia in 2011 and destroyed an entire crop of GM wheat. Members of another environmental group damaged a crop of golden rice in the Philippines (Zhang et al. 2016).
28 To date, 156 Nobelists have signed the petition (Nobel Prize Winners 2016).
29 For a review of the GM food safety literature, also see Domingo (2016).
30 It is worth noting that long-term animal studies pose some scientific and technical challenges because most of the rodent species used in these types of experiments have a lifespan of about three years and normally develop tumors and other health problems as they age. So, it can be difficult to determine whether an adverse effect in a laboratory animal is due to an exposure to a GM food or the natural aging process. A two-year study published by Séralini et al. (2012) claiming that rats fed a diet of Roundup Ready GM corn had more tumors than rats fed the normal diet (the control group) was later retracted by the journal due to serious methodological flaws that undermined the validity of the data (Resnik 2015a).


evidence produced so far indicates that GM foods are not more allergenic than non-GM foods, more research is warranted (Dunn et al. 2017). Fourth, not all GM foods have been studied. Most of the research has focused on GM soybeans, corn, and wheat (Domingo 2016). Fifth, about half of the studies of the safety of GM foods have been sponsored by industry, so the published literature may reflect an industry bias (Guillemaud et al. 2016). Indeed, one survey found that industry-sponsored studies were 50% more likely to report results favorable to GM crops than independent studies (Guillemaud et al. 2016). However, there are also hundreds of independently funded studies that support the safety of GM foods (Norero 2016). In sum, while the evidence for the safety of GM foods is compelling, more research is needed (Zhang et al. 2016). The environmental risks of GM crops include: (1) the risk of GM crops transferring genes to other species by cross-fertilization or by a process (known as horizontal gene transfer) in which viruses or other pathogens transmit genes between species; (2) the risk of GM crops becoming invasive plants that threaten other species and disrupt habitats and ecosystems; (3) the risks that GM crops equipped with their own pesticides pose for non-target species, such as bees or butterflies; and (4) the risks that the use of Roundup Ready crops will lead to more use of glyphosate and the emergence of glyphosate-resistant weeds (Resnik 2012; National Academies of Sciences, Engineering, and Medicine 2016b; International Service for the Acquisition of Agri-biotech Applications 2018). Although the environmental risks of GM crops have not been as well studied as the public health risks, there is now considerable data concerning the environmental impacts of GM crops (National Academies of Sciences, Engineering, and Medicine 2016b). Regarding the risks of gene transfer, studies have shown that GM crops will breed with non-GM crops to produce hybrids. 
Moreover, many of the traits exhibited by GM crops, such as drought tolerance, salt tolerance, herbicide resistance, and insect resistance, are evolutionarily advantageous. Therefore, if cross-breeding occurs, the offspring could become highly prevalent (Warwick et al. 2009). Many agricultural scientists maintain that gene flow risks can be managed by using containment methods, such as segregating GM and non-GM crops, and by designing GM crops so that they cannot produce viable offspring (Daniell 2002). However, segregation is not always easy to achieve, since farmers who wish to grow non-GM (e.g. organic) crops may not be able to plant their crops far enough away from GM crops to prevent cross-fertilization, and designing crops to not produce viable offspring is technically challenging and controversial, since most farmers in the developing world cannot afford to buy seeds each year and so save seeds for the next crop (Resnik 2012). GM canola has been difficult to control and has hybridized with several weed species in North America (Biello 2010). Fortunately, there is little evidence that gene flow from GM to non-GM crops has occurred by means of horizontal gene transfer, which is a rare event in any case (National Academies of Sciences, Engineering, and Medicine 2016b). Concerning the risks of invasiveness, throughout history, human beings have accidentally or intentionally introduced invasive plant species to ecosystems. Kudzu, English ivy, barberry, wisteria, Japanese honeysuckle, and numerous other plants


have wreaked environmental havoc in the US (Resnik 2012). While most GM crops thrive only when cultivated, some, such as GM canola, can grow easily in the wild and are potentially invasive (Biello 2010). Although there is little evidence that GM crops, on the whole, pose a significant invasiveness risk, the situation bears watching and the risk of invasiveness should be managed (Warwick et al. 2009; National Academies of Sciences, Engineering, and Medicine 2016b). One method of protecting against invasiveness is to design crops that do not produce viable offspring. Regarding the risks to non-target species, in 1999 the journal Nature published a study showing that Monarch butterfly larvae that ate milkweed plants dusted with Bt corn pollen31 grew more slowly and had a higher mortality rate than larvae that ate milkweed plants that were not dusted with Bt corn pollen. The study concluded that Bt crops can adversely impact Monarch butterflies and other insects that feed on or interact with those plants (Losey et al. 1999). However, other scientists rejected these findings because the exposure levels of Bt corn pollen in the laboratory experiment were much higher than those in the field. Furthermore, field studies have shown that Bt crops have little negative impact on Monarch butterflies (Sears et al. 2001). Some environmentalists and journalists have hypothesized that Bt crops may be partly responsible for the worldwide decline in honeybee populations (McDonald 2007). However, a review of 25 studies on the impacts of Bt toxins on honeybees concluded that the evidence does not support this hypothesis (Duan et al. 2008). Studies that have examined the impact of Bt crops on lacewings, ladybugs, and parasitic wasps have also found that their impact is negligible (Poppy 2000). 
While there is little evidence that Bt crops have adverse effects on non-target species, further research is warranted to better understand how Bt crops interact with non-target species, and how Bt crop cultivation changes habitats and ecosystems that support non-target species (Poppy 2000). Concerning the risks related to glyphosate resistance, a review of the literature on this topic concluded that while Roundup Ready crops can contribute to the development of glyphosate resistance in several different species (see Werth et al. 2013), this problem can be managed by taking an integrated pest management (IPM) approach to weeds (National Academies of Sciences, Engineering, and Medicine 2016b). IPM includes judicious use of pesticides in combination with other pest control methods (such as tilling fields to kill weeds) and active monitoring of pesticide use patterns and pesticide resistance (Resnik 2012). The social/cultural risks of GM crops are difficult to articulate, let alone quantify (Lucht 2015). Many people oppose GM crops because they fear they will transform agriculture by making it more industrialized and large-scale. However, agriculture has become increasingly industrialized since the 1960s, and GM crops play only a small part in this trend. Other developments, such as innovations in farming equipment, consolidation of small farms, and the use of chemical fertilizers and pesticides, have played a much more important role in industrializing agriculture (Resnik 2012). The best way to counter the trend toward industrialization may be to provide economic support for small farms.

31 See Footnote 12.


7 Genetic Engineering

Others oppose GM crops for moral reasons. Some oppose GM crops because they distrust the biotechnology industry and how it negatively impacts farmers, especially farmers in developing nations who cannot save the seeds from GM crops to produce new crops because the seeds are inviable (Cornish 2018). However, as noted above, the industry has economically benefitted farmers, especially those in the developing world (Conrow 2018). Some oppose GM crops because they regard them as unnatural or fundamentally opposed to nature (Lucht 2015; BBC News 2015). However, this opposition is self-contradictory, because, as noted above, most of the non-GM crops that farmers grow have been perfected through thousands of years of selective breeding and domestication. Today’s non-GM corn scarcely resembles the maize plant that grew in the wild thousands of years ago (Resnik 2012). Non-GM plants are not truly “natural,” if by “natural” one means “unaltered by human beings.” Some argue that public opposition to GM crops is based on irrational fear and ignorance, rather than a careful assessment of the science and evidence (Blancke 2015; BBC News 2015), a claim supported by survey data. A survey by the Pew Research Center (2016) found that science literacy was positively correlated with support for GM crops, and that 90% of scientists who responded to the survey believed that GM foods are as safe as non-GM foods, while only about 50% of non-scientist respondents expressed this view. As one can see from the discussion of the risks of GM foods/crops, while there is a great deal of evidence concerning the safety of eating GM foods, there is considerable scientific uncertainty concerning the environmental risks of GM crops. There is also a great deal of moral uncertainty concerning GM crops, because some people oppose them partly for moral reasons (Resnik 2012). 
Thus, while EUT is not a useful tool for managing the risks of GM foods/crops, the PP should be able to lend some insight into these issues. To apply the PP to issues related to GM crops/foods, we should consider which of the three basic policy options (risk avoidance, risk minimization, and risk mitigation) is most reasonable. As noted above, many countries have banned GM crops and foods. Is this a reasonable option? If we consider the proportionality of risks and benefits, then we need to determine whether the benefits of GM crops/foods, such as increased food production, reduction in the use of pesticides and fertilizers (an environmental benefit), and economic development, outweigh the risks, such as possible harms to public health and the environment and social/cultural impacts. As noted above, most scientists and many members of the public who have considered the benefits and risks of GM crops/foods believe that the benefits outweigh the risks, but a substantial proportion of the public disagrees with this assessment. While it is tempting to dismiss public opposition to GM crops as based on fear and ignorance, a more respectful and charitable way of conceiving of this opposition is that it stems from conflicting value priorities (or risk tolerances) between proponents and opponents of GM crops.32

32 Davidson (2001) defends a principle of charity for interpreting language. The basic idea is that one should interpret a speaker’s statements as being rational, other things being equal. Interpreting disagreements about GM foods/crops as based on differing value priorities portrays these disagreements as rational, rather than as based on irrational fear or ignorance.

7.10 Genetic Engineering of Plants

To frame this type of value conflict, it may be useful to draw upon the distinction between taking risks for one’s self vs. taking risks for others (or imposing risks on others) (see Chapter 5). As noted in Chapter 5, we allow (competent) adults to take greater risks for themselves than they can take for other people because we value personal freedom and autonomy. We allow adults to smoke tobacco, drink alcohol, skydive, and take other health risks, as long as they don’t impose unreasonable risks on others. Some people may not want to take these risks, and they are free to make this choice, but they do not have a right to stop other adults from taking such risks, as long as these risks don’t affect them. Thus, we allow adults to smoke but prohibit smoking in public places. We allow adults to drink alcohol, but we do not allow them to drive while under the influence of alcohol. If we apply this reasoning to GM foods, we could say that people should be free to eat GM foods as long as they are not imposing unreasonable risks on others. Banning GM foods because some people do not think the benefits of GM foods are worth the risks would be an excessive and unreasonable restriction on human freedom (Resnik 2015b). It would also be inconsistent with other public health policies: if we allow people to smoke or eat unhealthy foods, then we should allow them to eat GM foods. If the only risks of GM crops were personal health risks, then banning them would be unreasonable because it would violate the proportionality and consistency conditions. However, there are other risks associated with GM crops/foods, such as environmental risks. How should we think about reasonableness when these other risks are included? To do so, we should reflect upon the other environmental risks of agriculture that we already permit. 

Non-GMO plant agriculture poses many different environmental risks, such as deforestation of land, water pollution (due to fertilizers, pesticides, and other chemicals), air pollution (due to the combustion of fossil fuels), excessive water usage, and waste production (Resnik 2012). Non-GMO agricultural crops also pose a risk of invasiveness, and pesticides sprayed on these crops can have adverse effects on non-target species (see Chapter 6). Although industrialized agriculture has more of an environmental impact than nonindustrialized agriculture, even small, low-tech farms can pose a risk to the environment (Resnik 2012). Taking all this into account, one could argue that it would be inconsistent to permit non-GM agriculture while at the same time banning GM crops, especially since the environmental impacts of GM crops may be equal to or less than the impacts of non-GM crops (National Academies of Sciences, Engineering, and Medicine 2016b). Thus, a compelling argument can be made, based on proportionality and consistency, that banning GM crops would be an unreasonable way of managing their risks. A more reasonable approach would be to minimize and/or mitigate these risks. Although there are solid, science-based arguments against banning GM crops, many people do not find them convincing, since, as noted previously, 38 countries


have banned GM crops.33 Assuming these bans resulted from democratic decision-making processes, we need to ask whether they are reasonable. Recall that one of the criteria of reasonableness is that decisions about risks and benefits should be made fairly. As argued in Chapter 5, democracy is an essential component of fairness in social decision-making. Thus, it appears that, in these countries at least, there would be a conflict between proportionality/consistency and fairness concerning the reasonableness of risks. Proportionality and consistency do not support a ban, but fairness would support a ban, assuming that a majority of citizens have voted for it. How does the PP address this type of conflict? As argued in Chapter 5, there are strong moral and political reasons for supporting democratic rule within a legal framework that protects individual rights (Rawls 2005). Thus, if the majority of citizens in a nation (or their duly elected representatives) decide to ban GM crops/foods, then this decision is, for that nation, the most reasonable choice. It may not be reasonable from a scientific perspective, but it is reasonable from a democratic one. Assuming that there is no fundamental right to grow or eat GM crops/foods, bans that result from democratic decision-making processes are reasonable. Bans on GM crops/foods illustrate the importance of public participation and democratic decision-making when applying the PP to real-world problems of risk management (Science and Environmental Health Network 1998; Kriebel et al. 2001; Whiteside 2006). While the public should have the final say in whether a social risk (such as growing GM crops) is worth taking, it is important to realize that citizens are free to change their minds and may decide to revoke bans. As noted above, the EU lifted its ban on GM crops in 2007. 
One way of interpreting the EU’s decision-making concerning GM crops/foods is that EU officials decided to revise EU policies in light of new and emerging evidence concerning the benefits and risks of GM crops/foods. When the ban was enacted in 1998, much less was known about the risks and benefits of GM crops/foods than is known today. The EU, under this interpretation, acted in accordance with a principle of epistemic responsibility.34 To enable citizens and policymakers to make epistemically responsible choices concerning the management of risks related to genetic engineering and other emerging technologies, scientists should educate and inform the public about these issues (National Academies of Sciences, Engineering, and Medicine 2016a). If a nation has decided by democratic procedures to deal with the risks of GM crops by minimizing or mitigating them rather than banning them, it will need to decide how best to do so. As we saw in Chapter 6, there are various strategies for minimizing or mitigating the risks of chemicals, such as pre-market testing, post-market research and safety review, labelling, registration, consumer education, and research. One could argue, following the reasoning in Chapter 6, that all of these strategies would be reasonable for minimizing and mitigating the risks of GM crops/foods. Rather than

33 It is also worth noting that bans on GM plants can create black markets because of the high demand for these products.

34 As of the writing of this book, Kenya is rethinking its ban on GM crops (Meeme 2019).


examining each of these strategies, I will focus on two that have generated the most controversy: pre-market testing requirements and mandatory labelling. As noted above, the FDA does not require companies to perform animal or human studies on GM crops/foods to obtain marketing approval. Companies need only provide data demonstrating that their products are substantially equivalent to non-GM products. The data could be derived from chemical experiments and need not involve any testing on animals or humans. Substantial equivalence assessments should include data concerning allergenicity potential and nutritional content. It is also worth noting that companies have sponsored thousands of safety studies on GM foods, which can also inform substantial equivalence determinations. Critics of GM foods argue that the level of safety testing required by the FDA and other agencies is not sufficient to protect the public from harm, and that GM foods should undergo more stringent testing prior to marketing (Fagan et al. 2014). In response to this criticism, one might argue that the level of evidence needed to demonstrate substantial equivalence is sufficient to permit marketing, because it would be inconsistent and unfair, as a matter of policy, to not allow GM foods to be marketed when they are substantially equivalent to non-GM foods that are already on the market. Consider an analogy with drug approval: if the FDA determines that a generic drug is equivalent to a brand-name drug in terms of ingredients, pharmacology, manufacturing, and other characteristics, then it allows the generic drug to be marketed without further testing (Food and Drug Administration 2020d). In both the GM food and generic drug cases the reasoning is the same: a product is allowed on the market if it is determined to be equivalent to another product that has marketing approval. 
If the old product is safe enough for consumers, then the new one is presumed to be safe enough if it is equivalent to the old product. While the substantial equivalence standard may be adequate for addressing the health risks of GM foods, one might argue that it is not adequate for addressing the environmental risks of GM crops, and that companies should perform some type of environmental risk assessment before receiving marketing approval. In the US, the EPA requires companies that manufacture GM crops that produce pesticides (such as Bt crops) to submit environmental impact assessments that address effects on off-target species as part of the application process (Environmental Protection Agency 2020b). However, this type of assessment, while important, does not address other types of environmental impacts, such as risks of invasiveness, gene transfer, and herbicide tolerance. Mandatory labelling of GM foods is a relatively inexpensive and reasonable way of minimizing or mitigating risks because it allows consumers to make choices concerning their risk exposure. People who do not want to eat GM foods should be provided with the information they need to make choices that reflect their values (Messer et al. 2015). One could argue, additionally, that mandating GMO labelling would be consistent with other labelling requirements, which address nutritional content and ingredients (Borges et al. 2018). Over 60 countries require labelling of GM foods (Justlabelit.org 2020). Until recently, the US did not require labelling of GM foods, so most consumers were eating GM foods unknowingly (Borges et al. 2018). On December 20, 2018, the USDA announced labelling guidelines, which became


Fig. 7.13 Bioengineered food label, U.S. Department of Agriculture, public domain, https://www.ams.usda.gov/rules-regulations/be/consumers

effective January 1, 2020 (except for small food manufacturers). Companies must comply with the guidelines by January 1, 2022. According to the guidelines, foods must be labelled as bioengineered foods if they contain “detectable genetic material that has been modified through lab techniques and cannot be created through conventional breeding or found in nature” (United States Department of Agriculture 2018). See Fig. 7.13. The main objection to mandated labelling is that it is potentially misleading to consumers because the evidence indicates that GM foods are just as safe as non-GM foods (American Association for the Advancement of Science 2012). Labelling of GM foods may lead consumers to think that GM foods are especially risky, when, in fact, they are not. In response to this objection, one could argue that labelling is useful to consumers even if scientists and regulatory agencies have determined that GM foods are just as safe as non-GM foods, because it allows consumers to make choices that reflect their values (Messer et al. 2015). Many people, such as vegetarians and practitioners of certain religions, make food choices based on moral or religious values, rather than health concerns. Informing vegetarians that a product contains meat does not necessarily mislead them into believing that the product is especially dangerous; it merely gives them information that can be useful in making choices consistent with their values.

7.11 Genetic Engineering of Animals

Before applying the PP to GM animals, it is important to say a few words about the moral status of animals. Many people believe that animals have inherent moral worth and should not be killed or harmed for scientific research or other reasons, such as to produce food or clothing (Regan 1983; Singer 2009). People who hold these views would likely oppose genetic engineering of animals for these purposes, although they might accept forms of genetic engineering, such as somatic gene therapy, which are designed to benefit animals. For people who hold these views, the only way of


managing the risks of GM animals reasonably is to not genetically engineer animals. In applying the PP to GM animals, I will assume that while animals have some moral value and deserve to be treated humanely, they do not have the type of moral value that we accord to human beings. We have moral obligations to avoid causing unnecessary pain, suffering, or harm to animals, but we can use them for research and other worthwhile purposes, provided that we treat them humanely (Thompson 1993; Beauchamp and DeGrazia 2020). Turning to the application of the PP to GM animals: as noted above, GM animals have produced important benefits for science and society and are expected to produce many more. To apply the PP to decisions concerning the genetic engineering of animals, we need to consider the options for risk management and the conditions for reasonableness of risks. The main risks of animal genetic engineering are: (1) risks to the animals; (2) risks to public health due to consuming GM meat or animal products; (3) risks to the environment from GM animals that escape captivity or are introduced into the wild; and (4) social and cultural risks. The animal risk issues related to genetic engineering are important and should be considered carefully. As noted earlier, scientists have used genetic engineering to develop animals to serve as research tools by creating animal models of human diseases and animals with genes that have been deleted (or “knocked out”) to study the functions of those genes. While genetic engineering of animals to serve as research tools has yielded important advancements in science and medicine, it also raises animal welfare issues (Thompson 1993; Resnik 2011). In the US and many other countries, research with animals is reviewed and overseen by Institutional Animal Care and Use Committees (IACUCs), which are charged with protecting laboratory animal welfare. 
Three ethical principles, known as the three Rs—reduction, replacement, and refinement—play a key role in making determinations concerning laboratory animal welfare (Russell and Burch 1959; Beauchamp and DeGrazia 2020). Reduction refers to the obligation to reduce the number of animals used in research, when this is scientifically feasible; replacement refers to the obligation to replace animals with other tools for obtaining knowledge, such as cells, tissues, or computer models, to the extent that this is scientifically feasible; and refinement refers to the obligation to refine experimental techniques to minimize animal pain and suffering (Shamoo and Resnik 2015). Genetic engineering of animals to serve as research tools raises some concerns related to the three Rs. Concerning reduction, while using GM animals as research tools can improve the scientific and practical value of animal research, it can also increase the number of animals used in research, because researchers may need to use several hundred animals to create a GM animal that has the desired alterations (Resnik 2011). Large numbers of animals need to be used because most of the GM embryos that are created do not survive and only between 1 and 30% of embryos have the desired genetic alteration (Ormandy et al. 2011). However, using GM animals as research tools may also reduce the number of animals used by improving the relevance and efficiency of animal experimentation. While it is difficult to determine whether improvements in relevance and efficiency compensate for increases in the


number of animals used, this tradeoff merits scrutiny by researchers and IACUCs (Schuppli et al. 2004; Ormandy et al. 2011). Concerning replacement, it is likely that genetic engineering will help researchers to replace animals with GM animal cell lines or tissues that have the genomic alterations they are interested in studying (Schuppli et al. 2004). However, genetic engineering is not likely to eliminate the need for animals in research, because important biological phenomena, such as homeostasis, immunity, development, reproduction, toxicity, and carcinogenesis, must be studied at the level of the whole animal (Resnik 2011). Concerning refinement, the procedures used to create GM animals for research can cause pain and suffering. These include injecting females with hormones to induce superovulation so that eggs can be harvested; surgically removing eggs from females; implanting embryos in females; and performing vasectomies on males so they can be used to induce pseudopregnancies (Ormandy et al. 2011). GM laboratory animals may also experience pain and suffering due to the effects of genetically engineered diseases, deletions of genes, or off-target effects of genetic alterations (Schuppli et al. 2004). Regarding the welfare of animals used for meat, milk, or other products, many of the issues are similar to those that arise in research. For example, genetic engineers may require hundreds of animals to produce an animal with the desired genetic alterations, and animals may experience pain or suffering during the research process or as a result of off-target genomic alterations. Additionally, GM animals used for meat, milk, or animal products may experience pain and suffering due to enhanced traits, such as growth. If an animal grows too fast, for example, it may suffer as a result of stresses on its muscles, bones, joints, or organ systems (Rollin 1995). 
Although many countries, including the US, have banned the use of growth hormones in agriculture (Rigby 2017), these bans might not apply to GM animals with enhanced growth characteristics. However, it is worth noting that genetically engineering animals to resist diseases may reduce pain and suffering related to those diseases. So far, salmon is the only type of GM meat to be marketed, but other types of meat may soon be available. The effects of enhanced growth on the welfare of salmon are not known (Forabosco et al. 2013). Concerning the risks of eating GM meat or animal products, very little is known at this point in time, because only one type of meat made by one producer has been marketed, due, in part, to lack of consumer interest in GM meat (Cossins 2015). As noted above, the FDA determined that AquaBounty’s GM salmon was substantially equivalent to (i.e., just as safe as) non-GM salmon, but it may take some time before we fully understand the risks of consuming GM meats and animal products. Turning to the environmental risks of GM animals, these may be more significant than the previously mentioned risks, depending on the type of animal that is modified and how it is used. For animals that will be reared in captivity, such as laboratory animals, the environmental risks are low. Potential harms to the environment (such as breeding with wild species or becoming invasive) could arise if the animals escape captivity, but these risks are very low, as long as researchers follow guidelines for


the care, use, safety, and security of laboratory animals (National Research Council 2011). The risks could be greater for animals that are raised in areas that are less secure than research laboratories, however. For example, AquaBounty’s GM salmon are raised in secure tanks similar to those currently used for non-GM, farm-raised salmon. AquaBounty has additional security measures to prevent sabotage. Since it is possible that some fish could escape captivity during a natural disaster, such as a hurricane or flood, AquaBounty has genetically engineered its fish to be triploid (three copies of each chromosome) so that they are infertile. However, the success rate of AquaBounty’s method for producing triploid fish is only 98%, so 2% of its fish are capable of breeding (Bodner 2015). If some fertile GM salmon manage to escape, they could breed with other salmon or with each other, which could disrupt wild salmon populations, especially since the GM salmon could have an adaptive advantage (i.e., enhanced growth) over wild salmon (Resnik 2012). The risks are greater still for animals that are intentionally released into the wild, because animals are mobile and can be difficult or impossible to control once released. Some of these risks can be minimized by releasing animals in geographically isolated areas, such as islands, or by rendering the animals infertile, but these risks cannot be entirely eliminated (National Academies of Sciences, Engineering, and Medicine 2016a). As noted earlier, Oxitec has conducted several field trials of its GM Aedes aegypti mosquitoes with the goal of reducing these populations to minimize infections from the dengue virus. Researchers have also developed (but not released) malaria-resistant GM Anopheles mosquitoes equipped with a gene drive mechanism to increase the prevalence of this genetic alteration in the population. Releasing GM mosquitoes into the wild poses risks for human health and the environment. 
The main environmental risk of releasing the GM Aedes aegypti mosquitoes is that this could disrupt the food web and ecosystem by substantially reducing this population. However, this risk may be minimal because animals that feed on these mosquitoes may find other sources of food, including other mosquito species. Release of these mosquitoes poses no significant risks to human health because the mosquitoes with the genetic alteration are males, which don’t bite (Resnik 2019b). Releasing malaria-resistant mosquitoes poses several risks to public health and the environment. First, the genetic alteration might not work as intended. For example, it could immunize the mosquitoes against malaria but enable them to carry another disease. Second, the malaria parasite might evolve in response to the genetic alteration and become more dangerous. Third, the gene drive mechanism might become linked to other genes, with unpredictable effects on the mosquito population. Fourth, the gene drive mechanism could have adverse impacts on non-target species if it infects these species by means of horizontal gene transfer (National Academies of Sciences, Engineering, and Medicine 2016a; Resnik 2019b). Although there are no planned releases of GM animals other than GM mosquitoes, researchers could, in theory, develop GM animals for release in the wild for a variety of purposes, such as pest or pathogen control, repopulation of endangered species, and ecosystem management.
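Returning briefly to the AquaBounty salmon example above: the significance of a 2% fertility rate depends on how many fish escape. The following sketch is purely illustrative, not part of any regulatory assessment; the escape sizes are hypothetical, and it assumes, simplistically, that each fish is independently fertile with probability 0.02.

```python
# Probability that an escaped group of GM salmon contains at least one
# fertile fish, assuming (hypothetically) that each fish is independently
# fertile with probability p, per the reported 98% triploidy success rate.

def prob_at_least_one_fertile(n_escaped: int, p_fertile: float = 0.02) -> float:
    # Complement of "every escaped fish is sterile"
    return 1 - (1 - p_fertile) ** n_escaped

# Hypothetical escape sizes: even modest escapes make a fertile
# escapee quite likely.
for n in (10, 100, 500):
    print(n, round(prob_at_least_one_fertile(n), 3))
```

Under these assumptions, an escape of 100 fish would carry a better-than-80% chance of including at least one fertile individual, which is why physical containment remains important even with a high sterilization rate.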


Concerning the social and cultural risks related to the genetic engineering of animals, GM animals used in agriculture raise social and cultural risks similar to those raised by GM plants, i.e., the risks of making agriculture more industrialized. However, animal farming, like other forms of agriculture, has become increasingly industrialized since the 1960s, and genetic engineering has little to do with this trend. The best way to counter the trend toward industrialization of animal farming may be to economically support small farms. Another risk is that some may view GM animals as unnatural or fundamentally opposed to their values. This concern is self-contradictory, because, as noted above, most non-GM animals (such as livestock and poultry) are also the products of thousands of years of selective breeding and domestication. Non-GM animals are not truly “natural,” if by “natural” one means “unaltered by human beings.” Even though this concern is self-contradictory, it still may impact consumer acceptance of GM meat, which so far has been lukewarm (Cossins 2015). A more significant, and novel, social and cultural risk pertains to the production of animal-human chimeras for research or other purposes (Hübner 2018). A chimera is an organism that contains parts (such as genes, cells, tissues, or organs) from different species. For example, researchers have implanted human cells into laboratory animals to study diseases, development, growth, muscle and nerve function, and other biological processes (Hübner 2018). As noted earlier, researchers have also transferred human genes to laboratory animals for similar research purposes (Simmons 2008) and have transferred human genes to pigs to develop animals that can provide tissues and organs for transplantation (Hryhorowicz et al. 2017; Le Page 2020). While tissues and organs from GM pigs can promote human health, they also pose a risk of zoonosis (i.e. 
transmission of diseases from animals to humans) (Hryhorowicz et al. 2017). Creating animal-human chimeras can give rise to moral confusion, because if we create an animal that is sufficiently human-like, we may not know how we should treat that animal (Robert and Baylis 2003; Streiffer 2005; Koplin and Wilkinson 2019). Should we grant the animal legal and moral rights? How should we decide when an animal is sufficiently human-like to be accorded human moral/legal status? Should the animal have human cognitive characteristics, such as intelligence, reasoning, or language capability? Moral confusion is not, in itself, a bad outcome (or risk); however, it can lead to adverse outcomes. For example, if we create chimeras that are sufficiently human-like that they deserve the moral/legal status we accord to human beings, and we do not grant them this status because we have not yet thought clearly about it, then this would be a travesty. It would be like treating people as animals. One might argue that to avoid this bad outcome we should not create animal-human chimeras that have a reasonable chance of exhibiting human-like characteristics, such as intelligence, language, or reasoning, until we have decided how these beings should be treated (Streiffer 2005). Fortunately, the moral confusion problem is not likely to occur in GM animals for the foreseeable future, since most of the animals that have been created do not have the phenotypes or genotypes that would make them at all human-like. For example,


inserting a human insulin gene into a mouse or pig will not give the mouse or pig human-like cognitive characteristics. The problem is more likely to arise when we perform genetic alterations pertaining to brain structure or function on animals that already have advanced cognitive abilities, such as chimpanzees or monkeys.35 As one can see from the preceding discussion, there is considerable scientific and moral uncertainty concerning the genetic engineering of animals. Thus, EUT is not applicable to these issues, but the PP may lend some insight. To apply the PP to GM animals, we should consider which strategy—risk avoidance, risk minimization, or risk mitigation—most reasonably manages the risks of genetically engineering animals. Taking into consideration the various benefits of GM animals for scientific research, medicine, and agriculture, a strong case can be made that these benefits outweigh the risks to animals, public health, and the environment.36 Thus, avoiding the risks of animal genetic engineering by banning it would contravene the proportionality criterion and be an unreasonable way of managing these risks. The most reasonable way of managing these risks, arguably, is to minimize and mitigate them by regulating, reviewing, monitoring, and overseeing animal genetic engineering to protect animals, people, and the environment. To satisfy the consistency criterion, GM animals should have the same ethical and legal protections that other animals have; and to meet the epistemic responsibility criterion, scientists who create or work with GM animals should stay abreast of the latest developments in genetic engineering (such as new methods of gene editing) and veterinary medicine to minimize harm to animals. Additional research should be done on the risks and benefits of consuming GM meat and animal products to inform public policy. 
While the PP does not support a total ban on GM animals, it would caution us to avoid forms of animal genetic engineering where the benefits may not outweigh the risks. For example, the PP would advise us to avoid genetic engineering projects that could plausibly result in significant animal suffering or environmental harm. The PP would advise us to go slowly and carefully with some types of animal genetic engineering, such as genetically modifying pigs to serve as sources of tissues/organs or creating potentially human-like chimeras, until we have a better understanding of these technologies and their associated risks and benefits. The PP would also advise us to strengthen regulations and guidelines related to GM animals and close potential gaps,37 so that we can minimize and mitigate risks.

35 Most of the debate about chimeras so far has focused on inserting human cells into early animal embryos (or blastocysts), not on inserting human genes into animals.
36 It is also worth noting that a ban would probably create a black market, because demand for GM animals and animal products is high.
37 There is a potential regulatory gap in the genetic engineering of animals for meat or animal products. Although regulations and ethical guidelines require IACUCs to review and oversee genetic engineering of animals for research conducted at academic institutions, there are no such requirements for genetic engineering of animals for non-research purposes, such as meat production. One could argue that companies that genetically engineer animals for non-research purposes should form ethics committees similar to IACUCs to oversee these activities.

Before closing this section, we should also consider fairness issues related to GM animals. As noted earlier, Oxitec has conducted several field trials of its GM

mosquitoes. Most commentators agree that meaningful and effective community engagement is an ethical prerequisite for releasing GM mosquitoes into the environment (National Academies of Sciences, Engineering, and Medicine 2016a; Resnik 2018b, 2019a; Neuhaus 2018). As discussed in Chapter 5, community engagement is a partnership between researchers and public health officials and community members that includes the exchange of information, ideas, and opinions; mutual respect for values and interests; and shared decision-making. As far as the PP is concerned, engagement helps to promote procedural fairness by allowing community members to have meaningful input into public policy decisions that affect them. Although Oxitec did not conduct any meaningful community engagement prior to its initial field trials in the Cayman Islands, it has conducted meaningful community engagement since then. Oxitec conducted extensive community engagement before obtaining approval to release its GM mosquitoes at several sites in Brazil in 2011. The company conducted community engagement in Key Haven, Florida, in order to obtain residents' approval for field trials, but, as noted above, residents voted against them (Resnik 2018b, 2019a). As far as the PP is concerned, one could argue that the decisions made by citizens in Brazil and Key Haven were reasonable ways of managing risks insofar as they resulted from fair, democratic procedures. Communities should have the right to decide whether they view the benefits of field trials as worth the risks, irrespective of whether scientists or regulatory officials think the benefits are worth the risks. If a community rejects a field trial, then its decision should be respected (Resnik 2019a). Fairness may trump other reasonableness criteria in some cases. In the future, scientists may genetically engineer other types of animals for the purposes of controlling diseases.
Researchers have already developed malaria-resistant mosquitoes, and they are developing mice that resist Lyme disease (Harmon 2016) and honeybees that resist diseases and parasites (Campbell 2020b). Other species that could be genetically engineered to control disease include black flies (river blindness), tsetse flies (African sleeping sickness), and triatomine bugs (Chagas disease) (Resnik 2018b). Community engagement should also be an important component of other proposed releases of GM animals that impact local communities, such as those mentioned above (National Academies of Sciences, Engineering, and Medicine 2016a).

7.12 Genetic Engineering of Human Beings

Genetic engineering of human beings lends itself naturally to application of the PP, given the scientific and moral uncertainty related to this topic (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017; Koplin et al. 2020). Before I discuss the risks and benefits of genetic engineering of human beings, it will be useful to review two key distinctions that have influenced debates about this issue. In the 1980s, one of the pioneers of human gene therapy, French

Table 7.2 Key distinctions in human genetic engineering

             Therapy                                       Enhancement
Somatic      Morally acceptable when conducted under       Morally questionable even if safe
             appropriate scientific, medical, and          and effective (?)
             ethical standards
Germline     Currently too dangerous but may be            Morally unacceptable even if safe
             morally acceptable when safe and              and effective (?)
             effective (?)

Anderson38 (1985, 1989) argued that social policy and medical decision-making should be guided by distinctions between somatic and germline genetic engineering and between therapeutic and enhancement genetic engineering. See Table 7.2. Somatic genetic engineering (SGE) attempts to alter human somatic cells. The gene therapy procedures performed on patients discussed earlier in this chapter (e.g. CAR T-cell therapy) are examples of somatic interventions. Germline genetic engineering (GGE) attempts to alter human germ cells or reproductive tissues, such as the ovaries, testes, sperm, eggs, or early embryos. The genetically engineered children born in China in 2018, discussed earlier in this chapter, were conceived by means of GGE. The key difference between SGE and GGE is that germline modifications are inheritable, whereas somatic ones are not (Anderson 1989; American Association for the Advancement of Science 2000). However, there is a small chance that SGE performed in the body may accidentally alter germ cells and cause inheritable genetic modifications (Resnik et al. 1999; American Association for the Advancement of Science 2000). Therapeutic modifications are attempts to treat or prevent a disease (Anderson 1985, 1989). The gene therapy procedures pioneered by Anderson39 and performed on patients discussed earlier in this chapter (e.g. CAR T-cell therapy) are examples of therapeutic genetic engineering. Genetic enhancements are attempts to use genetic engineering to enhance, augment, or alter traits for purposes other than treating or preventing diseases (Anderson 1985, 1989). Altering salmon so that they grow twice as fast as normal is a type of enhancement. Altering the genome of a human being to increase height, memory, strength, or intelligence would be a form of enhancement (Resnik et al. 1999).
Anderson (1985, 1989) and others argued that somatic genetic therapy is a morally acceptable procedure that fits within the bounds of clinical medicine, that germline genetic therapy is potentially acceptable but very dangerous, that somatic genetic

38 Anderson led the research team that conducted the world's first human gene therapy clinical trial. The experiment used a retroviral vector to insert the adenosine deaminase gene into the T-cells of two young children with severe combined immunodeficiency. The trial showed that the procedure was safe and effective even if it did not cure the patients (Blaese et al. 1995). In 2006, Anderson was convicted of molesting and sexually abusing a girl over a four-year period, beginning when she was 10 years old, and he served 12 years in prison. Anderson maintains that he is innocent and that his conviction was based on falsified evidence (Begley 2018).
39 See Footnote 29.

enhancement40 is morally questionable, and that germline genetic enhancement is morally unacceptable (Berger and Gert 1991; Walters and Palmer 1997; Juengst 1997; American Association for the Advancement of Science 2000).41 Some argued that germline gene therapy was also unacceptable because it could start society on a slippery slope toward germline enhancement (for discussion, see Resnik 1993). Public opinion seems to concur with this view (Blendon et al. 2016). A Pew Research Center poll of 2537 US adults conducted in April/May 2018 found that 72% of respondents approve of GGE to treat a serious disease or condition a baby would have at birth, 60% approve of GGE to reduce the risk of a serious disease that could occur over a lifetime, but only 19% approve of GGE to enhance intelligence (Funk and Hefferon 2018). A high level of scientific education was associated with greater acceptance of GGE, while a high level of religiosity was associated with lower acceptance of GGE (Funk and Hefferon 2018). Other polls have yielded similar results (Blendon et al. 2016). Paradoxically, only 33% of respondents to the Pew Research Center poll said that testing gene editing methods on human embryos was acceptable (Funk and Hefferon 2018). This is an ethical inconsistency: because babies develop from embryos, methods to treat serious diseases/conditions in babies cannot be developed without testing them in human embryos. Many commentators have questioned the cogency and usefulness of the distinction between therapy and enhancement (Juengst 1997; Resnik 2000a; Resnik and Langer 2001; American Association for the Advancement of Science 2000; Rasko et al. 2006; Baylis 2019). One of the problems with the distinction is that the division between therapy and enhancement is unclear because it rests on a prior concept of healthy (or normal) functioning, which is itself controversial (Murphy 2020).
The therapy/enhancement distinction depends on a prior definition of normal functioning because therapy restores or promotes normal functioning, whereas enhancement alters normal functioning (Parens 1998; Resnik 2000a; American Association for the Advancement of Science 2000). However, what it means to be healthy or normal is often not a purely objective determination and may depend on moral, social, and cultural values (Szasz 1961; Caplan 1995, 1997; Resnik 2000a; Buchanan et al. 2000; Murphy 2020). For example, in many countries homosexuality was at one time regarded as an abnormal (unhealthy) behavior that could be treated or cured but it is now widely regarded as a normal sexual orientation for some people. Masturbation was once regarded as a form of mental illness but is now regarded as a normal part of human sexuality (Szasz 1961). Other conditions associated with aging, such as hair loss, hearing loss, memory loss, macular degeneration, menopause, and andropause,

40 An example of somatic genetic enhancement would be transferring a gene to an adult male to stimulate production of testosterone to enhance athletic and sexual performance.
41 It is worth noting that not everyone regards genetic enhancement as immoral or morally questionable. The transhumanist movement embraces various forms of enhancement to benefit mankind and allow people to express creative freedom (Harris 2007; Bostrom 2008, 2010; More and Vita-More 2013; Porter 2017; Rana and Samples 2019).

may be regarded as diseases or as part of the normal aging process, depending on one's values and assumptions (Callahan 1995).42 Recently, a Russian couple have been considering whether to use genome editing to prevent their second child from having a mutation that causes deafness (Cohen 2019b). While most people would consider deafness to be a disability related to abnormal hearing function, many members of deaf cultures regard deafness as normal functioning within their culture, and some have used assisted reproductive technologies (but not genome editing) to ensure that their children would be born deaf (Savulescu 2002; Johnston 2005). A second problem with the therapy/enhancement distinction is that it breaks down entirely when applied to genetic modifications designed to help the body fight disease or the aging process, because these modifications often work by enhancing (i.e. changing, augmenting) normal functions or structures (Juengst 1997). For example, a genetic modification that immunizes children against a disease (such as HIV or malaria) may also enhance the human immune system by improving its ability to fight HIV or other diseases. A genetic modification that prevents tooth decay may enhance the durability of human teeth, and a modification that prevents osteoporosis may enhance bone structure. A third problem with the distinction is that it may be difficult to enforce, because genetic modifications used for therapeutic purposes may also be used for enhancement purposes. For example, a genetic modification designed to treat muscular degeneration might also be used to enhance muscle function, and a modification designed to treat low testosterone levels could also be used to increase levels above normal (Parens 1998).
A fourth problem with the distinction is that it may not be morally significant, since there are types of genetic therapy that we would regard as morally unacceptable, such as gene therapy that places a patient at an unreasonable risk, and types of genetic enhancement that we might regard as acceptable, such as an enhancement that protects against HIV infection (Resnik 2000a). Although the therapy/enhancement distinction has some significant limitations, it does capture some of our moral concerns related to human genetic engineering and provides some rough guidance for human genetic engineering policies. Moreover, the distinction has been popularized to the point where it is difficult to talk about the ethics of human genetics without referring to it.

42 Some have attempted to define health in terms of a normal range of variation for an organism. In medicine, a normal physiological trait is a trait that falls within a range of variation for healthy functioning of the organism (Boorse 1977; Schaffner 1993). For example, normal fasting blood sugar levels range from 60 mg/dL to 100 mg/dL (WebMD 2020). Fasting blood sugar levels that are too high cause diabetes and levels that are too low cause hypoglycemia, both of which are unhealthy conditions. However, normality cannot be equated with the statistical norm for a population, since the statistical norm might be unhealthy. If most people in a population have a fasting blood sugar greater than 100 mg/dL, we would not say that a fasting blood sugar greater than 100 mg/dL is healthy, even though it would be the statistical norm for that population. Thus, the concept of a normal range of variation cannot be defined statistically and depends on a broader concept of health, which may be influenced by moral, social, and cultural factors.
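The contrast drawn in footnote 42 between a health-based reference range and a purely statistical norm can be sketched in a few lines of code. This is a toy illustration with made-up population numbers; only the 60–100 mg/dL reference range comes from the text.

```python
# Toy sketch of footnote 42: a population's statistical norm (its mean)
# need not fall inside the health-based reference range.
# The population values below are invented for illustration.

HEALTHY_RANGE = (60, 100)  # fasting blood sugar, mg/dL (range cited in the text)

def is_healthy(glucose_mg_dl: float) -> bool:
    """Health-based classification: a fixed physiological reference range."""
    lo, hi = HEALTHY_RANGE
    return lo <= glucose_mg_dl <= hi

def statistical_norm(population: list[float]) -> float:
    """A purely statistical 'norm': the population mean."""
    return sum(population) / len(population)

# Hypothetical population in which elevated glucose is the statistical norm.
population = [112, 118, 95, 130, 125, 108, 140, 98, 122, 115]

mean = statistical_norm(population)
print(f"statistical norm: {mean:.1f} mg/dL")       # 116.3 mg/dL, above 100
print(f"is the norm healthy? {is_healthy(mean)}")  # False
```

The point of the sketch is simply that the two notions come apart: the statistical norm here (116.3 mg/dL) lies outside the healthy range, so "normal" in the statistical sense cannot be what the therapy/enhancement distinction relies on.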

7.13 Somatic Genetic Engineering

Turning to the discussion of the benefits and risks of human genetic engineering, let us first consider SGE interventions, such as the gene therapy43 procedures described earlier. When the first gene therapy clinical trial began in 1990, many researchers believed that this would usher in an era of genomic medicine and that many diseases, including cancer, heart disease, and diabetes, would be treated or cured using this new type of intervention (Collins and Thrasher 2015). Since that time, the field has had some important successes but also some tragic failures. While there are hundreds of clinical trials in progress, only a few treatments have been approved by the FDA for marketing so far (Horgan 2017; Kaemmerer 2018). SGE carries some significant risks, such as: the risk of a serious (and possibly fatal) immune reaction to gene therapy vectors delivered in vivo; the risk of cancer due to off-target mutations caused by vectors; and toxicity (Kaemmerer 2018). In 1999, Jesse Gelsinger, an eighteen-year-old participant in a gene therapy clinical trial at the University of Pennsylvania, died as a result of a severe immune reaction to an adenovirus vector (Resnik 2018a). Gelsinger had a disease called ornithine transcarbamylase deficiency, which occurs when a person lacks a functional copy of the gene that codes for ornithine transcarbamylase, an enzyme that plays a key role in the metabolism of proteins. Gelsinger had a mild form of the illness and was able to maintain his health by means of medications and dietary restrictions. The gene therapy procedure had been tested in monkeys, some of which had died. The researchers did not tell Gelsinger about the full extent of adverse reactions to the therapy in animal studies (Resnik 2018a). There is little dispute about the moral legitimacy of SGE as a form of medical treatment.
Most of the issues related to this type of genetic engineering have to do with managing the risks to human subjects and patients and informing them about these risks. Using SGE to alter normal traits (i.e. enhancement) also raises moral issues (Baylis 2019), but so far SGE has not been used for this purpose. As noted earlier, in the US, the FDA regulates SGE treatments and IRBs oversee SGE clinical trials. The FDA and IRBs have the authority to decide whether clinical trials may proceed to Phase I, II, and III. One of the most important issues is whether a genetic intervention is safe enough to test in humans (Kimmelman 2010). Because there are significant metabolic, immunologic, and physiologic differences between animals and humans, it can be difficult to accurately or reliably predict how a person will react to a drug, device, or biologic that has only been tested in animals (Kimmelman 2010). The PP would advise us to take reasonable precautions when moving from preclinical studies to clinical ones to ensure that risks to the subjects are proportional to benefits to the subjects and society.

43 Some argue that "gene therapy" is a misleading term because it implies that the genetic interventions are likely to benefit the patient or human subject, when often they do not (Henderson et al. 2006).

For example, it would be reasonable for an IRB (or the FDA) to not approve a Phase I gene therapy clinical trial until it is

satisfied that there is convincing evidence concerning safety and possible efficacy to move forward with the study. When a Phase I study is approved, it would also be reasonable to take precautionary measures to minimize risks to subjects. Some of these include: testing the treatment on only a few subjects at first to determine how safe it is before testing it on others; clinical monitoring of subjects to protect their health and withdrawing them if necessary; using DSMBs to monitor data; and developing clear and comprehensive inclusion/exclusion criteria to protect subjects from harm (Resnik 2018a).44 Informed consent can also play an important role in protecting subjects from risks. Evidence indicates that subjects in Phase I studies often do not understand the difference between a clinical study whose main purpose is to generate knowledge and a therapeutic intervention whose main purpose is to benefit the patient, and that subjects and investigators often overestimate the benefits of experimental treatments and underestimate the risks (Henderson et al. 2006; Miller and Joffe 2009). It is important to ensure that subjects understand the benefits and risks of SGE clinical trials, so they can make participation decisions that reflect their values. Subjects who do not feel that the benefits of a study are worth the risks can decide not to participate in the study. The benefits of Phase I studies to subjects are often speculative at best, since these studies are usually designed to gather data on safety, dosing, toxicity, and pharmacology, not to test efficacy (Miller and Joffe 2009). Some commentators argued that the Phase I trial that Jesse Gelsinger participated in did not offer him significant benefits because he was relatively healthy when he agreed to be in the study, and the risks of the study were significant (and turned out to be fatal) (Resnik 2018a). 
Once an SGE treatment has completed clinical trials, regulatory agencies and health care professionals can take a variety of measures to protect patients and the public (see discussion in Chapter 6).

44 See Resnik (2018a) for discussion of additional safety protections for subjects enrolled in clinical research.

7.14 Germline Genetic Engineering

We now turn our attention to GGE, a topic that has generated a great deal of moral controversy for four decades and is likely to continue to do so (President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research 1982; Baylis 2019). To apply the PP to GGE issues, we need to consider the risks and benefits of various types of GGE. Since GGE has been performed on only three human beings so far (as far as we know), the benefits of this procedure are highly speculative at this point. For the purposes of this discussion, it will be useful to distinguish between four types of GGE:

• GGE for research purposes only with no intention of producing children;

• GGE to prevent the birth of children with genetically-based diseases, disease predispositions, or conditions caused by a mutation in a single gene (i.e. monogenic disorders);
• GGE to prevent the birth of children with genetically-based diseases, disease predispositions, or conditions caused by mutations in multiple genes (i.e. polygenic disorders);
• GGE to alter normal traits to prevent disease, improve health, or achieve other purposes (i.e. enhancement).

Although most of the discussion of the ethics of GGE has focused on using GGE to produce children, GGE may also be used for research purposes. For example, researchers could use GGE to create embryos to study human development or gene function or to acquire safety and efficacy data concerning reproductive technologies, including GGE (National Academies of Sciences, Engineering, and Medicine 2017; De Wert et al. 2018). The embryos would be destroyed and discarded. Creating GGE embryos for research raises ethical concerns that are different from creating GGE embryos to produce children. Some people believe that creating, and then destroying and discarding, embryos for research is unethical because human embryos have moral status or value; on this view, human embryos should only be created outside of the body for the purpose of implanting them and producing children (Green 2001). However, creating GGE embryos for the purpose of producing children raises many different issues, such as risks to the child, future generations, society, etc., which are discussed below.
In this book, I will assume that using GGE to create embryos for research does not raise significant issues related to the reasonableness of risks, even though some people find it to be morally objectionable.45 Since this book is about managing risks reasonably, I will focus on issues related to using GGE to produce children.46 GGE for monogenic disorders is potentially more feasible and safer than other types of GGE because it involves editing only one gene (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017). There are over 10,000 monogenic disorders, affecting about 2% of the human population (Kumar et al. 2001; Yourgenome.org 2020). Many parents are justifiably concerned about passing on these types of genetic disorders to their children and want to use medical technology to prevent this from happening. Parents with a genetic disorder or a gene for a disorder can often avoid giving birth to children with the disorder without using GGE. For example, parents who are concerned about giving birth to a child with Tay-Sachs disease can undergo prenatal genetic testing (PNGT) and abort47 a fetus that tests positive for this disorder (American College of Obstetricians and Gynecologists

45 In 1996, the US Congress passed a ban, known as the Dickey-Wicker amendment, on the use of federal funds to create human embryos for research (Green 2001). Though the ban has been interpreted differently by different administrations, it is still in effect.
46 For further discussion of creating embryos for research, see Green (2001).
47 I will assume that parents who are willing to use medical technology to prevent the birth of children with genetic diseases view abortion as morally acceptable, at least for this purpose.

2019).48 They can then try to become pregnant again to have a chance at having a healthy child. If the mother is a carrier of a disease linked to the X chromosome, such as hemophilia or Duchenne muscular dystrophy, the parents could use PNGT (or other methods) to determine the sex of the fetus and abort a male fetus, since each son of a carrier mother has a 50% chance of being affected. In other cases, preimplantation genetic testing (PIGT) may be the best option. For example, if one parent has SCA, and one parent does not, they can create embryos in vitro, test them for this condition, and implant an embryo that has at least one copy of the normal allele (see earlier discussion of SCA) (Resnik et al. 1999; Gallo et al. 2010).49 There are some circumstances, however, in which couples cannot use PNGT or PIGT to avoid giving birth to children with genetic diseases because they do not have the genetic material needed to produce normal zygotes (Resnik et al. 1999; Resnik and Langer 2001).50 This problem could occur because the parents are both homozygous for a recessive disease, such as SCA or cystic fibrosis (Resnik et al. 1999). The Russian parents (discussed above) who want to have a second child that is not deaf cannot produce a non-deaf child because they are both homozygous for the deafness allele and neither parent has a copy of the normal allele. The problem could also arise if one parent is homozygous for a dominant disease, such as Huntington's disease, because even if the other parent has a copy of the non-disease allele, the disease allele will dominate (Resnik et al. 1999).
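The Mendelian reasoning behind these cases can be made concrete with a small Punnett-square sketch. This is an illustrative toy model of my own, not from the text; the convention assumed here is that a genotype is a two-letter string, with uppercase marking the dominant allele and lowercase the recessive one.

```python
from itertools import product

def offspring_genotypes(parent1: str, parent2: str) -> set[str]:
    """All genotypes producible by two parents (a Punnett square).
    Parents and offspring are two-character strings such as 'Aa'."""
    return {"".join(sorted(g)) for g in product(parent1, parent2)}

def affected(genotype: str, recessive: bool) -> bool:
    """Recessive disease: affected only if both alleles are recessive (lowercase).
    Dominant disease: affected if at least one allele is the dominant disease
    allele (uppercase)."""
    if recessive:
        return genotype == genotype.lower()
    return genotype != genotype.lower()

# Case from the text: both parents homozygous for a recessive allele ('aa').
# Every possible child is affected, so PNGT/PIGT embryo selection cannot help.
case1 = offspring_genotypes("aa", "aa")
print(all(affected(g, recessive=True) for g in case1))      # True

# Contrast: two heterozygous carriers ('Aa') can produce unaffected embryos,
# so PIGT selection does work for them.
case2 = offspring_genotypes("Aa", "Aa")
print(any(not affected(g, recessive=True) for g in case2))  # True

# Case from the text: one parent homozygous for a dominant disease allele ('DD').
# Every child inherits one disease allele, so selection again cannot help.
case3 = offspring_genotypes("DD", "dd")
print(all(affected(g, recessive=False) for g in case3))     # True
```

The sketch shows why GGE is the only option in exactly these two configurations: whenever every square of the Punnett square is affected, there is no unaffected embryo or fetus to select.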
Of course, one might argue that couples who cannot use their genetic material to produce healthy children could adopt a child or produce healthy children by means of sperm or egg donation combined with PIGT, but many couples do not want to pursue these options because they want a child that is genetically related to both parents.51

48 Prenatal genetic testing can also be used to avoid giving birth to children with chromosomal abnormalities, such as Trisomy 21 (Down Syndrome).
49 Embryos that are not implanted would be destroyed. I am assuming that parents would view this as morally acceptable.
50 See Resnik et al. (1999) and National Academies of Sciences, Engineering, and Medicine (2017) for additional examples of monogenic disorders that GGE might be used to prevent.
51 The concept of a parent can be confusing here, because people who are related to the child genetically might not be related socially. The concept of a parent can be even more confusing when surrogate pregnancy is used to produce children, since the woman who gestates and gives birth to the child might not be genetically related to the child, if she is carrying a fetus created by another couple in vitro.

GGE for preventing the birth of children with polygenic disorders would be much more complex and technically challenging than GGE for monogenic disorders, because it would involve editing several or perhaps dozens of interacting genes (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017). For example, genome-wide association studies have identified 38 genetic variants that increase the risk of type II diabetes (Billings and Florez 2010), and 11 variants that increase the risk of colorectal cancer (He et al. 2011). Genetics is a risk factor for many chronic and acute diseases, such as cancer, diabetes, hypertension, arthritis, Alzheimer's disease, and obesity (Stöppler 2019). While we are beginning to understand the genetic basis of many polygenic disorders, we have a long way to go before we will be able to understand how to manipulate the genome to prevent

them. Moreover, the environment also plays an important role in the etiology of most polygenic diseases, so GGE may only be able to address the genetic component of the risk of disease (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017). While it may sometimes be possible for parents to use PNGT or PIGT to avoid giving birth to children with polygenic disorders, GGE would seem to be the only viable option for achieving this outcome in many cases, because if a disorder is caused by multiple genes, it is likely that gametes produced by the parents will always have some of those genes (Resnik et al. 1999). GGE to alter normal traits would generally be much more complex and technically challenging than GGE for monogenic disorders, because it would usually involve editing dozens or perhaps hundreds of interacting genes (Resnik et al. 1999). The philosophical and scientific literature includes discussions of many different types of traits that could be altered (or enhanced), such as intelligence, athletic or musical ability, moral compassion, height, and longevity (Nozick 1974; Harris 1992, 2007; Resnik et al. 1999; American Association for the Advancement of Science 2000; Fukuyama 2002; Agar 2014; Blackford 2014; Baylis 2019). Most of these proposed enhancements belong in the realm of science fiction, because we are far from having a complete understanding of the genetic and molecular basis of these traits or how to genetically manipulate them reliably and safely (Resnik et al. 1999; Resnik and Vorhaus 2006; National Academies of Sciences, Engineering, and Medicine 2017; Koch 2020). Although many of the genetic enhancements that have been discussed may not even be technically achievable at this time, the discussion of genetic enhancement is not completely irrelevant to GGE policy, because it is possible that some types of enhancement might involve editing one or only a few genes.
For example, the genome-edited babies born in China in 2018 had alterations in a single gene that codes for a receptor on white blood cells. We can consider this to be a form of enhancement because the purpose of the GGE intervention was not to prevent a genetic disease but to give the children immunity to HIV by altering cells in their immune system. The GM salmon (discussed earlier) were created by altering a single gene that promotes the expression of another gene that codes for a growth hormone (Cossins 2015). If human growth is biologically similar to salmon growth, it may be possible to enhance human growth by editing a single gene.
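Returning to the polygenic point above: the claim that gametes from parents carrying many risk alleles will almost always include some of those alleles can be illustrated with a simple probability sketch. The independence and heterozygosity assumptions are my own simplifications for illustration; only the variant counts 11 and 38 come from the studies cited in the text.

```python
# Toy model: suppose a parent is heterozygous for one risk allele at each of
# n independent loci. A single gamete then avoids every risk allele with
# probability (1/2) ** n, which vanishes quickly as n grows -- this is why
# embryo/gamete selection (PNGT/PIGT) becomes impractical for polygenic
# disorders, leaving GGE as the only viable option in many cases.

def p_risk_free_gamete(n_loci: int) -> float:
    """Probability that a gamete carries none of n independent risk alleles,
    assuming one heterozygous risk allele per locus."""
    return 0.5 ** n_loci

for n in (1, 11, 38):  # 11 and 38 echo the variant counts cited in the text
    print(f"{n:>2} loci: P(gamete free of all risk alleles) = {p_risk_free_gamete(n):.2e}")
```

At 38 loci (the number of type II diabetes variants mentioned above), the chance of a risk-free gamete under these assumptions is on the order of one in 10^11, so waiting for a selectable embryo is hopeless even before fertilization combines two such gametes.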

7.15 Benefits of Germline Genetic Engineering

Benefits to children. Children can benefit from GGE that enables them to be born without genetic diseases, which can cause pain, suffering, psychological distress, disability, high health care costs, reduced economic opportunities, shortened lifespan, and other adverse outcomes (Resnik et al. 1999; Nuffield Council on Bioethics 2016). Children can also benefit from health-related enhancements, such as immunity to diseases or resistance to cancer, or non-health related enhancements, such as increases in longevity, stature, intelligence, athletic ability, and so on (Harris 1992, 2007;

Parens 1998; Resnik et al. 1999; Fukuyama 2002; Agar 2014; Blackford 2014). I am assuming, of course, that these interventions are technically possible and successful. The benefits of preventing polygenic diseases and enhancing human traits may not be attainable for the foreseeable future (see the discussion of plausibility in Chapter 4), given the current state of biomedical science and technology. Benefits to parents and other family members. For many parents, the fear that their children will be born unhealthy causes a great deal of stress and anxiety. Also, caring for children with a genetic disease can be an economic and psychological hardship for parents and other family members, such as siblings. Many parents also want their children to be happy and successful, and they expend a great deal of time and money providing them with the resources they need to achieve these goals (McGee 2000; Davis 2001). Genetic enhancements may be viewed by some parents as another way of giving children advantages that is no different, in principle, from providing their children with nutritious food, education, health care, or cultural enrichment (Parens 1998; McGee 2000). Parents can therefore benefit psychologically and economically from using GGE to produce healthy children or children with enhanced abilities to be successful in life (Parens 1998; Resnik et al. 1999; Fukuyama 2002; Agar 2014; Blackford 2014). Other family members can also benefit from these additions to the family. Additionally, one might argue that all people have a right to use reproductive technologies, including GGE, to procreate, and that the state should not unreasonably restrict this right (Robertson 1994). This is an important human rights issue that should be taken into account when evaluating the reasonableness of precautions concerning GGE. Benefits to future generations.
Future generations could benefit from being born without genetic diseases or having genetic enhancements, assuming that modified genes can be passed on to the next generation (National Academies of Sciences, Engineering, and Medicine 2017).

Benefits to society. Society can benefit from the health improvements related to GGE. Preventing the birth of children with genetic diseases can benefit society by reducing health care costs and other burdens. While it is difficult to estimate the economic costs of monogenic genetic diseases, due to their heterogeneity, a recent study found that Canadian children with these diseases cost between 4.54 and 19.76 times as much to care for as healthy children (Marshall et al. 2019). Another study found that inpatient birth care costs were $16,000–$77,000 higher for neonates born with monogenic diseases, as compared to healthy neonates (Gonzaludo et al. 2019). According to one estimate, the lifetime health care costs for treating monogenic diseases range from $30,000 to over $3 million (Cummings 2018). Since the vast majority of health care expenses are related to polygenic diseases, GGE for these conditions could save billions of dollars per year. Cancer costs the US about $300 billion per year in treatment and lost productivity, and heart disease costs the US $215 billion per year (Yabroff et al. 2011; Centers for Disease Control and Prevention 2019). Society could also possibly benefit from genetic enhancements to the immune system, which could, theoretically, improve public health and lower health care costs. For example, genetically immunizing people against HIV, malaria,


dengue, COVID-19, and other infectious diseases could dramatically improve public health. Looking beyond the health benefits of GGE, it is possible that individuals with enhanced intelligence or creativity could make important contributions to science, technology, industry, athletics, and the arts (Harris 1992, 2007; Resnik et al. 1999; Fukuyama 2002; Bostrom 2008; Berry 2013; Agar 2014; Blackford 2014). The social and economic benefits of genetic enhancement are highly speculative at this point and would depend on which traits are enhanced.

7.16 Risks of Germline Genetic Engineering

Risks to children. GGE can create significant risks for children conceived by this procedure. First, genetic interventions might have off-target, harmful genetic effects (Zhang et al. 2015; Araki and Ishii 2016; National Academies of Sciences, Engineering, and Medicine 2017). Although newer genetic engineering methods, such as CRISPR, are safer and more effective than older methods, they are far from perfect (Liang et al. 2015; Wang and Yang 2019). Off-target effects could range from mutations that have no adverse effects (such as mutations affecting only one or a few DNA sequences) to mutations that cause loss of function, disease, disability, or death (such as translocations, inversions, or large deletions of DNA sequences) (Araki and Ishii 2016). Harmful genetic alterations might not be apparent at birth and could emerge during development or adulthood. He Jiankui claims that the children he performed gene editing on are healthy, but it remains to be seen whether they will suffer any adverse effects from gene editing as they mature. A possible way of minimizing the risk of off-target effects is to use whole genome sequencing to test for off-target mutations in embryos prior to implantation. Embryos with significant off-target mutations would not be implanted. However, whole genome sequencing might not be able to distinguish between off-target mutations and other genetic variants (Araki and Ishii 2016). Second, if gene editing is performed on the early embryo, it is possible that it might affect some cells but not others. The embryo would then be composed of cells with different genomes, a phenomenon known as mosaicism (Liang et al. 2015; National Academies of Sciences, Engineering, and Medicine 2017). Mosaicism (also known as chimerism) is a common problem in animal genome editing (see Fig. 7.9).
To develop an animal without mosaicism, scientists test for the condition and breed animals that have fully incorporated the altered gene (Resnik et al. 1999). Mosaicism could limit the effectiveness of the genetic modification and could pose unknown risks for the child (National Academies of Sciences, Engineering, and Medicine 2017). It is important to note, however, that because random mutations often occur during somatic cell division, all adult mammals are genetically mosaic to a certain extent. Genomic differences between cells in the body may have no impact on health, or they could lead to diseases, such as cancer (Frank 2014).


Third, genetic interventions that involve the alteration of multiple genes might have unanticipated harmful effects on the child because the genes may interact with each other and with other molecular, cellular, and organismic processes and mechanisms in ways that are not presently understood (Resnik et al. 1999; Holdrege 2008; Baylis 2019). For example, altering a gene that plays a role in the development of diabetes might adversely affect carbohydrate metabolism. Altering a gene that plays a role in the development of intelligence might adversely affect other cognitive and emotional functions, such as social behavior and moral judgment. Fourth, some types of GGE might have intended harmful effects on children. For example, suppose that deaf parents use GGE to ensure that their child is born deaf. Although some members of the deaf community do not view deafness as a disability, there is little dispute that being deaf deprives a person of important human experiences and opportunities, such as interacting verbally with non-deaf people or enjoying music. Deafness can also interfere with education, socialization, and employment (Johnston 2005). Fifth, genetically enhanced children might suffer psychological harms from learning that their parents have designed them to be a certain way (McGee 2000; Davis 2001; President's Council on Bioethics 2003). A child who has been engineered for enhanced musical ability might resent the fact that his parents have imposed their values on him, and he might rebel against their desires and demands. Of course, children often already must deal with parental expectations, but GGE would take these expectations to a new level (Resnik and Vorhaus 2006).

Risks to parents and other family members. GGE also creates risks for parents and other family members.
Some of these are the risks associated with in vitro fertilization (required by GGE), such as the risks of taking drugs to induce ovulation, the risks of egg retrieval, and the risks of multiple births (Resnik et al. 1999). Other risks are psychological in nature. For example, if the GGE procedure is unsuccessful or harms the child, parents may experience guilt and remorse for their actions and stress and anxiety related to caring for an unhealthy child.

Risks to future generations. GGE interventions pose risks to future generations who are the progeny of children conceived by GGE. In many cases, the adverse effects of off-target mutations will manifest themselves in the child, but they might emerge in subsequent generations, especially if the mutation is recessive (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017). Although it is difficult to ascertain the risks to future generations from GGE, we know that mutations in somatic cells occur all the time, that most of these are repaired by the cells, but that many are not repaired and are harmful (Alberts et al. 2015).

Risks to society. GGE poses several types of risks to society. While most of these are highly speculative at this point, because they assume that GGE will be highly effective and widely used in the future, they are at least plausible, given our knowledge of population genetics and human psychology and behavior. The first social risk is that widespread use of GGE could lead to the elimination of genes from the human gene pool and the loss of genetic diversity (Resnik 2000b). Genetic diversity is an important biological resource because it can help protect the human population from disease and promote health. The loss of genetic diversity


could have adverse effects on the human population, depending on whether the eliminated genes have an impact on health and survival. For example, eliminating the SCA allele might reduce resistance to malaria (Resnik et al. 1999). Elimination of other genes may have adverse effects that cannot be predicted at this time. However, the loss of genetic diversity is not likely to have a significant impact on human health unless GGE is effective and is widely used (National Academies of Sciences, Engineering, and Medicine 2017).

The second social risk is that widespread use of GGE to prevent genetic diseases could contribute to discrimination and prejudice against people with genetically based (or other) disabilities (Parens and Asch 1999; Mitchell et al. 2007; Sandel 2009).52 Widespread use of GGE could negatively affect social attitudes toward people with disabilities and lead some to regard individuals with genetic disabilities as "defective persons" who did not make it through the "genetic screen," and to ask why they were allowed to be born at all. GGE could also contribute to racial or ethnic discrimination or prejudice if parents use GGE to modify skin color, hair color, eye shape, lip shape, or other physical characteristics associated with racial or ethnic groups. Widespread use of GGE could lead to social and cultural conformity related to physical appearance. Individuals who do not meet these standards could be discriminated against (even more than they are now) and viewed as genetically inferior. There is precedent for this type of risk, since for many years people with dark skin have used cosmetics to lighten their skin tone and chemicals to straighten their hair in order to assimilate into society (Bates 2014). While this is also an important risk to take into account, it is entirely possible that widespread use of GGE would lead to diversity rather than conformity, since different people might value different sorts of physical characteristics.
While exacerbation of discrimination is an important risk to take into account, it is worth noting that many technologies already in use for medical and non-medical purposes can contribute to discrimination and prejudice based on disability, skin color, height, age, or any number of other characteristics. For example, cochlear implants can contribute to discrimination against deaf people, cosmetic products can contribute to discrimination against people based on skin color, high-heeled shoes can contribute to discrimination based on height, and various forms of plastic surgery can contribute to discrimination based on age or appearance. However, the risk of discrimination is not usually a good reason to deny someone a significant benefit. For example, we would not tell a person that they cannot receive a cochlear implant because this could encourage discrimination or prejudice against deaf people.

The third social risk is that widespread use of GGE could lead to eugenics, i.e. the control of human reproduction to decrease the prevalence of undesirable traits and increase the prevalence of desirable ones (Kitcher 1996). The eugenics movement was a political ideology that began in Europe and spread to the US following publication of Charles Darwin's (1809–1882) On the Origin of Species by Means of Natural Selection in 1859 (Darwin 1859; Kevles 1985). Leaders of the movement, such as Francis Galton (1822–1911), argued that we should apply Darwin's theory to social

52 This is one of the themes of the science fiction movie GATTACA.


policy to improve the fitness of the human population. Galton encouraged "superior" individuals to breed and argued that "inferior" individuals should not breed. In the early twentieth century, many nations and states adopted laws that required people who were mentally ill, disabled, or criminally insane to be sterilized (Carlson 2001). In the US, over 100,000 people were sterilized under eugenics laws (Proctor 1988). In Nazi Germany, the eugenics movement began in the 1930s with the sterilization of "undesirable" members of the population, including Jews, mentally or physically disabled people, Gypsies, and mixed-race children. The program soon expanded beyond sterilization and led to the killing of people considered to be inferior. About 250,000 people died in Germany's "euthanasia" program, and over six million people, most of whom were Jewish concentration camp prisoners, died in the racial purification program (Proctor 1988). After World War II, state-sponsored eugenics programs began to wane. By the 1960s, most US states had revoked their eugenics programs (Kevles 1985).

Eugenics, as conducted by states or nations, is clearly a morally abhorrent practice/ideology that should be condemned and avoided at all costs. It is worth noting, however, that state-sponsored eugenics programs were enacted long before the development of genetic engineering. It is not clear, therefore, whether GGE would increase the probability that state-sponsored eugenics would occur again, especially in countries that protect human rights. Eugenics practices related to GGE are most likely to occur in countries with authoritarian regimes and minimal human rights protections (Resnik et al. 1999). Although state-sponsored eugenics may not be a significant risk of GGE in most countries, that does not mean that GGE would not contribute to selective breeding by individuals. Parental eugenics is selective breeding by parents to achieve desired reproductive outcomes (Kitcher 1996; Resnik et al.
1999; Agar 2014). For example, if a man who tests positive for the gene for Huntington's disease decides not to have children, this would be a form of parental eugenics. If a woman picks a mate based on qualities that she would like her children to have, such as above-average height or intelligence, this would also be a form of parental eugenics. Parental eugenics has been practiced for thousands of years and will continue to be practiced, regardless of whether GGE is developed. Most people regard parental eugenics as morally acceptable, and even commendable in some cases, e.g. to prevent passing on genetic diseases to one's children (Kitcher 1996).

The fourth social risk is that widespread use of GGE for enhancement purposes will exacerbate socioeconomic inequalities because only wealthy people will be able to afford it. The wealthy will use GGE to give birth to children who will use their enhanced traits to increase their own wealth and have children with genetic enhancements, and so on. The rich will get richer, and the genetically rich will get genetically richer. Eventually, the human species could become genetically stratified, and something like a genetic caste system, consisting of "normal" humans and enhanced or "trans-humans," could emerge (Kitcher 1996; Resnik et al. 1999; Buchanan et al. 2000; American Association for the Advancement of Science 2000; Fukuyama 2002; Mehlman 2009; Berry 2013; Agar 2014; National Academies of Sciences, Engineering, and Medicine 2017; Baylis 2019).


Almost all discussions of the ethical and policy issues of GGE mention the risk of exacerbating socioeconomic inequalities and argue that we must take steps to avoid this outcome because it would be unjust or unfair (see Buchanan et al. 2000; Mehlman 2009), but very few discussions actually examine the assumptions made by those who consider this to be a plausible risk. Some of these assumptions are as follows:

1. GGE for enhancing human traits will eventually become safe and effective enough that many parents will view it as a reasonable option for ensuring that their children have the best opportunities in life.
2. Genetic enhancements will give children significant advantages that increase their wealth, health, and other socioeconomic goods.
3. Genetically enhanced children will pass on these enhancements to their children.
4. Genetically enhanced children will pursue additional enhancements.
5. GGE for enhancing human traits will be prohibitively expensive for most people.

Most of these assumptions are questionable at present. The first assumption is highly questionable, given the limits of our current biomedical science and biotechnology. GGE for enhancement would probably not be a reasonable option for most parents unless it is highly effective and as safe as other forms of assisted reproduction, such as in vitro fertilization (Resnik et al. 1999). That day may come eventually, but not in the foreseeable future. The second assumption is also questionable because it ignores the impact of the environment on socioeconomic outcomes (Resnik and Vorhaus 2006). People who are born with natural abilities do not always develop them, and people often succeed in life without a great deal of natural ability. As mentioned previously, enhanced children might even rebel against parental expectations and pursue life goals that do not match their natural abilities. The third assumption is plausible if the genetic intervention changes the genome in the intended way. However, it is worth noting that genetically enhanced people might breed with non-enhanced people, which would decrease the probability that their offspring would have the enhanced trait(s). For thousands of years, people have crossed barriers related to race, ethnicity, and socioeconomic class to have sex and produce children. It seems likely that this would also happen with genetic enhancement. The fourth assumption is also questionable because genetically enhanced children might decide they do not want to enhance their own children, or additional enhancements might not be available. The fifth assumption is also questionable because the costs of GGE may decrease as this technology becomes more developed (National Academies of Sciences, Engineering, and Medicine 2017).
Most new technologies are very expensive at first and then become more affordable as a result of improvements in production, economies of scale, and competition that drive down prices. At one time only wealthy people could afford automobiles, televisions, computers, and cellular phones, but now most people in industrialized nations own these technologies. Clearly, some technologies used to perform


GGE will drop in price. The price of genetic testing used in GGE is not likely to be prohibitively expensive for most people. For example, the cost of sequencing a whole human genome has dropped from about $1 billion53 in 2003, when the first human genome was sequenced, to around $200 today (Molteni 2018). Genome sequencing has dropped dramatically in price largely due to advancements in automated sequencing technology. Other technologies used for GGE, however, may not drop a great deal in price because they involve highly skilled human labor. For example, the average cost of one cycle of in vitro fertilization, excluding medications, is about $12,000 (Gurevich 2020). GGE used to prevent genetic diseases may also become affordable if private or government health insurers decide to cover it or are required by law to cover it (National Academies of Sciences, Engineering, and Medicine 2017). In the US, 16 states either require insurance companies to cover infertility treatment (including fertility testing and in vitro fertilization) or to offer coverage (National Conference of State Legislatures 2019). Medicaid, a US federal and state program that covers the costs of health care for disabled or low-income adults and their children, does not cover fertility treatment (Kaiser Family Foundation 2016). Private and government insurers are not likely to cover the costs of GGE unless it is health-related.

Before concluding this discussion of the potential impact of GGE on socioeconomic inequalities, it is worth thinking about whether concern about this risk would be a good reason to deny people significant medical (or other) benefits. Many forms of medical therapy are prohibitively expensive for most people, even when they have health insurance coverage. CAR T-cell therapy for cancer costs as much as $1.5 million per patient (Maziarz 2019).
Actimmune, an immune system boosting drug used to treat chronic granulomatous disease, costs $52,000 for one month; Daraprim, an antiparasitic drug used to treat toxoplasmosis, costs $45,000 for one month (Christensen 2018). Gender reassignment surgery can cost about $100,000 (Whitlock 2019). Should we prohibit these forms of treatment because they are so costly that they could contribute to socioeconomic inequalities? One could argue that this would be a high price to pay for avoiding this social risk.

The fifth social risk is that widespread use of GGE will threaten respect for human dignity (President's Council on Bioethics 2003; Mitchell et al. 2007; Sandel 2009; National Academies of Sciences, Engineering, and Medicine 2017). This is a slightly different version of the human dignity concerns discussed earlier under the "playing God" objection to GMOs. The idea here is not that GGE violates human dignity per se but that it will lead to the attitudes that cause people to violate human dignity (Resnik 2001, 2007). GGE could have this effect by encouraging people to regard the human body as an object (or thing) that can be designed, modified, or manipulated. Violations could include discrimination (discussed earlier) and various forms of harm or exploitation. This risk is difficult to assess because there are already

53 This cost estimate is based on dividing the total cost of the Human Genome Project ($3 billion) by three. The Human Genome Project was a US-funded research project that took place from 1990 to 2003. Although sequencing the human genome was the primary goal of the project, it also included other activities, such as studies of human diseases, model organisms, genetic technologies, computational methods, and ethical issues (Human Genome Project 2020).


many different social practices and activities that objectify the human body, such as beauty pageants, body building contests, cosmetic surgery, pornography, professional athletics, and forms of advertising that prominently display the body. It is difficult to determine whether GGE would have an appreciable effect beyond these existing practices and activities. Moreover, most societies already have laws to protect people from discrimination and various forms of exploitation and harm.

7.17 Germline Genetic Engineering and the Precautionary Principle

As one can see from the preceding discussion, GGE has risks, benefits, and uncertainties (scientific and moral). To apply the PP to GGE policies, we should consider which of the three options—risk avoidance, risk minimization, or risk mitigation (or some combination)—would offer the most reasonable approach to dealing with the risks of GGE. We should also apply the criteria for reasonableness, i.e. proportionality, fairness, consistency, and epistemic responsibility, to these approaches, and consider policies related to the different types of GGE, i.e. GGE to prevent monogenic disorders, GGE to prevent polygenic disorders, and GGE for enhancement of traits.

Many scientists and ethicists have argued that all forms of GGE should be banned, at least temporarily. Several years before the birth of the gene-edited babies in China, when CRISPR emerged as an invaluable tool in genetic engineering, scientists and ethicists had already called for a moratorium on gene-edited children (Baltimore et al. 2015; Park 2019). After the first gene-edited babies were born, a group of prominent scientists and ethicists called for a five-year moratorium on all types of human GGE to give scientists, health care providers, and the public more time to gather information and consider the issues (Lander et al. 2019; Park 2019).54 The International Bioethics Committee of the United Nations Educational, Scientific and Cultural Organization (UNESCO) (2020) and the NIH (Wolinetz and Collins 2019) also supported the moratorium. Not all scientists or ethicists think GGE should be banned, however. The National Academies of Sciences, Engineering, and Medicine (2017) and the Nuffield Council on Bioethics (2016) have both given a tentative endorsement to limited GGE to prevent genetic diseases, provided that rigorous scientific, clinical, and ethical standards are met.
According to the National Academies of Sciences, Engineering, and Medicine (2017), GGE should be approached with caution but not prohibited. The Organizing Committee of the Second International Summit on Human Genome Editing (2018) has stated that GGE at this time would be irresponsible but has not called for a ban.

54 Interestingly, two of the scientists who called for the moratorium, David Baltimore and Paul Berg, participated in the Asilomar conference on recombinant DNA (discussed earlier).


7.18 Germline Genetic Engineering for Preventing Monogenic Disorders

For several decades, many experts have agreed that the strongest case for using GGE is to prevent serious, well-understood monogenic disorders that cannot be reasonably prevented by other means (Resnik et al. 1999; National Academies of Sciences, Engineering, and Medicine 2017; Baylis 2019). According to many experts, the main barriers to performing GGE for monogenic disorders are scientific and technical. Once GGE is safe and effective enough to use for well-understood monogenic disorders, Phase I clinical trials can be planned and initiated (American Association for the Advancement of Science 2000; Nuffield Council on Bioethics 2016; National Academies of Sciences, Engineering, and Medicine 2017). According to the National Academies of Sciences, Engineering, and Medicine (2017), GGE clinical trials to prevent disease may be conducted under the following conditions: the genetic disease or condition is serious; the edited genes cause or predispose people to the disease or condition; there are no reasonable alternatives to genome editing; there is credible safety and efficacy data from animal or human studies;55 there will be ongoing, rigorous oversight of clinical trials to protect participants; there will be long-term, multi-generational follow-up; and oversight mechanisms are in place to prevent use of GGE for purposes other than preventing a serious disease or condition (i.e. enhancement). The National Academies of Sciences, Engineering, and Medicine (2017) asserts that more research is needed before GGE meets the risk/benefit standards for initiating clinical trials.

Should GGE be permitted for serious monogenic disorders? To use the PP to answer this question, we need to consider the benefits and risks to children conceived by means of GGE, their parents, future generations, and society.
We should first consider whether the risks would be proportional to the benefits for the children conceived by this type of GGE, because they will be most directly impacted by the procedure. The answer to this question depends on the extent to which GGE is safe and effective for preventing monogenic disorders. Recent successful applications of CRISPR in monkeys, dogs, rabbits, mice, and pigs indicate that CRISPR gene editing is fast becoming a safe and effective tool for genome editing (Cohen 2019a). However, while CRISPR is safe enough to use in animal genetic engineering, it still has technical limitations and uncertainties and may not be safe enough to use in humans (Liang et al. 2015; National Academies of Sciences, Engineering, and Medicine 2017).

To apply the proportionality criterion to this situation, let's first consider the option of risk avoidance. If we focus on taking reasonable precautions to deal with the risk of the genetic disorder, the disorder can only be avoided by using GGE, since we are assuming that the child has a 100% chance of having this disorder if GGE is not used. However, GGE is not risk-free and may produce unintended, adverse health effects. If we focus on taking reasonable precautions to deal with the risks of GGE, then the

55 These studies could include the creation of human embryos to study the safety and efficacy of GGE methods and techniques (Liang et al. 2015).


Table 7.3 Decision matrix for using GGE to prevent a serious, monogenic disorder

|               | GGE is safe and effective | GGE is safe and ineffective | GGE is unsafe but effective | GGE is unsafe and ineffective |
| Use GGE       | Child is born healthy | Child is born with the serious, monogenic disorder | Child does not have the serious monogenic disorder but has adverse effects produced by GGE | Child is born with the serious, monogenic disorder and has adverse effects produced by GGE |
| Don't use GGE | Child is born with the serious, monogenic disorder | Child is born with the serious, monogenic disorder | Child is born with the serious, monogenic disorder | Child is born with the serious, monogenic disorder |

only way to avoid these risks is to not use GGE. Thus, the proportionality criterion leads to a kind of stalemate, depending on which risks we are most concerned about avoiding.56 The stalemate can be broken, however, if we can compare these risks and decide which risk is worse. We might decide, for example, that the genetic disease is much worse than the possible adverse effects of GGE. We can also break the stalemate if we have evidence pertaining to the likelihood of different outcomes. Although we are assuming that we do not have enough scientific evidence to make probability judgments with an acceptable degree of accuracy and precision to apply EUT to this decision, we may have enough evidence to decide that some outcomes are highly implausible (and thus not worth considering) or that some outcomes are more or less likely than others. For example, we may have enough evidence to decide that the monogenic disorder is likely to be worse than the genetic abnormalities inadvertently produced by GGE. See Table 7.3.

Taking all this into account, one could use the PP to argue that GGE would be a reasonable option for children who would be born with serious, monogenic diseases or conditions that are likely to be much worse than any problems caused by the genome editing. The risks of using GGE could be minimized and mitigated by taking various precautions to protect and promote the health of children conceived by GGE, such as those proposed by the National Academies of Sciences, Engineering, and Medicine (2017) for conducting GGE clinical trials. Of course, that leaves open the question of what counts as a "serious" disease or condition. Is SCA a serious genetic disease or condition? Cystic fibrosis? Deafness? Alopecia areata?57 Perhaps regulatory agencies and IRBs can answer this question on a case-by-case basis when they review GGE proposals.
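The stalemate-breaking reasoning around Table 7.3 can be expressed in decision-theoretic terms. The following is a minimal sketch, not part of the original analysis: the ordinal utilities are hypothetical rankings chosen for illustration (higher is better), encoding the judgment that the monogenic disorder is worse than GGE's adverse effects.

```python
# Sketch of the Table 7.3 decision matrix under ignorance.
# The ordinal utilities are hypothetical (higher = better); they assume
# the disorder is judged worse than GGE's adverse effects.
matrix = {
    "use GGE": {
        "safe and effective": 3,      # child is born healthy
        "safe and ineffective": 1,    # child is born with the disorder
        "unsafe but effective": 2,    # no disorder, but adverse GGE effects
        "unsafe and ineffective": 0,  # disorder plus adverse GGE effects
    },
    "don't use GGE": {
        # Without GGE, the child is born with the disorder in every state.
        state: 1
        for state in ("safe and effective", "safe and ineffective",
                      "unsafe but effective", "unsafe and ineffective")
    },
}

def maximin(m):
    """Choose the act whose worst-case outcome is best."""
    return max(m, key=lambda act: min(m[act].values()))

print(maximin(matrix))  # -> don't use GGE (its worst case, 1, beats 0)

# If evidence lets us dismiss "unsafe and ineffective" as highly
# implausible, the worst cases tie at 1, and maximin alone no longer
# forbids GGE; the comparative ranking of outcomes then carries the
# decision, as described in the text.
pruned = {act: {s: u for s, u in states.items()
                if s != "unsafe and ineffective"}
          for act, states in matrix.items()}
print(min(pruned["use GGE"].values()),
      min(pruned["don't use GGE"].values()))  # -> 1 1
```

With these illustrative rankings, a maximin rule favors not using GGE, but ruling out an implausible worst-case state removes that verdict, mirroring how the text breaks the stalemate by comparing risks and assessing the plausibility of outcomes.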
However, this question should be confronted in a way that balances benefits and risks to the child proportionally, fairly, consistently, and responsibly.

56 This is an example of the problem of incoherence discussed in Chapter 4.

57 Alopecia areata is a condition that leads to hair loss. It is thought to have a genetic basis (McIntosh 2017).


To apply the PP to this issue, we must also consider risks and benefits beyond those that impact the child. The risks to the parents and family members could be significant, especially if they have a child with abnormalities caused by GGE. However, one might argue that as long as the parents understand these risks, they should be free to take them (Robertson 1994). Parents often make difficult health care decisions, such as choices related to a child's cancer treatment, that can affect the entire family. GGE is no different, in principle, from these types of decisions in terms of impacts on the parents and other family members. Risks to future generations, while potentially significant, may be minimized with long-term follow-up of GGE children. When they become old enough to reproduce, these children could be tested for the presence of off-target (or other) mutations and counseled about reproductive risks. Risks to society will not be significant unless GGE is widely used, which is not likely to be the case for preventing monogenic disorders because, as mentioned above, these affect only 2% of the population and can usually be prevented by other means, such as adoption, sperm or egg donation, PIGT, or PNGT. Eugenic uses of GGE are likely to be an insignificant risk in societies that have strong protections for human rights.

Putting these points together, one could make a strong argument, based on the PP, that using GGE to prevent the birth of children with serious, well-understood monogenic disorders when there are no other reasonable alternatives should not be banned but should be tightly regulated and controlled to minimize and mitigate risks (National Academies of Sciences, Engineering, and Medicine 2017). However, since GGE may not be safe enough to attempt in human beings at present, clinical trials should not be initiated until scientists have obtained more evidence concerning safety and efficacy (Liang et al. 2015; De Wert et al. 2018).
Thus, a temporary moratorium (e.g., 5 years) would be a reasonable precautionary measure to give scientists more time to do research related to GGE. The moratorium would also be an opportunity for additional public education and engagement concerning the benefits, risks, and ethical and social implications of GGE (Lander et al. 2019).58 Once the moratorium is lifted, the risks of GGE to prevent monogenic disorders can be managed through existing legal and ethical frameworks. The decision whether to conduct a Phase I clinical trial to use GGE to prevent a serious monogenic disorder would be made by oversight committees (such as IRBs or FDA committees that review applications for new biologics), and parents would decide whether to enroll in it. Oversight committees should follow ethical and legal standards for approving clinical trials with human subjects, such as risk minimization, reasonableness of risks, informed consent, equitable subject selection, privacy/confidentiality protections, and additional protections for vulnerable subjects (in this case, future children) (Resnik 2018a). In deciding whether to enroll in a clinical trial that has been approved by oversight committees, parents would make a decision on behalf of the

58 The moratorium would not apply to GGE for research purposes.


7 Genetic Engineering

future child. The parents should follow the best interest standard (see discussion in Chapter 5) and make a choice that they judge to be in the best interests of their future child. Although oversight committees and parents would be following different rules and procedures, they would both be attempting to decide whether risks are reasonable in relation to benefits. Parents would also need to consider the impact of their decision on the family. Parents might decide not to enroll in the clinical trial and pursue other options for having children, such as adoption or sperm/egg donation, or they might decide not to have children. GGE procedures that have successfully completed all three phases of clinical trials and have been approved for marketing would be available to parents who want to use them to prevent monogenic disorders. To ensure the safety and efficacy of GGE in a clinical setting, GGE providers and clinics could be regulated by the state, much in the way that technologically assisted reproduction is regulated in the UK. Professional boards and associations would also play an important role in establishing guidelines and best practices for GGE.

Before concluding this section, we should also consider how fairness, consistency, and epistemic responsibility apply to using GGE to prevent serious monogenic diseases. Fairness implies that people with a stake in GGE policy, such as individuals with monogenic diseases, parents of children with monogenic diseases, researchers who study monogenic diseases, and health care professionals who take care of people with monogenic diseases or provide assistance in reproduction, should have meaningful input into decisions made by regulatory agencies and oversight committees. Consistency implies that similar cases should be treated in the same way.
For example, if a regulatory agency or oversight committee refuses to approve a clinical trial that it judges to be too risky, it should also not approve a clinical trial that it judges to have the same degree of risk or greater. Epistemic responsibility implies that GGE policy decisions should be informed by the most up-to-date scientific research and should be revised in light of new information.

7.19 Germline Genetic Engineering for Preventing Polygenic Disorders

As noted above, GGE for preventing polygenic disorders would be much more technically challenging and riskier than GGE for monogenic disorders. In applying the PP to this type of GGE, we would need to consider the benefits and risks to children conceived by means of GGE, their parents, future generations, and society. We should first consider whether the risks would be proportional to the benefits for the children conceived by this type of GGE, because they will be most directly impacted by the procedure. Clearly, being born without a polygenic disorder would be an important benefit for children conceived by means of GGE. However, the benefit is probably not currently worth the risk. Given our current level of technology, using GGE to prevent polygenic disorders is likely to have a much lower success rate


than using GGE to prevent monogenic disorders, due to the technical difficulties of altering numerous genes. Also, the risks to the children of using GGE to prevent polygenic disorders are likely to be much greater than the risks of using GGE to prevent monogenic disorders, because increasing the number of genomic alterations increases the chances of off-target effects and other adverse effects. Thus, at this time, using GGE to prevent polygenic disorders would not be a reasonable way of managing the risks of these diseases or conditions. The most reasonable option would be to prevent or treat these diseases or conditions by more conventional means, such as pharmacological, surgical, medical, or psychological therapy, instead of GGE.

We could stop the analysis of using GGE to prevent polygenic disorders at this point, because it would not be reasonable to pursue this option if the risks are not proportional to the benefits for children conceived by means of GGE. However, it is important to address other risks and benefits, since it is possible that at some future time GGE may become safe and effective enough that risks to children would be proportional to the benefits (National Academies of Sciences, Engineering, and Medicine 2017). The risks to parents and family members for using GGE to prevent polygenic diseases would be similar to those for using GGE to prevent monogenic diseases. However, one might argue that as long as the parents understand these risks, they should be free to take them (Robertson 1994), and that GGE is no different, in principle, from other health-related decisions that impact parents and other family members. Risks to future generations could be very significant, given the potentially large number of genetic alterations (perhaps dozens). However, these risks could be minimized with long-term follow-up of GGE children.
When they become old enough to reproduce, children could be tested for the presence of off-target mutations and counseled about reproductive risks. Risks to society could be significant because GGE for preventing polygenic diseases or conditions could be widely used: polygenic diseases or conditions are fairly common, and they usually cannot be prevented by other means, such as adoption, sperm or egg donation, PIGT, or PNGT. Widespread use of GGE to prevent polygenic diseases could potentially increase discrimination and prejudice against disabled people but would probably not exacerbate socioeconomic inequalities, because being born without a polygenic disease or condition is not a significant social or economic advantage. Putting these points together, one could make a strong argument, based on the PP, that using GGE to prevent the birth of children with polygenic diseases is not a reasonable risk to take at this point in time and should be banned for the foreseeable future. The main reason for adopting a ban would be to protect children conceived by GGE from harm, but risks to society also have some impact on this decision.


7.20 Germline Genetic Engineering for Enhancement

I will now consider the most controversial uses of GGE. As noted above, while there is substantial public support for using GGE to prevent genetic diseases or conditions, there is little support for using GGE to enhance human traits, such as intelligence or height (Blendon et al. 2016). Public attitudes toward enhancing the immune system to better fight disease have not been well-studied. In considering GGE for the prevention of genetic diseases, risks and benefits to children conceived by GGE were paramount. Risks and benefits beyond those to these children were also important, but they did not factor significantly into the overall balance of risks and benefits, since it would not be reasonable to use GGE to conceive a child if the medical benefits do not outweigh the risks for that child, nor would it be reasonable to deny a child (or anyone else) an important medical benefit because of concerns about adverse social impacts. We would not deny a patient an expensive type of cancer treatment, for example, because it would be unaffordable for most patients, nor would we deny someone a cochlear implant because it could contribute to discrimination against deaf people. We also would not tell couples with genetic diseases that they should not use reproductive technology to have healthy children because we are concerned about impacts on the human gene pool. While social risks did not have much of an impact on our thinking about benefits and risks related to GGE to prevent genetic diseases, they loom large in assessments of other uses of GGE, because these could radically transform society (Fukuyama 2002; Mehlman 2009; National Academies of Sciences, Engineering, and Medicine 2017; Baylis 2019).
In contemplating these risks, it is important to remember that they are mostly in the realm of science fiction, not science fact, since we are nowhere close to having the scientific and technical capabilities to make some of the enhancements that have been discussed or proposed. Thus, much of the discussion of these risks, and how to deal with them, is based on the highly speculative assumptions that GGE will one day be safe and effective enough to be widely used, and that we will be able to design and create babies like we design and create automobiles or houses. However, as noted above, some of these alterations (such as enhancements of the immune system or growth) may be within our reach. Clearly, widespread use of GGE for human enhancement poses a threat to social, moral, and cultural values. But the question we need to ask is "what would be a reasonable way of dealing with this risk?" One answer would be to permanently ban GGE for purposes other than preventing the birth of children with genetic diseases. One might argue, however, that a ban of this sort would not balance benefits and risks proportionally because it would deny individuals and society important health benefits. As noted above, individuals and society could benefit a great deal from using GGE to immunize children against infectious diseases, such as HIV, malaria, dengue, COVID-19, and influenza. If we would not deny children a vaccine to prevent malaria for social reasons, why would we deny children a genetic enhancement that serves the same purpose?


Perhaps a permanent ban could be limited to the use of GGE for non-health-related purposes. GGE could be used to prevent genetic and non-genetic diseases and to improve human health, but it could not be used for other purposes, such as enhancing human intelligence, height, or strength. While this proposed ban might balance benefits and risks proportionally, it might fail the consistency test, because we already permit many non-genetic enhancements of human performance and function, such as drugs, dietary supplements, surgery, higher education, tutoring, and computers, that have significant social impacts (Parens and Asch 1999). New technologies that can enhance human cognitive performance are currently being developed and implemented. Pharmacological and electronic enhancements of cognitive function could have more of an impact on society than genetic enhancements (Maslen et al. 2014; Dubljević 2019). Consistency would seem to require us to ban all technologies that can significantly enhance human performance or function, but such a ban would deny people important benefits and would be very difficult to enforce.

The preceding discussion supports the view that a permanent ban on GGE for purposes other than preventing genetic diseases would not be a reasonable way of addressing the social risks of GGE. That is not to say, of course, that a temporary ban (or moratorium) would not be justified. As argued above, a temporary ban on all forms of GGE can be justified on the grounds that GGE is not safe enough to attempt in human beings at this time. Moratoria on different types of GGE could be lifted as the technology improves in safety and efficacy. First, the moratorium on using GGE to prevent serious, monogenic disorders would be lifted, then the moratorium on using GGE to prevent polygenic disorders, and finally the moratorium on using GGE for other purposes.
While this proposal sounds reasonable, it still does not address the question of how we should minimize and mitigate the social risks of GGE for purposes other than preventing genetic diseases. To answer this question, we could draw some lessons from other ways that we currently address the adverse social impacts of transformative technologies. A good example of a transformative technology is the internet, which has fundamentally changed communication, work, business, education, entertainment, social relations, transportation, and politics. Although the internet has yielded tremendous social and economic benefits, it has also produced or contributed to significant social harms, such as invasion of privacy, fraud, propaganda, theft, destruction, disinformation, and cyberwarfare. One of the important ways that we minimize and mitigate these social risks is by modifying existing laws or enacting new ones to protect the welfare and rights of individuals and organizations (Spinello 2016). Another way that we minimize and mitigate the social risks of the internet is by taking steps to ensure that it is widely accessible so there will not be a digital divide (Ragnedda and Muschert 2015). We can also minimize and mitigate risks by developing electronic security measures.

If we apply these lessons to GGE, we could adopt laws or policies to minimize or mitigate the social risks of GGE. For example, we could modify existing laws or enact new ones to protect people from some of the possible harms of GGE, such as eugenics, discrimination, or exploitation. We could also modify existing laws to ensure that the benefits of GGE are widely accessible. For example, we could require


health insurers to cover the cost of GGE to prevent genetic diseases (Buchanan et al. 2000; National Academies of Sciences, Engineering, and Medicine 2017). Of course, all of these proposals currently belong to the realm of science fiction, since GGE will not be safe and effective enough to be widely used for decades or more. Even so, it is worth thinking about the steps that could be taken to minimize and mitigate the social risks of GGE.

7.21 Conclusion

Genetic engineering is a paradigmatic case for application of the PP to environmental and public health policy, due to the scientific uncertainty related to the consequences of genetic engineering and the moral uncertainty concerning those consequences. In this chapter I have applied the PP to genetic engineering of microbes, plants, animals, and human beings. I have considered the risks and benefits of these different types of genetic engineering and argued that the PP would advise us to minimize and mitigate most of these risks through regulation and oversight, with the exception of GGE for human beings. The PP would support a temporary moratorium on GGE to produce children to give scientists more time to do research related to the safety and efficacy of GGE.59 The moratorium would also be an opportunity for additional public education and engagement concerning the benefits, risks, and ethical and social implications of GGE. The moratorium could be lifted when there is sufficient evidence to begin GGE clinical trials for the prevention of serious, monogenic disorders. A ban on GGE for polygenic disorders and GGE for purposes other than preventing genetic diseases would remain in effect for the foreseeable future.

References

Agar, N. 2014. Truly Human Enhancement: A Philosophical Defense of Limits. Cambridge, MA: MIT Press.
Alberts, B., A.D. Johnson, J. Lewis, D. Morgan, M. Raff, K. Roberts, and P. Walter. 2015. Molecular Biology of the Cell, 6th ed. New York, NY: W. W. Norton.
American Association for the Advancement of Science. 2000. Human Inheritable Genetic Modifications: Assessing Scientific, Ethical, Religious, and Policy Issues. Washington, DC: American Association for the Advancement of Science.
American Association for the Advancement of Science. 2012. Statement by the AAAS Board of Directors on labeling of genetically modified foods, October 2012. Available at: http://www.aaas.org/sites/default/files/AAAS_GM_statement.pdf. Accessed 18 Jan 2021.

59 The moratorium would not apply to research on embryos created by GGE, which would be necessary to obtain the knowledge needed to better understand the safety and efficacy of using GGE to produce children (Liang et al. 2015; Baltimore et al. 2015).


American College of Obstetricians and Gynecologists. 2019. Prenatal genetic screening tests. Available at: https://www.acog.org/Patients/FAQs/Prenatal-Genetic-Screening-Tests?IsMobileSet=false. Accessed 18 Jan 2021.
Anderson, W.F. 1985. Human Gene Therapy: Scientific and Ethical Considerations. Journal of Medicine and Philosophy 10 (3): 275–291.
Anderson, W.F. 1989. Human Gene Therapy: Why Draw a Line? Journal of Medicine and Philosophy 14 (6): 81–93.
Annas, G.J., L.B. Andrews, and R.M. Isasi. 2002. Protecting the Endangered Human: Toward an International Treaty Prohibiting Cloning and Inheritable Alterations. American Journal of Law and Medicine 28: 151–178.
Araki, A., and T. Ishii. 2016. Providing Appropriate Risk Information on Genome Editing for Patients. Trends in Biotechnology 34 (2): 86–90.
Arms Control Association. 2018. The Biological Weapons Convention (BWC) at a Glance. Available at: https://www.armscontrol.org/factsheets/bwc. Accessed 18 Jan 2021.
Baeshen, N.A., M.N. Baeshen, A. Sheikh, R.S. Bora, M.M. Ahmed, H.A. Ramadan, K.S. Saini, and E.M. Redwan. 2014. Cell Factories for Insulin Production. Microbial Cell Factories 13: 141.
Baltimore, D., P. Berg, M. Botchan, D. Carroll, R.A. Charo, G. Church, J.E. Corn, G.Q. Daley, J.A. Doudna, M. Fenner, H.T. Greely, M. Jinek, G.S. Martin, E. Penhoet, J. Puck, S.H. Sternberg, J.S. Weissman, and K.R. Yamamoto. 2015. A Prudent Path Forward for Genomic Engineering and Germline Gene Modification. Science 348 (6230): 36–38.
Bates, K.G. 2014. A Chosen Exile: Black People Passing in White America. NPR, October 7. Available at: https://www.npr.org/sections/codeswitch/2014/10/07/354310370/a-chosen-exile-black-people-passing-in-white-america. Accessed 18 Jan 2021.
Baylis, F. 2019. Altered Inheritance: CRISPR and the Ethics of Human Genome Editing. Cambridge, MA: Harvard University Press.
BBC News. 2015. Is Opposition to Genetically Modified Food Irrational? BBC News, June 3. Available at: https://www.bbc.com/news/science-environment-32901834. Accessed 18 Jan 2021.
Beauchamp, T.L., and D. DeGrazia. 2020. Principles of Animal Research Ethics. New York, NY: Oxford University Press.
Begley, S. 2018. Out of Prison, the 'Father of Gene Therapy' Faces a Harsh Reality: A Tarnished Legacy and an Ankle Monitor. STAT, July 23. Available at: https://www.statnews.com/2018/07/23/w-french-anderson-father-of-gene-therapy/. Accessed 18 Jan 2021.
Berger, E., and B. Gert. 1991. Genetic Disorders and the Ethical Status of Germ-Line Gene Therapy. Journal of Medicine and Philosophy 16 (6): 667–683.
Beriain, I. 2018. Human Dignity and Gene Editing: Using Human Dignity as an Argument Against Modifying the Human Genome and Germline Is a Logical Fallacy. EMBO Reports 19 (10): e46789.
Berry, R. 2013. The Ethics of Genetic Engineering. New York, NY: Routledge.
Biello, D. 2010. Genetically Modified Crops on the Loose and Evolving in the U.S. Midwest. Scientific American, August 6. Available at: https://www.scientificamerican.com/article/genetically-modified-crop/. Accessed 18 Jan 2021.
Billings, L.K., and J.C. Florez. 2010. The Genetics of Type 2 Diabetes: What Have We Learned from GWAS? Annals of New York Academy of Science 1212: 59–77.
Biofuels International. 2018. GM Yeast Could Fix Food vs. Fuel Debate Around Bioethanol. Biofuels International, April 4. Available at: https://biofuels-news.com/news/gm-yeast-could-fix-food-vs-fuel-debate-around-bioethanol/. Accessed 26 Feb 2020.
Biotechnology Innovation Organization. 2020b. Genetically Engineered Animals: Frequently Asked Questions. Available at: https://archive.bio.org/articles/genetically-engineered-animals-frequently-asked-questions. Accessed 18 Jan 2021.
Blackford, R. 2014. Humanity Enhanced: Genetic Choice and the Challenge for Liberal Democracies. Cambridge, MA: MIT Press.
Blaese, R.M., K.W. Culver, A.D. Miller, C.S. Carter, T. Fleisher, M. Clerici, G. Shearer, L. Chang, Y. Chiang, P. Tolstoshev, J.J. Greenblatt, S.A. Rosenberg, H. Klein, M. Berger, C.A. Mullen, W.J. Ramsey, L. Muul, R.A. Morgan, and W.F. Anderson. 1995. T Lymphocyte-Directed Gene Therapy for ADA-SCID: Initial Trial Results After 4 Years. Science 270 (5235): 475–480.
Blancke, S. 2015. Is Opposition to Genetically Modified Food Irrational? Scientific American, August 18. Available at: https://www.scientificamerican.com/article/why-people-oppose-gmos-even-though-science-says-they-are-safe/. Accessed 18 Jan 2021.
Blendon, R.J., M.T. Gorski, and J.M. Benson. 2016. The Public and the Gene-Editing Revolution. New England Journal of Medicine 374 (15): 1406–1411.
Boone, C.K. 1988. Bad Axioms in Genetic Engineering. Hastings Center Report 18 (4): 9–13.
Bodner, A. 2015. Preventing Escape of GMO Salmon. Biology Fortified, November 20. Available at: https://biofortified.org/2015/11/gmo-salmon/. Accessed 18 Jan 2021.
Boorse, C. 1977. Health as a Theoretical Concept. Philosophy of Science 44: 542–573.
Borges, B.J., O.M. Arantes, A.A. Fernandes, J.R. Broach, and P.M. Fernandes. 2018. Genetically Modified Labeling Policies: Moving Forward or Backward? Frontiers in Bioengineering and Biotechnology 6: 181.
Bostrom, N. 2010. Letter from Utopia (Version 1.9). Studies in Ethics, Law, and Technology 2: 1–7.
Bostrom, N. 2008. Why I Want to Be a Posthuman When I Grow Up. In Medical Enhancement and Posthumanity, ed. B. Gordijn and R. Chadwick, 107–137. Dordrecht, Netherlands: Springer.
Buchanan, A., D.W. Brock, N. Daniels, and D. Wikler. 2000. From Chance to Choice: Genetics and Justice. Cambridge, UK: Cambridge University Press.
Callahan, D. 1995. Setting Limits: Medical Goals in an Aging Society with "A Response to My Critics". Washington, DC: Georgetown University Press.
Campbell, M. 2020a. World's First Genetically Engineered Moth Is Released into an Open Field. Technology Networks, January 29. Available at: https://www.technologynetworks.com/genomics/news/world-first-genetically-engineered-moth-is-released-into-an-open-field-329960. Accessed 18 Jan 2021.
Campbell, M. 2020b. Genetically Engineered Bacteria Protect Honey Bees Against Parasites. Technology Networks, February 24. Available at: https://www.technologynetworks.com/genomics/news/genetically-engineered-bacteria-protect-honey-bees-against-parasites-331209. Accessed 18 Jan 2021.
Caplan, A. 1995. Moral Matters. New York, NY: Wiley.
Caplan, A. 1997. The Concepts of Health, Illness, and Disease. In Medical Ethics, 2nd ed, ed. R. Veatch, 57–74. Sudbury, MA: Jones and Bartlett.
Carlson, E.A. 2001. The Unfit: A History of a Bad Idea. Cold Spring Harbor, NY: Cold Spring Harbor Press.
Centers for Disease Control and Prevention. 2019. Heart Disease Facts. Available at: https://www.cdc.gov/heartdisease/facts.htm. Accessed 18 Jan 2021.
Centers for Disease Control and Prevention and National Institutes of Health. 2009. Biosafety in Microbiological and Biomedical Laboratories, 5th ed. Available at: https://www.cdc.gov/labs/pdf/CDC-BiosafetyMicrobiologicalBiomedicalLaboratories-2009-P.PDF. Accessed 18 Jan 2021.
Christensen, J. 2018. The Five Most Expensive Drugs in the United States. CNN, May 11. Available at: https://www.cnn.com/2018/05/11/health/most-expensive-prescription-drugs/index.html. Accessed 18 Jan 2021.
Cilluffo, A., and N.G. Ruiz. 2019. World's Population Is Projected to Nearly Stop Growing by the End of the Century. Pew Research Center, June 17. Available at: https://www.pewresearch.org/fact-tank/2019/06/17/worlds-population-is-projected-to-nearly-stop-growing-by-the-end-of-the-century/. Accessed 18 Jan 2021.
Coelho, A.C., and J.D. García. 2015. Biological Risks and Laboratory-Acquired Infections: A Reality That Cannot Be Ignored in Health Biotechnology. Frontiers in Bioengineering and Biotechnology 3: 56.
Cohen, J. 2019a. China's CRISPR Push in Animals Promises Better Meat, Novel Therapies, and Pig Organs for People. Science, July 31. Available at: https://www.sciencemag.org/news/2019/07/china-s-crispr-push-animals-promises-better-meat-novel-therapies-and-pig-organs-people. Accessed 18 Jan 2021.


Cohen, J. 2019b. Deaf Couple May Edit Embryo's DNA to Correct Hearing Mutation. Science, October 21. Available at: https://www.sciencemag.org/news/2019/10/deaf-couple-may-edit-embryo-s-dna-correct-hearing-mutation. Accessed 18 Jan 2021.
Cole-Turner, R. 1997. Genes, Religion and Society: The Developing Views of the Churches. Science and Engineering Ethics 3: 273–288.
Collins, M., and A. Thrasher. 2015. Gene Therapy: Progress and Predictions. Proceedings of Biological Sciences 282: 1821.
Conrow, J. 2018. Developing Nations Lead the Growth of GMO Crops. Alliance for Science, June 29. Available at: https://allianceforscience.cornell.edu/blog/2018/06/developing-nations-lead-growth-gmo-crops/. Accessed 18 Jan 2021.
Convention on Biological Diversity. 2020. Available at: https://www.cbd.int/. Accessed 18 Jan 2021.
Cornish, L. 2018. Understanding the Continued Opposition to GMOs. Devex, January 22. Available at: https://www.devex.com/news/understanding-the-continued-opposition-to-gmos-91888. Accessed 18 Jan 2021.
Cossins, D. 2015. Will We Ever See GM Meat? BBC Future, March 9. Available at: https://www.bbc.com/future/article/20150309-will-we-ever-eat-gm-meat. Accessed 18 Jan 2021.
Costa, J.R., B.E. Bejcek, J.E. McGee, A.I. Fogel, K.R. Brimacombe, and R. Ketteler. 2017. Genome Editing Using Engineered Nucleases and Their Use in Genomic Screening. In Assay Guidance Manual, ed. S. Sittampalam et al. Bethesda, MD: Eli Lilly and Company and the National Center for Advancing Translational Sciences. Available at: https://www.ncbi.nlm.nih.gov/books/NBK464635/. Accessed 18 Jan 2021.
Cummings, J.P. 2018. The Lifetime Economic Burden of Monogenic Diseases and the Social Motivations for Their Treatment with Genetic Therapy. Thesis. Rochester Institute of Technology. Available at: https://scholarworks.rit.edu/cgi/viewcontent.cgi?article=10984&context=theses. Accessed 18 Jan 2021.
Cyranoski, D. 2020. What CRISPR-Baby Prison Sentences Mean for Research. Nature 577: 154–155.
Daniell, H. 2002. Molecular Strategies for Gene Containment in Transgenic Crops. Nature Biotechnology 20 (6): 581–586.
Darwin, C. 1859. The Origin of Species by Means of Natural Selection. London, UK: John Murray.
Davidson, D. 2001. Inquiries into Truth and Interpretation, 2nd ed. Oxford, UK: Clarendon Press.
Davis, D.S. 2001. Genetic Dilemmas: Reproductive Technology, Parental Choices, and Children's Futures. New York, NY: Routledge.
De Wert, G., B. Heindryckx, G. Pennings, A. Clarke, U. Eichenlaub-Ritter, C.G. van El, F. Forzano, M. Goddijn, H.C. Howard, D. Radojkovic, E. Rial-Sebbag, W. Dondorp, B.C. Tarlatzis, M.C. Cornel, and European Society of Human Genetics and the European Society of Human Reproduction and Embryology. 2018. Responsible Innovation in Human Germline Gene Editing: Background Document to the Recommendations of ESHG and ESHRE. European Journal of Human Genetics 26 (4): 450–470.
Domingo, J.L. 2016. Safety Assessment of GM Plants: An Updated Review of the Scientific Literature. Food and Chemical Toxicology 95: 12–18.
Doyle, A., M.P. McGarry, N.A. Lee, and J.J. Lee. 2012. The Construction of Transgenic and Gene Knockout/Knockin Mouse Models of Human Disease. Transgenic Research 21 (2): 327–349.
Duan, J.J., M. Marvier, J. Huesing, G. Dively, and Z.Y. Huang. 2008. A Meta-Analysis of Effects of Bt Crops on Honey Bees (Hymenoptera: Apidae). PLoS One 3 (1): e1415.
Dubljević, V. 2019. Neuroethics, Justice and Autonomy: Public Reason in the Cognitive Enhancement Debate. Cham, Switzerland: Springer.
Dunn, S.E., J.L. Vicini, K.C. Glenn, D.M. Fleischer, and M.J. Greenhawt. 2017. The Allergenicity of Genetically Modified Foods from Genetically Engineered Crops: A Narrative and Systematic Review. Annals of Allergy, Asthma and Immunology 119 (3): 214–222.
Environmental Protection Agency. 2020b. EPA's Regulation of Biotechnology for Use in Pest Management. Available at: https://www.epa.gov/regulation-biotechnology-under-tsca-and-fifra/epas-regulation-biotechnology-use-pest-management. Accessed 18 Jan 2021.


European Commission. 2020. GMO Legislation. Available at: https://ec.europa.eu/food/plant/gmo/legislation_en. Accessed 18 Jan 2021.
Ezezika, O.C., and P.A. Singer. 2010. Genetically Engineered Oil-Eating Microbes for Bioremediation: Prospects and Regulatory Challenges. Technology in Society 32 (4): 331–335.
Fagan, J., M. Antoniou, and C. Robinson. 2014. GMO Myths and Truths, 2nd ed. London, UK: Earth Open Source.
Fernandez-Cornejo, J., S. Wechsler, M. Livingston, and L. Mitchell. 2014. Genetically Engineered Crops in the United States. U.S. Department of Agriculture, Economic Research Report 162, February. Available at: https://www.ers.usda.gov/webdocs/publications/45179/43668_err162.pdf. Accessed 18 Jan 2021.
Food and Drug Administration. 2020a. Animals with Intentional Genomic Alterations: Consumer Q & A. Available at: https://www.fda.gov/animal-veterinary/animals-intentional-genomic-alterations/consumer-qa. Accessed 19 Jan 2021.
Food and Drug Administration. 2020b. Oxitec Mosquito. Available at: https://www.fda.gov/animal-veterinary/animals-intentional-genomic-alterations/oxitec-mosquito. Accessed 19 Jan 2021.
Food and Drug Administration. 2020c. Therapeutic Cloning and Genome Modification. Available at: https://www.fda.gov/vaccines-blood-biologics/cellular-gene-therapy-products/therapeutic-cloning-and-genome-modification. Accessed 19 Jan 2021.
Food and Drug Administration. 2020d. What Is the Approval Process for Generic Drugs? Available at: https://www.fda.gov/drugs/generic-drugs/what-approval-process-generic-drugs. Accessed 19 Jan 2021.
Forabosco, F., M. Löhmus, L. Rydhmer, and L.F. Sundström. 2013. Genetically Modified Farm Animals and Fish in Agriculture: A Review. Livestock Science 153 (1–3): 1–9.
Frank, S.A. 2014. Somatic Mosaicism and Disease. Current Biology 24 (2): R577–R581.
Fukuyama, F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Picador.
Funk, C., and M. Hefferon. 2018. Public Views of Gene Editing for Babies Depend on How It Would Be Used. Pew Research Center, July 26. Available at: https://www.pewresearch.org/science/2018/07/26/public-views-of-gene-editing-for-babies-depend-on-how-it-would-be-used/. Accessed 19 Jan 2021.
Gallo, A.M., D. Wilkie, M. Suarez, R. Labotka, R. Molokie, A. Thompson, P. Hershberger, and B. Johnson. 2010. Reproductive Decisions in People with Sickle Cell Disease or Sickle Cell Trait. Western Journal of Nursing Research 32 (8): 1073–1090.
Geib, C. 2018. Changing Regulations Mean Genetically Modified Meat Could Soon Be on Your Plate. Futurism, March 14. Available at: https://futurism.com/genetically-modified-meat-fda-usda. Accessed 19 Jan 2021.
Genetic Literacy Project. 2018. New Generation of GMO Crops Could Dramatically Boost Biofuel Production. Available at: https://geneticliteracyproject.org/2018/01/15/new-generation-gmo-crops-dramatically-boost-biofuel-production/. Accessed 19 Jan 2021.
Genetic Literacy Project. 2020. GMO FAQs. Available at: https://gmo.geneticliteracyproject.org/FAQ/where-are-gmos-grown-and-banned/. Accessed 19 Jan 2021.
GenScript. 2020. What Are Monoclonal Antibodies? Available at: https://www.genscript.com/how-to-make-monoclonal-antibodies.html. Accessed 19 Jan 2021.
GM Watch. 2019. International Scientists Urge Precaution with Gene Drives: New Study. GM Watch, May 21. https://www.gmwatch.org/en/news/latest-news/18951-international-scientists-urge-precaution-with-gene-drives-new-study. Accessed 19 Jan 2021.
GMO Answers. 2020a. What GMO Crops Are Currently Available on the Market? Available at: https://gmoanswers.com/current-gmo-crops?gclid=CjwKCAiAhc7yBRAdEiwAplGxX265z5GBxlV4Y4pqKVOfiooF2qfFs91eOW8InUo3yuJGH_B39BkoDxoCY2gQAvD_BwE. Accessed 19 Jan 2021.
GMO Answers. 2020b. Nine Things You Need to Know About GMO Salmon. Available at: https://gmoanswers.com/nine-9-things-you-need-know-about-gmo-salmon. Accessed January.

References


Gonzaludo, N., J.W. Belmont, V.G. Gainullin, and R.J. Taft. 2019. Estimating the Burden and Economic Impact of Pediatric Genetic Disease. Genetics in Medicine 21: 1781–1789. Green, R.M. 2001. The Human Embryo Research Debates: Bioethics in the Vortex of Controversy. New York, NY: Oxford University Press. Guillemaud, T., E. Lombaert, and D. Bourguet. 2016. Conflicts of Interest in GM Bt Crop Efficacy and Durability Studies. PLoS One 11 (12): e0167777. Gurevich, R. 2020. How Much Does IVF Really Cost? Very Well Family, March 5. Available at: https://www.verywellfamily.com/how-much-does-ivf-cost-1960212. Accessed 19 Jan 2021. Harmon, A. 2016. Fighting Lyme Disease in the Genes of Nantucket’s Mice. New York Times, June 7, A15. Harris, J. 1992. Wonderwoman and Superman: The Ethics of Human Biotechnology. Oxford, UK: Oxford University Press. Harris, J. 2007. Enhancing Evolution: The Ethical Case for Making Better People. Princeton, NJ: Princeton University Press. He, K., L.R. Wilkens, D.O. Stram, L.N. Kolonel, B.E. Henderson, A.H. Wu, L. Le Marchand, and C.A. Haiman. 2011. Generalizability and Epidemiologic Characterization of Eleven Colorectal Cancer GWAS Hits in Multiple Populations. Cancer Epidemiology and Biomarkers and Prevention 20 (1): 70–81. Henderson, G.E., M.M. Easter, C. Zimmer, N.M. King, A.M. Davis, B.B. Rothschild, L.R. Churchill, B. Wilfond, and D.K. Nelson. 2006. Therapeutic Misconception in Early Phase Gene Transfer Trials. Social Science and Medicine 62 (1): 239–253. Henkel, R.D., T. Miller, and R.S. Weyant. 2012. Monitoring Select Agent Theft, Loss and Release Reports in the United States—2004–2010. Applied Biosafety 18: 171–180. Hjältén, J., and E.P. Axelsson. 2015. GM Trees with Increased Resistance to Herbivores: Trait Efficiency and Their Potential to Promote Tree Growth. Frontiers in Plant Science, May 1. Available at: https://doi.org/10.3389/fpls.2015.00279. Accessed 19 Jan 2021. Holdrege, C. 2008. 
Understanding the Unintended Effects of Genetic Manipulation. The Nature Institute. Available at: https://natureinstitute.org/txt/ch/nontarget.php. Accessed 19 Jan 2021. Horgan, J. 2017. Has the Era of Gene Therapy Finally Arrived? Scientific American, September 1. Available at: https://blogs.scientificamerican.com/cross-check/has-the-era-of-gene-therapy-finally-arrived/. Accessed 19 Jan 2021. Hou, Z., and Z. Zhang. 2019. Inserting DNA with CRISPR. Science 365 (6448): 25–26. House, K. 2019. China Quietly Confirms Birth of Third Gene-Edited Baby. Futurism, December 30. Available at: https://futurism.com/neoscope/china-confirms-birth-third-gene-edited-baby. Accessed 19 Jan 2021. Hryhorowicz, M., J. Zeyland, R. Słomski, and D. Lipiński. 2017. Genetically Modified Pigs as Organ Donors for Xenotransplantation. Molecular Biotechnology 59 (9–10): 435–444. Hübner, D. 2018. Human-Animal Chimeras and Hybrids: An Ethical Paradox Behind Moral Confusion? The Journal of Medicine and Philosophy 43 (2): 187–210. Human Fertilisation and Embryology Authority. 2020. About Us. Available at: https://www.hfea.gov.uk/about-us/. Accessed 19 Jan 2021. Human Genome Project. 2020. Human Genome Project Budget. Available at: https://web.ornl.gov/sci/techresources/Human_Genome/project/budget.shtml. Accessed 19 Jan 2021. International Service for the Acquisition of Agri-biotech Applications. 2018. GM Crops and the Environment. Available at: https://www.isaaa.org/resources/publications/pocketk/4/default.asp. Accessed 19 Jan 2021. Johnston, T. 2005. In One's Own Image: Ethics and the Reproduction of Deafness. Journal of Deaf Studies and Deaf Education 10 (4): 426–441. Juengst, E. 1997. Can Enhancement Be Distinguished from Prevention in Genetic Medicine? Journal of Medicine and Philosophy 22 (2): 125–142. Justlabelit.org. 2020. Labelling Around the World. Available at: http://www.justlabelit.org/right-to-know-center/labeling-around-the-world/. Accessed 19 Jan 2021.


7 Genetic Engineering

Kaebnick, G.E., E. Heitman, J.P. Collins, J.A. Delborne, W.G. Landis, K. Sawyer, L.A. Taneyhill, and D.E. Winickoff. 2016. Precaution and Governance of Emerging Technologies. Science 354 (6313): 710–711. Kaemmerer, W.F. 2018. How Will the Field of Gene Therapy Survive Its Success? Bioengineering and Translational Medicine 3 (2): 166–177. Kaiser Family Foundation. 2016. Medicaid Coverage of Family Planning Benefits: Results from a State Survey. Available at: https://www.kff.org/report-section/medicaid-coverage-of-family-pla nning-benefits-results-from-a-state-survey-fertility-services/. Accessed 19 Jan 2021. Kelle, A. 2013. Beyond Patchwork Precaution in the Dual-Use Governance of Synthetic Biology. Science and Engineering Ethics 19 (3): 1121–1139. Kevles, D.J. 1985. In the Name of Eugenics: Genetics and the Uses of Human Heredity. Cambridge, MA: Harvard University Press. Kids Health. 2018. Osteogenesis Imperfecta (Brittle Bone Disease). Available at: https://kidshealth. org/en/parents/osteogenesis-imperfecta.html. Accessed 19 Jan 2021. Kimman, T.G., E. Smit, and M.R. Klein. 2008. Evidence-Based Biosafety: A Review of the Principles and Effectiveness of Microbiological Containment Measures. Clinical Microbiology Reviews 21 (3): 403–425. Kimmelman, J. 2010. Gene Transfer and the Ethics of First-in-Human Research: Lost in Translation. Cambridge, UK: Cambridge University Press. Kitcher, P. 1996. The Lives to Come: the Genetic Revolution and Human Possibilities. New York, NY: Simon and Schuster. Koch, T. 2020. Transhumanism, Moral Perfection, and Those 76 Trombones. Journal of Medicine and Philosophy 45 (2): 179–192. Koplin, J.J., C. Gyngell, and J. Savulescu. 2020. Germline Gene Editing and the Precautionary Principle. Bioethics 34 (1): 49–59. Koplin, J.J., and D. Wilkinson. 2019. Moral Uncertainty and the Farming of Human-Pig Chimeras. Journal of Medical Ethics 45 (7): 440–446. Kriebel, D., J. Tickner, P. Epstein, J. Lemons, R. Levins, E.L. Loechler, M. Quinn, R. 
Rudel, T. Schettler, and M. Stoto. 2001. The Precautionary Principle in Environmental Science. Environmental Health Perspectives 109 (9): 871–876. Kumar, P., J. Radhakrishnan, M.A. Chowdhary, and P.F. Giampietro. 2001. Prevalence and Patterns of Presentation of Genetic Disorders in a Pediatric Emergency Department. Mayo Clinic Proceedings 76 (8): 777–783. Kumar, S.R.P., D.M. Markusic, M. Biswas, K.A. High, and R.W. Herzog. 2016. Clinical Development of Gene Therapy: Results and Lessons from Recent Successes. Molecular Therapy— Methods and Clinical Development 3: 16034. Kuzma, J. 2016. A Missed Opportunity for U.S. Biotechnology Regulation. Science 353 (6305): 1211–1213. Lander, E.S., F. Baylis, F. Zhang, E. Charpentier, P. Berg, C. Bourgain, B. Friedrich, J.K. Joung, J. Li, D. Liu, L. Naldini, J.B. Nie, R. Qiu, B. Schoene-Seifert, F. Shao, S. Terry, W. Wei, and E.L. Winnacker. 2019. Adopt a Moratorium on Heritable Genome Editing. Nature 567 (7747): 165–168. Lanphier, E., F. Urnov, S.E. Haecker, M. Werner, and J. Smolenski. 2015. Don’t Edit the Human Germ Line. Nature 519: 410–411. Ledford, H., and E. Callaway. 2020. Pioneers of CRISPR Gene Editing Win Nobel in Chemistry. Nature 586: 346–347. Lee, B. 2018. What Are Biologics? 5 Examples of Biological Drugs You May Already Be Taking. Good RX, June 13. Available at: https://www.goodrx.com/blog/biologics-biological-drugs-exa mples/. Accessed 19 Jan 2021. Le Page, M. 2020. Human Genes Have Been Added to Pigs to Create Skin for Transplants. New Scientist, January 29. Available at: https://www.newscientist.com/article/2231579-humangenes-have-been-added-to-pigs-to-create-skin-for-transplants/#ixzz6GPggXYEP. Accessed 19 Jan 2021.


Liang, P., Y. Xu, X. Zhang, C. Ding, R. Huang, Z. Zhang, J. Lv, X. Xie, Y. Chen, Y. Li, Y. Sun, Y. Bai, Z. Songyang, W. Ma, C. Zhou, and J. Huang. 2015. CRISPR/Cas9-Mediated Gene Editing in Human Tripronuclear Zygotes. Protein and Cell 6 (5): 363–372. Losey, J.E., L.S. Rayor, and M.E. Carter. 1999. Transgenic Pollen Harms Monarch Larvae. Nature 399: 214. Lucht, J.M. 2015. Public Acceptance of Plant Biotechnology and GM Crops. Viruses 7 (8): 4254– 4281. Maddox, B. 2003. Rosalind Franklin: The Dark Lady of DNA. New York, NY: HarperCollins. Main, D. 2017. USDA Agrees to Not Regulate Genetically Modified GRASS on the Loose in Oregon. Newsweek, January 31. Available at: https://www.newsweek.com/usda-agrees-not-reg ulate-gmo-grass-loose-oregon-550942. Accessed 19 Jan 2021. Mamcarz, E., S. Zhou, T. Lockey, H. Abdelsamed, S.J. Cross, G. Kang, Z. Ma, J. Condori, J. Dowdy, B. Triplett, C. Li, G. Maron, J.C. Aldave Becerra, J.A. Church, E. Dokmeci, J.T. Love, A.C. da Matta Ain, H. van der Watt, X. Tang, W. Janssen, B.Y. Ryu, S.S. De Ravin, M.J. Weiss, B. Youngblood, J.R. Long-Boyle, S. Gottschalk, M.M. Meagher, H.L. Malech, J.M. Puck, M.J. Cowan, and B.P. Sorrentino. 2019. Lentiviral Gene Therapy Combined with Low-Dose Busulfan in Infants with SCID-X1. New England Journal of Medicine 380 (16): 1525–1534. Marshall, D.A., E.I. Benchimol, A. MacKenzie, D.D. Duque, K.V. MacDonald, T. Hartley, H. Howley, A. Hamilton, M. Gillespie, F. Malam, and K. Boycott. 2019. Direct Health-Care Costs for Children Diagnosed with Genetic Diseases Are Significantly Higher Than for Children with Other Chronic Diseases. Genetics in Medicine 21: 1049–1057. Maslen, H., N. Faulmüller, and J. Savulescu. 2014. Pharmacological Cognitive Enhancement-How Neuroscientific Research Could Advance Ethical Debate. Frontiers in Systems Neuroscience 8: 107. Maziarz, R.T. 2019. CAR T-Cell Therapy Total Cost Can Exceed $1.5 Million Per Treatment. Healio, May 29. 
Available at: https://www.healio.com/hematology-oncology/cell-therapy/news/ online/%7B124396e7-1b60-4cff-a404-0a2baeaf1413%7D/car-t-cell-therapy-total-cost-can-exc eed-15-million-per-treatment. Accessed 19 Jan 2021. McDivitt, P. 2019. Golden Rice: The GMO Crop Loved by Humanitarians, Opposed by Greenpeace. Genetic Literacy Project, November 8. Available at: https://geneticliteracyproject.org/2019/ 11/08/golden-rice-the-gmo-crop-loved-by-humanitarians-opposed-by-greenpeace/. Accessed 19 Jan 2021. McDonald, J. 2007. Could Genetically Modified Crops Be Killing Honeybees? SF Gate, March 10. Available at: https://www.sfgate.com/homeandgarden/article/Could-genetically-modified-cropsbe-killing-bees-2611496.php. Accessed 19 Jan 2021. McGee, G. 2000. The Perfect Baby: Parenthood in the New World of Cloning and Genetics, 2nd ed. Lanham, MD: Rowman and Littlefield. McIntosh, J. 2017. What’s to Know About Alopecia Areata? Medical News Today, December 22. Available at: https://www.medicalnewstoday.com/articles/70956#home-remedies. Accessed 19 Jan 2021. Meeme, V. 2019. Kenya Reconsidering GMO Crop Ban for Food Security. Alliance for Science, April 30. Available at: https://allianceforscience.cornell.edu/blog/2019/04/kenya-reconsideringgmo-crop-ban-support-food-security/. Accessed 19 Jan 2021. Mehlman, M.J. 2009. The Price of Perfection: Individualism and Society in the Era of Biomedical Enhancement. Baltimore, MD: Johns Hopkins University Press. Merler, S., M. Ajelli, L. Fumanelli, and A. Vespignani. 2013. Containing the Accidental Laboratory Escape of Potential Pandemic Influenza Viruses. BMC Medicine 11: 252. Messer, K.D., S. Bligh, M. Costanigro, and H.M. Kaiser. 2015. Process Labeling of Food: Consumer Behavior, the Agricultural Sector, and Policy Recommendations. Council for Agricultural Science and Technology 10: 1–16. Miller, F.G., and S. Joffe. 2009. Benefit in Phase 1 Oncology Trials: Therapeutic Misconception or Reasonable Treatment Option? 
Clinical Trials 5 (6): 617–623.


Miliotou, A.N., and L.C. Papadopoulou. 2018. CAR T-Cell Therapy: A New Era in Cancer Immunotherapy. Current Pharmaceutical Biotechnology 19 (1): 5–18. Mitchell, C.B., E.D. Pellegrino, J.B. Elshtain, J.F. Kilner, and S.B. Rae. 2007. Biotechnology and the Human Good. Washington, DC: Georgetown University Press. Molteni, M. 2018. Now You Can Sequence Your Whole Genome for Just $200. Wired, November 11. Available at: https://www.wired.com/story/whole-genome-sequencing-cost-200dollars/. Accessed 19 Jan 2021. More, M., and N. Vita-More (eds.). 2013. The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. New York, NY: Wiley-Blackwell. Moritz, R. 2020. Community Engagement on Pathogen Research. Presentation to the National Science Advisory Board for Biosecurity, January 24. Bethesda, MD. Murphy, D. 2020. Concepts of Health and Disease. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/health-disease/. Accessed 19 Jan 2021. National Academies of Sciences, Engineering, and Medicine. 2016a. Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. Washington, DC: National Academies Press. National Academies of Sciences, Engineering, and Medicine. 2016b. Genetically Engineered Crops: Experiences and Prospects. Washington, DC: National Academies Press. National Academies of Sciences, Engineering, and Medicine. 2017. Human Genome Editing: Science, Ethics, and Governance. Washington, DC: National Academies Press. National Conference of State Legislatures. 2019. State Laws Related to Insurance Coverage for Infertility Treatment. Available at: https://www.ncsl.org/research/health/insurance-coverage-for-infertility-laws.aspx. Accessed 19 Jan 2021. National Heart, Lung, and Blood Institute. 2020. Sickle Cell Disease. Available at: https://www.nhlbi.nih.gov/health-topics/sickle-cell-disease. Accessed 19 Jan 2021.
National Human Genome Research Institute. 2017. How Does Genome Editing Work? Available at: https://www.genome.gov/about-genomics/policy-issues/Genome-Editing/How-genomeediting-works. Accessed 19 Jan 2021. National Institutes of Health. 2020a. Stem Cell Information. Available at: https://stemcells.nih.gov/. Accessed 19 Jan 2021. National Research Council. 2004. Biotechnology in the Age of Terrorism. Washington, DC: National Academies Press. National Research Council. 2011. Guide for the Care and Use of Laboratory Animals, 8th ed. Washington, DC: National Academies Press. Neuhaus, C.P. 2018. Community Engagement and Field Trials of Genetically Modified Insects and Animals. Hastings Center Report 48 (1): 25–36. Nobel Prize.org. 2021. The Nobel Prize in Chemistry 1980. Available at: https://www.nobelprize. org/prizes/chemistry/1980/berg/lecture/. Accessed 10 Jan 2021. Nobel Prize Winners. 2016. Letter to Greenpeace, June 26. Available at: https://www.supportpreci sionagriculture.org/nobel-laureate-gmo-letter_rjr.html. Accessed 19 Jan 2021. Nogrady, B. 2020. What the Data Say About Asymptomatic COVID Infections. Nature 587: 534– 535. Norero, D. 2016. Genetically Modified Crops and the Exaggeration of “Interest Conflict.” Cornell Alliance for Science, November 3. Available at: https://allianceforscience.cornell.edu/blog/2016/ 11/genetically-modified-crops-and-the-exaggeration-of-interest-conflict/. Accessed 19 Jan 2021. Normile, D. 2004. Infectious Diseases: Mounting Lab Accidents Raise SARS Fears. Science 304: 659–661. Normile, D. 2018. Shock Greets Claim of CRISPR-Edited Babies. Science 362 (6418): 978–979. Normile, D. 2019. China Tightens Rules on Gene Editing. Science 363 (6431): 1023. Nozick, R. 1974. Anarchy, State, Utopia. New York, NY: Basic Books.


Nuffield Council on Bioethics. 2016. Genome Editing: An Ethical Review. Available at: https://www.nuffieldbioethics.org/publications/genome-editing-an-ethical-review. Accessed 13 Mar 2020. Organizing Committee of the Second International Summit on Human Genome Editing. 2018. Concluding Statement. Available at: http://www8.nationalacademies.org/onpinews/newsitem. aspx?RecordID=11282018b. Accessed 19 Jan 2021. Ormandy, E.H., J. Dale, and G. Griffin. 2011. Genetic Engineering of Animals: Ethical Issues, Including Welfare Concerns. The Canadian Veterinary Journal 52 (5): 544–550. Parens, E. (ed.). 1998. Enhancing Human Traits: Ethical and Social Implications. Washington, DC: Georgetown University Press. Parens, E., and A. Asch. 1999. The Disability Rights Critique of Prenatal Genetic Testing: Reflections and Recommendations. Hastings Center Report 29 (5): S1–22. Park, A. 2019. Experts Are Calling for a Ban on Gene Editing of Human Embryos. Time Magazine, March 13. Available at: https://time.com/5550654/crispr-gene-editing-human-embryosban/. Accessed 19 Jan 2021. Pew Research Center. 2016. Public Opinion About Genetically Modified Foods and Trust in Scientists Connected with These Foods. Pew Research Center, December 1. Available at: https://www.pewresearch.org/science/2016/12/01/public-opinion-about-genetically-mod ified-foods-and-trust-in-scientists-connected-with-these-foods/. Accessed 19 Jan 2021. Poppy, G. 2000. GM Crops: Environmental Risks and Non-target Effects. Trends in Plant Science 5 (1): 4–6. Porter, A. 2017. Bioethics and Transhumanism. Journal of Medicine and Philosophy 42 (3): 237– 260. Porterfield, A., and J. Entine. 2018. ‘Substantial Equivalence’: Are GMOs as Safe as Other Conventional and Organic Foods? Genetic Literacy Project, May 11. Available at: https://geneticliteracyproject.org/2018/05/11/substantial-equivalence-are-gmos-as-safeas-other-conventional-organic-foods/. Accessed 19 Jan 2021. 
President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. 1982. Washington, DC: President’s Commission. President’s Council on Bioethics. 2002. Human Cloning and Human Dignity: An Ethical Inquiry. Washington, DC: President’s Council on Bioethics. President’s Council on Bioethics. 2003. Beyond Therapy: Biotechnology and the Pursuit of Happiness. New York, NY: Harper Perennial. Proctor, R. 1988. Racial Hygiene: Medicine Under the Nazis. Cambridge, MA: Harvard University Press. Public Health Emergency. 2015. Biosafety Levels. Available at: https://www.phe.gov/s3/BioriskMa nagement/biosafety/Pages/Biosafety-Levels.aspx. Accessed 19 Jan 2021. Ragnedda, M., and G.W. Muschert (eds.). 2015. The Digital Divide. New York, NY: Routledge. Rana, F.R., and K.R.Samples. 2019. Humans 2.0: Scientific, Philosophical, and Theological Perspectives on Transhumanism. Covina, CA: Reasons to Believe. Rasko, J.E., G.M. O’Sullivan, and R.A. Ankeny (eds.). 2006. The Ethics of Inheritable Genetic Modification: a Dividing Line? Cambridge, UK: Cambridge University Press. Rawls, J. 2005. Political Liberalism, 2nd ed. New York: Columbia University Press. Regan, T. 1983. The Case for Animal Rights. Berkeley, CA: University of California Press. Reiss, M.J., and R. Straughan. 1996. Improving Nature? The Science and Ethics of Genetic Engineering. Cambridge, UK: Cambridge University Press. Resnik, D.B. 1993. Debunking the Slippery Slope Argument Against Human Germ-Line Gene Therapy. Journal of Medicine and Philosophy 19 (1): 23–40. Resnik, D.B. 2000a. The Moral Significance of the Therapy/Enhancement Distinction in Human Genetics. Cambridge Quarterly of Healthcare Ethics 9 (3): 365–377. Resnik, D.B. 2000b. Of Maize and Men: Reproductive Control and the Threat to Genetic Diversity. Journal of Medicine and Philosophy 25 (4): 451–467.


Resnik, D.B. 2001. DNA Patents and Human Dignity. Journal of Law, Medicine, and Ethics 29 (2): 153–165. Resnik, D.B. 2007. Embryonic Stem Cell Patents and Human Dignity. Health Care Analysis 15 (3): 211–222. Resnik, D.B. 2011. Ethical Issues Concerning Transgenic Animals in Biomedical Research. In The Ethics of Animal Research: Exploring the Controversy, ed. J. Garrett, 169–179. Cambridge, MA: MIT Press. Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press. Resnik, D.B. 2015a. Retracting Inconclusive Research: Lessons from the Séralini GM Maize Feeding Study. Journal of Agricultural and Environmental Ethics 28 (4): 621–633. Resnik, D.B. 2015b. Food and Beverage Policies and Public Health Ethics. Health Care Analysis 23 (2): 122–133. Resnik, D.B. 2018a. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer. Resnik, D.B. 2018b. Ethics of Community Engagement in Field Trials of Genetically Modified Mosquitoes. Developing World Bioethics 18 (2): 135–143. Resnik, D.B. 2019a. Two Unresolved Issues in Community Engagement for Field Trials of Genetically Modified Mosquitoes. Pathogens and Global Health 113 (5): 238–245. Resnik, D.B. 2019b. How Should Engineered Nanomaterials Be Regulated for Public and Environmental Health? AMA Journal of Ethics 21 (4): E363–369. Resnik, D.B., and D. Vorhaus. 2006. Genetic Modification and Genetic Determinism. Philosophy, Ethics, and Humanities in Medicine 1: 9. Resnik, D.B., H. Steinkraus, and P. Langer. 1999. Human Germ-Line Gene Therapy: Scientific, Moral and Political Issues. Georgetown, TX: RG Landes. Resnik, D.B., and P. Langer. 2001. Human Germline Gene Therapy Reconsidered. Human Gene Therapy 12 (11): 1449–1458. Ridley, M. 2000. Genome: The Autobiography of a Species in 23 Chapters. New York, NY: Harper Collins. Rifkin, J. 1983. Algeny. New York, NY: Viking Press. Rigby, B. 2017. Growth Hormones in Meat: Myths and Reality. 
Climbing Nutrition, February 24. Available at: https://www.climbingnutrition.com/diet/growth-hormones-meat-myths-reality/. Accessed 19 Jan 2021. Robert, J.S., and F. Baylis. 2003. Crossing Species Boundaries. American Journal of Bioethics 3 (3): 1–13. Robertson, J.A. 1994. Children of Choice: Freedom and the New Reproductive Technologies. Princeton, NJ: Princeton University Press. Rollin, B. 1995. The Frankenstein Syndrome: Ethical and Social Issues in the Genetic Engineering of Animals. Cambridge, UK: Cambridge University Press. Russell, W., and R. Birch. 1959. Principles of Humane Animal Experimentation. Springfield, IL: Charles C. Thomas. Sandel, M.J. 2009. The Case Against Perfection: Ethics in the Age of Genetic Engineering. Cambridge, MA: Harvard University Press. Savulescu, J. 2002. Education and Debate: Deaf Lesbians, “Designer Disability,” and the Future of Medicine. British Medical Journal 325 (7367): 771–773. Schaffner, K.F. 1993. Discovery and Explanation in Biology and Medicine. Chicago, IL: University of Chicago Press. Schuppli, C., D. Fraser, and M. McDonald. 2004. Expanding the Three Rs to Meet New Challenges in Humane Animal Experimentation. Alternative to Laboratory Animals 32: 515–532. Science and Environmental Health Network. 1998. Wingspread Statement on the Precautionary Principle. Available at: http://www.who.int/ifcs/documents/forums/forum5/wingspread. doc. Accessed: 19 Jan 2021. Sears, M.K., R.L. Hellmich, D.E. Stanley-Horn, K.S. Oberhauser, J.M. Pleasants, H.R. Mattila, B.D. Siegfried, and G.P. Dively. 2001. Impact of Bt Corn Pollen on Monarch Butterfly Populations:


A Risk Assessment. Proceedings of the National Academy of Sciences of the United States of America 98 (21): 11937–11942. Séralini, G.E., E. Clair, R. Mesnage, S. Gress, N. Defarge, M. Malatesta, D. Hennequin, and J.S. de Vendômois. 2012. Long Term Toxicity of a Roundup Herbicide and a Roundup-Tolerant Genetically Modified Maize. Food and Chemical Toxicology 50 (11): 4221–4231. Retraction in: Food and Chemical Toxicology 63: 244. Shamoo, A.E., and D.B. Resnik. 2015. Responsible Conduct of Research, 3rd ed. New York, NY: Oxford University Press. Shendure, J., G.M. Findlay, and M.W. Snyder. 2019. Genomic Medicine–Progress, Pitfalls, and Promise. Cell 177 (1): 45–57. Simmons, D. 2008. The Use of Animal Models in Studying Genetic Disease: Transgenesis and Induced Mutation. Nature Education 1 (1): 70. Singer, P. 2009. Animal Liberation, reissue ed. New York, NY: Harper Perennial. Spinello, R.A. 2016. Cyberethics: Morality and Law in Cyberspace, 6th ed. Boston: MA: Jones and Bartlett. Stöppler, M.C. 2019. Genetic Diseases. Medicine.net. Available at: https://www.medicinenet.com/ genetic_disease/article.htm. Accessed 19 Jan 2021. Streiffer, R. 2005. At the Edge of Humanity: Human Stem Cells, Chimeras, and Moral Status. Kennedy Institute of Ethics Journal 15 (4): 347–370. Szasz, T. 1961. The Myth of Mental Illness. New York, NY: Harper. Tait, J. 2001. More Faust Than Frankenstein: The European Debate About the Precautionary Principle and Risk Regulation for Genetically Modified Crops. Journal of Risk Research 4 (2): 175–189. The Business Research Company. 2019. Global Biologic Market Size and Segments, March 20. Available at: https://www.globenewswire.com/news-release/2019/03/27/1774114/0/en/Glo bal-Biologics-Market-Size-and-Segments.html. Accessed 20 Jan 2021. Thompson, P.B. 1993. Genetically Modified Animals: Ethical Issues. Journal of Animal Science 71 (Suppl. 3): 51–56. Tratar, U.L., S. Horvat, and M. Cemazar. 2018. Transgenic Mouse Models in Cancer Research. 
Frontiers in Oncology 8 (July 20): 268. Treatment Solutions. 2017. Are GMO Bacteria Safe for Wastewater Treatment? Available at: https:// aosts.com/gmo-bacteria-safe-wastewater-treatment/. Accessed 26 Feb 2020. United Nations Educational, Scientific, and Cultural Organization. 2020. UNESCO Panel of Experts Calls for Ban on “Editing” of Human DNA to Avoid Unethical Tampering with Hereditary Traits. Available at: https://en.unesco.org/news/unesco-panel-experts-calls-ban-editing-humandna-avoid-unethical-tampering-hereditary-traits. Accessed 20 Jan 2021. United States Department of Agriculture. 2018. Establishing the National Bioengineered Food Disclosure Standard. Available at: https://www.usda.gov/media/press-releases/2018/12/20/establ ishing-national-bioengineered-food-disclosure-standard. Accessed 20 Jan 2021. United States Department of Agriculture. 2020. Biotechnology Frequently Asked Questions. Available at: https://www.usda.gov/topics/biotechnology/biotechnology-frequently-asked-questionsfaqs. Accessed 20 Jan 2021. United States Department of Homeland Security. 2008. National Bio and Agro-Defense Facility Final Environmental Impact Statement, Appendix B. Washington, DC: US Department of Homeland Security. Urry, L.A., M.L. Cain, S.A. Wasserman, P.V. Minorsky, and J.B. Reece. 2016. Campbell Biology, 11th ed. New York, NY: Pearson. Walters, L., and J.G. Palmer. 1997. The Ethics of Human Gene Therapy. New York, NY: Oxford University Press. Walton, D. 2017. The Slippery Slope Argument in the Ethical Debate on Genetic Engineering of Humans. Science and Engineering Ethics 23 (6): 1507–1528. Wang, H., and H. Yang. 2019. Gene-Edited Babies: What Went Wrong and What Could Go Wrong. PLoS Biology 17 (4): e3000224.


Wareham, C., and C. Nardini. 2015. Policy on Synthetic Biology: Deliberation, Probability, and the Precautionary Paradox. Bioethics 29 (2): 118–125. Warwick, S.I., H.J. Beckie, and L.M. Hall. 2009. Gene Flow, Invasiveness, and Ecological Impact of Genetically Modified Crops. Annals of the New York Academy of Sciences 1168 (1): 72–99. WebMD. 2020. What Are Normal Blood Sugar Levels? Available at: https://www.webmd.com/dia betes/qa/what-are-normal-blood-sugar-levels. Accessed 20 Jan 2021. Werth, J., L. Boucher, D. Thornby, S. Walker, and G. Charles. 2013. Changes in Weed Species Since the Introduction of Glyphosate-Resistant Cotton. Crop and Pasture Science 64 (8): 791–798. Whiteside, K. 2006. Precautionary Politics: Principle and Practice in Confronting Environmental Risk. Cambridge, MA: MIT Press. Whitlock J. 2019. Gender Reassignment Surgery. Very Well Health, November 8. Available at: https://www.verywellhealth.com/sex-reassignment-surgery-srs-3157235. Accessed 20 Jan 2021. Wolinetz, C.D., and F.S. Collins. 2019. NIH Pro Germline Editing Moratorium. Nature 567: 175. World Health Organization. 2020a. Malaria. Available at: https://www.who.int/malaria/en/. Accessed 20 Jan 2021. World Health Organization. 2020b. Dengue and Severe Dengue. Available at: https://www.who.int/ news-room/fact-sheets/detail/dengue-and-severe-dengue. Accessed 20 Jan 2021. World Health Organization. 2020c. Determinants of Health. Available at: https://www.who.int/hia/ evidence/doh/en/. Accessed 20 Jan 2021. Yabroff, K.R., J. Lund, D. Kepka, and A. Mariotto. 2011. Economic Burden of Cancer in the United States: Estimates, Projections, and Future Research. Cancer Epidemiology, Biomarkers and Prevention 20 (10): 2006–2014. Yourgenome.org. 2020. What Are Single Gene Disorders? Available at: https://www.yourgenome. org/facts/what-are-single-gene-disorders. Accessed 20 Jan 2021. Zhang, C., R. Wohlhueter, and H. Zhang. 2016. 
Genetically Modified Foods: A Critical Review of Their Promise and Problems. Food Science and Human Wellness 5 (3): 116–123. Zhang, X.H., L.Y. Tee, X.G. Wang, Q.S. Huang, and S.H. Yang. 2015. Off-Target Effects in CRISPR/Cas9-Mediated Genome Engineering. Molecular Therapy—Nucleic Acids 4: e264.

Chapter 8

Dual Use Research in the Biomedical Sciences

Scientific research benefits society in many ways. The knowledge generated by science has practical applications in medicine, public health, engineering, industry, transportation, navigation, communication, education, public policy, and numerous other aspects of human life. However, knowledge can also be used to cause harm to individuals, society, and the environment. The knowledge used to build a nuclear reactor may also be used to build nuclear weapons; the knowledge used to launch a rocket to the moon may also be used to guide a missile to kill innocent civilians; and the knowledge used to develop a vaccine might also be used to make a bioweapon. This double aspect of knowledge—that it can be used for good or bad purposes—is known as the problem of dual use (National Research Council 2004; Resnik 2013). Scientists and inventors have known about and grappled with this problem for quite some time. For example, the Swedish chemist, engineer, inventor, and entrepreneur Alfred Nobel (1833–1896) earned hundreds of millions of dollars from the invention of dynamite, which he patented in 1867. Dynamite was initially used to remove dirt and rocks in mining and drilling operations but was later used to make bombs for warfare. Distraught that he would be remembered as the inventor of a deadly explosive, Nobel bequeathed $265 million in his will to establish the Nobel Prizes for scientific research (Nobel Prize.org 2020). Arthur Galston (1920–2008) was an American botanist who studied how light and hormones affect plant development. As a graduate student in the 1940s, Galston discovered that 2,3,5-triiodobenzoic acid (TIBA) can trigger flowering in soybeans but that high levels of this compound act as a defoliant. Later, American and British researchers built on this line of research to develop the defoliant herbicides used in the chemical weapon known as Agent Orange.
The American military sprayed millions of gallons of Agent Orange over forests in Vietnam, Laos, and Cambodia in the 1960s and early 1970s to remove leaf cover for North Vietnamese troops. Exposure to Agent Orange has been linked to numerous health problems, including cancer. Galston was appalled that his work had been exploited for military purposes, and he fought for years to stop the use of Agent Orange in warfare (Yale News 2008).

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_8


Dual use research raises some vexing ethical and policy issues for scientists, government officials, and the public. Should research that could be readily used for immoral purposes be funded? Published? Kept secret? Suppressed? This chapter will apply the PP to dual use research in the biomedical sciences and offer some suggestions for policymakers.

8.1 A Brief History of Biowarfare and Bioterrorism

Bioweapons have been used in warfare for hundreds of years (Frischknecht 2003). In 1343, the Tartars (Mongols) besieged Kaffa (Feodosia), a walled city on the Crimean Peninsula. After 15,000 of their fighters died in the first siege, the Mongols faced the prospect of defeat. Additionally, the bubonic plague, which had ravaged parts of Asia since 1331, had taken a toll on the Mongols. The Mongols retreated and then returned to Kaffa with a new tactic to defeat their enemy: they catapulted the bodies of plague victims over the city walls. The plague quickly spread through Kaffa, forcing residents to flee on ships to escape the infestation. Residents who left Kaffa infected people throughout Europe and the Middle East, which contributed to the plague of 1347–1351 (known as the Black Death) that killed an estimated 50 million people in Europe, or about 60% of the population (Frischknecht 2003; Resnik 2009; Kalu 2018).1

During the French and Indian War (1754–1763), the British fought the French and their Native American allies for control of what is now Canada. The British gave smallpox-infected blankets to Native Americans to start an outbreak that would weaken their foe. Smallpox decimated the Native American fighters, who had no immunity to the disease, and the British were able to hold the besieged fort (Fort Pitt). While it is well known that a smallpox epidemic decimated the Native American population from the 1500s to the 1800s, this epidemic was probably due mostly to contact with European settlers, including the Spanish, rather than to biowarfare (Patterson and Runge 2002). The disease swept through Native American populations in a series of epidemics over several hundred years, in some cases killing almost an entire tribe. Other diseases that the Europeans brought, including the flu and measles, also took a toll on the Native American population (Patterson and Runge 2002).
The British also used smallpox as a weapon against the Americans in the Revolutionary War (1775–1783). After the Americans captured Montreal, the British infected civilians with smallpox and sent them to infect the American forces. The Americans retreated after about 10,000 soldiers contracted the disease (Flight 2011) (Fig. 8.1). During the US Civil War, Luke Blackburn (1816–1887), a medical doctor who would later become governor of Kentucky, attempted to infect civilians living in the Northern US with yellow fever. Blackburn filled trunks with clothing and blankets from yellow fever patients he had treated in Bermuda and sent them to the Northern US, where they were auctioned off. Blackburn did not succeed in infecting anyone with yellow fever, however, because the disease is spread by mosquito bites (Nye 2016).

¹ The plague is caused by a bacterium (Yersinia pestis) that is transmitted by fleas that feed on rats and humans.

Fig. 8.1 Smallpox lesions on the torso of a patient in Bangladesh in 1973 (Source James Hicks, Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/smallpox/clinicians/clinical-disease.html#one)

During World War I, the Germans allegedly infected horses and cows with anthrax (Bacillus anthracis) and glanders (Burkholderia mallei) and shipped them to the US and other Allied nations. However, the Germans denied these allegations (Riedel 2004). No other countries are known to have used biological weapons successfully in World War I. Chemical weapons had a far greater impact on World War I than biological weapons. Both sides in the conflict used chlorine, phosgene, and mustard gas against their enemies. Chemical weapons killed about 92,000 soldiers and civilians during World War I. In response to the horrors of chemical weapons, 108 nations signed the Geneva Protocol in 1925, which prohibits the use of poisonous gases and bacteria in warfare (Riedel 2004). One reason why many people view biological and chemical weapons as morally repugnant is that they are difficult to control and may indiscriminately kill combatants and non-combatants (Thatcher 1988).² As demonstrated by the Mongols' use of the plague as a bioweapon in their siege of Kaffa, even when targeted successfully, bioweapons may kill many non-combatants. The plague not only killed the residents of Kaffa but also spread to Europe and killed millions of people (Frischknecht 2003). During World War II, Japan's Imperial Army Unit 731, located near Pingfan, Manchuria, tested biological disease agents on Chinese, Korean, Manchurian, Mongolian, Soviet, American, and Australian prisoners, many of whom were prisoners of war. An estimated 10,000 people died in experiments conducted by Unit 731. Disease agents studied by Unit 731 included anthrax, the plague, typhoid, and cholera (Riedel 2004). At the end of the war, the US agreed not to prosecute members of Unit 731 in exchange for data from their experiments.
² Of course, some would argue that all non-defensive uses of weapons in warfare are immoral because war is immoral. For a discussion of the morality of war, see Orend (2006).

The Japanese conducted biological warfare against Chinese civilians by contaminating food and water and dispersing plague-infected fleas (Resnik 2009). The US, UK, Soviet Union, Canada, and Germany also conducted research on bioweapons during World War II, but there is no evidence that they tested them on human subjects or used them against soldiers or civilians (Riedel 2004). During the Cold War (1947–1991), the US and the Soviet Union had bioweapons research programs. The US military conducted open-air tests that exposed animals, human volunteers, and thousands of unsuspecting civilians to pathogens (Frischknecht 2003). In 1969, the US military officially stopped its offensive bioweapons research and focused on defensive research (Resnik 2009). The hub of US bioweapons research is currently the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) laboratory at Fort Detrick, MD. In 1972, 118 nations, including the US and the Soviet Union, signed the Biological Weapons Convention (BWC), a treaty that bans the production, stockpiling, acquisition, or deployment of biological and toxin weapons. Only defensive bioweapons research is allowed under the BWC. The BWC has been modified several times (United Nations 2020). After signing the BWC, the Soviet Union continued to conduct secret offensive bioweapons research on pathogens such as the plague, anthrax, and the Ebola and Marburg viruses. The Soviets also developed missiles for delivering biological weapons and studied pathogens that could resist existing vaccines and antibiotics. It is likely that an outbreak of anthrax that killed 70 people in Sverdlovsk and an outbreak of smallpox that killed three people in Aralsk were due to accidental contamination from Soviet bioweapons research (Frischknecht 2003). The Soviet Union began to discontinue its bioweapons research program when the Cold War ended (Resnik 2009). In the 1980s, terrorist groups began to develop and use biological weapons.
On September 17, 1984, 750 people who ate at the Taco Time restaurant in the town of The Dalles, OR, developed food poisoning. Forty-five people were hospitalized, but no one died. The Centers for Disease Control and Prevention (CDC) initially determined that the outbreak resulted from exposure to salmonella bacteria due to poor food preparation, but it later concluded, after a thorough investigation, that the incident resulted from a terrorist attack conducted by followers of the Indian guru Bhagwan Shree Rajneesh (1931–1990). Members of this religious cult carried out the attack by spraying salmonella-laced water on the salad bar. The attack was a trial run for contaminating Oregon's water supply with salmonella (Bovsun 2013). From 1993 to 1995, the Japanese religious cult Aum Shinrikyo released anthrax in the Tokyo subways at least ten times. Fortunately, the anthrax was not sufficiently weaponized to cause harm. Determined to succeed in their terror campaign, the group released sarin gas in a Tokyo subway in 1995. The attack killed 12 people and sickened thousands (Resnik 2009). During the fall of 2001, a person or organization sent letters containing weaponized anthrax spores to Tom Brokaw of NBC News in New York, the offices of the New York Post, and US Senator Tom Daschle. Five people died from anthrax, 18 contracted the disease, and thousands took antibiotics to prevent the disease. The anthrax letters caused tremendous anxiety and terror in a nation that was reeling from Al-Qaeda's hijacking attacks on the World Trade Center and Pentagon on September 11, which killed nearly 3,000 people (Resnik 2009).

Fig. 8.2 Electron micrograph image of spores from the Sterne strain of Bacillus anthracis bacteria (Source Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/vaccines/vpd/anthrax/photos.html)

At the beginning of July 2008, the Federal Bureau of Investigation (FBI), after conducting thousands of interviews and spending millions of dollars on microbial forensics, named Bruce Ivins (1946–2008), a microbiologist who worked at USAMRIID in Fort Detrick, MD, as its prime suspect in the case. However, Ivins was never put on trial because he committed suicide on July 29 as the FBI was closing in on him. Although the FBI remains convinced that Ivins was the culprit, some people associated with the investigation admit there are gaps in the evidence (Shachtman 2011). The threat of anthrax as a bioweapon is very serious, because anthrax is a lethal pathogen that can be obtained in the wild and does not require sophisticated equipment (such as a missile) for deployment (Inglesby et al. 1999; Webb 2003). The chief obstacle to successfully using anthrax as a bioweapon is weaponization. To weaponize anthrax, spores must be isolated, concentrated, dried, and ground to a fine powder (Burton and Stewart 2008).³ The FBI began to suspect Ivins as the sender of the anthrax letters because he had the skills to weaponize the pathogen and had worked on developing anthrax vaccines (Shachtman 2011) (Fig. 8.2). In the fall of 2001, the US Congress passed the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act), which makes it illegal to possess or transfer biological agents or toxins in quantities that would not be reasonably used for peaceful purposes (Resnik 2009).
In 2002, Congress passed two laws, the Public Health Security and Bioterrorism Preparedness and Response Act and the Agricultural Bioterrorism Protection Act, which require the Department of Health and Human Services (DHHS) and the US Department of Agriculture (USDA) to establish, maintain, and regulate a list of biological agents and toxins that pose a severe threat to public health, animals, plants, or animal or plant products (Centers for Disease Control and Prevention 2020). Select agents and toxins include the smallpox, Ebola, Marburg, reconstructed 1918 pandemic flu, avian flu, and severe acute respiratory syndrome (SARS) viruses, as well as anthrax, botulinum neurotoxins, and ricin (Centers for Disease Control and Prevention 2020). See Table 8.1. Congress also passed legislation that requires research institutions to keep records of the select agents in their possession and of who has access to them, to perform background checks on personnel who are granted access to select agents, and to implement security measures (Resnik 2009). Various terrorist groups have expressed an interest in acquiring weapons of mass destruction, including nuclear, chemical, and biological weapons. Fortunately, most of these groups have not been successful (as far as we know) in carrying out their threats to acquire and use bioweapons (Salama and Hansell 2005). The most harmful bioterrorism events to date are the Oregon salmonella attack and the anthrax letters.

³ Webb (2003) does not view deployment as a significant obstacle to using anthrax as a bioweapon, but Burton and Stewart (2008) disagree.

Table 8.1 Select agents and toxins list (https://www.selectagents.gov/SelectAgentsandToxinsList.html)

DHHS Select Agents and Toxins: Abrin; Bacillus cereus Biovar anthracis; Botulinum neurotoxins; Botulinum neurotoxin producing species of Clostridium; Conotoxins (short, paralytic alpha conotoxins containing the amino acid sequence X1CCX2PACGX3X4X5X6CX7); Coxiella burnetii; Crimean-Congo haemorrhagic fever virus; Diacetoxyscirpenol; Eastern equine encephalitis virus; Ebola virus; Francisella tularensis; Lassa fever virus; Lujo virus; Marburg virus; Monkeypox virus; reconstructed replication competent forms of the 1918 pandemic influenza virus containing any portion of the coding regions of all eight gene segments (Reconstructed 1918 Influenza virus); Ricin; Rickettsia prowazekii; SARS-associated coronavirus (SARS-CoV); Saxitoxin; South American haemorrhagic fever viruses (Chapare, Guanarito, Junin, Machupo, Sabia); Staphylococcal enterotoxins (subtypes A, B, C, D, E); T-2 toxin; Tetrodotoxin; tick-borne encephalitis complex (flavi) viruses (Far Eastern subtype, Siberian subtype, Kyasanur Forest disease virus, Omsk hemorrhagic fever virus); Variola major virus (Smallpox virus); Variola minor virus (Alastrim); Yersinia pestis

USDA Select Agents and Toxins: African horse sickness virus; African swine fever virus; avian influenza virus; classical swine fever virus; foot-and-mouth disease virus; goat pox virus; lumpy skin disease virus; Mycoplasma capricolum; Mycoplasma mycoides; Newcastle disease virus; peste des petits ruminants virus; rinderpest virus; sheep pox virus; swine vesicular disease virus

USDA Plant Protection Select Agents and Toxins: Coniothyrium glycines (formerly Phoma glycinicola and Pyrenochaeta glycines); Peronosclerospora philippinensis (Peronosclerospora sacchari); Ralstonia solanacearum; Rathayibacter toxicus; Sclerophthora rayssiae; Synchytrium endobioticum; Xanthomonas oryzae

Overlap Select Agents and Toxins: Bacillus anthracis; Bacillus anthracis Pasteur strain; Brucella abortus; Brucella melitensis; Brucella suis; Burkholderia mallei; Burkholderia pseudomallei; Hendra virus; Nipah virus; Rift Valley fever virus; Venezuelan equine encephalitis virus

8.2 Dual Use Research

In the early 2000s, scientists published several studies in scientific journals with possible implications for bioterrorism. Many people were concerned that the data, methods, or results disclosed in these articles could be used to make bioweapons (National Research Council 2004). These articles included:

• Ronald Jackson and five coauthors published an article in the Journal of Virology in 2001 describing a method for increasing the virulence of a mousepox virus (Jackson et al. 2001). The purpose of the study was to provide information that would be useful in controlling rodent populations, but some scientists were concerned that this paper provided a blueprint for increasing the virulence of the smallpox virus (National Research Council 2004).

• Ariella Rosengard and three coauthors published an article in Proceedings of the National Academy of Sciences (PNAS) in 2002 that examined genetic differences between the variola major virus, a dangerous virus that causes smallpox, and the vaccinia virus, a benign virus used to vaccinate individuals against smallpox (Rosengard et al. 2002). Some scientists were concerned that the research provided information for converting vaccinia into a bioweapon.


• Jeronimo Cello and two coauthors published a paper in Science in 2002 showing how to construct a poliovirus from an RNA template using DNA ordered by mail (Cello et al. 2002). Although some scientists were concerned that this study provided information for making a bioweapon, others said that the research did not increase the risk of bioweapons development, because the research was not novel and because the ideas and methods described in the paper had been known to scientists for several decades (National Research Council 2004).

After these articles were published, scientists and members of the public became increasingly concerned about dual use research in the biomedical sciences. Several members of the US Congress introduced a resolution objecting to the publication of the articles (Resnik 2009), and Congress asked the National Research Council (NRC) to study the issues raised by the publication of these papers and to prepare a report on how to minimize threats from biowarfare and bioterrorism without hindering the progress of science and technology (National Research Council 2004). The report, titled Biotechnology in the Age of Terrorism, has become known as the Fink Report, because Gerald R. Fink, a genetics professor at the Massachusetts Institute of Technology, chaired the committee that wrote it. The Fink Report reviewed the history of biowarfare and bioterrorism, the legal and regulatory environment related to biomedicine and biotechnology, and the scientific and ethical issues concerning dual use research. Key recommendations made in the report included (National Research Council 2004):

• Professional associations and academic institutions should create programs to educate scientists about dual use dilemmas in biotechnology.

• DHHS should create a system for reviewing seven types of experiments of concern involving microbes that create the potential for misuse. These include experiments that would (1) render a vaccine ineffective, (2) confer resistance to antibiotics or antiviral drugs, (3) enhance the virulence of a pathogen or make a non-pathogen virulent, (4) increase the transmissibility of a pathogen, (5) alter the host range of a pathogen, (6) enable the evasion of detection methods, and (7) enable the weaponization of a pathogen.

• Scientists and journals should review publications for dual use concerns.

• DHHS should create a National Science Advisory Board for Biodefense. The board would promote dialogue between scientists and the national security community concerning dual use issues, serve as a resource for scientists, journals, and academic institutions, review experiments of concern, and work with the federal government to promote biosecurity.

In 2004, DHHS acted upon the NRC's recommendations and formed the National Science Advisory Board for Biosecurity (NSABB). The NSABB is a committee composed of experts from a variety of areas, including molecular biology, microbiology, infectious diseases, biosafety, public health, epidemiology, veterinary medicine, plant pathology, national security, biodefense, law enforcement, and scientific publishing (National Science Advisory Board for Biosecurity 2020). The


NSABB provides advice to scientists, funding agencies, journals, and research institutions concerning biosecurity issues raised by scientific research. It is a purely advisory body that does not have the authority to ban, censor, or regulate research (Resnik 2013). The NSABB focuses on dual use research of concern (DURC), which the US federal government defines as "research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security" (National Institutes of Health 2020). In 2005, Lawrence Wein and Yifan Liu submitted an article to PNAS that described a mathematical model for contaminating the US milk supply with botulinum toxin (Wein and Liu 2005). The article stated the minimum amount of the toxin that would be needed to cause significant harm to public health and recommended that government officials and milk suppliers take steps to prevent or mitigate a terrorist attack. The editors of PNAS recognized that the article raised dual use issues and delayed publication to allow for additional review to address these issues. PNAS editors met with DHHS officials and decided to publish the article in full because they concluded that the benefits of publication outweighed the risks. Although information from the publication might be used by terrorists, it would also alert public health authorities and milk distributors to take appropriate measures to protect the milk supply. The editors also suggested that the article would make a good case study for the newly formed NSABB, once it was fully operational (Alberts 2005). Later that same year, researchers published a reconstructed DNA sequence of the 1918 pandemic flu virus in papers that appeared in Science and Nature (Tumpey et al. 2005; Taubenberger et al. 2005). The 1918 pandemic flu killed an estimated 50 million people around the world (Sharp 2005). The NSABB had reviewed these papers prior to publication and concluded that the benefits of publication outweighed the risks because the probability that this research could be used to make a bioweapon was exceedingly small (Sharp 2005). In December 2011, the NSABB reviewed two NIH-funded papers reporting the results of gain of function⁴ genetic engineering experiments designed to demonstrate how the H5N1 avian flu virus can acquire mutations that alter its surface proteins and allow it to be transmitted between ferrets by respiratory water droplets.⁵ The researchers used ferrets in their experiments due to similarities between ferret and human respiratory systems (Herfst et al. 2012). The papers had been submitted to Science and Nature, and the editors of these journals asked the NSABB to review the research and make a recommendation concerning publication. The journals were also conducting their own review of the scientific and dual use aspects of the research (Resnik 2013).

⁴ Gain of function research is research that is designed to allow a pathogen to acquire a new function, such as airborne transmissibility.

⁵ Viruses use their surface proteins to infect cells. Proteins on the outside of the virus attach to receptors on cells. After the virus attaches to the cell, it inserts its genetic material (DNA or RNA) into the cell, which instructs the cell to make copies of the virus (Urry et al. 2016).


H5N1 is a dangerous virus with an estimated case fatality rate of 61%. From 2003 to 2007, an epidemic of H5N1 spread through Asia and the Middle East and killed over 300 people (Resnik 2013). Fortunately, the wild type of H5N1 can only be transmitted to humans by direct contact with infected birds. Public health officials and infectious disease scientists were concerned at that time, and still are, that H5N1 could cause a deadly pandemic if it were to acquire the mutations needed for airborne transmission between mammals, because human beings have no immunity to H5N1 (Resnik 2013). The purpose of the papers was to provide information about how the wild type of H5N1 could become a significant threat to global health if it acquires mutations that enable it to be transmitted by air. The researchers were attempting to create in the laboratory a strain of H5N1 that many scientists and public health officials feared might arise in nature. The information contained in the papers would be important for public health preparedness and response to dangerous changes in H5N1 (Imai et al. 2012). Public health officials could use the information contained in the papers to monitor bird populations for the emergence of forms of H5N1 that could be transmitted by respiratory water droplets. Pharmaceutical companies could also use the information to develop vaccines to immunize people against the virus or drugs to treat an infection. Although publication of the papers could significantly benefit society, it could also cause harm, because the papers contained information that a terrorist or rogue nation could use to make a bioweapon and trigger a deadly pandemic. A pandemic could also arise by accidental contamination from the virus if other scientists try to reproduce the research (Resnik 2013) (Fig. 8.3).
Fig. 8.3 A depiction of a generic influenza virus, showing its genetic material inside a protein-covered shell (Source Centers for Disease Control and Prevention, public domain, https://www.cdc.gov/flu/images/virus/fluvirus-antigentic-characterization-medium.jpg)

The research team led by Ron Fouchier from Erasmus Medical Center in the Netherlands submitted their paper to Science, and the team led by Yoshihiro Kawaoka from the University of Tokyo and the University of Wisconsin-Madison submitted their paper to Nature. Fouchier's team inserted genes into an H5N1 virus to allow it to be transmissible by respiratory water droplets (Russell et al. 2012). The researchers inoculated the ferrets' nasal passages with the virus and passaged it between them ten times so that mutations that increase transmissibility would be selectively favored. Passaging involves collecting mucous from the nasal passages or lungs of one animal and transferring it to another (Herfst et al. 2012). By the end of the passaging process, the virus had acquired mutations that increased its rate of transmission by water droplets to 75%. However, the lethality of the virus also decreased (Russell et al. 2012). Kawaoka's group used a hybrid virus combining an avian H5 hemagglutinin (HA) gene with an H1N1 virus, which had caused a flu pandemic in 2009. The hybrid virus acquired mutations that facilitated airborne transmission, but it was not lethal and was vulnerable to vaccines and antiviral drugs (Imai et al. 2012). Experts considered Fouchier's research to be more dangerous than Kawaoka's because it provided a clearer demonstration of how to make H5N1 transmissible between mammals by respiratory water droplets (Resnik 2013). The NSABB weighed the benefits of the research against the risks of publishing the papers and decided that the risks outweighed the benefits. The NSABB recognized that the papers provided valuable knowledge for biomedical science and public health, but the papers also contained information that could be used to make a bioweapon capable of causing a global pandemic. The NSABB was also concerned that a pandemic could arise as a result of accidental contamination if other researchers tried to reproduce the experiments. The NSABB initially recommended, by unanimous vote, that only redacted forms of the papers should be published.
The key details of the papers necessary to reproduce the results would be removed and made available only to responsible scientists and public health officials.⁶ After the NSABB's December meeting, researchers working with H5N1 agreed to a voluntary moratorium on gain of function genetic engineering experiments involving viruses of pandemic potential to allow scientists and public health officials to examine the issues further (Fauci 2012). In February 2012, the World Health Organization (WHO) convened a committee composed of scientists who study avian influenza, the editors of Science and Nature, and public health officials to discuss the issues raised by the controversial research. The committee recommended full publication of the articles, citing the scientific and public health benefits of publication and the difficulties associated with redaction. After the WHO committee met, the researchers rewrote their papers and resubmitted them to the journals and the NSABB. The revised versions of the papers provided important biosafety details for preventing accidental contamination (Resnik 2013). The NSABB met again in March 2012 to review the revised papers. The NSABB also considered new information relating to the value of the research to public health, the difficulties of reproducing the experiments to make a bioweapon, and practical and legal issues with redacted publication. The NSABB recommended full publication of both papers because they did not contain information that could be immediately used to make a bioweapon, and the mutations acquired by H5N1 did not make the virus both easily transmissible by air and highly pathogenic. The NSABB recommended that supplementary information not contained in the revised papers should not be published, because this information could enable someone to make a highly pathogenic form of H5N1. The supplementary information would be distributed only to responsible scientists. The NSABB also recommended that the US government develop ways of controlling access to sensitive scientific information. The NSABB's recommendations concerning publication were not unanimous, however. The NSABB voted 12–6 to recommend full publication of Fouchier's paper and 18–0 for full publication of Kawaoka's. The editors of Science and Nature concurred with the NSABB's recommendations and published the papers without redaction in June 2012 (Resnik 2013). In January 2014, a research team led by Stephen Arnon of the California Department of Public Health published two papers on a novel neurotoxin produced by a strain of Clostridium botulinum. The papers also compared the sequence of the gene that codes for the novel neurotoxin to genes in other Clostridium bacteria that code for botulinum neurotoxin (Dover et al. 2014). Although the research provided valuable information for microbiologists and public health experts, the authors decided not to publish the DNA sequence data for the gene because there were, at the time, no effective treatments for the novel neurotoxin, and someone could use the sequence data to create a dangerous bioweapon (Arnon et al. 2001; Relman 2014). The sequence data would not be publicly distributed but would be available to responsible researchers working with Clostridium botulinum. The sequence data would be published once there is an effective treatment for the novel neurotoxin (Relman 2014). This was the first case in recent history of biomedical scientists using redacted publication as a strategy for managing the risks of dual use research.

⁶ Of course, one needs to answer the question "how do you decide when someone is a responsible scientist?", which will be addressed below.
In September 2014, the US government announced a policy for institutional oversight of all federally funded life sciences dual use research of concern (United States Government 2014). The policy requires institutions to identify and review dual use research of concern and to take appropriate steps to mitigate risks in a way that minimizes impacts on legitimate science and is commensurate with the degree of risk. The policy describes the responsibilities of investigators, funding agencies, and institutions and lists dangerous pathogens, toxins, and experiments (United States Government 2014). In October 2014, the NIH announced a pause on new funding of gain of function genetic engineering experiments involving influenza, SARS, and MERS viruses in order to review and evaluate the biosafety and biosecurity risks of this research (National Institutes of Health 2014). The NIH asked the NSABB to review the issues and make recommendations to the US government. After holding several meetings to discuss the issues with scientists, public health officials, biosafety and biosecurity experts, ethicists, attorneys, and members of the public, the NSABB recommended that the government lift the funding pause and implement an oversight framework that minimizes and manages the risks of the research (Selgelid 2016). In December 2017, the NIH ended the funding pause and announced a framework to guide policy decisions involving enhanced potential pandemic pathogens (PPPs) (National Institutes of Health 2017a, b). The framework includes eight criteria for


guiding funding decisions. See Box 8.5. Earlier in 2017, the US government had announced policy guidelines for reviewing and overseeing research on enhanced PPPs (United States Government 2014).

Box 8.5 Criteria for guiding HHS funding decisions on proposed research that involves, or is reasonably anticipated to involve, creation, transfer, or use of enhanced PPPs (National Institutes of Health 2017b)

Department-level review of proposed research reasonably anticipated to create, transfer, or use enhanced PPPs will be based on the following criteria:

1) The research has been evaluated by an independent expert review process (whether internal or external) and has been determined to be scientifically sound;

2) The pathogen that is anticipated to be created, transferred, or used by the research must be reasonably judged to be a credible source of a potential future human pandemic;

3) An assessment of the overall potential risks and benefits associated with the research determines that the potential risks as compared to the potential benefits to society are justified;

4) There are no feasible, equally efficacious alternative methods to address the same question in a manner that poses less risk than does the proposed approach;

5) The investigator and the institution where the research would be carried out have the demonstrated capacity and commitment to conduct it safely and securely, and have the ability to respond rapidly, mitigate potential risks, and take corrective actions in response to laboratory accidents, lapses in protocol and procedures, and potential security breaches;

6) The research's results are anticipated to be responsibly communicated, in compliance with applicable laws, regulations, and policies, and any terms and conditions of funding, in order to realize their potential benefit;

7) The research will be supported through funding mechanisms that allow for appropriate management of risks and ongoing Federal and institutional oversight of all aspects of the research throughout the course of the research; and

8) The research is ethically justifiable. Non-maleficence, beneficence, justice, respect for persons, scientific freedom, and responsible stewardship are among the ethical values that should be considered by a multidisciplinary review process in making decisions about whether to fund research involving PPPs.

8 Dual Use Research in the Biomedical Sciences

8.3 Legal Issues Concerning Publication of Dual Use Research

Classified research. The US has laws that allow the federal government to classify information obtained by government employees, contractors, or special volunteers that would pose a threat to national security if distributed publicly.7 The federal government does not have the legal authority to classify privately funded research. However, the government may ask private companies that are conducting sensitive research to agree to work for the government so that the research can be classified. Various US federal agencies, including the Department of Defense, National Security Agency, FBI, Department of Homeland Security, Department of Energy, and DHHS, support classified research. Agencies must determine, before the research is conducted, that it will be classified. Research on nuclear weapons is the exception: it does not require an action by an agency and is automatically classified. Classified research includes research in the physical sciences, engineering, biomedicine, mathematics, computer science, social science, military intelligence, and criminal investigation, and it may be conducted at government or private laboratories or academic institutions (Resnik 2009). Access to classified information is granted on a need-to-know basis; that is, a person may have access to classified information to the extent that they need the information to perform their job for the government. There are three levels of classification, based on the degree of damage that disclosure could pose to national security: the most sensitive information is classified "top secret," followed by "secret" and "confidential." To gain access to classified information, one must have a security clearance, which is based on a thorough background check (Resnik 2009). Unauthorized disclosure of classified information is a federal crime punishable by up to ten years in prison, a fine, or both (United States Code 2012). In 1985, President Ronald Reagan (1911–2004) issued National Security Decision Directive 189, which states that to "the maximum extent possible, the products of fundamental research remain unrestricted." Fundamental research is defined as "basic and applied research in science and engineering, the results of which ordinarily are published and shared broadly within the scientific community, as distinguished from proprietary research and from industrial development, design, production, and product utilization, the results of which ordinarily are restricted for proprietary or national security reasons" (National Security Decision Directive 189). The purpose of the directive is to promote the free exchange of scientific information.

7 Other countries have similar laws.

Censorship.
The US government's authority to censor unclassified scientific information is extremely limited.8 The First Amendment (1791) to the United States Constitution grants citizens freedom of speech (or expression), religion, petition, assembly, and the press. The US Supreme Court has interpreted the First Amendment as providing strong support for free speech but also as allowing the government to restrict speech to protect individuals, organizations, or the public from harm. For example, slander, libel, copyright infringement, inciting a riot, communicating a threat, criminal conspiracy, violation of trade secrecy, and child pornography are not protected speech in the US (Baron and Dienes 2016). While the Supreme Court has upheld various laws that punish people for speech after the fact, it has interpreted the First Amendment as placing extraordinary limits on the government's ability to stop communication before it occurs, known as prior restraint. The Supreme Court has interpreted the Constitution in this manner because prior restraint can have a tremendous chilling effect on free speech in general. The government may impose prior restraint on speech only when dissemination of information would pose grave and irreparable damage to national security. It is unlikely that the publication of scientific research would meet this high bar (Robertson 1977). However, the government may impose restrictions on actions that generate scientific information. For example, the government may restrict conduct to protect the welfare or rights of human subjects or animal subjects in research (Shamoo and Resnik 2015) or to restrict access to select agents and toxins (see discussion above).

8 Freedom of thought, opinion, and expression are part of the United Nations Universal Declaration of Human Rights (1948). About thirty countries have national laws protecting freedom of expression, but most countries do not, and many actively suppress free expression (Freedom House 2017).

Export controls. Export control laws adopted by the US and other countries also have implications for publication of dual use research. Export control laws allow the government to restrict the transfer of information, materials, and equipment to other countries for reasons of national security or international trade. Export of information, materials, or equipment that could be used to make bioweapons could be prohibited under these laws (National Research Council 2004). The Dutch government tried to stop the publication of Fouchier's research under its export control laws but then relented (Shaw 2016).

Freedom of information. The Freedom of Information Act (FOIA) grants individuals the right to access agency records generated by the federal government.9 To obtain a record under FOIA, an individual must submit a request to the relevant agency that describes the information being sought. Agencies may charge individuals a reasonable fee (e.g. photocopying charges) for providing the information and must comply with the request in a timely manner unless the records fall within FOIA exemptions or exclusions. One exemption relevant to dual use research applies when the record has been classified by an executive order to protect national security. Private citizens or organizations can use FOIA to access scientific research that is funded by the federal government if the research is not exempted or excluded and consists of published data or methods needed to validate published data.
Published data includes data published in peer-reviewed journals or cited by federal agencies to support regulatory decisions. Published data does not include preliminary data or analyses, drafts of papers, or information protected by copyright, patent, or trade secrecy law (Resnik 2013). An important implication of FOIA is that redaction of information from a US government-funded scientific publication may not prevent the public from gaining access to the information if it is necessary to validate the data. For example, if Fouchier and Kawaoka had redacted key details from their gain of function experiments, the public probably still could have accessed these details if they were necessary to validate the data.

9 Thirty other countries, including Australia, Canada, France, Germany, Hong Kong, India, Mexico, Nigeria, and Poland, also have freedom of information laws (National Freedom of Information Coalition 2020).

8.4 Ethical Dilemmas Concerning Dual Use Research

Dual use research creates a potential conflict between the values of scientific openness and freedom, on the one hand, and protecting society from harm, on the other (National Research Council 2004; Resnik 2013). Scientific openness involves the sharing of research data, results, methods, materials, hypotheses, and theories through publication and other forms of communication and interchange. Scientific freedom goes beyond openness and includes not only the freedom to publish research but also the freedom to conduct research, to critically examine research, and to discuss scientific ideas. Openness and freedom are both essential to scientific progress (Resnik 2009).10 Restrictions on openness and freedom can stifle creativity and innovation and interfere with collaboration, criticism, and peer review.11 Additionally, the public can often benefit from the sharing of scientific information that has practical or policy value (Resnik 2009). As we have seen, however, scientific research can also be used to harm public health, agriculture, society, the economy, or national security. Research that could help scientists and public health officials better understand how a virus might acquire mutations that increase its virulence or transmissibility might also be used to make a bioweapon. The ethical dilemma of dual use research is how scientists and society should balance these competing values (National Research Council 2004). Although most people value both scientific openness and freedom and protecting society from harm, the dual use dilemma involves moral uncertainty because different people may balance the competing values differently. While scientists are likely to prioritize openness and freedom, members of the public may prioritize protecting society from harm.
The dual use dilemma arises at different stages of the research process, as illustrated by this series of questions (Kitcher 2001; National Research Council 2004; Miller and Selgelid 2007; Resnik 2013):

• Should the research be conducted at all? Is some research so dangerous that it should be banned?
• Should the research be publicly funded?
• If the research is publicly funded, should it be classified?
• If the research is not classified, should it be published in full? Redacted?
• Should researchers share materials used or developed in the research, such as genetically engineered microbes?
• If the research is published, how should its risks be minimized or mitigated?

10 Another reason why scientific freedom is important is that it is a form of freedom of expression, which is a basic human right (Resnik 2009). See discussion of censorship below.
11 The Soviet Union's suppression of Mendelian genetics is a salient example of how restrictions on freedom and openness can undermine progress. See Resnik (2009) for further discussion.


8.5 Evaluating the Risks and Benefits of Dual Use Research

Evaluating the benefits and risks of dual use research is essential to answering these questions, since if this research poses no significant risks, then there should be no ethical barriers to conducting or publishing it. As we have discussed numerous times in this book, being able to assign accurate and precise probabilities to different outcomes is essential for using expected utility theory (EUT) to make decisions pertaining to risks and benefits. To use EUT to make decisions concerning dual use research, we need enough evidence to assign accurate and precise probability estimates to different outcomes. Do we have enough evidence to make these estimates for dual use research? Sometimes? Never? To address these questions, let's focus on the case of the controversial H5N1 gain of function research. The scientists who conducted these experiments argued that their research could benefit society by providing evidence that public health officials could use to monitor bird populations for dangerous forms of H5N1 (Selgelid 2016). While this outcome is clearly plausible, we may not be able to confidently assert that it is likely to happen. Many of the countries where H5N1 poses a threat, such as Bangladesh, Laos, Malaysia, and Nigeria, may not have the financial or public health resources to monitor bird populations for dangerous mutations of H5N1. To estimate the probability that countries would use information contained in the papers to monitor bird populations, one would need to gather information concerning their public health infrastructure and interview officials concerning the likelihood that they would use the information. We have no evidence that this was done prior to funding or publishing the research, nor that it has been done since (Lipsitch and Galvani 2014).
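Why accurate and precise probabilities matter for EUT can be shown with a small numerical sketch. All of the figures below (the utilities and the candidate harm probabilities) are hypothetical, chosen only to illustrate how the recommended act can flip within a plausible probability range:

```python
# Hypothetical expected-utility comparison for funding dual use research.
# Utilities (arbitrary units) and probability values are illustrative only.

def expected_utility(p_benefit, u_benefit, p_harm, u_harm):
    """EUT score: probability-weighted sum of the outcome utilities."""
    return p_benefit * u_benefit + p_harm * u_harm

U_BENEFIT = 100    # e.g., improved surveillance or vaccine development
U_HARM = -10000    # e.g., a catastrophic accident or deliberate misuse

# Suppose the evidence only supports a wide interval for the harm probability.
for p_harm in (0.0001, 0.001, 0.01):
    eu_fund = expected_utility(0.5, U_BENEFIT, p_harm, U_HARM)
    eu_dont = 0.0  # baseline: forgo both the benefit and the risk
    choice = "fund" if eu_fund > eu_dont else "don't fund"
    print(f"p_harm={p_harm}: EU(fund)={eu_fund:+.1f} -> {choice}")
```

Within this (hypothetical) interval the recommendation flips from "fund" to "don't fund," which is the point of the discussion that follows: when the evidence cannot pin down the probabilities, EUT gives no stable guidance.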
Another possible benefit of the research is that it could provide valuable information for pharmaceutical companies developing vaccines or drugs for dangerous forms of H5N1. Again, this benefit is plausible, but do we have enough evidence to confidently assert that it is likely to happen? To assign an accurate and precise probability to this outcome, we would need evidence concerning the research and development capabilities and business objectives of pharmaceutical companies with an interest in H5N1. It would also be helpful to interview company officials concerning their plans to develop vaccines or treatments for dangerous forms of H5N1. We would also need to know whether the dangerous form of H5N1 that emerges is the same as the one produced in the laboratory. It might be the case that these experiments would have little practical relevance to a real-world threat that emerges (Selgelid 2016). Suffice it to say that evidence concerning the usefulness of the research for vaccine and drug development was also lacking when the research was funded and published. Finally, the research could benefit science by providing valuable information for other scientists working in the same area of inquiry (i.e. gain of function research in virology) or related areas (e.g. virology, immunology, microbiology, genetics, epidemiology, etc.) (Selgelid 2016). The evidence for scientific benefits is much stronger than the evidence for social benefits because we know that other scientists, including the teams led by Fouchier and Kawaoka, have been conducting gain of function research in virology and are planning to continue this work. The controversial H5N1 research would have direct applications for gain of function studies undertaken by Fouchier's and Kawaoka's teams and by other researchers. Benefits beyond gain of function research are a bit more speculative but are also likely, given the interrelatedness of different fields of biomedical science. However, while these benefits were clearly plausible, we still did not have enough evidence to assign accurate and precise probabilities to them (Selgelid 2016).

There are two types of risks that have concerned scientists in the field, professional journals, policymakers, and the NSABB: biosecurity risks (i.e. that the research might be used to make a bioweapon) and biosafety risks (i.e. that scientists who try to reproduce or build upon the research might accidentally release a dangerous pathogen) (Selgelid 2016). The biosecurity risks are clearly plausible, but the evidence concerning the likelihood of these risks is woefully lacking. Recalling the discussion of different approaches to estimating probabilities in Chapter 2, two approaches would have had some relevance to estimating these probabilities: the statistical approach and the propensity approach.12 The statistical approach estimates probabilities based on observed frequencies of events. While there have been a few incidents of bioterrorism in the last thirty years (see discussion above), none of these have clearly depended on information that had been recently disclosed in scientific papers. The person who sent the anthrax letters, for example, did not need to review recent scientific papers to learn how to weaponize anthrax. All the information he or she needed to make this bioweapon was already in the published literature, and anyone with the requisite expertise and training in microbiology would have been able to make it.
Likewise, the terrorists who sprayed salmonella bacteria on the salad bar at the Oregon restaurant did not need to consult scientific papers on how to use salmonella as a bioweapon, because all the information they needed on salmonella poisoning was already in the published literature. Since we arguably have no empirical data with which to make a statistically based estimate of the probability that the H5N1 gain of function research would be used to make a bioweapon, the statistical approach would not have been applicable (Resnik 2017). The propensity approach also would have failed to yield an accurate and precise probability estimate for the biosecurity risks of the gain of function experiments. To use the propensity approach, one develops a predictive model, based on information and assumptions, to make probability estimates. While some studies in the scientific literature have developed predictive models to estimate the risk of bioterrorism (Walden and Kaplan 2004; National Research Council 2008; Ezell et al. 2010; Vladan et al. 2012; Boddie et al. 2015), none of these have attempted to estimate the probability that publication of an article in the scientific literature would lead directly to the development of a bioweapon used for terrorism or some other malevolent purpose. Moreover, these models make questionable assumptions concerning such parameters as the scientific and technical capabilities of terrorists (or others) to develop and deploy bioweapons and the virulence and transmissibility of the pathogens that are used. The models yield different results depending on the assumptions that are made (Resnik 2017). Clearly, the variability of these results can significantly impact decisions we make concerning dual use research, since probability estimates may differ by orders of magnitude.13

The same sorts of problems arise concerning biosafety risk estimates, but to a lesser degree. Biosafety risks are easier to estimate than biosecurity risks because we have substantial biosafety data, based on over four decades of genetic engineering research. In Chapter 7, we reviewed some of the biosafety data. Even though these risks are easier to estimate than biosecurity risks, that does not mean that we have enough evidence to make accurate and precise probability estimates concerning them, since we still lack enough data to make statistical estimates of probabilities and must rely on predictive models. We saw in Chapter 7 that estimates of the rate of a laboratory-acquired infection (LAI) range from 0.000057 to 0.001 per person per year, and that the risk of onward transmission ranges from 0.05 to 0.15. If we assume that 200 people are working on gain of function experiments in virology involving PPPs, and we place the LAI rate conservatively at 0.0001 per person per year, then this would yield an annual risk of an LAI of 0.02, or one LAI every fifty years. If we assume the rate of onward transmission of an LAI is 0.15, then this would give us an annual risk of an LAI leading to community infection of 0.003, or about one event every three hundred years. Of course, this estimate of the risk depends on several assumptions we have made, and different people might arrive at different estimates. A great deal depends on how many people are working on this research and their adherence to biosafety protocols.

12 The subjective approach would be too biased to be useful for estimating biosecurity and biosafety risks, and the mathematical approach would be too unrealistic to be useful.
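The back-of-the-envelope calculation above can be made explicit. The worker count (200), per-person LAI rate (0.0001), and onward-transmission rate (0.15) are the assumptions stated in the text, not measured values; the code simply carries out the multiplication:

```python
# Reproduce the chapter's illustrative biosafety risk arithmetic.
# All inputs are the text's stated assumptions, not measured values.

workers = 200        # people doing gain of function work with PPPs
lai_rate = 0.0001    # laboratory-acquired infections per person per year
onward_rate = 0.15   # probability an LAI spreads into the community

annual_lai_risk = workers * lai_rate                   # 0.02 per year
annual_community_risk = annual_lai_risk * onward_rate  # 0.003 per year

print(f"annual LAI risk: {annual_lai_risk:.3f} "
      f"(one every {1 / annual_lai_risk:.0f} years)")
print(f"annual community-infection risk: {annual_community_risk:.3f} "
      f"(one every {1 / annual_community_risk:.0f} years)")
```

Doubling the worker count, or substituting Fouchier's much lower per-person rate, shifts the result by orders of magnitude, which is why different analysts arrive at very different estimates from the same framework.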
Fouchier, one of the principal investigators who conducted the controversial H5N1 experiments, has estimated the risk of an LAI from this type of work to be 1 LAI per 70,000 persons per year, or 0.000014 (0.0014%) per person per year (Fouchier 2015). Fouchier's risk estimate was based on LAI data for people working in BSL 3 labs. Fouchier estimated the risk of onward transmission of an LAI to be between 0.00000025 and 0.00003, for a probability of onward transmission from an LAI related to this type of work of about one event per 33 billion years (Fouchier 2015). Other researchers have calculated the risk of onward transmission to be much greater than Fouchier's estimate. Klotz and Sylvester (2012) estimate the risk of onward transmission of an LAI from gain of function experiments in virology to be 0.003 per laboratory per year, or once per 536 years. However, they also factor in the number of labs that may be conducting this research (42), for an estimate of one event every 12.8 years. Lipsitch and Bloom (2012) and Lipsitch and Galvani (2014) also consider the risk of onward transmission of an LAI from this type of research to be much higher than Fouchier's estimate.

In thinking about both biosecurity and biosafety risks, it is also important to consider the nature of the pathogen being investigated. If the pathogen that is created in the laboratory is not highly virulent or transmissible, then biosafety and biosecurity risks may be minimal. The most significant risks arise when the pathogen is both highly virulent and highly transmissible, because pathogens of this type could cause a global pandemic through accidental contamination or deliberate misuse. As noted above, the H5N1 viruses created by both teams of researchers were transmissible but not highly virulent. Of course, a great deal depends on what one means by "highly virulent," since a virus with a case fatality rate of only about 2% can still pose a significant threat to global health and the global economy, as illustrated by the COVID-19 pandemic (Wu and McGoogan 2020). While there are ample data concerning disease epidemics and pandemics, these may not apply to the pathogens under investigation, so scientists may need to use predictive models to estimate the probabilities related to the public health and economic impacts of these pathogens (National Research Council 2004). However, the predictions from these models may vary considerably, depending on the information and assumptions concerning the virulence and transmissibility of the pathogen and countermeasures such as the public health response and vaccine development.

The upshot of this discussion is that while we have data that we can use to assess the biosafety risks of gain of function experiments like those conducted by Fouchier and Kawaoka, we do not have enough data to make accurate and precise estimates of the probabilities related to these risks. Experts in the field have made radically different estimates of these probabilities based on divergent assumptions. Moreover, these estimates may reflect financial biases the authors have related to this controversial research. Fouchier conducted the controversial, NIH-funded research and has an interest in receiving additional funding. Lipsitch has defended alternatives to gain of function experiments, such as in vitro testing of proteins required to infect human beings, molecular modeling of how viral proteins interact with cell receptors, and comparative genomics of influenza viruses (Lipsitch and Galvani 2014).

13 See the discussion of estimating low-probability, catastrophic events in Chapters 2 and 4.
Lipsitch has received funding from the NIH and from pharmaceutical companies to do these kinds of alternative experiments.14 To summarize the preceding discussion: we do not have enough evidence to make accurate and precise estimates of the probabilities related to the benefits and risks of dual use research in the biomedical sciences. Since there is significant scientific and moral uncertainty concerning dual use research, we should not use EUT to formulate policies for addressing its risks. The PP, however, may provide us with some useful insights into these types of decisions.

14 See Lipsitch and Bloom (2012) and Lipsitch and Galvani (2014) for disclosure of financial interests and funding.

8.6 Applying the Precautionary Principle to Dual Use Research

To apply the PP to dual use research policy decisions, we should consider the basic options (risk avoidance, minimization, and mitigation) for reasonably managing the risks related to conducting, funding, and publishing this research, as well as

criteria for reasonableness (proportionality, fairness, consistency, and epistemic responsibility). Earlier I raised the issue of whether some types of research are so dangerous that they should be banned. To address this issue, it is useful to distinguish between (a) the risks of conducting the research; and (b) the risks of the knowledge generated by the research. Clearly, there are some types of research that should not be permitted because conducting the research would impose unreasonable risks on human or animal research subjects, laboratory workers, or communities (Shamoo and Resnik 2015). Some types of genetic engineering experiments, such as human germline genome editing, might be prohibited for reasons of this sort.15 Also, genetic engineering experiments involving dangerous pathogens conducted before the development of biosafety standards could have been prohibited for similar reasons.16 However, most types of dual use biomedical research projects will not be so dangerous to conduct that we would say they should be banned. Indeed, for many dual use research projects, risks do not become a significant concern until after the results are publicly disseminated. For example, the study by Wein and Liu (2005), discussed above, raised no risk issues when it was conducted because it involved only mathematical modeling. The risks pertained to the knowledge or information generated by the research, not the research itself. Could knowledge (or opinion) be so dangerous that it should be banned? While some repressive and authoritarian regimes17 have thought so, forbidden knowledge18 is fundamentally contrary to the moral and political ideals of liberal democratic societies (Rawls 2005), as well as to laws in those societies that protect freedom of speech, thought, assembly, petition, religion, and the press.

15 See discussion in Chapter 7.
16 See Chapter 7 for discussion of these risks.
17 Examples include the former Soviet Union and North Korea.
18 The idea of forbidden knowledge is thousands of years old. According to the book of Genesis in the Bible, Adam and Eve committed a sin by eating the forbidden fruit from the tree of knowledge of good and evil.

The best way to deal with dangerous forms of knowledge or opinion, such as racist or sexist theories, is through public debate and criticism, not suppression (Mill 1978). Banning knowledge would be an unreasonable way of managing the risks that knowledge may bring. However, just because knowledge is not forbidden does not mean that research leading to that knowledge should be funded. It is reasonable and morally responsible for citizens to decide not to fund research that poses serious risks to individuals or society, is contrary to prevailing moral values, or both (Kitcher 2001; Resnik 2009).

Turning to the question of whether dual use research should be funded, the answer depends on who provides the funding. Private corporations, such as pharmaceutical or biotechnology companies, fund research in order to develop products or services that promote their profits (Resnik 2007). A pharmaceutical company developing influenza vaccines might fund gain of function experiments with dual use potential to better understand how influenza viruses mutate, with the

ultimate goal of marketing safe and effective flu vaccines or treatments (Schultz-Cherry et al. 2014). Philanthropic organizations, such as the Wellcome Trust or the Bill and Melinda Gates Foundation, fund research that accords with their goals. A private charity might also fund gain of function experiments as part of a research program for vaccine or drug development. Government funding agencies, such as the NIH, support research with an eye toward promoting social goods, such as public health or the advancement of scientific knowledge (Resnik 2009). As we have already seen, the NIH has funded dual use research in virology, microbiology, genomics, and other areas of inquiry and is planning to continue funding such research.

Public funding decisions concerning research with potential dual use applications can be difficult to make for four reasons. First, dual use research often has important benefits as well as risks. Not funding valuable research may be a high price to pay for avoiding its risks. Second, the risks of research can often be minimized by implementing rigorous biosafety or biosecurity measures. Although most government agencies require investigators to publish funded research, they could decide that research should be published only in redacted form or should be classified. Third, it can be difficult to predict how research will unfold. A study that seems to have no dual use implications may generate unexpected dual use results. For example, the authors of the mousepox study mentioned earlier (Jackson et al. 2001) did not expect their experiments to generate results that could be used to make a bioweapon. Conversely, a study that seems to have dual use implications may not produce results that could be used to make a bioweapon, due to difficulties in reproducing the results. Fourth, research may have indirect or remote applications for dual use. For example, a study of antibiotic resistance might not be immediately used to make a bioweapon, but it might describe methods, results, or hypotheses that could facilitate bioweapons development.

Government agencies usually make funding decisions based on the recommendations of committees of experts in the relevant area of research. These committees evaluate research proposals based on various criteria, such as the originality and scientific significance of the research, the rigor of the methodology, the qualifications of the investigator, institutional support, compliance with regulations and policies, and the social value of the research (Shamoo and Resnik 2015). Dual use concerns would normally arise when review committees consider the social value of the research. As noted earlier, the NIH has adopted additional criteria for reviewing funding proposals that involve the creation, transfer, or use of enhanced PPPs (see Box 8.5). The NIH has decided not to avoid the risks of this research but to minimize risks through careful review and oversight.

Assuming that a policy like the NIH's balances risks and benefits proportionally, one still needs to ask whether it manages risks and benefits fairly. An important question related to the fairness of dual use research decisions is whether people who are impacted by these decisions have meaningful input into them (Selgelid 2016). Those who live close to laboratories conducting experiments with dangerous pathogens or toxins should have meaningful input into research decisions that directly affect them. However, potential impacts may extend beyond people in the local community to potentially the entire world, since a pandemic caused by a bioweapon or accidental


release of a dangerous pathogen would have global impacts (Selgelid 2016). Clearly, public engagement should play an essential role in dual use research policy decisions. However, it may be difficult to conduct to effective public engagement, since the public could include the entire world population. While it is not possible for funding agencies to engage the entire human race, to promote fairness in decisionmaking, they could conduct broad public education campaigns and seek input from local populations and diverse audiences representing constituencies around the globe. Epistemic responsibility would also be important in managing the risks of funded dual use research. Epistemic responsibility would recommend that funding decisions as well as subsequent ones (e.g. publication) are based on the best available scientific data and evidence for managing the risks dual use research, especially risks related to biosafety and biosecurity.19 Decisions could be revised based on new evidence or data. Some have argued, for example, that knowledge we have gained from the COVID-19 pandemic changes our risk assessment of gain of function experiments with PPPs because it shows how easily a PPP with an R0 > 5 cause trigger a pandemic once it leaves the lab and enters the human population (Imperiale and Casadevall 2020). Once research has been completed, publication decisions would be paramount. As noted earlier, a federal agency could decide to classify research before it begins. Assuming that research has not been classified, scientists will need to decide whether it should be published. As noted earlier, National Security Decision Directive 189 holds that the results of basic and applied research should be widely disseminated. Also, funding agencies require investigators to publish their research and share data, materials, and methods (Shamoo and Resnik 2015). 
Nevertheless, not publishing some or all of the research because of biosafety or biosecurity concerns would still be an option that would be consistent with funding agency guidance (United States Government 2017; National Institutes of Health 2017b). Publication decisions could be made by scientists, funding agencies, or journals. In the case of the novel botulinum toxin research discussed earlier, the scientists decided not to publish the genome sequence that codes for the toxin. In the H5N1 research case, the NIH was prepared to follow the NSABB's initial recommendation to redact portions of Fouchier's paper. Scientific journals have also made important decisions concerning publication, as illustrated by the H5N1 case and the milk supply contamination case. Some scientific journals have adopted policies that require an additional level of review for submissions identified by the editors or reviewers as raising dual use concerns (Resnik et al. 2011). A survey of 155 biomedical journals published in 2011 found that only 7.7% of responding journals had a dual use review policy and that only 5.5% had experience with reviewing dual use submissions. The journal's impact factor and previous experience with reviewing dual use research were positively

19 It would be wise to obtain information from individuals with knowledge and expertise related to national and global security, which the NSABB has done. Since some of this information may be classified, some of the deliberations concerning funding of dual use research may need to be confidential.


8 Dual Use Research in the Biomedical Sciences

associated with having a dual use review policy. A survey of the editors of 127 biomedical journals published in 2012 found that only 8.7% of respondents had experience with reviewing dual use submissions and that no respondents had ever refused to publish a paper for biosecurity reasons. 74.8% of respondents agreed that editors have a responsibility to address biosecurity risks (Patrone et al. 2012). Although these surveys indicate that most journals do not have dual use review policies, it is likely that more journals now have such policies, since these studies are more than eight years old, and dual use research issues have become more well-known since that time. Additional follow-up studies of journal dual use review policies could provide more up-to-date information.

The PP would recommend that publication decisions balance the risks and benefits of the publication proportionally. One of the key problems with balancing benefits and risks of publication is that the viable options for managing risks are limited. The three basic options would seem to be: (1) publish the research; (2) don't publish the research and keep it secret; and (3) publish the research in redacted form and make redacted information available only to responsible scientists.20 However, the first two options have limitations, since publishing the research may not adequately minimize its risks, and not publishing it may deprive science and society of important benefits (Resnik 2013). The third option, redaction, may balance risks and benefits proportionally, but it has significant problems.

The first problem with redacted publication is that it may prevent other scientists from having access to the information needed to reproduce and validate the results. One of the key requirements for scientific publication is that the article should contain the information needed for other scientists to reproduce and validate the results (Shamoo and Resnik 2015).
This requirement is important to ensure that published research is objective and reliable. Redacted publication is contrary to this important scientific norm. The second problem, which we discussed earlier, is that redacted publication may not prevent scientific information from being disclosed publicly because the redacted information may still be available to the public under FOIA laws if the research is government funded. If Fouchier's research had been published in redacted form, for example, it might have been possible for a terrorist to obtain the redacted information through a FOIA request if the information were needed to validate the results. A new exemption might need to be added to FOIA laws to address redacting information from scientific publications that may pose a threat to national security. The third problem is that the scientific community does not currently have an effective system for handling redacted publication (Casadevall et al. 2013; Resnik 2013). To make redacted publication work, several things would need to be done. First, there would need to be a safe and secure place for storing and maintaining

20 In theory, there is a fourth option, namely, publish the research in an obscure journal that only specialists in the field will know about. While this may have been a viable option before the advent of the internet, it is no longer, because people can easily use search engines to find papers published in journals that are accessible online.


the full publication. Second, there would need to be a way of making redacted information available to individuals who should have access to it, such as responsible scientists, public health officials, or law enforcement or intelligence agencies. Third, individuals who are granted access would need to be properly vetted to ensure that they are well-intentioned (i.e. not terrorists) and responsible. Vetting might need to include detailed background checks equivalent to those used to give individuals access to classified information. Vetting could take considerable time and could cost a great deal of money.21 Fourth, there would need to be some oversight of the people who receive the sensitive information to ensure that they do not share it with people who are not supposed to have access to it. Since these tasks require considerable time, money, and coordination, they are probably too much for journals to undertake and may be best handled by an organization with sufficient resources, such as the NSABB or NIH.

Concerning the other reasonableness criteria, it would be important for researchers and journal editors to strive for consistency related to publication decisions to maintain the trust of the scientific community and the public. Consistency may be difficult to achieve, however, due to the heterogeneity of dual use research. The risks and benefits of a gain of function genetic engineering experiment in virology, for example, may be very different from the risks and benefits of a statistical modelling study of how to contaminate the US milk supply. Researchers and journal editors may need to make risk/benefit determinations on a case-by-case basis, since it is unlikely that they will be able to develop an algorithm or formula for balancing risks and benefits. Policies developed by journals, institutions, and funding agencies may help to promote consistency in dual use publication decisions.
Epistemic responsibility would also be an important requirement for publication decisions, since these decisions should be based on up-to-date information and knowledge concerning the benefits and risks of publication. Additional research on publication policies and guidelines could also help to promote epistemic responsibility.

8.7 Conclusion

In this chapter I have applied the PP to dual use issues related to biomedical research and offered some recommendations for scientists, journal editors, funding agencies, and policymakers. While many difficult problems concerning dual use research still remain, the PP can help to address them in a reasonable way. In other chapters in this book, I have argued that it is reasonable to change decision-making approaches as conditions warrant. For example, one might use the PP when scientific and moral uncertainty are high and then switch to EUT as a result of acquiring more information or resolving value conflicts. However, it is unlikely that EUT will become a useful

21 The cost of a background check to obtain a security clearance ranges from about $250 for a secret clearance to $4,000 for a top-secret clearance (Contract Professionals Incorporated 2019).


approach to making decisions concerning dual use research for the foreseeable future, given its high degree of scientific and moral uncertainty.

References

Alberts, B. 2005. Modeling Attacks on the Food Supply. Proceedings of the National Academy of Sciences of the United States of America 102 (28): 9737–9738.
Arnon, S.S., R. Schechter, T.V. Inglesby, D.A. Henderson, J.G. Bartlett, M.S. Ascher, E. Eitzen, A.D. Fine, J. Hauer, M. Layton, S. Lillibridge, M.T. Osterholm, T. O'Toole, G. Parker, T.M. Perl, P.K. Russell, D.L. Swerdlow, and K. Tonat. 2001. Working Group on Civilian Biodefense: Botulinum Toxin as a Biological Weapon: Medical and Public Health Management. Journal of the American Medical Association 285 (8): 1059–1070.
Baron, J.A., and C.T. Dienes. 2016. Constitutional Law, 6th ed. St. Paul, MN: West Publishing.
Boddie, C., M. Watson, G. Ackerman, and G.K. Gronvall. 2015. Assessing the Bioweapons Threat. Science 349 (6250): 792–793.
Bovsun, M. 2013. 750 Sickened in Oregon Restaurants as Cult Known as the Rajneeshees Spread Salmonella in Town of the Dalles. New York Daily News, June 15, 2013. Available at: https://www.nydailynews.com/news/justice-story/guru-poison-bioterrorrists-spread-salmonella-oregon-article-1.1373864. Accessed 18 Jan 2021.
Burton, F., and S. Stewart. 2008. Busting the Anthrax Myth. Stratfor, July 30, 2008. Available at: https://worldview.stratfor.com/article/busting-anthrax-myth. Accessed 18 Jan 2021.
Casadevall, A., L. Enquist, M.J. Imperiale, P. Keim, M.T. Osterholm, and D.A. Relman. 2013. Redaction of Sensitive Data in the Publication of Dual Use Research of Concern. mBio 5 (1): e00991–13.
Cello, J., A. Paul, and E. Wimmer. 2002. Chemical Synthesis of Poliovirus cDNA: Generation of Infectious Virus in the Absence of Natural Template. Science 297 (5583): 1016–1018.
Centers for Disease Control and Prevention. 2020. Select Agents and Toxins. Available at: https://www.selectagents.gov/SelectAgentsandToxins.html. Accessed 18 Jan 2021.
Contract Professionals Incorporated. 2019. The Big Business of Security Clearances. Available at: https://www.cpijobs.com/veteran-resources/the-big-business-of-security-clearances/. Accessed 18 Jan 2021.
Dover, N., J.R. Barash, K.K. Hill, G. Xie, and S.S. Arnon. 2014. Molecular Characterization of a Novel Botulinum Neurotoxin Type H Gene. Journal of Infectious Diseases 209 (2): 192–202.
Ezell, B.C., S.P. Bennett, D. von Winterfeldt, J. Sokolowski, and A.J. Collins. 2010. Probabilistic Risk Analysis and Terrorism Risk. Risk Analysis 30 (4): 575–589.
Fauci, A.S. 2012. Research on Highly Pathogenic H5N1 Influenza Virus: The Way Forward. mBio 3 (5): e00359–12.
Flight, C. 2011. Silent Weapon: Smallpox and Biological Warfare. BBC, February 17, 2011. Available at: http://www.bbc.co.uk/history/worldwars/coldwar/pox_weapon_01.shtml. Accessed 31 Mar 2020.
Fouchier, R.A. 2015. Studies on Influenza Virus Transmission Between Ferrets: The Public Health Risks Revisited. mBio 6: e02560–14.
Freedom House. 2017. Freedom in the World 2017. Available at: https://freedomhouse.org/sites/default/files/FH_FIW_2017_Report_Final.pdf. Accessed 19 Jan 2021.
Frischknecht, F. 2003. The History of Biological Warfare: Human Experimentation, Modern Nightmares and Lone Madmen in the Twentieth Century. EMBO Reports 4 Spec No (Suppl 1): S47–S52.
Herfst, S., E.J. Schrauwen, M. Linster, S. Chutinimitkul, E. de Wit, V.J. Munster, E.M. Sorrell, T.M. Bestebroer, D.F. Burke, D.J. Smith, G.F. Rimmelzwaan, A.D. Osterhaus, and R.A. Fouchier. 2012. Airborne Transmission of Influenza A/H5N1 Virus Between Ferrets. Science 336 (6088): 1534–1541.
Imai, M., T. Watanabe, M. Hatta, S.C. Das, M. Ozawa, K. Shinya, G. Zhong, A. Hanson, H. Katsura, S. Watanabe, C. Li, E. Kawakami, S. Yamada, M. Kiso, Y. Suzuki, E.A. Maher, G. Neumann, and Y. Kawaoka. 2012. Experimental Adaptation of an Influenza H5 HA Confers Respiratory Droplet Transmission to a Reassortant H5 HA/H1N1 Virus in Ferrets. Nature 486 (7403): 420–428.
Imperiale, M.J., and A. Casadevall. 2020. Rethinking Gain-of-Function Experiments in the Context of the COVID-19 Pandemic. mBio 11 (4): e01868–20.
Inglesby, T.V., D.A. Henderson, J.G. Bartlett, M.S. Ascher, E. Eitzen, A.M. Friedlander, J. Hauer, J. McDade, M.T. Osterholm, T. O'Toole, G. Parker, T.M. Perl, P.K. Russell, and K. Tonat. 1999. Anthrax as a Biological Weapon: Medical and Public Health Management: Working Group on Civilian Biodefense. Journal of the American Medical Association 281 (18): 1735–1745.
Jackson, R., A. Ramsay, C. Christensen, S. Beaton, D. Hall, and I. Ramshaw. 2001. Expression of Mouse Interleukin-4 by a Recombinant Ectromelia Virus Suppresses Cytolytic Lymphocyte Responses and Overcomes Genetic Resistance to Mousepox. Journal of Virology 75 (3): 1205–1210.
Kalu, M.C. 2018. Birth of the Black Plague: The Mongol Siege on Caffa. War History Online, July 28, 2018. Available at: https://www.warhistoryonline.com/instant-articles/mongol-siege-caffa-black-plague.html. Accessed 19 Jan 2021.
Kitcher, P. 2001. Science, Truth, and Democracy. New York, NY: Oxford University Press.
Klotz, L.C., and E.J. Sylvester. 2012. The Unacceptable Risks of a Man-Made Pandemic. Bulletin of the Atomic Scientists, August 7, 2012. Available at: https://thebulletin.org/2012/08/the-unacceptable-risks-of-a-man-made-pandemic/. Accessed 19 Jan 2021.
Lipsitch, M., and B.R. Bloom. 2012. Rethinking Biosafety in Research on Potential Pandemic Pathogens. mBio 3: e00360–12.
Lipsitch, M., and A.P. Galvani. 2014. Ethical Alternatives to Experiments with Novel Potential Pandemic Pathogens. PLoS Medicine 11: e1001646.
Mill, J.S. 1978 [1859]. On Liberty, ed. E. Rapaport. Indianapolis, IN: Hackett.
Miller, S., and M.J. Selgelid. 2007. Ethical and Philosophical Consideration of the Dual-Use Dilemma in the Biological Sciences. Science and Engineering Ethics 13 (4): 523–580.
National Freedom of Information Coalition. 2020. International FOI Laws. Available at: https://www.nfoic.org/international-foi-laws. Accessed 19 Jan 2021.
National Institutes of Health. 2014. Statement on Funding Pause on Certain Types of Gain-of-Function Research. October 16, 2014. Available at: https://www.nih.gov/about-nih/who-we-are/nih-director/statements/statement-funding-pause-certain-types-gain-function-research. Accessed 19 Jan 2021.
National Institutes of Health. 2017a. NIH Lifts Funding Pause on Gain-of-Function Research. December 19, 2017. Available at: https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-lifts-funding-pause-gain-function-research. Accessed 19 Jan 2021.
National Institutes of Health. 2017b. Framework for Guiding Funding Decisions About Proposed Research Involving Enhanced Potential Pandemic Pathogens. Available at: https://www.phe.gov/s3/dualuse/Documents/P3CO.pdf. Accessed 19 Jan 2021.
National Institutes of Health. 2020. Dual Use Research of Concern. Available at: https://osp.od.nih.gov/biotechnology/dual-use-research-of-concern/. Accessed 19 Jan 2021.
National Research Council. 2004. Biotechnology in the Age of Terrorism. Washington, DC: National Academies Press.
National Research Council. 2008. Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change. Washington, DC: National Academies Press.
National Science Advisory Board for Biosecurity. 2020. About Us. Available at: https://osp.od.nih.gov/biotechnology/national-science-advisory-board-for-biosecurity-nsabb/. Accessed 19 Jan 2021.
NobelPrize.org. 2020. The Man Behind the Prize—Alfred Nobel. Available at: https://www.nobelprize.org/alfred-nobel/. Accessed 19 Jan 2021.
Nye, L. 2016. A Future Kentucky Governor Attempted Biological Warfare in the Civil War. We Are the Mighty, July 26, 2016. Available at: https://www.wearethemighty.com/articles/a-future-kentucky-governor-attempted-biological-warfare-in-the-civil-war. Accessed 19 Jan 2021.
Orend, B. 2006. The Morality of War. Peterborough, ON: Broadview Press.
Patrone, D., D.B. Resnik, and L. Chin. 2012. Biosecurity and the Review and Publication of Dual-Use Research of Concern. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 10 (3): 290–298.
Patterson, K.B., and T. Runge. 2002. Smallpox and the Native American. American Journal of the Medical Sciences 323 (4): 216–222.
Rawls, J. 2005. Political Liberalism, 2nd ed. New York, NY: Columbia University Press.
Relman, D.A. 2014. "Inconvenient Truths" in the Pursuit of Scientific Knowledge and Public Health. Journal of Infectious Diseases 209 (2): 170–172.
Resnik, D.B. 2007. The Price of Truth: How Money Affects the Norms of Science. New York, NY: Oxford University Press.
Resnik, D.B. 2009. Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York, NY: Oxford University Press.
Resnik, D.B. 2011. Ethical Issues Concerning Transgenic Animals in Biomedical Research. In The Ethics of Animal Research: Exploring the Controversy, ed. J. Garrett, 169–179. Cambridge, MA: MIT Press.
Resnik, D.B. 2013. H5N1 Avian Flu Research and the Ethics of Knowledge. Hastings Center Report 43 (2): 22–33.
Resnik, D.B. 2017. Dual Use Research and Inductive Risk. In Exploring Inductive Risk, ed. K.C. Elliott and T. Richards, 59–78. New York, NY: Oxford University Press.
Resnik, D.B., D.D. Barner, and G.E. Dinse. 2011. Dual-Use Review Policies of Biomedical Research Journals. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 9 (1): 49–54.
Riedel, S. 2004. Biological Warfare and Bioterrorism: A Historical Review. Proceedings (Baylor University Medical Center) 17 (4): 400–406.
Robertson, J.A. 1977. The Scientist's Right to Research: A Constitutional Analysis. Southern California Law Review 51: 1203–1278.
Rosengard, A., Y. Liu, Z. Nie, and R. Jimenez. 2002. Variola Virus Immune Evasion Design: Expression of a Highly Efficient Inhibitor of Human Complement. Proceedings of the National Academy of Sciences of the United States of America 99 (13): 8808–8813.
Russell, C.A., J.M. Fonville, A.E. Brown, D.F. Burke, D.L. Smith, S.L. James, S. Herfst, S. van Boheemen, M. Linster, E.J. Schrauwen, L. Katzelnick, A. Mosterín, T. Kuiken, E. Maher, G. Neumann, A.D. Osterhaus, Y. Kawaoka, R.A. Fouchier, and D.J. Smith. 2012. The Potential for Respiratory Droplet-Transmissible A/H5N1 Influenza Virus to Evolve in a Mammalian Host. Science 336 (6088): 1541–1547.
Salama, S., and L. Hansell. 2005. Does Intent Equal Capability? Al-Qaeda and Weapons of Mass Destruction. Nonproliferation Review 12 (3): 615–653.
Schultz-Cherry, S., R.J. Webby, R.G. Webster, A. Kelso, I.G. Barr, J.W. McCauley, R.S. Daniels, D. Wang, Y. Shu, E. Nobusawa, S. Itamura, M. Tashiro, Y. Harada, S. Watanabe, T. Odagiri, Z. Ye, G. Grohmann, R. Harvey, O. Engelhardt, D. Smith, K. Hamilton, F. Claes, and G. Dauphin. 2014. Influenza Gain-of-Function Experiments: Their Role in Vaccine Virus Recommendation and Pandemic Preparedness. mBio 5 (6): e02430–14.
Selgelid, M.J. 2016. Gain-of-Function Research: Ethical Analysis. Science and Engineering Ethics 22 (4): 923–964.
Shachtman, N. 2011. Anthrax Redux: Did the Feds Nab the Wrong Guy? Brookings, March 3, 2011. Available at: https://www.brookings.edu/articles/anthrax-redux-did-the-feds-nab-the-wrong-guy/.
Shamoo, A.E., and D.B. Resnik. 2015. Responsible Conduct of Research, 3rd ed. New York, NY: Oxford University Press.
Sharp, P.A. 2005. 1918 Flu and Responsible Science. Science 310 (5745): 17.
Shaw, R. 2016. Export Controls and the Life Sciences: Controversy or Opportunity? Innovations in the Life Sciences' Approach to Export Control Suggest There are Ways to Disrupt Biological Weapons Development By Rogue States and Terrorist Groups Without Impeding Research. EMBO Reports 17 (4): 474–480.
Taubenberger, J.K., A.H. Reid, R.M. Lourens, R. Wang, G. Jin, and T.G. Fanning. 2005. Characterization of the 1918 Influenza Virus Polymerase Genes. Nature 437 (7060): 889–893.
Thatcher, G. 1988. In War, Is One Type of Killing More Immoral Than Another? Christian Science Monitor, December 13, 1988. Available at: https://www.csmonitor.com/1988/1213/zmoral.html. Accessed 20 Jan 2021.
Tumpey, T.M., C.F. Basler, P.V. Aguilar, H. Zeng, A. Solórzano, D.E. Swayne, N.J. Cox, J.M. Katz, J.K. Taubenberger, P. Palese, and A. García-Sastre. 2005. Characterization of the Reconstructed 1918 Spanish Influenza Pandemic Virus. Science 310 (5745): 77–80.
United Nations. 2020. Biological Weapons Convention. Available at: https://www.un.org/disarmament/biological-weapons. Accessed 20 Jan 2021.
United Nations Universal Declaration of Human Rights. 1948. Available at: https://www.un.org/en/universal-declaration-human-rights/. Accessed 20 Jan 2021.
United States Code. 2012. 18 USC 798. Disclosure of Classified Information.
United States Constitution. 1789.
United States Government. 2014. United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern. September 24, 2014. Available at: https://www.phe.gov/s3/dualuse/Documents/durc-policy.pdf. Accessed 20 Jan 2021.
Urry, L.A., M.L. Cain, S.A. Wasserman, P.V. Minorsky, and J.B. Reece. 2016. Campbell Biology, 11th ed. New York, NY: Pearson.
Vladan, R., B. Goran, and J. Larisa. 2012. A Mathematical Model of Bioterrorist Attack Risk Assessment. Journal of Bioterrorism and Biodefense 3: 114.
Walden, J., and E.K. Kaplan. 2004. Estimating Time and Size of Bioterror Attack. Emerging Infectious Diseases 10 (7): 1202–1205.
Webb, G.F. 2003. A Silent Bomb: The Risk of Anthrax as a Weapon of Mass Destruction. Proceedings of the National Academy of Sciences of the United States of America 100 (8): 4355–4356.
Wein, L., and Y. Liu. 2005. Analyzing a Bioterror Attack on the Food Supply: The Case of Botulinum Toxin in Milk. Proceedings of the National Academy of Sciences of the United States of America 102 (28): 9984–9989.
Wu, Z., and J.M. McGoogan. 2020. Characteristics of and Important Lessons from the Coronavirus Disease 2019 (COVID-19) Outbreak in China: Summary of a Report of 72,314 Cases from the Chinese Center for Disease Control and Prevention. Journal of the American Medical Association 323 (13): 1239–1242.
Yale News. 2008. In Memoriam: Arthur Galston, Plant Biologist, Fought Use of Agent Orange. Available at: https://news.yale.edu/2008/07/18/memoriam-arthur-galston-plant-biologist-fought-use-agent-orange. Accessed 20 Jan 2021.

Chapter 9

Public Health Emergencies

As I write this chapter, the entire world is reeling from the COVID-19 pandemic, one of the worst disease outbreaks in modern history. COVID-19 is a respiratory infection caused by the novel coronavirus SARS-CoV-2 (see Fig. 9.1). The pandemic is thought to have started in December 2019 in the wet markets of Wuhan, China, when a virus that normally resides in horseshoe bats acquired the mutations1 necessary to infect human beings (Andersen et al. 2020; Zhou et al. 2020).2 The virus, which is transmissible by respiratory water droplets, quickly spread from China to Europe, the US, and other parts of the world. As of January 14, 2021, there were 92.6 million confirmed COVID-19 cases and nearly 2 million deaths, for a case fatality rate of 2.1%.3 The US has had the highest number of COVID-19 deaths at 385,855,4 followed by Brazil (205,964), India (151,727), Mexico (136,917),

1 The mutations were in the gene that codes for the spike protein and its receptor binding domain. The spike protein enables the virus to bind to human lung epithelial cells (Andersen et al. 2020). See Fig. 9.1.
2 There has been speculation that SARS-CoV-2 is a genetically engineered virus that escaped from the Wuhan Institute of Virology, a BSL-4 laboratory in Wuhan, China, where researchers had been working with coronaviruses. However, researchers from the lab have denied this claim and there is no direct evidence to support it (Bryner 2020). Most scientists believe the virus mutated naturally, but some dispute this claim and argue that a laboratory accident is a plausible scenario that cannot be ruled out (Andersen et al. 2020; Latham and Wilson 2020).
3 The case fatality rate is the number of deaths divided by the number of confirmed cases. The infection fatality rate is the number of infected people who die divided by the total number of people infected. The infection fatality rate may be less than 1% due to underreporting of infections. Infections have been underreported due to lack of testing for the virus. Most individuals who are infected show no signs or symptoms of illness or have a mild case and do not come forward to be tested for the virus (Baggett et al. 2020; Harmon 2020; Schwalbe 2020; Sutton et al. 2020; Vogel 2020).
4 COVID-19 was the leading cause of death in the US in 2020 (Woolf et al. 2021).
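The case fatality rate quoted above, and footnote 3's distinction between the case fatality rate and the fatality rate among all infected people, can be sketched in a few lines of Python. This is only an illustration: the death count of 1.95 million stands in for the "nearly 2 million" in the text, and the 1-in-5 reporting fraction is an arbitrary assumption, not a figure from the text.

```python
# Case fatality rate (CFR): deaths divided by *confirmed* cases.
confirmed_cases = 92_600_000   # global confirmed cases, Jan 14, 2021
deaths = 1_950_000             # "nearly 2 million" deaths (illustrative)

cfr = deaths / confirmed_cases
print(f"CFR: {cfr:.1%}")       # 2.1%, as in the text

# The fatality rate among all infected people divides by *all*
# infections, including unreported ones, so it is lower than the CFR.
# Illustrative assumption: only 1 in 5 infections is ever confirmed.
reporting_fraction = 0.2
infections = confirmed_cases / reporting_fraction
ifr = deaths / infections
print(f"Fatality rate among all infected (assumed): {ifr:.2%}")  # 0.42%
```

The gap between the two numbers is why underreporting matters: the more infections go untested, the further the true fatality rate falls below the CFR computed from confirmed cases.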

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_9



Fig. 9.1 SARS-CoV-2 structure (Singh 2020, Creative Commons license)

the UK (84,915), Italy (80,848), France (69,168), Russia (63,016), Iran (56,538), and Spain (52,878) (Johns Hopkins University 2021).5 The R0 for COVID-19 is about 5.7, which makes it a highly contagious disease (Sanche et al. 2020). By comparison, the 1918 pandemic flu had an R0 of 2.4 to 4.3 and measles has an R0 of 12 to 18 (Vynnycky et al. 2007). Scientists cannot accurately predict, at this point in time, when the pandemic will end or how many people the virus will infect or kill, but the global death toll could reach several million (New Scientist 2020).6

The pandemic is not likely to end until the human race develops herd immunity, which can occur only if a large percentage of the population develops immunity as a result of surviving an infection or receiving a vaccine. It is not known what percentage of the population needs to be immune to COVID-19 for herd immunity to occur, but infectious disease specialists have placed that number at about 70% (McNeil Jr. 2020). Several vaccines have been developed

5 Notably absent from this list is China, which has a population of 1.4 billion people and only 4,801 reported COVID-19 deaths as of the writing of this book (Johns Hopkins University 2021), for a mortality rate of 0.34 deaths per 100,000 people, which is ten to a hundred times lower than most other countries. Cremation data suggest that the Chinese government may be suppressing COVID-19 death data (Thomas 2020).
6 As of January 14, 2021, there were 385,855 COVID-19 deaths in the US (Johns Hopkins University 2021).
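The per-capita comparison in footnote 5 is a simple rate calculation, sketched below. The Chinese figures come from the text; the US population estimate of about 331 million is my assumption for illustration, not a number from the book.

```python
def deaths_per_100k(deaths: int, population: int) -> float:
    """Crude mortality rate: deaths per 100,000 people."""
    return deaths / population * 100_000

# China's figures are from footnote 5; the US population is assumed.
china = deaths_per_100k(4_801, 1_400_000_000)
us = deaths_per_100k(385_855, 331_000_000)   # ~331 million (assumption)

print(f"China: {china:.2f} deaths per 100,000")  # 0.34, as in footnote 5
print(f"US:    {us:.1f} deaths per 100,000")     # 116.6
```

Normalizing by population is what makes the footnote's comparison meaningful: raw death counts cannot be compared directly between countries whose populations differ by a factor of four.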


and tested and are currently being distributed (Centers for Disease Control and Prevention 2020i).

The virus poses a significant risk to people who are elderly or have underlying health conditions7 and poses very little risk to children (Wu and McGoogan 2020). Children comprised less than 3% of the initial 72,314 COVID-19 cases in China, and 90% of children who were infected by the virus showed no symptoms of the disease or had only a mild case (Dong et al. 2020). The case fatality rate was 14.8% for the 80 years or older age group and 8% for the 70 to 79 age group (Wu and McGoogan 2020). Data provided by the New York City Health Department as of April 14, 2020 showed that only three (0.04%) of the 6,839 COVID-19 deaths were among children (age 0–17) and that all three of these patients had underlying health conditions. 3,263 deaths (47%) were among people 75 years old or older, and most of these patients had underlying health conditions (Worldometer 2020). CDC data support these trends: 11% of the people aged 75 or older who died in the US in 2020 had COVID-19 as a cause of death, as compared to only 0.4% of children aged 14 or younger (Centers for Disease Control and Prevention 2021b).

Public health emergencies, such as the COVID-19 pandemic, present government officials with difficult choices that may involve conflicts among fundamental values, such as protecting public health, respecting human rights, promoting economic growth, and protecting the most vulnerable members of society (Selgelid 2005, 2009; Zack 2009; Annas 2010; Afolabi 2018; Rieder et al. 2020; Munthe et al. 2020). These choices may arise when preparing for emergencies or responding to them. Very often, societies cannot avoid the risks created by these emergencies, and the best that can be done is to minimize or mitigate risks. Since these emergencies often involve a high degree of scientific and moral uncertainty, they are good test cases for the utility of the PP as a decision-making tool.
In this chapter, I will apply the PP to some policy issues that have arisen in the COVID-19 pandemic. Issues similar to the ones discussed here also occur in other public health emergencies.
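The R0 values and herd immunity discussion above can be connected by the classic epidemiological threshold approximation, immune fraction = 1 - 1/R0, which assumes a homogeneously mixing population. Note that this simple formula yields about 82% for an R0 of 5.7, higher than the roughly 70% expert estimate cited above, which reflects additional considerations; the sketch below only illustrates the formula itself.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Classic threshold: the fraction of the population that must be
    immune so that each case infects, on average, fewer than one
    susceptible person (assumes homogeneous mixing)."""
    return 1 - 1 / r0

# R0 values quoted in this chapter
for disease, r0 in [("COVID-19", 5.7), ("1918 flu (low)", 2.4),
                    ("1918 flu (high)", 4.3), ("measles", 18)]:
    print(f"{disease}: R0 = {r0}, threshold ~{herd_immunity_threshold(r0):.0%}")
```

The formula makes the intuition behind the measles comparison concrete: the more contagious the disease (the higher R0), the larger the immune fraction needed before transmission chains reliably die out.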

9.1 Public Health Emergencies

While there is no agreed-upon definition of a "public health emergency," events that we call public health emergencies have several common features.8 First, public health emergencies involve significant harm or the threat of significant harm to human life and health. While there is no precise number of lives that must be lost (or potentially lost) for there to be a public health emergency, it is safe to say that generally hundreds, thousands, or even millions of lives may be at stake. Second, public health

7 Underlying health conditions include diabetes, lung disease, cancer, immunodeficiency, heart disease, hypertension, asthma, kidney disease, gastrointestinal disease, and liver disease.
8 In the US, a declaration of a public health emergency by the federal government or state governments can activate a variety of policies designed to respond to the emergency and trigger the release of funds (Public Health Emergency 2020).


emergencies happen suddenly and often unexpectedly. For example, thousands of people die from smoking related illnesses every day, but we would not describe this as a public health emergency, because these deaths generally occur after several decades of smoking. However, if a thousand people die from inhaling smoke from a forest fire in a single day, we would call this a public health emergency because these deaths happened suddenly and unexpectedly. Third, public health emergencies require immediate action. For example, while a nationwide anti-smoking ad campaign may help to reduce smoking-related deaths, waiting a few days to launch the campaign will not make a big difference to public health. However, if hundreds of people are trapped inside a building that is about to collapse, immediate action is required to save lives. Public Health Emergencies can be grouped into three basic types: Natural disasters, such as hurricanes, tornadoes, floods, monsoons, earthquakes, tsunamis, forest fires, volcanic eruptions, blizzards, heat waves, droughts. Natural disasters kill, on average, about 60,000 people per year, or 0.1% of global deaths (Ritchie and Roser 2019a), and cost hundreds of billions of dollars in property damage and economic disruptions. For example, Hurricane Harvey, which made landfall on the coasts of Texas and Louisiana in August 2017, cost $190 billion, making it the most expensive natural disaster in US history (Lanktree 2017). Human disasters, such as bridge collapses, burning buildings, airplane crashes, train crashes, gas explosions, oil and chemical spills, war, revolution, and terrorism. Since 2011, over 500,000 people have died in the Syrian civil war (Human Rights Watch 2019). Over the past decade, terrorist attacks have killed about 21,000 people per year, ranging from 8,000 deaths in 2010 to 44,000 in 2014 (Ritchie et al. 2019).9 As mentioned in Chapter 6, the chemical leak in Bhopal, India in 1984 killed 15,000 people (Taylor 2014). 
Human disasters can also have enormous economic impacts. For example, the Deepwater Horizon oil spill, which occurred in the Gulf of Mexico in 2010, cost $145 billion (Lee et al. 2018).

Disease epidemics or pandemics. We have already mentioned some of history's great pandemics in this book, such as the 1918 pandemic flu, which killed an estimated 50 million people, and the European plague of 1347–1351, which also killed about 50 million people. About 32 million people have died from HIV/AIDS since the pandemic began in the early 1980s (UNAIDS 2019). Over three million people may die in the COVID-19 pandemic before it is finished, which would make it the fourth leading cause of death for this time period (late 2019–2021). In 2017, cardiovascular disease was the leading cause of death globally, followed by cancer. About 6.88 million people died from infectious diseases, including diarrheal diseases, HIV/AIDS, malaria, tuberculosis, and lower respiratory infections (Ritchie and Roser 2019b). However, most of these deaths did not result from officially declared public health emergencies. See Table 9.1.

Actions that address public health emergencies take two different forms:

9 For comparison, about 80 million people died in World War II, including 25 million combatants and 55 million civilians (World Population Review 2020).

9.1 Public Health Emergencies

Table 9.1 Top 25 global causes of death in 2017 (data from Ritchie and Roser 2019b)

1. Cardiovascular diseases: 17.8 million
2. Cancer: 9.56 million
3. Respiratory diseases: 3.91 million
4. Lower respiratory infections: 2.56 million
5. Dementia: 2.51 million
6. Digestive diseases: 2.38 million
7. Neonatal disorders: 1.78 million
8. Diarrheal diseases: 1.57 million
9. Diabetes: 1.37 million
10. Liver diseases: 1.32 million
11. Road injuries: 1.24 million
12. Kidney diseases: 1.23 million
13. Tuberculosis: 1.18 million
14. HIV/AIDS: 0.95 million
15. Suicide: 0.79 million
16. Malaria: 0.62 million
17. Malnutrition: 0.50 million
18. Homicide: 0.41 million
19. Alcohol/drug abuse: 0.36 million
20. Parkinson's disease: 0.34 million
21. Drowning: 0.30 million
22. Meningitis: 0.29 million
23. Maternal disorders: 0.19 million
24. War/conflict: 0.13 million
25. Hepatitis: 0.13 million

Emergency preparedness includes actions or policies to prepare for an emergency or minimize or mitigate its impact, such as:

• stockpiling food, water, medical supplies/equipment, or temporary housing units;
• planning, training, or rehearsing for responses to emergencies;
• educating the public about what to do in an emergency;
• conducting controlled burns or clearing away brush to reduce the risk of forest fires;
• adopting zoning laws to discourage or prevent people from living near areas prone to floods, forest fires, or other natural disasters;
• building dams, sea walls or other structures to stop or control flooding;
• fortifying buildings and other structures to minimize damage from earthquakes;
• conducting fire drills or active shooter drills;
• conducting research related to emergency preparedness and response.


Fig. 9.2 1918 flu pandemic: the Oakland Municipal Auditorium in use as a temporary hospital. The photograph depicts volunteer nurses from the American Red Cross tending influenza sufferers in the Oakland Auditorium, Oakland, California, during the influenza pandemic of 1918. Wikimedia Commons https://commons.wikimedia.org/wiki/File:1918_flu_in_Oakland.jpg, Creative Commons license

Emergency response includes actions or policies to deal with an emergency, such as:

• evacuating or rescuing people from fires, floods, chemical spills, building collapses, and other emergencies;
• securing areas impacted by emergencies;
• providing medical or psychological care for people affected by emergencies;
• providing temporary housing for people displaced by emergencies;
• warning the public about developing or impending emergencies (such as hurricanes or tornadoes);
• isolating individuals with infectious diseases;
• quarantining individuals exposed to infectious diseases;
• tracing and monitoring people who come into contact with people who have infectious diseases (i.e. contact tracing);
• using protective clothing (such as masks and gloves) to prevent transmission of infectious diseases;
• restricting travel to prevent transmission of infectious diseases;


• stopping or limiting social activities (such as work, school, or recreation) to prevent transmission of infectious diseases;
• requiring people to stay at home to prevent the transmission of infectious diseases;
• developing vaccines, treatments, or tests for infectious diseases (Fig. 9.2).

9.2 Ethical and Policy Issues Related to Emergency Preparedness and Response

Emergency preparedness and response raises more ethical and policy issues than can be adequately discussed in this chapter. Rather than survey all of these issues, I will focus on a few that have arisen in the COVID-19 pandemic, since these issues exemplify some of the key ethical dilemmas related to preparing for or responding to public health emergencies (Zack 2009; Annas 2010).

Preventing or slowing disease transmission vs. respecting human rights and promoting economic growth. Isolation, quarantine, social (or physical) distancing, contact tracing, and good hygiene are time-honored measures for preventing or slowing the spread of infectious diseases throughout populations. Isolation and quarantine involve physically separating people from the population because they have a disease (isolation) or may have been exposed to a disease (quarantine). Social distancing involves avoiding close contact with people you do not live with when you encounter them in public and avoiding crowded places and large gatherings (Centers for Disease Control and Prevention 2020d). Good hygiene for diseases transmitted by respiratory water droplets, mucous, or saliva includes washing hands frequently, cleaning and disinfecting surfaces, avoiding touching one's face, and covering one's face with a mask in public (Centers for Disease Control and Prevention 2020e).10 See Fig. 9.3.

During the COVID-19 pandemic, governments went far beyond these time-honored measures and implemented other strategies that significantly restricted human rights and economic and social activity, including stay-at-home orders, travel restrictions (international and intranational), mandatory social distancing, curfews, electronic surveillance of residents' movements and associations, and closures of schools, parks, and non-essential businesses (Amnesty International 2020; Mervosh et al. 2020; Connor 2020).
In Wuhan, China, the government closed off travel to the city and required residents to stay in their homes from mid-January to early April 2020. Over half of China's population was ordered to stay at home for several months (Sudworth 2020). In Italy, the government ordered all residents to stay at home and restricted travel throughout the country, beginning in early March 2020 (Bradley et al. 2020). In February 2020, South Korea implemented wide-ranging foreign travel restrictions, stay-at-home orders, business shutdowns, extensive testing for the virus, mask-wearing requirements, and contact tracing aided by cell-phone apps (Herbst 2020; Klingner 2020; Parker et al. 2020). Singapore implemented travel restrictions, extensive testing, mask-wearing (in public) requirements, and contact tracing, with targeted closures of businesses and schools (Firth 2020). In the US, the federal government began restricting travel from outside the country in early February, and state and municipal governments began closing businesses and schools and issuing stay-at-home orders and mask-wearing requirements in mid-March 2020 (Kelleher 2020; BBC News 2020).

10 Good hygiene for diseases transmitted by blood or sexual contact involves other measures, which are not discussed here.

Fig. 9.3 Stop the spread of germs, Centers for Disease Control and Prevention, https://www.cdc.gov/coronavirus/2019-ncov/travelers/communication-resources.html, public domain


These drastic measures (known collectively as "lockdowns") have substantially restricted human rights11 and have had harsh global economic consequences (Authers 2020; Gostin et al. 2020; Long and van Dam 2020). In the US, the unemployment rate soared from 3.6% in January 2020 to over 19% in April 2020, its highest level since the Great Depression (Lambert 2020; Tappe 2020). However, the unemployment rate began to decline quickly once the lockdowns were lifted and the economy began to recover, and it was 6.7% by the end of 2020 (United States Bureau of Labor and Statistics 2021). US gross domestic product (GDP) declined by 31.4% in the second quarter of 2020 but then increased by 33.1% in the third quarter of 2020 (Bureau of Economic Analysis 2021). Entire sectors of the US economy, such as the restaurant, hospitality, entertainment, sports, and air travel industries, came to a near standstill and have been recovering slowly.

In China, exports dropped by 17.2% in January/February 2020, and GDP for the first quarter of 2020 shrank for the first time since the country began keeping GDP records in 1992 (Segal and Gerstel 2020). Similar economic downturns happened throughout the world. The Dow Jones stock market index lost 37% of its value after the pandemic began but rebounded by the end of 2020, spurred by hopes of a vaccine (World Economic Forum 2020a; Partington and Rushe 2020). Oil prices plummeted due to drastically reduced air and automobile travel.12 COVID-19 has caused a recession that could ultimately cost the global economy two trillion dollars (World Economic Forum 2020b). An estimated 137 million people, mostly in low-income countries, will face acute food insecurity in 2020–2021, an 82% increase from 2019. This dramatic rise in food insecurity is due to the pandemic's impact on food prices and supply chains (World Bank 2020).
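A note on the GDP figures above: the Bureau of Economic Analysis reports quarterly changes as annualized rates, so the 33.1% third-quarter rebound did not return output to its pre-pandemic level. A short sketch of the arithmetic (the function name is mine, introduced for illustration):

```python
# The quarterly GDP changes cited above are annualized rates (BEA convention).
# Converting them to actual quarter-over-quarter changes shows that the
# Q3 2020 rebound did not fully offset the Q2 2020 contraction.

def quarterly_from_annualized(annualized_pct):
    """Convert an annualized quarterly growth rate to the one-quarter change."""
    return (1 + annualized_pct / 100) ** 0.25 - 1

q2 = quarterly_from_annualized(-31.4)   # roughly a 9% drop within Q2
q3 = quarterly_from_annualized(33.1)    # roughly a 7% rise within Q3
net = (1 + q2) * (1 + q3) - 1           # still a few percent below baseline
print(f"Q2 {q2:+.1%}, Q3 {q3:+.1%}, net {net:+.1%}")
```

The asymmetry is the usual one: after a contraction, a larger percentage gain is needed to recover the lost ground.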
Without a doubt, the COVID-19 pandemic would have had serious adverse impacts on the global economy even if governments had not implemented policies that interfered with economic activity, since people would have taken actions to protect themselves from the risks of the disease, such as avoiding public places and forgoing purchases and investments. Also, mortality and morbidity from the disease would have been much higher than they have been, which would have overwhelmed health care systems and led to job losses, absences from work, business closures, and other dire economic impacts. While the lockdowns clearly led to short-term economic hardships for many people, the long-term impacts of COVID-19 on the global economy could have been much worse without lockdowns, according to some. Economists continue to try to understand the precise impact of the lockdowns on the economy (Grigoli and Sandri 2020; Mandel and Veetil 2020; Paine 2020).

11 These rights include rights to freedom of movement, association, and government protest; and rights to privacy, education, and work. Some prefer the term "shelter in place," but the effect of the policy on human rights is the same.
12 Air pollution levels have also gone down, which is a positive effect of the economic decline (Segal and Gerstel 2020).

Regardless of the role that government lockdowns have played in the economic downturn related to the COVID-19 pandemic, the world economy is currently in a deep recession, which could get worse and last several years, depending on how quickly governments, individuals, and businesses can recover from the economic disaster. Companies have been reducing work hours, downsizing, and laying off employees. Many companies, especially smaller ones or larger ones disproportionally impacted by the economic decline, are likely to file for bankruptcy and/or go out of business.

Governments around the world have enacted financial relief packages to soften the financial blow of the pandemic and stimulate economic recovery (Foran et al. 2020; National Conference of State Legislatures 2021). From March to December 2020, the US Congress passed several COVID-19 relief bills totaling $4 trillion (Barone 2020). The bills provided relief to individuals, businesses, and state and local governments and helped to fund COVID-19 public health responses. Other governments have enacted economic relief packages smaller than the US's in absolute terms, though some were larger as a percentage of GDP. While the US COVID-19 relief was 18.3% of GDP, Japan's was 42.2%, Slovenia's was 24.5%, and Sweden's was 20.9% (Barone 2020).

The economic consequences of the COVID-19 pandemic are likely to have significant adverse impacts on public health. Many studies have shown that income is a key social determinant of health (Deaton 2002; Kullar and Chokshi 2018; World Health Organization 2020c). The relationship between wealth and health is so robust that it is known as the wealth-health gradient (Deaton 2002). For example, Chetty and coauthors found that in the US the richest 1% of men live 14.6 years longer than the poorest 1%, and for women the difference is 10.1 years (Chetty et al. 2016). Tsao and coauthors found that raising the minimum wage from $9 to $15 in New York City in 2012 would have resulted in 2800 to 5500 fewer premature deaths (i.e. deaths before average life expectancy) (Tsao et al. 2016).
Lenhart (2019) found that expanding the Earned Income Tax Credit, a tax credit for low-income households with at least two children, increased the percentage of heads of households who report that they are in excellent or very good health by 6.9 to 8.9 percentage points.

There are several explanations for the relationship between income and health. First, having money allows one to afford things that are essential to health, such as food, housing, education, medication, medical care, and recreation (Osborn et al. 2017; Kullar and Chokshi 2018). Second, poverty is a risk factor for mental health problems, such as stress, depression, psychosis, and alcohol and drug abuse (Heflin and Iceland 2009; Elliott 2016; Sohn 2016). Third, poor health can negatively impact income, creating a feedback loop between declining wealth and declining health (Kullar and Chokshi 2018).

An increase in the suicide rate is the most dramatic impact of an economic decline. Numerous studies have documented a strong positive correlation between unemployment and suicide (Jin et al. 1995; Blakely et al. 2003; Nordt et al. 2015). Nordt and co-authors examined suicide and unemployment data from 63 countries from 2000 to 2011 and found that unemployment increases the risk of suicide by 20–30% and that a one-percentage-point rise in the unemployment rate corresponds to a rise of about two suicides per 100,000 people in the suicide rate (Nordt et al. 2015). For example, in 2018 the US had 14.2 suicides per 100,000 people (Centers for Disease Control and Prevention 2020f) and the unemployment rate was about 4.0% (United States Bureau of Labor and Statistics 2021). Nordt's model predicts the US suicide rate should increase to 18.2 suicides per 100,000 people in 2020 because the unemployment rate increased by at least two percentage points. Data for 2020 were not available at the time this book was written, however.

Although suicide remains a major health concern related to the COVID-19 pandemic, the data collected so far do not indicate that suicide rates significantly increased during the pandemic (John et al. 2020; Nye 2020). However, since suicide is influenced by numerous psychological, economic, cultural, and political factors, the situation bears watching. It may be the case that suicide rates increase in some groups (such as those who are most economically disadvantaged or socially isolated) but not in others, or that increases in suicide rates lag behind other effects of the pandemic. The suicide rate may also be higher than indicated by the data because suicide tends to be underreported. Many deaths attributed to drug overdoses, single-vehicle accidents, or other causes may actually be suicides (World Health Organization 2021).

As one can see from the evidence and information reviewed in the preceding paragraphs, there is an inevitable conflict between some of the policies that have been implemented to prevent or slow the transmission of COVID-19 and global economic prosperity, human rights, and health (Rieder et al. 2020; Authers 2020). Moreover, the negative economic effects of these policies will disproportionately impact poor people, since rich people are likely to have the money and resources needed to weather the economic maelstrom. It remains to be seen whether more people die from public health emergency responses that throttle the economy and negatively impact health than would have died from the disease if these policies had not been implemented (Ioannidis 2020).
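The suicide projection discussed above is simple arithmetic. The following sketch reproduces it; the function and the figure of two additional suicides per 100,000 per percentage point are my illustrative reading of Nordt et al.'s reported correlation, not the authors' published model:

```python
# Sketch of the projection discussed above, based on Nordt et al. (2015):
# each one-percentage-point rise in unemployment is associated with
# roughly two additional suicides per 100,000 people per year.

def project_suicide_rate(baseline_per_100k, unemployment_rise_pts,
                         extra_per_point=2.0):
    """Project a suicide rate (per 100,000) after a rise in unemployment."""
    return baseline_per_100k + extra_per_point * unemployment_rise_pts

# US baseline (2018): 14.2 per 100,000; assumed rise of two points in 2020.
print(f"{project_suicide_rate(14.2, 2.0):.1f}")  # 18.2
```

As the text notes, such a linear extrapolation is only a rough expectation; actual suicide rates depend on many other factors.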

9.3 Were Lockdowns a Reasonable Response to the COVID-19 Pandemic?

As we have seen, the COVID-19 pandemic has had, and will continue to have, catastrophic impacts on global public health and the global economy. The question I would like to pose in this section is: were lockdowns a reasonable way of managing the risks of the pandemic?

Before we address this question, it is important to note that government leaders and public health officials had to make policy decisions based on limited data and evidence (Ioannidis 2020). There was a lack of data because the world had never dealt with a pathogen like SARS-CoV-2 in the modern era. Although the human race has experienced other devastating pandemics, such as the 1918 flu, there was no data on how different policy responses, such as lockdowns, had impacted the spread of that pathogen or the economy. Recent disease outbreaks, such as the MERS-CoV epidemic of 2015 and the SARS epidemic of 2003, did not provide data that would be relevant to the COVID-19 pandemic because they were more limited in scope and did not have significant impacts on global travel, commerce, and trade.


Although scientists had developed statistical models that predicted the spread of the disease and the likely effects of different public health responses to the pandemic (see Adam 2020; Chinazzi et al. 2020; Ferguson et al. 2020; Fink 2020; Health Metrics and Evaluation 2020), these models had significant limitations because their predictions could change considerably in response to variations in the information and assumptions used in modelling. The models made assumptions about the biology of the disease (e.g. transmissibility, lethality, and incubation time) and the effects of various interventions (e.g. travel restrictions, stay-at-home orders, and school and business closures). While these models provided useful information for policymakers, they did not yield accurate and precise probability estimates for different outcomes related to COVID-19 responses (Holmdahl and Buckee 2020; Jewell et al. 2020).

Therefore, policymakers did not have enough objective, reliable, accurate, and precise evidence (from empirical studies or statistical modelling) to apply expected utility theory to decisions concerning the public health response to the COVID-19 pandemic (Ioannidis 2020). Many of the decisions made in response to this public health emergency were policy experiments based on inadequate evidence or information (Kupferschmidt 2020; Pinto-Bazurco 2020). However, we need not fault policymakers for making decisions based on scant data or evidence, because they had little choice but to take action to deal with a deadly pandemic. As I have argued throughout this book, the PP may help policymakers make reasonable decisions when they face significant scientific and moral uncertainty, which has clearly been the case in the COVID-19 pandemic (Pinto-Bazurco 2020).
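To see why the models' predictions were so sensitive to their assumptions, consider a deliberately minimal SIR (susceptible–infected–recovered) simulation. This is a textbook sketch, not any of the models cited above, and the parameter values are illustrative assumptions only:

```python
# Minimal SIR epidemic model (illustrative sketch, not a cited model).
# A modest change in the assumed basic reproduction number R0 produces
# a large change in the predicted share of the population ever infected.

def final_attack_rate(r0, gamma=0.1, dt=0.1, steps=30_000):
    """Integrate basic SIR dynamics; return the fraction ever infected."""
    s, i = 0.999, 0.001          # susceptible and infected fractions
    beta = r0 * gamma            # transmission rate implied by R0
    for _ in range(steps):       # simple Euler integration
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
    return 1.0 - s               # everyone who left the susceptible pool

for r0 in (1.5, 2.5, 5.0):
    print(f"assumed R0 = {r0}: about {final_attack_rate(r0):.0%} eventually infected")
```

Shifting the assumed R0 between plausible early estimates changes the predicted epidemic size by tens of percentage points, which is the kind of sensitivity that made precise probability estimates unavailable to policymakers.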
To apply the PP to policy decisions related to the pandemic, one must consider three basic options (risk avoidance, risk minimization, and risk mitigation) and the four criteria for reasonableness (proportionality, fairness, consistency, and epistemic responsibility) (Meßerschmidt 2020). When an emergency has happened, risk avoidance is no longer a viable option, since the risks have already materialized.13 All that can be done in an emergency is to minimize or mitigate risks. Most of the policy responses to the COVID-19 pandemic, such as travel restrictions and stay-at-home orders, were strategies for minimizing or mitigating public health risks (Rieder et al. 2020).

While most countries instituted various forms of lockdowns, some, most notably Sweden, did not. Instead of imposing a government-mandated lockdown, Sweden relied on voluntary, trust-based measures to control the spread of COVID-19. The government recommended that people work at home as much as possible, practice good hygiene, wear masks, avoid non-essential travel and large social gatherings, and that older people avoid social contact, but the government did not close borders, businesses (including bars and restaurants), or schools for children under 16 (Ahlander and Johnson 2020; Paterlini 2020). Many scientists and public health officials have criticized Sweden's approach to the pandemic for providing insufficient public health protections (Tharoor 2020). In the earlier part of the pandemic, Sweden's approach seemed to be working, but by late 2020, when COVID-19 cases and deaths began to soar, the government imposed some restrictions, such as limiting the size of social gatherings to no more than eight people (Tharoor 2020). As of the writing of this book, 10,323 people have died from COVID-19 in Sweden, which has a population of 10.23 million. Sweden's COVID-19 mortality rate is therefore about 100 deaths per 100,000 people, which is about the same as Brazil's rate (99), much higher than that of its neighbors Norway (9.7) and Finland (11), and higher than the Netherlands (76), but lower than the US (120) and France (106) (Johns Hopkins University 2021).

13 Risk avoidance might be an option for preventing future pandemics, such as pandemics triggered by gain-of-function experiments in virology. See discussion in Chapter 8.

To apply the PP to lockdown policies as a response to the COVID-19 pandemic, we need to consider first whether the benefits of such policies are proportional to the risks (Meßerschmidt 2020). A lockdown policy, we shall assume, includes travel restrictions, stay-at-home orders, size limits for social gatherings, and closures of schools and non-essential businesses.14 The chief benefit of a lockdown is that it can reduce or slow the transmission of the disease and reduce morbidity and mortality. The disease will still infect a high percentage of the population, but it will have less of a public health impact than it would have had if nothing were done to slow its propagation. Another benefit of a lockdown is that by spreading out the impacts of the disease over time (i.e. "flattening the curve"), it can prevent the health care system from being overwhelmed with patients who require hospitalization, critical care, or ventilator support (Roberts 2020). If the health care system becomes overwhelmed, then patients with diseases other than COVID-19 may not receive adequate care and may die.
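The "flattening the curve" logic can be illustrated with the same kind of minimal SIR dynamics used in epidemic modelling. This is a sketch under assumed, illustrative parameters; the "lockdown" transmission reduction is hypothetical, not an estimate of any actual policy's effect:

```python
# Illustration of "flattening the curve" with minimal SIR dynamics.
# Reducing transmission (a hypothetical effect of lockdown measures)
# lowers the peak fraction of people infected at the same time, which
# is what keeps demand on hospitals below capacity.

def peak_infected(r0, gamma=0.1, dt=0.1, steps=30_000):
    """Integrate basic SIR dynamics; return the peak infected fraction."""
    s, i = 0.999, 0.001
    beta, peak = r0 * gamma, 0.0
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        peak = max(peak, i)
    return peak

print(f"unmitigated (assumed R0 = 5.0): peak {peak_infected(5.0):.0%} infected at once")
print(f"with lockdown (assumed R0 = 2.0): peak {peak_infected(2.0):.0%} infected at once")
```

Even when roughly the same number of people are eventually infected, a lower transmission rate spreads those infections over a longer period, so far fewer people need care simultaneously.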
As noted above, however, lockdowns have significant risks because they can send the economy into a recession that leads to long-term unemployment and loss of income, which, in turn, can have adverse health impacts.15 Additionally, lockdowns substantially restrict human rights. Even if lockdowns did not have negative economic impacts, one would still need to consider their impact on human rights to determine whether the benefits they yield are reasonable (Gostin et al. 2020; Rieder et al. 2020; Meßerschmidt 2020).

The proportionality of risks to benefits from a lockdown depends a great deal on the transmissibility, lethality, preventability, and treatability of the disease. COVID-19 is a highly infectious disease with an R0 > 5, and it is deadly, especially for people who are older or have underlying health conditions. We also did not have a vaccine for the disease or effective treatments when the pandemic began. The only option policymakers had for controlling the spread of the disease was to take steps to prevent people from transmitting it. Given these conditions, one could argue that lockdowns balanced benefits and risks proportionally. However, one could also argue that lockdowns would not be a proportional response to a disease, such as seasonal influenza, which is less transmissible or lethal, or more preventable or treatable, than COVID-19. For example, in 2019 there were 38 million reported cases of seasonal influenza in the US and 22,000 deaths, for a case fatality rate of 0.058% (Centers for Disease Control and Prevention 2020h). Although seasonal influenza is a major public health concern, most public health experts agree that lockdowns are an excessively precautionary response to seasonal influenza. The most reasonable response is to encourage people to get vaccinated and to practice good hygiene.16

14 Policymakers must also decide when and how to end a lockdown if they impose one (Kupferschmidt 2020; Rieder et al. 2020).
15 The COVID-19 pandemic has had adverse health impacts beyond those discussed here. For example, many patients have avoided going to see a doctor for routine and emergency care (such as possible stroke, heart attack, or diabetes complications) because they do not want to be exposed to the virus. Hospitals and clinics have also delayed elective surgeries and other types of care (Mehrotra et al. 2020; Olson 2020). I do not consider these as risks of lockdowns because these problems would still occur even without lockdowns in place; they are due to the impacts of the pandemic itself on patients and health care systems.

Turning to the fairness criterion, policymakers should consider how the benefits and risks of a lockdown will be distributed. As noted above, COVID-19 poses the greatest threat to people who are medically vulnerable due to age or an underlying health condition. People who are young and in good health are likely to have a mild case of the disease or show no symptoms at all if they become infected (Wu and McGoogan 2020). As New York Governor Andrew Cuomo stressed in several speeches, one of the main reasons for implementing a lockdown is to protect medically vulnerable groups (Authers 2020; French 2020). While a lockdown can significantly benefit medically vulnerable groups, it can impose tremendous burdens on other people who experience socioeconomic hardships, such as loss of income, employment, food, or housing, from the lockdown. These burdens will fall hardest on people who are socioeconomically vulnerable because they work in an industry, such as dining, entertainment, hospitality, or air travel, that will be catastrophically impacted by the lockdown; they are socioeconomically disadvantaged; they are living in a low-income country; or all three.
Many small businesses in the dining industry are likely to go bankrupt as a result of COVID-19's impact on the economy (Craze and Invernizzi-Accetti 2020).

Children are also likely to be disproportionally harmed by a lockdown. Since most children who are infected by SARS-CoV-2 show no signs or symptoms or have only a mild case of COVID-19, lockdowns are likely to harm children more than the disease does. Children will be deprived of educational opportunities as a result of school closures and could fall a year behind in their studies despite efforts to continue schooling remotely at home. Children from low-income families will be deprived of free meals and other benefits they receive at school and may have more difficulty keeping up with their studies at home than upper- or middle-income children, due to lack of internet access or adequate parental assistance and supervision for at-home schooling. Children who have special educational, psychological, or behavioral needs will also not receive the help they need during a lockdown. Children may also be unable to get the amount of exercise that is required for health and physical development and could face an increased risk of physical or psychological abuse from staying at home in a stressful environment (Sharfstein and Morphew 2020; United Nations Children's Fund 2020).

16 As a side note, the lockdown measures implemented to stop COVID-19 have also significantly curtailed seasonal influenza (Jones 2020).


Thus, the benefits and the risks/burdens of a lockdown are likely to be distributed unequally. Lockdowns mostly benefit people who are medically vulnerable but harm other people, especially those who are economically vulnerable or are children. Is this distribution of benefits and risks/burdens fair? Is it fair to put a restaurant, bar, retail store, barbershop, or nail salon out of business to protect elderly, sick, and immunocompromised people from a viral infection? How many lives must be saved to make it worth costing hundreds of thousands of people their jobs or depriving children of an education for six months? These are not easy questions to answer, but they must be faced when deciding whether to implement (or end) a lockdown in response to an epidemic like COVID-19 (Rieder et al. 2020; Craze and Invernizzi-Accetti 2020). These are similar to the sorts of questions that arose in Chapter 6 in the discussion of protecting susceptible populations from chemical risks, because they involve questions about the distribution of health and economic risks and benefits in society.

As I have stressed previously in this book, fairness includes distributive fairness and procedural fairness. Procedural fairness requires that people who will be substantially impacted by a social policy decision have meaningful input into that decision. With respect to lockdown policies, procedural fairness would require that all members of the public have an opportunity for meaningful input into these decisions, since these decisions affect almost everyone in society. To promote procedural fairness, government officials could conduct public, community, and stakeholder engagement concerning lockdown policies. Procedural fairness may be difficult to achieve during public health emergencies, such as the COVID-19 pandemic, due to time constraints and political conditions. Decisions often must be made quickly, with little time for meaningful public engagement or education.
Also, decisions are usually made by officials with the authority to act quickly, such as presidents, governors, and heads of local public health departments. These decisions often bypass the kind of public debate that can occur when decisions are made through the legislature or by the executive branch after a period of public engagement. Ideally, decision-makers should seek input from the public, to the extent that this is possible, even when decisions are made quickly. For example, a governor could consult with health professionals, members of the legislature, teachers, and representatives from different sectors of the economy before announcing a stay-at-home order. Post hoc review of decisions made during public health emergencies can give the public more of an opportunity for input and could provide guidance for future decisions.

Consistency would require that decision-makers respond to similar emergency situations in the same way and to different situations differently. For example, consistency would require that if a new virus is no more dangerous than seasonal influenza, then decision-makers should respond to it as they respond to seasonal influenza. If a stay-at-home order is not a reasonable response to seasonal influenza, then it also would not be a reasonable response to a virus that is similar to seasonal influenza in terms of its impact on public health. If a new virus is significantly more dangerous than seasonal influenza, then a stay-at-home order may be reasonable. If a new virus is more dangerous than SARS-CoV-2, then more drastic public health measures may

286

9 Public Health Emergencies

need to be taken than were implemented during the COVID-19 pandemic. Since public health emergencies often differ considerably with respect to their potential impact on public health, the economy, the environment, and other important factors, it may be difficult to achieve consistency. Epistemic responsibility would require decision-makers to develop and implement policies based on up-to-date knowledge and information concerning public health emergencies, public opinions and attitudes toward emergency responses, and the impacts of emergency responses on public health, the economy, and society. Decision-makers should continue to collect and process new knowledge and information during emergencies and make appropriate changes in policies. Post hoc review of public health emergencies and scientific research on emergencies and emergency responses can be valuable tools for preparing for future emergencies.

9.4 Testing and Approving Medical Products Used in Public Health Emergencies

When the COVID-19 pandemic began, there were no approved diagnostic tests, treatments, or vaccines for the disease. All that could be done for patients, initially, was to provide supportive care, such as hydration, nutrition, oxygen, and mechanical ventilation. Scientists, physicians, government agencies, and private companies began rapidly developing COVID-19 tests, treatments, and vaccines to meet urgent public health needs. The FDA has acted quickly in response to the COVID-19 pandemic by granting EUAs for drugs, tests, and medical devices, relaxing its rules for approving medical tests for SARS-CoV-2 RNA or antibodies to the virus, and fast-tracking new drug and device applications related to the pandemic (Hahn 2020; Rome and Avorn 2020; Food and Drug Administration 2021a, b, c).17 Currently, dozens of clinical trials for COVID-19 treatments and vaccines are underway around the world (Kupferschmidt and Cohen 2020; Sanders et al. 2020; Soares 2020; Kuznia 2020; Department of Health and Human Services 2020).18 Several clinical trials are investigating the effectiveness of antiviral drugs in treating COVID-19. The antiviral remdesivir has shown promising results in clinical trials with manageable side effects and could be approved quickly by the FDA if these results hold up (Beasley 2020; Beigel et al. 2020). However, remdesivir is not risk-free: the drug can increase liver enzyme levels and cause liver damage (Feuerstein and Herper 2020). Also, it may have limited effectiveness in patients with severe COVID-19
17 The FDA has had to sacrifice reliability of tests for speed of approval. Many of the antibody tests on the market have a high false positive rate, i.e. they indicate the presence of antibodies to the virus when no antibodies are actually present. Some tests are fraudulent (Eder et al. 2020). 
18 A clinical trial of a medical product usually does not stop when the product receives an EUA because an EUA is not the same as full regulatory approval. Full regulatory approval may be granted once the product has completed clinical trials and the FDA has reviewed the data. For example, the FDA has granted EUAs to two COVID-19 RNA vaccines, but clinical trials are continuing. Full approval of these vaccines is likely to be granted once clinical trials are completed.


illness (Soares 2020). The anti-inflammatory drug dexamethasone has been shown to reduce mortality in COVID-19 patients with severe disease (RECOVERY Collaborative Group 2020). Other notable clinical trials have been investigating the effectiveness of monoclonal antibodies, such as sarilumab and tocilizumab, at suppressing a dangerous immune response known as the "cytokine storm" that occurs in patients with severe COVID-19 (Soares 2020) and using the plasma of people who have recovered from the disease to treat patients (Kupferschmidt 2020). Until a drug is proven effective against COVID-19, physicians must often decide whether to treat their patients with unproven therapies. While this situation is far from ideal, many physicians believe it is better to offer patients who face the prospect of imminent death a treatment that might work than no treatment at all. Unproven therapies that physicians have used to treat COVID-19 patients (outside of RCTs) have yielded mixed results. Physicians began treating COVID-19 patients with the antimalarial medications chloroquine and hydroxychloroquine in February 2020 based on evidence from laboratory studies conducted in 2012, which showed that these drugs can block coronaviruses in vitro. After physicians reported positive results from treating their patients with chloroquine and hydroxychloroquine, the drugs received a great deal of attention from the media and politicians, and clinical trials were launched. However, the drugs can also produce dangerous side effects, including arrhythmia, which can lead to cardiac arrest and death. Researchers cancelled the high-dose arm of a clinical trial of chloroquine in Brazil in April 2020 due to safety concerns (Bowler 2020). In June 2020, the FDA revoked its EUA for the drug and cautioned against the use of the drug outside of a hospital setting or clinical trial (Food and Drug Administration 2020d). 
In Chapter 6 I applied the PP to drug testing and approval issues related to life-threatening illnesses and public health emergencies, and the conclusions drawn in that chapter also apply to the COVID-19 pandemic. In Chapter 6, I applied the proportionality, fairness, consistency, and epistemic responsibility criteria to precautionary measures related to making experimental treatments available to patients who are facing life-threatening illnesses for which no approved or medically accepted treatments are available. Proportionality, I argued, would support a policy of making treatments available before they have completed all phases of clinical testing. Fairness would require people who have a stake in drug access, such as patients and their representatives, to have meaningful input into policy decisions. Consistency would require decision-makers to respond to similar situations involving public health emergencies in the same manner and different situations differently. Epistemic responsibility would require decision-makers to formulate drug availability policies based on up-to-date evidence and data. Clearly, these are ideal standards for making reasonable risk management decisions which may be difficult to meet in emergency situations, due to time constraints. However, decision-makers should do their best to live up to them. In Chapter 6, I discussed some workable compromises between rigorous testing and access to medical treatment that would be supported by the PP. Some of


these include: making unproven19 treatments widely available to people off-study while conducting RCTs concurrently; modifying RCTs to accelerate research or include more patients in studies without compromising rigor; and conducting non-randomized, uncontrolled trials.20 These compromises allow rigorous research to move forward while making experimental treatments available to patients with life-threatening diseases.21 While these seem like reasonable policy options for life-threatening diseases for which there are no effective therapies, it is also important to include the caveat that researchers and physicians should do their best to ensure that patients clearly understand the risks and benefits of treatment options, since patients who are hospitalized with COVID-19 are likely to be desperately ill and may be willing to try almost anything that could save their lives. Under such dire circumstances, it is of utmost importance to promote informed decision-making by patients or their legal representatives and to avoid taking advantage of their vulnerability (Resnik 2018). Some patients, especially those with milder cases of COVID-19, may be able to recover without experimental medications. It would be a serious mistake to treat these patients with dangerous, unproven medications if it is not in their best interests to receive them. Another important research issue that has emerged during the COVID-19 pandemic is whether it would be ethical to expose healthy volunteers to SARS-CoV-2 to accelerate vaccine development. Some scientists and ethicists have argued that experiments should be conducted that administer a vaccine that has cleared Phase I safety testing or a placebo to a small group (e.g. about 50) of healthy volunteers who have not been infected with SARS-CoV-2 and then later expose them to the virus (Callaway 2020a; Berger 2020; Eyal et al. 2020; Shah et al. 2020; Singer and Chappell 2020). 
Researchers could compare COVID-19 infection rates between the vaccine and the placebo to assess the vaccine's effectiveness. Researchers could also measure other important variables, such as the development of SARS-CoV-2 antibodies among those receiving the vaccine. These experiments, called "challenge studies," could speed up vaccine development by helping researchers identify viable vaccine candidates for Phase II and III clinical trials (Eyal et al. 2020). To minimize risks, volunteers would be included in the study only if their risks of developing a serious illness or dying are very low because they are young (e.g. age 20–30) and
19 Treatments would not be entirely unproven. There would still need to be credible evidence concerning risks and benefits. See discussion in Chapter 6. 
20 For an example of a non-randomized, uncontrolled trial of a treatment for COVID-19, see Grein et al. (2020). Non-randomized, uncontrolled trials can provide valuable information for researchers but should not be used to make decisions concerning drug approval. Drug approval decisions should be based on evidence from RCTs. 
21 London and Kimmelman (2020) argue that clinical researchers should never compromise scientific rigor even during pandemics. They also argue that scientists should resist the urge to make experimental treatments available to patients off-study or conduct non-randomized, uncontrolled trials if these practices interfere with RCT recruitment. In most public health emergencies, however, these practices will not interfere with RCT recruitment because patients will be eager to enroll in RCTs. Researchers have had little trouble recruiting patients for COVID-19 RCTs, for example (Kupferschmidt and Cohen 2020).


have no underlying medical conditions (Eyal et al. 2020). Volunteers who become infected with COVID-19 would receive the best available medical care. Vaccine development and testing usually takes 12–18 months because researchers do not intentionally expose volunteers to the pathogen under investigation but must wait until enough people have developed the disease for statistically significant comparisons between the vaccine and the placebo to be made. To ensure that they have enough data to make these comparisons, researchers may enroll tens of thousands of volunteers in studies. For example, over 30,000 participants enrolled in a Phase III clinical trial of a COVID-19 RNA vaccine developed and tested by the NIH and Moderna. The FDA granted an EUA to the vaccine based on an interim analysis of data from 95 volunteers who tested positive for COVID-19. The efficacy of the vaccine was 94.5% (National Institutes of Health 2020). The vaccine, which consists of RNA with a lipid coating, was developed shortly after the genetic sequence of SARS-CoV-2 was published in January 2020. Phase I testing commenced in March 2020, and Phase III testing was launched in the summer of 2020. RNA vaccines can speed up the time for vaccine development because they can be produced so quickly. It can take several months or longer to develop other vaccines.22 Interest in challenge studies was greatest before vaccines had obtained EUAs but has waned since then. Hundreds of people offered to volunteer for COVID-19 vaccine challenge studies when the idea was first proposed. In October 2020, the UK government contracted with a private company, Open Orphan, to conduct COVID-19 vaccine challenge trials, but these trials had not been launched as of the writing of this book (Callaway 2020c). Human challenge studies are ethically controversial because they expose healthy volunteers to significant risks without the prospect of direct medical benefits (Miller and Grady 2001). 
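To see how an efficacy figure like this is derived, consider a simple case-count sketch. Vaccine efficacy is the relative reduction in the attack rate (infections per participant) in the vaccine arm compared with the placebo arm. The 90/5 split used below matches the publicly reported interim results for this trial; the exact reported figure of 94.5% depends on the statistical model used in the trial analysis, which the simple case-count formula below approximates.

```latex
% Vaccine efficacy (VE): relative reduction in attack rate
% AR_v = attack rate in the vaccine arm, AR_p = attack rate in the placebo arm
\[
\mathrm{VE} = 1 - \frac{\mathrm{AR}_v}{\mathrm{AR}_p}
\]
% With roughly equal arm sizes, attack rates are proportional to case counts.
% Reported interim split of the 95 cases: 5 in the vaccine arm, 90 in the placebo arm:
\[
\mathrm{VE} \approx 1 - \frac{5}{90} \approx 0.944
\]
```

The calculation shows why even a modest number of observed cases can support a strong efficacy estimate when the cases are distributed so unevenly between arms.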
Clinical trials that test new medications on patients also expose them to significant risks, including the risk of death in many cases, but these risks are justifiable if the patients are likely to benefit medically from participation and the study is expected to produce knowledge that can benefit society (Resnik 2018a). In challenge studies, the prospect of social benefit is the primary justification for the research, since the subjects do not receive substantial benefits other than the satisfaction of contributing to an important cause.23 Challenge studies include procedures for minimizing and mitigating risks to participants, such as clinical monitoring of subjects, independent safety review, rules for including/excluding unhealthy

22 COVID-19 RNA vaccines work by inducing cells of the vaccine recipient to produce the SARS-CoV-2 spike protein. The immune system responds to this protein by making antibodies against it and memory T-cells. If the body later encounters the virus, it will mount an immune response to it. Traditional vaccines against viruses, such as inactivated viruses or related viruses, take longer to produce because viruses must be grown in cell culture. RNA vaccines can be made as soon as the genome of the pathogen is sequenced. 23 Subjects in challenge studies usually also receive payment for their participation, but IRBs do not treat this as a benefit in their risk/benefit determination because they do not want money to offset significant risks. Challenge studies are ethically similar to Phase I studies in healthy volunteers discussed in Chapter 6 (Resnik 2018a).


subjects,24 and compensation for injury (Miller and Grady 2001; Bambery et al. 2016). To apply the PP to this issue, one should consider whether the risks of these studies would be proportional to the benefits. At the outset of the pandemic, the benefits of the studies could have been enormous, since earlier development of a safe and effective COVID-19 vaccine could have saved thousands of lives and minimized further socioeconomic devastation (Singer and Chappell 2020). The studies would impose significant risks on the human subjects, including the risk of serious illness, hospitalization, and death. However, the risks of the disease for younger, healthy subjects are much lower than the risks for other groups. As noted earlier, most younger people who develop a COVID-19 infection are likely to show no symptoms or have only a mild case (Wu and McGoogan 2020). The case fatality rate for the 20–30 age group is about 0.2% (Scott 2020). However, the true infection fatality rate for this age group is probably much lower, because many mild or asymptomatic infections go unreported. So, the risk of death for volunteers could be much less than 0.1% (Shah et al. 2020). While this is not a trivial risk, one could argue that this risk would be proportional to the benefits of the study when the pandemic first began (Eyal et al. 2020; Shah et al. 2020).25 However, once vaccines had obtained EUAs, it was no longer clear that the risks would be proportional to the benefits, because the benefits decreased significantly. Challenge studies conducted after EUA approval of vaccines could still provide some important information about COVID-19 vaccines, but they probably would not save thousands of lives.
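The gap between the 0.2% and 0.1% figures reflects the difference between the case fatality rate (deaths per confirmed case) and the infection fatality rate (deaths per infection, including unreported ones). The following sketch uses a purely hypothetical ascertainment ratio, since the true fraction of unreported infections was unknown at the time:

```latex
% CFR: deaths D over confirmed cases C; IFR: deaths D over all infections I
\[
\mathrm{CFR} = \frac{D}{C}, \qquad \mathrm{IFR} = \frac{D}{I}, \qquad I \geq C
\]
% Hypothetical illustration: if only one infection in three is confirmed (I = 3C), then
\[
\mathrm{IFR} = \frac{D}{3C} = \frac{\mathrm{CFR}}{3} \approx \frac{0.2\%}{3} \approx 0.07\%
\]
```

Because undercounting inflates the CFR relative to the IFR, the risk to an individual healthy volunteer is plausibly well below the reported case fatality rate for their age group.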

9.5 Allocation of Scarce Medical Resources

The COVID-19 pandemic has created a problem of allocating scarce medical resources in many countries. In some countries there have not been enough ventilators, intensive care unit (ICU) beds, medications, or health care staff to treat all patients who require treatment. Although governments and hospitals have acted quickly to increase medical resources to meet these growing demands, problems of scarcity have still arisen, and physicians have had to make difficult ethical choices. Physicians in Italy, for example, stopped putting older and sicker COVID-19 patients on ventilators to allow younger and healthier patients to have access to these machines (Rosenbaum 2020). The physicians were following a hospital policy that was based on recommendations from an expert panel convened in 2019 to address allocation of ventilators during public health emergencies. The panel surveyed public opinion and weighed the pros and cons of different policies (Biddison et al. 2019).

24 Subjects could also be excluded if they smoke, since smoking increases the risk of developing a serious COVID-19 infection (Guan et al. 2020). 25 For a contrasting view of the issue, see Dawson et al. (2020), who argue that the proposed challenge studies would be too risky and might not produce substantial social benefits.


Now that vaccines have been developed, allocation problems have arisen in deciding who should be vaccinated first within countries, and in distributing vaccine supplies among different countries (Jauhar 2020; Gupta and Morain 2020; McClung et al. 2020; Schmidt 2020; Berger 2021). An influential article published in the New England Journal of Medicine in 2020 addressed the fairness of scarce medical resource allocation during the COVID-19 pandemic. The authors recommended that rationing policies should emphasize doing the most good for society; frontline health care workers should receive top priority for scarce resources; patients with similar prognoses should be treated equally by allocating resources on a first-come-first-served basis or a lottery unless one of them is a participant in a COVID-19 treatment or vaccine clinical trial, in which case the participant receives higher priority; there should be no difference in allocation between COVID-19 patients and other patients with similar prognoses but different conditions; and allocation policies should be based on the latest scientific evidence. The authors did not recommend using age for making allocation decisions (Emanuel et al. 2020). The CDC has developed recommendations for distributing the COVID-19 vaccine in the US that emphasize doing the most good for the most people by preventing death and promoting the functionality of society. The CDC recommends that the vaccine be distributed in different phases. The first phase includes health care personnel and residents of long-term care facilities. The second phase includes frontline workers (such as police, firefighters, food and agricultural workers, grocery store workers, and teachers) and people who are 75 or older. The third phase includes people who are age 65–74, people who are age 16–64 and are at high risk because they have an underlying medical condition, and other essential workers (e.g. 
people who work in finance, construction, law, food service, communications). The final phase includes people who are 16–64 but are not at high risk. Children are not included in the current CDC vaccine distribution guidelines because the vaccine has not been tested on children yet (Centers for Disease Control and Prevention 2021a). Medical resource allocation decisions present difficult ethical dilemmas for physicians and health care organizations because they involve conflicts between fundamental values, such as promoting the overall good of society (e.g. utilitarian theory) and treating people as having equal moral worth (e.g. Kantian theory).26 As noted earlier in Chapter 4, triaging patients during medical emergencies is based on a utilitarian approach to resource allocation (Wagner and Dahnke 2015). Emanuel et al.'s (2020) recommendation to give COVID-19 patients with a poor prognosis less priority than those with a better prognosis reflects the utilitarian viewpoint that is implemented during triage. It makes sense to allocate a scarce resource to a patient who can most benefit from it, since allocating the resource to a patient who has a poor prognosis would be wasteful if the patient dies. The CDC's guidelines are also based on utilitarian thinking because they emphasize promoting the functionality of society.
26 See Chapter 3 for discussion of utilitarianism and Kantianism. Medical resource allocation also raises issues of discrimination that I will not discuss in depth here. See Veatch and Ross (2015).


While most people would agree with the utilitarian idea that physicians and hospitals should not waste medical resources on patients who are unlikely to benefit from them, utilitarianism has other implications that many people would regard as morally questionable or repugnant. Utilitarianism would also recommend that scarce medical resources should be allocated to patients who can most benefit society. However, this recommendation conflicts with the idea that we should treat people as having equal moral worth, since it is claiming, in effect, that some people are more important to society than others. Emanuel et al. (2020) recommend that health care workers and participants in COVID-19 clinical trials should be given priority in resource allocation. The CDC recommends that health care workers and frontline workers be given priority in vaccine allocation. Although most people would find these recommendations to be morally acceptable, the utilitarian approach does not stop there, since it also implies that we should allocate scarce medical resources based on other social criteria, such as age, family status, disability, and occupation. Utilitarianism would recommend that if we must choose between allocating a ventilator to a 40-year-old patient and a 70-year-old, we should give it to the 40-year-old, since the younger patient is likely to live longer and benefit society more than the older one. Utilitarianism would also recommend that we give priority to a wife and mother of three young children instead of an unmarried man, or an elementary school teacher instead of an unemployed, mentally disabled person.27 The CDC's vaccine distribution guidelines distinguish among many different classes of people based on their age, medical condition, and occupation (Centers for Disease Control and Prevention 2021a). 
The problem of allocating scarce medical resources is not unique to public health emergencies, since it occurs in non-emergency contexts, such as allocating organs for transplantation (Veatch and Ross 2015). Ethicists, physicians, and health care policymakers have grappled with the moral conflict between promoting the overall good of society and treating people equally since the 1960s, when dialysis machines became available but were in short supply. Since that time, a consensus has emerged that we should use medical, not social worth, criteria when deciding who should receive a liver, a kidney, or ICU care (Veatch and Ross 2015). However, the COVID-19 allocation schemes proposed by Emanuel et al. (2020) and the CDC seem to be at odds with this viewpoint. To apply the PP to the issues concerning the allocation of scarce resources, we need to consider how the four criteria for reasonableness would apply to different allocation policies. While the PP advises us to balance benefits and risks proportionally, it does not tell us how to incorporate equality into our balancing, since equality is not a benefit per se, nor is inequality a risk. It would recommend, however, that we consider the social risks of treating patients unequally in health care allocation, since unequal treatment could erode public trust in the health care system and interfere with the medical ethos, which emphasizes promoting the wellbeing of the patient irrespective of his or her value to society.
27 Mello et al. (2020) argue that health resource allocation rules adopted by some states during the COVID-19 pandemic unfairly discriminated against people with disabilities.


Equality may come into play when we consider the fairness of the distribution of benefits, since one might argue that unequal distributions of scarce resources are unfair, depending on the social context. For example, if there are twelve people on a lifeboat with a limited supply of food, equal treatment would imply that each person receives the amount of food he or she needs for survival. If each person has the same food requirements, then they should receive the same amount of food. However, if some people have greater nutritional needs than others, they should receive a larger portion of food. So, equal treatment might or might not require equal scarce resource allocation, depending on the situation. Consistency would require that resource allocation policies treat similar cases similarly and different cases differently. Guidelines and rules contained in the policies would determine what sorts of characteristics (such as prognosis, age, etc.) make cases similar or different. Epistemic responsibility would require that policies be based on up-to-date scientific information and revised in light of new information (Emanuel et al. 2020). None of what I have said in the preceding paragraphs is very novel or illuminating, nor would I expect it to be. Since resource allocation problems primarily involve questions about justice and fairness, rather than risk management, one would not expect the PP to provide us with much help in dealing with these problems. Perhaps the biggest contribution the PP can make to resource allocation issues is by advising us to take precautionary measures to prevent these problems from happening in the first place, which is the topic of the next section.

9.6 Disaster Preparedness

The final topic I will consider in this chapter is disaster preparedness. Many different principles discussed in this book, especially the PP, support the idea that we should prepare for disasters. While there is little disagreement about the importance of disaster preparedness, countries are not always prepared for the disasters they face. With respect to COVID-19, South Korea, Singapore, Japan, and China (including Hong Kong) were well-prepared to respond to the pandemic. These countries had adequate stockpiles of personal protective equipment (such as N-95 masks) for controlling respiratory infections and ventilators for supporting hospital patients in respiratory distress. These countries also had planned for different epidemic and pandemic scenarios and had policies in place to allow the government to respond rapidly to a public health emergency. One reason why these countries were well-prepared to deal with the COVID-19 pandemic is that their governments had learned lessons from previous epidemics that had taken a toll on their populations, such as the Middle East Respiratory Syndrome coronavirus (MERS-CoV) epidemic of 2015 and the severe acute respiratory syndrome (SARS) epidemic of 2003 (Klingner 2020; Healthline 2020).


The US, according to many commentators, was ill-prepared to deal with the COVID-19 pandemic. The US lacked adequate supplies of personal protective equipment, ventilators, and other critical supplies and had not conducted the kind of planning or policy development needed to deal with a pandemic like COVID-19 (Kavanagh 2020; Balz 2020; Keller 2020; Ranney et al. 2020). The US Government did adopt a National Biodefense Strategy in 2018 to prepare for natural, accidental, or deliberate biological threats (United States Government 2018). However, DHHS, which oversaw the strategy, had difficulty obtaining funding from Congress and financial support and cooperation from other federal agencies involved in implementing the strategy (Keller 2020). DHHS conducted a series of planning exercises in 2019 for a respiratory virus emerging from China, which predicted that the US would lack critical medical supplies, especially those that are manufactured overseas, such as protective masks (Keller 2020). Lack of preparation chiefly occurs because government officials (elected and non-elected) and citizens do not give enough fiscal and political priority to disaster preparedness. Disaster preparedness often does not receive adequate government funding because other concerns, such as national defense, health care, education, housing, or transportation, receive higher priority. Disaster preparedness often does not receive enough political attention because disasters are temporally remote events that may not happen or may not be as bad as predicted. Immediate practical concerns, such as defending national interests, creating jobs, building roads, and providing health care often have a greater emotional and psychological impact on popular opinion and political decision-making than abstract, far-off concerns. 
Societies that have had recent experiences with disasters tend to prepare for future ones (of the same type) better than societies that do not have such experiences because recent experiences with disasters are emotionally and psychologically impactful.28 As noted above, South Korea, Singapore, and China were probably better prepared than the US to deal with COVID-19 because these countries had recently dealt with serious epidemics caused by respiratory diseases. To its credit, the US has taken appropriate steps to prepare for other types of disasters the country has recently faced, such as hurricanes, floods, earthquakes, forest fires, financial crises, and terrorism. Another reason why lack of preparation occurs is that some of the policies that are needed to minimize or mitigate the risks of disasters can be difficult to develop because they are morally or politically controversial. South Korea was able to do comprehensive contact tracing for COVID-19 because it had laws in place that allow the government to access data from credit card and bank records, cellphones, and security cameras to track individuals and identify their contacts. Similar laws would likely be very controversial in the US (and possibly unconstitutional) because they would impose significant restrictions on civil liberties (Klingner 2020). As noted earlier, policies for allocating scarce medical resources during disasters can also be
28 Another way of viewing this issue is to say that human judgments about risks are influenced by the availability heuristic, i.e. the tendency to base probability estimates on information that is readily available to the mind because it is memorable or recently acquired (Tversky and Kahneman 1974). See discussion in Chapter 2.


controversial because they involve conflicts between maximizing the social good and treating people equally. Many commentators have urged China and other countries to ban wet markets,29 which have been implicated as a contributing factor in the emergence of SARS, SARS-CoV-2, and other zoonotic pathogens. A ban would be highly controversial, however, because wet markets are a cultural institution in many localities and an important part of the food economy (Cheng 2020; Northam 2020). Others have argued that wet markets should be regulated to promote good hygiene and safety and that the main source of zoonotic disease risk—the live, wild animal trade—should be banned (Beech 2020). To minimize the loss of human life or property from hurricanes, floods, and forest fires, some states and communities have enacted laws that restrict or regulate building in flood-prone or fire-prone areas. These laws are also often controversial because they interfere with property rights (Resnik 2012). The PP has little to say about dealing with economic, moral, and political problems concerning disaster preparedness other than to advise us to take reasonable measures to minimize or mitigate the risk of disasters. Disaster preparedness policies should satisfy the four criteria for reasonableness discussed throughout this book, i.e. proportionality, fairness, consistency, and epistemic responsibility. The PP would also recommend that policies should be based on meaningful input from the public and affected communities and stakeholders, in order to satisfy the procedural fairness requirement. To achieve this input, governments should consult with the public and scientists long before disasters arise, so there is ample time for discussion and debate.

9.7 Conclusion

In this chapter I have applied the PP to issues related to preparing for and responding to public health emergencies, with a focus on the COVID-19 pandemic. For the most part, it is not possible to avoid the risks of public health emergencies once they arise. Often, the best that can be done is to minimize or mitigate these risks. I have discussed how the PP can lend some insight into several different topics related to managing the risks of the COVID-19 pandemic, including preventing or slowing disease transmission, testing and approving drugs and vaccines, allocating scarce resources, and disaster preparedness. In the next, and final, chapter I will summarize the arguments, conclusions, and key points contained in this book and offer some additional reflections on the PP.

29 A wet market is a place where fresh meat, fish, and produce are sold. The markets are wet because merchants slosh water on their goods to keep them fresh. Merchants may also sell a variety of live, wild, or exotic animals, including birds, snakes, beavers, badgers, foxes, pangolins, and bats, for slaughter. For years, public health researchers have been concerned that wet markets could be a source of zoonotic diseases because of the close interaction between humans and other species and poor hygiene (Cheng 2020; Northam 2020). Wet markets may also enable the passaging (see discussion in Chapter 8) of pathogens back and forth between animals and humans.


9 Public Health Emergencies

References

Adam, D. 2020. Special Report: The Simulations Driving the World's Response to COVID-19. Nature, April 2. Available at https://www.nature.com/articles/d41586-020-01003-6. Accessed 15 May 2020. Afolabi, M.O. 2018. Public Health Disasters: A Global Ethical Framework 12: 1–24. Ahlander, J., and S. Johnson. 2020. Toughest COVID-19 Measures Yet for Sweden as Cases Soar. Reuters, December 18. Available at https://www.reuters.com/article/uk-health-coronavirus-sweden-measures/toughest-covid-19-measures-yet-for-sweden-as-cases-soar-idUKKBN28S28V. Accessed 16 Jan 2021. Amnesty International. 2020. COVID-19, Surveillance and the Threat to Your Rights. Available at https://www.amnesty.org/en/latest/news/2020/04/covid-19-surveillance-threat-to-yourrights/. Accessed 18 Jan 2021. Andersen, K.G., A. Rambaut, W.I. Lipkin, E.C. Holmes, and R.F. Garry. 2020. The Proximal Origin of SARS-CoV-2. Nature Medicine 26 (4): 450–452. Annas, G.J. 2010. Worst Case Bioethics: Death, Disaster, and Public Health. New York, NY: Oxford University Press. Authers, J. 2020. How Coronavirus Is Shaking Up the Moral Universe. Bloomberg Opinion, March 29. Available at https://www.bloomberg.com/opinion/articles/2020-03-29/coronaviruspandemic-puts-moral-philosophy-to-the-test. Accessed 18 Jan 2021. Baggett, T.P., H. Keyes, N. Sporn, and J.M. Gaeta. 2020. Prevalence of SARS-CoV-2 Infection in Residents of a Large Homeless Shelter in Boston. Journal of the American Medical Association 323 (21): 2191–2192. Balz, D. 2020. America Was Unprepared for a Major Crisis. Again. Washington Post, April 4. Available at https://www.washingtonpost.com/graphics/2020/politics/america-was-unpreparedfor-a-major-crisis-again/. Accessed 18 Jan 2021. Bambery, B., M. Selgelid, C. Weijer, J. Savulescu, and A.J. Pollard. 2016. Ethical Criteria for Human Challenge Studies in Infectious Diseases. Public Health Ethics 9 (1): 92–103. Barone, E. 2020. The U.S.
Is on the Verge of Passing Another COVID-19 Pandemic Relief Bill. Time Magazine, December 21. Available at https://time.com/5923840/us-pandemic-relief-billdecember/. Accessed 15 Jan 2021. BBC News. 2020. Coronavirus: US Travel Ban on 26 European Countries Comes into Force. BBC News, March 14. Available at https://www.bbc.com/news/world-us-canada-51883728. Accessed 18 Jan 2021. Beasley, D. 2020. Exclusive: Trial of Gilead's Potential Coronavirus Treatment Running Ahead of Schedule, Researcher Says. Reuters, April 24. Available at https://www.reuters.com/article/ushealth-coronavirus-gilead-exclusive/exclusive-trial-of-gileads-potential-coronavirus-treatmentrunning-ahead-of-schedule-researcher-idUSKCN2262X3. Accessed 18 Jan 2021. Beech, P. 2020. What We've Got Wrong About China's 'Wet Markets' and Their Link to COVID-19. World Economic Forum, April 18. Available at https://www.weforum.org/agenda/2020/04/china-wet-markets-covid19-coronavirus-explained/. Accessed 18 Jan 2021. Beigel, J.H., K.M. Tomashek, L.E. Dodd, A.K. Mehta, B.S. Zingman, A.C. Kalil, E. Hohmann, H.Y. Chu, A. Luetkemeyer, S. Kline, D.L. de Castilla, R.W. Finberg, and ACTT-1 Study Group Members. 2020. Remdesivir for the Treatment of Covid-19—Final Report. New England Journal of Medicine 383 (19): 1813–1826. Berger, L. 2020. AstraZeneca Says It May Consider Exposing Vaccine Trial Participants to Virus. Reuters, May 28. Available at https://www.reuters.com/article/us-health-coronavirus-astrazeneca-challe/astrazeneca-says-it-may-consider-exposing-vaccine-trial-participants-to-virus-idUSKBN2342CC. Accessed 18 Jan 2021. Berger, M.W. 2021. How Can the World Allocate COVID-19 Vaccines Fairly? Penn Today, January 7. Available at https://penntoday.upenn.edu/news/how-can-world-allocate-covid-19-vaccinesfairly. Accessed 18 Jan 2021.


Biddison, E.L., R. Faden, H.S. Gwon, D.P. Mareiniss, A.C. Regenberg, M. Schoch-Spana, J. Schwartz, and E.S. Toner. 2019. Too Many Patients…A Framework to Guide Statewide Allocation of Scarce Mechanical Ventilation During Disasters. Chest 155: 848–854. Blakely, T., S. Collings, and J. Atkinson. 2003. Unemployment and Suicide. Evidence for a Causal Association? Journal of Epidemiology and Community Health 57 (8): 594–600. Bowler, J. 2020. Study of High-Dose Chloroquine for COVID-19 Stopped Early Due to Patient Deaths. Science Alert, April 14. Available at https://www.sciencealert.com/clinical-trial-for-highdose-of-chloroquine-stopped-early-due-to-safety-concerns. Accessed 18 Jan 2021. Bradley, M., B. O'Reilly, M. Novaga, and Y. Talmazan. 2020. Coronavirus: Italy Deepens Lockdown as COVID-19 Spreads. NBC News, March 12. Available at https://www.nbcnews.com/news/world/coronavirus-italy-deepens-lockdown-covid-19-spreads-n1156351. Accessed 18 Jan 2021. Bryner, J. 2020. Wuhan Lab Says There's No Way Coronavirus Originated There. Live Science, April 18, 2020. Available at https://www.livescience.com/coronavirus-wuhan-lab-complicated-origins.html. Accessed 18 Jan 2021. Bureau of Economic Analysis. 2021. Gross Domestic Product, Third Quarter 2020 (Advance Estimate). Available at https://www.bea.gov/news/2020/gross-domestic-product-third-quarter-2020advance-estimate. Accessed 15 Jan 2021. Callaway, E. 2020a. Should Scientists Infect Healthy People with the Coronavirus to Test Vaccines? Nature 580: 17–18. Callaway, E. 2020b. Dozens to Be Deliberately Infected with Coronavirus in UK 'Human Challenge' Trials. Nature, October 20. Available at https://www.nature.com/articles/d41586-020-02821-4. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020a. Alcohol Use and Your Health. Available at https://www.cdc.gov/alcohol/fact-sheets/alcohol-use.htm. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020b. Health Effects of Cigarette Smoking. Available at https://www.cdc.gov/tobacco/data_statistics/fact_sheets/health_effects/effects_cig_smoking/. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020c. Select Agents and Toxins. Available at https://www.selectagents.gov/SelectAgentsandToxins.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020d. Social Distancing, Quarantine, Isolation. Available at https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020e. How to Protect Yourself and Others. Available at https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020f. Suicide Mortality by State. Available at https://www.cdc.gov/nchs/pressroom/sosmap/suicide-mortality/suicide.htm. Accessed January. Centers for Disease Control and Prevention. 2020g. Age 21 Minimum Legal Drinking Age. Available at https://www.cdc.gov/alcohol/fact-sheets/minimum-legal-drinking-age.htm. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020h. Disease Burden of Influenza. Available at https://www.cdc.gov/flu/about/burden/index.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020i. Different COVID-19 Vaccines. Available at https://www.cdc.gov/coronavirus/2019-ncov/vaccines/different-vaccines.html. Accessed 14 Jan 2021. Centers for Disease Control and Prevention. 2021a. When Vaccine Is Limited, Who Should Get Vaccinated First? Available at https://www.cdc.gov/coronavirus/2019-ncov/vaccines/recommendations.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2021b. Provisional Death Counts for Coronavirus Disease 2019 (COVID-19). Available at https://www.cdc.gov/nchs/nvss/vsrr/covid_weekly/index.htm#AgeAndSex. Accessed 20 Jan 2021. Cheng, M. 2020. The Case Against Wet Markets. The Atlantic, April 10, 2020. Available at https://www.theatlantic.com/culture/archive/2020/04/ban-wet-markets/609781/. Accessed 18 Jan 2021.


Chetty, R., M. Stepner, S. Abraham, S. Lin, B. Scuderi, N. Turner, A. Bergeron, and D. Cutler. 2016. The Association Between Income and Life Expectancy in the United States, 2001–2014. Journal of the American Medical Association 315 (16): 1750–1766. Chinazzi, M., J.T. Davis, M. Ajelli, C. Gioannini, M. Litvinova, S. Merler, A. Pastore y Piontti, K. Mu, L. Rossi, K. Sun, C. Viboud, X. Xiong, H. Yu, M.E. Halloran, I.M. Longini Jr., and A. Vespignani. 2020. The Effect of Travel Restrictions on the Spread of the 2019 Novel Coronavirus (COVID-19) Outbreak. Science 368 (6489): 395–400. Connor, P. 2020. More Than Nine-in-Ten People Worldwide Live in Countries with Travel Restrictions Amid COVID-19. Pew Research, April 1. Available at https://www.pewresearch.org/fact-tank/2020/04/01/more-than-nine-in-ten-people-worldwide-live-in-countries-with-travel-restrictions-amid-covid-19/. Accessed 18 Jan 2021. Craze, J., and C. Invernizzi-Accetti. 2020. Covid-19 Hurts the Most Vulnerable—But So Does Lockdown. The Guardian, May 16. Available at https://www.theguardian.com/commentisfree/2020/may/16/covid-19-coronavirus-lockdown-economy-debate. Accessed 17 Jan 2021. Dawson, L., J. Earl, and J. Livezey. 2020. SARS-CoV-2 Human Challenge Trials: Too Risky, Too Soon. The Journal of Infectious Diseases 222 (3): 514–516. Deaton, A.S. 2002. Policy Implications of the Gradient of Health and Wealth. Health Affairs 21: 13–30. Department of Health and Human Services. 2020. Fact Sheet: Explaining Operation Warp Speed. Available at https://www.hhs.gov/about/news/2020/06/16/fact-sheet-explaining-operation-warpspeed.html. Accessed 18 Jan 2021. Dong, Y., X. Mo, Y. Hu, X. Qi, F. Jiang, Z. Jiang, and S. Tong. 2020. Epidemiology of COVID-19 Among Children in China. Pediatrics, April 2020: e20200702. Eder, S., M. Twohey, and A. Mandavilli. 2020. Antibody Test, Seen as Key to Reopening Country, Does Not Yet Deliver. New York Times, April 19. Available at https://www.nytimes.com/2020/04/19/us/coronavirus-antibody-tests.html. Accessed 18 Jan 2021. Elliott, I. 2016. Poverty and Mental Health: A Review to Inform the Joseph Rowntree Foundation's Anti-Poverty Strategy. London, UK: Mental Health Foundation. Emanuel, E.J., G. Persad, R. Upshur, B. Thome, M. Parker, A. Glickman, C. Zhang, C. Boyle, M. Smith, and J.P. Phillips. 2020. Fair Allocation of Scarce Medical Resources in the Time of Covid-19. New England Journal of Medicine 382: 2049–2055. Eyal, N., M. Lipsitch, and P.G. Smith. 2020. Human Challenge Studies to Accelerate Coronavirus Vaccine Licensure. The Journal of Infectious Diseases 222 (11): 1752–1756. Ezell, B.C., S.P. Bennett, D. von Winterfeldt, J. Sokolowski, and A.J. Collins. 2010. Probabilistic Risk Analysis and Terrorism Risk. Risk Analysis 30 (4): 575–589. Ferguson, N.M., D. Laydon, G. Nedjati-Gilani, N. Imai, K. Ainslie, M. Baguelin, S. Bhatia, A. Boonyasiri, Z. Cucunubá, G. Cuomo-Dannenburg, A. Dighe, I. Dorigatti, H. Fu, K. Gaythorpe, W. Green, A. Hamlet, W. Hinsley, L.C. Okell, S. van Elsland, H. Thompson, R. Verity, E. Volz, H. Wang, Y. Wang, P.G.T. Walker, C. Walters, P. Winskill, C. Whittaker, C.A. Donnelly, R. Riley, and A.C. Ghani. 2020. Report 9: Impact of Non-Pharmaceutical Interventions (NPIs) to Reduce COVID-19 Mortality and Healthcare Demand. Available at https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf. Accessed 18 Jan 2021. Feuerstein, A., and M. Herper. 2020. Early Peek at Data on Gilead Coronavirus Drug Suggests Patients Are Responding to Treatment. Statnews, April 16. Available at https://www.statnews.com/2020/04/16/early-peek-at-data-on-gilead-coronavirus-drugsuggests-patients-are-responding-to-treatment/. Accessed 19 Jan 2021. Fink, S. 2020. Worst-Case Estimates for U.S. Coronavirus Deaths. New York Times, March 13.
Available at https://www.nytimes.com/2020/03/13/us/coronavirus-deaths-estimate.html. Accessed 18 Jan 2021. Firth, S. 2020. Singapore: The Model for COVID-19 Response? MedPage Today, March 5, 2020. Available at https://www.medpagetoday.com/infectiousdisease/covid19/85254. Accessed 18 Jan 2021. Food and Drug Administration. 2020. FDA Cautions Against Use of Hydroxychloroquine or Chloroquine for COVID-19 Outside of the Hospital Setting or a Clinical Trial Due to Risk of


Heart Rhythm Problems. Available at https://www.fda.gov/drugs/drug-safety-and-availability/fda-cautions-against-use-hydroxychloroquine-or-chloroquine-covid-19-outside-hospital-setting-or. Accessed 19 Jan 2021. Food and Drug Administration. 2021a. Emergency Use Authorization. Available at https://www.fda.gov/emergency-preparedness-and-response/mcm-legal-regulatory-and-policy-framework/emergency-use-authorization. Accessed 8 Jan 2021. Food and Drug Administration. 2021b. COVID-19 Vaccines. Available at https://www.fda.gov/emergency-preparedness-and-response/coronavirus-disease-2019-covid-19/covid-19-vaccines. Accessed 8 Jan 2021. Food and Drug Administration. 2021c. COVID-19 Update: FDA Broadens Emergency Use Authorization for Veklury (remdesivir) to Include All Hospitalized Patients for Treatment of COVID-19. Available at https://www.fda.gov/news-events/press-announcements/covid-19-update-fdabroadens-emergency-use-authorization-veklury-remdesivir-include-all-hospitalized. Accessed 8 Jan 2021. Foran, C., H. Byrd, and M. Raju. 2020. House Approves $480 Billion Package to Help Small Businesses and Hospitals, Expand Covid-19 Testing. CNN, April 23. Available at https://www.cnn.com/2020/04/23/politics/house-vote-small-business-aid-vote/index.html. Accessed 19 Jan 2021. French, M.J. 2020. Cuomo Pins Hope of Economic Restart on Young People, Antibody Test. Politico, March 27. Available at https://www.politico.com/states/new-york/albany/story/2020/03/26/cuomo-pins-hope-of-economic-restart-on-young-people-antibody-test-1269265. Accessed 19 Jan 2021. Gostin, L.O., E.A. Friedman, and S.A. Wetter. 2020. Responding to Covid-19: How to Navigate a Public Health Emergency Legally and Ethically. Hastings Center Report 50 (2): 8–12. Grein, J., N. Ohmagari, D. Shin, G. Diaz, E. Asperges, A. Castagna, T. Feldt, G. Green, M.L. Green, F.X. Lescure, E. Nicastri, R. Oda, K. Yo, E. Quiros-Roldan, A. Studemeister, J. Redinski, S. Ahmed, J. Bernett, D. Chelliah, D. Chen, S.
Chihara, S.H. Cohen, J. Cunningham, A.D. Monforte, S. Ismail, H. Kato, G. Lapadula, E. L’Her, T. Maeno, S. Majumder, M. Massari, M. Mora-Rillo, Y. Mutoh, D. Nguyen, E. Verweij, A. Zoufaly, A.O. Osinusi, A. DeZure, Y. Zhao, L. Zhong, A. Chokkalingam, E. Elboudwarej, L. Telep, L. Timbs, I. Henne, S. Sellers, H. Cao, S.K. Tan, L. Winterbourne, P. Desai, R. Mera, A. Gaggar, R.P. Myers, D.M. Brainard, R. Childs, and T. Flanigan. 2020. Compassionate Use of Remdesivir for Patients with Severe Covid-19. New England Journal of Medicine 382 (24): 2327–2336. Grigoli, F., and D. Sandri. 2020. COVID’s Impact in Real Time: Finding Balance Amid the Crisis. IMF Blog, October 8. Available at https://blogs.imf.org/2020/10/08/covids-impact-in-real-timefinding-balance-amid-the-crisis/. Accessed 15 Jan 2021. Guan, W.J., Z.Y. Ni, Y. Hu, W.H. Liang, C.Q. Ou, J.X. He, L. Liu, H. Shan, C.L. Lei, D.S.C. Hui, B. Du, L.J. Li, G. Zeng, K.Y. Yuen, R.C. Chen, C.L. Tang, T. Wang, P.Y. Chen, J. Xiang, S.Y. Li, J.L. Wang, Z.J. Liang, Y.X. Peng, L. Wei, Y. Liu, Y.H. Hu, P. Peng, J.M. Wang, J.Y. Liu, Z. Chen, G. Li, Z.J. Zheng, S.Q. Qiu, J. Luo, C.J. Ye, S.Y. Zhu, N.S. Zhong, and China Medical Treatment Expert Group for Covid-19. 2020. Clinical Characteristics of Coronavirus Disease 2019 in China. New England Journal of Medicine 382 (18): 1708–1720. Gupta, R., and S.R. Morain. 2020. Ethical Allocation of Future COVID-19 Vaccines. Journal of Medical Ethics. Published Online First: 17 December 2020. Hahn, S. 2020. What We at the FDA Are Doing to Fight Covid-19. CNN, March 30. Available at https://www.cnn.com/2020/03/30/opinions/fda-coronavirus-vaccine-testing-hahn/index.html. Accessed 19 Jan 2021. Harmon, A. 2020. Why We Don’t Know the True Fatality Rate for COVID-19. New York Times, April 17. Available at https://www.nytimes.com/2020/04/17/us/coronavirus-death-rate.html. Accessed 19 Jan 2021. Healthline. 2020. How South Korea Successfully Battled COVID-19 While the U.S. Didn’t. 
Available at https://www.healthline.com/health-news/what-south-korea-has-done-correctly-inbattling-covid-19. Accessed 19 Jan 2021. Heflin, C.M., and J. Iceland. 2009. Poverty, Material Hardship and Depression. Social Science Quarterly 90 (5): 1051–1071.


Herbst, M. 2020. A South Korean COVID-19 Czar Has Some Advice for Trump. Wired, March 26. Available at https://www.wired.com/story/a-south-korean-covid-19-czar-has-some-advicefor-trump/. Accessed 19 Jan 2021. Holmdahl, I., and C. Buckee. 2020. Wrong But Useful—What Covid-19 Epidemiologic Models Can and Cannot Tell Us. New England Journal of Medicine 383: 303–305. Human Rights Watch. 2019. World Report 2019: Syria. Available at https://www.hrw.org/worldreport/2019/country-chapters/syria. Accessed 19 Jan 2021. Institute for Health Metrics and Evaluation. 2020. COVID-19 Projections. Available at https://covid19.healthdata.org/united-states-of-america. Accessed 19 Jan 2021. Ioannidis, J.P. 2020. A Fiasco in the Making? As the Coronavirus Pandemic Takes Hold, We Are Making Decisions Without Reliable Data. Stat News, March 17. Available at https://www.statnews.com/2020/03/17/a-fiasco-in-the-making-as-the-coronavirus-pandemictakes-hold-we-are-making-decisions-without-reliable-data/. Accessed 19 Jan 2021. Jauhar, S. 2020. When a Covid-19 Vaccine Becomes Available, Who Should Get It First? Stat, May 23. Available at https://www.statnews.com/2020/05/23/when-a-covid-19-vaccine-becomes-available-who-should-get-it-first/. Accessed 19 Jan 2021. Jewell, N.P., J.A. Lewnard, and B.L. Jewell. 2020. Predictive Mathematical Models of the COVID-19 Pandemic: Underlying Principles and Value of Projections. Journal of the American Medical Association 323 (19): 1893–1894. Jin, R.L., C.P. Shah, and T.J. Svoboda. 1995. The Impact of Unemployment on Health: A Review of the Evidence. Canadian Medical Association Journal 153 (5): 529–540. John, A., J. Pirkis, D. Gunnell, L. Appleby, and J. Morrissey. 2020. Trends in Suicide During the Covid-19 Pandemic. British Medical Journal 371: 1–2. Johns Hopkins University. 2021. COVID-19 Global Cases. Available at https://coronavirus.jhu.edu/map.html. Accessed 14 Jan 2021. Jones, N. 2020. How COVID-19 Is Changing the Cold and Flu Season.
Nature 588: 388–390. Kavanagh, K. 2020. Viewpoint: US Woefully Unprepared for COVID-19 Pandemic. Infection Control Today, March 11. Available at https://www.infectioncontroltoday.com/covid-19/viewpoint-us-woefully-unprepared-covid-19-pandemic. Accessed 19 Jan 2021. Kelleher, S.R. 2020. Rollout: Here's When States Are Due to Lift Stay-at-Home Orders. Forbes, April 10. Available at https://www.forbes.com/sites/suzannerowankelleher/2020/04/12/rolloutheres-when-states-are-due-to-lift-stay-at-home-orders/#39825a4c61b1. Accessed 19 Jan 2021. Keller, J. 2020. Why the US Government Was Unprepared for COVID-19, According to a Biodefense Expert. Task and Purpose, March 20. Available at https://taskandpurpose.com/analysis/national-biodefense-strategy-coronavirus-covid-19. Accessed 19 Jan 2021. Klingner, B. 2020. South Korea Provides Lessons, Good and Bad, on Coronavirus Response. The Heritage Foundation, March 28. Available at https://www.heritage.org/asia/commentary/southkorea-provides-lessons-good-and-bad-coronavirus-response. Accessed 19 Jan 2021. Khullar, D., and D.A. Chokshi. 2018. Health, Income, & Poverty: Where We Are & What Could Help. Health Affairs, October 4. Available at https://www.healthaffairs.org/do/10.1377/hpb20180817.901935/full/. Accessed 19 Jan 2021. Kupferschmidt, K. 2020. Scientists Put Survivors' Blood Plasma to the Test. Science 368 (6494): 922–923. Kupferschmidt, K., and J. Cohen. 2020. WHO Launches Global Megatrial of the Four Most Promising Coronavirus Treatments. Science, March 22. Available at https://www.sciencemag.org/news/2020/03/who-launches-global-megatrial-four-most-promising-coronavirus-treatments. Accessed 19 Jan 2021. Kuznia, R. 2020. The Timetable for a Coronavirus Vaccine Is 18 Months. Experts Say That's Risky. CNN, April 1. Available at https://www.cnn.com/2020/03/31/us/coronavirus-vaccine-timetable-concerns-experts-invs/index.html. Accessed 19 Jan 2021. Lambert, L. 2020. Real Unemployment Rate Soars Past 20%—And the U.S.
Has Now Lost 26.5 Million Jobs. Fortune, April 23. Available at https://fortune.com/2020/04/23/us-unemploymentrate-numbers-claims-this-week-total-job-losses-april-23-2020-benefits-claims/. Accessed 19 Jan 2021.


Lanktree, G. 2017. Hurricane Harvey Could Cost $190 Billion, Topping Hurricane Katrina. Newsweek, September 1. Available at https://www.newsweek.com/hurricane-harvey-could-cost190-billion-topping-hurricane-katrina-658242. Accessed 19 Jan 2021. Latham, S., and A. Wilson. 2020. The Case Is Building That COVID-19 Had a Laboratory Origin. Independent Science News, June 2. Available at https://www.independentsciencenews.org/health/the-case-is-building-that-covid-19-had-a-lab-origin/. Accessed 19 Jan 2021. Lee, Y.G., X. Garza-Gomez, and R.M. Lee. 2018a. Ultimate Costs of the Disaster: Seven Years After the Deepwater Horizon Oil Spill. Journal of Corporate Accounting and Finance 29 (1): 69–79. Lee, H.W., S.H. Park, M.W. Weng, H.T. Wang, W.C. Huang, H. Lepor, X.R. Wu, L.C. Chen, and M.S. Tang. 2018b. E-cigarette Smoke Damages DNA and Reduces Repair Activity in Mouse Lung, Heart, and Bladder as Well as in Human Lung and Bladder Cells. Proceedings of the National Academy of Sciences of the United States of America 115 (7): E1560–E1569. Lenhart, O. 2019. The Effects of Income on Health: New Evidence from the Earned Income Tax Credit. Review of Economics of the Household 17: 377–410. London, A.J., and J. Kimmelman. 2020. Against Pandemic Research Exceptionalism. Science 368 (6490): 476–477. Long, H., and A. van Dam. 2020. America Is in a Depression. The Challenge Now Is to Make It Short-Lived. Washington Post, April 10. Available at https://www.washingtonpost.com/business/2020/04/09/66-million-americans-filed-unemployed-last-week-bringing-pandemic-total-over-17-million/. Accessed 19 Jan 2021. Mandel, A., and V. Veetil. 2020. The Economic Cost of COVID Lockdowns: An Out-of-Equilibrium Analysis. Economics of Disasters and Climate Change 4: 431–451. McClung, N., M. Chamberland, K. Kinlaw, D.B. Matthew, M. Wallace, B.P. Bell, G.M. Lee, H.K. Talbot, J.R. Romero, S.E. Oliver, and K. Dooling. 2020. The Advisory Committee on Immunization Practices' Ethical Principles for Allocating Initial Supplies of COVID-19 Vaccine—United States, 2020. Morbidity and Mortality Weekly Report 69: 1782–1786. McNeil Jr., D.G. 2020. How Much Herd Immunity Is Enough? New York Times, December 24, A1. Meßerschmidt, K. 2020. COVID-19 Legislation in the Light of the Precautionary Principle. The Theory and Practice of Legislation 8 (3): 267–292. Mehrotra, A., M. Chernew, D. Linetsky, H. Hatch, and D. Cutler. 2020. The Impact of the COVID-19 Pandemic on Outpatient Visits: A Rebound Emerges. The Commonwealth Fund, May 19. Available at https://www.commonwealthfund.org/publications/2020/apr/impact-covid-19-outpatient-visits. Accessed 19 Jan 2021. Mello, M.M., G. Persad, and D.B. White. 2020. Respecting Disability Rights—Toward Improved Crisis Standards of Care. New England Journal of Medicine 383: e26. Mervosh, S., D. Lu, and V. Swales. 2020. See Which States and Cities Have Told Residents to Stay at Home. New York Times, April 7. Available at https://www.nytimes.com/interactive/2020/us/coronavirus-stay-at-home-order.html. Accessed 19 Jan 2021. Miller, F.G., and C. Grady. 2001. The Ethical Challenge of Infection-Inducing Challenge Experiments. Clinical Infectious Diseases 33 (7): 1028–1033. Munthe, C., J. Heilinger, and V. Wild. 2020. Policy Brief: Ethical Aspects of Pandemic Public Policy-Making Under Uncertainty. Bremen, Germany: Competence Network Public Health COVID-19. National Conference of State Legislatures. 2021. COVID-19 Economic Relief Bill. Available at https://www.ncsl.org/ncsl-in-dc/publications-and-resources/covid-19-economic-relief-bill-stimulus.aspx. Accessed 15 Jan 2021. National Institutes of Health. 2020. News Release: Promising Interim Results from Clinical Trial of NIH-Moderna COVID-19 Vaccine. Available at https://www.nih.gov/news-events/news-releases/promising-interim-results-clinical-trial-nih-moderna-covid-19-vaccine. Accessed 18 Jan 2021. New Scientist. 2020. Estimates of the Predicted Coronavirus Death Toll Have Little Meaning. New Scientist, April 1, 2020. Available at https://www.newscientist.com/article/mg24532763-600-estimates-of-the-predicted-coronavirus-death-toll-have-little-meaning/. Accessed 19 Jan 2021. Nordt, C., I. Warnke, E. Seifritz, and W. Kawohl. 2015. Modelling Suicide and Unemployment: A Longitudinal Analysis Covering 63 Countries, 2000–11. Lancet Psychiatry 2 (3): 239–245.


Northam, J. 2020. Calls Grow to Ban Wet Markets Amid Concerns Over Disease Spread. NPR, April 16, 2020. Available at https://www.npr.org/sections/coronavirus-live-updates/2020/04/16/835937420/calls-grow-to-ban-wet-markets. Accessed 19 Jan 2021. Nye, J. 2020. Apparent Suicide Rates Did Not Increase During COVID-19 Pandemic. Psychiatry Advisor, December 15. Available at https://www.psychiatryadvisor.com/home/topics/suicideand-self-harm/apparent-suicide-rates-did-not-increase-during-covid-19-pandemic/. Accessed 15 Jan 2021. Olson, T. 2020. Doctors Raise Alarm About Health Effects of Continued Coronavirus Shutdown: 'Mass Casualty Incident'. Fox News, May 20. Available at https://www.foxnews.com/politics/doctors-raise-alarm-about-health-effects-of-continued-coronavirus-shutdown. Accessed 19 Jan 2021. Osborn, C.Y., S. Kripalani, K.M. Goggins, and K.A. Wallston. 2017. Financial Strain Is Associated with Medication Nonadherence and Worse Self-Rated Health Among Cardiovascular Patients. Journal of Health Care for the Poor and Underserved 28 (1): 499–513. Paine, N. 2020. Experts Think the Economy Would Be Stronger if COVID-19 Lockdowns Had Been More Aggressive. FiveThirtyEight, September 20. Available at https://fivethirtyeight.com/features/experts-think-the-economy-would-be-stronger-if-covid19-lockdowns-had-been-more-aggressive/. Accessed 15 Jan 2021. Parker, M.J., C. Fraser, L. Abeler-Dörner, and D. Bonsall. 2020. Ethics of Instantaneous Contact Tracing Using Mobile Phone Apps in the Control of the COVID-19 Pandemic. Journal of Medical Ethics 46 (7): 427–431. Partington, R., and D. Rushe. 2020. Stock Market Rally Pushes Dow Jones to Record High of 30,000. The Guardian, November 25. Available at https://www.theguardian.com/business/2020/nov/24/dow-jones-hits-record-high. Accessed 20 Jan 2021. Paterlini, M. 2020. 'Closing Borders Is Ridiculous': The Epidemiologist Behind Sweden's Controversial Coronavirus Strategy. Nature 580: 574. Pinto-Bazurco, J. 2020. The Precautionary Principle. Still Only One Earth: Lessons from 50 Years of UN Sustainable Development Policy. International Institute for Sustainable Development, October 23. Available at https://www.iisd.org/articles/precautionary-principle. Accessed 20 Jan 2021. Public Health Emergency. 2020. Public Health Emergency Declaration. Available at https://www.phe.gov/Preparedness/legal/Pages/phedeclaration.aspx. Accessed 19 Jan 2021. Ranney, M.L., V. Griffeth, and A.K. Jha. 2020. Critical Supply Shortages—The Need for Ventilators and Personal Protective Equipment During the Covid-19 Pandemic. New England Journal of Medicine 382: e41. RECOVERY Collaborative Group. 2020. Dexamethasone in Hospitalized Patients with Covid-19—Preliminary Report. New England Journal of Medicine, July 17. Available at https://www.nejm.org/doi/full/10.1056/NEJMoa2021436?query=RP. Accessed 17 July 2020. Resnik, D.B. 2012. Environmental Health Ethics. Cambridge, UK: Cambridge University Press. Resnik, D.B. 2018. The Ethics of Research with Human Subjects: Protecting People, Advancing Science, Promoting Trust. Cham, Switzerland: Springer. Rieder, T.N., A. Barnhill, J. Bernstein, and B. Hutler. 2020. When to Reopen the Nation Is an Ethics Question—Not Only a Scientific One. Hastings Bioethics Forum, April 28. Available at https://www.thehastingscenter.org/when-to-reopen-the-nation-is-an-ethics-question-notonly-a-scientific-one/. Accessed 19 Jan 2021. Ritchie, H., J. Hasell, C. Appel, and M. Roser. 2019. Terrorism. Our World in Data. Available at https://ourworldindata.org/terrorism. Accessed 19 Jan 2021. Ritchie, H., and M. Roser. 2019a. Natural Disasters. Our World in Data. Available at https://ourworldindata.org/natural-disasters. Accessed 19 Jan 2021. Ritchie, H., and M. Roser. 2019b. Causes of Death. Our World in Data. Available at https://ourworldindata.org/causes-of-death. Accessed 19 Jan 2021. Roberts, S. 2020. Flattening the Coronavirus Curve. New York Times, March 27.
Available at https://www.nytimes.com/article/flatten-curve-coronavirus.html. Accessed 19 Jan 2021. Rome, B.N., and J. Avorn. 2020. Drug Evaluation During the Covid-19 Pandemic. New England Journal of Medicine, April 14, 2020.


Rosenbaum, L. 2020. Facing Covid-19 in Italy—Ethics, Logistics, and Therapeutics on the Epidemic's Front Line. New England Journal of Medicine 382: 1873–1875. Sanche, S., Y. Lin, C. Xu, E. Romero-Severson, N. Hengartner, and R. Ke. 2020. High Contagiousness and Rapid Spread of Severe Acute Respiratory Syndrome Coronavirus 2. Emerging Infectious Diseases 26 (7): 1470–1477. Sanders, J.M., M.L. Monogue, T.Z. Jodlowski, and J.B. Cutrell. 2020. Pharmacologic Treatments for Coronavirus Disease 2019 (COVID-19): A Review. Journal of the American Medical Association 323 (18): 1824–1836. Schmidt, H. 2020. Vaccine Rationing and the Urgency of Social Justice in the COVID-19 Response. Hastings Center Report 50 (3): 46–49. Schwalbe, N. 2020. We Could Be Vastly Overestimating the Death Rate for COVID-19. Here's Why. World Economic Forum, April 4. Available at https://www.weforum.org/agenda/2020/04/wecould-be-vastly-overestimating-the-death-rate-for-covid-19-heres-why/. Accessed 19 Jan 2021. Scott, D. 2020. The Covid-19 Risks for Different Age Groups, Explained. Vox, March 23. Available at https://www.vox.com/2020/3/23/21190033/coronavirus-covid-19-deaths-by-age. Accessed 19 Jan 2021. Segal, S., and D. Gerstel. 2020. The Global Economic Impacts of COVID-19. Center for Strategic and International Studies, March 10. Available at https://www.csis.org/analysis/global-economicimpacts-covid-19. Accessed 19 Jan 2021. Selgelid, M.J. 2005. Ethics and Infectious Disease. Bioethics 19 (3): 272–289. Selgelid, M.J. 2009. Pandethics. Public Health 123 (3): 255–259. Shah, S.K., F.G. Miller, T.C. Darton, D. Duenas, C. Emerson, H. Fernandez Lynch, E. Jamrozik, N.S. Jecker, D. Kamuya, M. Kapulu, J. Kimmelman, D. MacKay, M.J. Memoli, S.C. Murphy, R. Palacios, T.L. Richie, M. Roestenberg, A. Saxena, K. Saylor, M.J. Selgelid, V. Vaswani, and A. Rid. 2020. Ethics of Controlled Human Infection to Study COVID-19. Science 368 (6493): 832–834. Sharfstein, J.M., and C.C. Morphew. 2020. The Urgency and Challenge of Opening K-12 Schools in the Fall of 2020. Journal of the American Medical Association 324 (2): 133–134. Singer, P., and R.Y. Chappell. 2020. Pandemic Ethics: The Case for Experiments on Human Volunteers. Washington Post, April 27. Available at https://www.washingtonpost.com/opinions/2020/04/27/pandemic-ethics-case-experiments-human-volunteers/?arc404=true. Accessed 19 Jan 2021. Singh, R.B. 2020. SARS-CoV-2 Structure. From: Features, Evaluation and Treatment Coronavirus (COVID-19). Available at https://www.ncbi.nlm.nih.gov/books/NBK554776/figure/article-52171.image.f3/. Accessed 19 Jan 2021. Soares, C. 2020. Reasons for Hope: The Drugs, Tests and Tactics That May Conquer Coronavirus. Reuters, April 17. Available at https://www.reuters.com/article/us-health-coronavirus-lifeline/reasons-for-hope-the-drugs-tests-and-tactics-that-may-conquer-coronavirus-idUSKBN21Z2HP. Accessed 19 Jan 2021. Sohn, E. 2016. Can Poverty Lead to Mental Illness? NPR, October 30, 2016. Available at https://www.npr.org/sections/goatsandsoda/2016/10/30/499777541/can-poverty-lead-to-mental-illness. Accessed 19 Jan 2021. Sudworth, J. 2020. Coronavirus: Wuhan Emerges from the Harshest of Lockdowns. BBC News, April 8. Available at https://www.bbc.com/news/world-asia-china-52197054. Accessed 19 Jan 2021. Sutton, D., K. Fuchs, M. D'Alton, and D. Goffman. 2020. Universal Screening for SARS-CoV-2 in Women Admitted for Delivery. New England Journal of Medicine 382 (22): 2163–2164. Tappe, A. 2020. America's Unemployment Rate Falls to 13.3% as Economy Posts Surprise Job Gains. CNN, June 5. Available at https://www.cnn.com/2020/06/05/economy/may-jobs-report2020-coronavirus/index.html. Accessed 19 Jan 2021. Taylor, A. 2014. Bhopal: The World's Worst Industrial Disaster, 30 Years Later. The Atlantic Monthly, December 2. Available at https://www.theatlantic.com/photo/2014/12/bhopal-the-worlds-worst-industrial-disaster-30-years-later/100864/. Accessed 19 Jan 2021. Tharoor, I.
2020. Has Sweden’s Coronavirus Strategy Failed? Washington Post, November 18. Available at https://www.washingtonpost.com/world/2020/11/18/sweden-coronavirus-surge-pol icy/. Accessed 16 Jan 2021.

304

9 Public Health Emergencies

Thomas L. 2020. Life Sciences Medical News, June 8, 2020. Cremation Numbers Reveal Possible Suppression of True COVID-19 Data by China. Available at https://www.news-medical.net/ news/20200608/Cremation-numbers-reveal-possible-suppression-of-true-COVID-19-data-inChina.aspx. Accessed 20 Jan 2021. Tsao, T.Y., K.J. Konty, G. Van Wye, O. Barbot, J.L. Hadler, N. Linos, and M.T. Bassett. 2016. Estimating Potential Reductions in Premature Mortality in New York City from Raising the Minimum Wage to $15. American Journal of Public Health 106 (6): 1036–1041. Tversky, A., and D. Kahneman. 1974. Judgment Under Uncertainty: Heuristics and Biases. Science 185 (4157): 1124–1131. United Nations Children’s Fund. 2020. Don’t Let Children Be the Hidden Victims of COVID19 Pandemic. Available at https://www.unicef.org/press-releases/dont-let-children-be-hidden-vic tims-covid-19-pandemic. Accessed 20 Jan 2021. United States Bureau of Labor and Statistics. 2021. The Employment Situation—December 2020. Available at https://www.bls.gov/news.release/pdf/empsit.pdf. Accessed 15 Jan 2021. United States Government. 2018. National Biodefense Strategy. Available at https://www.whiteh ouse.gov/wp-content/uploads/2018/09/National-Biodefense-Strategy.pdf. Accessed 20 Jan 2021. Veatch, R.M., and L.F. Ross. 2015. Transplantation Ethics, 2nd ed. Washington, DC: Georgetown University Press. Vogel, G. 2020. First Antibody Surveys Draw Fire for Quality, Bias. Science 368 (6489): 350–351. Vynnycky, E., A. Trindall, and P. Mangtani. 2007. Estimates of the Reproduction Numbers of Spanish Influenza Using Morbidity Data. International Journal of Epidemiology 36 (4): 881–889. Wagner, J., and M.D. Dahnke. 2015. Nursing Ethics and Disaster Triage: Applying Utilitarian Ethical Theory. Journal of Emergency Nursing 41 (4): 300–306. Woolf, S.H., D.A., Chapman, and J.H. Lee 2021. COVID-19 as the Leading Cause of Death in the United States. Journal of the American Medical Association 325 (2): 123–124. World Bank. 
2020. Food Security and COVID-19. Available at https://www.worldbank.org/en/ topic/agriculture/brief/food-security-and-covid-19. Accessed 22 Jan 2021. World Economic Forum. 2020a. Mad March: How the Stock Market Is Being Hit by COVID19. Available at https://www.weforum.org/agenda/2020/03/stock-market-volatility-coronavirus/. Accessed 20 Jan 2021. World Economic Forum. 2020b. This Is How Much the Coronavirus Will Cost the World’s Economy, According to the UN. Available at https://www.weforum.org/agenda/2020/03/coronavirus-covid19-cost-economy-2020-un-trade-economics-pandemic/. Accessed 20 Jan 2021. World Health Organization. 2020. Determinants of Health. Available at https://www.who.int/hia/ evidence/doh/en/. Accessed 20 Jan 2021. World Health Organization. 2021. Quality of Suicide Data. Available at https://www.who.int/men tal_health/suicide-prevention/mortality_data_quality/en/. Accessed 20 Jan 2021. World Population Review. 2020. World War Two Casualties by Country 2020. Available at https:// worldpopulationreview.com/countries/world-war-two-casualties-by-country/. Accessed 20 Jan 2021. Worldometer. 2020. Age, Sex, Existing Conditions of COVID-19 Cases and Deaths. Available at: https://www.worldometers.info/coronavirus/coronavirus-age-sex-demographics/. Accessed 20 Jan 2021. Wu, Z., and J.M. McGoogan. 2020. Characteristics of and Important Lessons from the Coronavirus Disease 2019 (COVID-19) Outbreak in China: Summary of a Report of 72,314 Cases from the Chinese Center for Disease Control and Prevention. Journal of the American Medical Association 323 (13): 1239–1242. Zack, N. 2009. Ethics for Disaster. Lanham, MD: Rowman and Littlefield. Zhou, P., X.L. Yang, X.G. Wang, B. Hu, L. Zhang, W. Zhang, H.R. Si, Y. Zhu, B. Li, C.L. Huang, H.D. Chen, J. Chen, Y. Luo, H. Guo, R.D. Jiang, M.Q. Liu, Y. Chen, X.R. Shen, X. Wang, X.S. Zheng, K. Zhao, Q.J. Chen, F. Deng, L.L. Liu, B. Yan, F.X. Zhan, Y.Y. Wang, G.F. Xiao, and Z.L. Shi. 2020. 
A Pneumonia Outbreak Associated with a New Coronavirus of Probable Bat Origin. Nature 579 (7798): 270–273.

Chapter 10

Conclusion

10.1 Summary of Key Arguments and Conclusions

I began this book with some general reflections on how we think about risks, benefits, and precautions. I observed that as individuals we make decisions involving precautions in a variety of situations that we face each day, ranging from deciding whether to drive to work when snow is in the forecast, to taking a new job, to seeking medical treatment for chest pain. As groups we make decisions involving precautions in business, industry, government, research, engineering, and education. Precautionary reasoning, I argued, involves deciding what is a reasonable precaution to take, given the available options (e.g. risk avoidance, risk minimization, or risk management), our values (e.g. public health and safety, human rights, economic development, environmental protection), our tolerance for risk and uncertainty, and our knowledge and information. Risks may arise from our choices and actions or they may arise on their own with little or no human involvement (e.g. natural disasters).

Decisions about precautions can be difficult to make because we often face scientific (or epistemological) or moral (or value) uncertainty concerning outcomes. Scientific uncertainty occurs when we do not have enough knowledge, information, data, or evidence to make accurate and precise predictions about the outcomes of different options available to us. Moral uncertainty occurs when we do not have a clear understanding of how to evaluate possible outcomes, because we do not know what we value or how to rank our values, or we are making a group decision and we disagree about values or how to rank them. While we may be able to reduce our uncertainty by conducting scientific research, analyzing data, soliciting public opinion, and engaging in moral reflection and debate, it may often be the case that we must make decisions in the face of significant uncertainty. Decision-making rules and strategies address uncertainty in different ways.
© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021. D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0_10

In Chapter 1, I introduced the Precautionary Principle (PP) as a rule for making decisions involving risks when we face significant uncertainty. I suggested, but did not argue at length, that the PP can complement other approaches to decision-making involving risks, such as rules and strategies from decision theory.

In Chapter 2, I described and examined some rules and strategies from decision theory, including those that deal with decisions under ignorance (i.e. decisions where we do not know the probabilities related to possible outcomes), decisions under risk (i.e. decisions where we know the probabilities related to possible outcomes), and social choices (i.e. group decision-making). I discussed how to apply these rules and strategies to different types of decisions and considered some of their strengths and limitations. I argued that while decision theory provides us with some valuable insights into rational decision-making, it does not tell us how to make reasonable decisions because its rules and strategies are morally neutral, and for a decision to be reasonable it must incorporate moral and social values. Therefore, we must look beyond decision theory for the moral guidance needed for making reasonable decisions.

In Chapter 3, I considered how some influential moral theories can provide us with guidance for making reasonable decisions related to risks, benefits, and precautions insofar as they endorse specific values or tell us how to resolve value-conflicts. The values emphasized by different theories include individually-oriented values, such as happiness, life, individual health, knowledge, virtue, and human dignity and rights; social values, such as public health, social relationships, social justice, and economic development; and environmental values, such as protection of non-human species, habitats, ecosystems, and biodiversity. I argued that these theories encounter some substantial objections that call into question their ability to resolve value-conflicts and provide normative guidance for precautionary reasoning.
Since there is no single, overarching moral theory that answers all objections satisfactorily, we must deal with an assortment of incommensurable values, which we must consider, weigh, and prioritize when making choices concerning risks, benefits, and precautions.

In Chapter 4, I defended the PP as a rule for decision-making when we face scientific or moral uncertainty. I reviewed the history of the PP and argued that it emerged as an alternative to evidence-based rules for public decision-making, such as expected utility theory (EUT) and its offshoots (e.g. cost/benefit analysis and risk management). I considered three main criticisms of the PP, i.e. that the principle is vague, that it is incoherent, and that it is opposed to science, technology, and progress, and I defined and explicated a version of the PP that can overcome these objections. I argued that to meet these objections the PP must be defined clearly and include a minimum standard of evidence for risks, such as plausibility, as well as a concept of reasonableness for assessing precautionary measures. I also argued that we can use four criteria for determining whether precautions are reasonable: proportionality (balance risks and benefits proportionally), fairness (distribute risks and benefits fairly and make decisions based on fair procedures), consistency (ensure that policies are consistent), and epistemic responsibility (base policies on the best available data and evidence). I examined the relationship between the PP and decision theory and moral theory and argued that the PP is best understood as a confluence of ideas from decision theory and moral theory. It is like a rule from decision theory insofar as it provides us with guidance for making decisions about risks and benefits,
and it is like a moral principle insofar as it involves balancing and prioritizing values. I also considered alternative interpretations of the PP, i.e. that it is a type of burden of proof or a procedural rule, and I argued that the PP should be interpreted as a rule for making practical decisions. Finally, I showed how one can apply the PP to practical decisions.

In Chapter 5, I developed my approach to precautionary reasoning in greater depth. I argued that there are different rules and strategies we can use to make precautionary decisions and that the decision about which rule or strategy to use is itself an important choice. Whether it is reasonable to adopt a particular rule or strategy for decision-making depends on contextual factors related to the decision, such as scientific or moral uncertainty, moral and social values, and tolerance for risk and uncertainty. I argued that EUT (and its offshoots) is a reasonable approach to decision-making when scientific and moral uncertainty are both low, but that the PP is a better approach when scientific or moral uncertainty is high. Rules for decision-making under ignorance may be used when scientific uncertainty is high and moral uncertainty is low. I also argued that it may be reasonable to change decision-making rules or strategies as conditions warrant. For example, one could use the PP for making decisions about introducing a new technology when scientific or moral uncertainty is high, but then switch to EUT or risk management when uncertainty concerning the technology diminishes significantly. The PP is therefore one among many rules that we can use to make decisions involving possible harms and risks, and it complements other rules. I also illustrated my approach to precautionary reasoning in three different types of decisions: individual decisions, decision-making for others, and social choices.
I also discussed the importance of using democratic procedures for making social choices and answered several objections to democratic decision-making. I argued that public, community, and stakeholder engagement can play an important role in promoting fairness in democratic decision-making by ensuring that underrepresented and disadvantaged groups have meaningful input into these decisions.


10.2 Applications

In Chapters 6 through 9 I applied the PP to policy choices in four different areas: chemical regulation, genetic engineering, dual use research in the biomedical sciences, and public health emergencies.1 Some of the main conclusions drawn in these chapters were as follows.2

1 In theory, one could apply the PP to many issues not discussed in this book, such as climate change, geoengineering, electromagnetic radiation, and energy production (Munthe 2011; Hartzell-Nichols 2017). Physicist Stephen Hawking (1942–2018) suggested that we should take a precautionary approach toward attempting to contact extraterrestrial intelligent beings, because they might destroy humanity in order to pillage the Earth's resources (Whalen 2018).
2 It is worth noting that others might use the PP to support different conclusions from the ones drawn here, depending on how they apply the proportionality, fairness, consistency, and epistemic responsibility conditions to the issues.

Regulation of toxic substances. The PP would advise us to balance the benefits and risks of regulating toxic substances proportionally. Highly protective laws concerning toxic substances would minimize risks to public health and the environment but would also negatively impact industry and the economy. Permissive laws and regulations could benefit industry and the economy but increase risks to public health and the environment. Reasonable precautions would seem to fall somewhere between these two extremes and could include pre-market measures (such as pre-market safety testing), post-market ones (such as post-market research, surveillance, and monitoring), and others (such as research, education, and product labelling). The PP would support some form of pre-market safety testing that is more stringent than mere registration but less stringent than the testing required for new drugs or pesticides. To meet the requirements of distributive fairness, laws and regulations should include some extra protections for susceptible (or vulnerable) populations, so that they do not bear an unfair burden of chemical risks. To satisfy procedural fairness, laws and regulations should be developed with meaningful input (i.e. engagement) from the public as well as from stakeholders (such as public health or environmental groups and industry) and those directly impacted (such as people living or working near sources of toxic substances).

Access to life-saving experimental treatments. Access to life-saving experimental treatments for patients who have no other reasonable alternatives (including patients who need treatment during public health emergencies) presents difficult choices involving potential conflicts between rigorous research, public health, and patients’ rights. The PP would support policies that encourage rigorous research to move forward while granting access to experimental treatments to patients who need life-saving treatment and who have no reasonable alternatives. Access to experimental treatments, through emergency use authorizations (EUAs) or other mechanisms, can be granted to patients who are not participating in randomized, controlled trials (RCTs), provided this policy does not undermine RCT recruitment. RCTs can be modified to accelerate research or include more patients in studies. Patients can also participate in nonrandomized, uncontrolled trials that include methods for minimizing bias. Although EUA decisions may be based on preliminary analyses or incomplete data, final approval decisions should be based on data from RCTs. Researchers and physicians should do their best to ensure that patients who are facing life-threatening illnesses clearly understand the risks and benefits of treatment options, since they are likely to be desperately ill and may be willing to try almost anything that could save their lives. Under such dire circumstances, it is of utmost importance to promote informed decision-making by patients or their legal representatives and to avoid taking advantage of their vulnerability to achieve scientific or professional goals.

Regulation of drugs and other medical products. The PP lends some insight into the amount of evidence needed to approve drugs and other medical products. The PP implies that the amount of evidence needed to approve medical products or make them available to the public is partly a function of the potential benefits and risks of approval for patients and society (i.e. proportionality). When the benefits to patients are very high (e.g. potentially life-saving) a medical product may be approved or made available with minimal testing even though the potential harms may be great as well. When the risks of a medical product are very low it could also be made available with minimal testing, if it has potential benefits for patients and the impacts on society of approval are minimal. More extensive testing could be required for products where risks are significant but potential benefits are not very high.

Drug safety.
The PP would support reasonable measures for ensuring the safety of drugs, such as: requiring manufacturers to conduct long-term studies of the health effects of their drugs; sponsoring independent long-term studies of the health effects of drugs; encouraging and incentivizing health care professionals to report adverse drug effects to the MedWatch program; encouraging medical boards to closely monitor off-label prescribing; overseeing medical advertisements to ensure that the information conveyed is truthful, accurate, and understandable to the public; taking steps to promote integrity in clinical research; and enforcing requirements for clinical trial registration.

Regulation of electronic cigarettes. While e-cigarettes pose significant risks to public health that are not currently well understood, they also offer important public health benefits because they can help smokers quit. Though e-cigarettes are not harmless, they can help to reduce the harms of smoking. Banning e-cigarettes would be an unreasonable precaution because it would not balance risks and benefits proportionally. Banning e-cigarettes would also violate the consistency condition, since regular cigarettes are legal, and we have substantial evidence that they significantly harm human health. Some form of regulation (possibly combined with taxation) would seem to be the most reasonable policy approach for addressing the risks of e-cigarettes.

Protecting susceptible (or vulnerable) populations from chemical risks. Some groups within the general population (such as chronically ill people or children) may be more susceptible or vulnerable to risks caused by chemicals than others (such as healthy adults). The PP would recommend that policymakers consider the proportionality of risks to benefits as well as their distribution when making regulatory decisions concerning pollutants, pesticides, toxic substances, and other chemicals. Some
situations may involve ethically challenging trade-offs between fairness and proportionality. Fairness would require not only that risks are distributed fairly but also that fair procedures with meaningful input from the public and affected stakeholders are used for making these decisions.

Genetic engineering of microbes. The PP would support policies that minimize and mitigate the risks of genetic engineering of microbes by means of scientifically informed laws and regulations; rigorous oversight of scientists, staff, pathogens, and laboratories; and public, stakeholder, and community engagement.

Genetic engineering of plants. Genetically modified crops can significantly improve agricultural productivity and efficiency and help fight world hunger, but they also pose risks to public health, the environment, and society. Although scientific data and evidence do not support a ban on GM crops, many people oppose GM crops and some countries have banned them. While proportionality and consistency would not support a ban on GM crops, procedural fairness would support a ban if a majority of citizens in a nation favor a ban. If a majority of citizens in a nation (or their duly elected representatives) decide to ban GM crops/foods, then this decision is, for that nation, the most reasonable choice. It may not be reasonable from a scientific perspective, but it is reasonable from a democratic one. However, citizens and policymakers have the right to change their minds about GM crops. To enable citizens and policymakers to make epistemically and morally responsible choices concerning the management of risks related to genetic engineering and other emerging technologies, scientists should educate and inform the public about these issues. Mandatory labelling of GM foods is a relatively inexpensive and reasonable way of minimizing or mitigating risks because it allows consumers to make choices concerning their risk exposure.
People who do not want to eat GM foods should be provided with the information they need to make choices that reflect their values.

Genetic engineering of animals. Genetic engineering of animals offers important benefits to science and society but also poses risks to the animals themselves, public health, and the environment. The PP would support a policy that minimizes and mitigates the risks of animal genetic engineering by means of regulation and oversight. GM animals should have the same ethical and legal protections that other animals have, and scientists who create or work with GM animals should stay abreast of the latest developments in genetic engineering and veterinary medicine to minimize harms to animals. Additional research should be done on the risks and benefits of consuming GM meat and animal products to inform public policy decisions. The PP would advise us to avoid genetic engineering projects that could plausibly result in significant animal suffering or environmental harm and to proceed slowly and carefully with some types of animal genetic engineering, such as genetically modifying pigs to serve as sources of tissues/organs or creating human-like, animal-human chimeras, until we have a better understanding of these technologies and their associated risks and benefits. Meaningful and effective community engagement is an ethical prerequisite for releasing GM animals (such as mosquitoes) into the environment. Communities should have the right to decide whether the benefits of releases (such as field trials) are worth the risks, irrespective of whether scientists or regulatory officials think benefits are worth the risks.


Genetic engineering of human beings/somatic. The PP would advise us to take reasonable precautions when moving from pre-clinical studies of somatic genetic engineering to clinical ones to ensure that risks to human subjects are proportional to benefits to the subjects and society. Phase I gene therapy clinical trials should not be approved until there is convincing evidence concerning safety and possible efficacy. When a Phase I study is approved, it would be reasonable to take precautionary measures to minimize risks to subjects, such as testing the treatment on only one subject at first to determine how safe it is before testing it on others; clinical monitoring of subjects to protect their health and withdrawing them from the study if necessary; using data and safety monitoring boards to monitor data; and developing clear and comprehensive inclusion/exclusion criteria to protect subjects from harm. Informed consent can also play an important role in protecting subjects from risks. Subjects should understand the benefits and risks of clinical trials, so they can make participation decisions that reflect their values.

Genetic engineering of human beings/germline. Germline genetic engineering offers potential benefits to children who would otherwise be born with severe monogenic diseases (and their parents) but poses significant risks to children created by this technology, future generations, and society. The PP would support a policy of allowing germline genetic engineering to prevent the birth of children with serious, well-understood, severe monogenic disorders when there are no other reasonable alternatives. Germline genetic engineering used for this purpose should not be banned but should be tightly regulated and controlled to minimize and mitigate risks. However, since germline genetic engineering may not be safe enough to attempt in human beings at present, clinical trials should not be initiated until scientists have obtained more evidence concerning safety and efficacy.
Thus, a temporary moratorium (e.g. 5 years) would be a reasonable precautionary measure to give scientists more time to do additional research related to germline genetic engineering.3 Once the moratorium is lifted, the risks of germline genetic engineering to prevent severe monogenic disorders can be managed through existing legal and ethical frameworks. Using germline genetic engineering to prevent the birth of children with polygenic diseases is much more technically challenging than using it to prevent the birth of children with severe monogenic disorders and should be banned for the foreseeable future. However, this ban could be lifted when we have a better understanding of the benefits and risks of this type of germline engineering. Using germline genetic engineering for reasons other than preventing the birth of children with genetic diseases (e.g. enhancement) should be banned for the foreseeable future because not only is it very risky but it also raises significant moral and social issues that have not been resolved, such as eugenics, discrimination against people with disabilities, and exacerbation of socioeconomic inequalities.

3 As noted in Chapter 7, this moratorium would not apply to GGE conducted only for research purposes and not used to produce children.

Dual use research in the biomedical sciences. Dual use research in the biomedical sciences raises difficult questions for scientists, funding agencies, and journals, because methods, data, and results that could be used for public health purposes
(such as vaccine development) might also be used for malevolent purposes (such as bioweapons development). Dual use research in microbiology also creates a risk of accidental contamination when researchers try to reproduce or build upon experiments. The PP would support policies designed to avoid, minimize, or mitigate the risks of dual use research. Government funding agencies should decide whether the risks of research projects with dual use potential are proportional to the benefits. In some cases, an agency may decide not to fund research if it determines that the risks are not proportional to the benefits; or it may decide to classify the research if it determines that the research has important benefits but also has risks that cannot be managed if it is published. Journals face their own dilemmas when they decide whether to publish dual use research. Though open, transparent publication of research is the norm in science, redacted publication, with full access granted to responsible scientists and public health officials, is a potential option for managing the risks of dual use biomedical research in some cases. Since journals may not have the resources needed to manage a system for handling redacted publications, organizations with more resources, such as government agencies, could support this activity.

Public health emergencies. Public health emergencies present citizens and government officials with difficult choices that may involve conflicts among fundamental values, such as protecting public health, respecting human rights, promoting economic growth, and protecting the most vulnerable members of society. Policy dilemmas may arise when responding to emergencies or preparing for them. Societies often cannot avoid the risks created by public health emergencies and the best that can be done is to minimize or mitigate risks.
To minimize the public health risks of the COVID-19 pandemic, most governments enacted policies, known as lockdowns, that significantly restricted human rights and economic activity. The PP would recommend that leaders who are making decisions concerning lockdowns (e.g. when and how to implement or lift them) consider the proportionality of risks to benefits as well as their distribution. While lockdowns can save lives and reduce burdens on health care systems, they can also have severe economic and psychosocial impacts, which can negatively affect human life, health, and wellbeing. The proportionality of risks to benefits from a lockdown depends, in large part, on the transmissibility, lethality, treatability, and preventability of the disease: lockdowns that were widely regarded as a reasonable response to the COVID-19 pandemic would probably not be viewed as a reasonable response to seasonal influenza. Concerning the distribution of risks and benefits, a lockdown may benefit medically vulnerable groups but impose significant burdens on others, including socioeconomically vulnerable people and children. Leaders should also consider fairness issues in decision-making. To promote procedural fairness, government officials should conduct public, community, and stakeholder engagement concerning lockdown policies. Procedural fairness may be difficult to achieve during public health emergencies, because decisions often must be made quickly, with little time for meaningful public engagement or education.

Medical resource allocation decisions during public health emergencies present difficult ethical dilemmas for health care professionals and health care organizations
because they involve conflicts between fundamental values, such as promoting the overall good of society and treating people as having equal moral worth. The PP would recommend that allocation policies should be developed with an eye toward proportionality, fairness, consistency, and epistemic responsibility. The PP would advise policymakers to prevent allocation issues from arising in the first place by taking appropriate measures to prepare for public health emergencies. Lack of preparation for disasters, such as public health emergencies, chiefly occurs because government officials and citizens do not give enough fiscal and political priority to disaster preparedness. The PP has little to say about dealing with economic, moral, and political problems concerning disaster preparedness, other than to advise us to take reasonable measures to minimize or mitigate the risk of disasters. The PP would also recommend that policies should be based on meaningful input from the public and affected communities and stakeholders, in order to satisfy the procedural fairness requirement. To achieve this, governments should seek such input long before disasters arise, so there is ample time for discussion and debate. Decision-makers should continue to collect and process new knowledge and information during disasters and make appropriate changes in policies. Post hoc review of disasters and scientific research on disasters can be valuable tools for preparing for future disasters. The PP would support human challenge studies that intentionally expose healthy volunteers to pathogens that are causing pandemics, provided that the benefits of the research for society are proportional to the risks imposed on human subjects.

10.3 Limitations and Further Research

Before concluding this book, I would like to discuss some limitations of my account of precautionary reasoning and areas where further research is needed. One of the important limitations of my account of precautionary reasoning is that the PP does not address human rights issues in a straightforward way (a limitation that applies to my version of the PP as well as to other versions), because it focuses on the proportionality and distribution of risks and benefits when taking precautionary measures and does not explicitly consider rights-based limitations on those measures. While the PP is not a form of utilitarianism, it is a consequentialist principle because it focuses on the outcomes of actions and policies, rather than on deontological constraints (such as human rights considerations) on actions and policies. That being said, the PP can address concerns for individual rights (and welfare) in an indirect way. First, the PP can take human rights considerations into account when we think about the proportionality of risks to benefits, because proportionality is a moral justification of risks, based on expected benefits, not a quantitative comparison of risks and benefits. For example, lockdowns impose significant restrictions on human rights, such as the right to move and associate freely and the right to work. When deciding whether a lockdown is a reasonable precautionary measure to take in response to a
pandemic, government officials should consider whether the benefits of the lockdown are proportional to its risks. While human rights restrictions are not a risk per se, they can—and should—impact how government officials think about the proportionality of risks and benefits: policies that significantly restrict human rights are justified only if they offer compelling social benefits. The PP would deal with other issues involving limitations on human rights, such as regulation of medical products, foods, and human reproductive technologies, in a similar way. Second, the PP can take human rights considerations into account when we think about the distribution of risks and benefits, since policies that impose significant risks or burdens on some people in order to benefit others may be regarded as unfair. For example, in the controversy over access to life-saving experimental medications (discussed in Chapter 6), there was a conflict between the right to try life-saving experimental medications and the promotion of public health by means of rigorous research. While the balance of risks and benefits tends to favor restricting the right to access life-saving experimental medications in order to promote public health, one might argue that excessive restrictions are unfair because they require some patients to forego potentially significant treatments so that others may benefit. Thus, fairness considerations support compromise policies that allow patients to have access to potentially life-saving experimental medications while rigorous research moves forward. While the PP does not resolve this conflict by means of an appeal to human rights, human rights considerations nevertheless impact our thinking about the reasonableness of precautionary policies when we consider the fairness of these policies.
Another potential limitation of my account of precautionary reasoning is that it often does not yield a single solution to a decision problem, because what counts as a solution depends on the reasonableness of precautionary measures, and reasonableness depends on the context of the decision. My view implies that different people (or societies) may arrive at different solutions to the same decision problem because they have different values, knowledge, and tolerances for risk or uncertainty. While some readers might regard this feature of my view as a significant limitation that shows my approach is not very rigorous or precise, I tend to view it as a strength, because it accurately reflects the complexity and indeterminacy of real-world decision-making. What my approach lacks in rigor and precision it makes up for in realism and explanatory power.

Further research is needed in several areas related to the PP. As discussed in Chapter 2 and numerous other times in this book, important questions concerning knowledge and evidence arise when we are deciding whether to apply EUT, the PP, or rules for decision-making under ignorance to a choice we face, because to apply EUT to a decision we must know the probabilities of different outcomes. But what counts as knowing the probability of an outcome? Knowledge, according to most philosophers, is more than mere opinion or guesswork: it is belief that is based on evidence or justification (Chisholm 1977; Goldman and McGrath 2014). But how much evidence is required to know the probability of an outcome? As we saw in Chapter 2, the answer to this question depends, in part, on what we mean by probability. If we are estimating the probability that a coin will turn up heads when flipped, we can use the mathematical approach to say that the probability is 0.5. If we are
betting on a horse race, we may make a subjective, educated guess concerning the probability that it will win. In most public policy decisions, however, probabilities are based on objective, empirical evidence from observed frequencies of events or statistical modelling of dynamic systems (such as the climate or disease epidemics). Ideally, probabilities used in EUT decisions should be accurate and precise. The question of how much evidence is needed to apply EUT to decisions is not a purely scientific issue but also has moral, social, and political dimensions, since the amount of evidence required for estimating a probability used in decision-making depends, in part, on the consequences of the decision, especially the consequences of making a mistake (Douglas 2009; Steel 2015). One might require more evidence for a decision with significant consequences for public health, society, or the environment, such as a decision concerning the approval of a drug or pesticide, than for a decision with less of an impact, such as a decision concerning the name of a national park. Additional research and analysis in disciplines such as decision theory, statistics, economics, and philosophy of science can provide some insight into this issue. Questions concerning knowledge and evidence also arise when we judge that a hypothesis, theory, explanation, or prediction is plausible, and further research is needed on this concept. What is plausibility? How does plausibility differ from confirmation, validity, proof, probability, and possibility? How much and what type of evidence is needed to demonstrate plausibility? Are there degrees of plausibility? As we have seen, plausibility plays an important role in the PP insofar as possible harms and benefits addressed by this principle must be plausible.
It is not reasonable to take action to deal with possible but implausible harms, nor is it reasonable to fail to take action to deal with plausible harms in order to promote implausible benefits. Additional research and analysis in the philosophy of science and in scientific disciplines that make plausibility judgments could lend some insight into these issues. Further research is also needed on public, community, and stakeholder engagement (National Academies of Sciences, Engineering, and Medicine 2016, 2017). What are these different forms of engagement, and how do they differ? Why are they important? How can one engage the public, communities, or stakeholders effectively and meaningfully? What are some examples of successful and unsuccessful engagement? How does one measure success? As we have seen several times in this book, engagement is an important part of procedural fairness because people who are impacted by decisions should have meaningful input into them. Additional research in disciplines such as political science, sociology, communication science, social psychology, and political and moral philosophy can help us better understand public, community, and stakeholder engagement.
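The contrast drawn above between EUT and rules for decision-making under ignorance can be made concrete with a small numerical sketch. This example is not drawn from the book: the two options and all of the utility and probability numbers are hypothetical, and it is meant only to show how the choice of decision rule can change the recommendation.

```python
# Illustrative sketch only (not the author's method): how the choice of
# decision rule can change a policy recommendation. The options, utilities,
# and probabilities below are hypothetical.

options = {
    # option -> list of (utility, probability) pairs for its possible outcomes
    "approve":  [(100, 0.9), (-500, 0.1)],   # large benefit, small chance of severe harm
    "restrict": [(40, 0.95), (-50, 0.05)],   # modest benefit, modest worst case
}

def expected_utility(outcomes):
    """EUT: weight each outcome's utility by its probability. This requires
    knowing the probabilities, which is the evidential question raised above."""
    return sum(u * p for u, p in outcomes)

def maximin(outcomes):
    """A rule for decision-making under ignorance: ignore probabilities and
    rank options by their worst-case outcome."""
    return min(u for u, _ in outcomes)

eut_choice = max(options, key=lambda o: expected_utility(options[o]))
maximin_choice = max(options, key=lambda o: maximin(options[o]))

print(eut_choice)      # EUT favors "approve" (expected utility 40.0 vs 35.5)
print(maximin_choice)  # maximin favors "restrict" (worst case -50 vs -500)
```

On these numbers, EUT recommends approval while maximin, which a strongly precautionary stance resembles, recommends restriction. Which rule is appropriate depends on whether the probability estimates are well enough evidenced for the stakes involved, which is precisely the contextual question raised in the text.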

10.4 Final Thoughts

This concludes my book on precautionary reasoning. The take-home message from my book is that the PP is one among many different forms of precautionary reasoning. What type of precautionary reasoning we should decide to use depends on the context
of the decision, and we may decide to switch modes of reasoning as the context changes. Putting these points together, I would describe my approach to precautionary reasoning as heterogeneous, dynamic, and contextual. While my interpretation of the PP is hardly earth-shattering, it is novel and important because it points to a possible compromise between proponents and opponents of the PP. Proponents of the PP have claimed that the principle should be applied widely to many different types of public policy choices, while opponents have asserted that we should use evidence-based forms of reasoning, such as EUT or cost/benefit analysis, for these decisions. I have argued that the most reasonable viewpoint lies somewhere between these two extreme positions. My book is not likely to be the last word on precautionary reasoning or the PP, and I look forward to participating in further discussion and debate about these important topics.

References

Chisholm, R. 1977. Theory of Knowledge, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
Douglas, H. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
Goldman, A.I., and M. McGrath. 2014. Knowledge: A Contemporary Introduction. New York, NY: Oxford University Press.
Hartzell-Nichols, L. 2017. A Climate of Risk: Precautionary Principles, Catastrophes, and Climate Change. New York, NY: Routledge.
Munthe, C. 2011. The Price of Precaution and the Ethics of Risks. Dordrecht, Netherlands: Springer.
National Academies of Sciences, Engineering, and Medicine. 2016. Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. Washington, DC: National Academies Press.
National Academies of Sciences, Engineering, and Medicine. 2017. Communicating Science Effectively: A Research Agenda. Washington, DC: National Academies Press.
Steel, D. 2015. Philosophy and the Precautionary Principle. Cambridge, UK: Cambridge University Press.
Whalen, A. 2018. Stephen Hawking on Alien Life, Extraterrestrials, and the Possibility of UFOs Visiting Earth. Newsweek, March 14. Available at: https://www.newsweek.com/stephen-hawking-death-alien-contact-aliens-theory-ufo-search-breakthrough-844920. Accessed 20 Jan 2021.

Bibliography

Arrow, K. 1951. Social Choice and Individual Values. New York, NY: Wiley. Barash, J.R., and S.S. Arnon. 2014. A Novel Strain of Clostridium Botulinum That Produces Type B and Type H Botulinum Toxins. Journal of Infectious Disease 209 (2): 183–189. Bartlett, J.G., M.S. Ascher, E. Eitzen, A.D. Fine, J. Hauer, M. Layton, S. Lillibridge, M.T. Osterholm, T. O’Toole, G. Parker, T.M. Perl, P.K. Russell, D.L. Swerdlow, K. Tonat, and Working Group on Civilian Biodefense. 2001. Botulinum Toxin as a Biological Weapon: Medical and Public Health Management. Journal of the American Medical Association 285 (8): 1059–1070. Beasley, D. 2020. Exclusive: Trial of Gilead’s Potential Coronavirus Treatment Running Ahead of Schedule, Researcher Says. Reuters, April 24. Available at: https://www.reuters.com/article/ushealth-coronavirus-gilead-exclusive/exclusive-trial-of-gileads-potential-coronavirus-treatmentrunning-ahead-of-schedule-researcher-idUSKCN2262X3. Accessed 18 Jan 2021. Berger, L. 2020. AstraZeneca Says It May Consider Exposing Vaccine Trial Participants to Virus. Reuters, May 28. Available at: https://www.reuters.com/article/us-health-coronavirus-astraz eneca-challe/astrazeneca-says-it-may-consider-exposing-vaccine-trial-participants-to-virus-idU SKBN2342CC. Accessed 18 Jan 2021. Berger, M.W. 2021. How Can the World Allocate COVID-19 Vaccines Fairly? Penn Today, January 7. Available at: https://penntoday.upenn.edu/news/how-can-world-allocate-covid-19-vaccinesfairly. Accessed 18 Jan 2021. Biotechnology Innovation Organization. 2020a. Synthetic Biology Explained. Available at: https:// archive.bio.org/articles/synthetic-biology-explained. Accessed 18 Jan 2021. Boone, C.K. 1988. Bad axioms in Genetic Engineering. Hastings Center Report 18 (4): 9–13. Bowler, J. 2020. Study of High-Dose Chloroquine for COVID-19 Stopped Early Due to Patient Deaths. Science Alert, April 14. 
Available: https://www.sciencealert.com/clinical-trial-for-highdose-of-chloroquine-stopped-early-due-to-safety-concerns. Accessed 18 Jan 2021. Burton, F., and S. Stewart. 2008. Busting the Anthrax Myth. Strafor, July 30. Available at: https:// worldview.stratfor.com/article/busting-anthrax-myth. Accessed 18 Jan 2021. Callaway, E. 2020b. Hundreds of People Volunteer to Be Infected with Coronavirus. Nature, April 22. Available at: https://www.nature.com/articles/d41586-020-01179-x. Accessed 29 Apr 2020. Cartagena Protocol on Biosafety. 2001. Available at: https://bch.cbd.int/protocol/text/. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020a. Social Distancing, Quarantine, Isolation. Available at: https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-dis tancing.html. Accessed 18 Jan 2021.

© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2021 D. B. Resnik, Precautionary Reasoning in Environmental and Public Health Policy, The International Library of Bioethics 86, https://doi.org/10.1007/978-3-030-70791-0


Centers for Diseases Control and Prevention. 2020b. How to Protect Yourself and Others. https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020c. Suicide Mortality by State. Available at: https:// www.cdc.gov/nchs/pressroom/sosmap/suicide-mortality/suicide.htm. Accessed January. Centers for Disease Control and Prevention. 2020d. Age 21 Minimum Legal Drinking Age. Available at: https://www.cdc.gov/alcohol/fact-sheets/minimum-legal-drinking-age.htm. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020e. Disease Burden of Influenza. Available at: https://www.cdc.gov/flu/about/burden/index.html. Accessed 18 Jan 2021. Centers for Disease Control and Prevention. 2020f. Different COVID-19 Vaccines. Available at: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/different-vaccines.html. Accessed 14 Jan 2021. Centers for Disease Control and Prevention. 2021. When Vaccine Is Limited, Who Should Get Vaccinated First? Available at: https://www.cdc.gov/coronavirus/2019-ncov/vaccines/recommendati ons.html. Accessed 18 Jan 2021. Environmental Protection Agency. 2019. Process of Reviewing the National Ambient Air Quality Standards. Available at: https://www.epa.gov/criteria-air-pollutants/process-reviewing-nationalambient-air-quality-standards. Accessed 18 Jan 2021. Fontanarosa, P.B., D. Rennie, and C.D. DeAngelis. 2004. Postmarketing Surveillance: Lack of Vigilance, Lack of Trust. Journal of the American Medical Association 292 (21): 2647–2650. Galdone, P. 2013. Henny Penny: A Folk Tale Classic. New York, NY: Houghton-Mifflin. Gandhi, M., D.S. Yokoe, and D.V. Havlir. 2020. Asymptomatic Transmission, the Achilles’ Heel of Current Strategies to Control Covid-19. New England Journal of Medicine, April 24. Goldstein, B.D. 2001. The Precautionary Principle Also Applies to Public Health Actions. American Journal of Public Health 91 (9): 1358–1361. Goldstein, B.D., and R.S. 
Carruth. 2004. Implications of the Precautionary Principle: Is It a Threat to Science? International Journal of Occupational Medicine and Environmental Health 17 (1): 153–161. Guan, W.J., Z.Y. Ni, Y. Hu, W.H. Liang, C.Q. Ou, J.X. He, L. Liu, H. Shan, C.L. Lei, D.S.C. Hui, B. Du, L.J. Li, G. Zeng, K.Y. Yuen, R.C. Chen, C.L. Tang, T. Wang, P.Y. Chen, J. Xiang, S.Y. Li, J.L. Wang, Z.J. Liang, Y.X. Peng, L. Wei, Y. Liu, Y.H. Hu, P. Peng, J.M. Wang, J.Y. Liu, Z. Chen, G. Li, Z.J. Zheng, S.Q. Qiu, J. Luo, C.J. Ye, S.Y. Zhu, N.S. Zhong, and China Medical Treatment Expert Group for Covid-19. 2020. Clinical Characteristics of Coronavirus Disease 2019 in China. New England Journal of Medicine 382 (18): 1708–1720. Healthline. 2020a. What Is Herd Immunity and Could It Help Prevent COVID-19? Available at: https://www.healthline.com/health/herd-immunity#effectiveness. Accessed 19 Jan 2021. Hume, D. 2000 [1739]. A Treatise of Human Nature, ed. D.F. Norton and M.J. Norton. New York, NY: Oxford University Press. Institute for Health Metrics and Evaluation. 2020. COVID-19 Projections. Available at: https://cov id19.healthdata.org/united-states-of-america. Accessed 19 Jan 2021. Jamieson, D., and D. Wartenberg. 2001. The Precautionary Principle and Electric and Magnetic Fields. American Journal of Public Health 91 (9): 1355–1358. Jamrisko, M. 2020. Here’s How Much Money Countries Have Pledged for Virus Relief. Bloomberg, March 18. Available at: https://www.bloomberg.com/news/articles/2020-03-05/here-s-all-thecash-asian-nations-have-pledged-for-virus-relief. Accessed 19 Jan 2021. John, A., J. Pirkis, D. Gunnell, L. Appleby, and J. Morrissey. 2020. Trends in Suicide During the Covid-19 Pandemic. British Medical Journal 371: 1–2. Kahneman, D., and A. Tversky. 1979. Prospect Theory: An Analysis of Decision Under Risk. Econometrica 47: 263–291. Kuhlau, F., A.T. Höglund, K. Evers, and S. Eriksson. 2011. A Precautionary Principle for Dual Use Research in the Life Sciences. 
Bioethics 25 (1): 1–8.


Kupferschmidt, K. 2020. The Lockdowns Worked—But What Comes Next? Science 368 (6488): 218–219. Law, T. 2019. These Presidents Won the Electoral College—But Not the Popular Vote. Time Magazine, May 15. Available at: https://time.com/5579161/presidents-elected-electoral-college/. Accessed 19 Jan 2021. Lebergott, S. 1957. Annual Estimates of Unemployment in the United States, 1900–1954. In The Measurement and Behavior of Unemployment, 211–242. Washington, DC: National Bureau of Economic Research. Løkke, S. 2006. The Precautionary Principle and Chemicals Regulation: Past Achievements and Future Possibilities. Environmental Science and Pollution Research International 13 (5): 342– 349. McKinnon, K. 2009. Runaway Climate Change: A Justice-Based Case for Precautions. Journal of Social Philosophy 40: 187–207. McNeil Jt., D.G. 2020. How Much Herd Immunity Is Enough? New York Times, December 24, A1. Messer, K.D., S. Bligh, M. Costanigro, and H.M. Kaiser. 2015. Process Labeling of Food: Consumer Behavior, the Agricultural Sector, and Policy Recommendations. Council for Agricultural Science and Technology 10: 1–16. Miller, F.G., and S. Joffe. 2009. Benefit in Phase 1 Oncology Trials: Therapeutic Misconception or Reasonable Treatment Option? Clinical Trials 5 (6): 617–623. Miller, F.G., and S. Joffe. 2011. Balancing Access and Evaluation in the Approval of New Cancer Drugs. Journal of the American Medical Association 305 (22): 2345–2346. Miller, M.D., and M.A. Marty. 2017. Childhood—a Time Period Uniquely Vulnerable to Environmental Exposures. In The Praeger Handbook of Environmental Health, vol. 4, ed. R.H. Friis, 203–226. Santa Barbara, CA: Praeger. Munthe, C. 2017. Precaution and Ethics: Handling Risks, Uncertainties, and Knowledge Gaps in the Regulation of New Biotechnologies. Bern, Switzerland: Federal Office for Buildings and Publications and Logistics. Mutch, R.E. 2014. Buying the Vote: A History of Campaign Finance Reform. New York, NY: Oxford University Press. 
Naess, A. 1986. The Deep Ecological Movement: Some Philosophical Aspects. Philosophical Inquiry 8 (1/2): 10–31. National Institutes of Health. 2014. Statement on Funding Pause on Certain Types of Gain-ofFunction Research, October 16. Available at: https://www.nih.gov/about-nih/who-we-are/nihdirector/statements/statement-funding-pause-certain-types-gain-function-research. Accessed 19 Jan 2021. National Institutes of Health. 2020. Dual Use Research of Concern. Available at: https://osp.od.nih. gov/biotechnology/dual-use-research-of-concern/. Accessed 19 Jan 2021. National Security Decision Directive 189. 1985. Available at: https://fas.org/irp/offdocs/nsdd/nsdd189.htm. Accessed 19 Jan 2021. Occupational Safety and Health Administration. 2019. Toluene. Available at: https://www.osha. gov/SLTC/toluene/exposure_limits.html. Accessed 19 Jan 2021. Paine, N. 2020. Experts Think the Economy Would Be Stronger If COVID-19 Lockdowns Had Been More Aggressive. Five Thirty Eight, September 20. Available at: https://fivethirtyeight.com/features/experts-think-the-economy-would-be-stronger-if-covid19-lockdowns-had-been-more-aggressive/. Accessed 15 Jan 2021. Parens, E. (ed.). 1998. Enhancing Human Traits: Ethical and Social Implications. Washington, DC: Georgetown University Press. Popper, K. 1959. The Propensity Interpretation of Probability. British Journal of the Philosophy of Science 10: 25–42. Rieder, T.N., A. Barnhill, J. Bernstein, and B. Hutler. 2020. When to Reopen the Nation Is an Ethics Question—Not Only a Scientific One. Hastings Bioethics Forum, April 28. Available at: https://www.thehastingscenter.org/when-to-reopen-the-nation-is-an-ethics-question-notonly-a-scientific-one/. Accessed 19 Jan 2021.


Resnik, D.B. 2008. Environmental Disease, Biomarkers, and the Precautionary Principle. In Genomics and Environmental Regulation, ed. R.R. Sharp, G.E. Marchant, and J.A. Grodsky, 242–257. Baltimore, MD: Johns Hopkins University Press. Rome, B.N., and J. Avorn J. 2020. Drug Evaluation During the Covid-19 Pandemic. New England Journal of Medicine, April 14. Rosenbaum, L. 2020. Facing Covid-19 in Italy—Ethics, Logistics, and Therapeutics on the Epidemic’s Front Line. New England Journal of Medicine 382: 1873–1875. Second International Conference on the Protection of the North Sea. 1987. Available at: https:// seas-at-risk.org/old/1mages/1987%20London%20Declaration.pdf. Accessed January. Segal, S., and D. Gerstel. 2020. The Global Economic Impacts of COVID-19. Center for Strategic and International Studies, March 10. Available at: https://www.csis.org/analysis/global-eco nomic-impacts-covid-19. Accessed 19 Jan 2021. Shamoo, A.E., and D.B. Resnik. 2015. Responsible Conduct of Research, 3rd ed. New York, NY: Oxford University Press. Simon, H.A. 1997. Models of Bounded Rationality: Empirically Grounded Economic Reason. Cambridge, MA: MIT Press. Snell, K. 2020. What’s Inside the Senate’s $2 Trillion Coronavirus Aid Package. NPR, March 26. Available at: https://www.npr.org/2020/03/26/821457551/whats-inside-the-senate-s-2-trillioncoronavirus-aid-package. Accessed 14 Apr 2020. Sohn, E. 2016. Can Poverty Lead to Mental Illness? NPR, October 30, 2016. Available at: https://www.npr.org/sections/goatsandsoda/2016/10/30/499777541/can-poverty-lead-to-men tal-illness. Accessed 19 Jan 2021. UNAIDS. 2019. Global HIV and AIDS Statistics—2019 Fact Sheet. Available at: https://www.una ids.org/en/resources/fact-sheet. Accessed 20 Jan 2021. Victora, C.G., J.P. Habicht, and J. Bryce. 2004. Evidence-Based Public Health: Moving Beyond Randomized Trials. American Journal of Public Health 94 (3): 400–405. Wagner, W.E. 2000. The Precautionary Principle and Chemical Regulation in the U.S. 
Human and Ecological Risk Assessment: An International Journal 6 (3): 459–477. Wenar, L. 2015. Rights. Stanford Encyclopedia of Philosophy. Available at: http://plato.stanford. edu/entries/rights/#5.2 Accessed 20 Jan 2021. World Meteorological Association. 2019. A History of Climate Activities. Available at: https://pub lic.wmo.int/en/bulletin/history-climate-activities. Accessed 20 Jan 2021. World Population Review. 2020. World War Two Casualties by Country 2020. Available at: https:// worldpopulationreview.com/countries/world-war-two-casualties-by-country/. Accessed 20 Jan 2021.

Index

A Abigail Alliance for Better Access to Developmental Drugs v. von Eschenbach, 152 Abortion, 210, 211 Access to life-saving experimental treatments, 308 Access to treatment, 132, 153, 154, 287, 308 Accidental contamination, 312 Accountability, 88, 120 Acquired immunodeficiency syndrome (AIDS), 131, 155, 274 Adaptation, 174, 201 Adoption, 12, 25, 39, 52, 53, 57, 79, 82, 89, 122, 130, 139, 211, 217, 223–225, 227, 255, 262, 263, 275, 294, 307 Advance directives, 254 Advertising, 121, 137, 220 Agent Orange, 241 Agriculture, 177, 180, 191, 193, 195, 200, 202, 203, 256 Air pollution, 82, 96, 144, 158, 195, 279 Alcohol, 9, 129, 137, 146, 147, 190, 195, 280 Alcohol use, risks of, 137 Allele, 173, 174, 211, 216 Allocation of COVID-19 vaccines, 291 of dialysis, 292 of medical resources, 61, 290, 291, 312 of ventilators, 290, 292 Al-Qaeda, 244 Amino acids, 140, 167, 168 Anderson, French, 205 Animal experimentation, 199 Animal rights, 202, 255

Animal welfare, 70, 199 three Rs reduction, replacement, refinement, 199 Anthrax, 243–245, 258 Anthrax letters, 244, 245, 247, 258 Anthropocentrism, 67, 68 Antibodies, 130, 167, 177, 286–289 AquaBounty, 181, 183, 201 Aquinas, St. Thomas, 60 Aristotle, 58–60, 106 Arnon, Stephen, 252 Arrow, Kenneth, 40–42, 44 Arrow’s impossibility theorem, 40, 43 artificial intelligence, 101 Asbestos, 78, 129, 140–142, 149 Asilomar, CA, 175 Asilomar conference, 220 Astell, Mary, 69 Asteroids, 96 Aum Shinrikyo terrorist attacks, 244 Autonomous vehicles, 101–104, 106 Autonomy, 36, 37, 55, 63, 101, 115–118, 146, 148, 152, 195

B Bacillus thuringiensis (Bt), 193 Background checks, 265 Bacteria, 165, 167, 169, 174–177, 182, 186, 243–245, 252, 258 Baier, Annette, 69 Bans, Banning, 196 Barry, Brian, 67 Bayes’ Theorem, 28, 29


Bayesianism, 29–32 Beecher, Catherine, 69 Benefits plausibility, 89, 106, 213, 306, 315 Benefits and risks, 308, 311 Berg, Paul, 174, 220 Best interest standard, 117, 224 Bhopal, India chemical spill, 274 Bias, 27–31, 50, 62, 81, 82, 88, 131, 133, 154, 182, 192, 258, 260, 308 Biocontamination, 189 Biodiversity, 70, 90, 184, 306 Bioengineered food label, 198 Biological and chemical weapons, immorality of, 243 Biological warfare (biowarfare), 243 Biological weapons, 90, 184, 187, 243, 244, 247 Biological Weapons Convention (BWC), 184, 244 Biologic drugs, 177, 183, 184, 186 Biosafety, 175, 186–190, 248, 251, 252, 258–263 levels, 188 risk estimates, 259 Biosecurity, 252, 259 Biosphere, 68, 70, 90, 94 Bioterrorism, 247, 248, 258 legislation, 245 risk estimates, 258 Biowarfare, 242 Bioweapons, 242, 312 Bisphenol A (BPA), 142, 143, 146 Blackburn, Luke, 242, 243 Black market, 190, 196, 203 Borda counting, 41 Botulinum neurotoxin, 245, 246, 252 Botulinum toxin, 177, 249, 263 Brazil, 180, 181, 204, 271, 283, 287 British, use of smallpox as a weapon, 242 Bt crops, 180, 183, 193, 197 Burden of proof, 78, 95–97, 141, 149, 307 Bureaucracies, 120 Business, 11, 37, 38, 88, 118, 227, 257, 277, 278, 280, 282–285, 305 closures, 279, 282

C Cancer, 226 Carcinogen, 129, 140, 142–145, 149–151, 156 Carcinogenicity, 138, 140

Care ethics, 69 Carson, Rachel, 139 Cartagena Protocol on Biosafety, 78, 184 Catastrophe, 85 Categorical imperative (CI), 55–57, 63, 68, 112 vs. hypothetical imperatives, 55 Causes of death, global, 275 Cayman Islands, 204 Cello, Jeronimo, 248 Cells differentiation, 170 division, 169, 214 eukaryotic vs. prokaryotic, 167 somatic vs. germline, 165, 175, 178, 183, 205, 214, 215 Censorship, 254, 256 Centers for Disease Control and Prevention (CDC), 137, 153, 173, 188, 213, 243–245, 250, 273, 277, 278, 280, 284, 291, 292 Certainty, 6, 17, 18, 79, 113, 125 scientific, 2, 78, 79 Charles Darwin, 216 Charpentier, Emmanuelle, 176, 177 Chemical regulation, 78, 96, 129, 130, 140, 142, 143, 146, 148, 149, 157–159, 308 Chemical weapons, 241, 243 Chemie-Grunenthal, 133 Children, 6, 77, 90, 98, 100, 101, 104, 117, 132, 139, 143–147, 151, 157, 179, 205, 207, 209–215, 217–226, 228, 273, 280, 282, 284, 285, 291, 292, 309, 311, 312 development, 214, 218 education, 213, 284, 285 impacts of lockdowns on, 282, 284, 285 welfare of, 255 Chimeric antigen receptor (CAR) T-cell therapy, 179, 205, 219 China, 271, 272, 279, 293 Chloroquine, 287 Chromosomes, 167, 169, 174, 175, 177, 201, 211 Classified information, 254 Classified research, 253, 254 Classify, 263 Clean Air Act (CAA), 120, 144 Clean Water Act, 120, 145 Climate change, 3, 28, 39, 67, 77, 91, 96, 308 Clinical testing, 152–154, 287 Clinical trial registration, 134, 156, 309

Clinical trials controlled, 35 Phase I, 130, 132, 151, 152, 208, 209, 221, 223, 288, 289, 311 Phase II, 131, 288 Phase III, 131, 152, 154, 289 phases of, 224 uncontrolled, 131, 154, 288 Clostridium botulinum, 252 Clustered regularly interspaced short palindromic repeats (CRISPR), 176–178, 180, 214, 220, 221 Cochlear implant, 216, 226 Coherence, 50 Combined Immunodeficiency Disease (SCID), 178 Community(ies), 262, 308, 313, 315 Community engagement, 121, 122, 204, 310 Compassionate use, 132 Competence, 117 Compromise, 116, 124, 134, 155, 180, 287, 288, 314, 316 Confirmation, 32, 80, 81, 315 Consensus, 38, 88, 117, 292 Consistency, 5, 20, 87–89, 91, 100, 147, 150, 151, 156, 190, 195, 196, 203, 220, 224, 227, 261, 265, 282, 285–287, 293, 295, 306, 308–310, 313 Consists of RNA, 289 Consumer education, 196 Contact tracing, 294 Contextual factors, 307 Controlled Substances Act (CSA), 135 Convention on Biological Diversity (CBD), 184 Convergence theorems, 30 Cost-benefit analysis, 34, 94, 106, 115, 125, 148, 158, 306 Courage, 58, 59 COVID-19, 189, 214, 263, 271, 312 economic impacts, 260, 279, 283 human challenge studies, 289 preparedness, 277, 293, 294 public health impacts, 283 tests, 132, 154, 155, 273, 286, 289 treatments, 132, 152, 154, 286, 288, 291 vaccines, 132, 153, 154, 272, 279, 286, 288–291 COVID-19 pandemic, 260 Credibility, 80, 154 Credible, 288 Crick, Francis, 166 Culture, 70, 143, 177, 179, 207, 289

Cystic fibrosis, 177, 211, 222

D Daschle, Tom, 244 Data and safety monitoring board (DSMB), 311 Deafness, 207, 211, 215, 216, 222, 226 De Beauvoir, Simone, 69 De Borda, Jean-Charles, 41 Decision context of, 2, 7, 10, 27, 314, 315 options, 5, 17, 38, 94, 112, 125, 159, 187 outcomes, 11, 17–19, 22–24, 30, 38, 44, 76, 80, 87, 91, 92, 94, 97, 112, 125, 126, 222, 257, 306 practical, 23, 95, 97, 121, 307 Decision-making business, 16, 26, 34, 38 capacity, 116, 117 conditions, 2, 10, 12, 35, 158, 159, 307 context, 1, 2, 16 democratic, 38, 39, 88, 119, 123, 196, 307 expert, 12, 31, 39 governmental, 16, 34, 38, 119, 121, 123, 124 group, 6, 8, 9, 12, 16, 21, 38, 111, 113, 118, 306 guidance, 16, 84, 85, 89, 98, 148 individual, 16, 21, 38, 115, 126 informed, 75, 121, 288, 309 medical, 33, 205 moral, 54, 59, 76 for others, 2, 116, 126, 307 procedures, 6, 10, 38, 112, 125 rules, 10, 16, 22, 24, 25, 31, 32, 38, 80, 84, 91, 104–106, 112, 114, 115, 124, 126, 148, 153, 158, 305–307, 314 strategies, 10, 34, 305, 307 under certainty, 18 under ignorance, 18, 21, 22, 24, 25, 31, 80, 84, 91, 104, 112, 114, 115, 307, 314 under risk, 21–25, 28, 31, 32, 34, 35, 112, 114, 115 Decision-making problem, single solution, 314 Decision-making rules, choosing, 124 Decisions under ignorance, 306 Decisions under risk, 306 Decision theory, 11, 12, 15–18, 23, 33, 35, 38–40, 44, 49, 51, 66, 75, 93, 95, 104, 105, 111, 112, 114, 306, 315

324 normative vs. descriptive, 15 De Condorcet, Nicolas, 39 Delegation of authority, 119 Democracy arguments for, 38, 118 deliberative, 119, 121, 123 direct, 38, 119, 148 problems with, 38, 44, 119–122 representational, 38, 119, 148 Democratic, 307, 310 Dengue, 201 Dengue fever, 124, 181 Deontology, 51, 54, 95, 313 Deoxyribonucleic acid (DNA) coding vs. non-coding, 167 expression, 167–169 replication, 167 transcription, 168 translation, 168 Department of Health and Human Services (DHHS), 140, 245–249, 253, 286, 294 Dialogue, 58, 123, 248 Dichlorodiphenyltrichloroethane (DDT), 139, 140, 149 Dictatorship, 38, 41, 44, 118, 119 Dietary Supplement Health and Education Act (DSHEA), 136 Dietary supplements, 129, 130, 136, 137, 159, 227 Difference, principle of, 66 Dignity, 185 Dignity, human, 118, 148, 185, 219, 306 Diploid vs. haploid cells, 169 Disability, 116, 117, 207, 212, 214–216, 292 Disaster preparedness, 293 Discrimination, 216 Disease, definition of, 202 Divine command theory, 51 Double-effect, doctrine of, 60, 112 Doudna, Jennifer, 176, 177 Drug(s), 309 abuse, 135, 275, 280 access, 131, 132, 135, 152–155, 287 control laws, 132, 135 efficacy, 76, 130, 134 emergency use, 131 expanded access to, 131, 132 experimental, 131, 132, 152, 153 labelling, 132 life-saving, 131, 152, 154, 155 long-term studies of, 134, 155, 309

Index regulation, 12, 18, 35, 118, 122, 130, 133, 140, 141, 146, 152, 159, 309 research and development, 257, 262 risks/benefits, 4, 6, 31, 35, 76, 125, 130, 132, 134, 150, 153, 155, 158, 215 safety, 76, 130, 131, 133, 134, 154, 155, 309 testing, 130, 132, 150, 151, 197, 287, 295, 308. See also Pharmaceuticals Dual use research, 311 of concern, 248, 249, 252, 256, 259, 260, 262, 263, 265, 266 ethical dilemmas use, 256 funding of, 252, 262, 263 journal policies, 248, 264 legal issues, 253 oversight of, 252 publication of, 248, 249, 253, 255, 265 risks/benefits, 252, 257, 260, 263, 312 Due process, 122, 152 Duties conflicts of, 56, 85 perfect vs. imperfect, 56, 57, 68, 85 vs. rights, 62, 69 Dworkin, Ronald, 67 Dynamite, 241

E Ebola, 244 Ebola epidemic, 153, 154 Economic depression, 279 Economic development, 49, 52, 67, 68, 70, 89, 90, 104, 105, 112, 129, 148, 194, 305, 306 Economic growth, 23, 70, 150, 273, 277, 312 Economic recession, 279, 283 Economics, 26, 315 behavioral, 15, 29 Economic theory, 37 Economy, 12, 18–21, 33, 64, 77, 84, 91, 105, 146, 149, 150, 158, 256, 260, 279, 281, 283–286, 295, 308 Ecosystems, 68, 70, 83, 90, 94, 185, 192, 193, 201, 306 Editors, 249, 251, 252, 263–265 Education, 49, 54, 64, 65, 69, 90, 98, 120, 121, 146, 150, 206, 215, 227, 241, 279, 280, 285, 294, 305, 308, 312 Egalitarianism, 11, 66, 120 Electronic cigarettes (e-cigarettes), 138, 156, 157, 309 regulation of, 12, 156, 159, 309

Index risks/benefits, 156 Embryo, 165, 170, 175, 180, 199, 200, 203, 205, 206, 210, 211, 214, 221, 228 Embryo research, ethics of, 210 Emergency preparedness, 275, 277 Emergency response, 276 ethical issues, 286 Emergency use authorization (EUA), 132, 152–154, 286, 287, 289, 290, 308 Emotion, 58, 215, 294 Endocrine disrupting compound, 142 Engagement, 223, 308, 315 Engineered nanomaterials (ENM), 143, 144 Enhancement, 311 Environmental ethics, 11, 67, 68 Environmental health, 3, 51, 76, 84, 97, 118, 143 Environmentalism, 68, 90, 193 Environmental protection, 112 Environmental Protection Agency (EPA), 34, 36, 51, 77, 95, 96, 120, 122, 124, 139–145, 149, 151, 183, 197 Epidemics, 97, 242, 260, 274, 293, 294, 315 Epigenetics, 168, 174 Epistemic responsibility, 87, 88, 91, 100, 150, 151, 156, 157, 190, 196, 203, 220, 224, 261, 263, 265, 282, 286, 287, 293, 295, 306, 308, 313 Epistemic uncertainty, 12, 23, 79, 113, 115, 125, 126, 305. See also Scientific, uncertainty Epistemology, 12, 17, 23, 75, 79, 95, 113, 115, 125, 126, 305 Equality, 118, 121, 148, 292, 293 principle of, 66 Ethics, 22, 23, 60, 62, 68, 69, 131, 203, 207. See also Moral; Morality European, 274 European Chemicals Agency (ECA), 140, 141 European Commission, 2, 76–78, 80, 83, 86–88, 98, 184 European Medicines Agency, 130 European Union (EU), 96, 119, 120, 130, 141, 149, 150, 181, 184, 196 Evidence amount needed for approval, 155 empirical, 26, 31, 81, 282, 315 level or degree of, 32, 92, 114, 115 minimal standard of, 91 standards of, 80, 81, 306 Expected utility theory (EUT), 32–35, 37, 38, 51, 54, 76, 84, 91, 94, 104–106,

114, 115, 124–126, 148, 153, 158, 189, 194, 203, 222, 257, 260, 265, 282, 306, 307, 314–316 Experiments of concern, 248 Expert committees, 120, 248, 262 Expertise, 32, 87, 88, 103, 123, 151, 258, 263 Exploitation, 227 Export controls, 255

F Fairness, 5, 42, 61, 65, 87, 91, 93, 100, 101, 104, 119, 121, 150, 151, 153, 154, 156, 158, 190, 196, 203, 204, 220, 224, 261–263, 282, 284, 285, 287, 291, 293, 295, 306–308, 310, 313, 314 distributive, 87, 88, 151, 157, 285, 308 procedural, 87, 88, 105, 151, 157, 204, 285, 295, 308, 310, 312, 313, 315. See also Justice Family, 38, 70, 98, 100, 101, 106, 117, 177, 213, 215, 223–225, 292 Famine, 60, 82, 83, 90 Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), 120, 139 Feminism, 69 Feminist ethics, 11, 69 Fertility, 201 Fertilization, 169, 184, 215, 218, 219 Field trials, 310 Fink, Gerald, 248, 282 Fink Report, 248 Flame retardants, 97, 142, 146 Food and Drug Administration (FDA) approval, 131–133, 136, 155, 183 auditing, 132 MedWatch, 132 post-approval monitoring, 132 warning, 132, 133 Food Quality Protection Act (FQPA), 139 Fort Detrick, MD, 244 Fouchier, Ron, 250–252, 255, 257–260, 263, 264 Franklin, Rosalind, 166 Freedom of Information Act (FOIA), 255, 264 Funding agencies, 249, 252, 262, 263, 265, 311, 312 Funding of research, 134, 175, 219, 249, 253, 255–257, 260–262, 264, 312

326 G Gain of function experiments, 249, 252, 255, 258–263, 282 Galston, Arthur, 241 Gametes, 169, 212 Game theory, 16 Gelsinger, Jesse, 208, 209 Gene drive organisms, 182 Gene edited babies, 220 Gene editing, 177, 180, 203, 206, 214, 221 off-target effects, 177, 180 risks/benefits, 203 Genes, 165, 167, 169, 172–174, 176–178, 185, 191, 192, 199–203, 210–213, 215, 216, 221, 225, 250, 252, 271 Gene therapy, 178, 183, 184, 198, 204–208, 311 in vivo vs. ex vivo, 178, 208 regulation of, 182 risks/benefits, 208 Genetically engineered monkeys, 180, 203, 208 Genetically modified (GM) meat and animal products, 181, 183, 199, 200, 202, 203, 310 mice, 177, 191, 204 mosquitoes, 124 pigs, 179 plants, 182, 186, 190, 191, 196, 202 salmon, 181, 200, 201, 212 Genetically modified (GM) animals chimeras, 202, 203, 310 risks/benefits, 199, 200, 203, 310 unnaturalness, 202 Genetically modified (GM) crops bans, 196 cross-fertilization, 192 democracy, 196 environmental risks, 181, 192, 194, 195, 197 fear of, 194 horizontal gene transfer, 192, 201 invasiveness, 192, 193, 195, 197 public opinion, 181, 194, 196 regulation, 310 risks/benefits, 192–197, 202 unnaturalness, 194 Genetically modified (GM) foods labelling, 196–198, 310 risks/benefits, 181, 190, 191, 194, 195, 310 safety testing, 197

Index substantial equivalence standard for approval, 197 Genetically modified (GM) microbes, 187– 190 risks/benefits, 177 Genetically modified organisms (GMO), 18, 118, 124, 181, 197 Genetic disease or condition, serious, 221, 222, 226 Genetic diseases, costs of, 213 Genetic diversity, 215 Genetic engineering applications of, 177, 186 of animals, 165, 181, 185, 198–200, 202, 203, 221, 310 of microbes, 165, 177, 184, 187, 189, 190, 228, 310 of microbes, 187 of plants, 165, 181, 185, 190, 310 playing God objection, 185, 186, 219 public opinion of, 181, 196 regulation, 182, 184, 186, 310 slippery slope objection, 186 somatic vs. germline, 198, 205, 208, 209, 212, 214, 220, 221, 224, 226, 311 technical problems, 175 Genetic enhancement, 205–207, 212–214, 217, 218, 226, 227 Genetic modification of viruses, 180, 207 Genetic testing, 219 Genetic tests, 155 Genetic therapy vs. genetic enhancement, 205, 207 Geneva Protocol, 243 Genome, 165, 167, 169, 174–177, 183, 184, 186, 207, 211, 212, 214, 218, 219, 221, 222, 261, 263, 289 human, 1, 3, 167, 174, 180, 184, 187, 205, 219 Genotypes vs. phenotypes, 172, 174, 202 Germany, 217 Germ cells vs. somatic cells, 175, 205 Germline genetic engineering (GGE) efficacy, 210, 221, 223, 224, 227, 228 moratorium, 220, 223, 227, 228 risks/benefits of, 220, 226, 228 safety, 210, 221, 223, 224, 227, 228 types of, 209, 210, 215, 220, 227 Gilbert, Walter, 174 Gilligan, Carol, 69 Glyphosate, 140, 146, 180, 192, 193 Golden rice, 180, 191 Goodness, 58

Good will, 57 Government, 1, 3–5, 9, 23, 26, 34, 38, 44, 63, 64, 76, 78, 88, 89, 94, 96, 104, 118–123, 125, 126, 132, 134, 137, 138, 145, 146, 148, 151, 152, 157, 190, 191, 219, 242, 248, 249, 252–255, 262, 264, 272, 273, 277–283, 285, 286, 289, 290, 293–295, 305, 312–314 Greenpeace, 191 Gross domestic product (GDP), 279, 280

H Habitats, 68, 70, 94, 185, 192, 193, 306 Happiness, 49, 51–54, 57, 60, 67, 70, 94, 306 Harm catastrophic, 82, 85 economic, 82 environmental, 82, 203, 310 to human health, 78, 82, 90, 309 irreversible, 79, 82 plausible, 83, 90–93, 100, 103, 315 possible, 1, 3, 76, 78, 80, 82, 93, 94, 98, 101, 102, 104–106, 194, 227, 307, 315 reduction, 194 serious, 12, 80, 82, 83, 85, 89, 91–93, 106 Hartzell-Nichols, Lauren, 2, 9, 77, 85, 86, 308 Hawking, Stephen, 79, 308 Hazard, 3, 140, 145, 175, 188 Health, 280 definition of, 273 environmental, 1, 3, 11, 12, 51, 76, 82, 84, 88, 89, 94, 97, 98, 118, 119, 122, 124, 126, 143, 184, 228 individual, 52, 306 public, 1, 2, 4, 7, 11, 12, 23, 38, 49, 51, 52, 70, 76–78, 82–84, 88–91, 94, 97, 98, 102, 104, 105, 118, 119, 122, 124– 126, 132, 134, 136–141, 143–156, 165, 181, 184, 190, 192, 194, 195, 199, 201, 203, 204, 213, 214, 228, 241, 245, 248– 252, 256, 257, 260, 262, 265, 273, 274, 280–286, 295, 305, 306, 308–312, 314, 315 Health insurance, 219 Healthy volunteers in research, 288, 313 He, Jiankui, 179, 214 Hemoglobin, 167, 173 Herd immunity, 272 Heterozygous, 173 Heuristic(s)

327 anchoring, 29 availability, 30, 294 HIV/AIDS, 131, 155, 274, 275 Hobbes, Thomas, 55 Homozygous, 173, 211 Honesty, 59 Honeybees, 193, 204 Hormones, 130, 142, 167, 200, 241 Hospitals, 283, 290 H5N1 avian flu virus, 249 H5N1 gain of function experiments, 257, 258 Human challenge studies, 289, 313 Human disasters, 274 Human enhancement, 226 Human experimentation, 150 Human Fertilisation and Embryology Authority (HFEA), 184 Human, function of, 58, 60, 227 Human Genome Project, 219 Human immunodeficiency virus (HIV), 29, 30, 131, 155, 180, 188, 207, 212, 213, 226, 274, 275 Human life, 36, 37, 52, 53, 56, 60–62, 115, 241, 273, 295, 312 price of, 36 Human nature, 60, 61 Human rights, 49, 50, 61, 82, 83, 85, 87, 95, 119, 213, 217, 223, 256, 273, 277, 279, 281, 283, 305, 312–314 Hume, David, 61 Hunger, 181, 185, 310 Huntington’s disease, 211, 217 Hurricane Harvey, 274 Hurricanes, 96 Hydroxychloroquine, 287

I Ideal decision-maker, 148 Immune response, 144, 287, 289 Immune system, 142, 170, 177, 179, 180, 207, 212, 213, 219, 226, 289 Immunity, 177, 180, 200, 212, 242, 250, 272 Implausible outcomes, 23, 24, 80, 93 Incoherence, 84, 85, 89, 92, 93, 222 Income, 7, 36, 52, 54, 65–67, 121, 219, 279, 280, 283, 284 relationship to health, 280 Inconsistency, 85, 88, 206 Indecisiveness, 84, 106 Indifference, principle of, 21, 22, 24, 32, 54, 67, 94, 104, 114

Industrial agriculture, 193, 195 Industry, 39, 44, 122, 123, 132, 133, 136, 137, 140, 141, 143, 145, 146, 150, 151, 192, 194, 214, 241, 279, 284, 305, 308 Inequality, 54, 65, 66, 217–219, 225, 292, 311 Influenza, 261, 284 Informed consent, 209, 223, 311 Informed decision-making, 121, 288, 309 Institutional Animal Care and Use Committees (IACUC), 199, 200 Institutional biosafety committees, 175 Institutional Review Board (IRB), 131, 183, 208, 222, 223, 289 Insulin production, 174, 175 Interest groups, 122, 123 Interests, 39, 51, 52, 54, 59, 62, 67, 68, 88, 98, 103, 116–118, 121, 123, 129, 142, 143, 153, 154, 156, 181, 200, 204, 220, 224, 247, 257, 260, 288, 289, 294 Intergovernmental Panel on Climate Change (IPCC), 4, 28, 113 Italy, 290 Ivins, Bruce, 245

Knowledge, 3, 7, 8, 12, 23–25, 33, 60, 70, 76, 79, 87, 88, 112–114, 117, 123, 129, 151, 186, 199, 209, 215, 228, 241, 249, 251, 261, 263, 265, 286, 289, 305, 306, 313–315 of probabilities, 6, 25, 32, 112 scientific, 78, 88, 151, 262

J Jackson, Ronald, 247, 262 Jaggar, Alison, 69 Japan, 293 Japanese biological weapons research, 243 Journals, 312 Justice, 293, 312 distributive, 53, 54, 65–67 principles of, 66, 67 procedural, 65, 119 social, 36, 37, 52, 70, 306. See also Fairness

L Labelling of consumer products, 184 of drugs, 132 of genetically modified foods, 196–198, 310 Laboratory acquired infection (LAI), 188, 189, 259 onward transmission, 189, 259 Laboratory animals, 139, 142, 143, 177, 191, 199–202 Legislation, 121, 138, 148, 245 Legislators, 285 Lenz, Widukind, 133 Libertarianism, 64 Liberty, 63, 70 Life, 8, 18, 22, 35–37, 52, 54, 57–64, 66, 70, 81, 94, 105, 116–118, 125, 153, 171, 180, 181, 186, 213, 218, 252, 280, 306 Life-threatening diseases, 9, 131, 146, 151, 153, 188, 287, 288, 309 Liu, Yifan, 249, 261 Lockdowns, 279, 281, 312, 313 impacts of, 279, 281, 284 risks/benefits, 281, 283–285, 312, 313 Locke, John, 55, 63 Logic, 15, 89 London Fog, 144 Loss-aversion, 16 Low probability, catastrophic events, 82, 259 Lying, 56

K Kaffa, siege of, 242, 243 Kantian/Kantianism, 11, 51, 54, 56, 58, 63, 291 Kant, Immanuel, 36, 54–57 Kawaoka, Yoshihiro, 250–252, 255, 257, 258 Kelsey, Frances, 133 Key Haven, FLA, 181, 182, 204 Keynes, John Maynard, 27 Kingdom of Ends, 55

M MacKinnon, Catharine, 67 Malaria, 82, 139, 174, 181–183, 201, 204, 207, 213, 216, 226, 274, 275 Malnutrition, 83, 91, 181, 185, 275 Malpractice law, 132 Marx, Karl, 66 Mask-wearing, 278 Mathematics, 26, 31, 79, 254 Maximax rule, 19, 20, 24 Maximin rule, 18, 20, 24, 90

Index McBride, William, 133 Measles, 272 Medical devices, 32, 120, 130, 142, 154, 155, 286 Meiosis, 169, 171 Mendel, Gregor, 172–174 Mendel’s laws of inheritance, 172 Merck, 133 Mesothelioma, 141, 142 Microtubules, 167, 169 Middle Eastern respiratory syndrome (MERS), 252 Middle East Respiratory Syndrome Coronavirus (MERS-CoV), 293 Mill, Harriet Taylor, 69 Mill, John Stuart, 51, 63 Minimax regret rule, 19, 20, 24, 54, 67, 93, 94, 104, 114 Mitochondrial DNA, 167 Mitosis, 169, 170 Models, 28, 77, 80, 81, 91, 153, 166, 177, 191, 199, 219, 258–260, 280, 282 mathematical/statistical, 27, 28, 79, 81, 189, 249, 261, 265, 282, 315 problems with, 27 Monarch butterfly, 193 Mongols, 242 Monogenic disorders, 210–212, 220–225, 227, 228, 311 vs. polygenic disorders, 211, 224, 228 Moore, G.E., 61 Moral absolutism, 56 conflicts, 11, 59, 64, 70, 84, 292 decision-making, 11, 44, 54, 59, 76, 111 philosophy, 54, 315 pluralism, 12, 104 principle, 5, 50, 53, 58, 63, 68, 83, 94, 95, 307 value, 11, 23, 35, 37, 44, 49, 55, 57, 59, 61, 62, 68, 69, 75, 76, 93, 104, 111, 116, 198, 199, 210, 226, 261, 305, 307 virtue, 58. See also Decision-making, moral Moral confusion, 202 Morality, 53, 55, 56, 60, 62–64, 69, 185, 243. See also Ethics Moral status, 210 Moral status of animals, 198 Moral theory(ies), 11, 12, 49–51, 54, 57, 59, 62–65, 67–70, 75, 84, 93–95, 105, 111, 112, 114, 116, 125, 126, 306 deontological, 51, 95

teleological, 51 Moratoria, 3, 147, 227 Moratorium, 175, 311 Morbidity, 37, 125, 131, 158, 279, 283 Mosaicism, 214 Mosquito(es), 310 Aedes aegypti, 181, 201 Anopheles, 182, 201 genetically modified, 124, 181, 183, 201, 203, 204 Mousepox, 247, 262 Müller, Paul, 139 Munthe, Christian, 2–4, 6, 24, 76, 80, 82–86, 90, 92, 94, 95, 97, 98, 273, 308 Murray, Montague, 141 Mutagen, 149, 150 Mutation, 149, 176, 178, 180, 181, 185, 207, 208, 210, 214, 215, 223, 225, 249–251, 256, 257, 271 Mutual understanding, 123

N Nanomaterials, 80, 129, 143–145, 159 National Academies of Sciences, Engineering, and Medicine (NASEM), 123, 182, 185, 186, 191–193, 196, 201, 204, 210–223, 225, 226, 228, 315 National Institute of Environmental Health Sciences (NIEHS), 143 National Institutes of Health (NIH), 171, 175, 188, 220, 249, 252, 260, 262, 263, 265, 289 National Research Council (NRC), 3, 34, 76, 187, 201, 241, 247, 248, 255, 256, 258, 260 National Science Advisory Board for Biosecurity (NSABB), 248, 249, 251, 252, 258, 263, 265 National security, 256 National Security Decision Directive 189, 254 National Toxicology Program (NTP), 140, 142, 143 Native Americans, 68, 242 Natural, 194 Natural disasters, 51, 96, 201, 274, 275, 305 Natural law, 11, 60–62, 76, 94, 95, 185 Natural rights, 62–66, 85, 95 Natural selection, 173, 174 Nature, scientific journal, 193, 249–252 Nazi, 217

330 New England Journal of Medicine, 133, 291 1918 pandemic flu, 245, 249, 272, 274 Nobel, Alfred, 241 Nobel Prize, 166 Nobel Prize Winners, 181, 191 Noddings, Nel, 69 Non-human species, 28, 67, 68, 70, 306 Non-steroidal anti-inflammatory drug (NSAID), 133 Normal functioning, 206, 207 Normative vs. descriptive, 15 Normativity, 4, 5, 15, 30, 50, 61, 148, 306 Nozick, Robert, 55, 63, 64, 66, 212 O Obligation. See Duties, 5 Occupational Safety and Health Administration (OSHA), 34, 122, 124, 145 Off-label prescribing, 132, 155, 309 Oligarchy, 38, 44, 118 Opportunity, 19, 24, 49, 54, 65–67, 87, 94, 98–101, 103, 121, 212, 215, 218, 223, 228, 284, 285 fair equality of, 66 Optimism-pessimism rule, 20, 21, 24, 104, 114 Options, 3, 9, 11, 12, 17, 19, 21, 23, 34, 38, 39, 59, 65, 84–86, 92, 94, 99, 100, 102, 106, 111, 149, 153, 154, 156, 158, 189, 190, 194, 199, 211, 220, 224, 260, 264, 282, 288, 305, 309 Oregon salmonella attacks, 247 Organs, 310 Organ transplantation, 179, 202, 292 Original Position, 66 Oxitec, 181–183, 201, 203, 204 P Pandemics, 96, 251, 260, 271, 312 planning, 294 policy, 186, 252, 273, 277, 281–283, 293, 294, 312 preparedness, 277, 295 Parental decision-making, 118 Parents, 211, 217, 311 Passaging, 251, 295 Pathogen, 188–190, 192, 201, 244, 245, 248, 249, 252, 258–263, 281, 289, 295, 310, 313 lethality, 245 transmissibility, 248, 252, 258–260 Patients’ rights, 152, 153, 308

Perkins Gilman, Charlotte, 69 Persistent organic pollutant, 140, 149 Personal protective equipment, 293, 294 Pesticides, 3, 36, 77, 78, 118, 120, 139–141, 144, 145, 149, 150, 157–159, 180, 181, 183, 192–195, 197, 308, 309, 315 Peterson, Martin, 2, 15, 16, 18, 19, 21, 26, 33–35, 79, 83, 84, 94, 95 Pharmaceutical manufacturing, 177 Pharmaceuticals, 4, 96, 97, 125, 130, 133, 153, 250, 257, 260, 261 Philosophy, 315 Plague, 37, 242–244, 274 Plasmids, 174 Plato, 58, 120 Plausibility, 80, 81, 89, 106, 154, 213, 306, 315 Plausible, 215, 218, 257, 271 Pluralism, 52, 70, 75, 104 Policy, 1–3, 18, 39, 64, 67, 76, 77, 79, 84, 88, 90, 97, 98, 102, 103, 106, 115, 121, 125, 126, 129, 138, 142, 146, 148, 149, 151–154, 156, 157, 165, 189, 194, 197, 205, 212, 217, 218, 224, 228, 242, 252, 253, 256, 260, 262–264, 273, 277, 279, 281–283, 285, 287, 288, 290, 294, 308–312 Policymakers, 34, 51, 76–78, 104, 123, 136, 142, 145, 196, 242, 258, 265, 282–284, 292, 309, 310, 313 Political/Politics, 1, 2, 5, 23, 36–39, 41, 44, 54, 63–65, 69, 76, 88, 111, 118, 119, 121–123, 146, 148, 196, 216, 227, 261, 281, 285, 294, 295, 313, 315 Political philosophy, 65 Politicians, 77, 148, 287 Polygenic diseases, 212, 213, 225, 311 Polygenic traits, 174, 220 Positional vote counting, 41, 42 Post-hoc review of policies, 286, 313 Post-marketing studies, 134 Potential pandemic pathogen (PPP), 252, 253, 259, 262, 263 Poverty, 54, 82, 83, 90, 280 Power, imbalance of, 121 Practical wisdom, 59, 94 Precaution, 1, 6, 8, 11, 12, 52, 56, 59–61, 64, 70, 75, 78, 85, 86, 89, 90, 96, 97, 100, 101, 103, 111, 115, 118, 122–126, 148, 150, 156, 186, 189, 208, 213, 221, 305, 306, 308, 309, 311 Precautionary measures, 293, 314

Index Precautionary principle (PP) alternative interpretations of, 307 applications, 1, 308 considerations for using, 126 criticisms, 82, 84, 306 decision tree, 98, 99 definitions, 78, 79 history, 306 interpretations, 95, 97 relationship to decision theory, 12, 93, 306 relationship to moral theory, 12, 93, 306 usefulness of, 104, 112, 158 Precautionary reasoning, 1, 2, 4, 10–12, 19, 25, 30, 37, 49, 62, 65, 67, 68, 70, 75, 79, 91, 97, 98, 106, 111, 112, 146, 305–307, 313–316 Precision, 79, 314 Preferences, 10, 17, 37, 39, 40, 43, 52, 54, 116, 117, 119, 123 aggregation, 39 well-ordered, 17 Preimplantation genetic testing (PIGT), 211, 212, 223, 225 Pre-market testing, 149, 150, 196 Prenatal genetic testing (PNGT), 210–212, 223, 225 Primary goods, 67 Principle of indifference, 21, 22, 24, 32, 54, 67, 94, 104, 114 Priority-setting, 64 Prisoner’s Dilemma, 16 Probability accuracy, 222 axioms, 15, 28 classical view, 26 conditional, 28, 29 estimates, 27–31, 35, 39, 54, 125, 149, 189, 257–260, 282 interpretations of, 26–28, 30–32 knowledge of, 6, 25, 32, 112, 314 mathematical, 26, 314 precision, 222 propensity theory, 27 statistical, 26–28, 31 subjective, 28, 30, 81, 315 vs. uncertainty, 112, 115. See also Probability, mathematical Proceedings of the National Academy of Sciences (PNAS), 247, 249 Procreation, 60, 70 Professional boards, 132, 224

Progress, 2, 89, 90, 175, 176, 208, 248, 256, 306 Proof, inductive vs. deductive, 79 Proportionality, 60, 62, 87, 91, 93, 100, 101, 103, 104, 150, 153–156, 158, 189, 190, 194–196, 203, 220–222, 261, 282, 283, 287, 295, 306, 308–310, 312–314 Proteins, 130, 165, 167, 168, 176, 181, 191, 208, 249, 260 The public, 308, 313 Public accountability, 88, 120 Publication, 263 Public education, 49, 119, 147, 223, 228, 263 Public engagement, 119, 123, 263, 285, 312 Public health, 1, 2, 4, 7, 11, 12, 23, 38, 49, 52, 70, 76–78, 82, 83, 88–91, 94, 97, 98, 102, 104, 105, 119, 122, 124–126, 136, 138–141, 144–154, 156, 165, 181, 184, 190, 192, 194, 195, 199, 201, 203, 204, 213, 214, 228, 241, 245, 248–252, 256, 257, 260, 262, 265, 273, 274, 280–286, 295, 305, 306, 308–312, 314, 315 Public health emergency, 12, 51, 87, 131, 153, 188, 273, 274, 277, 281, 282, 285–288, 290, 292, 293, 295, 308, 312, 313 Public opinion, 305 Public policy, 1, 2, 9, 12, 16, 26–28, 31, 35, 37, 39, 54, 105, 137, 203, 204, 241, 310, 315, 316 Public trust, 123, 292

R Rajneesh, Bhagwan Shree, 244 Randomized controlled trial (RCT), 22, 131, 152, 154, 287, 288, 308, 309 Rational, 306 Rational agent, 30, 55–57 Rationality instrumental, 5, 15, 16 ordering rules, 86, 87 Rationing, 291 Rawls, John, 11, 50, 52, 53, 55, 65–67, 70, 87, 88, 118, 119, 196, 261 Reasonable, 306 Reasonableness, 5, 6, 83, 86, 87, 89, 93, 104–106, 112, 156, 157, 187, 190, 195, 196, 199, 204, 210, 213, 220, 223, 261, 265, 282, 292, 295, 306, 314 vs. rationality, 5, 15

332 Receptors, 167, 249, 260 Recombinant DNA Advisory Committee (RAC), 175 Recombinant DNA technologies, 174 Redacted publication, 251, 252, 264, 312 Redaction, 264 Reflective equilibrium, 50, 65, 89 Registration, 96, 196, 308 of chemicals, 146, 150, 196 of pesticides, 139 of products, 147 Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), 96, 141, 149 Regulation(s) of dietary supplements, 137 of drugs, 18, 118, 140, 141, 146, 151, 159, 309 of electronic cigarettes, 12, 156, 157, 159, 309 of genetic engineering, 182, 184, 186, 308, 310 of medical products, 309, 314 permissive vs. restrictive, 150, 308 protective, 140, 146, 150 restrictive, 146 of toxic substances, 12, 77, 140, 141, 145, 146, 149, 150, 158, 308 Relationships, interpersonal and social, 6–8, 10, 112 Religion, 63, 68, 70, 122, 148, 185, 198, 254, 261 Remdesivir, 153, 286 Reproduction rate (R0 ), 189, 272 Reproductive decision-making, 189 Reproductive technology, 184, 207, 210, 213, 226, 314 Research, 308 Respect for humanity, 55 Responsibility, 39 epistemic, 87, 88, 91, 100, 150, 151, 156, 157, 190, 196, 203, 220, 224, 261, 263, 265, 282, 286, 287, 293, 295, 306, 308, 313 Restrictions on social gatherings, 54 Ribonucleic acid (RNA), 153, 165, 169, 177, 248, 249, 286, 289 vaccines, 153, 289 Rights conflicts of, 64, 85, 314 human, 49, 50, 61, 82, 83, 85, 87, 95, 119, 213, 217, 223, 273, 277, 279, 281, 283, 305, 312–314

Index moral vs. legal, 62, 63, 69, 152, 202 natural, 62–66, 85, 95 negative vs. positive, 63 property, 63, 70, 295 relation to duties, 69 Rigorous, 314 Rigorous testing, 152, 287 Rigor, scientific, 154, 288 Rio Declaration on Environment and Development, 2, 78 Risk aversion, 15, 16, 105, 114, 115 avoidance, 3, 82, 86, 90, 92, 97, 124, 150, 189, 194, 203, 220, 221, 260, 282, 305 management, 1, 4, 16, 76, 77, 79, 94, 105, 111, 141, 184, 187, 190, 196, 199, 287, 293, 305–307, 310 minimization, 3, 86, 92, 124, 150, 189, 190, 194, 203, 220, 223, 260, 282, 305 mitigation, 3, 86, 92, 124, 150, 189, 190, 194, 203, 220, 260, 282 neutrality, 10, 11, 105, 115 prevention, 1, 226 prohibition, 145, 195 taking, 4, 5, 9–11, 19, 57, 78, 82, 92, 96, 97, 115, 130, 134, 195, 215, 221, 313 tolerance, 6, 9, 112, 194, 305, 307, 314 Risk assessment, 3, 34, 77, 94, 151, 184, 197, 263 quantitative vs. qualitative, 37 Risks/benefits, distribution of, 65, 67, 88, 153, 285, 293, 312–314 Risks and benefits, 306 Rosengard, Ariella, 247 Roundup Ready Crops, 140, 192, 193 Rousseau, Jean Jacques, 55 Russia, 184, 272

S Safety, 3, 6, 8, 9, 22, 77, 78, 96, 101–104, 122, 129–136, 139, 141, 145–151, 154, 157, 183, 188, 191, 192, 194, 196, 197, 201, 209, 221, 228, 249, 287–289, 295, 305, 308, 311 gap, 77, 141, 150 Salmonella, 244, 258 Sandin, Per, 2, 76, 78, 79, 83, 91, 94, 95 Sanger, Frederick, 174 SARS-CoV-2, 271, 281, 285 School closures, 284 Science fiction, 216, 228

Science, scientific journal, 248–252 Scientific certainty, 2, 78, 79 evidence, 1, 2, 34, 39, 76–80, 88, 91, 92, 94, 97, 103, 104, 106, 125, 153, 156, 158, 222, 291 freedom, 256 journals, 120, 247, 263 method, 80, 81 openness, 256 progress, 256 proof, 79, 80 rigor, 79, 154, 220 uncertainty, 12, 23, 75, 78, 79, 89, 103, 105, 113, 115, 125, 126, 148, 156, 159, 165, 187, 189, 194, 203, 204, 228, 260, 265, 273, 282, 305–307 Scientific research, 12, 147, 170, 198, 203, 224, 241, 255, 256, 286, 305, 313 funding of, 249, 255 Security clearance, 254, 265 Select agents, 245–247, 255 Sen, Amartya, 43, 52, 67 Sensitivity analysis, 33 Severe acute respiratory syndrome (SARS), 189, 245, 252, 281, 293, 295 Sexual reproduction, 169 Shenzhen, China, 179 Sickle Cell Anemia (SCA), 173, 174, 211, 216, 222 Singapore, 278, 293, 294 Smallpox, 242, 244, 245, 247 Smoke, second-hand, 137, 138 Smoking, 1, 5, 8, 137, 138, 156, 195, 274, 290, 309 Social choices, 306 Social choice theory, 16, 38, 39, 44 Social contract, 55, 63 Social distancing, 277 Social justice, 36, 37, 52, 70, 306 Social relationships, 7, 8, 10, 49, 60, 67, 70, 94, 112, 306 Social welfare function (SWF), 39 Society, 2, 3, 5, 7, 49, 51, 55, 56, 63, 65–68, 81, 89–91, 96, 103, 106, 116, 118, 121, 123, 124, 130, 142, 146–148, 150, 154–156, 165, 175, 187, 189, 190, 199, 206, 208, 210, 213, 215, 221, 223–227, 241, 250, 256, 257, 261, 264, 273, 285, 286, 289, 291, 292, 309–313, 315 Socioeconomically disadvantaged, 54, 121, 284

Socrates, 58 Somatic genetic engineering (SGE), 205, 208, 209 regulation of, 208 Somatic genetic therapy, 205 Southern University of Science and Technology, 179, 180 South Korea, 277, 293, 294 Soviet secret bioweapons programs, 244 Soviet Union, 244, 256 Stakeholder engagement, 123, 285, 307, 312, 315 Stakeholders, 308, 313 State of nature, 63. See also Social contract Statistical models, 79, 81, 265, 282, 315 Statistics, 315 Stay-at-home orders, 277, 278, 282, 283, 285 Steel, Daniel, 2, 31, 32, 77, 80, 82, 84, 92, 95, 97, 114, 315 Steinem, Gloria, 69 Stem cell(s), 155 adult, 170 embryonic, 170 multipotent, 170 pluripotent, 170 Substituted judgment, 116, 117 Suicide, 280 Suicide, relationship to unemployment, 280 Susceptible populations, 77, 86, 151, 285 protection of, 151 Sweden, 280, 282, 283

T Taxation, 64, 156, 157, 309 Technocracy, 120 Technology, 1–3, 9, 26, 62, 76, 80, 89–91, 96, 101–103, 105, 120, 123, 129, 165, 174, 180, 185–187, 189, 190, 194, 196, 203, 210, 213, 214, 216, 218, 219, 224, 227, 248, 249, 306, 307, 310, 311 Terrorism, 82, 258, 274, 294 Thalidomide, 133 Theology, 51, 60 Timmons, Mark, 37, 49–54, 56, 58–62, 69, 70 Tissues, 125, 130, 140, 143, 144, 147, 150, 157, 170, 171, 179, 183–185, 199, 200, 202, 203, 310 Tobacco, 1, 9, 137, 138, 146, 147, 156, 195 products, 129, 130, 137, 138, 147, 157 Tobacco use, risks of, 137

Tort law, 5, 130, 138 Tort liability, 130 Toxicity, 57, 136, 143, 200, 208, 209 Toxic Substance Control Act (TSCA), 77, 95, 140–142, 149, 151 Toxic substances, 77, 95, 96, 140, 141, 146, 149–151, 158, 159, 308, 309 regulation of, 12, 77, 140, 141, 145, 146, 149, 150, 158, 308 Toxin, 180, 184, 188, 190, 193, 244–247, 249, 252, 255, 262, 263 Transformative technologies, 227 Transgenic mice, 177 Transhumanism, 206 Trans-humans, 217 Transitivity, 17, 40, 86 Transmitters, 167 Travel restrictions, 277, 282, 283 Triage, 51, 291 Truth, 55, 56, 58, 81, 147, 155, 309 U Uncertainty epistemological, 12, 17, 23, 75, 79, 113, 115, 125, 126, 305 moral, 12, 17, 23, 35, 59, 75, 89, 93, 103–105, 113, 115, 125, 126, 148, 159, 187, 194, 203, 204, 220, 228, 260, 265, 266, 273, 282, 305–307 scientific, 12, 23, 75, 78, 79, 103, 105, 113, 115, 125, 126, 148, 156, 159, 165, 187, 189, 194, 203, 204, 220, 228, 260, 265, 266, 273, 282, 305–307 value, 12, 22, 23, 35, 70, 93, 113, 115, 126, 305, 307 Unemployment, 90, 279–281, 283 United Nations Educational, Scientific and Cultural Organization (UNESCO), 220 United Nations (UN), 78, 145, 191, 244 United States, 4, 8, 38, 41, 44, 50, 54, 63, 69, 77, 96–98, 102, 118, 120–122, 124, 130, 132–142, 144, 145, 149, 150, 152, 155, 180–184, 189, 190, 193, 197, 199, 200, 206, 208, 213, 216, 217, 219, 242–244, 249, 252–255, 265, 271–274, 278–280, 283, 284, 291, 294 United States Army Medical Research Institute of Infectious Diseases (USAMRIID), 244, 245 Uniting and Strengthening America by Providing Appropriate Tools

Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act), 245 US Congress, 38, 77, 120, 210, 245, 248, 280 US Department of Agriculture (USDA), 139, 183, 197, 245 Utilitarian, 291 Utilitarianism, 11, 36, 43, 51–54, 56, 58, 65, 95, 291, 292, 313 act, 53, 54, 65, 84 rule, 53, 54, 63, 65, 95 Utility interpersonal comparisons, 52 measurement, 35, 93 ordinal vs. cardinal, 17, 22, 43 quantitative, 35 V Vaccine(s), 261 clinical trials, 152–154, 286, 288, 289, 291, 292 development, 257, 260, 262, 288–290, 312 priority, 291, 292 research, 257, 288 Vagueness, 82, 83 Value(s) balancing of, 307 conflict, 11, 52, 57, 60, 68, 70, 75, 113, 115, 125, 194, 195, 256, 265, 273, 291, 306, 312, 313 environmental, 68, 70, 306 incommensurability, 11, 70, 87, 306 individually-oriented, 306 intrinsic, 49, 52, 54, 55, 57, 59, 62, 68 neutrality, 5, 75 pluralism, 12, 52, 70, 75, 104 prioritization of, 11, 68, 75, 95, 113, 114, 194, 306, 307 social, 11, 15, 44, 70, 262, 306, 307 types of, 36 uncertainty, 12, 23, 35, 70, 113, 115, 126, 305 vs. facts, 61 Variola major vs. variola vaccinia virus, 247 Veil of Ignorance, 66, 67 Ventilator rationing, 283 Veracity, 80, 154 Vietnam, 241 Vioxx, 133, 134 Virtue ethics, 11, 58, 59, 69

Voting, 6, 10, 38, 39, 41–44, 106, 119, 121–123, 148 paradoxes, 39, 40, 44, 119 Vulnerability medical, 288 socioeconomic, 309 Vulnerable, 273, 284, 312 Vulnerable populations, 159, 309

W Walzer, Michael, 67 War, 274 Warfare, morality of, 243 Water-borne diseases, 145 Water pollution, 67, 96, 145, 195 Watson, James, 166 Wealth, 10, 49, 52–54, 64–67, 70, 121, 217, 218, 280 Wealth-health gradient, 280 Weaponization, 245 Wein, Lawrence, 249, 261

Welfare, 49, 52, 131, 152, 199, 200, 227, 255, 313 Wet markets, 271, 295 Willingness to pay (WTP), 36, 37, 115 Wingspread Statement, 78, 79, 82, 88, 90, 93, 96 Wollstonecraft, Mary, 69 World Climate Conference, 77 World Health Organization (WHO), 140, 153, 181, 251, 280, 281 World War I, 243 World War II, 56, 217, 243, 244, 274 Wuhan, China, 271, 277

X Xenotransplantation, 179

Z Zoonoses, 202 Zoonotic diseases, 295 Zygote, 169, 170, 211