
Ethics and Engineering

The world population is growing, yet we continue to pursue higher levels of well-being, and as a result, increasing energy demands and the destructive effects of climate change are just two of many major threats that we face. Engineers play an indispensable role in addressing these challenges, and whether they recognize it or not, in doing so they will inevitably encounter a whole range of ethical choices and dilemmas. This book examines and explains the ethical issues in engineering, showing how they affect assessment, design, sustainability, and globalization, and explores many recent examples including the Fukushima Daiichi nuclear disaster, Dieselgate, “naked scanners” at airports, and biofuel production. Detailed but accessible, the book will enable advanced engineering students and professional engineers to better identify and address the ethical problems in their practice.

Behnam Taebi is Professor of Energy and Climate Ethics at Delft University of Technology and Scientific Director of the University’s Safety & Security Institute. He is Co-Editor of The Ethics of Nuclear Energy (with Sabine Roeser, Cambridge University Press, 2015) and Co-Editor-in-Chief of the journal Science and Engineering Ethics.


Cambridge Applied Ethics

Titles published in this series

Ethics and Business, Kevin Gibson
Ethics and the Environment, Dale Jamieson
Ethics and Criminal Justice, John Kleinig
Ethics and Animals, Lori Gruen
Ethics and War, Steven P. Lee
The Ethics of Species, Ronald L. Sandler
Ethics and Science, Adam Briggle and Carl Mitcham
Ethics and Finance, John Hendry
Ethics and Law, W. Bradley Wendel
Ethics and Health Care, John C. Moskop
Ethics and the Media, second edition, Stephen J. A. Ward
Ethics and Engineering, Behnam Taebi


Ethics and Engineering: An Introduction

Behnam Taebi, Delft University of Technology


University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107177536
DOI: 10.1017/9781316822784

© Behnam Taebi 2021

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2021

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Names: Taebi, Behnam, author.
Title: Ethics and engineering : an introduction / Behnam Taebi, Delft University of Technology.
Description: Cambridge, United Kingdom ; New York, NY, USA : Cambridge University Press, 2021. | Series: Cambridge applied ethics | Includes bibliographical references and index.
Identifiers: LCCN 2021005353 (print) | LCCN 2021005354 (ebook) | ISBN 9781107177536 (hardback) | ISBN 9781316628409 (paperback) | ISBN 9781316822784 (epub)
Subjects: LCSH: Engineering ethics.
Classification: LCC TA157 .T28 2021 (print) | LCC TA157 (ebook) | DDC 174.962–dc23
LC record available at https://lccn.loc.gov/2021005353
LC ebook record available at https://lccn.loc.gov/2021005354

ISBN 978-1-107-17753-6 Hardback
ISBN 978-1-316-62840-9 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


For my parents, Simin and Rusbeh


Contents

Preface

1 Ethics and Engineering: An Ethics-Up-Front Approach
  1.1 The Dieselgate Scandal: Who Was Responsible?
  1.2 Three Biases about Engineering and Engineering Ethics
  1.3 Ethics and the Engineer
  1.4 Ethics and the Practice of Engineering: Ethics Up Front

Part I Assessment and Evaluation in Engineering

2 Risk Analysis and the Ethics of Technological Risk
  2.1 The Unfolding of the Fukushima Daiichi Nuclear Accident
  2.2 Assessing Technological Risk
  2.3 How Did the Fukushima Disaster Fall through the “Cracks” of Risk Assessment?
  2.4 What Risk Assessments Cannot Anticipate: “Normal Accidents”
  2.5 The Ethics of Risk: Social Acceptance versus Ethical Acceptability
  2.6 How to Deal with Uncertainties in Technological Risks
  2.7 Summary

3 Balancing Costs, Risks, Benefits, and Environmental Impacts
  3.1 The Grand Ouest Airport of Nantes: The End of Fifty Years of Controversy
  3.2 What Is SCBA?
  3.3 The Philosophical Roots of CBA: Utilitarianism
  3.4 SCBA: Why, When, and How?
  3.5 How to Deal with the Problems of CBA
  3.6 Summary

Part II Ethics and Engineering Design

4 Values in Design and Responsible Innovation
  4.1 The “Naked Scanner” as the Holy Grail of Airport Security
  4.2 Why Does Ethics Matter in Engineering Design?
  4.3 Designed for the Sake of Ethics: Persuasive Technology
  4.4 How Do Values Matter in Engineering Design?
  4.5 How to Systematically Design for Values
  4.6 How to Deal with Conflicting Values
  4.7 Responsible Research and Innovation
  4.8 Summary

5 Morality and the Machine
  5.1 The Uber Autonomous Car Accident in Arizona
  5.2 Crash Optimization and Programming Morality in the Machine
  5.3 The Ethics of AI
  5.4 “Trustworthy” and “Responsible” AI: Toward Meaningful Human Control
  5.5 Summary

Part III Engineering Ethics, Sustainability, and Globalization

6 Sustainability and Energy Ethics
  6.1 Biofuel and a “Silent Tsunami” in Guatemala
  6.2 There Is No Such Thing as “Sustainable Energy”!
  6.3 Sustainability as an Ethical Framework
  6.4 Sustainable Nuclear Energy: A Contradiction in Terms?
  6.5 Energy Ethics
  6.6 Summary

7 Engineering Ethics in the International Context: Globalize or Diversify?
  7.1 Earthquakes and Affordable Housing in Iran
  7.2 Moving beyond the Dilemmas of Western Engineers in Non-Western Countries
  7.3 Is Engineering Ethics a Western Phenomenon?
  7.4 The Need to Consider Engineering Ethics in the International Context
  7.5 Globalizing or Diversifying Engineering Ethics?
  7.6 Summary

Bibliography
Index

Preface

Tell an engineer that you teach engineering ethics and it will not be long before you hear one of the biases about engineering, about ethics, or about how the two fields relate to each other. Ironically, one persistent bias is the view that engineering is an unbiased and objective practice, because “engineers deal with facts and figures, so there is no place for opinions or ethics in the practice of engineering.” Another bias, fueled by much thought and discussion about ethics in relation to malpractices, is that an engineer “must be honest.” Sometimes you hear other affirmative responses that are no less upsetting: “Of course it is very important that engineers follow the law.” Unfortunately, these oversimplifications of the respective fields of engineering and ethics (and their relation to each other) have given rise to some dismissiveness of ethics from engineers. This is disconcerting because engineers and prospective engineers have an essential role to play in meeting the grand challenges of the twenty-first century and in shaping our future societies.

The world population is growing, yet we continue to pursue higher levels of well-being. Increasing energy demands and the problems resulting from climate change are only two of the many major challenges that humanity is facing in this century. Whether engineers recognize it or not, in addressing these challenges they will encounter a whole range of ethical choices and dilemmas that they will have to deal with. This certainly applies to various existing technologies and engineering practices, but more and more it applies to innovative and emerging technologies, such as technologies that employ artificial intelligence (AI).

With technological developments in general, and with innovations more specifically, ethics is sometimes seen merely as a moral brake giving a green or red light to development. This is why some engineers shun ethics in practice, because they often consider ethics to be the red light in terms of what they could develop through innovation. In this book, I aim to consider ethics in a nonbinary way, which could help ethics to steer technological developments at an early stage. This kind of nonbinary moral analysis is already the focus of current “ethics and engineering” studies, and it definitely requires an interdisciplinary approach, demanding the involvement of applied ethicists, but it is equally important to take engineers on board with such analyses. In other words, engineers should have a certain degree of knowledge and awareness of these ethical problems. I aim to contribute to that awareness through this book by introducing the concept of ethics up front. Rather than dwelling on ethical reflections in retrospect, this approach aims to facilitate the proactive involvement of engineers in addressing ethics in engineering practice and design. Immersing engineers in real-world ethical issues of engineering should enable them to identify the ethical problems at hand, and to choose different tools and frameworks to proactively address those problems in their practice of engineering.

Let me briefly mention what I mean by the practice of engineering and how that is reflected in the set-up of the book. In their normal practice, engineers employ and engage in a variety of activities, including the assessment and evaluation of risks, costs, and benefits, and the design and development of new artifacts and systems, for instance energy systems. In those activities, ethical values are expressed either explicitly or implicitly, and other ethically laden choices are being made, whether they are recognized or not.1 It is my intention to help engineers to become more sensitive to these ethical issues in their practice and to think more intentionally about them. After the introductory chapter (Chapter 1), in which I set the scene for the ethics-up-front approach, the book is arranged into three parts, each containing two chapters.

Part I is about the assessment methods used in engineering practice. In the first chapter here (Chapter 2), assessments of risk are discussed, along with questions about societal and ethical aspects of risk. The second chapter (Chapter 3) discusses Cost–Benefit Analysis (CBA) as a crucial assessment method for engineering projects, along with critiques of the method in terms of ethics. This chapter also discusses alternative methods for balancing the costs, benefits, and environmental impacts of engineering projects. While both chapters detail critiques of the proposed assessment methods, it is not my intention to dismiss any of these methods; that would be throwing the baby out with the bathwater. Instead, I have tried to highlight the possible shortcomings of these methods in order to help engineers to apply them in a more vigilant manner. Moreover, I have tried to present the adjustments, amendments, and so on that the literature suggests in order to deal with the ethical problems of each of these methods. In this part, I also briefly discuss ethical theories in relation to the engineering examples discussed.

The focus of Part II is on engineering design as a crucial aspect of engineering practice. The first chapter in this part (Chapter 4) covers methods for including ethical values in design, especially when conflicting values are encountered. At the same time, the text presents a broad notion of design – including, for instance, the design of an energy system – in such a way that it endorses important values. The second chapter (Chapter 5) addresses a specific type of design that has recently become very important, namely the question of designing morality into the machine. New applications based on AI present a host of unprecedented ethical issues.

Part III of the book covers issues pertaining to sustainability and globalization in engineering ethics. The first chapter (Chapter 6) focuses on the questions of sustainability in energy discussions, presenting a method of using sustainability as an ethical framework rather than as a yardstick to pass a judgment on an energy technology as, say, sustainable or unsustainable. Building on this nonbinary thinking, the book considers the broader questions of energy ethics, including important considerations of justice that may be involved in engineering practice. The second chapter of Part III, and the last chapter of the book (Chapter 7), focuses on being more inclusive in engineering ethics. While the literature already fully acknowledges the importance of thinking about the globalization of engineering and engineering ethics, the current focus in the literature is predominantly on the ethical implications of technology transfer, or the conflicts that a Western engineer might come across when working in non-Western countries. Consequently, in this chapter I argue that the ethical issues that engineering raises in different parts of the world, especially in emerging economies that are industrializing at a fast pace, are definitely worthy of consideration. Those issues extend far beyond the problems of technology transfer or the problems that a Western engineer might encounter when working in a country with different ethical standards. I review several reasons why we need to incentivize international approaches in thinking about and teaching engineering ethics. In addition, I focus on two main approaches for the latter, namely globalizing and diversifying engineering ethics.

Let me make three remarks about the set-up of the book. First, there may seem to be a topical asymmetry between different chapters in each part and also between different parts of the book. For example, one might argue that “morality and the machine” (Chapter 5) is itself a specific kind of designing for values (Chapter 4) – so why keep them separate? In selecting the themes of this book, I have taken several important ethical issues that emerge as a result of engineering practice, very broadly conceived. Sometimes certain issues have received additional attention. For instance, while Chapter 5, in a sense, extends Chapter 4, a specific focus on “morality and the machine” enabled me to discuss the broader ethical issues associated with AI more explicitly, including questions of agency and control. Another argument might be that sustainability and globalization do not necessarily relate to engineering practice. While sustainability is indeed not an engineering practice per se (as design is), thinking about sustainability in evaluating different energy technologies does come up in many engineering discussions and decisions. We could call these the “macro-ethical issues” in the practice of engineering.

This brings me to the second remark. In this book, I have reviewed two approaches to discussing engineering ethics, namely (a) ethics and the engineer and (b) ethics and the practice of engineering. While there are some discussions about the former, particularly about the responsibilities of an engineer in Chapters 1 and 7, this book predominantly discusses the latter approach. This is by no means a normative judgment about one approach being more important than the other, but merely a pragmatic attempt to fill a gap in the engineering ethics literature (especially in textbooks). Indeed, many engineering ethics books currently on the market already focus on the former approach and discuss in detail the “micro-ethical” issues that individual scientists and engineers will have to deal with on a daily basis.2


The current book aims to focus on the macro-ethical issues by expounding on the second approach, pertaining to ethics and the practice of engineering, in which proactive approaches to ethics in design and other engineering practices will be reviewed and expanded.

Third, my ambition with this book is to reach out to both advanced engineering students and professional engineers. While the book has not been specifically designed as a textbook (with assignments at the end of each chapter), I hope that its thematic breadth and accessible discussion of ethics will make it appealing to advanced graduate (PhD) students in engineering who wish to know more about ethics. I also hope that the book can reach practicing engineers with an interest in ethics. I have further tried to keep the book interesting for a broad international readership, both in the developing world and emerging economies and in the industrialized countries.

In the process of writing this book, there have been many groups and individuals that I wish to thank for their support, helpful feedback, and encouragement. First, I wish to thank the Section of Ethics and Philosophy of Technology at Delft University of Technology, and in particular Ibo van de Poel and Sabine Roeser, for their helpful feedback and support in finishing this project. Part of this book was written during a research visit to Politecnico di Milano in 2019, where I was hosted by the META group, an interdisciplinary network of scholars from engineering, architecture, and design departments. I wish particularly to thank Viola Schiaffonati, Daniele Chiffi, and Paolo Volonté, not only for their hospitality but also for allowing me to teach the contents of some of these chapters in their ethics classes. I owe special thanks to Hilary Gaskin of Cambridge University Press for inviting me to contribute to this Cambridge Applied Ethics series, and also for being patient with me over the many deadline extensions I had to request for various personal and professional reasons. I also wish to thank four anonymous reviewers for their careful reading of the manuscript and their helpful feedback, as well as many colleagues, friends, and family for feedback on the chapters and for their encouragement and support as I completed this book: especially Diane Butterman, Ali Dizani, Neelke Doorn, John Downer, Paul Fox, Pieter van Gelder, Jeroen van den Hoven, Peyman Jafari, Roozbeh Kaboly, Alireza Karimpour, Ramin Khoshnevis, Milos Mladenovic, Kaweh Modiri, Genserik Reniers, Zoë Robaey, Filippo Santoni de Sio, Susan Steele-Dunne, Behnaz, and Pak-Hang Wong. My special thanks go to Azar Safari for her helpful comments and for her encouragement at moments when I needed it most. While different parts of this book have been read and commented on by different colleagues from both engineering and philosophy perspectives, the usual disclaimer applies: any remaining mistakes are my own.

1. I thank an anonymous reviewer for helping me to better pinpoint what I mean by “practice” in this book.
2. The interested reader might wish to consider another book in the same Cambridge Applied Ethics series, Ethics and Science by Briggle and Mitcham, on the broader issues of science and engineering (including research ethics), and also Van de Poel and Royakkers’ Ethics, Technology and Engineering, Engineering Ethics by Harris et al., Davis’ “Thinking Like an Engineer,” and Johnson’s “Engineering Ethics” for elaborate discussions of engineers’ responsibilities, codes of conduct, and other relevant issues in professional ethics.

1 Ethics and Engineering: An Ethics-Up-Front Approach

1.1 The Dieselgate Scandal: Who Was Responsible?

In September 2015 the US Environmental Protection Agency (EPA) discovered irregularities with certain software in the board computer of Volkswagen diesel cars. The software enabled the car to detect when it was running under controlled laboratory conditions on a stationary test rig and to respond to that by switching to a mode of low engine power and performance. As a result, the emissions detected under laboratory conditions (or while the car was being tested) were substantially lower than the actual emissions when the car switched back to the normal mode on the road. This resulted in Volkswagen vehicles emitting up to forty times more nitrogen oxide pollutants than the levels allowed under US regulations. In what later came to be popularly known as the “Dieselgate” scandal, Volkswagen admitted that 11 million of its vehicles – including 8 million in Europe – had this software problem. In a Congressional hearing in the US, the CEO of Volkswagen’s American division, Michael Horn, apologized for this “defeat device” that served to “defeat the regular emission testing regime”1 but denied that the decision to incorporate the deceptive device was a corporate one. When he was asked whether he personally knew about the practice, he responded, “Personally, no. I am not an engineer.” Horn continued to blame a few rogue engineers.2 As this book goes to press, several executives have been imprisoned for their role in the scandal; a larger group of executives have been charged for their involvement.3
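To make the mechanism at the center of the scandal concrete, the sketch below caricatures the logic of a defeat device in a few lines of Python. It is purely illustrative: the sensor signals, thresholds, and mode names are hypothetical assumptions made for this example, not the actual Volkswagen software, which has never been published in this form.

```python
# Purely illustrative sketch of defeat-device logic; all signals, thresholds, and
# mode names are hypothetical and are not taken from the actual Volkswagen code.

def looks_like_emissions_test(vehicle_speed_kmh: float,
                              steering_angle_deg: float,
                              non_driven_wheel_speed_kmh: float) -> bool:
    """Heuristic: on a stationary test rig the driven wheels turn while the car is
    not being steered and the non-driven wheels stay still -- a pattern rarely seen
    in normal driving."""
    return (vehicle_speed_kmh > 0
            and steering_angle_deg < 1.0
            and non_driven_wheel_speed_kmh < 1.0)

def select_engine_mode(vehicle_speed_kmh: float,
                       steering_angle_deg: float,
                       non_driven_wheel_speed_kmh: float) -> str:
    if looks_like_emissions_test(vehicle_speed_kmh, steering_angle_deg,
                                 non_driven_wheel_speed_kmh):
        return "low_emission_test_mode"   # full exhaust treatment, reduced performance
    return "normal_road_mode"             # higher performance, far higher NOx emissions

# Example: speed registered from the driven wheels, no steering, non-driven wheels still.
print(select_engine_mode(50.0, 0.0, 0.0))  # -> low_emission_test_mode
```

The ethical point is not these few lines of logic themselves, but that somebody specified, reviewed, and shipped them: exactly the question of responsibility that the rest of this section takes up.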


The Dieselgate scandal provides an important case study in engineering ethics for several reasons. First, deception is clearly a breach of an ethically acceptable practice in engineering; Volkswagen first claimed that the problem was due to a technical glitch,4 but the “defeat device” was later admitted to have been intentionally included. Yet it remained unclear where and at which level of the organization the responsibilities lay. This brings us to the second issue, namely the responsibilities of engineers. In his original testimony to the House Committee on Energy and Commerce of the US Congress, Horn wholeheartedly accepted responsibility – “we at Volkswagen take full responsibility for our actions”5 – but in the questions and answers that followed with the members of Congress, he blamed a “few rogue engineers.” He did not feel any personal responsibility because – he claimed – as the CEO, he could not have known about the software problem. He further pointed out that the software was designed by engineers in Germany and not in the US, where he was the boss: “I feel personally deceived.”6 On the one hand, this pinpoints an interesting question regarding the responsibilities of different engineers in an organization versus those in the higher echelons of the organization. On the other hand, another question pops up: whether such a big fraud in the automotive industry could be the work of only a few rogue engineers. Horn’s stark distinction between engineering and management choices was also doubted by car industry veterans. As Joan Claybrook, former administrator of the US National Highway Traffic Safety Administration, said in an interview with the Los Angeles Times, rogue engineers cannot “unilaterally decide to initiate the greatest vehicle emission fraud in history. . . . They have teams that put these vehicles together. They have a review process for the design, testing and development of the vehicles.”7 The fact that several executives have been charged with fraud also underscores the lack of such a sharp division between engineering and management.

The third important feature of this example is this: engineering choices are often collaborative choices made by different people at different organizational levels, for instance by engineers in Germany and the US, as Horn stated in his testimony. This is sometimes referred to as “the problem of many hands.”8

Fourth, and somewhat related to the previous issues, the scandal reveals broader issues of responsibility in engineering. Horn’s testimony before the Congressional committee could be seen as an attempt to restore the trust of “customers, dealerships, and employees, as well as the public and regulators.”9 Engineering corporations operate in broad societal contexts, and they deal with large groups of stakeholders, to whom they have certain responsibilities. Likewise, engineers have broad social responsibilities that extend beyond their direct answerability to their employers; more about this will be said later in the chapter.

Fifth, this case emphasizes that many engineering choices made in the process of design – both intentional and unintentional – are not easily reversed afterward. These choices often have ethical implications. The Dieselgate scandal is an extreme example of ethically questionable decisions. Many ethical choices in engineering and design practice are implicitly made. Moreover, in many such situations there are no clear right and wrong options. In contrast to the Dieselgate affair, there may be a large gray area in which many ethically relevant questions reveal themselves.

While we need to realize that engineering ethics is often about less extreme situations and examples, the example of Dieselgate does help me to introduce two different aspects of the field of ethics and engineering, namely “ethics and the engineer” and “ethics and the practice of engineering.” I will focus on the former in the remainder of this chapter. Various issues will be discussed relating to the responsibility of an engineer in general and within organizations, including corporate social responsibility (CSR) and codes of conduct (such as professional engineering codes, company codes, and other important international codes). While discussing these issues I shall highlight the concepts that have to do with the role of engineers in their practice. In the last part of this chapter, I will introduce the concept of ethics up front and the forward-looking responsibility of an engineer. The other chapters of this book expand this ethics-up-front approach. Before focusing more on discussions about the first approach – ethics and the engineer – let me first clarify what the field of engineering ethics is not about.

1. Horn, Testimony of Michael Horn, 1.
2. O’Kane, “Volkswagen America’s CEO Blames Software Engineers for Emissions Cheating Scandal.”
3. O’Kane, “VW Executive Given the Maximum Prison Sentence for His Role in Dieselgate.”
4. See Ewing and Mouawad, “Directors Say Volkswagen Delayed Informing Them of Trickery.”
5. Horn, Testimony of Michael Horn, 2.
6. Kasperkevic and Rushe, “Head of VW America Says He Feels Personally Deceived.”
7. Puzzanghera and Hirsch, “VW Exec Blames ‘a Couple of’ Rogue Engineers for Emissions Scandal.”
8. Van de Poel, Royakkers, and Zwart, Moral Responsibility and the Problem of Many Hands.
9. Horn, Testimony of Michael Horn, 2.


1.2 Three Biases about Engineering and Engineering Ethics

There are persistent biases about ethics, engineering, and how the two fields relate to each other. In some instances, these are misunderstandings about what the role of ethics is in engineering practice, and in other instances, they represent only a narrow or incomplete view of ethics. It may be unconventional to introduce a field by first saying what it is not about, but because these biases often stand in the way of a better understanding, I will discuss them explicitly. This can help us to demarcate the boundaries of this academic field and – perhaps more importantly for the purposes of this book – establish what will be discussed in the book.

1.2.1 Isn’t Engineering Based Only on Facts and Figures?

A commonly heard argument is that engineering deals predominantly with facts and figures that are based upon the formulas and methods commonly accepted in engineering. These are also where the authority of engineering stems from. There is thus no room for ethics! However, ostensibly unbiased and objective issues in engineering often encompass important moral assumptions and choices. Take, for example, the probabilities assigned to the occurrence of major accidents, which often rest on a range of ethical assumptions. At times, the moral issue is not labeled or recognized as such. For instance, the question of “how safe is safe enough” when new nuclear power plants are designed is not only a legal and regulatory one. Nor can economic optimization models straightforwardly answer this question. Chapter 2 will extensively discuss this issue in the context of the Fukushima Daiichi nuclear disaster by focusing on the risk assessments that were made and, more importantly, on how such a major accident could fall through the cracks of such risk assessments. Chapter 3 will discuss the quantifications that are often used for comparing the costs and benefits of certain engineering projects, focusing not only on the underlying assumptions but also on the ethical implications of such quantifications. To sum up, seemingly exact issues in engineering may contain several assumptions of great ethical relevance.
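As a small illustration of how value judgments hide inside apparently neutral numbers, the sketch below uses the conventional expected-value notion of risk (probability multiplied by consequence). The figures and the monetization of harm are invented for this illustration and are not taken from the book’s case studies; the point is only that two assessors applying the same formula to the same probability can reach opposite verdicts, depending on an ethically laden choice about how harm is valued.

```python
# Generic expected-value risk calculation (risk = probability x consequence).
# All numbers are invented for illustration. The ethically laden step hides in the
# monetization of harm, not in the arithmetic itself.

def expected_annual_loss(p_accident_per_year: float, monetized_harm: float) -> float:
    return p_accident_per_year * monetized_harm

p = 1e-6  # assumed probability of a severe accident per year (hypothetical figure)

# Assessor A monetizes the consequences of a severe accident at 5 billion;
# Assessor B, weighing long-term health and environmental damage differently, at 500 billion.
print(expected_annual_loss(p, 5e9))    # 5000.0   -- looks negligible
print(expected_annual_loss(p, 5e11))   # 500000.0 -- same probability, very different verdict
```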

1.2.2 Isn’t Engineering Ethics about Abiding by the Law and Engineering Norms?

Another misconception, related to the previous one, is that engineering ethics is particularly (or even solely) about abiding by laws and regulations. Thus, an engineer needs to comply with various laws, regulations, and standards, for instance those regarding safety. As the bias goes, ethics is about successfully complying with those standards or simply not cheating in relation to the standards. This might well be seen as an interpretation of ethics, but perhaps in its most basic and least demanding form. Ethics in engineering aims to go further than this basic demand and to explore the responsibilities of engineers, which are certainly not confined to merely abiding by their legal obligations.

Ethics and law are unquestionably intertwined. Laws usually stem from what is commonly morally accepted in society; however, saying that something is legal does not necessarily mean it is ethically correct. Slavery, apartheid, and gender inequality might be part of the legal system of a country, but their ethical rightness might be very much questioned. Likewise, saying that something is ethically sound does not necessarily mean that it is embedded in the legal system. The latter has particular relevance in engineering because the law generally tends to lag behind technological innovations.

In this regard, a typical example that various engineering ethics books mention concerns the development of the Ford Pinto. In the 1970s Ford started developing a new two-door car, the Pinto.10 The development of this model went at an unprecedented pace, but the final result had a technical error: the gas tank was situated behind the rear axle, which meant that a rear-end accident (at speeds as low as 35 km per hour) could rupture it. This could easily lead to a fire, which is particularly worrisome in a two-door vehicle. The company was made aware of this problem by its engineers prior to the first release but decided to continue with the release. Legally speaking, Ford was meeting all the requirements because the crash tests in the US at the time did not require rear-end testing. This was clearly a situation in which ethical responsibility was not legally defined, especially because in this respect legislation was lagging behind, and the only people who were aware of the error were the engineers involved in developing and testing the Ford Pinto. Such a situation creates certain responsibilities for engineers, because they are often at the forefront of technological development and will – in principle – know before anyone else when laws are outdated or have become otherwise inappropriate or inadequate to deal with the engineering issues at hand.

It was shortly after Ford released the Pinto that the problematic crash tests were modified and a rear-end crash test without fuel loss was made obligatory. In this example, it was certainly not the engineers who decided to proceed with the release of the model. That was decided at executive level, which again shows that determining who is responsible in a large organization is a rather complex matter.11

10. Van de Poel and Royakkers, Ethics, Technology and Engineering, 67–69.
11. This case study is often used in ethics and engineering textbooks for another purpose. When Ford was later sued for the many losses and serious injuries attributable to the technical failure, the company justified its choice not to modify the design before release by using a Cost–Benefit Analysis (CBA). Ford had two modification methods, and even the more expensive method would have cost $11 per vehicle. However, Ford had decided not to modify, and this decision was justified in court using a CBA. I will briefly return to this CBA in Chapter 3. For more details about the Ford Pinto case, see Van de Poel and Royakkers, Ethics, Technology and Engineering.
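Note 11 refers to the Cost–Benefit Analysis with which Ford defended its decision; Chapter 3 examines CBA in detail. The snippet below is a minimal sketch of the bare arithmetic of such an analysis. All figures are invented for illustration and are not the numbers from the Pinto case; what the sketch makes visible is that, once harm to people has been monetized, a known defect can come out as “not worth fixing” – which is precisely where the ethical critique of CBA begins.

```python
# Minimal, purely illustrative cost-benefit comparison; all figures are invented
# and are not the numbers from the Ford Pinto case.

def net_benefit_of_fix(vehicles: int, cost_per_vehicle: float,
                       incidents_prevented: int, monetized_harm_per_incident: float) -> float:
    """Positive: the fix 'pays off' in purely monetary terms; negative: it does not."""
    total_cost = vehicles * cost_per_vehicle
    total_benefit = incidents_prevented * monetized_harm_per_incident
    return total_benefit - total_cost

# Hypothetical example: one million vehicles, an $11 fix per vehicle,
# one hundred prevented incidents monetized at $50,000 each.
print(net_benefit_of_fix(1_000_000, 11.0, 100, 50_000))  # -6000000.0: the fix "does not pay"
```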

1.2.3 Isn’t Engineering Ethics a Moral Brake on Innovation?

In engineering and technological innovation, ethics is sometimes considered to be a moral yardstick that can pass yes/no judgments on development.12 Indeed, it may sometimes be the case that moral considerations can urge engineers to stop developing a new technology altogether. The Precautionary Principle has now reverberated throughout engineering design for over two decades, since lack of scientific knowledge about potential risk cannot provide sufficient reason for further development.13 The Precautionary Principle is perhaps one of the most misunderstood principles in engineering, as it can do much more than give a dichotomous yes/no verdict about a technological development. Indeed, sometimes it might be recommendable categorically to say no to a certain development. A good example is the recent campaign to “Stop Killer Robots,” in which over 1,000 artificial intelligence (AI) scholars, philosophers, and other professionals pleaded for a ban on the development of fully autonomous weapons that are capable of engaging targets without human intervention.14 Nowadays, however, such campaigning is more the exception than the rule.

Modern approaches to applied ethics often reflect on technology within its societal boundaries. Let me elucidate this by describing an example from the ethics of nuclear energy, a debate that has long been associated with yes/no dichotomies. In view of the reality of the energy demands and consumption levels of the twenty-first century, our societies cannot afford the luxury of holding an isolated binary opinion about nuclear energy. Rather, we must investigate all the different paths for nuclear energy production and consider the future promises and possibilities afforded by these technologies while bearing in mind the burdens and benefits that each path creates for present and future generations. It is only after such moral analysis that we can compare different types of nuclear energy with other energy sources in order to reach conclusions on whether nuclear energy should have a place in the desirable future energy mix and, if we are to deploy it, on what type of nuclear energy should be further developed.

12. Van den Hoven, Lokhorst, and Van de Poel, “Engineering and the Problem of Moral Overload.”
13. The definition of the Precautionary Principle, according to the Wingspread Statement, emphasizes that (1) lack of fully scientifically established risk is no reason to assume that there is no risk and (2) it is the proponent of a new activity that should bear the burden of proof to show no risk. See www.gdrc.org/u-gov/precaution-3.html.
14. See www.stopkillerrobots.org/.

1.3 Ethics and the Engineer The first – and perhaps best-known and best-established – approach to engineering ethics focuses on the engineer and their roles and responsibilities from a broad societal perspective; this approach has also been referred to as professional engineering ethics. In this section, I will first discuss the question of whether engineering is to be considered a profession and, if so, what that means for the associated professional responsibilities. I will then present three different categories of responsibilities, namely the responsibilities of (1) an engineer to society, (2) an engineer in an engineering organization, and (3) engineering corporations toward society.

1.3.1 Is Engineering a Profession? This question is not, of course, applicable only to engineering. In several other professions, such as medicine, the question has been addressed for much longer. The consensus there seems to be that a profession must be "based upon the mastery of a complex body of knowledge and skills" and "used in the service of others," while members of the professions must

14 See www.stopkillerrobots.org/.


accept “a social contract between a profession and society, which, in return, grants the profession a monopoly over the use of its knowledge-base [and] the right to considerable autonomy in practice.”15 Thus, society grants certain rights to a profession, which – in turn – bring certain responsibilities. Michael Davis, a pioneer in engineering ethics, identifies a similar distinction between an occupation and a profession, stating that the exercise of an occupation does not require society’s approval and recognition, while the profession itself aims to serve ideals upheld by society.16 Thus, society “has a reason to give it special privileges.” Members of an occupation therefore serve their own interests, while members of a profession must primarily serve the interests of others. When investigating whether engineering is considered a profession everywhere in the world, Davis distinguishes between the economic and political traditions underlying the definition of a profession.17 The economic tradition sees a profession as “a means of controlling market forces for the benefit of the professionals themselves,” whereas the political tradition considers professions to carry legal conditions that “set standards of (advanced) education, require a license to practice, and impose discipline upon practitioners through formal (governmental) structures.”18 Both definitions fall short in that they fail to include reflections on the moral rightness or wrongness of professions. That leads Davis to his own philosophically oriented definition of a profession as an occupation that is organized in such a way that the members can “earn a living by openly serving a moral ideal in a morally-permissible way beyond what law, market, morality, and public opinion would otherwise require.”19 It is thus emphasized that the professions should both serve a moral ideal and strive to achieve that ideal in a morally permissible way. It is on this definition that the rest of this book is based as far as the morally relevant questions of engineering practice are concerned.20

15 Cruess, Johnston, and Cruess, "Profession," 74.

16 Davis, "Thinking Like an Engineer," 154.

17 Davis, "Is Engineering a Profession Everywhere?"

18 Ibid., 213–14. When defining a profession, Michael Davis also distinguished a third tradition, namely the anthropological tradition.

19 Ibid., 217.

20 Several authors have identified the characteristics of the engineering profession. Perhaps the most notable examples are provided by Van de Poel and Royakkers and by Harris et al. See Van de Poel and Royakkers, Ethics, Technology and Engineering, 35; Harris et al., Engineering Ethics, 13–14.


1.3.2 What Are the Responsibilities of Individual Engineers to Society? Professional Codes of Conduct If we accept the reasoning above, that is, that engineering is a profession, then the following two questions arise: (1) what is the moral ideal that engineering should serve and (2) what are the professional responsibilities of individual engineers? Both questions have frequently been addressed in the ethical standards that govern this profession, as reflected in codes of ethics or codes of conduct. Again, the desire to formulate such ethical standards is not unique to engineering. Many other professions have already formulated such standards in their professional codes of ethics. Undoubtedly, the most familiar example is to be found in the field of medicine, where the roots of the first codes of conduct are found in the Hippocratic Oath, which derives from Ancient Greece. Modern medicine has extended and modernized this ancient code into codes of conduct that serve to govern the present-day profession of medical doctors. Important discussions of codes of conduct in engineering go back to the questionable role that many scientists and engineers played in the Second World War. One of the most famous examples emerged from the “Engineers’ Creed” that the American National Society of Professional Engineers (NSPE) adopted in 1954. In this pledge – based on the doctors’ oath – issues such as respecting and maintaining the public interest as well as upholding the highest ethical standards were emphasized: “As a Professional Engineer, I dedicate my professional knowledge and skill to the advancement and betterment of human welfare.”21 Similar pledges were drawn up by the German Engineering Association in the 1950s, and they emphasized, among other matters, that engineers should not work for those who fail to respect human rights. This was a reference to the highly problematic role that many engineers had played in Nazi Germany.22 Pledges such as the Engineers’ Creed have been criticized for encompassing predominantly self-serving functions such as “group identification and self-congratulation” rather than addressing “hard decisions about how to

21 NSPE, NSPE Ethics Reference Guide, 2.

22 Van de Poel and Royakkers, Ethics, Technology and Engineering, 38.


behave in difficult situations."23 Indeed, such pledges are too general to have a meaningful impact on behavior, and they can, at most, serve to remind engineers of their social responsibilities. Later attempts to formulate professional engineering codes were much more detailed, such as the NSPE's "Code of Ethics for Engineers."24 Like many other engineering codes, this code was the upshot of discussions between members of professional organizations who sought to formulate ethical standards for the profession of engineering. The code was presented as a dynamic document that should "live and breathe with the profession it serves" and should be constantly reviewed and revised to "reflect the growing understanding of engineering professionalism in public service."25 Of course, this approach does not necessarily eliminate all the objections to codes. One may consider, for instance, the fact that ethics cannot always be codified and that forming a proper judgment about a situation (and thereby a potentially serious impact on decision-making in engineering) requires an understanding of the specifics of that situation.26 However, the purpose of such codes is not necessarily to point to unequivocal answers in ethically problematic situations. Instead, they mainly serve to emphasize the place that the profession of engineering has in society. Thus, society has granted engineers certain rights to exercise their profession, and with those rights and privileges come certain responsibilities; or, conversely, one may talk of the social contract that engineers have not only with society but also among themselves. This social contract is reflected in professional codes of conduct. The NSPE code is a general code applicable to engineering. Sometimes, in an attempt to bring the codes closer to actual practice, particular engineering fields formulate their own codes, such as those of the American Society of Civil Engineers (ASCE) and the American Society of Mechanical Engineers (ASME). Furthermore, professional engineering organizations in many other countries have adopted their own codes of conduct. Professional codes – both general engineering codes and specific codes related to individual engineering fields – can be found in Australia, Canada, Chile, China, Iran, Japan, Hong Kong, and Finland, to name but a few countries.27

23 Kultgen and Alexander-Smith, "The Ideological Use of Professional Codes," 53.

24 NSPE, NSPE Ethics Reference Guide.

25 Ibid., 1.

26 Ladd, "The Quest for a Code of Professional Ethics." For an overview of the scope and limitations of codes of conduct, see Van de Poel and Royakkers, Ethics, Technology and Engineering, section 2.3.


While some countries of the world do not have formal written codes of ethics, that is not to say that engineers there do not share the same social contract or uphold the same ideals that exist in countries that do. As Davis correctly argues, in many countries the technical standards serve the same purpose as a code of ethics, in that engineers expect each other to abide by the same standards.28 In discussions of the responsibilities of engineers, there is a potential tension between individual and collective responsibilities. Engineering practice typically involves several engineers, often from different organizations. This makes the attribution of individual responsibilities in collective action a rather daunting task. As already mentioned, this is referred to as the "problem of many hands." The notion was originally developed by Dennis Thompson with regard to the assigning of responsibilities to public officials when "many different officials contribute in many different ways to decisions and policies in the modern state."29 Ibo Van de Poel, Royakkers, and Zwart have extended the notion to the realm of engineering, where there are also often very many "hands" involved in certain decisions and activities.30 Discussing the Deepwater Horizon disaster in the Gulf of Mexico in 2010, these authors argue that "it is usually very difficult, if not impossible, to know who contributed to, or could have prevented a certain action, who knew or could have known what, etc."31 This is problematic because it implies that nobody can reasonably be held morally responsible for a disaster.

1.3.3 What Are the Responsibilities of an Engineer in an Organization? Corporate Codes of Conduct In addition to their responsibilities to society, engineers also have certain responsibilities to their employers. Engineers often work in organizations that understandably expect them to abide by certain rules and regulations.

27 The Illinois Institute of Technology has listed the internationally known codes; for an overview of all the countries and fields, see http://ethics.iit.edu/ecodes/ethics-area/10.

28 Davis, "Is Engineering a Profession Everywhere?," 223.

29 Thompson, "Moral Responsibility of Public Officials," 905.

30 Van de Poel, Royakkers, and Zwart, "Introduction."

31 Ibid., 4.


Often they have a special role in these organizations because they are at the forefront of technological development. This is, on the one hand, justification enough for professional codes to emphasize the broader responsibilities of engineers to society at large. On the other hand, it is also particularly important for an organization to secure an engineer's loyalty, to ensure that an engineer who shares in the organization's most sensitive information will not jeopardize the interests of that organization. This can give rise to dilemmas in which professional codes and organizational codes might require different courses of action.32 Naturally, this argument does not render codes of conduct redundant. Instead, it emphasizes the need to acknowledge the potentially conflicting duties that engineers may have in such situations, thereby highlighting the need for a broader understanding of the socio-ethical questions of engineering. Of course, there is no one-size-fits-all solution, and so all situations need to be considered in the light of their specific circumstances. Whereas professional codes of conduct explicate engineers' responsibilities as members of the profession, organizational codes of conduct (in the case of corporations they are called corporate codes) spell out, for instance, engineers' responsibilities to their organization as well as their responsibilities as members of that same organization to the organization's stakeholders.33 This brings me to the broader issues of the responsibilities of engineering corporations.

1.3.4 What Are the Responsibilities of Engineering Corporations? CSR Let me elaborate on this issue by returning to the case discussed at the beginning of this chapter, because an interesting development seems to have taken place at Volkswagen. Presumably in response to Dieselgate, the Volkswagen Group has updated and expanded its codes of ethics. These codes, like many other corporate codes, deal with the broader issues of the

32 See for an extensive discussion of this issue Van de Poel and Royakkers, Ethics, Technology and Engineering, 46–47.

33 The terminology can be a bit confusing here. Sometimes professional codes also discuss engineers' responsibilities to employers or to clients (e.g., the NSPE code). These codes do not, however, replace organizational codes such as corporate codes.


responsibilities of the corporation. More specifically, they discuss the corporation’s responsibilities (1) as a member of society, (2) as a business partner, and (3) in the workspace (and toward employees):34 “Every employee in the Volkswagen Group must be aware of their social responsibility, particularly as regards the well-being of people and the environment, and ensure that our Company contributes to sustainable development.”35 What is particularly striking about these renewed codes is the fact that they attempt in several ways to assist employees to arrive at ethically justifiable decisions in their own practice. First, there are a number of hands-on questions that encourage self-testing, including a “Public Test” (“do I still think my decision is right when my company has to justify it in public?”), an “Involvement Test” (“would I accept my own decision if I were affected?”), and a “Second Opinion” test: “What would my family say about my decision?”36 Furthermore, there are clear and detailed procedures included for “whistle-blowers”: “If we suspect a violation of the Code of Conduct or any other misconduct in our work environment, we can use the Volkswagen Group whistle-blowers system to report this – either giving our name or making our report anonymously.”37 Further provisions are included to protect whistle-blowers, who are presumed innocent until convicted of an offense. What is also interesting in these Volkswagen codes is the explicit reference to a lengthy list of other voluntary commitments, such as the UN Declaration of Human Rights, the guidelines of the Organization for Economic Cooperation and Development (OECD), the International Labour Organization’s Declaration on Fundamental Principles and Rights at Work, and the Ten Principles of the UN Global Compact.38 In general, corporations’ broader societal responsibilities are often subsumed under the heading of CSR. CSR (along with corporate codes of conduct) became important toward the end of the last century, the assumption being that they could “enhance corporations’ social and environmental commitments by articulating the norms and standards by which they profess to be bound.”39 Such corporate codes, as well as CSR measures, often 34

34 Indeed, corporate codes discuss not only issues such as confidentiality and the employee's responsibility to the employer but also the employee's rights in the workplace and, hence, the responsibilities of employers.

35 Volkswagen, Volkswagen Group Code of Conduct, 6.

36 Ibid., 65.

37 Ibid., 62.

38 See www.volkswagenag.com/en/sustainability/policy.html#.

39 Rosen-Zvi, "You Are Too Soft," 537.


emphasize the need to respect human rights and environmental protection and, more recently, to address problems associated with climate change and the much-needed global greenhouse gas emission cuts to which large multinational corporations can substantially contribute. Two important organizations to which the updated Volkswagen codes of conduct also refer are the OECD and the UN. The OECD has developed principles that build on the notion of transparency, including the OECD Principles of Corporate Governance and the OECD Guidelines for Multinational Enterprises.40 The UN Global Compact is perhaps the most recognizable initiative for the collection and disclosure of CSR-related information.41 It lists ten conduct-oriented principles covering subjects such as "human rights, labour, environmental and anticorruption values," all of which aim to create a framework for corporate accountability.42 Before moving to the next section, let me make three brief remarks about CSR. First, CSR and corporate codes do not, of course, relate only to engineering firms, but given the role that engineering firms play in various CSR-related issues, such as the environment, it is becoming increasingly important to consider them in relation to engineering corporations. Second, in discussions of CSR, there is an inherent assumption that a corporation aims and wishes to go beyond merely meeting legal requirements. CSR also covers issues that are not fully regulated or that are not regulated at all. In that sense, it is comparable to what we call "engineering ethics" in this book, that is, principles that go beyond what is legally required. Third, and in conjunction with the previous issue, committing to CSR-related requirements and having corporate codes is one thing, but acting upon them is another. The implied criticism here is that some such initiatives can easily be seen as window-dressing,43 or – in the case of environmental restrictions that corporations voluntarily impose on themselves – as greenwashing.44 It is therefore important to understand that the added value of CSR and corporate codes often lies in the way in which they are monitored, for instance in

40 OECD, OECD Principles of Corporate Governance; OECD, OECD Guidelines for Multinational Enterprises.

41 Akhtarkhavari, Global Governance of the Environment.

42 Backer, "Transparency and Business in International Law," 115.

43 Amazeen, "Gap (RED)."

44 Laufer, "Social Accountability and Corporate Greenwashing."


external and independent audits,45 and in whether the outcome of such audits is publicly available and could, for instance, lead to naming and shaming, all of which could incentivize changes of behavior among corporations.46

1.4 Ethics and the Practice of Engineering: Ethics Up Front In the remainder of this book, I will focus on this ethics-up-front approach in the practice of engineering. Let me clarify what I mean by both notions while introducing how ethics up front features throughout the book. In their practice, engineers engage in a variety of activities including the assessment and evaluation of risks, costs, and benefits, and the design and development of artifacts and systems, such as energy systems. At every turn in those activities, values are expressed either explicitly or implicitly, and choices have ethical ramifications, whether recognized or not. It is my aim to help engineers to better understand this aspect of their practice and think more intentionally about the ethical issues associated with their work.47 By "ethics up front" I mean proactive thinking about ethical issues. Rather than dwelling on ethical reflections in retrospect, this approach aims to facilitate the proactive involvement of ethics in engineering practice in order to identify the ethical problems at hand and to provide tools and frameworks to address those problems. It should be clear that I claim no originality for this approach. What I call ethics up front in this book builds on a tradition of ethics in technology and engineering that aims to include ethical reasoning and thinking as early as possible in a project. It neatly fits a number of approaches and methodologies discussed in this book, including value-sensitive design (VSD) and responsible innovation.48 The ethics-up-front approach features in all three parts of this book, as outlined below.

45 Morimoto, Ash, and Hope, "Corporate Social Responsibility Audit."

46 Jacquet and Jamieson, "Soft but Significant Power in the Paris Agreement"; Taebi and Safari, "On Effectiveness and Legitimacy of 'Shaming' as a Strategy for Combatting Climate Change."

47 I thank an anonymous reviewer for helping me to better determine the scope of what I mean by "practice" in this book.

48 In presenting this term, I am building on the work of Penelope Engel-Hills, Christine Winberg, and Arie Rip, who present ethics up front as necessary for proactive thinking about ethical issues in an organizational setting when building a university; see Engel-Hills, Winberg, and Rip, "Ethics 'Upfront.'" My understanding of this idea is that ethics can also be front-loaded in thinking about the practice of engineering.


Part I: Ethics and Assessment and Evaluation in Engineering In Chapters 2 and 3 of the book, I will focus on engineering assessment and the evaluations that often take place prior to the start of an engineering project and, hence, can facilitate an ethics-up-front approach. Chapter 2 will examine risk analysis from the perspective of the broader societal aspects of risk and ethics of risk. Risk is a crucial aspect in any engineering practice. The introduction of new technology to society often brings great benefits, but it can also create new and significant risks. Serious efforts have been made to assess, map, understand, and manage such risks. For instance, in the chemical industry, risk assessment methods have been proposed for describing and quantifying the risks associated with hazardous substances, processes, actions, and events. Perhaps the most notable example is the probabilistic risk assessment approach, originally developed in order to determine and reduce both the risk of meltdown in nuclear reactors and the risk of crashing in aviation. However, these and other risk analysis methods have limitations. By reviewing how the Fukushima Daiichi nuclear accident could fall through the cracks of risks assessments, I will discuss some of these limitations. Naturally, this is not intended to dismiss risk assessment but rather to make engineers more aware of what risk assessments can, and in particular cannot, do. Moreover, risk assessment methods have been criticized for ignoring the social and ethical aspects of risk. I will discuss in detail the ethical issues associated with risk analysis, distinguishing between individual-based approaches to ethics of risks (e.g., informed consent) and collective and consequence-based approaches. I will finish the chapter by reviewing several methods for dealing with uncertainties in engineering design and applications. In assessing technological risks, we will inevitably run into the problem of the social control of technology, also known as the Collingridge dilemma; that is, the further we progress in the development of new technology, the more we learn about the associated risks and the less we can control those risks.49 I will discuss approaches such as redundancies, barriers, and safety factors, as well as the Precautionary

49 Collingridge, The Social Control of Technology.


Principle and more modern approaches that take safety to the core of engineering design, specifically Safe by Design. In Chapter 3, I will discuss other engineering assessment methods that are focused on balancing costs, risks, benefits, and environmental impacts. More specifically, I will review Cost–Benefit Analysis (CBA) as one of the most commonly applied assessment methods, both when private money is to be invested in engineering projects and when choices are to be made between public policy alternatives. It is often wrongly assumed that a CBA is an objective way of assessing costs and benefits and that the result unequivocally presents the best outcome. CBA is rooted in consequentialist thinking in ethics, which argues that moral rightness depends on whether positive consequences are being produced. A specific branch of consequentialism is utilitarianism, which aims to not only create but also maximize positive consequences. Classically, a CBA judges alternatives on the basis of which of them maximizes positive consequences: it first tallies all the positive and negative consequences and then assigns a monetary value to each one. CBA and its underlying ethical theory have been abundantly criticized in the literature; can we assess moral rightness only in terms of consequences? And even if we assume that the latter is possible, can we objectively assign monetary values to those consequences? While these are essentially valid objections, the chapter is not intended to criticize and dismiss CBA. Instead, following the reasoning that formal "analyses can be valuable to decision-making if their limits are understood,"50 the chapter aims to show what a CBA can and cannot do, so as to make it maximally suitable for assessing the risks, costs, and benefits of an engineering project. Thus, the chapter provides several ways of circumventing some of the ethical objections to a CBA by amending, adjusting, or supplementing it, or – when none of these can help – rejecting and replacing the CBA as a method. This approach accommodates ethics-up-front thinking about the consequences of engineering projects.
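To make the classical logic concrete, the following minimal sketch (in Python) tallies monetized consequences for two alternatives and ranks them by net benefit. The alternative names and figures are invented assumptions for illustration only, not taken from the book or any real project; the monetization step they presuppose is precisely what the chapter goes on to question.

```python
# A minimal sketch of a classical cost-benefit comparison. It assumes that every
# consequence of an alternative has already been assigned a monetary value --
# the step whose objectivity is questioned above. Names and numbers are hypothetical.

def net_benefit(consequences):
    """Sum the monetized consequences: benefits positive, costs and risks negative."""
    return sum(consequences.values())

alternatives = {
    "retrofit existing plant": {"avoided damages": 120.0, "construction cost": -80.0, "residual risk": -15.0},
    "build new facility": {"avoided damages": 200.0, "construction cost": -160.0, "residual risk": -5.0},
}

for name, consequences in alternatives.items():
    print(f"{name}: net benefit = {net_benefit(consequences):+.1f} (millions, arbitrary currency)")

# The alternative with the highest net benefit "wins" -- a ranking that is only
# as sound as the monetization behind it.
best = max(alternatives, key=lambda name: net_benefit(alternatives[name]))
print("Highest net benefit:", best)
```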

Part II: Ethics and Engineering Design The Volkswagen Dieselgate scandal has drawn much media attention to the foul play seen in the automotive industry, but another aspect has also come

50 Fischhoff, "The Realities of Risk–Cost–Benefit Analysis."


to the fore. The scandal has reminded us that already in the design phase, engineering involves certain moral values and choices. Part II of the book will focus on ethical issues in the design of technology. It will build upon the argument that technological developments are not morally neutral. Especially in the design of new technologies, there are many implicit ethical issues. We should therefore be aware of, and proactively engage with, those ethical issues at an early stage of development. In Chapter 4, I will show how different values including security, privacy, and safety have been at stake in the design of whole-body scanners at airports. VSD and Design for Values will be discussed as two approaches for proactively identifying and including values in engineering design.51 When designing for values, one can run into conflicting individual values that cannot all be accommodated at the same time. Different strategies for dealing with value conflicts will be discussed, including designing out the conflict and balancing the conflicting values in a sensible and acceptable way. The chapter does not claim to offer the holy grail of design for ethics; indeed, complex and ethically intricate situations will emerge during the actual process of design. Instead, it offers a way to become more sensitive to these conflicts when they occur and to be equipped as far as possible to deal with them. The chapter will further discuss responsible research and innovation in proactive thinking about technological innovations. In so doing, it will extend the notion of design beyond simply technical artifacts and focus on the process of innovation. In the twenty-first century, we are moving at a fast pace toward the era of machines that are in charge of moral decisions, such as self-driving cars. In reviewing the Uber self-driving car accident in Arizona in 2018, Chapter 5 will first discuss the complexities associated with assigning responsibilities when such an accident occurs. This is a typical problem of "many hands" because it is difficult, if not impossible, to say precisely whose hands caused the accident. The problem is made even more complicated by the fact that one of the pairs of "hands" involved is that of a machine, and it raises the question: Who is responsible for the accident? Can we ascribe any form of responsibility to the car, or is the responsibility solely with the car designer or manufacturer? Who else might have contributed to the accident? I will

51 See, e.g., Friedman, Kahn, and Borning, Value Sensitive Design; Van den Hoven, Vermaas, and Van de Poel, Handbook of Ethics and Values in Technological Design.


review how crash optimization programs include ethical considerations while focusing more broadly on the ethics of self-driving cars. I will then widen the scope further to discuss the ethics of AI, focusing specifically on the problems of agency and bias. If we are to assign any responsibility to an autonomous system, we must assume that it can exercise agency, that is, that it can independently and autonomously decide how to act. This is problematic for both fundamental and practical reasons.

Part III: Engineering Ethics, Sustainability, and Globalization This part of the book will focus on intergenerational and international thinking about engineering and engineering ethics. Engineering practices produce not only benefits but also risks that clearly extend beyond generational and national borders. Chapter 6 focuses on sustainability and intergenerational justice in energy production and consumption. Sustainability is perhaps one of the most frequently misused and abused concepts, and what exactly it means has often been interpreted in a rather binary fashion. In this chapter, I will argue against the use of sustainability in a dichotomous mode. Two extreme examples of energy technologies are discussed: biofuel and nuclear energy. In a binary view, a desire for sustainability is likely to lead to excessive haste in dismissing or endorsing an energy system without understanding its technical specificities. This is unsatisfactory because one needs to first be aware of the technological possibilities, such as the different existing and future methods of energy production, and of the social and ethical implications of each method. If we ignore the complexity associated with sustainability, it can easily be (mis)used for greenwashing and window-dressing purposes, leading to potential ideological and political manipulation. Engineering is becoming increasingly globalized, and this raises the question of how the ethical dimensions of engineering should be addressed in the international context. In Chapter 7, I will first explore two existing strands in the literature, namely the ethical issues associated with technology transfer from Western to non-Western countries and, somewhat related, the dilemmas that a Western engineer may encounter when working in non-Western countries with other ethical standards. I will argue that discussions of international approaches deserve to be much broader than only


these two strands. Engineering ethics should not be predominantly considered a Western phenomenon; nor should we take only the Western perspective as the point of departure for discussions of engineering ethics. I will review the “why” and the “how” questions of international thinking in engineering ethics. I will also distinguish between the two approaches of globalization and diversification of engineering ethics. While the idea of globalizing engineering ethics is intuitively compelling, it must go forward in ways that do not damage the interests of newcomers to engineering fields. That is, in order to be recognized in this “global” engineering community, engineers in developing countries might be expected to adopt the global standards and their underlying values that are already established in the West. Diversification focuses on acknowledging the cultural and contextual differences between countries when dealing with questions of engineering ethics, but it can be easily misinterpreted as ethical relativism. Diversification is most helpful if we manage to facilitate a cross-cultural exchange and reflection. This can increase mutual understanding and respect among engineers and can help to educate culturally sensitive engineers.


Part I

Assessment and Evaluation in Engineering


2

Risk Analysis and the Ethics of Technological Risk

2.1 The Unfolding of the Fukushima Daiichi Nuclear Accident On March 11, 2011, the Great East Japan earthquake with a magnitude of 9.0, occurring 130 km offshore, caused a major tsunami, approximately thirteen meters high, that hit the coast of Japan.1 One of the affected areas was the Fukushima Daiichi nuclear power plant, which hosts six reactors, three of which were operational at the time of the earthquake. In line with their design, the operational reactors automatically “scrammed,” that is, the control rods were instantly inserted into the reactor core to reduce the nuclear fission and the heat production.2 However, as a result of the earthquake the plant was disconnected from its power lines; the connection with the electricity grid was needed for cooling down the reactor core. When all external power was completely lost, the emergency diesel generators kicked in to cool the reactor cores: this was another built-in design measure to improve safety. The real problem started to unfold when forty-five minutes after the earthquake the ensuing tsunami reached the coast: a series of waves inundated the plant causing serious damage, as a result of which eleven of the twelve emergency generators stopped working. When reactors are designed, the complete loss of external power from electricity grids and from the internal power of the diesel generators – a situation referred to as “station blackout” – is anticipated. In a blackout, batteries come into action that can continue the cooling of the reactor. However, the problem was that the batteries were also flooded in reactors 1

1 The course of events has been adapted from the World Nuclear Association's website (www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/fukushima-accident.aspx) and from a report published by the Carnegie Endowment for International Peace; see Acton and Hibbs, Why Fukushima Was Preventable.

2 Ibid., 4.


1 and 2, while reactor 3 had a functioning battery that continued working for about thirty hours. More importantly, the problems in the Fukushima Daiichi accident were not only attributable to a lack of electricity. Soon after the tsunami had inundated the plant, the seawater pumps that were supposed to remove the extracted heat from the reactors were destroyed by the tsunami.3 In boiling water reactors of the type operational in Fukushima Daiichi, water in a primary loop circulates around the reactor core, which produces enormous amounts of heat; that water then starts boiling and becomes the driving force of generators that produce electricity. The hot water is then cooled down through a secondary loop that takes away the heat from the primary loop. Cooled water then reenters the reactor core. The secondary loop is often connected to “fresh” sources of cool water, which explains why nuclear reactors are often built close to seas or rivers. So, even if electricity had been on supply, no fresh water from the sea could have circulated in the secondary loop. When the heated water could no longer be removed, the cooling water in the primary loop inside the reactor started to evaporate, turning into steam. That in turn oxidized the zirconium cladding that surrounded the extremely hot nuclear fuels. As a result, massive amounts of hydrogen were produced. The accumulation of hydrogen – in addition to the high pressure caused by the steam – led to an explosion in reactors 1 and 3. Since reactors 3 and 4 were connected with each other through their vent system, hydrogen also accumulated in reactor 4, leading to an explosion one day later. While reactor 4 was not operational at the time of the accident, it did host a large number of fuel rods in the spent fuel pools. Spent fuel rods are often kept in such a location to cool down before being shipped off. As was the case in the reactor cores, because of a lack of cooling, the water in the pools soon evaporated and after the explosion, the pool was unshielded and exposed to the open air. All the explosions led to the large-scale emission of radiation into the atmosphere. Ironically, the biggest concern surrounding the nuclear accident in Fukushima Daiichi did not relate to the reactors but rather to the spent fuel rods in the nonoperational reactor 4. In a report, the Japanese government revealed that for a time it had even considered evacuating Tokyo,4 when it was not clear whether the pool could be managed and properly contained. Evacuating Tokyo, a city of 35 million people, could have 3

3 Ibid., 5.

4 Quoted from Downer, "The Unknowable Ceiling of Safety."


easily led to many casualties and injuries, much distress, and major financial losses.5 It was indeed the convergence of several failures – “the perfect storm” – and the cascading effects that gave rise to one of the largest nuclear accidents in the history of nuclear power production. As a result, 300,000 people in the direct area were evacuated, many of whom have since returned to their homes.

2.1.1 The Aftermath of the Disaster In November, 2017, six and a half years after the accident, I visited the Fukushima prefecture, together with colleagues from the International Commission on Radiological Protection (ICRP). Upon our arrival in Fukushima City, the very first reminder of the accident was the air-monitoring stations on every major street corner that constantly monitored radiation levels, sometimes in real time and sometimes in the old-fashioned way with levels chalked up on small boards that were updated several times a day. Depending on the direction and the strength of the wind, there were days when radiation could shoot up to higher levels, but generally the boards were there to put across the "reassuring" message that the radiation levels in Fukushima City were far lower than the legally acceptable levels, at least at the time when I visited the city.6 The impact of the accident was most visible when I left the city. There were still black bags, quite literally, behind many of the houses. This was a reminder of the first days after exposure but also of the first days after people were allowed to return to their homes, when the government had instructed the population on how to clean up radiation residue on the roofs and everything else that had been exposed to large amounts of radiation.

5 Ibid.

6 The radiation levels I registered in the days I spent there were around 0.135 microsieverts per hour (µSv/h). Please note that radiation exposure is always linked to time. In other words, the period of exposure – in addition to the radiation intensity – very much matters for the health impacts. It is, however, much more common to communicate legal exposure limits per year (rather than per hour). For instance, in different national legislation it is indicated that levels of radiation of 2 to 3 millisieverts per year (mSv/y) are acceptable. The level of radiation registered by me in November, 2017 corresponds to roughly 1.2 mSv/y (0.135 µSv/h × 8,760 hours per year ≈ 1.18 mSv/y), which is a considerable amount of radiation but still below the threshold.


Those bags were then supposed to be collected by the government. While many had already been collected, there were still a number waiting to be removed from people's backyards. In addition to the collection of bags from private addresses, another, perhaps much larger, cleaning operation involved removing contaminated soil from public areas, where a layer of the road, sidewalk, school playground, or other surface literally had to be removed. Indeed, depending on the group of people potentially exposed to radiation, different thicknesses of layers had to be removed: in school playgrounds, for instance, larger portions were removed because children are generally more sensitive than adults to the health impacts of radiation. Contaminated soil was then bagged and shipped off to improvised waste treatment facilities in the hills, valleys, and any other places where, in this densely populated region, space was available for the creation of such facilities. In their first "destinations," the bags were inspected (and opened to remove any high-emitting sources), then rebagged in much thicker blue bags (to guard against leakage into the soil) before being transported to their "second destination" – another similarly improvised facility. This major – and still ongoing – cleaning operation has resulted in 15–22 million cubic meters of "contaminated soil from decontamination work [being] piled up or buried at about 150,000 locations in [the] Fukushima Prefecture, including plots near houses and schoolyards."7 The Japanese government is still looking for a "final destination" for all the contaminated soil. While the original plan was to dispose of the soil as waste, that plan was quickly revised when the magnitude of the problem became more apparent. The government considered reusing the soil for road construction, and also on agricultural land for crops that are not consumed by humans. The latter led to quite some controversy in Japan but also elsewhere, in connection with the possible dispersal of radiation. All agriculture is for human consumption at some level; even feed for animals will eventually reach human bodies, and the impact of radiation may even be worse, through the process of biomagnification, or the worsening of toxic effects through the cascading of biological

7 Ishizuka, "Official Storage of Contaminated Soil Begins in Fukushima." Different sources give different figures. The larger number of 22 million is reported by the Japanese Citizens' Nuclear Information Center; see www.cnic.jp/english/?p=4225 (consulted March 5, 2019).


impacts.8 Meanwhile, the famous blue bags are a defining feature in the scenery of the Fukushima prefecture. The last stop during our visit was at one of the improvised facilities on top of a hill overlooking the beautiful Date City in the valley below, with the view of countless blue bags in the foreground.

2.2 Assessing Technological Risk Introducing new technology to society often brings great benefits, but it can also create new and significant risks. Understandably, much effort in engineering has been devoted, across different technologies and industries, to assessing, understanding, and managing such risks. For instance, in the chemical industry, risk assessment methods have been proposed for describing and quantifying "the risks associated with hazardous substances, processes, actions, or events."9 One of the most systematic approaches to assessing risk is the probabilistic risk assessment or probabilistic safety assessment. This is a method that is often used for assessing the risk of major accidents, such as a nuclear meltdown. A probabilistic risk assessment usually involves identifying an undesired outcome (typically a major accident) that we wish to prevent from happening. It aims to identify different sequences of events that could lead to such an outcome, and to assign probabilities to each event. A probabilistic risk assessment calculates the probability of the undesired effect, and risk-reduction efforts will then be devoted to reducing the probabilities of each individual event so that the probability of an ultimate accident will be reduced.10
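As a rough illustration of the arithmetic behind such an assessment – not of any real reactor study – the following Python sketch computes the frequency of a hypothetical accident from two invented failure sequences, assuming for simplicity that the events in each sequence are independent. Real probabilistic risk assessments rely on far more elaborate event and fault trees and must deal with dependent failures; the sequences and numbers below are assumptions made purely for illustration.

```python
# A sketch of the arithmetic behind a probabilistic risk assessment: each accident
# sequence is a chain of failures and, assuming the failures are independent, its
# probability is the product of the per-event probabilities; the overall accident
# frequency is the sum over all sequences. Sequences and numbers are hypothetical.
from math import prod

sequences = {
    "grid power lost AND diesel generators fail": [1e-1, 1e-3],
    "grid power lost AND generators run AND cooling pumps fail": [1e-1, 1 - 1e-3, 1e-4],
}

per_sequence = {name: prod(probabilities) for name, probabilities in sequences.items()}
total = sum(per_sequence.values())

for name, p in per_sequence.items():
    print(f"{name}: {p:.1e} per reactor-year")
print(f"estimated accident frequency: {total:.1e} per reactor-year")
```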

8 Yap, "Blowback over Japanese Plan to Reuse Tainted Soil from Fukushima."

9 Covello and Merkhofer, Risk Assessment Methods, 3.

10 Doorn and Hansson, "Should Probabilistic Design Replace Safety Factors?" Proske has conducted a comparison of computed and observed probabilities of failure and core damage frequencies following the Fukushima event which has again questioned the safety of nuclear power plants. Statisticians and parts of the public claim that there is a significant difference in the core damage frequencies based on probabilistic safety analysis and on historical data of accidents including the Fukushima event. See Proske, "Comparison of Computed and Observed Probabilities of Failure and Core Damage Frequencies."


Probabilistic risk assessment has proven to be rather influential in policy-making because it neatly conceptualizes risk and presents it to policy-makers in terms of probability of occurrence. These probabilities then enable policy-makers to agree on the minimum risks with which a new technology (or project) must comply. One of the first questions that various reports reviewing the accident aimed to answer was whether the Fukushima disaster was the result of improper science and engineering practice, particularly in terms of risk assessment. Several of these reports responded affirmatively, arguing that the Fukushima Daiichi accident was (at least partially) the result of "bad science," but also the result of bad governance by the policy-makers and the company that operated the reactor (i.e., TEPCO).11 Indeed, the Fukushima Daiichi accident shows many instances of inappropriate assessment, but it remains above all else "a sobering warning against overconfidence in hazard prediction."12 In this chapter, I will focus on several limitations of risk assessment. Naturally, this is not intended to dismiss risk assessment but rather to make engineers more aware of what assessments involve.

2.2.1 The Science–Policy Interface: What Does "Probabilities" Mean? At the heart of this accident (and others), there is a more fundamental problem surrounding the science–policy interface, and more specifically policy-makers' expectations of science: expectations that are partially created by the scientists and engineers involved in making the assessments in question. Indeed, policy-making on technological risk very much depends on scientific findings, and specifically on risk and reliability assessments. The classic distinction made in risk discussions is between the scientific endeavor of risk assessment and the policy endeavor of risk management. It would be beyond the scope of this chapter systematically to review this distinction and other relevant literature on risk governance. Instead,

11 More specifically regulators, who "regulate" risk, or determine the rules that a vendor needs to comply with, and then go on to monitor compliance with those rules. In this chapter, I shall not discuss those governance issues in detail. For more information, see Nuclear Accident Independent Investigation Committee, The National Diet of Japan.

12 Acton and Hibbs, Why Fukushima Was Preventable, 11.


I will focus on one specific aspect of how the outcome of risk assessments is presented and used in policy-making on risky technologies. Risk assessments, especially probabilistic risk assessments, often conclude with the probability of an accident, which is then followed by a minimum criterion for making policy with respect to a specific technology. The probability of a nuclear meltdown is, for instance, often used when a minimum standard that a new nuclear reactor needs to comply with is presented. The seemingly exact numbers, figures, and engineering calculations can, however, sometimes appear to reflect a level of reliability and confidence that is not there.13 This becomes particularly problematic when the public and policy-makers are encouraged to accept such calculations as objective truths,14 especially when the implicit conclusion is that the indicated probability is "so low as to be negligible."15 Let me illustrate this by discussing the probabilities of fatal nuclear meltdown in nuclear reactors, specifically by examining what exactly the assigned probabilities mean. A very important development in the systematic assessment of risks based on probabilities goes back to the 1970s, and was actually based on an improved understanding of how to reduce the risk of nuclear meltdowns. A couple of years before the Three Mile Island accident of 1979, the US Atomic Energy Commission initiated a new study to assess the safety of nuclear reactors by mapping all the events that could possibly lead to such an accident and then assigning probabilities to each single event. The study soon came to be known as the Rasmussen Report,16 after the Massachusetts Institute of Technology professor Norman Rasmussen, who led the investigations; the proposed method was termed probabilistic risk assessment.17 The Rasmussen Report found the core damage frequency of the then standard (so-called Generation II) reactors to be approximately 5 × 10⁻⁵. While the historical data with regard to reactors operating between 1969 and 1979 suggested a more pessimistic probability of meltdown (namely 10⁻³), thinking in terms of probability of core meltdown soon became acceptable for

For a more detailed discussion of nuclear safety and probabilistic risk assessment, see Taebi and Kloosterman, “Design for Values in Nuclear Technology.”

14

Downer, “Disowning Fukushima.”

15

Downer, “The Unknowable Ceiling of Safety,” 36; emphasis in original. Nuclear Regulatory Commission, Reactor Safety Study.

16 17

Keller and Modarres, “A Historical Overview of Probabilistic Risk Assessment Development and Its Use in the Nuclear Power Industry.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

30

Ethics and Engineering

nuclear energy policy in subsequent years:18 a core meltdown of 10

4

was

commonly accepted to be the minimum level of safety that different national legislations required new nuclear reactors to guarantee.19 Let us now have a closer look at this kind of probability actually means. In the first place, these probabilities indicate core damage frequency per reactor per year, that is, years of reactor operation, so an occurrence frequency of 10

4

corresponds to one accident in every 10,000 reactor years. On the basis

of the number of reactors in operation in the 1980s (i.e., almost 500), this therefore meant that an accident could occur once in every twenty years.20 However, in the 1980s, a supposed “nuclear renaissance” was anticipated that would entail a growth of as many as 5,000 reactors worldwide. Ten times more reactors would amount to ten times more reactor years. As a result, accidents could become ten times more likely. A core meltdown accident probability would then increase to once every two years,21 which was deemed unacceptable in terms of both public confidence in nuclear energy and the political acceptability of nuclear energy.22 Safer nuclear reactors were therefore needed.

2.3 How Did the Fukushima Disaster Fall through the "Cracks" of Risk Assessment?

In the previous section we discussed the actual meaning of probabilities in risk assessments, arguing that risk assessments are potentially effective tools for public policy, but only if we properly understand and use them. In the remainder of this section we will discuss why the Fukushima disaster was not anticipated, arguing that several issues that gave rise to the accident could have been anticipated and, hence, that the accident could have been prevented.

18 Minarick and Kukielka, Precursors to Potential Severe Core Damage Accidents, 1969–1979.
19 The Nuclear Regulatory Commission often distinguishes the probability of core-melts from the probability of core-melts with significant releases (i.e., a major accident, like Fukushima). The frequency of 10⁻⁴ refers to the former and not to the latter, and it is a standard from the 1970s. Modern reactors are expected to perform at a probability that is substantially lower. Nuclear Regulatory Commission, Safety Goals for the Operations of Nuclear Power Plants. I wish to thank John Downer for pointing this out to me.
20 Each year, 500 reactor years would pass, which means that, based on the probability of 10⁻⁴, the expected number of accidents would be 5 × 10⁻² (i.e., 500 × 10⁻⁴), or simply once in every twenty years.
21 Calculation: 5,000 × 10⁻⁴ = 5 × 10⁻¹, or simply once in every two years.
22 Weinberg and Spiewak, "Inherently Safe Reactors and a Second Nuclear Era."


Let us discuss several limitations of risk assessment, or what risk assessment cannot anticipate. Interestingly, the Onagawa power plant, which was actually closer to the epicenter of the earthquake and experienced tsunami waves as high as those that reached the Fukushima Daiichi plant, was not affected.23 This raises the question of why the Fukushima Daiichi plant was affected. Different reports have concluded that the Fukushima Daiichi accident was the result of problematic assumptions made in the risk assessments. There were at least four important aspects of risk assessment that were problematic, insufficient, or faulty. First, when designing the Fukushima Daiichi power plant in the 1960s, TEPCO estimated the maximum height of a tsunami to be about 6.1 m, less than half the height of the waves that eventually reached the plant. This estimate was based on historic seismic evidence in the region.24 The question is, which earthquakes over what period would have to be taken into account? The design of the Fukushima Daiichi plant seems to have taken too short a historical period into account: the elevation of the reactor was determined on the basis of the Shioya-Oki earthquake in 1938, which had a maximum magnitude of 7.8 on the Richter scale.25 TEPCO made two sets of calculations in 2008 based on datasets from different sources, each of which suggested that tsunami heights could top 8.4 m – possibly even rising above 10 m. Even if the height of the barrier had been raised to 8.4 m (as suggested), safety measures for 10 m would still have been insufficient, since the tsunami waves on the day of the accident reached a height of approximately 13 m. The reason why the Onagawa plant – closer to the epicenter of the March 2011 earthquake – was not affected was that a "wider" time frame had been used: on the basis of historic events, including an earthquake in 869 with a magnitude of 8.3 on the Richter scale, the expected wave heights had been put at 13.6 m, which proved to be the appropriate estimate.26 This pinpoints an important issue with regard to new scientific insights and how they are reflected in actual safety policies; the Fukushima Daiichi plant was designed in the 1960s, when seismology was a much less mature science. While TEPCO was already catching up on the new insights in order to make its plant safer, its improvements did not go fast enough and, even with the proposed adjustments, they would not have been sufficient to prevent the accident.

23 Synolakis and Kânoğlu, "The Fukushima Accident Was Preventable," 6.
24 Acton and Hibbs, Why Fukushima Was Preventable, 12.
25 Abe, "Tectonic Implications of the Large Shioya-Oki Earthquakes of 1938."
26 Synolakis and Kânoğlu, "The Fukushima Accident Was Preventable," 6.


Second, it has been argued that the tsunami risk was inappropriately understood and inadequately included in the risk assessment.27 Tsunami risk, even though very familiar in Japan, was added to the nuclear power plant guidelines of Japan's Nuclear Safety Commission only in 2006.28 It has been observed by tsunami experts that there was insufficient understanding of an important tsunami phenomenon:29 while individual waves 10 km from the coast were at most 6 m high, it was the convergence of earlier waves coming back from the land with the incoming waves that culminated in new waves, giving rise to tsunami heights as high as 13 m.30 Third, the total loss of off-site power (i.e., the grid) and on-site power (i.e., diesel generators and batteries) was not considered as a possibility.31 This total loss of electricity occurred because the power from the electricity grid was disconnected as a result of the tsunami, while the diesel generators were situated at an insufficiently high elevation, which resulted in their being inundated, contrary to what had been predicted in the risk assessment. Station blackouts as such were anticipated in the assessments; that is, batteries were added to the design of the plants as an extra layer of safety. The problem was that the batteries were also partially inundated, and they could last for only a short time. What would happen if the batteries also did not work (or stopped working) was not anticipated in the risk assessments. Fourth, and in conjunction with the latter, crucial components were flooded during the tsunami. As mentioned in the previous paragraph, the batteries were placed beneath the inundation level, which was why the electricity in the plant was completely lost, and other components crucial for safety – such as the seawater pumps – were not protected either, so that they were severely damaged when the tsunami arrived on the coast. More generally, one might argue that the focus of risk assessment for nuclear power plants is often mainly – or sometimes solely – upon the core damage.

27 IAEA, The Fukushima Daiichi Accident Report by the Director General.
28 Acton and Hibbs, Why Fukushima Was Preventable.
29 Synolakis and Kânoğlu, "The Fukushima Accident Was Preventable."
30 Acton and Hibbs, Why Fukushima Was Preventable, 10.
31 Synolakis and Kânoğlu, "The Fukushima Accident Was Preventable."


As mentioned earlier in the description of this case, the biggest reason for concern in the Fukushima Daiichi disaster was not the reactor cores of the operational (and nonoperational) reactors: the reason why the Japanese government even considered evacuating Tokyo was the presence of the spent fuel pool in nonoperational reactor 4. In conclusion, several aspects of the risk assessments had clearly been carried out inappropriately, which is why several reports concluded that the Fukushima Daiichi accident could have been prevented. Let us now take a closer look at what we can and should expect from risk assessments and, more importantly, at whether we can anticipate and address all possible risks.

2.4 What Risk Assessments Cannot Anticipate: "Normal Accidents"

Since risk and reliability assessments are becoming increasingly influential in how we perceive and deal with risk, it is important that we also critically review these assessment methods. If the science were done properly, could we assume that we could, in principle, anticipate and include all risks in our (probabilistic) assessments? How we deal with uncertainties remains a major problem. I will review several important limitations of reliability assessments, and more specifically of probabilistic assessments, arguing that there are – in principle – things that risk assessments cannot address. From the outset, it should be stated that I am merely trying to make engineers sensitive to what can and cannot be anticipated with such assessments, which could help them to gain a richer notion of what risks entail and of how we can deal with them. The next section, on the societal issues of risk assessments, serves the same purpose. A fuller understanding of risk in engineering could also improve how assessments are presented and used in policy-making, and thus be conducive to a better use of risk assessment in public policy. What, then, are the risks that we cannot anticipate? First of all, assessment methods are often based on the premise of perfect human performance.32 It is because of this assumption – that the operator will always function flawlessly – that after accidents government reports often conclude that "calculations would have been sound if people had only obeyed the rules."33


The official Fukushima report also concluded that the accident had to be viewed as a "man-made" disaster.34 Nongovernmental reports on Fukushima reiterated the same conclusion; for instance, the Carnegie Report concluded that "the Fukushima accident does not reveal a previously unknown fatal flaw associated with nuclear power," but rather "it underscores the importance of periodically reevaluating plant safety in the light of dynamic external threats and of evolving best practices."35 This is in a way true, because the types of risks that manifested themselves during the unfolding chain of accidents were known: proper periodic reevaluation of tsunami risks, and following international best practice as well as best practice from other plants elsewhere in Japan (such as the Onagawa plant), could have prevented the chain of accidents. Yet, while we can try to estimate lower and upper bounds of the occurrence probabilities,36 the fundamental issue remains that some risks cannot be fully anticipated; ironically, we learn from the mistakes of each accident in order to make our engineering systems less prone to similar risks. An important lesson learned from the Fukushima accident was that the diesel generators and batteries should have been placed at a higher altitude even if the tsunami risk assessments suggested otherwise. This might sound like common sense, but it was not standard practice in many nuclear power plants throughout the world, including even those situated close to rivers and coastlines. This was perhaps due to an overconfidence in the risk assessments regarding the potential wave heights of tsunamis. More generally speaking, we always need to deal with the "paradox of safety," meaning that "safety is often measured in its absence more than in its presence"; the term "accident" essentially means that "as long as hazards . . . and human fallibility continue to co-exist, unhappy chance can combine them in various ways to bring about a bad event."37 We can therefore conclude that several impacts cannot be systematically included in risk and safety assessments.

32 Recent improvements to risk assessment methods have introduced a human error probability in order to incorporate human error in quantitative risk assessments. See, for instance, Steijn et al., "An Integration of Human Factors into Quantitative Risk Analysis Using Bayesian Belief Networks towards Developing a 'QRA+.'"
33 Downer, "Disowning Fukushima," 296.
34 Nuclear Accident Independent Investigation Committee, The National Diet of Japan.
35 Acton and Hibbs, Why Fukushima Was Preventable, 2; emphasis added by me.
36 Cooke, Experts in Uncertainty.
37 Reason, "Safety Paradoxes and Safety Culture."


Second, and in conjunction with the latter issue, some risks simply cannot be anticipated. In highly complex socio-technical systems with many interconnected components (as in the case of nuclear power plants), there are incidents that are very difficult to predict and, therefore, to prevent from happening. The sociologist Charles Perrow calls these "Normal Accidents."38 By this he means that complex systems are messy, and that even functionally unrelated elements can influence each other in ways that are very difficult (if not impossible) to anticipate.39 The explosion in reactor 4 of the Fukushima Daiichi plant vividly illustrates a Perrovian Normal Accident. It involved an unanticipated coincidence that – because reactors 3 and 4 were connected – led to an explosion.40 The result was something potentially much worse than the meltdowns in the reactor cores of reactors 1 and 3. Perrow's Normal Accidents are an acknowledgment of the limitations of assessment, a point that has been observed by other scholars, albeit differently expressed: "The bitter reality is that severe nuclear accidents will occur in the future, no matter how advanced nuclear technologies become; we just do not know when, where, and how they will occur."41 Perrow's Normal Accidents are, in principle, instances of unanticipated risks. We could also conceive of risks that could essentially be anticipated, yet that we choose not to include in our calculations because we consider them to be statistically negligible or because we argue that our engineering system can withstand them. This is the third limitation of risk assessments; John Downer calls it the framing factor, which pertains to "outside-the-frame" limitations, and gives the example of a "fuel-laden airliner being hijacked by extremists and then flown into a nuclear plant."42 While this kind of risk has been recognized by the builders of modern reactors, no attempt is made to calculate its probability (and to include it in the probabilistic risk assessment). Instead, the engineers argue that the hardened containment building around the reactor can withstand such an impact. Even if we assume that this statement is true, we could argue that the explosion and ensuing fire of such an attack could seriously disrupt the reactor's surrounding safety systems by disconnecting the grid cables, diesel generators, and batteries, and maybe even the cooling pumps that take water to the reactor.

38 Perrow, Normal Accidents.
39 This is based on John Downer's reading of Perrow; see Downer, "The Unknowable Ceiling of Safety," 44.
40 It seems that while reactors 3 and 4 were connected for the purpose of air circulation efficiency, this connection was not included in the risk assessments.
41 Ahn et al., Reflections on the Fukushima Daiichi Nuclear Accident, viii.
42 Downer, "The Unknowable Ceiling of Safety," 40.


Generally speaking, one could say that security issues associated with a reactor – be it a cyber-attack or someone intentionally trying to make the reactor malfunction and cause a meltdown – are the type of risks that are not typically included in most probabilistic assessments.43

2.5 The Ethics of Risk: Social Acceptance versus Ethical Acceptability

An important criticism of risk assessment methods has often been that they ignore societal and ethical aspects of risk. This criticism has led to two important developments in the literature. First, a powerful strand of social science scholarship is devoted to developing the concept of the "social acceptance" of technological risk, arguing that when technological risks are introduced, people need to accept – or at least tolerate – those risks.44 During the last three decades, social acceptance studies have gained more relevance for major technologies, most notably in relation to large energy projects such as sizable wind parks and nuclear energy technologies.45 This is due to the controversies and public opposition emerging from the introduction or implementation of such technologies. In discussions about risky technologies, a distinction is often made between the actual acceptance of a technology and the ethical questions concerning which levels of risk should be acceptable to the public.46 The second development in the literature on risk assessment methods has come from the fields of ethics and philosophy.

43 This argument is about the usual approaches to risk assessment. However, important efforts to quantify security risks with methods and techniques from game theory and decomposition techniques (such as attack trees), and to combine safety and security risks, need to be acknowledged. See Chockalingam et al., "Integrated Safety and Security Risk Assessment Methods."
44 Flynn, "Risk and the Public Acceptance of New Technologies."
45 Chung, "Nuclear Power and Public Acceptance"; Albrechts, "Strategic (Spatial) Planning Reexamined"; Wüstenhagen, Wolsink, and Bürer, "Social Acceptance of Renewable Energy Innovation."
46 Grunwald, "Technology Policy between Long-Term Planning Requirements and Short-Ranged Acceptance Problems"; Hansson, "Ethical Criteria of Risk Acceptance"; Asveld and Roeser, The Ethics of Technological Risk.


There is a growing body of literature in applied ethics that takes up the issue of the ethical acceptability of risky technologies.47 It has been argued that when focusing on social acceptance alone, we might easily overlook important ethical issues; so there is a noticeable gap between social acceptance and ethical acceptability.48 Fittingly, many philosophy researchers are now considering methods for assessing the ethics of technological risk and, consequently, the ethical acceptability of risky technology. These assessments often involve conceptual philosophical considerations. Many authors have emphasized the interrelatedness of the two concepts of social acceptance and ethical acceptability.49 Lack of social acceptance can sometimes be attributed to the fact that important ethical issues that the new technologies engender are overlooked in the decision-making phase. For instance, public opposition to siting issues may stem from an unfair distribution of risk and benefit between a local community (which will be exposed to additional risks) and a larger region or even nation (which will enjoy the benefits). It has, for instance, been empirically shown that in the case of sustainable energy technologies, the acceptance of individual members of a community is affected by those members' social norms, as well as by their feelings about distributive and procedural justice. In the case of major wind energy projects, it has also been argued that what matters is "not only mere acceptance, but [also] the ethical question of acceptability."50 I will focus on two approaches to assessing the ethical acceptability of technologies. In conjunction with each approach, I will also discuss why focusing solely on social acceptance may turn out to be utterly inadequate when the ethical issues associated with risk are addressed.

47 See, e.g., Hansson, "Ethical Criteria of Risk Acceptance"; Asveld and Roeser, The Ethics of Technological Risk; Hansson, "An Agenda for the Ethics of Risk."
48 Van de Poel, "A Coherentist View on the Relation between Social Acceptance and Moral Acceptability of Technology"; Taebi, "Bridging the Gap between Social Acceptance and Ethical Acceptability."
49 Cowell, Bristow, and Munday, "Acceptance, Acceptability and Environmental Justice"; Van de Poel, "A Coherentist View on the Relation between Social Acceptance and Moral Acceptability of Technology."
50 Oosterlaken, "Applying Value Sensitive Design (VSD) to Wind Turbines and Wind Parks."


First, I will focus on individual-based approaches to ethically consenting to risk. While the need to ensure social acceptance itself stems from the deeper ethical belief that when an individual is exposed to certain risks, they should be able to be informed about and consent to those risks, it is also relevant to know to what extent the individual has actually been informed about the risk and has had the freedom to consent to it. Second, I will discuss approaches to ethical acceptability that focus on the consequences of an action for its ethical evaluation, aiming to reduce the negative and increase the positive consequences of risks, and on some associated distributional concerns.

2.5.1 Individual-Based Approaches: Informed Consent

At the beginning of this chapter, I described my first encounter with the nuclear accident in Fukushima City in the form of air-monitoring stations on major street corners. Similar stations are also found along the roads in the region. The basic idea behind these stations was, indeed, to inform the population about the radiation risks they were being exposed to. The more fundamental ethical thinking is the belief that whenever risk is being imposed on an individual, that individual has the moral right to be informed about and to consent to the risk. The right to be informed was formalized in environmental law through the Aarhus Convention, which grants a number of rights to the public. It mentions (1) "access to environmental information" and (2) "public participation in environmental decision-making."51 Consenting to such risk is an additional criterion originating from the informed consent principle. The principle of informed consent has its roots in biomedical ethics, where it is used with regard to medical procedures (e.g., surgery) and clinical practices (e.g., testing a new drug). Its deeper fundamental roots go back to respecting an individual's autonomy, in the deontological – or duty-based – school of thinking in ethics. As Immanuel Kant asserted, the dignity of the individual needs to be inherently respected.52 In Kantian thinking, therefore, respect for autonomy "flows from the recognition that all persons have unconditional worth, each having the capacity to determine his or her own moral destiny."53

51 While this convention primarily refers to "the state of the environment," it also includes "the state of human health and safety where this can be affected by the state of the environment." This quotation is from the website of the United Nations Economic Commission for Europe; see http://ec.europa.eu/environment/aarhus/.
52 Respect for autonomy could also be defended from the perspective of John Stuart Mill, whose ideas about utilitarianism will be discussed later in this chapter and in the next chapter. Mill's account is very much that individual freedoms need to be respected as long as they do not interfere with or hamper another individual's freedom. The interested reader might like to consult the discussion of autonomy in biomedical ethics: see Beauchamp and Childress, Principles of Biomedical Ethics, chap. 4.


So when an individual rational agent who can decide for themselves is exposed to some risk, they have every right to be fully informed about the consequences of that risk and to consent to it.54 Informed consent is thus built on two crucial assumptions: (1) being fully informed and (2) consenting to the risk. Let us review these assumptions. The first assumption is that when people accept (or consent to) a proposed risk, they will have been correctly and fully informed beforehand. The literature on environmental justice provides many examples of why this is a problematic assumption.55 For instance, in a case study of a uranium enrichment facility in Louisiana, local communities were asked to "nominate potential sites for a proposed chemical facility."56 While the communities did apparently nominate – and hence consent to – host sites, there were several inherent ethical problems with this situation. One was that the company never informed the local communities about the exact nature of these "chemical plants"; enrichment facilities are indeed chemical plants, but they are very specific types with radiological risks. In addition, the company never presented any quantitative or qualitative risk assessments. Thus "it [was] impossible to know, reliably, the actual risks associated with the plant" when accepting those risks. As a matter of fact, after the nuclear accident, many people in the Fukushima prefecture started measuring levels of radiation for themselves, essentially questioning the officially communicated levels (and exclusion zones). This gave rise to a proliferation of all kinds of devices and citizen-science approaches to measuring and communicating radiation levels in the region.57 The second assumption upon which informed consent is built is that those who are exposed to a potential risk must consent to that risk.

53 Ibid., 103.
54 Scheffler, "The Role of Consent in the Legitimation of Risky Activity."
55 Wigley and Shrader-Frechette, "Environmental Justice," 72.
56 This is a quotation from the draft Environmental Impact Statement of the NRC. It is quoted here from Wigley and Shrader-Frechette, "Environmental Justice," 71.
57 Taebi, "Justice and Good Governance in Nuclear Disasters."


These assumptions also lead to the question of whose consent we should actually take into account and, in conjunction with that, whether we can give each individual a veto. From the early days of engineering ethics, there have been different proposals for applying informed consent to technological risks, because there, too, autonomous individuals are being exposed to risks to which they should in principle be able to consent.58 This principle is straightforwardly applicable in biomedical ethics, where usually the interest of just one individual patient is at stake, but extending it to collective technological risk can be rather problematic. As Sven Ove Hansson correctly argues, informed consent is "associated with individual veto power, but it does not appear realistic to give veto power to all individuals who are affected for instance by an engineering project."59 In the same vein, while we must respect the rights of each individual who is exposed to risk, modern societies would not be able to operate if all imposition of risk were prohibited.60 We can further safely assume that the "affected groups from which informed consent is sought cannot be identified with sufficient precision," and hence the question of whose consent is being sought seems to be highly relevant.61 This is also reflected in the Louisiana enrichment facility case study concerning the site-application process, where the opinions of host communities located very close to the proposed facilities were not considered; instead, communities further away from the facilities were consulted.62 There are also ample other examples of local communities objecting to a proposed local facility despite broader (often national) public endorsement of the same technology; wind energy provides many rich examples.63 The reason why communities often have more difficulty with proposed facilities at a local level touches on the more fundamental question of the distribution of the burdens and benefits of new technology, or – to put it more bluntly – the winners and losers that new technology creates. This is the focus of the next subsection, on the consequence-based approach to ethical acceptability.

58 Martin and Schinzinger, Ethics in Engineering.
59 Hansson, "Informed Consent out of Context."
60 Hansson, "An Agenda for the Ethics of Risk," 21.
61 Hansson, "Informed Consent out of Context," 149.
62 Wigley and Shrader-Frechette, "Environmental Justice."
63 Devine-Wright, Renewable Energy and the Public; Pesch et al., "Energy Justice and Controversies."



2.5.2 Collective, Consequence-Based Approaches

Another leading approach to assessing the acceptability of risk involves looking at the consequences of a risky technology. Indeed, risky technology is often presented to society from the perspective of its benefits (e.g., well-being). By looking at the benefits and comparing them with the risks, we can – in principle – determine whether the new technology is justified. The fundamental thinking behind this goes back to consequentialism as proposed by Jeremy Bentham, which argues that for an action to be morally right, it should produce net positive consequences. The idea behind this school of thought in ethics is that we should tally the good and the bad consequences in order to be able to compare the two categories and arrive at a conclusion about the rightness of an action or, in this case, a proposed technology. Consequentialism is a highly influential approach not only in public policy but also at the interface of engineering and policy. It has also been extensively criticized in the literature. I will discuss the approach, its applicability to engineering assessments, and the relevant criticism in Chapter 3. Let me just touch upon one criticism here that is very relevant in the context of risk, which is that consequentialism cannot deal with the distributional issues of risk. Consequentialism is essentially an approach in which we aggregate the good and bad consequences and in which no serious distinction is made between those to whom those consequences accrue. Distributional issues, however, underlie new technologies, both spatially and temporally. When risky facilities are sited, several fundamental ethical issues need to be addressed in the spatial realm, including questions about how environmental burdens and benefits should be distributed. In addition, there are also more practical questions with ethical relevance, such as how to establish an acceptable distance between facilities where major accidents could occur and the residents who would be exposed to them.64 The distributional issue becomes more complex when we have to deal with international (spatial) and intergenerational (temporal) risks. Some technological projects engender international risks.

64 Watson and Bulkeley, "Just Waste?"; Basta, "Siting Technological Risks."


For instance, some of the technological solutions proposed for dealing with climate change, such as geoengineering (i.e., intentional climate change designed to reverse undesired change), raise serious questions of international procedural and distributive justice, as well as questions regarding international governance and responsibility.65 The multinational character of such proposals makes it virtually impossible to address their desirability in social acceptance studies alone. They are perhaps most ethically complex in relation to temporal distribution, alternatively known as intergenerational issues. For instance, at what pace should we consume nonrenewable resources, and what level of change in the climatic system will be acceptable to future generations? These questions become especially intricate when new technology that could help us to safeguard future interests compromises the interests of people alive today. Such a situation gives rise to moral questions that are not easy to address in public acceptance studies. For example: Do we have a moral obligation to provide benefits for, or to prevent losses to, future generations if that comes at a cost to ourselves?

2.6 How to Deal with Uncertainties in Technological Risks

In assessing technological risks, we will inevitably run into the problem of the social control of technology, also known as the Collingridge dilemma: that is, the further we are in the development of a new technology, the more we will know about the technology (and the associated risks) and the less we can control it. Let me first offer some explanation regarding definitions, because until now I have used "risk" as a rather generic term. Ibo van de Poel and Zoë Robaey have presented a helpful taxonomy that is particularly relevant to discussions about the risks of new technology and the associated uncertainties. They distinguish between risk, scenario uncertainty, ignorance, and indeterminacy.66 In discussions of technological risk, we speak of risks when we are familiar with the nature of the consequences and can meaningfully assign a probability to those consequences.67

65 See, e.g., Pidgeon et al., "Deliberating Stratospheric Aerosols for Climate Geoengineering and the SPICE Project."
66 Van de Poel and Robaey have a category called "normative ambiguity" that I have not included here; see Van de Poel and Robaey, "Safe-by-Design." Instead, I will present the idea of normative uncertainties in risk discussions: see Taebi, Kwakkel, and Kermisch, "Governing Climate Risks in the Face of Normative Uncertainties."


"Scenario uncertainty" is a situation in which "we do not know all the scenarios (or failure mechanisms) that may lead to an undesirable outcome"; "ignorance" is a situation in which "we not only lack knowledge of all failure mechanisms but furthermore do not know about certain undesirable consequences that might occur"; and "indeterminacy" is a situation in which users or operators "may employ a technology differently than foreseen or expected by the designers."68 I will add a fifth category: normative uncertainties, that is, situations in which there is uncertainty from a normative (ethical) point of view as to which course of action with respect to risk should be preferred.69 Many principles in engineering safety have been devised to reduce the uncertainties associated with risks, including situations of ignorance and indeterminacy.70 I will first focus on how engineering has endeavored to respond to this quest, before moving on to discuss approaches that are informed and driven by ethics.71

2.6.1 Redundancies, Barriers, and Safety Factors

In engineering, common approaches to acknowledging uncertainties include, among others, building in redundancies, safety barriers, and safety factors.72 Redundancies are additional components added to an engineering system to protect the system against the failure of a single component; for instance, the diesel generators inside a nuclear reactor are often multiplied with redundancy in mind. Safety barriers are often physical systems designed to protect against a certain type of risk; the sea wall at the Fukushima Daiichi reactor is a typical example of such a physical barrier

See for other definitions of risk Hansson, “Risk and Safety in Technology.” Van de Poel and Robaey, “Safe-by-Design,” 298–99. Taebi, Kwakkel, and Kermisch, “Governing Climate Risks in the Face of Normative Uncertainties.”

70

Möller and Hansson, “Principles of Engineering Safety”; Doorn and Hansson, “Should Probabilistic Design Replace Safety Factors?”

71

There are also mathematical ways of dealing with uncertainties, for instance by “integrating out” uncertainties; see Van Gelder, “On the Risk-Informed Regulation in the

72

Safety against External Hazards.” It is not my intention to systematically review safety engineering approaches. For an overview see Doorn and Hansson, “Should Probabilistic Design Replace Safety Factors?”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

44

Ethics and Engineering

that was supposed to protect against tsunami risks.73 With their roots in antiquity, safety factors are perhaps the oldest of these ways of reducing uncertainty in engineering. They have mostly been used in construction engineering and mechanical engineering, in order to design engineering artifacts and constructions that are able to resist loads higher than those they should withstand during their intended use.74 A bridge, for instance, could be designed to withstand loads three times the maximum weight it needs to withstand; the safety factor would then be 3. This safety factor is a direct way of acknowledging the unanticipated hazards that could take place. While the rationale of adding extra strength to construction has very old roots, it was only in the nineteenth century that numerical values were added to quantify the extent of the extra strength to be added to a system.75 Basically, the less we know, the larger the safety factor should be. Likewise, the larger the potential major consequences of an undesired effect (e.g., a hydropower dam collapsing), the greater the safety factor should be. More nuanced thinking in safety engineering has guided us toward expressing risks in terms of probabilities of occurrence. Probabilistic risk assessment has been presented by some as an alternative to safety factors. Probabilistic approaches, the probabilistic risk assessment being a prominent type, are intended to address risk in terms of the probabilities of the different scenarios that could lead to an undesired effect, the aim being to reduce each probability so that the total probability of occurrence of an accident will be reduced. They also focus on reducing the probabilities of a possible undesired impact after a risk has materialized. In general, probabilistic assessments have been contrasted with deterministic approaches to risk, such as safety factors, in that they present a more nuanced view of the notion of risk. Some disagree, however, maintaining that probabilistic assessment is not necessarily superior to deterministic approaches because a risk to which no meaningful probability can be assigned needs to be addressed by the use of a safety factor.76 The inability to assign meaningful probabilities to certain events has indeed been a serious problem in probabilistic risk assessment, as has also been discussed above.

73 74

IAEA, Safety Related Terms for Advanced Nuclear Plants. Hammer, Product Safety Management and Engineering.

75

Doorn and Hansson, “Should Probabilistic Design Replace Safety Factors?”

76

Möller and Hansson, “Principles of Engineering Safety.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

Risk Analysis and the Ethics of Technological Risk

45

Sometimes, a probabilistic risk assessment is used in conjunction with a deterministic approach such as using a safety factor.

2.6.2 Changing the Design Philosophy: Safe-by-Design Systematic risk assessments such as probabilistic risk assessments have contributed to making technologies safer by helping to reduce the probability of an accident.77 Among other things, they have triggered a change in the design philosophy of nuclear reactors – from being active to passive and ultimately to becoming inherently safe reactors. Active safety regimes are those that depend on human intervention (along with other external sources such as electricity grids). Existing nuclear power reactors (of the Generation II type) partially deploy this safety philosophy. When some of the “potential causes of failure of active systems, such as lack of human action or power failure” have been removed, we call the systems passively safe.78 Generation II reactors have been adjusted – to some degree – to include this design philosophy. For instance, when the earthquake was detected in the Fukushima plant, the operational reactors scrammed as designed so that the control rods were automatically inserted into the reactor core to slow down the chain reaction and heat production and prevent damage to the reactor core. The surrounding safety features of these reactors also assumed a certain degree of passivity in that – in principle – they did not rely on external sources such as electricity grids. Reactors are supposed to remain safe with diesel generators and batteries when all external power is disconnected. When reliance on more external factors has been reduced, higher levels of passivity can be achieved, for instance, by removing reliance on all power sources (externally or internally produced) for cooling. Generation III and III+ reactors incorporate this safety philosophy into their design, for instance by including large sources of water at a higher altitude that can flow into the reactor and cool down the reactor core if the external power and the ability to cool down the reactor are shut down. Passively safe reactors already

77

One might argue that it is only the structural or procedural measures that make technologies safer; the probabilistic risk assessment helps to quantify the risk levels and may recommend which measures to take from a decision tree analysis. This paragraph draws on Taebi and Kloosterman, “Design for Values in Nuclear Technology.”

78

IAEA, Safety Related Terms for Advanced Nuclear Plants, 10.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

46

Ethics and Engineering

reduce the probability of an accident substantially (by a factor of about a hundred times).79 A policy of inherent safety elevates safety to a different level by removing certain hazards altogether. Simply put, if you do not want your system to be exposed to fire hazard, you should build it from a material that cannot even catch fire; you will then make it inherently resistant to fire hazard.80 Likewise, if we want to build reactors that are resistant to meltdown risks, we should build their cores from materials than cannot melt at the temperatures produced in the reactor.81 Safety improvement in reactor design can follow two different paths. First, an existing design can be improved incrementally. Incremental improvement does not mean small improvements; the above-mentioned example of reducing the likelihood of a meltdown could be achieved by introducing one incremental safety improvement, for instance reducing some of the complexities of the system such as the number of pipes, valves, cables, or any other physical components that could fail. It could also be realized by simply rearranging the safety systems; e.g., bringing safety pumps closer to the reactor vessel would already represent a substantial reduction in complexity and, hence, a safety improvement.82 Moreover, passively safe features could be achieved by placing sources of water at a higher altitude, which could provide the “safety-related ultimate heat sink for the plant” for the existing design of conventional nuclear power plants.83 Safety improvement can also be achieved by introducing revolutionary changes to design, that is designing from scratch with safety as the leading criterion. The notion of Design for Values, as presented later in this book, is very much in line with the idea discussed here, but it is not only about the value of safety; other key values of engineering such as sustainability, security, and affordability can play a crucial role in design too.84 On a related subject, a fairly recent approach to engineering design is what is termed the Safe-by-Design approach, which has been developed to 79

See Taebi and Kloosterman, “Design for Values in Nuclear Technology,” 809.

80

Nolan, Handbook of Fire and Explosion Protection Engineering Principles.

81

For more information regarding the three safety regimes in relation to new reactors, see

82 84

Taebi and Kloosterman, “Design for Values in Nuclear Technology.” 83 Ibid., 820. Schulz, “Westinghouse AP1000 Advanced Passive Plant,” 1552. This is a reference to the so-called Pebble-Bed reactors, which are inherently safe and cannot melt down; see Goldberg and Rosner, Nuclear Reactors.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

Risk Analysis and the Ethics of Technological Risk

47

address safety issues at the research and development and design stage. It was introduced particularly in response to the unanticipated risks presented by emerging technologies such as synthetic biology or nanotechnology. Safeby-Design aims to mitigate risks as much as possible during the design process rather than “downstream during manufacturing or customer use.”85 This is a fascinating approach that frontloads thinking about safety at an early stage of development. In the beginning of this subsection, I distinguished between risk, scenario uncertainty, ignorance, and indeterminacy. I have adopted this taxonomy from the work done by Van de Poel and Robaey on the Safe-by-Design approach, particularly focusing on how Safe-by-Design deals with more complex uncertainties. Risk can be eliminated “by taking away the (root) causes,” which can result in inherent safety approaches as discussed above; Safe-by-Design can further set out to “either reduce the likelihood of undesirable scenarios (or failure mechanisms) or to reduce the consequences of undesirable scenarios, for example by providing containment.”86 The strategies for reducing risk are not always successful in dealing with scenario uncertainty because they can consider only known scenarios, which is why unknown scenarios must be dealt with differently. This may mean deploying safety factors (as discussed above), but safety factors are not directly applicable – as Van de Poel and Robaey argue – to synthetic biology, nanotechnology, and other emerging technologies.87 Situations of ignorance – which, simply put, are situations in which we don’t know and we don’t know that we don’t know – are perhaps even more difficult to deal with in Safe-by-Design approaches, because we do not even know the nature of what could go wrong and how to design for (or against) it. The response to ignorance is often sought in precautions involving thinking about risk (such as the Precautionary Principle, as will be discussed below) and in adaptive approaches to risk governance, which do not rely on anticipation of risk but, instead, enable us to deal with potential risk if and when it materializes.88

85

Morose, “The 5 Principles of ‘Design for Safer Nanotechnology.’”

86

87 Van de Poel and Robaey, “Safe-by-Design,” 299. Ibid. Klinke and Renn, “Adaptive and Integrative Governance on Risk and Uncertainty”;

88

International

Risk

Governance

Council,

“Risk

Governance

Guidelines

for

Unconventional Gas Development.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

48

Ethics and Engineering

Indeterminacy is also very difficult to design for because it relates to human factors in risk when technology is entering into use . This issue has been discussed earlier in this chapter with respect to the social aspects of risk that need to be acknowledged. Van de Poel and Robaey distinguish between two types of indeterminacy: the knowable and the unknowable. The first type are ones that we may be able to forecast and design for.89 The best example is perhaps designing a passively safe reactor which, in principle, does not depend on an operator who must actively pay attention and intervene if the core starts to get too hot or the pressure rises. This approach was – as discussed above – a response to the Chernobyl disaster, which was partially the result of inattentive operators. The second type of indeterminacy is, however, much more difficult to design for because – as with ignorance – we don’t know what we don’t know about certain human mistakes during operations. That remains a fundamental challenge in discussions on risk assessment and management. Finally, Safe-by-Design can be a helpful approach for addressing normative uncertainties, or situations in which there is more than one right answer to the ethical questions surrounding risk. Nuclear waste disposal provides an excellent example, because there are various different methods for dealing with it, for instance in a retrievable fashion that allows for future generations either to retrieve and further deactivate it (and hence respects their freedom of action) or to permanently dispose of it, which is the better option from the perspective of future safety. How to deal with nuclear waste is essentially an ethical question, but one with several (diverging and even contrasting) ethical implications. Yet we cannot state with certainty that one option is to be preferred from an ethical point of view. Safe-by-Design is presented in public policy to encourage frontloading these questions of safety, but also to help weigh safety against other important ethical considerations.

2.6.3 The Precautionary Principle One important definition of the Precautionary Principle is laid down in the Rio Declaration on Environment and Development (1992): It requires that 89

Van de Poel and Robaey call this “designing out indeterminacy”; see Van de Poel and Robaey, “Safe-by-Design,” 301.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

Risk Analysis and the Ethics of Technological Risk

49

“lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”90 What this means is that a lack of full scientific certainty that a hazard exists is not a valid reason against preventive action. Later approaches to the Precautionary Principle involved more prescriptive guidelines, such as the Wingspread Statement (1998), which prescribes that “precautionary measures should be taken” in certain situations.91 The literature on the Precautionary Principle is full of examples of “late lessons from early warnings,”92 including health problems associated with lead in petrol fuel combustion that were responded to long after the first problems were observed, and asbestos in construction material. The latter is a commonly cited example in discussions of the Precautionary Principle, since the first signs of its associated health problems date back to 1906; in the “1930s and 1940s cancer cases were reported in workers involved in the manufacture of asbestos,” but it was only in 1989 that the first ban on asbestos was introduced. Various studies have concluded that thousands of lives could have been saved if the early warnings had been taken seriously.93 The Precautionary Principle is perhaps one of the most misinterpreted and misunderstood ethical principles. Because of the abundance of examples in the literature that seem to imply that the use of risky technologies or materials should have been stopped when the first signs of risk were visible, the Precautionary Principle is often understood in a binary mode and as a conservative, risk-averse restriction on innovation. As a result, in some parts of the world the principle has had difficulty in becoming embedded in serious thinking about risk. However, it can also be interpreted as a principle that can guide action and make us more sensitive to certain risks. Per Sandin’s approach, for instance, is helpful in that it distinguishes between four common elements, which can be recast into the following if-clause: “If there is (1) a threat, which is (2) uncertain, then (3) some kind of action (4) is

90

See www.un.org/documents/ga/conf151/aconf15126-1annex1.htm; italic added by me.

91

See www.iatp.org/sites/default/files/Wingspread_Statement_on_the_Precautionary_Prin

92

This is the title of a large report prepared by the European Environment Agency, including many historic examples of early warnings that were not taken seriously in

.htm.

time; see European Environment Agency, “Late Lessons from Early Warnings.” 93

Randall, Risk and Precaution, 4.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.004

50

Ethics and Engineering

mandatory.”94 The first two clauses will then relate to risk assessments and indicate “when the precautionary principle can be applied,” while the latter two clauses are about “how to apply the principle.”95

2.6.4 Resilience Engineering As an acknowledgment that perhaps some uncertainties cannot be reduced and that we should regard them as a given, it has been argued that we should build our systems so that they are resilient. Resilience, generally speaking, concerns the ability of a system to regain a stable position after a disturbance such as a major mishap.96 It is therefore relevant to the capacity of a system, and as such it “reflects a significant shift away from traditional risk management strategies that focus on levels of risk.”97 The concept has become popular in the early 2000s, and it has been discussed a lot in disaster management and, for instance, in relation to how to return the infrastructure to predicate conditions.98 The Organization for Economic Co-operation and Development (OECD) defines resilience as the idea that “people, institutions and states need the right tools, assets and skills to deal with an increasingly complex, interconnected and evolving risk landscape . . . to increase overall well-being.”99 Resilience has many different definitions and approaches. Neelke Doorn argues that these vary not only between disciplines but also within disciplines. As an example, she mentions the evolution of the concept in the influential reports of the Intergovernmental Panel on Climate Change (IPCC), where the definitions seem to change from absorption and adaptations to more sophisticated anticipation and reduction of risk as well as recovery of the system.100

94 Sandin, “Dimensions of the Precautionary Principle.”
95 Hansson, “The Precautionary Principle,” 264; emphasis in original.
96 Hollnagel, “Resilience.”
97 Doorn, “Resilience Indicators,” 711–12.
98 Liao, “A Theory on Urban Resilience to Floods”; OECD, Guidelines for Resilience Systems Analysis; Doorn, “How Can Resilient Infrastructures Contribute to Social Justice?”
99 OECD, Guidelines for Resilience Systems Analysis, 1.
100 Doorn, “Resilience Indicators,” 713.


2.6.5 Technology as a Social Experiment

The last approach to dealing with the inherent uncertainties of new technologies – including situations of ignorance – is to consider technology as a social experiment. Van de Poel argues that technologies with large impacts on society (such as synthetic biology) cannot be adequately appraised in advance, either through “science-based or evidence-based approaches” or through “precautionary approaches,” because these both fail to account for “important actual social consequences of new technologies and making us blind to surprises”; he further argues that we should consider technology a social experiment.101 Following the principles of biomedical ethics (several of which have been mentioned in the previous sections of this chapter), Van de Poel presents a number of principles that can help to assess the ethical acceptability of responsible experimentation. A social experiment should be set up in a “flexible” fashion, and it should not “undermine resilience,” while risks and hazards should remain contained “as far as is reasonably possible,” and it should be “reasonable to expect social benefits from the experiment.”102

2.7 Summary

Risk is a crucial aspect of any engineering practice. The introduction of new technology to society often brings great benefits, but it can also create new and significant risks. Serious efforts have been made to assess, map, understand, and manage these risks. For instance, in the chemical industry, risk assessment methods have been proposed for describing and quantifying the risks associated with hazardous substances, processes, actions, and events. Perhaps the most notable example is the probabilistic risk assessment approach, originally developed to systematically understand and reduce both the risk of meltdown in nuclear reactors and the risk of crashes in aviation. However, these and other risk analysis methods have limitations, some of which I have discussed by reviewing how the Fukushima Daiichi nuclear

101 Van de Poel, “An Ethical Framework for Evaluating Experimental Technology”; Robaey and Simons, “Responsible Management of Social Experiments.”
102 Van de Poel, “Nuclear Energy as a Social Experiment,” 289. See also Taebi, Roeser, and Van de Poel, “The Ethics of Nuclear Power”; Van de Poel, “Morally Experimenting with Nuclear Energy.”


accident could fall through the cracks of risk assessments. Naturally, this is not intended to dismiss risk assessments, but rather to make engineers more aware of what assessments can and in particular cannot do, in connection with, for example, the concept of Normal Accidents as introduced by Charles Perrow. Risk assessment methods have been criticized for ignoring the social and ethical aspects of risk, so I have discussed the ethical issues associated with risk analysis, distinguishing between individual-based approaches to the ethics of risk (e.g., informed consent) and collective and consequence-based approaches. I have finished by reviewing several methods for dealing with uncertainties in engineering design and applications, including redundancies, barriers, and safety factors, as well as discussing the Precautionary Principle and more modern approaches that place safety at the core of engineering design, specifically Safe-by-Design.


3

Balancing Costs, Risks, Benefits, and Environmental Impacts

3.1 The Grand Ouest Airport of Nantes: The End of Fifty Years of Controversy

After a number of failed attempts involving tear gas and rubber bullets, the French police decided in 2016 to stop trying to evacuate protesters from the woods near the city of Nantes in France.1 The protesters, who had set up camps and were living in improvised cabins and tree houses in the woods, were protesting against the proposed new airport, the Aeroport du Grand Ouest at Notre-Dame-des-Landes.2 After continued protests, and against a backdrop of increasingly intense climate debates and the prominent global role which France was championing in climate change mitigation, in January 2018 the government of President Macron decided to abandon plans for this €580 million project.3 This decision marked an important turning point in a fierce and longstanding debate that had lasted for about fifty years. The airport had been proposed in the 1970s to draw traffic away from Paris’s two large airports; this was presumably thought necessary at a time when Concorde, the prestigious Anglo-French supersonic airliner that could halve long-distance travel times, was entering into operation and was expected to increase air traffic substantially. While the impact of Concorde on air travel actually remained

1 The description of the case, along with the economic analysis and alternative calculations, is partly based on media reports but much more on a report prepared by CE Delft that reviews the existing social cost–benefit analysis of the Grand Ouest Airport and presents alternative analyses; see Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport. The alternative calculations presented in Table 3.1 were adopted from this report, with the kind permission of the authors.
2 Schofield, “Battle for a New Nantes Airport.”
3 BBC, “France Scraps Controversial Airport Plan.”


fairly limited, thus making the need to draw traffic away from Paris virtually negligible, the proposal for the new airport remained, though its aim changed. In 2008, when it was finally decided to build the new airport, its official aim had become to replace Nantes Atlantique airport, which was supposedly approaching its maximum capacity, and to provide a new, larger airport to facilitate transportation to “France’s fastest-growing region” around the city of Nantes.4 The proposed project was assessed and evaluated in 2006 in an economic impact assessment method known as Social Cost–Benefit Analysis (SCBA), which provided the economic justification for building the new airport. The expected costs were mainly the construction costs of the new airport, and the expected benefits included an array of quantifiable and unquantifiable factors, among them noise reduction, safety improvements, and the urbanization of the region as a result of the larger airport. The proposed Grand Ouest airport would be built further away from the city than the existing Nantes Atlantique, which is very close to the city. This would result in less noise for residents and would improve safety by reducing the casualties and other damage if an accident were to occur at or near the airport. The economic benefits, along with the urbanization benefits, were however considered to be negligibly low and very difficult to quantify; they would at any rate not justify the costs associated with building the new airport. The main categories of benefits that tilted the economic balance strongly toward the positive aspect – hence favoring the new airport – were the saving of travel times and the anticipated growth from the existing 3 million to the anticipated 9 million passengers passing through the new airport, which would be able to handle such numbers.5 The conclusion of the SCBA was that the expected benefits would far outweigh the costs. The Grand Ouest airport could thus be economically justified. The SCBA did not seem to convince everybody of the need for the new airport. Several organizations, including Solidarité Ecologie, continued to object to it on the basis of the proposed location, which was presumably of high ecological value, and also because, they argued, the existing Nantes Atlantique airport could still suffice if it were further improved and

4 Schofield, “Battle for a New Nantes Airport.”
5 Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport, 8. Three scenarios were discussed; Scenario 2 is included in Table 3.1.


optimized.6 A large public consultation process regarding environmental matters in France, carried out between 2007 and 2009 – the “Grenelle de l’Environnement” – concluded that when the impact of a project on a region is too great, alternatives to the project need to be carefully studied. The SCBA at that time had considered only one option, namely building the new airport, while alternatives such as optimizing the existing airport were not studied. Some elected representatives also had doubts about the need for a new airport, and their organization asked the consultancy CE Delft to “carry out a review of the existing SCBA and to compare the economic impacts of the new airport with the continued use of Nantes Atlantique”; it is this report that I will further discuss in my review of the SCBA in the present chapter.7 This report carefully scrutinized the existing SCBA (in one of the selected scenarios, shown as Scenario 2 in Table 3.1) and presented an alternative analysis and calculations for the new airport based on a more realistic estimation of construction costs and passenger numbers (Scenario 4). It also considered another scenario based on a conservative estimate of the construction costs, taking into account more cost overruns, which are common in such large-scale infrastructural projects (Scenario 5). The report furthermore compared the building of a new airport with scenarios optimizing the existing Nantes Atlantique airport: one with improved access via a better railroad connection (Scenario 6), and one that, in addition, included a new runway that would accommodate larger airplanes and thereby be able to process more passengers (Scenario 7); the new runway would be built perpendicular to the existing runway to reduce noise pollution over the city of Nantes. All these scenarios are summarized in Table 3.1. The alternative calculations showed that optimizing the existing airport seemed to be the more sensible option from an economic perspective if we were to accept these alternative SCBAs. When, in 2018, the French government decided to abandon the plans for the airport, the French prime minister, Edouard Philippe, cited “bitter opposition between two sides of the population that are nearly equal in size.”8 The Grand Ouest airport controversy is probably one of the longest-

6 “Notre-Dame-des-Landes.”
7 Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport, 7.
8 BBC, “France Scraps Controversial Airport Plan.”


Table 3.1 The alternative SCBAs for the building of the new Grand Ouest airport (a.k.a. Notre-Dame-des-Landes airport) as compared with the existing Nantes Atlantique airport. Costs and benefits are in millions of euros and according to the price levels of 2006.

Scenarios compared (columns):
Scenario 2 – Grand Ouest: existing SCBA, 2006
Scenario 4 – Grand Ouest: realistic costs and passenger numbers, realistic values of time, etc.
Scenario 5 – Grand Ouest: conservative estimate of construction costs
Scenario 6 – Optimization of Nantes Atlantique: capacity extension, local radar system, fast taxiways
Scenario 7 – Optimization of Nantes Atlantique: capacity extension, local radar system, and new runway in 2023

Cost–benefit categories (rows): travel time; road safety; emissions road; emissions air; noise; exploitation of airport; interactions with other modes (construction costs); water management; value of nature; loss of agricultural land; construction of tramway/renovation of train track; agro-environmental plan annual cost; external safety; cost of adjusting aircraft fleet; effects on public authorities; effects on urbanization through property market; net benefit.

Note: PM = pro memorie; cost or benefit cannot be calculated easily (or cannot be calculated at all).
Source: Based on Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport, with the kind permission of the authors.


lasting and largest planning controversies of modern times, which – ironically enough – did not even end after the plan was dropped by the national government, because this went against the “spirit” of decentralization. Local authorities were very much in favor of the airport, “especially for its economic development and businesses,” and the government’s decision ignored a local referendum in favor of the airport.9

3.2 What Is SCBA?

A Cost–Benefit Analysis (CBA) is an economic analysis that assesses the costs and benefits of a new project in order to determine whether the project is worth carrying out. It compares two or more options, identifying and quantifying the costs and benefits associated with each one and comparing these categories in order to show which options are worth pursuing and which is the best one. Essentially, a CBA measures benefits in terms of “increase in human well-being” and costs in terms of “reduction of human well-being” in order to establish whether a project’s benefits outweigh its costs.10 CBA is a commonly used instrument for decision-making in large-scale projects, particularly for environmental policy and infrastructural planning.11 It is sometimes also applied when the benefits and costs (as well as the risks) of new technologies such as nanotechnology, geoengineering, and synthetic biology are assessed in attempts to determine their overall acceptability.12 CBAs are currently widely used for decision-making in large engineering projects, including those in energy, transportation, and environmental policies that require substantial investments; various guidelines are provided by national governments, as well as by international organizations such as the Organization for Economic Cooperation and Development (OECD). Such guidelines often specify several steps, starting generally with identifying the policy or project(s) that will be evaluated, establishing possible alternatives, identifying groups that could gain (i.e., benefit) and lose (i.e., have costs), and setting the parameters for which and whose costs and benefits are to be included in the analysis and for how long. The next crucial step is to

9 Willsher, “France Abandons Plan for €580m Airport and Orders Squatters Off Site.”
10 OECD, Cost–Benefit Analysis and the Environment: Recent Developments, 16.
11 Priemus, Flyvbjerg, and van Wee, Decision-Making on Mega-projects.
12 Fischhoff, “The Realities of Risk–Cost–Benefit Analysis.”


quantify those costs and benefits – often in monetary terms – in order to facilitate a total comparison between all the aggregated costs and benefits, thus helping decision-makers to arrive at conclusions about the desirability of the project. Since a CBA is often prepared for specific time periods, one also needs to determine how to evaluate future costs and benefits in relation to the current monetary values.13 While CBAs can be carried out from many different perspectives (for instance, in the private interests of a company introducing a new measure, to see whether it will pay off ), the CBAs performed for engineering projects in the public sector are often SCBAs, even if they do not explicitly emphasize the “social” element. In such analyses, society is often simply treated as a “sum of individuals” for whom the social costs and social benefits of a project will be aggregated.14 In other words, a SCBA is essentially a CBA that “provides an overview of [the] current and future pros and cons of a particular investment or policy project for society as a whole as objectively as possible.”15 It focuses on both market goods (with available prices) and nonmarket goods (including air or noise pollution, environmental degradation, etc.). Conflicts often arise from the uncertainties and difficulties of assigning monetary values to nonmarket goods (such as environmental pollution or climate change) and estimating how they will change in the future. The aim of a SCBA is to establish whether a certain proposed policy is worth the investment. Alternatively, one might say that a SCBA has the power to justify a policy decision, as its task is to help policy-makers decide whether the benefits produced by the proposed investments justify the spending of public funds. The basic goal is that investment should increase general societal well-being.
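The aggregation and discounting steps sketched in this section can be made concrete in a few lines of code. The following is a minimal sketch of my own, not taken from any of the guidelines cited here; the project figures, the time horizon, and the 4 percent discount rate are purely hypothetical and serve only to show how yearly costs and benefits are discounted to present values and summed into a single net figure.

```python
# Minimal sketch of the aggregation step in a (social) cost-benefit analysis.
# All figures are hypothetical and expressed in millions of euros.

def present_value(amount: float, year: int, discount_rate: float) -> float:
    """Discount an amount occurring `year` years from now to its present value."""
    return amount / (1 + discount_rate) ** year

def net_present_benefit(costs: dict, benefits: dict, discount_rate: float) -> float:
    """Sum of discounted benefits minus sum of discounted costs."""
    pv_benefits = sum(present_value(b, y, discount_rate) for y, b in benefits.items())
    pv_costs = sum(present_value(c, y, discount_rate) for y, c in costs.items())
    return pv_benefits - pv_costs

# Hypothetical project: construction costs up front, travel-time benefits later.
costs = {0: 300.0, 1: 200.0}                      # year -> cost
benefits = {year: 40.0 for year in range(2, 32)}  # year -> benefit
print(f"Net present benefit: {net_present_benefit(costs, benefits, 0.04):.1f}")
```

Everything that is ethically contentious in this chapter – which consequences are counted, how nonmarket goods are priced, whose costs and benefits are aggregated, and which discount rate is chosen – enters such a calculation as an input, which is one reason the single number it produces should not be mistaken for a neutral verdict.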

3.3 The Philosophical Roots of CBA: Utilitarianism

CBA has its roots in a school of thinking in ethics known as consequentialism (briefly mentioned in Chapter 2), later advanced as utilitarianism, by the

13 This list is loosely based on guidelines provided by the government of New Zealand and the OECD; see New Zealand Government, “Guide to Social Cost Benefit Analysis”; OECD, Cost–Benefit Analysis and the Environment: Recent Developments.
14 OECD, Cost–Benefit Analysis and the Environment: Recent Developments, 16. This is in line with other work in the literature that discusses SCBAs without explicitly mentioning the social aspect; see for instance Van Wee, Transport and Ethics, 17.
15 Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport, 7.


English philosopher Jeremy Bentham and the philosopher and political economist John Stuart Mill.16 Consequentialism rests on the fundamental assumption that moral rightness depends on whether positive consequences are being produced. Utilitarianism is a type of consequentialism which argues that different morally relevant values can – in principle – be translated into one single (higher-order) value, the value of happiness,17 alternatively referred to as utility or well-being. The utilitarian doctrine is based not only on creating but also on maximizing this value of utility. Utilitarianism furthermore rests on the intuitively appealing assumptions of equality and impartiality that the founder of the approach, Bentham, captured in the famous phrase “every individual in the country tells for one; no individual for more than one.”18 In the utilitarian calculus, therefore, the action that maximizes utility over all individuals (regardless of who they are) is morally right. Paradoxically, this fundamental assumption of equality can give rise to inequality and inequity in a SCBA, as will be discussed in the next section. It is this simplicity and intuitive moral appeal in utilitarian reasoning that has given rise to the immense popularity of this school of thought, not only in the legal and political arenas (for assessing the rightness of policy) but also in engineering assessments and evaluations. Yet this same simplicity – according to the opponents of consequentialism – is a potential source of simplistic conclusions. Utilitarianism has been abundantly criticized in the literature, not least by philosophers. In fact, debates about consequentialism and utilitarianism have engendered “a great deal of hostility among philosophers”19 – also facetiously termed a “shouting match”20 – about whether the rightness of an action can be measured only in terms of its consequences, and whether usefulness – as advocated by utilitarianism and also as captured in the phrase “the end justifies the means” – trumps all other ethical considerations. Here it is not my intention to join this debate, not because I disagree with the critique of utilitarianism, but because I am not prepared to dismiss 16 17

16 Bentham, An Introduction to the Principles of Morals and Legislation; Mill, “Utilitarianism.”
17 This is a specific type of utilitarianism, also referred to as hedonism; see for more information Sinnott-Armstrong, “Consequentialism.”
18 Quoted from Jamieson, Ethics and the Environment, 82. Bentham’s dictum has often been misquoted as “everybody to count for one and none for more than one.”
19 Ibid., 85.
20 Rudolph, “Consequences and Limits,” 64.


utilitarianism and the CBA method altogether because of this critique. Instead, I think we need to be aware of what a CBA can and cannot do to assess the impact of an engineering project, when it can be exercised, for what purpose, how it can be used, and when it can be amended as a method or used to complement other methods. CBA is already deeply embedded in the public policy of modern societies as a means to assess the impacts of major engineering projects and whether to accept the benefits of such projects while considering their costs.21 But the serious and quite widespread critique of this method should not be taken lightly. A part of this critique – especially in engineering practice and in the literature on engineering ethics – is aimed at what is regarded as the painfully simplistic application of consequentialist thinking to the assessment and evaluation of technological risk, as is for instance vividly shown in the Ford Pinto example briefly discussed in Chapter 1. This is the case of a two-door car designed and manufactured at unprecedented speed in the 1970s. Because of this fast pace of development, a technical error was overlooked during the design: The gas tank was situated behind the rear axle, meaning that a rear-end accident at speeds as low as 35 km per hour could rupture the gas tank, which, in turn, would cause a fire; fire in a two-door car often proves to be lethal for passengers in the back seats. Although Ford was made aware of the problem by the development engineers just before putting the Pinto onto the market, it chose not to rectify the error. Years later, when sued for the many casualties and serious injuries attributable to the technical failure, Ford defended this decision in court with a simplistic CBA: the repair of 12.5 million vehicles at the unit value of $11 per vehicle would amount to a total of $137 million in costs, while the benefits would be less than $50 million, based on the saving of 180 individuals from being burned to death with a unit cost of $200,000 and another 180 from serious burn injuries (unit cost $67,000), and some material benefits. Hence, the myopic conclusion of the CBA was that repairing the cars was not justified, even though so many lives could be saved and serious injuries prevented.
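The arithmetic behind Ford’s court defense can be reconstructed directly from the figures reported above. The following snippet is only an illustrative reconstruction of that reasoning – not an endorsement of it, and not Ford’s actual calculation document – using the unit values and counts given in the text; the variable names are mine.

```python
# Reconstructing the cost-benefit comparison reported in the Pinto case,
# using only the figures quoted in the text above (amounts in US dollars).

repair_cost = 12_500_000 * 11            # 12.5 million vehicles at $11 per vehicle
deaths_avoided_value = 180 * 200_000     # 180 burn deaths valued at $200,000 each
injuries_avoided_value = 180 * 67_000    # 180 serious burn injuries at $67,000 each

benefits = deaths_avoided_value + injuries_avoided_value  # "material benefits" left out

print(f"Cost of repairs: ${repair_cost:,}")   # $137,500,000
print(f"Valued benefits: ${benefits:,}")      # $48,060,000
print("Repair 'not justified'" if benefits < repair_cost else "Repair justified")
```

Laid out this baldly, the objection rehearsed in the engineering ethics literature is easy to see: the arithmetic itself is trivial, and the moral trouble lies entirely in what is allowed to enter it and at what price.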

21 Shapiro, The Evolution of Cost–Benefit Analysis in U.S. Regulatory Decision-Making.


Ford produced this CBA to defend its case only after it had been sued, and the defense proved to be utterly unsuccessful in court, but for understandable reasons the case reverberates throughout the literature on engineering ethics and places a question mark over the suitability of CBA as a method for making decisions about risk mitigation, not only in engineering design but also in broader assessments of risks and benefits. It is safe to assume that since the Pinto fiasco, the CBA method has matured considerably both in its methodology and in the acknowledgment of its limitations.22 As mentioned above, I find myself among the authors who are critical of CBA and its philosophical underpinnings, but who also acknowledge the value of this approach and its relevance to actual decision-making in engineering.23 CBA can have an important role to play if its limitations are sufficiently acknowledged and appropriately dealt with and if it is used at the right moment and for the right purposes (which is often as an aspect of larger assessments). In the critique of consequentialism and utilitarianism, two issues have received a lot of attention.24 First, any consequentialist approach that bases an assessment on future consequences is prone to difficulties in predicting what will happen in the future as a result of present choices. This can be particularly problematic if a moral rightness verdict is based on unknown consequences that are hard to predict. What happens if the predicted consequences do not materialize and fundamentally different consequences occur instead, tilting the consequentialist balance toward fewer benefits than costs? Predictions with respect to technological developments bring an inherent component of uncertainty because “the development of technology depends on social factors, such as decisions by individuals and social groups on whether and how to make use of technological options,” and new developments sometimes come as a surprise.25 In other words, some effects of technology cannot be known until they have been extensively developed and used.26 A related objection is that certain actions or decisions may be morally

22 To be sure, there are still many people who question the validity of CBA, for instance when sued in the context of a risk CBA; see for instance Fischhoff, “The Realities of Risk–Cost–Benefit Analysis.” In Section 3.4 I will return to this issue and discuss it in more detail.
23 Schmidtz, “A Place for Cost–Benefit Analysis”; Hansson, “Philosophical Problems in Cost–Benefit Analysis”; Shapiro, The Evolution of Cost–Benefit Analysis in U.S. Regulatory Decision-Making; Van Wee, Transport and Ethics; Van Wee, “How Suitable Is CBA for the Ex-ante Evaluation of Transport Projects and Policies?”
24 See, e.g., Rudolph, “Consequences and Limits”; Sinnott-Armstrong, “Consequentialism.”
25 Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 171.
26 Collingridge, The Social Control of Technology.


right even if their benefits do not outweigh their costs.27 Such a situation may occur, for instance, with respect to health, safety, or environmental impacts. Adhering to consequentialist approaches presumably limits the focus of moral argument to an assessment based only on consequences. The second fundamental criticism is specifically of utilitarianism, the argument being that utility cannot be objectively defined. This is echoed in objections made to the CBA when it comes to answering the question of how to value possible consequences. I will return to this issue extensively when reflecting on CBA, and specifically on the question of how to value costs and benefits, in Section 3.5.

3.4 SCBA: Why, When, and How?

Having briefly mentioned some of the general criticisms of consequentialism and utilitarianism in the preceding section, in this section I will discuss the detailed objections to CBA by reviewing the four key questions of (1) why we perform a CBA, (2) when we do that, (3) which costs and benefits we include, and (4) how we do that. Indeed, there is much overlap between these questions, some of which I will also discuss. In addressing the four questions, I will also focus on the shortcomings of the CBA method.

3.4.1 Why Do We Perform a CBA?

A CBA is often performed to help decision-makers to arrive at well-informed decisions regarding large infrastructural engineering projects (e.g., on whether new railroads should connect certain major cities in a country),28 regulatory interventions for risk mitigation,29 or comparisons between potential alternatives.30 One of the principal objections is that in choosing certain alternatives (or policy measures) for further consideration, we have already made a preselection from a larger set of choices on the basis of other considerations.31 This may sound like an unreasonable objection, since in

27 Kelman, “Cost–Benefit Analysis.”
28 Annema, “The Use of CBA in Decision-Making on Mega-projects.”
29 Shapiro, The Evolution of Cost–Benefit Analysis in U.S. Regulatory Decision-Making.
30 Van Wee, Transport and Ethics, 20.
31 Sven Ove Hansson calls this the “topic selection issue”; see Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 165.


everyday life when we wish to carefully consider several options, we first narrow them down to a more manageable number. If we want to buy a car, for example, we first do an exploratory search, bringing the choice down to a few possibilities, before carefully considering the costs and advantages of each. This preselection can, however, pose a serious impediment to technological choices. Let me explain this by returning to the case study of the Grand Ouest airport, where the original CBA was used to consider whether the new airport would generate enough social benefits to justify its construction. The optimization of the current airport was thus left out of the analysis on the two assumptions that (1) there was a need to accommodate a substantial growth in the number of passengers in the region and (2) the existing Nantes Atlantique airport could not handle this anticipated growth. The latter assumption turned out to be problematic – as CE Delft argued in its alternative report32 – and, if the expectations about the future growth in passenger numbers had been lowered, an improved and optimized Nantes Atlantique airport with certain modifications could very well have qualified: e.g., the existing runway could have been replaced by a larger one that could cater for larger airplanes. When an option is not included in the CBA from the beginning, this will indeed affect the rest of the analysis. On a related note, in the case of the Grand Ouest airport the focus was on whether a new airport was needed (at best, as compared with the question of whether the current airport could be improved and optimized). The CBA never considered other transportation improvements, for instance high-speed trains between Nantes and other major cities with airports nearby. Similarly, it never asked whether building a new airport and facilitating – and perhaps incentivizing – more air traffic is the right thing to do in view of the global discussions on climate change and with transportation, and specifically leisure air travel, constituting one of the major polluters in the form of greenhouse gases. The latter was perhaps one of the factors considered by the Macron administration when it decided to scrap the airport plans. It is important to acknowledge this, not necessarily as a shortcoming of the CBA method, but rather as a limitation of using a CBA as the sole tool in any decision-making; the outcomes of a CBA should therefore not be seen as the definitive answers to a policy question.

32 Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport.


This may also be described as the problem of determining the focus of the analysis, or the problem of “topic selection” as Hansson calls it: For understandable reasons involving resource limitations, not every single decision or option can be included in an analysis.33 This seems to reflect a larger problem in much engineering assessment. Many choices need to be made regarding the focus of the analysis, without which the assessment cannot be carried out at all. Yet what often gives rise to public controversy is that people wish to ask these bigger questions, while assessment methods focus, by definition, on much narrower matters or are more limited in the topics of the questions they ask – such as whether the benefits of a specific project outweigh the costs, or whether the risks introduced by a new facility are justified and manageable. Let me illustrate this with an example from another assessment method regarding the manageability of risk. In 2011 there was a controversy in the Netherlands about exploration of unconventional shale gas, located in shale layers that are difficult to access and to extract from. It is not commonly known that the Netherlands has Europe’s largest gas field and has been a gas-oriented country since the 1960s. This was why the assumption was easily made that drilling for unconventional shale gas would be welcomed in a country that was abundantly producing and consuming conventional gas. However, people raised concerns about seismic risks and the risk of water pollution, while the country pondered the bigger question of whether, in the light of climate change, drilling for even more fossil fuels in a country that already relies heavily on gas is justified. The Dutch government’s response was, however, to commission an engineering assessment report, reiterating that the risks were fairly low and manageable, while emphasizing the larger benefits of gaining access to new, unexplored natural resources. The report did not seem to have any effect; if anything, it rather added fuel to the controversy, because people felt unheard. Part of the problem was that there was a parallel discussion regarding the matter of whether gas was needed at all in an era of climate change. All of this served to exacerbate the controversy.34

33 Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 165.
34 This is a very brief summary of a long controversy with vociferous proponents and opponents. I do not intend to argue that this was the only reason why the controversy was exacerbated, but this issue did play a role in the debate, leading the government to place a moratorium on the exploration and exploitation of onshore shale gas. See for more detailed discussions Dignum et al., “Contested Technologies and Design for Values”; Pesch et al., “Energy Justice and Controversies.”


3.4.2 When Do We Perform a CBA?

In the Grand Ouest case study, different CBAs were performed at different stages, with different estimates of travel times and passenger numbers, along with a focus on different aspects of the proposed new airport as compared with the existing Nantes Atlantique airport. The alternative CBA carried out by CE Delft concluded that its study presented a “very strong case for a full analysis of the costs and benefits of all the options for improving air traffic.”35 Thus, basically, it argued for yet another, more detailed CBA. A CBA can thus be performed early in a process as an ex ante analysis to ascertain whether a project is worth pursuing, but it can also be used while the project is being pursued in order to determine whether a specific change (for instance for risk mitigation) is justified.36 More will be said about this issue in the next two subsections, on how to value costs and benefits.

3.4.3 Which Consequences Do We Include?

It goes without saying that the outcome of a CBA will partially depend on which consequences one includes in the analysis, and identifying those consequences is a major challenge. Precisely which consequences need to be included depends on the scope of the analysis, which consists of two dimensions: (1) the scope of the spatial or geographic benefits and burdens to include with respect to each option and (2) the temporal or time-related scope.37 To return to the case study described in this chapter, important objections were made to the Grand Ouest airport regarding what was termed the ecological value of the region in which the construction was proposed. Interestingly, loss of natural habitats was not included in the original CBA

35 Brinke and Faber, Review of the Social Cost–Benefit Analysis of Grand Ouest Airport, 10.
36 Odgaard, Kelly, and Laird, “Current Practice in Project Appraisal in Europe.”
37 It should be noted that Rudolph discusses scope and time as separate problems; see Rudolph, “Consequences and Limits.” My treatment of time as the “temporal dimension” of the scope serves only a semantic purpose and helps me to distinguish between the different dimensions of the scope.


(see Scenario 2 in Table 3.1). Therefore, these costs were an important addition in the alternative CBA (see Scenarios 4 and 5 in Table 3.1, which recalculated the costs and benefits). It should be noted, however, that the fairly small economic values assigned to the loss of natural habitats in the alternative analysis could not have tilted the balance of the CBA. What may have had a larger impact on the calculations, potentially affecting the outcome, are the geographic boundaries of the analysis. This is not just a practical matter. Hansson gives the example of a greenhouse gas reduction measure in one EU country; when this is considered from the national perspective, it is important to include the extra costs and hence the “expected economic disadvantages” in relation to other EU countries.38 However, when the spatial lens of the analysis is widened to include the EU, the added burdens may be more easily canceled out if the broader benefits of climate mitigation for the EU are taken into account. On a related note, how wide the temporal lens is – that is, how far into the future the analysis holds – can also potentially determine the outcome of the analysis; Hansson’s climate mitigation example perfectly illustrates this. I will return to the issue of temporal impact in the next subsection.
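Hansson’s point about the spatial lens can be illustrated with a deliberately simple calculation. The sketch below is my own and uses invented figures: a mitigation measure that appears as a net loss when only the implementing country’s costs and benefits are counted can turn into a net gain once the benefits accruing to the rest of the EU fall inside the accounting boundary.

```python
# Hypothetical illustration of how the geographic scope of a CBA can flip its outcome.
# All figures are invented and expressed in millions of euros.

measure_cost = 120.0        # borne entirely by the implementing country
benefit_national = 80.0     # benefits accruing within that country
benefit_rest_of_eu = 90.0   # benefits accruing to the other EU countries

net_national = benefit_national - measure_cost
net_eu_wide = (benefit_national + benefit_rest_of_eu) - measure_cost

print(f"Net benefit, national scope: {net_national:+.0f}")  # -40: looks unjustified
print(f"Net benefit, EU-wide scope:  {net_eu_wide:+.0f}")   # +50: looks justified
```

The same mechanism applies to the temporal lens: widening or narrowing the period over which consequences are counted can change the sign of the result without any change in the underlying facts.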

3.4.4 How Do We Calculate Costs and Benefits?

In each CBA, two sets of questions need to be answered before any calculations can be made: (1) how many of each unit do we include in the analysis, and (2) how do we assign values to each unit? The former question is, to an extent, a practical one (which does not necessarily make it less controversial), while the latter question is more theoretical and fundamental, and is perhaps the biggest source of contention in any CBA. Let me show this by returning to the example of the Grand Ouest airport. With regard to the two questions – how many passengers we include and what the associated travel time benefit is – the growth in passenger demand seemed to be overestimated in the original CBA in 2006, where it was anticipated that 9 million passengers would use the new airport as compared with the current 3 million who used the existing Nantes Atlantique airport. The biggest benefit in the calculation was supposed to be gained from the travel time saved by these passengers: the expected travel time benefit was

38 Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 168.


€911 million, which resulted in a total net benefit of €611 million (see Scenario 2 in Table 3.1). The estimate for the growth in passenger numbers was partially based on the low crude oil prices at that time, but it overlooked an important political development, which was that from 2012 onward, aviation was planned to be included in the EU Emission Trading System, which would apparently give rise to higher ticket prices and thus to a reduction in air travel. Regarding the second question – how to value the unit of travel time – CE Delft concluded in its report that the travel time value used was higher than that accepted in France at the time when the assessment was made. Recall that this was the biggest factor that supposedly justified building the new airport and that a change in travel time value alone could have changed the outcome of the total analysis (as illustrated in Scenarios 4 and 5 in Table 3.1). One might say that disagreements about travel time units are also a practical problem; assessors need to agree on, and perhaps policy-makers need to give more clear guidelines about, which travel time units may be included in such CBAs. It is also important to realize that reduction in travel time is often the crucial factor used to justify building new roads or railroads, or increasing the speed limits on highways. In all such projects, there will be casualties, serious injuries, and increased emissions as well as environmental damage, all of which need to be accounted for in a CBA. Assigning monetary values to nonmarket goods such as human life gives rise to serious conflicts. Wouldn’t the monetization of nonmarket goods devalue those goods?39 Sometimes people are appalled to hear that a CBA places monetary value on invaluable things such as human life or nature, because this can easily come across as meaning that at this price one may take someone else’s life or destroy nature.40 This would, of course, be a caricature of what monetary value in this context means. The correct way of reading this value is this. One has to determine whether an investment is worth making if it can help to save one human life. Those who are appalled by the very idea of assigning a monetary value to human life might argue that it is always worth making an investment that can save human lives. That is indeed true, but these investments cannot be considered in isolation and often reflect choices that need to 39

39 Kelman, “Cost–Benefit Analysis.”
40 Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 177.


be made against the backdrop of limited funds – for instance public funds – that can be spent only once, and the outcome must be evaluated in terms of the public goods and benefits created, often expressed in terms of well-being. This problem is, in more general terms, called the problem of valuation,41 and underlying it is another fundamental problem known in the philosophical literature as incommensurability. When two things are incommensurable, it means that “they cannot be expressed or measured on a common scale or in terms of a common . . . measure.”42 This, of course, does not mean that they are completely incomparable, but it does mean that they cannot be compared by being expressed in terms of one single unit. CBA naturally runs into this problem, because in a CBA “multi-dimensional decision problems are reduced to uni-dimensional ones.”43 The problem of incommensurability thus goes beyond the intuitive objection that we should not place a value on nonmarket goods. As Hansson correctly argues, even if we remove money from the formula, the problem of comparing things that we consider incomparable remains. This problem reverberates throughout the process of CBA. The next question – assuming that we can surmount the objection of placing monetary values on nonmarket goods – is how to assign value to such goods, which seems to be an essential step when a CBA is executed. A lot of contention about valuation problems centers on the question of how to value environmental benefits and burdens. How do we value nature, in monetary terms or otherwise? What is the value of one animal? How about an animal species, if there is a risk of the species becoming extinct? Interestingly, CBA has historically been introduced – and fervently defended by environmentalists – as a mechanism for “introducing accountability to decisions that affect whole communities” and for holding “decision-makers publicly accountable for external costs.”44 This attitude changed drastically when politicians started using CBAs as a threshold for justifying risk-reduction measures in public policy. For instance, in the US, President Reagan issued an executive order in 1981 to require governmental agencies such as the Environmental Protection Agency (EPA) to justify, for instance,

41 Hansson, “Philosophical Problems in Cost–Benefit Analysis.”
42 Van de Poel, “Can We Design for Well-Being?,” 301. See also Raz, The Morality of Freedom.
43 Hansson, “Philosophical Problems in Cost–Benefit Analysis,” 177.
44 Schmidtz, “A Place for Cost–Benefit Analysis,” 151.


regulatory public measures for the mitigation of risks to the environment derived from CBAs. In order to assign monetary values to non-market goods, economic methods have been proposed that are based on willingness to pay (WTP) and willingness to accept,45 defined as “the maximum amount an individual would be willing to pay . . . to secure the change or the minimum amount they would be willing to accept . . . to forgo it.”46 WTP has been criticized for several reasons. First, and in conjunction with the fundamental problem of monetization in CBA, one could argue that protecting people against environmental risk cannot be properly measured by WTP.47 Likewise, WTP cannot reflect whether or not endangered species, nature, and wildlife should be protected. In a functioning democracy, citizens’ informed judgments should take priority in decision-making, and not the proposed “aggregate private consumption choices” of WTP.48 Second, it has been argued that WTP creates inequity when we have to make moral choices about levels of acceptable risk. This is partly the result of how WTP has been applied. That is, WTP is not only a function of how great an individual value is – as the principle defends – but also of the “resources available for bidding on those values.”49 Basically, then, we are measuring a hypothetical WTP, which potentially leads to distributional unfairness, because “we [can] justify building a waste treatment plant in a poorer neighborhood when we judge that poorer people would not pay as much as richer people would to have the plant built elsewhere.”50 The problem of justice or equity is not only a result of the WTP approach, but more broadly the result of the fact that CBA is an aggregate or accumulative method that – in principle – pays attention to the sum of all costs and benefits and does not distinguish between the people or groups of people to whom those costs and benefits accrue. Distribution remains a major challenge to all aggregate methods and certainly to CBA. There are further questions as to how to value future costs and benefits. In a way, these are also distributional problems, but in a temporal sense instead of a spatial sense . In a CBA, any investment now requires that its benefits in

45 Willingness to accept is also known as WTA.
46 Hanemann, “Willingness to Pay and Willingness to Accept,” 635.
47 Sunstein, “Cost Benefit Analysis and the Environment,” 354.
48 Ackerman and Heinzerling, Priceless.
49 Schmidtz, “A Place for Cost–Benefit Analysis,” 163.
50 Ibid., 164.


the future must be at least as large as what one could earn by “simply banking the money.”51 Assuming that a given sum of money is likely to be worth less next year than it is today, economists argue that future benefits should be discounted against present costs, with an annual rate of interest. How to deal with future benefits as compared with current costs is a much-debated ethical issue, because many investments that are needed now (e.g., for reducing gas emissions) will probably not introduce benefits until a few decades later, while the costs must be mostly borne by the present (and perhaps immediately following) generations. The proponents of discounting argue that it is necessary to use the actual rate of return as the discount rate in order to “ensure consistent comparisons of resources spent in [a] different time period,” while the critics emphasize “intergenerational neutrality,” stating that belonging to a particular generation is not a solid moral basis for being treated differently from others.52 The first group, known as positivists, argue that the actual “market-determined discount rate” is the only defensible one; positivists reject any reasons other than purely economic ones – and thus also ethical reasons – for determining discount rates.53 The second group, the ethicists, make a basic assumption that a pure CBA with discounting may create “unethical choices”; a discount rate of 5 percent could easily assess the value of people living five decades from now to be only a small fraction of the value of those living today.54
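The ethicists’ worry can be checked with a single line of arithmetic. The snippet below simply evaluates the discount factor implied by the 5 percent rate and the fifty-year horizon mentioned above; those two numbers come from the text, and the rest is illustration.

```python
# Present value of one unit of benefit occurring 50 years from now, at a 5% discount rate.
discount_rate = 0.05
years = 50

discount_factor = 1 / (1 + discount_rate) ** years
print(f"{discount_factor:.3f}")  # roughly 0.087: less than a tenth of the same benefit today
```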

3.5 How to Deal with the Problems of CBA

Let me briefly recap the problem of valuation (and the problem of incommensurability) as well as the problem of distribution (in both space and time), before moving on to discuss several alternatives that have been proposed to deal with these ethical issues. The problem of valuation is that we cannot easily put a monetary value on certain nonmaterial goods such as human health and environmental risks. Monetization attracts a fundamental objection – which is that linking money to things such as human health is ethically questionable – and some practical objections, concerning how we can objectively assign a monetary value to human life or to the environment. The WTP approach that has been presented by economists also

51 Randall, Risk and Precaution, 47.
52 Posner and Weisbach, Climate Change Justice, 144.
53 Ibid., 149.
54 Ibid., 153.


runs into several difficulties, including the fact that people’s willingness to pay depends on their capacity to pay; this can easily result in environmental injustice, or in risky technologies ending up in less wealthy communities simply because they are less able (and thus also less willing) than others to pay to avoid such risks. Another fundamental issue underlying the problem of valuation is incommensurability. The problem of distribution has everything to do with the fact that CBA is, by definition, an aggregate method that – in utilitarian fashion – aims at maximizing well-being without paying attention to the matter of to whom those improvements in well-being (and the associated costs) will accrue. The WTP approach also creates some distributional inequities, as mentioned above. The temporal aspect of the problem of distribution is most visible in how we choose a discount rate to determine the future value of money as compared with its present value. It is furthermore important to realize that this critique is not only found in the philosophical literature, which might go unnoticed by practitioners. Quite the contrary. Several empirical studies based on interviews carried out with politicians and practitioners (including consultants and scientists) in charge of transportation policies in different countries in Europe, North America, and Pacific Asia show an increasing awareness of the above-mentioned problems and shortcomings of CBA.55 Dutch politicians, for instance, found “the aggregate outcome . . . of CBAs pretentious” and wished to use the method only in a “non-decisive manner,”56 while several practitioners found the CBA to be plainly “inadequate as a tool to appraise Mega Transport Projects,”57 which are a typical area of application. The guidelines of several relevant organizations, such as the OECD, include proposals to adjust the methods of CBA so that it can better deal with at least some of these problems. The proposals for dealing with the aforementioned problems of CBA can be divided into two separate categories: (1) amending or adjusting an existing CBA and (2) supplementing or replacing the method.

Nyborg, “Some Norwegian Politicians’ Use of Cost–Benefit Analysis”; Mouter, Annema, and Van Wee, “Attitudes towards the Role of Cost–Benefit Analysis in the DecisionMaking Process for Spatial-Infrastructure Projects”; Dimitriou, Ward, and Wright,

56

“Mega Transport Projects.” Jan Anne Annema, Niek Mouter, and Jafar Razaei, “Cost–Benefit Analysis (CBA), or Multicriteria Decision-Making (MCDM) or Both,” 788.

57

Dimitriou, Ward, and Wright, “Mega Transport Projects,” 23.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

74

Ethics and Engineering

3.5.1 Amending or Adjusting a CBA Some proposals for dealing with these problems concern methods for adjusting or amending a CBA, for instance by offering alternative ways to pass judgment on the outcome of a CBA or by adding an extra test to ensure a fairer distribution of costs and benefits. A typical decision rule for applying a CBA is to look at which alternative would maximize net benefits; this is a strictly utilitarian proposal, but we could also think of alternative decision rules. How we judge a comparison of costs and benefits is particularly relevant in a Risk–Benefit Analysis, which is essentially a CBA that helps us compare risks and benefits but not necessarily to maximize benefits in relation to costs and risks. A Risk–Benefit Analysis can, for instance, help us to assess “which risks merit the greatest attention” or whether “a technology’s expected benefits [are] acceptable, given its risks and other expected costs.”58 For example, in public health policy a test of “gross disproportionality” may be applied; that is, when risk-reduction measures have been proposed that involve costs, we do not expect the costs to be exactly estimated and directly justified by the benefits, and we argue that the measure is unreasonable only when the marginal costs of improvements are “grossly disproportional.”59 This is applied with respect to the ALARA principle in risk management, which argues that risks must be “As Low As Reasonably Achievable”; this slightly shifts the burden of proof so that riskreduction costs are essentially acceptable unless they are grossly disproportionate to the benefits they generate. Of course, this does not entirely remove the problem, because exactly what constitutes “reasonable” will remain a subject of the ethical debate and also of the political and regulatory debate when risk-reduction measures are decided upon.60 Such approaches can help us to bypass some of the problems of valuation, even though some of the fundamental problems, such as incommensurability, essentially remain. Yet another proposal for amending a CBA is to add a distribution test, which is helpful for avoiding – or at least reducing – the distribution problem. Schmidtz, for instance, distinguishes between two situations: “when a

58 59 60

Fischhoff, “The Realities of Risk–Cost–Benefit Analysis.” Ale, “Tolerable or Acceptable,” 236. See, e.g., Godard, “Justification, Limitation, and ALARA as Precursors of the Precautionary Principles.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

Balancing Costs, Risks, Benefits, and Environmental Impacts

75

proposal fails the test of CBA, when costs exceed benefits, the implication is more decisive, namely that further discussion is not warranted,” whereas when a proposal shows that “winners are gaining more than losers are losing, [it] counts for something, but it is not decisive.”61 Therefore, the proposal does not dismiss the CBA, and nor does its aggregative approach, but it amends it by adding the test of fairness. In a situation in which two alternatives both present net benefits, but the one with the lower net benefit is preferable from a distributional point of view, the distribution test will recommend the latter alternative. So it does not follow that maximization is the basic premise of utilitarianism, but the consequentialist premise of comparing burdens and benefits still remains intact. We can also amend a CBA by adding a deontological (individual-based) test to the effect that a CBA that produces net benefit is acceptable only if it does not violate the rights of any individual person. Note that this does not reject the consequentialist premise of CBA. The fundamental radiation protection principle of the International Commission on Radiological Protection (ICRP) essentially follows such reasoning. Radiation protection has three fundamental principles, namely the justification principle – when aa radiation exposure situation is altered, the change must be justified by a positive net benefit – the optimization principle – all exposure must be kept as low as is reasonably achievable – and the dose limit principle, which states that individual doses must not exceed the limits proposed by the ICRP.62 The justification and optimization principles clearly have a consequentialist approach; the optimization principle is often understood and applied in terms of a CBA or a Risk_Benefit Analysis,63 which may – among other things – impose unacceptable radiation on a single person or a group of individuals.64 Indeed, the three fundamental principles need to be considered in relation to each other; i.e., when the optimization practice is acceptable overall, it is still necessary to consider the dose limit principle, which stems from deontological thinking on ethics and from respect for the rights of individuals.

61 62

Schmidtz, “A Place for Cost–Benefit Analysis,” 153. ICRP, 1990 Recommendations of the International Commission on Radiological Protection.

63

ICRP, Cost–Benefit Analysis in the Optimization of Radiation Protection.

64

ICRP, Ethical Foundations of the System of Radiological Protection.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

76

Ethics and Engineering

Furthermore, the WTP approach to monetization can give rise to serious distributional concerns, in that – in its starkest form – it aggregates individuals’ willingness to pay for benefits or to accept compensation for losses, “regardless of the circumstances of the beneficiaries or losers.”65 This problem is also referred to as the problem of marginal utility; one proposal for dealing with it assigns greater weight to benefits and costs accruing to disadvantaged or low-income groups, because, as the OCED argues, “marginal utilities of income will vary, being higher for the low income group.”66 As regards the temporal distributional problems of CBA, specifically the problem of discounting, these are also acknowledged in the practice of and guidelines for preparing CBAs. A recent OECD publication, for instance, discusses the “social discount rate” for use in CBAs assessing projects that affect the environment.67 When the interests of people living in the distant future are assessed, the a discount rate that declines over the course of time (and reaches zero in a fairly short period of time) is perhaps the most defensible option. Without such a declining discount rate, decision-making about long-term risks (such as climate change) could result in counterintuitive conclusions; e.g., no serious investment in mitigation would be easily accepted, although mitigation is commonly agreed to be the “gold standard” way of limiting the growth of irreversible climate change.

3.5.2 Supplementing or Replacing CBA: Multi-criteria Analysis Multi-criteria analysis (MCA) has been presented in the literature as an alternative appraisal method that can deal with some of the ethical problems of CBA. MCA can supplement CBA as an extra decision tool or replace it when a CBA is unsuitable for decision-making on the situation in hand. MCA, sometimes referred to as multi-criteria decision-making, refers to “a class of decision-making methods” on the basis of which “a number of alternatives are evaluated with respect to a number of criteria.”68 This method thus uses several criteria (as opposed to the single criterion of the CBA) for evaluation and is particularly suitable for complex situations which cannot be expressed 65 67 68

66 OECD, Cost–Benefit Analysis and the Environment: Recent Developments, 17. Ibid. OECD, Cost–Benefit Analysis and the Environment: Further Development and Policy Use.

Annema, Mouter, and Razaei, “Cost–Benefit Analysis (CBA), or Multi-Criteria DecisionMaking (MCDM) or Both,” 789.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

Balancing Costs, Risks, Benefits, and Environmental Impacts

77

in terms of a single unit. The choice is then made by assigning weight to each criterion, leading to an “aggregate” outcome that sufficiently acknowledges the individual criteria used for lower-level assessments. Different infrastructural proposals can then be compared according to a set of criteria. To returning to the opening case study in this chapter, tdifferent configurations of the proposed new airport would be compared with different optimization scenarios for the existing airport, based on criteria such as environmental degradation, air quality, safety, and travel time, each of which would be assigned a relative weight. The overall comparison would then consist – in principle – of distinct criteria that would provide for a more nuanced analysis of the options at stake. MCA has been praised for removing the need for monetizing when the issues that are being compared are very hard – if not impossible – to monetize, such as landscape deterioration.69 Instead of monetizing, MCA assesses variables by using weight factors. In addition, this method has the broader advantage that it can help decision-makers to circumvent the problem of incommensurability in general (of which monetization is only one dimension). The need to express incommensurable units as one single unit is removed and, therefore, different criteria can be introduced to compare different incommensurable entities, such as environmental deterioration versus the saving of travel time. With MCA, several criteria can be taken into account simultaneously in a complex situation. The method is designed to assist decision-makers to integrate the different options into either a prospective or a retrospective framework, while reflecting the opinions of the actors concerned. The participation of the decision-makers in the process is a central part of the approach. These are desirable features in a method that is used to assess considerations of equity. Indeed, using MCA gives rise to another problem, that is, “the subjectivity of weights to be used [and] possibilities for manipulations, and lack of robustness.”70 In other words, in the example above, depending on the relative weights that the decision-makers put on the

69

Barfod, Salling, and Leleur, “Composite Decision Support by Combining Cost–Benefit and Multi-criteria Decision Analysis”; Annema, Mouter, and Razaei, “Cost–Benefit Analysis (CBA), or Multi-criteria Decision-Making (MCDM) or Both.”

70

Van Wee, “How Suitable Is CBA for the Ex-ante Evaluation of Transport Projects and Policies?,” 5.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

78

Ethics and Engineering

criteria, different scenarios could “score” as the best scenario – either the optimization of the existing airport or the building of the new airport. In brief, MCA can help us to circumvent some of the problems associated with CBA, such as the issue of incommensurability. An MCA provides us with a more fine-grained analysis to consider projects not just in terms of one single criterion, but it is not necessarily a panacea for the assessment of engineering projects. Hybrid combinations of CBA and MCA have also been proposed so that quantitative variables that can be valued in a “generally acceptable way” receive monetary values, while other difficult-to-monetize variables are weighted as prescribed by MCA.71

3.6 Summary In this chapter, I have reviewed CBA as a method for assessing the costs, risks, and benefits of engineering projects. CBA is rooted in consequentialist thinking in ethics, which argues that moral rightness depends on whether positive consequences are being produced. A specific branch of consequentialism is utilitarianism, which aims not only to create but also to maximize positive consequences. Classically, a CBA aims to judge different alternatives on the basis of which alternative can maximize positive consequences, by first tallying all the positive and negative consequences and then assigning a monetary value to each consequence. CBA and its underlying ethical theory have been much criticized in the literature: Can we assess moral rightness only in terms of consequences? And even if we assume that the latter is possible, can we objectively assign monetary values to those consequences? While these are essentially valid objections, this chapter is not intended to criticize or dismiss CBA. Instead, following the reasoning that formal “analyses can be valuable to decision-making if their limits are understood,”72 the chapter aims to show what a CBA can and cannot do and how it canbe made more suitable for assessing the risks, costs, and benefits of engineering projects. As such, the chapter provides several ways of circumventing some of the ethical objections to a CBA by amending, adjusting, or supplementing it, and – when none of these can help – by rejecting and replacing the CBA as a method.

71

Ibid.

72

Fischhoff, “The Realities of Risk–Cost–Benefit Analysis.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:43, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.005

Part II

Ethics and Engineering Design

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.006

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.006

4

Values in Design and Responsible Innovation

4.1 The “Naked Scanner” as the Holy Grail of Airport Security In a shockingly honest confession – “Dear America, I saw you naked” – a former security officer of the US Transportation Security Administration admitted that the naked body beneath a passenger’s clothes was fully visible when they passed through the airport scanner.1 This confirmed a fear that many passengers had, namely that the security officers indeed saw their naked bodies and that they looked at them more than was necessary for security purposes. “Many of the images we gawked at were of overweight people, their every fold and dimple on full awful display,” this former Transportation Security Administration officer wrote.2 “Piercings of every kind were visible. Women who’d had mastectomies were easy to discern – their chests showed up on our screens.”3 In the aftermath of the 9/11 attacks, security measures at airports were considerably tightened up. One question that received increasing attention was how to prevent nonmetallic hazardous items from being smuggled aboard planes. Several attempts had been made to carry nonmetallic explosive and flammable materials onto flights. The conventional security procedures were based on X-ray portals and could therefore detect only metals (e.g., knives and firearms), and the additional touch search – or “pat-down” – by security officers proved to be ineffective; it was further considered timeconsuming and invasive for both the officers and the passengers. The whole-body scanner emerged as a solution to both problems. It could help to identify nonmetallic objects, and it relieved the passengers and security officers of the uncomfortable experience of a pat-down.

1

Harrington, “Dear America, I Saw You Naked.”

2

Ibid.

3

Ibid.

81

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

82

Ethics and Engineering

There are two types of whole-body scanners, and they differ in both the scanning and the imaging techniques they use (the two are interconnected). As regards the scanning technique, a distinction is made between backscatter scanners and millimeter wave scanners, both of which are capable of producing highly detailed images of a naked body. A backscatter scanner uses a low-intensity X-ray beam to create a three-dimensional holographic silhouette image of the body and any foreign objects on it, based on the backscattered (reflected) X-rays. The image produced is a highly detailed three-dimensional holographic silhouette of the body that depicts any foreign object on it. A millimeter wave scanner uses electromagnetic nonionizing frequencies to construct an image that reveals the exact shapes and curves of a human body, while also showing any objects on it. Because of the level of detail of the human body underneath clothes that whole-body scanners can reveal, they became known in the popular media as the “naked scanners” or, even worse, as the “virtual strip search.”4 At first, whole-body scanners were introduced only to supplement the existing X-ray scanners, and were used when there were reasons to search a passenger more thoroughly. Many trials were started at airports throughout the world to assess the the security improvements they brought. Soon after the first trials, it became clear that the whole-body scanners were giving rise to serious privacy concerns, which caused some controversy. The controversy was exacerbated when it was decided that they should fully replace the existing X-ray scanners, after a “near-miss” incident in 2009 when a terrorist managed to carry eighty grams of a highly explosive material, sewn into his underwear, onto a Northwestern Airlines flight in the US.5 His attempt to detonate the material on the plane failed, but the fact that he had managed to take it through the security portal (consisting of an X-ray scanner and a touch search) significantly increased the need for other airport screening methods to identify such explosive materials. Whole-body scanners were acclaimed as the holy grail of airport security. The full replacement of X-ray portals with whole-body scanners also gave rise to serious controversies – for instance, regarding whether they would

4

Cavoukian, Whole Body Imaging in Airport Scanners.

5

Harrington, “Dear America, I Saw You Naked.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

83

lead to severe violations of civil rights.6 The large-scale use of whole-body scanners also added to the safety concerns.

4.1.1 Designing Scanners for Security and Privacy Public security has become a matter of paramount importance for many governments around the world. Privacy has also been considered important, certainly in the second half of the twentieth century, but in the midst of the post-9/11 antiterrorism crisis, it seemed to suffer a major setback,7 or at least was implicitly considered ethically less important and thus easily trumped in favor of security improvements. This was evidenced by a range of emerging legislation that increased the possibility of public surveillance, as well as by a number of technologies that were proposed to ensure security, for instance, CCTV cameras in the public sphere and the use of biometrics in surveillance and identification systems.8 The whole-body scanner is an example of the technologies that emerged as a solution to the increasing security concerns. During the trials, however, the problem of the visibility of the human body underneath clothing and the associated privacy concerns became clear fairly fast. In some places, these concerns led to the trials and further use of such scanners being abandoned. In India, for instance, the authorities decided to ban whole-body scanners after a trial in New Delhi airport, because “the images the machines produced were too revealing and would offend passengers, as well as embarrass the security officials.”9 Yet this was notthe case everywhere, as the benefits of the scanners seemed to be too great for them to be rejected altogether. In order to increase the privacy of the scanning process, several improvements were proposed – for instance, separating the officer in contact with the passenger being scanned from the officer operating the scanner, who reviews the image in a back room and does not see the actual passenger.10 Other, more far-reaching improvements involved algorithmic changes to add 6

See here, for instance, some concerns expressed by the American Civil Liberties Union: www.aclu.org/aclu-backgrounder-body-scanners-and-virtual-strip-searches?redirect= technology-and-liberty/aclu-backgrounder-body-scanners-and-virtual-strip-searches (con-

7 9 10

sulted July 24, 2019). Cavoukian, Security Technologies Enabling Privacy (STEPs), 1.

8

Ibid., 4.

Irvine, “Airport Officials Get X-ray Vision.” Cavoukian, Whole Body Imaging in Airport Scanners, 3.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

84

Ethics and Engineering

“privacy filters” to reduce the identifiability of the image – for instance, by blurring the face and the private parts of the human body – or to eliminate features that could be considered too personal.11 Another privacy filter reduced the quality of the image produced of the body, following the argument that we do not need to use the imaging technology to its fullest potential, but only to see “objects hidden underneath the clothing of airline passengers.”12 This could be done by, for instance, reducing the quality of the three-dimensional holographic image that a millimeter wave scanner produces, while increasing the contrast between the body and any foreign object on it. Another adjustment made the screen portray only a generic or cartoon-like body, on which objects could still be indicated.13 Another important privacy concern associated with whole-body scanners is the question of whether the generated data is stored, and if so, how. As with other information technology applications, it is important to avoid “unnecessary or unlawful collection, use and disclosure of personal data” or creating methods and tools for individuals to access and control the data generated.14 Whole-body scanners were introduced primarily from the perspective of security, but they compromised privacy; privacy improvement was then added to amend this system. Privacy deserves, however, to be more important than a mere afterthought. Privacy needs to be built into the system, or, as Ann Cavoukian – a former Information and Privacy Commissioner in Canada – says, we should “design for privacy.”15 This approach promotes thinking about privacy from the outset, instead of trying to alleviate privacy concerns after systems have been put into use.

4.1.2 Designing Scanners also for Safety When it comes to designing whole-body scanners, there have been many contentious discussions about security versus privacy. There is also another important aspect that deserves attention: that is, what are the health and safety issues associated with the different scanning techniques? Should we consider safety as a third concern, in addition to security and privacy? As 11

12

Keller et al., “Privacy Algorithm for Airport Passenger Screening Portal”; Cavoukian, Whole Body Imaging in Airport Scanners. Cavoukian, Security Technologies Enabling Privacy (STEPs), 7.

13

Cavoukian, Whole Body Imaging in Airport Scanners, 6.

15

Cavoukian, Security Technologies Enabling Privacy (STEPs).

14

Ibid., 1.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

85

already mentioned, the two methods commonly used are millimeter wave and backscatter X-ray scanning. They differ essentially in that the former uses radio waves to detect energy reflected from the body to construct a three-dimensional image, while the backscatter X-ray uses low-intensity X-rays and detects the backscattered radiation. X-rays are a form of ionizing radiation, meaning that they pose a potential health hazard (e.g., cancer risks) which depends on the type, intensity, and length of exposure. Millimeter radio waves are essentially nonionizing and therefore presumably safer than X-rays. It should be noted, though, that the doses of radiation associated with backscatter scanners are very low – “representing at most an extremely small cancer risk” – and are thus safe for most individuals who travel by air only a few times a year; the risks will, of course, be higher for frequent fliers and flight personnel.16 At least equally important is the broader public health perspective: that is, given that a very large number of individuals are exposed to very many small doses per year (up to a billion scans a year in the US alone), there are concerns about “the long-term consequences of an extremely large number of people all being exposed to a likely extremely small radiation-induced cancer risk.”17 From an ethics-of-risk perspective (as discussed in Chapter 2), these concerns are particularly relevant when the ethical acceptability of this scanning method is assessed, because there is alternatives that has the same benefits but no such risks (i.e., the millimeter wave scanner). The ethical legitimacy of the risk imposed by backscatter scanners could therefore be questioned. In this chapter, I will discuss how we can design for socially and ethically important features, such as security, privacy, safety, and more. In Section 4.2, I will discuss why ethics matter in engineering design. Section 4.3 focuses on persuasive technologies, or designing to promote a certain ethically desirable behavior. Section 4.4 moves the discussion to values and how they matter in the design, while Section 4.5 discusses how we can systematically design technology for values, specifically elaborating the two approaches of value-sensitive design (VSD) and Design for Values. An important question in the process of designing for values is how to address value conflicts and value trade-offs; this is discussed in Section 4,6. The concluding section, Section 4.7, discusses a new and influential policy approach, “responsible 16

Brenner, “Are X-ray Backscatter Scanners Safe for Airport Passenger Screening?,” 6.

17

Ibid.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

86

Ethics and Engineering

innovation,” that front-loads ethical thinking in innovation in order to create not only technological artifacts but also socio-technical systems more responsibly. The role that values play in discussions of responsible innovation will also be further explored in this section.

4.2 Why Does Ethics Matter in Engineering Design? The claim that technology is not a “neutral” practice based only on indisputable facts and figures has reverberated throughout this book. Engineering and technological design are essentially normatively or ethically laden.18 An important example that is often mentioned in the literature concerns the “racist overpasses” described in a seminal paper by Langdon Winner titled “Do Artifacts Have Politics?” in 1980.19 Winner discussed the extraordinarily low-hanging overpasses over the parkways in Long Island in the US state of New York. There are about 200 of these overpasses, and they each have a clearance height of only nine feet (2.75 m) at the curb. Even when one notices this “structural peculiarity,” the reason for it is not immediately clear. In fact, the overpasses were “deliberately designed to achieve a particular social effect.”20 Robert Moses – a famous planner and builder who was responsible for many public works in New York between 1920 and the 1970s – apparently intended the overpasses to prevent public buses (which were twelve feet, or 3.65 m high) from using the routes. This was discovered years later by Moses’ biographer, who pointed out that Moses had “social-class bias and racial prejudice.”21 White “upper-class” people, who owned automobiles could use the routes to access facilities such as Jones Beach, one of Moses’ widely accredited public parks, while poor “lower-class” people – often racial minorities – who depended on public transit could not To reinforce this social and racial effect, Moses also vetoed “a proposed extension of the Long Island Railroad to Jones Beach”; the automobile therefore remained the only means to this park.22 This example illustrates how inequalities can be deliberately designed into construction, and how they can have an impact for many years afterward. While the example might seem a little extreme – reminiscent of the 18 19 22

Radder, “Why Technologies Are Inherently Normative.” 20 Winner, “Do Artifacts Have Politics?” Ibid., 123.

21

Ibid., 124.

See for Moses’ biography Caro, The Power Broker. The references to this work here are based on Winner’s article “Do Artifacts Have Politics?”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

87

discussions of racism in the first half of the twentieth century – the types of questions it raises are still very relevant to engineering design, which always needs to take into account ethical issues such as security, privacy, and safety, as in the case of whole-body scanners. This process will then give rise to questions about how far of each of these issues should be taken into account, and whether the prioritization of one could compromise another. Should security improvements, for instance, be made at the expense of basic liberties such as privacy? Later in this chapter I will argue that the lens of values can help us to address such questions. However, before moving on to values and how we can systematically design technologies for them, I will first focus on an approach that aims to incentivize ethically acceptable behavior in design.

4.3 Designed for the Sake of Ethics: Persuasive Technology In the previous section, I argued that the design of technical artifacts can have an ethically problematic effect. This, indeed, builds on the argument against the neutrality thesis in engineering design.23 At a fundamental level, there are two strands of discussion within this thesis: first, about “whether technology itself or its influence on human life may be evaluated as morally good or bad,” and second, about whether we may “ascribe some form of moral agency to technology.”24 A fuller discussion of agency will be found in Chapter 5, which concerns the ethics of autonomous technologies. The discussion in this chapter is focused on how the design and use of technologies can involve ethical dimensions or, alternatively, how technologies can mediate human behavior, perceptions, and decisions.25 Acknowledging this “moral” characteristic, we can design technologies in such a way that they can incentivize the “right choice.” Such incentivization has been called “nudging.” Stemming from social psychology, the concept of nudging initially had little to do with technological design; it was introduced by Richard Thaler and Cass Sunstein to mean designing an environment in such a way as to incentivize, or nudge, the user toward a certain behavior, choice, or attitude.26 An example often used to illustrate the purpose and 23

Flanagan, Howe, and Nissenbaum, “Embodying Values in Technology.”

24

Kroes and Verbeek, “Introduction,” 1.

26

Thaler and Sunstein, Nudge.

25

Verbeek, Moralizing Technology, 2.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

88

Ethics and Engineering

usefulness of nudging concerns the displaying of food choices in school cafeterias. Apparently, by merely rearranging the food, the consumption of certain items can be increased by as much as 25 percent.27 Simply by placing carrot sticks, salad bowls, and fresh fruit at eye level and early in the cafeteria line (and before the deep-fried food), we can create an awareness of healthy food and nudge kids to eat more of it. The same rationale of nudging also applies to the intentional designing of technology so that it will incentivize or persuade the user to behave in a way that is considered good.” Persuasive technologies are intentionally designed to incentivize certain behaviors or attitudes that are considered desirable.28 They aim to give “the user feedback on his actions (or omissions)” and try to “suggest” to the user “a desired pattern of behavior.”29 Among the good examples of such persuasive technology are gas pedals that “increase their resistance to the foot of the driver when the car is going above a certain speed so as to encourage a more economical use of energy.”30 A similar example that aims to promote more sustainable driving is a new design of dashboard in several hybrid cars that shows the fuel consumption in real time. The Honda Insight, for instance, “has a little display field, on which ‘leaves icons’ virtually grow, if the user is driving in an environmentally friendly manner”; if the driver accelerates and brakes a lot and thereby consumes fuel less responsibly, “the leaves will disappear again.”31 Thus, the dashboard displayencourages you to do the “right thing” while driving. It is important to realize that “persuasion” here means that the choice can be made freely and voluntarily and on the basis of correct information; if the incentivized choice were imposed or forced (coercion) or based on wrong or incomplete information (manipulation or deception), the technology would not fall into the category of persuasive technologies.32 In other words, persuasive technologies do indicate what the right choice is, but they leave the making of that choice to the user. If the intended choice is not made, persuasive technologies will remind the user of their wrong, or at least unadvised, choice, but all choices must remain open at all times.

Ibid.

29

Spahn, “And Lead Us (Not) into Persuasion . . .?,” 634. Brey, “Ethical Aspects of Behavior-Steering Technology,” 357.

30

28

Fogg, “Now Is Your Chance to Decide What They Will Persuade Us to Do.”

27

31

Spahn, “And Lead Us (Not) into Persuasion . . .?,” 634.

32

IJsselsteijn et al., “Persuasive Technology for Human Well-Being,” 1.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

89

Persuasive technologies are supposed only to encourage, and they do so to varying degrees ranging from subtle hints, such as a blinking light, to a more persuasive signals, such as an audible buzz. While it is reasonable to assume that human beings are the best persuaders, technologies may have certain advantages: for instance, they can be more persistent, and they do not give up easily when ignored. Indeed, they can be irritatingly persistent: think of the seatbelt reminders that give an urgent buzz whose frequency and intensity continue to grow until you put your seatbelt on.33 Even if the buzz were to stop after a few seconds (or perhaps after a minute), it would still be more effective thanan average human persuader. If the buzz did not stop, we might question the voluntariness of the choice it was incentivizing. That would then likely place the technology beyond the realm of persuasive technologies, as it would be coercing the user to choose the prescribed option,. which might affect the driving condition too negatively, maybe even increasing the risk of an accident. In fact, EU protocols insist on this voluntariness by emphasizing that seatbelt reminders “should not affect the drivability of the vehicle”; EU law requires a visual signal that persuades but does not irritate to the extent that it negatively affects the drivability of the vehicle.34 The development of these persuasive seatbelt reminders has a peculiar history. It starts in the 1970s, when some car manufacturers decided to design cars that would not start if the driver’s seatbelt was not fastened, or had a seatbelt that buckled mechanically when the car detected that someone was sitting in the driver’s seat. While this was clearly intended to improve the safety of the driver, not everyone appreciated being “mechanically forced to wear their seat belts,” and many drivers had their automatic seatbelts removed.35 This design feature was, therefore, meant not merely to steer the driver’s behavior, but also to impose an action on them, without which a key functionality of the vehicle would not be available. One might ask: Is this an ethical problem? Indeed, the feature is only meant to protect the driver, and everyone is better off if there are fewer accidents and injuries.

33

Ibid., 2.

34

See here the EU seatbelt reminder guidelines and protocols: https://ec.europa.eu/trans port/road_safety/specialist/knowledge/esave/esafety_measures_known_safety_effects/ seat_belt_reminders_en (consulted August 6, 2019).

35

Brey, “Ethical Aspects of Behavior-Steering Technology,” 358.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

90

Ethics and Engineering

The flip side of this imposed safety is that it removes individual liberties. Some people in the US took the case to court, arguing that their civil rights were being violated. They succeeded in having the regulations changed so that “wearing seat belts became again something that was mandatory but no longer mechanically forced.”36 Different seatbelt reminders were then introduced to either gently remind drivers to buckle up (i.e., with a blinking light) or earnestly urge them to do so (with an audible buzz). The effect of persuasive technologies in removing users’ liberty and autonomyhas been criticized as “technological paternalism,” which conflicts with “the ideal of a free and autonomous choice of the individual.”37 Persuasive technologies have further been accused of creating a “responsibility vacuum,” that is, a situation in which it is not clear who is responsible for an action and its consequences, especially when something goes wrong. In the next chapter, I will discuss both autonomy and responsibility in the broader context of autonomous or semiautonomous technologies.38 Let us now turn to the question of how values matter in engineering design.

4.4 How Do Values Matter in Engineering Design? In the previous section, I argued that technologies are not ethically neutral and that they can affect human perception, behavior, and actions; persuasive technologies aim, then, to steer these toward an ethically desirable course of action or attitude. We can, for instance, persuade a driver to wear a seatbelt for the sake of their own safety, or to drive more sustainably by not pushing on the gas pedal too hard or too often. This section extends the critique of the neutrality thesis by building on the reasoning that engineering design is essentially value-laden.39 That is, technologies can embody and promote not only instrumental values – such as functional efficiency and ease of use – but also substantive moral and political values, such as privacy, trust, autonomy, and justice.40 When a technology promotes certain values (e.g., safety and sustainability in driving), this enabling may in turn also hamper

Ibid.

38

Brey, “Ethical Aspects of Behavior-Steering Technology,” 363. Van de Poel, “Values in Engineering Design”; Van den Hoven, Vermaas, and Van de Poel,

39

37

Spahn, “And Lead Us (Not) into Persuasion . . .?,” 634.

36

Handbook of Ethics and Values in Technological Design. 40

Flanagan, Howe, and Nissenbaum, “Embodying Values in Technology,” 322.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

91

or reduce other values (e.g., the autonomy of the driver). In this section, we focus on what values mean and how they feature in engineering design. As a working definition, we assume that values are things that are worth striving for for moral reasons, or “what a person or group of people consider important in life.”41 Values are, therefore, different from individual preferences, wishes, and desires, in that they relate to a common good that we wish to uphold for everyone. To say that something is valuable means not only that it is valuable to me as an individual, but also that “it is or should be of value to others.”42 Not all preferences and choices are explicitly linked with and expressed in conjunction with such values. Our focus here is on preferences and choices that are motivated by and stem from profound beliefs about people’s conception of what is considered to be good. Value statements in engineering design indicate whether “certain things or state of affairs are good, valuable, or bad in a certain respect.”43 According to this definition, then, safety is to be considered a value, because we can assume that safety is relevant for moral reasons, and represents not a personal preference but a widely shared moral characteristic. In fact, increasing safety (or mitigating technological risks) has always been a primary concern in engineering; “the value of safety is almost always conceived as a ubiquitous though often implicit functional requirement.”44 In addition to safety, we might, for instance, think of privacy and security, as in the case of whole-body scanners discussed at the beginning of this chapter. As a matter of fact, security – not safety – was the leading design criterion for these scanners. In engineering design, we can indeed design for many different values, including autonomy, accountability, transparency, justice, well-being, and sustainability.45 Indeed, when we design for a value, that does not necessarily mean that we design for an uncontroversial and unified definition of that value. In fact, many of the above-mentioned values may have different interpretations, which will not necessarily converge on the same design specificities. Let me explain by discussing the value of sustainability. Prioritizing this value in a design could mean that the design should result

41

Friedman, Kahn, and Borning, “Value Sensitive Design and Information Systems,” 349.

42

43 Van de Poel, “Values in Engineering Design,” 974. Ibid. Doorn and Hansson, “Design for the Value of Safety,” 492.

44 45

See for an overview of different values and different areas of application Van den Hoven, Vermaas, and Van de Poel, Handbook of Ethics and Values in Technological Design.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

92

Ethics and Engineering

in “less degradation of our environment, less depletion of materials, and more social equity in our world,” but it could also mean a “higher level of prosperity for people in developing countries.”46 While these requirements are not contradictory and can, in principle, be accommodated simultaneously, they might impose different demands during the design. An essential part of designing technologies for values is, therefore, the process of value specification or operationalization. During our discussion of this process, we will examine the different meanings and interpretations (i.e., specifications) of individual values and the potential value conflicts (where different values cannot be achieved at the same time). Sometimes, a conflict emerges as a result of different specifications of the same value by different stakeholders; this is highlighted in the discussion of responsible innovation in Section 4.7.

4.5 How to Systematically Design for Values Value-sensitive design (VSD) is the first systematic approach to proactively understanding, addressing, and including values in the process of design. Scholars of VSD argue that the design process has value implications because new technology can shape our practice and hence promote or undermine certain values.47 VSD originates from information technology and from the acknowledgment that important ethical values, such as user autonomy or freedom from bias, are being designed into computer systems; if the designer does not include them during the design process, it may prove difficult, or even impossible, to include them after the design has been completed. In the words of an important pioneer of VSD, Batya Friedman, we must create “computer technologies that – from an ethical position – we can and want to live with.”48 VSD for the first time systematically reviews these values at an early stage and in an iterative tripartite methodology consisting of conceptual, empirical, and technical investigations.49 Conceptual investigations aim to conceptually (that is, not yet empirically) identify the direct and indirect stakeholders, the values at stake, their potential conflicts, and the inevitable value trade-offs. Conceptual investigations 46 47

Wever and Vogtländer, “Design for the Value of Sustainability,” 513–14. Flanagan, Howe, and Nissenbaum, “Embodying Values in Technology.”

48

Friedman, “Value-Sensitive Design,” 17.

49

Friedman, Kahn, and Borning, “Value Sensitive Design.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

93

are often the result of philosophical analysis at an early stage of design. Empirical investigations aim to answer the conceptual questions by investigating them among stakeholders, particularly focusing on how values are perceived and how choices are made when two values are in conflict. “How do stakeholders apprehend individual values in the interactive context? How do they prioritize competing values in design trade-offs? How do they prioritize individual values and usability considerations? Are there differences between espoused practice (what people say) and actual practice (what people do)?”50 Technical investigations focus on the suitability of a certain technology for accommodating certain values: “a given technology is more suitable for certain activities and more readily supports certain values while rendering other activities and values more difficult to realize.”51 I will return to conceptual discussions of value conflicts and value trade-offs in Section 4.6. VSD is similar to “design for privacy,” discussed earlier in this chapter, but it aims to focus not on only one value, but on the range of different values at stake.52 To returning to the case study at the beginning of this chapter, had a VSD approach been followed for the introduction of wholebody scanners, it would have required a conceptual investigation to first identify the set of values at stake at the design stage – that is, security, privacy, and safety, and perhaps more – and address the potential conflicts that could occur between them (e.g., between security and privacy). The empirical investigation would have engaged with stakeholders in order to understand how different stakeholders perceive each value – for instance, the value of privacy from the perspective of a passenger – while also addressing situations in which values conflicted. The technical investigation would then have revealed that a millimeter wave scanner is perhaps the one that can better help us address, or even bypass, the conflict between security and privacy; so this would be the technology that could accommodate both values simultaneously. In sum, a VSD approach might have helped to identify these conflicts in an early stage of development, which could in turn

50 52

51 Ibid., 3. Ibid. To be fair to what Cavoukian called “design for privacy,” it was indeed also their aim to

consider privacy in addition to security. So, essentially, their approach resembles the rationale of VSD; see Cavoukian, Whole Body Imaging in Airport Scanners.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

94

Ethics and Engineering

have contributed to reducing the controversies that occurred as a result of privacy infringements. A second approach to proactively including values in engineering design is the Design for Values approach, which can be seen as elaborating the VSD approach by extending the inclusion of moral values to other domains of technological design.53 The Design for Values approach rests on the assumption that the “explicit and transparent articulation of values” is highly relevant to design and innovation, and it allows for designing for shared values.54 The tool, methodologies, and procedures of Design for Values have mostly been developed in relation to the literature on the ethics of technology. The approach further rests on three main claims, as described in the seminal Handbook of Ethics, Values, and Technological Design, (1) values can be expressed and embedded in technology, (2) it is ethically important to think explicitly about such values, and (3) for values to have a serious bearing on design and development, they must be included at an early stage.55 While values are often general notions at a rather high level of abstraction, engineering design is often based on more concrete guidelines and instructions.56 Therefore, values must first be translated into design requirements, a process that is also called value specification or operationalization, as mentioned in the previous section. To explicate this, Ibo van de Poel has introduced the concept of a value hierarchy, which aims to show “the relation by which higher level elements are translated into lower level elements in the hierarchy.”57 A value hierarchy can be represented as a triangle divided into three levels. The top level consists of values, and the bottom level is the design requirements that pertain to “certain properties, attributes or capabilities that the designed artefact, system or process should possess.”58 Between these two levels, there is an intermediate level of norms. Norms may include capabilities (e.g., the ability to preserve one’s privacy), activities (e.g., moving quickly through the security procedure), or objectives

53 54

Van de Poel, “Values in Engineering Design.” Van den Hoven, Vermaas, and Van de Poel, “Design for Values in Nuclear Technology,” 3.

55

56

Van den Hoven, Vermaas, and Van de Poel, Handbook of Ethics and Values in Technological Design. This and next few paragraphs partly draw on the author’s contribution to Elsinga et al., “Toward Sustainable and Inclusive Housing.”

57

Van de Poel, “Translating Values into Design Requirements,” 253.

58

Ibid., 254.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.007

Values in Design and Responsible Innovation

95

(e.g., minimizing the health risks posed by whole-body scanners). In this definition, norms do not, however, include specific targets to be achieved in design (e.g., a maximum level of radiation that passengers can be exposed to), which are called design requirements. It should be noted that this definition of norms differs from the definition of the term in policy (e.g., legal norms), and also from how the term is often used in engineering, for instance, in International Organization for Standardization norms. In the present terminology, such a specific requirement of design would rather be subsumed under the heading of design requirements, which themselves are informed by a specific norm. Therefore, this proactive step of translating values into norms serves the purpose of making abstract values more tangible and more applicable to designers’ practical design requirements. Van de Poel presents an example from chicken husbandry, specifically the design of battery cages, which he describes as “the most common system in industrialized countries for the housing of laying hens” because they make possible the production of eggs in an economically efficient way.59 These cages, however, have been criticized for neglecting animal welfare because they do not provide a good living environment. The leading value here in the redesigning of the battery cages is thus animal welfare, which in policymaking has been translated into measurable requirements such as “egg production per animal, . . . egg weight and the mortality of chickens.”60 In addition to animal welfare, there are also other values at stake, such as “environmental sustainability,” which relates to, for example, emissions (especially of methane and nitrogen dioxide) from the cages. Although translating the value of animal welfare was unfamiliar to engineers and designers, attempts were made to translate it into norms and requirements for the design or redesign of battery cages. For example, an interpretation of animal welfare at the level of norms could be expressed in terms of “enough litter,” which in turn could be translated into specific design requirements such as “litter should occupy at least one third of the ground surface.”61 Before moving on to a discussion of how to deal with value conflicts in design, let me first make two remarks that may help to put a value hierarchy and the relation between its three levels better into perspective. First, following Ibo van de Poel, I believe that values, norms, and design requirements have, in principle, a nondeductive relationship. That is, norms cannot 59

Ibid.

60

Ibid.

61

Ibid., 258.


That is, norms cannot be logically deduced from values, and design requirements cannot be logically deduced from norms. Instead, the process of specifying or operationalizing values is highly contextual and depends very much on the individual case. The relation between the levels has been described as one of "for the sake of": a norm is presented for the sake of the higher-level value, and design requirements are presented for the sake of the norms. Second, it is very likely that within each engineering context and for each specific case, more than one norm can be found for the sake of each value, and different design requirements can be presented for the sake of each norm. The norm of "enough litter" in the example above can be interpreted in terms of the percentage of the ground surface that must be covered with litter, as explained above, but it can also be interpreted in terms of the absolute surface area that must be covered by litter, for instance "at least 250 cm² of litter per hen."62 The same goes for the translation of the value into the intermediate level of norms: "animal welfare" can be translated into "enough litter," but also into "enough living space" or "the presence of laying nests."
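To make the "for the sake of" relation between the three levels more concrete, a minimal sketch in code may help. The sketch below is only an illustration of the hierarchy described above, using the battery-cage entries quoted in the text; the class names and structure are mine and are not drawn from Van de Poel's account.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DesignRequirement:
    """A specific, verifiable target, formulated for the sake of a norm."""
    text: str


@dataclass
class Norm:
    """A prescription for action, formulated for the sake of a value."""
    text: str
    requirements: List[DesignRequirement] = field(default_factory=list)


@dataclass
class Value:
    """An abstract value; the top of the hierarchy."""
    name: str
    norms: List[Norm] = field(default_factory=list)


# The battery-cage example from the text. The relation is "for the sake of",
# not logical deduction, so these entries record contextual judgments.
animal_welfare = Value(
    name="animal welfare",
    norms=[
        Norm(
            text="enough litter",
            requirements=[
                DesignRequirement("litter should occupy at least one third of the ground surface"),
                DesignRequirement("at least 250 cm2 of litter per hen"),
            ],
        ),
        Norm(text="enough living space"),
        Norm(text="presence of laying nests"),
    ],
)

for norm in animal_welfare.norms:
    for req in norm.requirements:
        print(f"'{req.text}' is for the sake of '{norm.text}', "
              f"which is for the sake of '{animal_welfare.name}'")
```

Note that the sketch merely records the relations; nothing in it derives the requirements from the value, which reflects the nondeductive character of the hierarchy.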

4.6 How to Deal with Conflicting Values

A central aim of Design for Values is to identify conflicting values and then to address them as far as possible, leading to socially and ethically more acceptable designs. However, addressing conflicting values raises both a fundamental and a practical question. At a fundamental level, if values are ethically important features – things we should hold paramount – can we foresee situations in which they could be trumped? Can we, for instance, compromise the value of human equality a little bit? Or, to put it more bluntly: Is there a degree of racism that we might find acceptable if it would improve public safety and security? At a practical level, how can we address the issue of conflicting values? Identifying at the design stage that two values conflict does not always mean that one of them must be left out completely (though that might sometimes be the case). It is worth contemplating how to deal with value conflicts. I will distinguish here between the strategies of "designing out the conflict" and "balancing the values." Let me first focus on the fundamental question of whether values must be considered absolute moral entities.

62 Ibid.


4.6.1 Is a Little Bit of Racism Acceptable?

Understandably, a first response to this question is an appalled "Absolutely not!" Of course, I am not going to argue against this. Instead, I want to give an example of a situation in which such racism is the unfortunate result of certain policies concerning the use of facial recognition software. In the UK, for instance, the police have trialed the use of real-time facial recognition when people pass CCTV cameras. The police call facial recognition software an "invaluable tool" in the fight against crime, but it is also known that the software has difficulties in coping with "black and ethnic minority faces" since "the software is often trained on predominantly white faces."63 An engineer might rightly argue that rectifying this is a matter of "simply" changing the technology, or refining the software so that it is sensitive to different types of faces, but such a change would require additional development time. The question then is whether trials should be stopped until the software is adjusted, or whether the false positives – leading to members of ethnic minorities being falsely accused of crimes – should be tolerated because of the "greater good" that will be served by using such software. As the BBC reported in 2019, the UK police had known about this issue since 2014 but decided to put the software into use before fixing the problem. Somehow, the problem of insensitivity to ethnic minorities failed to receive enough attention before the facial recognition program was implemented.64

In a similar example in the US, and in an attempt to show the racial biases of facial recognition software, the American Civil Liberties Union (ACLU) – an influential civil rights NGO – tested Amazon's facial recognition tool "Rekognition" on images of the members of the US Congress in 2018. Astonishingly, the results showed twenty-eight matches, falsely identifying the congressmen and congresswomen as people who had been arrested for crimes in the past. Among the falsely matched faces there was a disproportionately large number of members of color, including six members of the Black Caucus.65 The ACLU conducted this research to show the undeniable and painful deficiencies of the facial recognition software and in order to lobby Congress to impose "a moratorium on law enforcement use of face surveillance."66

63 White, "Police 'Miss' Chances to Improve Face Tech."
64 Ibid.
65 Snow, "Amazon's Face Recognition Falsely Matched 28 Members of Congress with Mugshots."


Amazon acknowledged that the software needed to be better trained and adjusted, but it also defended the tool, emphasizing its great success in "preventing human trafficking, inhibiting child exploitation, and reuniting missing children with their families," among other things.67

To return to the ethical question presented at the beginning of this section – namely whether compromising human equality is acceptable – we can easily agree that it is not. Yet, as sad as it sounds, the realistic observation is that we have technologies in use that have, for instance, racial biases. The fact that these technologies can bring tremendous "greater good" – by helping us to stop human traffickers and child abusers – seems to justify, or at least allow us temporarily to tolerate, their racial biases. One might argue that one of the aims of Design for Values and of proactive thinking about values is precisely that we should not be forced to make such pragmatic choices when fundamental ethical questions are at stake.
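One way to read the ACLU test is as a simple measurement of false-match rates per demographic group. The sketch below is a generic, hypothetical illustration of such a bias check: the data, group labels, and field names are invented for the example, and the code has no connection to the ACLU's actual test or to Amazon's Rekognition service.

```python
from collections import defaultdict

# Hypothetical records: each entry is (group, falsely_matched) for one person
# who is known NOT to be in the mugshot database. The data here is made up.
test_results = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: {"matches": 0, "total": 0})
for group, falsely_matched in test_results:
    counts[group]["total"] += 1
    if falsely_matched:
        counts[group]["matches"] += 1

for group, c in counts.items():
    rate = c["matches"] / c["total"]
    print(f"{group}: false-match rate {rate:.0%} ({c['matches']}/{c['total']})")

# A large gap between the per-group rates is the kind of disparity the
# ACLU test made visible for members of color in the US Congress.
```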

4.6.2 Designing Out the Conflict

When this kind of conflict is spotted at an early stage, the designer may be better equipped to resolve it by choosing the technology that can best accommodate the conflicting values. This may, for instance, mean following the tripartite VSD approach, whose third step, the technical investigation, aims to identify which technology can best accommodate different values. In the case of facial recognition software, once the conflict was spotted, it should have been straightforward to design also for human equality: that is, the software should have been developed using images of people of different races, ethnicities, and skin colors, so that security could still be improved without compromising human equality. In the case of the whole-body scanner, such proactive thinking about values could have resulted in X-ray scanners being dismissed at the start because of their safety issues. It would also have helped us to develop imaging techniques that revealed few details of the human body but produced a high level of contrast between the human body and any objects on it. In this way, the values of security, privacy, and safety could all have been accommodated in an appropriate way; that is, we could have optimized all three values.

66 Alba, "Amazon Rekognition Falsely Matched 28 Members of Congress with Arrest Mugshots."
67 Ibid.


This is very much in line with how we optimize for different design criteria in engineering. Indeed, it is only a simplified version of an actual design question, in which a lot more will be at stake, including the price differences between scanning technologies, how fast they can be made available, how accessible each technology is for each country involved, what sort of adjustments to the software (e.g., privacy filters) can be designed and included, and so on. It is likely that a number of other values will also play an important role in these decisions. The argument here is that early engagement with these questions of value conflict could have allowed them to be designed out, resulting in an ethically more acceptable technology.

Another example of designing out a conflict concerns a storm surge barrier in the Netherlands, a country where 40 percent of the land area is below sea level and very prone to flooding.68 In response to a major flood in 1953 that killed over 1,800 people, the Dutch government drew up the Delta Plan to prevent further dangerous flooding. Part of this plan entailed closing off a vulnerable area, the Eastern Scheldt estuary. However, in the late 1960s and early 1970s, environmental concerns became more prominent, and simply blocking off the estuary was deemed unacceptable because of the likely ecological consequences, such as the risks from desalination and a lack of tides; local fishermen were also concerned that there would be huge consequences for their employment. To put it in terms of the conflicting values, there was the value of safety on the one hand, and ecological values and well-being (employment) on the other. However, an ingenious piece of technology enabled the conflict to be designed out by setting up the flood defense system so that the estuary is open by default but can be closed off with movable barriers at times of increased risk. Thus, none of the values was compromised; technology helped to bypass the value conflict.

A final example is from the city of Delft, where I am at the time of writing this chapter. If you had been driving around Delft before 2015, you might have encountered the surprising phenomenon that one of the major highways around the city stopped in an almost apocalyptic way, with the beautiful natural landscape – the untouched area of Midden Delfland – in front of you.

68 This description of the case is based on Van de Poel, "Changing Technologies."


This is the A4 highway, which was intended to connect the capital, Amsterdam, with one of the world's largest harbors, at Rotterdam, but a seven-kilometer stretch of the highway, which would have cut through the landscape, was simply missing. The traffic had to go through Delft in order to reach the other main highway to Rotterdam, the A13, which was understandably always congested because it had to carry the load of two highways.69 The reason why the highway did not continue through Midden Delfland, as it was originally meant to do, was an unresolved value conflict between, on the one hand, the improved accessibility of Rotterdam harbor (with shorter travel times and better access from the capital) and its associated economic values, and, on the other hand, the ecological values associated with the unspoiled Midden Delfland in one of the most densely populated areas in the Netherlands, a country already known for its high population density. This controversy lasted for half a century, but at last an engineering solution was found that, although expensive, managed to accommodate both values sufficiently. A major part of this seven-kilometer missing link has now been built – not on but slightly under the surface, in a sort of "semi-tunnel" that is open on top; wildlife can move freely between the two sides of the highway along the eco-ducts and aqueducts that connect them. When hiking in the countryside around the highway, you can barely hear or see it because it has been partially sunk into the ground.

This could also be considered an example of designing out a conflict, but it differs from the estuary example in that the solution was not thought of at the design stage. Instead, it followed fifty years of controversy that had approached the problem in a binary mode: Should the economy trump ecological values? Furthermore, the eventual solution was substantially more expensive than a conventional highway: The famous Dutch tabloid Metro called it the most expensive piece of highway ever built in the country.70 One might argue that a compromise was made on the economic values in order to enable the highway to be built. This brings me to the next approach to dealing with value conflicts: balancing the values.

69 The information about the highway is adopted from the website of the Dutch Ministry of Infrastructure and Water Management: www.rijkswaterstaat.nl/wegen/wegenoverzicht/a4/index.aspx (consulted August 12, 2019).
70 Halkes, "De A4, het duurste stukje snelweg van Nederland."


4.6.3 Balancing the Values

It is likely that designers will often be unable to accommodate all the values that are at stake in a project, and that choices will need to be made. Let me discuss this by returning to the example of the car that would not start until the seatbelt was fastened and the car that physically imposed the seatbelt on the driver. The conflicting values were the safety of the driver on the one hand and the driver's autonomy on the other. The seatbelt that mechanically coerced the driver to wear it seemed to tilt the balance unjustifiably toward the value of safety. As discussed earlier, this was controversial and led to the safety regulations being changed, which pushed manufacturers to use safety systems that were less coercive. The seatbelt signs on the dashboard, as well as the buzzer, dealt with this conflict in such a way that drivers' autonomy and freedom were respected while they were still reminded of their safety, albeit sometimes in a rather annoying way.

Let us look at autonomy versus safety from a different angle. Could safety never trump human autonomy? Autonomy and freedom of choice are undeniably essential ethical values. John Stuart Mill argued that individual freedoms need to be respected as long as they do not interfere with or hamper another individual's freedom.71 Freedom and autonomy can then be limited – according to this definition – if they interfere with another person's freedom. In a discussion of Design for Values, one needs to identify the scope of a value, or to whom it relates. The question of safety versus autonomy can be considered within the confines of the car itself, but of course a car participates in traffic. If the driver of a car refuses to wear a seatbelt, we could argue that they are compromising only their own safety, and we could not immediately justify limiting their autonomy and freedom. Now consider a drunk driver who is sitting behind the wheel: They jeopardize not only their own safety, but also that of other traffic participants. Would it now be more justifiable to limit their freedom and autonomy for the sake of the safety of the other traffic participants? And should we build cars in such a way that drunk drivers will not be able to drive them? This is the basic rationale behind an "alcohol interlock" – an in-car breathalyzer that checks the level of alcohol in the driver's breath before the car can be started.
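In engineering terms, the interlock is little more than a threshold check before ignition. The sketch below is a deliberately simplified illustration of that rationale; the threshold value and function name are assumptions made for the example, not a real manufacturer's implementation or any country's legal limit.

```python
# Illustrative only: the threshold is a placeholder, not a legal limit.
BREATH_ALCOHOL_LIMIT_MG_PER_L = 0.2


def may_start_engine(measured_breath_alcohol_mg_per_l: float) -> bool:
    """Allow ignition only when the measured value is below the threshold.

    The driver's autonomy is limited only in the situation where driving
    would endanger other traffic participants, mirroring the Millian
    rationale discussed above.
    """
    return measured_breath_alcohol_mg_per_l < BREATH_ALCOHOL_LIMIT_MG_PER_L


print(may_start_engine(0.05))  # True: the car starts
print(may_start_engine(0.45))  # False: ignition is blocked
```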

71 Mill, "On Liberty."


Excess alcohol use while driving contributes to about 25 percent of all road deaths in Europe, and according to EU road safety documentation, "high risk offenders" – those "who offend regularly and/or exceed legal blood alcohol levels by a large amount" – are responsible for a large share of these fatalities. Alcohol interlocks were proposed as a means to deal with this problem of repeat offenders, and they have indeed reduced recidivism by 28–65 percent, leading to substantially fewer fatalities.72 Limiting people's autonomy seems to be ethically much more justified in this situation.

A proposed "soft" version of this approach has been to incentivize the possession of personal breathalyzers: When in doubt, use the breathalyzer before you drive. This could be considered a persuasive technology. Yet the objection has been made that the low accuracy of commercially bought breathalyzers means that they cannot necessarily guarantee that one is fit to drive. Moreover, breathalyzers can create a false sense of confidence and sobriety, especially in young, inexperienced drivers, who are more sensitive to smaller quantities of alcohol than more experienced drivers, and this can in turn increase the likelihood of an accident.

In a systematic analysis, Van de Poel presents a number of strategies for dealing with value conflicts, including Cost–Benefit Analysis (CBA), direct trade-offs, satisficing (thresholds), and re-specifications.73 In a CBA, "all relevant considerations are expressed in one common monetary unit," so that we can decide whether the benefits outweigh the costs.74 In fact, this builds on the rationale of monetizing costs and benefits as discussed in Chapter 2, and the same objections apply to this strategy too. Let me explain the strategy and its challenges by returning to the alcohol interlocks. In a study carried out in 2005, the benefits and the costs of an alcohol interlock were estimated for several EU countries. The benefit-to-cost ratio varied from country to country, depending on the size of the country, the numbers of drivers and drunk drivers, the statistics of earlier casualties, and the monetary value assigned to each fatality (which was different in each country).

72 The information on the alcohol interlock is adopted from the web page of the EU on road safety: https://ec.europa.eu/transport/road_safety/specialist/knowledge/esave/esafety_measures_known_safety_effects/alcolocks_en (consulted August 12, 2019).
73 It should be noted that one of the strategies Van de Poel discusses is "innovation," which is basically what is described in Section 4.6 as designing out a conflict. See for the full list of strategies and the extensive discussions Van de Poel, "Design for Values in Nuclear Technology."
74 Ibid., 101.


In Spain, for instance, a reduction of 86.5 deaths per year could be achieved, which would amount to a total annual benefit of €69 million (at the rate of €800,000 per death), while the benefits of saving 5.5 lives per year in Norway would amount to almost half of that sum (€32.5 million per year). This is because of the substantially higher value placed on a human life in Norway (€5.9 million per death, as opposed to €0.8 million in Spain). So while the benefit-to-cost ratios varied substantially (and were sometimes inaccurate, owing to a lack of data in some countries), it was clear that the benefits outweighed the costs. Some car manufacturers, such as Volvo and Toyota in Sweden, offered the installation of alcohol interlocks in trucks as a dealership option. Many transportation companies (for both goods and passengers) added alcohol interlocks to their vehicles to protect the interests of the large number of people who could be affected if the drivers of those vehicles were drunk.
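The benefit side of such a CBA is a straightforward multiplication of avoided fatalities by the monetary value per fatality. The sketch below reproduces the arithmetic behind the figures quoted in this paragraph; the annual cost is a placeholder, since the study's cost estimates are not reproduced here, so the printed ratios are purely illustrative.

```python
# Benefits as quoted above: avoided fatalities per year x value per fatality.
countries = {
    "Spain":  {"avoided_deaths": 86.5, "value_per_death_eur": 0.8e6},
    "Norway": {"avoided_deaths": 5.5,  "value_per_death_eur": 5.9e6},
}

ASSUMED_ANNUAL_COST_EUR = 20e6  # placeholder, not a figure from the 2005 study

for name, d in countries.items():
    benefit = d["avoided_deaths"] * d["value_per_death_eur"]
    ratio = benefit / ASSUMED_ANNUAL_COST_EUR
    print(f"{name}: annual benefit of EUR {benefit / 1e6:.1f} million, "
          f"benefit-to-cost ratio {ratio:.1f} with the assumed cost")
```

Running the sketch reproduces the roughly €69 million and €32.5 million benefits mentioned above; only the ratios depend on the assumed cost.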

4.7 Responsible Research and Innovation

VSD and Design for Values are two of a larger set of approaches and strategies for "early engagement with science and technology" that aim to front-load societal and ethical concerns about new technologies.75 I have discussed these two value-based approaches at length because they are directly relevant to the practice of engineering and technological design. Many "early engagement" approaches have their roots in technology assessment, an "interdisciplinary research field aiming at, generally speaking, providing knowledge for better-informed and well-reflected decisions concerning new technologies [in order to] provide answers to the emergence of unintended and often undesirable side effects."76 A new approach worth discussing here – because of its strong influence on both academic scholarship and public policy – is responsible innovation.

75 See for a discussion of five approaches Doorn et al., Early Engagement and New Technologies.
76 The quotation is from Grunwald, "Technology Assessment and Design for Values," 68. Technology assessment has more recent advanced approaches in the forms of constructive technology assessment and political technology assessment; see for discussions on these approaches Doorn et al., Early Engagement and New Technologies.


Stemming from the same rationale as technology assessment, responsible innovation is premised on the acknowledgment that science and innovation have produced not only knowledge, understanding, progress, and well-being, but also "questions, dilemmas and unintended (sometimes undesirable) consequences."77 It aims to offer a governance and policy approach to understanding and addressing those impacts at the development and innovation stage.

Responsible innovation is a notion coined in the EU, first in several national policies in the UK, Norway, and the Netherlands, and later in several of the EU's Framework Programs. In the policy sphere, it is often referred to as responsible research and innovation (RRI), and its objective is to steer the research and innovation process so that the societal and ethical aspects of new innovations are included from the outset. In other words, RRI aims to achieve ethically acceptable, sustainable, and societally desirable technologies.78 The normative dimension of the approach lies in, for example, what "ethically acceptable" or "desirable" exactly entails. For instance, in EU policy, "ethically acceptable" has been taken by some scholars and practitioners to mean compliance with the EU's Charter of Fundamental Rights (e.g., on matters of privacy, transparency, and safety).79 Indeed, there are different approaches to and definitions of ethical acceptability in the presence of technological risk.80 In the whole-body scanner example, for instance, the ethical acceptability of different scanning techniques was compared. The fact that millimeter wave scanners have roughly the same performance as X-ray scanners but substantially fewer health risks calls into question the ethical acceptability of the latter, even if we assume that the risks posed by X-ray scanners are very low (as is indeed the case).

According to an EU expert group, RRI entails involving all stakeholders in the processes of research and innovation at an early stage: "(A) To obtain relevant knowledge on the consequences of the outcomes of their actions and on the range of options open to them and (B) to effectively evaluate both outcomes and options in terms of societal needs and moral values and (C) to use these considerations (under A and B) as functional requirements for design and development of new research, products and services."81 These are no small tasks, and each of the steps can bring a host of challenges and obstacles.

77 Owen et al., "A Framework for Responsible Innovation," 27.
78 Von Schomberg, "A Vision for Responsible Research and Innovation," 64.
79 Von Schomberg, Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields.
80 Van de Poel and Royakkers, Ethics, Technology and Engineering; Taebi, "Bridging the Gap between Social Acceptance and Ethical Acceptability."
81 Van den Hoven et al., Options for Strengthening Responsible Research and Innovation, 3.


Let me mention one that is essential to the whole process. It is difficult to anticipate all the consequences of innovations, because some consequences will become visible only when a technology has been implemented, and sometimes only after several years; this is referred to as the Collingridge dilemma.82 Hence, in addition to being "anticipatory," responsible innovations need to be "reflective," "deliberative," and "responsive."83 That is, we must try to anticipate the intended and potentially unintended consequences of innovations, while reflecting on their underlying purposes, motivations, and impacts; we must further deliberate with stakeholders throughout the process via dialogue and debate, be responsive to criticism, and be adaptable in the way we innovate.84

While responsible innovation shares some of the basic rationales of the value-based approaches in technological design, it also differs from them and is somewhat more comprehensive in scope.85 First, responsible innovation focuses not only on the innovation itself, but also on the context surrounding the technology – for instance, the institutions, laws, and legislation that should steer its further development. Responsible innovation thus places the technology against the backdrop of important societal issues. Second, the focus of responsible innovation is not only on technological innovations, but on innovations in general, for instance in medicine. One might argue that many innovations depend largely on technical developments; think, for instance, of developments in prosthetics and their use in medicine, which depend not only on medicine and orthopedics, but also on various subfields of mechanical engineering. Third, and in conjunction with the previous two issues, responsible innovation is essentially an interdisciplinary effort, for no single discipline can encompass the complexities of the innovations needed to address the societal "grand challenges" of the twenty-first century. A comprehensive overarching view is very much needed, and it should connect engineering with the humanities and social sciences.

82 Collingridge, The Social Control of Technology.
83 Owen et al., "A Framework for Responsible Innovation."
84 See ibid. for more details of these four dimensions. See also Stilgoe, Owen, and Macnaghten, "Developing a Framework for Responsible Innovation."
85 Jenkins et al., "Synthesizing Value Sensitive Design, Responsible Research and Innovation, and Energy Justice."


Interdisciplinarity is a basic rationale that responsible innovation has adopted from technology assessment,86 but responsible innovation has expanded its application substantially; for instance, several EU agencies fund interdisciplinary programs. In addition, EU research funding has also incentivized, and sometimes required, such interdisciplinary collaboration. In the realm of policy-making, more recent discussions of dealing with grand societal challenges have expanded this even further into transdisciplinary research, which engages not only different disciplines but also nonacademic stakeholders.87 While transdisciplinary research is not a new notion in academia, the renewed interest in it in the area of policy (according to this definition) is certainly noteworthy.

4.7.1 Fracking, Responsible Innovation, and Design for Values

As already mentioned, responsible innovation follows to an extent the basic ideas of value-based approaches.88 In fact, the EU document quoted above even mentions the role of moral values from the outset: the outcome of innovation should be assessed in terms of, for example, the moral values it contributes to. The link between value-based approaches and responsible innovation has been explicitly discussed in the literature.89 Let me finish this chapter by illustrating how to connect responsible innovation with value-based approaches in engineering, using the example of a proposal for the exploration of an unconventional gas field – containing shale gas – in the Netherlands. This is based on a case that my colleagues and I investigated in 2013 and 2014, in which we tried to consider responsible innovation as an endorsement of public values. The lens of values enabled us to take a fine-grained look at an interesting yet difficult-to-grasp public debate.90 Let me explain this in more detail.

86 Grunwald, "Technology Assessment for Responsible Innovation."
87 The OECD has, for instance, developed a report that sums up the challenges of transdisciplinary research in order to help incentivize this type of research; see OECD, Addressing Societal Challenges Using Interdisciplinary Research.
88 Taebi et al., "Responsible Innovation and an Endorsement of Public Values."
89 Van den Hoven, "Value Sensitive Design and Responsible Innovation"; Van den Hoven, "Responsible Innovation"; Taebi et al., "Responsible Innovation and an Endorsement of Public Values."
90 Correljé et al., "Responsible Innovation in Energy Projects."


The Netherlands has Europe's largest conventional gas field in its northern province of Groningen. Gas is, therefore, deeply embedded in the country's energy infrastructure, both for direct use in households and for electricity generation. Because of the familiarity with and wide acceptance of natural gas, it was assumed that a proposal to extract gas in an unconventional manner would be uncontroversial in this "gas country." What is particularly significant about shale gas is that its extraction method is based on hydraulic fracturing – popularly called fracking – in which water mixed with certain chemicals is injected at high pressure into the hard shale layers in order to crack them open, allowing gas to be extracted. While the technology used for fracking is not new (conventional gas fields also use the method), the amount of water used and the chemicals added to it raise particular health, safety, and environmental concerns. Because of these concerns – in conjunction with the seismic risks associated with conventional gas fields, which became more serious in the Groningen gas field around the time when the application for a permit for shale gas exploration was being reviewed by the government – the debate spilled over from one type of gas extraction (conventional gas) to another (unconventional shale gas); thus, the proposal for the exploration of shale gas became unexpectedly controversial.91 The controversy led the government to impose a moratorium on onshore extraction of shale gas.

The public debate about shale gas exploration was very lively and rich in content, and it provided fruitful empirical ground on which we could try to understand the development within the framework of responsible innovation.92 In our analysis of the public debate, we reviewed the arguments presented by the vociferous proponents and opponents of shale gas, looking for important concerns voiced by different stakeholders in terms of public values. Our first finding was that the public debate often took place at the level of norms – following the value hierarchy that Van de Poel presents for the translation of values into norms, which in turn can be translated into requirements.93 We found that both substantive and procedural values featured in this debate.

91 Cuppen et al., "When Controversies Cascade."
92 The findings below are based on Dignum et al., "Contested Technologies and Design for Values."
93 Van de Poel, "Translating Values into Design Requirements."


In the first category, we distinguished, for instance, between environmental friendliness, public health and safety, and resource durability and affordability. Procedural values related to the process of decision-making and included values such as transparency, accountability, and justice.

A second finding of this empirical research was that, contrary to popular belief, proponents and opponents seemed to uphold pretty much the same set of values.94 The conflicts arose over the operationalization of these values by the different stakeholders – that is, over which norms were presented for the sake of a value. This led us to conclude not only that inter-value conflicts can occur – so that guaranteeing one value may come at the expense of another value – but that there are sometimes intra-value conflicts (i.e., conflicts within one value), for instance when different stakeholders operationalize the same value differently. The value of public health and safety, for example, was upheld by almost all stakeholders. While some stakeholders regarded this value as important in relation to the composition and volume of the chemicals added to the water, and emphasized that the surface water coming back from the well needed to be disposed of properly, others found safety to be important in relation to the possible seismic effects that the extraction of gas could bring about. Yet another group of stakeholders related the value of safety to the possibility of replacing coal with gas, which is a less polluting energy resource, both in terms of greenhouse gas emissions (and the associated climate change concerns) and in terms of the direct health effects of burning either resource for energy production.

To sum up, the lens of values enabled us to zoom in on the details of the controversies. More specifically, we tried to clarify the controversies that emerged as a result of the inappropriate inclusion of these values. Responsible innovation is to be considered an endorsement of these values. Three remarks are in order. First, our study focused on a retrospective analysis of values in a controversial debate. The aim of responsible innovation is to proactively take account of societal and ethical issues in innovation, or to design for values. This study was meant only to contribute to the further conceptualization of the notion of responsible research and innovation, which should help in future proactive approaches to responsible innovation.

94 It should be said that the opponents in general emphasized the procedural values more than the proponents did. See for a detailed discussion of this debate Dignum et al., "Substantive and Procedural Values in Energy Policy."


Second, the shale gas controversy is just an example showing the relevance of discussing responsible innovation in conjunction with design for values. As has been mentioned, we chose the case because of its empirical richness. Indeed, the basic rationale of this discussion is also broadly applicable to other technological innovations. Third, while RRI was developed in, and has been mostly applied in, Europe, it has rightly been asked whether its scope should remain limited to Europe.95 There are interesting examples of the role RRI could play in developing countries, for instance in South America.96

4.8 Summary

This chapter has built upon the argument that technological developments are not morally neutral. Especially in the design of new technologies, there are several values that engineers should be aware of and proactively engage with. By discussing the whole-body scanners that were primarily introduced for security at airports in the post-9/11 era, the chapter has identified three main values at stake: security, privacy, and safety. It has discussed two approaches that can help us to proactively include values in design, namely VSD and Design for Values.

When including values at an early stage in design, we can run into situations in which values potentially conflict. The chapter has discussed strategies for dealing with value conflicts, distinguishing between (1) designing out the conflict and (2) balancing the two (or more) conflicting values in a sensible and acceptable way. Indeed, the chapter has not pretended to cover all situations of this sort. Complex and ethically intricate situations will emerge in an actual design process, and technologies are – by their very nature – difficult to predict. Not only can this lead to unforeseen consequences, but it can also change the way a value conflict is resolved, tilting the balance in favor of a different value at a particular stage of the design. Furthermore, values are not static and can change: for instance, new values can emerge during the development and implementation of new technologies.97

95 Asveld et al., Responsible Innovation 3.
96 Vasen, "Responsible Innovation in Developing Countries."


Values can also change in their moral meaning or relevance. How to design for changing values is yet another important topic that deserves attention.98

We have also discussed approaches that focus on innovations in their societal context, and specifically RRI, which has been particularly influential in public policy. RRI and value-based approaches are very much connected in their goals and ambitions. Conceptualizing RRI in terms of endorsing values can facilitate a fine-grained way of looking at complicated debates and developments, in terms of which values are at stake, how those values have been operationalized by different stakeholders, and which values potentially conflict.

97 Cuppen et al., "Normative Diversity, Conflict and Transition."
98 Van de Poel, "Design for Value Change."


5 Morality and the Machine

5.1 The Uber Autonomous Car Accident in Arizona

In March 2018, a Volvo XC90 SUV – one of Uber's self-driving cars1 – hit and killed a forty-nine-year-old pedestrian who was walking a bicycle across the street in Tempe, Arizona (US). This was not the first fatal accident with a self-driving car: there had already been several fatal accidents involving Tesla cars driving on autopilot.2 The Uber accident nonetheless caused a lot of commotion, as it was the first time a self-driving car had caused the death of a pedestrian.

A key question in such accidents is: Who is responsible? Assigning responsibility in complex engineering designs is generally a complicated matter: to put it in terms of the "problem of many hands" discussed in Chapter 1, there are so many "hands" involved in the design of complex technologies that it is difficult to say precisely whose hands caused an accident. This problem is even more complicated when the "pair of hands" involved is that of a machine. This is a typical situation arising from a joint decision between human and machine, raising the questions: Can we ascribe any form of responsibility to the car,3 or does the responsibility lie solely with the car designer or manufacturer? Who else might have contributed to the accident?

Let us discuss the question of responsibility by reviewing the Uber accident while focusing on the various actors involved in the development of the Uber car and the implementation of the experiment that led to this fatal accident.4

1 In this chapter, I sometimes refer to self-driving cars as "autonomous cars" because this is the term used most often in the literature and the media, but almost all cars that are described as autonomous are in fact semiautonomous.
2 Stewart, "Tesla's Self-Driving Autopilot Involved in Another Deadly Crash."
3 Matthias, "The Responsibility Gap."


In so doing, we also need to consider the findings of the National Transportation Safety Board (NTSB), both the preliminary findings of May 2018 and those of the extensive final report released in November 2019.5 The NTSB is formally in charge of the accident investigation.

In the literature on responsibility, various notions have been used to pinpoint particular categories of responsibility, including passive and active responsibility, accountability, blameworthiness, and liability. Ibo van de Poel and Lambèrt Royakkers have suggested clear demarcations of these concepts that are helpful for the purposes of this chapter.6 Passive responsibility is a backward-looking responsibility, relevant for consideration after an accident. Within passive responsibility, there is a distinction between accountability – concerning who can be held to account for the harm to the victims or to society at large – and blameworthiness, which concerns the question of who is to blame.7 Liability is a legal form of passive responsibility, often "related to the obligation to pay a fine or repair or repay damage."8 Active responsibility involves proactive thinking about responsibilities and about situations in which technologies are to be used in the future. In reviewing the accident, we will be focusing on passive responsibility for this specific accident, but the purpose of this chapter (and also of this book) is to use the Uber case study to illustrate how to design morality into the machine. So our primary interest is in what Van de Poel and Royakkers call proactive and active responsibility.

5.1.1 The Human Driver

In earlier accidents with self-driving cars, it was often claimed that the bulk of responsibility lies with the human driver, especially in discussions of legal responsibility or liability.9 That is, the so-called autonomous systems are only semiautonomous, which means they are meant to assist but not replace the driver.

4 Jack Stilgoe's recent book discusses the case study of the Uber accident in great detail; see Stilgoe, Who's Driving Innovation?
5 Hawkins, "Serious Safety Lapses Led to Uber's Fatal Self-Driving Crash, New Documents Suggest."
6 Van de Poel and Royakkers, Ethics, Technology and Engineering.
7 Ibid., 258.
8 Ibid., 10–13.
9 For instance, a legal investigation of a 2016 Tesla accident found the driver of the car responsible.


An assisting system is therefore considered to be correctly used only when there is human supervision. Tesla has manufactured fairly advanced "auto pilot systems," and its website explicitly states that "while using Autopilot, it is your responsibility to stay alert, keep your hands on the steering wheel at all times and maintain control of your car."10 While there have been test drives with driverless autonomous cars, these have been in restricted areas using predetermined routes. The consensus seems to be that autonomous or semiautonomous cars cannot yet operate safely without a human supervisory driver. That is, their supporting software needs to get used to actual roads, with all their real-world difficulties. The cars therefore need to be "trained" in real-life situations, and particularly in responding to unanticipated events; for this, a supervisory driver is indispensable and takes over from the autonomous system when necessary. When instructing and testing the car, it is thus the driver's responsibility to remain vigilant at all times, continuously paying attention to the road while the car drives itself.

This is a fundamental difficulty with self-driving and driver-assisted systems. On the one hand, autonomous systems are often introduced to replace the fallible human driver, on the assumption that a well-programmed system will make fewer mistakes. On the other hand, training these autonomous systems depends on those same fallible humans.11 What further complicates this is that humans are good drivers when they pay continuous attention, but remaining vigilant is exactly what we do not excel at, given that our minds are easily distracted and exhausted, especially during repetitive tasks. Experiments show that we are least vigilant when we have to continually repeat a task that our minds perceive as boring. For instance, if we have to watch an automatic car driving along the road without needing to regularly intervene, our minds will potentially become numbed if we do not add some form of stimulus.12 "Your brain doesn't like to sit idle. It is painful," according to Missy Cummings, director of the Humans and Autonomy Laboratory and Duke Robotics at Duke University.13

10 Quoted from the website of the manufacturer: www.tesla.com/support/autopilot (consulted August 22, 2019).
11 Davies, "The Unavoidable Folly of Making Humans Train Self-Driving Cars."
12 Heikoop et al., "Human Behaviour with Automated Driving Systems"; Calvert et al., "Gaps in the Control of Automated Vehicles on Roads."


There have been many examples of Tesla drivers sleeping at the wheel while the car drove itself; fortunately, these did not lead to any accidents. So we have become complacent in trusting an automatic system to run smoothly with no need for us to intervene. This complacency – to which I will return in the coming sections – was a factor in the Uber accident.

In a video from the car's dashboard camera, it became apparent that the safety driver, who was supposed to oversee the driving at all times, was looking down at their lap. While the safety driver at first claimed that they were looking at the autonomous system's interface, which is built into the car's center console, a police investigation revealed that an episode of the television show The Voice was being streamed on their cell phone at the time of the accident.14 Uber denied sole responsibility for the accident, implying that it was at least partly the safety driver's fault for not paying attention. If the driver is paid to sit at the wheel and pay attention to the road, this is what they are expected to do. However, this does not answer the question of whether the driver, in full mental capacity and paying full attention, would have been able to identify the pedestrian and stop, or at least mitigate the consequences – for instance, by swerving or braking. It seems indisputable that the driver was at least partly responsible for the accident, but was perhaps not the only responsible party. Uber also acknowledged that the software had faults.

5.1.2 The Software Designer

When software is designed for automatic vehicles, an important question is how to deal with "false positives" – that is, situations in which an object is detected but, because it is neither a danger to anyone outside the car nor likely to damage the car, does not necessitate stopping the car. Examples of such objects are a plastic bag or a piece of newspaper floating in the air, or an empty can in the path of the car. The software needs to be fine-tuned to decide when to respond to false positives.

13 Quoted from Davies, "The Unavoidable Folly of Making Humans Train Self-Driving Cars."
14 Ibid.


Uber's car was tuned in such a way that it would not brake overcautiously or stop abruptly, which can be a source of discomfort: "emergency braking manoeuvres are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behaviour."15 However, as Uber executives admitted, the fine-tuning of the software to ignore false positives had probably gone too far, because in the case of this accident the car treated a pedestrian pushing a bicycle as a false positive. The NTSB's preliminary report revealed that about six seconds before the crash, the car's sensors detected the pedestrian first as an unknown object, then as a vehicle, and finally as a pedestrian.16 By the time the car detected that it needed to stop, less than two seconds remained, requiring abrupt braking, which by default was not possible with this software, as explained above. Thus, intervention by the driver was the only option; this might not have been sufficient to prevent the fatal accident but could potentially have reduced the impact on the pedestrian by slowing the car down or swerving. However, this in turn could have given rise to another accident, perhaps a collision with a car coming from the opposite direction.

Let us review for a moment how other manufacturers deal with false positives. Waymo, a company that emerged from Google's self-driving car project and is considered one of Uber's biggest competitors in building self-driving cars, has similar software in use. However, Waymo's software has greater sensitivity to all objects on the road, meaning that its cars will probably brake as a result of false positives more often than Uber's. Frequent braking means that "the ride can be jerky, with sudden pumping of the brakes even though there's no threat in sight."17 But the flip side of a car stopping more often than is strictly necessary is that safety for pedestrians and other traffic is likely to increase. After the Uber accident, the CEO of Waymo confidently announced that his company's self-driving cars could have avoided it.18 Of course, it is convenient to make such a statement in hindsight, but the jerky rides and constant braking of prototype Waymo cars do add to its credibility.

15 From the report of the NTSB, quoted from ibid.
16 Ibid.
17 Efrati, "Uber Finds Deadly Accident Likely Caused by Software Set to Ignore Objects on Road."
18 Ohnsman, "Waymo CEO on Uber Crash."

116

Ethics and Engineering

also reduce safety for both driver and other car occupants.19 This was the reason why Uber had intentionally disabled the SUV’s factory-set automatic emergency braking system in order to prevent this erratic – and thus uncomfortable and unsafe – behavior of the car in autonomous mode.20 If we look at this issue in terms of the vocabulary of Design for Values (see Chapter 4), we might argue that the value of safety has been operationalized differently by different manufacturers of self-driving cars. The two interpretations of safety mentioned above are dramatically different. Waymo tolerates the discomfort of more frequent braking, aiming to ensure the safety of pedestrians and cyclists, while Uber puts more emphasis on the safety of the driver, which can also be at stake if the car stops too frequently or abruptly. Uber also puts a strong emphasis on driving comfort by eliminating false positives as much as possible, producing a smooth ride, though this meant increasing the risk of overlooking a pedestrian as a false positive.

5.1.3 The Car Designer: There Was No “Autopilot Nag” As discussed above, it is commonly known that we become complacent when a task we are carrying out is repetitive and monotonous. Thus a driver can lose attention if intervention, which would keep the mind sharp, is not required for long periods of time. This is paradoxical because on the one hand, manufacturers intend cars to become as autonomous as possible (hence requiring the least intervention from the driver), yet on the other hand, the less the driver’s attention is required, the less likely they will be to pay attention at crucial moments, as in the case of the Uber accident. This complacency issue is problematic for a number of different autonomous systems. The most prominent example is probably in highly automated aircraft cockpits, where most decisions are computerized and pilots are mainly responsible for supervision and oversight. While this division of labor leads to satisfactory results, there are situations in which the crew “must make split-second decisions on how to proceed, . . . sometimes putting the aircraft and its passengers in dangerous situations,” for instance when 19

Efrati, “Uber Finds Deadly Accident Likely Caused by Software Set to Ignore Objects on Road.”

20

Hawkins, “Serious Safety Lapses Led to Uber’s Fatal Self-Driving Crash, New Documents Suggest.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.008

Morality and the Machine

117

the sensors of an airplane fail because of unexpectedly bad weather conditions.21 This happened with Air France Flight 447 in May 2009, when extreme cold caused the “pilot tubes” to be obstructed with ice crystals so that they gave an incorrect speed reading. When the crew discovered the malfunction, it was too late to avoid crashing into the Atlantic Ocean, causing the deaths of all 228 people on board.22 Therefore, it is necessary to ensure that pilots are constantly aware of the situation in order to respond properly and promptly. In other words, “in situations where sensors and automation . . . do not function properly, it is crucial that pilots have, or quickly regain, a good awareness and understanding of the situation at hand.”23 A similar situation awareness also needs to be designed into the autonomous cars that partly depend on the alertness of the supervisory driver. Manufacturers have created various types of warning systems to remind and sometimes force human drivers to pay attention. Some of these fall into the category of persuasive technologies, which are technologies that persuade the user to do “the right thing”; these have been discussed in Chapter 4. For instance, Cadillac’s Super Cruise has an infrared camera installed on the steering column that monitors the driver’s head position and reminds them to keep their eyes on the road if they happen to look away from the road for too long.24 Tesla’s automatic pilot has built-in sensors in the steering wheel to detect whether pressure is being applied on the wheel, a sign that the driver is awake. If the driver takes their hands off the wheel for too long, they first get a “visual warning” that, if ignored, will turn into an audible warning, or beep. There are thus different degrees of persuasion built into the system, which is popularly known as “autopilot nag.”25 However, Tesla has developed this technology far beyond mere persuasion. In the most recent updates, the car refuses to continue driving if the warning

21. Mulder, Borst, and van Paassen, “Improving Operator Situation Awareness through Ecological Interfaces,” 20.
22. See here the interim report of the accident: www.bea.aero/docspa/2009/f-cp090601e2.en/pdf/f-cp090601e2.en.pdf.
23. Mulder, Borst, and van Paassen, “Improving Operator Situation Awareness through Ecological Interfaces,” 21.
24. Davies, “The Unavoidable Folly of Making Humans Train Self-Driving Cars.”
25. Lambert, “Tesla’s Latest Autopilot Update Comes with More ‘Nag’ to Make Sure Drivers Keep Their Hands on the Wheel.”


signal “Hands on the wheel” is ignored several times in a row: It automatically slows down, turns on the emergency lights, and eventually pulls over and shuts itself off.26 Thus, the car autonomously decides to stop when it detects a situation that is unsafe not only for the driver but also for other traffic participants on the road. The Uber car did not have any such system to warn the driver to pay attention. Uber did, however, claim to have instructed and trained its drivers, reminding them that they should pay attention at all times. Looking at cell phones was explicitly forbidden. What does this imply for the ascription of responsibilities after the accident? Would the cell phone clause in the drivers’ contracts also remove or reduce Uber’s moral responsibility for the accident? Here, there is a potential discrepancy between what is prohibited and the driver’s mental capacity to comply with that prohibition. Was the driver offered detailed technical and nontechnical training to prepare them for this task that human minds are not typically good at? The answer to this question is crucial to the question of responsibility.27
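The escalating persuasion described above can be pictured as a small state machine: each stage is triggered by how long the driver has ignored the previous one, and the final stage takes the driving task away from the driver altogether. The following Python sketch is only an illustration of that logic; the thresholds and the HandsOffMonitor class are invented for the example and are not taken from any manufacturer's implementation.

from enum import Enum, auto


class Action(Enum):
    """Possible interventions, ordered from gentle persuasion to taking over."""
    NONE = auto()
    VISUAL_WARNING = auto()      # message on the instrument cluster
    AUDIBLE_WARNING = auto()     # repeated beep
    PULL_OVER_AND_STOP = auto()  # slow down, hazard lights on, shut the car off


class HandsOffMonitor:
    """Escalates warnings the longer no steering-wheel pressure is detected.

    The thresholds (in seconds) are illustrative only.
    """

    def __init__(self, visual_after=10, audible_after=25, stop_after=60):
        self.visual_after = visual_after
        self.audible_after = audible_after
        self.stop_after = stop_after
        self.hands_off_seconds = 0.0

    def update(self, hands_on_wheel, dt):
        """Call once per control cycle; dt is the elapsed time in seconds."""
        if hands_on_wheel:
            self.hands_off_seconds = 0.0  # driver responded: de-escalate completely
            return Action.NONE
        self.hands_off_seconds += dt
        if self.hands_off_seconds >= self.stop_after:
            return Action.PULL_OVER_AND_STOP
        if self.hands_off_seconds >= self.audible_after:
            return Action.AUDIBLE_WARNING
        if self.hands_off_seconds >= self.visual_after:
            return Action.VISUAL_WARNING
        return Action.NONE


if __name__ == "__main__":
    monitor = HandsOffMonitor()
    # Simulate a driver who ignores every warning for 70 seconds.
    for second in range(70):
        action = monitor.update(hands_on_wheel=False, dt=1.0)
        if action is not Action.NONE:
            print(f"t={second + 1:3d}s -> {action.name}")

The point of the sketch is simply that the design decision of where to place the thresholds, and whether the final stage exists at all, is a value-laden choice made by the manufacturer, not by the driver.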

5.1.4 The Pedestrian Who Was Jaywalking

As mentioned above, Uber claims that the driver did not pay sufficient attention to the road and that the software was tuned too loosely, failing to correctly identify the pedestrian. One reason for the failure was that the pedestrian was jaywalking; the software was programmed to pay extra attention to pedestrians only when approaching crosswalks. According to the NTSB’s report, Uber’s software for detecting and classifying other objects “did not include a consideration for jaywalking pedestrians.”28 On the one hand, Uber was clearly at fault for not making its software more sensitive to pedestrians who might cross the road anywhere; this has also to do with the observation made in Chapter 2 that our risk assessment methods usually assume perfect human behavior. On the other hand, one might argue that the pedestrian was at fault, too. It seems cruel to hold someone responsible for their own death when it was due to a rather minor misdemeanor such

26. Stewart, “Tesla’s Self-Driving Autopilot Involved in Another Deadly Crash.”
27. I thank Filippo Santoni de Sio for this important addition.
28. Quoted here from Hawkins, “Serious Safety Lapses Led to Uber’s Fatal Self-Driving Crash, New Documents Suggest.”


as jaywalking. It is nevertheless interesting to observe that in the minds of many, it did matter that the pedestrian was also at fault. To be sure, I am not claiming that equal shares of responsibility should be assigned to all parties. I only intend to argue that in such a moral dilemma, it may matter that the victim of the accident was also at fault. In an empirical study at the Massachusetts Institute of Technology simulating similar accidents with autonomous cars, it has been shown that some participants feel less responsible for the lives of those who do not respect the law than for the lives of those who do.29

5.2 Crash Optimization and Programming Morality in the Machine

In the early days of autonomous technologies, there was a prominent discussion about what to do when the use of technology involved moral judgments. Are we supposed to program morality into the machine, and if so, how should that be done and who should decide, for instance, how a car should react in an accident? The first writings on the subject compare such situations to a thought experiment in moral philosophy known as the Trolley Problem.30

5.2.1 Autonomous Vehicles and the Trolley Problem

Since its introduction by Philippa Foot in 1967, the Trolley Problem has been widely adopted in various forms in applied ethics because it helps to pinpoint a crucial ethical dilemma.31 This is a hypothetical situation in which a runaway trolley is rushing down a track because its brakes do not work. There are five workers working on the track, who do not hear the trolley approaching. If nothing else happens, the trolley will kill all of them. There is an escape route, though: the trolley can be diverted onto a side track, where there is only one worker. Thus, we can reduce the number of casualties from five to one by pulling a lever and diverting the trolley. Would you intervene if you had the chance?

29. Awad et al., “The Moral Machine Experiment.”
30. Goodall, “Ethical Decision Making during Automated Vehicle Crashes”; Lin, “Why Ethics Matters for Autonomous Cars.”
31. Foot, “The Problem of Abortion and the Doctrine of Double Effect.”


The Trolley Problem indicates a hypothetical, simplistic, and highly exaggerated situation, but it does help us to understand the fundamental ethical issue at stake: If you had a chance to save five people at the expense of losing one, would you do so? It helps us to distinguish between consequentialist approaches to ethics, which assess moral rightness in terms of expected consequences, and deontological, duty-based approaches, which primarily focus on the question of whether the rights of one worker can be given up to save five others. This is the Trolley Problem in its starkest form. More sophisticated versions of the Trolley Problem further complicate the moral dilemmas. An alternative Trolley Problem, for instance, has a bridge with a person standing on it in place of the side track. You are standing on the same bridge. By pushing the other person off the bridge, you can achieve the same outcome of stopping the runaway trolley, saving five lives. In terms of the outcome, the question of whether to push the individual off the bridge is the same as the question of whether to pull the lever, but it emphasizes much more clearly that your intervention leads to someone losing their life. The Trolley Problem has countless variations and has been influential in various fields of applied ethics.

The trolley in the Trolley Problem was clearly not designed by an engineer, my engineering students often object. First, engineering systems include redundancies in order to increase safety. For instance, it is likely that the trolley has secondary brakes that will kick in if the primary brakes fail, especially if it has been designed properly. Since the trolley could potentially cause serious harm, there may even be an emergency tertiary brake system in place to prevent accidents, or at least to reduce its speed and thereby the likelihood of harm. Second, in addition to redundancies, engineering systems have features that mitigate the consequences of an unavoidable accident. A heavy trolley that could potentially kill many people would be expected to have various types of warning mechanisms that would give those in the surrounding area a chance to escape. The third objection is a rather fundamental one and perhaps the most problematic: that is, the types of certainties that the Trolley Problem presumes are irreconcilable with basic notions of technological risk. Because of the inherent uncertainties of real-life situations, we can never be sure about the outcomes, which makes moral decision-making about them even more complex. Later in this section, I will argue that properly understanding technological risks may be a better way of addressing the ethical issues associated with autonomous cars.


The Trolley Problem has featured prominently in early discussions of ethics in autonomous cars, concerning how to program a car that will unavoidably be involved in accidents. The typical example, following the elegantly simple and naively simplistic rationale of the Trolley Problem, is a situation in which a fully autonomous car is about to be involved in an unavoidable accident. The car identifies a group of five pedestrians too late, making an accident inevitable; braking alone fails, and the accident will most likely be fatal. The only alternative is to swerve to the other side of the road, where there is only one pedestrian; yet swerving will have a fatal outcome for that pedestrian. Should the car decide to hit one pedestrian instead of five?

Another, perhaps more realistic, accident scenario is the following. Suppose that a child runs from behind a parked car and in front of an autonomous car. The system identifies the object as a person and starts to brake, but the distance is too short for it to avoid an accident. There are now several options from which the car needs to choose, each of which offers a different pattern of harm for different people; which harm and for whom should the car prefer? The first option is for the car to continue on the same trajectory and brake as hard as possible (assuming that the system allows for abrupt braking in emergencies), limiting the consequences for the running child. The second option is to swerve to the right and bump into the parked cars on the side of the road, causing damage to the autonomous car and also potentially to its driver and other passengers; damage is also caused to the parked cars and possibly to anyone who may be sitting in them. The third option is to swerve to the left into another car approaching from the opposite direction, causing a crash with that car at the combined speed of the two cars, most likely causing serious damage to the autonomous vehicle, its driver, and its occupants as well as to the approaching car and its occupants. Which option is to be preferred? Should the car be designed in a utilitarian fashion, aiming to maximize utility and minimize harm? This reasoning would likely result in hitting the running child. Or should the algorithm take into account the vulnerability of the people who will be hurt? Pedestrians and cyclists, certainly if they are young, are considered to be among the most vulnerable traffic participants. Reasoning along these lines would make the car swerve to the right or the left, while it could probably not make a conclusive choice as to which side. Or should the car be designed to save its own driver and passenger first? The pool of scenarios becomes


larger and larger. What if the accident involves a choice between hitting a pedestrian or a cyclist, or between two different pedestrians of different age and gender? Should the car hit the eight-year-old girl or the eighty-year-old man?32 The Trolley Problem has been criticized for failing to fully represent the ethical issues of autonomous cars, since “it represents a false dilemma, it uses predetermined outcomes, it assumes perfect knowledge of its environment, and it has an unrealistic premise.”33 If an autonomous car is about to be involved in an accident (which can happen in a split second), will it have sufficiently registered and assessed the risks of alternative scenarios, such as swerving into the cars parked alongside the road, and considered any people who might be sitting in those cars, including the number of them and their ages? All this information is significant if the car is, for instance, designed primarily to reduce harm. To an engineer, the problem might come across very much as a practical one. More sensors and more capacity for detecting and storing data could better prepare the car for such decisions; wasn’t the basic idea of autonomous cars that they should make the best-informed decision in the case of an accident? The decision on how to act remains, however, a moral one, regardless of how much information is available. While Trolley Problem scenarios are rare, “their emotional saliency is likely to give them broad public exposure.”34 Moreover, with the enormous growth in autonomous technologies, such scenarios will become more likely to occur. Perhaps more importantly, no matter how unlikely these scenarios are, we must design algorithms to respond to them. Indeed, we are already designing for these ethical questions.35 The Uber accident, for example, could have amounted to a Trolley Problem dilemma if the car had been allowed to abruptly change course or brake. Would the car have swerved if necessary? To which side of the road? One way of arriving at morally defensible answers to such questions is to conduct an empirical investigation. In 2015, in an attempt to “align moral algorithms with human values,” the dilemmas of the autonomous vehicles that resemble the Trolley Problem were empirically investigated in a large-

32. Lin, “Why Ethics Matters for Autonomous Cars,” 72.
33. Goodall, “Away from Trolley Problems and toward Risk Management,” 812.
34. Bonnefon, Shariff, and Rahwan, “The Social Dilemma of Autonomous Vehicles,” 1573.
35. Bonnefon, Shariff, and Rahwan, “The Social Dilemma of Autonomous Vehicles.”


scale study.36 Interestingly, the majority of the participants approved of the utilitarian calculus, assuming that minimizing damage should be the priority in crash-optimization algorithms, even at the cost of sacrificing both the car and its passengers. Yet the participants were not willing to buy cars with a utilitarian crash-optimization algorithm; they preferred to buy autonomous vehicles which primarily protected their own passengers. Thus, if cars were designed to follow the utilitarian logic of limiting overall harm at any cost – thus potentially favoring harming the driver and passengers in the car – the impact on overall safety might be limited because it would likely negatively affect consumers’ purchasing behavior. Hence, when the utilitarian calculus is designed into autonomous vehicles, it is likely to increase the safety of individual vehicles in accidents, but to have the opposite impact on overall safety. Interestingly, several empirical studies have shown that we have different expectations of robots and humans in dilemmas.37 That is, respondents expected robots and autonomous systems to make decisions in a utilitarian fashion – if necessary, to sacrifice one person for the benefit of many – but said they would blame human agents who behaved in this way. I will return to this issue in the next section. An important reason not to dismiss the Trolley Problem altogether in relation to the ethics of autonomous vehicles is that it seems to speak to the public imagination as to the relevance of ethics to the design of these vehicles. The Trolley Problem depicts a “straightforward example of ethical decision making for an automated vehicle,” and points up the difficulties of leaving a morally complex and multifaceted choice to a machine. While the strong appeal of this thinking cannot be ignored, we should be aware that an exclusive focus on this thought experiment could be easily interpreted by the public to mean that it is the only ethical problem associated with self-driving cars.38 This would be highly problematic because, as I will argue in the next subsection, the ethical issues associated with autonomous vehicles are much broader and more complex than those captured by the simplistic Trolley Problem.
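To make the contrast between these two design philosophies concrete, the following sketch scores a handful of candidate maneuvers under two different cost functions: a purely utilitarian one that minimizes total expected harm, and an occupant-protective one that weights harm to the car's own passengers more heavily. The maneuvers, harm values, and the weighting factor are invented purely for illustration; they do not come from any crash study or from the Moral Machine experiment.

from dataclasses import dataclass


@dataclass
class Maneuver:
    """Expected harm of one crash-avoidance option, on an arbitrary 0-1 scale."""
    name: str
    harm_to_occupants: float
    harm_to_others: float


def utilitarian_score(m):
    # Minimize total expected harm, regardless of who bears it.
    return m.harm_to_occupants + m.harm_to_others


def occupant_protective_score(m, occupant_weight=3.0):
    # Same sum, but harm to the car's own occupants counts extra.
    return occupant_weight * m.harm_to_occupants + m.harm_to_others


# Invented values, chosen only so that the two policies disagree.
options = [
    Maneuver("brake in lane (hits the child)", 0.05, 0.80),
    Maneuver("swerve into parked cars", 0.40, 0.30),
    Maneuver("swerve into oncoming traffic", 0.70, 0.60),
]

print("utilitarian choice:        ", min(options, key=utilitarian_score).name)
print("occupant-protective choice:", min(options, key=occupant_protective_score).name)

Which of the two cost functions a manufacturer adopts, and how heavily occupants are weighted, is exactly the kind of value-laden design choice discussed above; the code only makes the trade-off explicit.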

36. Ibid.
37. Malle et al., “Sacrifice One for the Good of Many?”; Voiklis et al., “Moral Judgments of Human vs. Robot Agents.”
38. Goodall, “Away from Trolley Problems and toward Risk Management,” 810.


5.2.2 The Ethics of Autonomous Vehicles: Beyond the Trolley Problem

The broader ethical issues of autonomous vehicles include at least the following three categories: (1) how to allocate responsibilities, (2) how to deal with risk management decisions, and (3) how to address unanticipated consequences, including misuse or abuse of data, privacy, and fairness. I will defer a discussion of responsibility until the next section, where I review the broader ethical issues of autonomous technologies, including the issue of agency. Let me briefly review the other two categories here. The Trolley Problem assumes the possible outcomes to be known with certainty, whereas – as Noah Goodall argues – the outcomes of accidents can be confidently predicted only in terms of probabilities.39 “Fatality rates depend on such arbitrary inputs as whether a passenger is sober or drunk (twice as likely to die if drunk), male or female (28% more likely to die if female), young or old (a 70-year-old is three times more likely to die than a 20-year-old).”40 What the sensors of automatic cars detect involves similar probabilities, expressing how likely it is, for example, that an autonomous vehicle will fail to detect another car in its blind spot. Therefore, even a decision as simple as changing lanes involves “straightforward risk calculations based on highly uncertain information,” and this is why Goodall argues that we should discuss the ethical issues of autonomous vehicles in terms of managing associated risks.41 Every single maneuver decision that the car takes also involves a risk calculation in terms of probabilities. This risk-based framing offers a much more nuanced lens through which to look at accident scenarios and to arrive at ethically defensible conclusions. Moreover, in addition to the anticipated risks which need to be managed (giving rise to ethical problems), there is also the problem of unanticipated risks. Patrick Lin lists some of these possible scenarios in order to show how they can give rise to unanticipated questions with serious ethical implications.42 For instance, would privately and publicly owned vehicles have

39. Goodall, “Away from Trolley Problems and toward Risk Management”; Kockelman and Kweon, “Driver Injury Severity.”
40. Quoted from Goodall, “Away from Trolley Problems and toward Risk Management,” 813. See also Evans, “Death in Traffic.”
41. Goodall, “Away from Trolley Problems and toward Risk Management,” 813.
42. Lin, “Why Ethics Matters for Autonomous Cars,” 80–81.


different allegiances to their “owners”? Should a fire truck behave differently from a privately owned vehicle in an accident that requires self-sacrifice?43 If autonomous vehicles were to drive more conservatively, would the car insurance industry go bankrupt? Or would the slow and conservative driving that comes with increased safety trigger road rage, causing even more accidents? What about the possibility of hacking into the wireless systems and taking over the operation of the car, turning it into a security concern for both the passengers and the surrounding traffic participants, and potentially leading to mega-accidents? The film industry has already imagined and spelled out many scenarios involving such hacking. Moreover, if the crash avoidance algorithms of autonomous vehicles are generally known, will that not lead to other drivers driving recklessly, assuming that the autonomous vehicles are paying attention in order to avoid accidents at any cost? Other unanticipated problems and risks have to do with data management and the associated privacy issues, as well as with questions of fairness that may arise. I will return to some of these issues when discussing the broader ethical issues of AI in the next section.
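Goodall's point that even routine maneuvers are probability-weighted risk calculations, rather than choices between certain outcomes, can be illustrated with a small sketch. The probabilities and harm values below are invented; the point is only that the "best" maneuver is the one with the lowest expected harm, computed from uncertain sensor information rather than from known consequences.

def expected_harm(outcomes):
    """Probability-weighted harm of one maneuver.

    `outcomes` lists (probability, harm) pairs for the things that might go
    wrong; the remaining probability mass is the uneventful case, harm 0.
    """
    return sum(p * harm for p, harm in outcomes)


# Invented numbers for a routine lane change: the sensors only give a
# probability that another car is hidden in the blind spot, so every option
# is a gamble rather than a choice between known outcomes.
maneuvers = {
    "stay behind the slow truck":   [(0.01, 2.0)],               # small rear-end risk
    "change lanes immediately":     [(0.05, 5.0)],               # possible side collision
    "slow down, then change lanes": [(0.01, 2.0), (0.02, 5.0)],
}

for name in sorted(maneuvers, key=lambda n: expected_harm(maneuvers[n])):
    print(f"{name:30s} expected harm = {expected_harm(maneuvers[name]):.2f}")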

5.3 The Ethics of AI

So far, we have discussed the ethical implications of autonomous vehicles, including questions regarding the programming of algorithms that need to make moral decisions in situations such as accidents, and the allocation of responsibility for them. In this section, I expand the discussion to the ethical issues of AI systems more broadly.44 Let me first say what I mean by AI.45 Broadly speaking, AI is “the science of making machines smart” by teaching machines to interpret (big) data and to identify patterns, for which algorithms are used. This trains the machines to perform a specific task both intelligently and autonomously.46 Learning

43. Ibid., 80.
44. For an overview of the topic, see a recently published book on this subject by Coeckelbergh, AI Ethics.
45. It is not my intention to review the different definitions of AI, or how different disciplines conceive of the concept. For an excellent overview, see Dignum, Responsible Artificial Intelligence, chap. 2.
46. This definition is loosely based on the definition of machine learning by the British Royal Society. See Royal Society, Machine Learning, 16.


from data requires the use of algorithms, which are becoming increasingly autonomous. “As autonomy in robots and AI increases, so does the likelihood that they encounter situations which are morally salient,”47 and this raises the question of whether AI systems should be provided with the capability for moral reasoning, or at least possess the basic tools to autonomously deal with situations requiring it. Some scholars have argued that it is inevitable that we should teach AI systems to “behave ethically” or to create “autonomous ethical machines.”48 A new subfield in applied ethics has emerged which is devoted to the latter, under the heading of “Machine Ethics.” The current assumption in this field is that “the machines are coming,”49 which is not necessarily an alarmist science fiction warning, but rather an observation based on the growing interest and application of artificial moral agents (AMA). These AMAs are, in principle, capable of making complex decisions and, like any other decision-making system, can make both morally good and morally bad decisions. We need therefore to program morality into them, the argument goes, because when programmed and trained properly these systems are better, more efficient appliers of moral reasoning than humans, without potential problems such as the bias to which human moral reasoning is prone. In short, the machines are coming, and they are going to make complex moral decisions; let’s make sure they make good decisions and not bad ones.

Against this view, other scholars have argued that it is problematic to leave moral decisions entirely to the machine.50 Aimee Van Wynsberghe and Scott Robbins, for instance, question the basic premises of Machine Ethics – that “robots are inevitable” and “they could cause harm” – because they make too many assumptions.51 Robots are indeed growing in interest and applications, but their pace and extent of development are not simple facts to

47. Van Wynsberghe and Robbins, “Critiquing the Reasons for Making Artificial Moral Agents,” 720; emphasis in original.
48. Anderson and Anderson, “Robot Be Good,” 15.
49. Allen, Wallach, and Smit, “Why Machine Ethics?,” 12.
50. See, e.g., Bryson, “Robots Should Be Slaves”; Sharkey, “Can Robots Be Responsible Moral Agents?”; Van Wynsberghe and Robbins, “Critiquing the Reasons for Making Artificial Moral Agents.”
51. Van Wynsberghe and Robbins, “Critiquing the Reasons for Making Artificial Moral Agents.”


be taken for granted, but rather the result of investments and advances in certain application areas. While the reasoning that AI systems could harm humans is correct, it remains questionable whether building morality into them is the solution for dealing with potential harms; “lawn mowers, automatic doors, curling irons [and] blenders” are other examples of technologies that can cause harm, but we have never come up with the solution of teaching them morality in order to avoid harm, Van Wynsberghe and Robbins argue.52 Instead, we include safety features in their design; an approach such as Safe-by-Design is a tangible way of addressing this issue. To be sure, this objection does not deny that AI systems engender important moral issues; it only questions whether programming the capability of moral reasoning into them is the right way to deal with these issues. According to Deborah Johnson, the claims made by machine ethicists in favor of AMA show at best that “computers are moral entities” and not necessarily “moral agents.”53 Agency is the first ethical issue I will discuss in relation to AI: Can the AI system be said to be the acting agent in a decision that has moral implications? To put it more straightforwardly, can we assign responsibilities to the AI agent?54 In addition, I will discuss bias: Are AI systems indeed – as machine ethicists argue – more capable than humans of making ethically defensible choices? On a related note, could the data contain any bias that might result in unfair outcomes or unanticipated consequences of AI, for instance in optimization algorithms?

5.3.1 Who Is Responsible? The Problem of Agency

The question of agency features prominently in philosophical discussions about responsibility – that is, “whether and to what extent machines of various designs and functions might be considered legitimate moral agents

52. Ibid.
53. Johnson, “Computer Systems,” 195.
54. A related discussion is about “moral patiency,” which is often contrasted with agency in philosophical discussions. In short, in analogy with discussions of “animal ethics,” it raises the question of whether an entity (in this case the AI system) could have moral status and receive moral consideration. It goes beyond the focus of this introductory book to elaborate on this here; interested readers should consult Gunkel, The Machine Question, and another book in this Cambridge Applied Ethics series, Gruen, Ethics and Animals.


that could be held responsible and accountable for decisions and actions.”55 In other words, if we are to assign any responsibility to an autonomous system, we must assume that it can exercise agency – that is, it can independently and autonomously decide how to act. The first and perhaps most fundamental question here is: What exactly does autonomy entail? Autonomy in the ethical sense relates to “the right to be free to set one’s own standards and choose one’s own goals and purposes in life,” and as such, it essentially entails “features of self-awareness, self-consciousness and self-authorship according to reasons and values.”56 In this sense, it is unreasonable to say that an AI system, or a robot for that matter, could decide autonomously, since we cannot assign any self-awareness or self-consciousness to a decision made by a machine. Yet we will continue to use the term “autonomous” for a specific aspect of AI: the ability of a system to decide on an action, no matter how small, without human oversight. In this sense, the “autonomous” car discussed in the Uber car example could indeed be called autonomous since it could engage in various decisions regarding accelerating, braking, changing lanes, and stopping the car. In the automotive industry, and also in several other areas (such as military applications), there is a tendency to want to move toward what might be called fully autonomous systems, or systems that can engage in all decisions without human supervision and interference. There are certain benefits associated with such fully autonomous systems, but they also give rise to important ethical issues. I will return to the legitimacy of fully autonomous AI systems in Section 5.4 when discussing the notion of meaningful human control.57

5.3.2 AI Systems as “Moral Saints” or “Moral Offenders”: The Problem of Bias

The general view about automation in decision-making, certainly in technical literature and among engineers, is that it is a praiseworthy development, and not only for the increased efficiency it can bring. According to this view, it has a further advantage: it leads to less biased moral choices.

55. Gunkel, The Machine Question, 6.
56. European Commission, Artificial Intelligence, Robotics and “Autonomous” Systems, 9.
57. Santoni de Sio and van den Hoven, “Meaningful Human Control over Autonomous Systems.”


Emphasizing the imperfection of human beings, scientists and engineers in particular tend to argue that computers can be the “moral saints” which we humans can never be, because computers are not prone to human emotions with their explicit and implicit biases.58 According to this view, we should therefore leave certain important moral decisions to AI systems: “If we could program a robot to behave ethically, the government . . . could build thousands of them and release them in the world to help people.”59 This idea stems from the very early days of programming morality in machines in the 1990s, but its main rationale still reverberates in some engineering literature.60 For example, when a bank employee has to decide whether or not to give a loan to a customer, there is always a chance that they may be biased socially, racially, or otherwise, or perhaps act with empathy toward the client and therefore create unnecessary financial risk for the bank. Wouldn’t a machine be much better equipped to decide on such matters, since it could objectively – and only on the basis of irrefutable facts and figures, such as income and credit history – decide whether the loan should be approved? In many social processes, business transactions, and governmental matters, choices and decisions are increasingly delegated to algorithms that not only advise on but sometimes decide on how to interpret data in cases such as this.61

It has become abundantly clear in practice, as in banking applications, that this reasoning is faulty: Algorithms for bank loan approvals have often produced biased outcomes, for instance by disproportionately rejecting the applications of members of racial minorities. This is particularly puzzling because AI-based approval systems do not allow for the inclusion of applicants’ race and other personal data in the application process.62 The explanation for this is that these machine learning algorithms draw on a larger set of data to arrive at decisions; “if there is data out there on you, there is probably a way to integrate it into a credit model.”63 It has, for

58. Gips, “Toward the Ethical Robot,” 250.
59. Ibid.
60. See, e.g., Wallach and Allen, Moral Machines.
61. Mittelstadt et al., “The Ethics of Algorithms,” 1.
62. Rules and regulations may be different in different parts of the world; this observation has been made on the basis of problematic examples of AI-based loan approvals in the US.
63. Klein, “Credit Denial in the Age of AI.”


instance, been shown that several “digital footprints” can outperform more traditional credit score models in predicting who will repay a loan in full, including, for instance, the time of day at which a loan application is submitted: an application submitted in the middle of the night can be an indication of financial distress, which in turn can also be a sign of additional financial risk.64 Whether the applicant’s computer is a Mac or a PC is another digital footprint: statistics show that Mac owners pose less credit risk to banks, and hence they receive faster loans more often and at lower rates. In addition, Mac owners, according to other statistics, are disproportionately white, which means that the outcome of a loan approval algorithm may favor white over nonwhite applicants on the basis only of their type of computer. So even when race is not explicitly taken into account, the algorithm can result in a racially biased choice, which raises the question: How far can banks go in using the personal data – that is, the digital footprints – of customers when they apply for loans?65

Another example is the socio-economic data connected to zip codes. When this data is used in the approval process, it is likely that someone who lives in a relatively poor zip code will have difficulty in getting a loan approval with a low interest rate, and even if their loan is approved, it will likely be offered at a high interest rate. Thus, the socio-economic inequalities in such a zip code will only be reinforced by the use of the zip code as an indication of payback capability. This is how historical social inequality becomes embedded and exacerbated. It has been empirically shown that “algorithmically-driven data failures,” as often found in the real estate and banking sectors, can specifically damage the interests of people of color and women, deepening and reinforcing racism and sexism.66 There are several areas in which such systems bring a heightened risk of discrimination, such as crime prevention, advertising, price differentiation, and image processing.67 Another example, with regard to automated facial recognition software, has been discussed in Chapter 4.

A key issue in many of the unanticipated problems caused by AI is the so-called black box problem. This refers to a situation in which it is not known where data comes from, how it has been processed and interpreted, or what models were used and how those models were built. In other words, the

64. Berg et al., On the Rise of the FinTechs.
65. Klein, “Credit Denial in the Age of AI.”
66. Noble, Algorithms of Oppression, 4.
67. Council of Europe, Discrimination, Artificial Intelligence, and Algorithmic Decision-Making.


underlying algorithm in decision-making remains opaque or unclear, as if in a “black box.”68 One proposal for avoiding the black box problem is to introduce transparency and explainability as the leading design criteria for AI systems. Transparency refers to algorithms being openly accessible and verifiable. Explainability is defined as “the ability of individuals to obtain a factual, direct, and clear explanation of the decision-making process, especially in the event of unwanted consequences.”69 The main rationale behind explainability is that when algorithms are used to make decisions that could severely or even moderately affect a person’s health and well-being – e.g., giving a low credit score or a longer prison sentence70 – they should at least have transparent and understandable explanations that can be challenged if necessary. Simply put, when an algorithm is in charge of decisions that impact human lives, that algorithm needs to be transparent, understandable, and explainable. Explainability is important for respecting human autonomy, and also for assigning responsibility when things go wrong.71

While the call for explainability and transparency is intuitively appealing, Scott Robbins and Adam Henschke have turned the argument on its head by stating that such algorithms can only be used for specific situations “in which it is acceptable to not have an explanation or to supplement the decision of the algorithm with human oversight.”72 We thus need to build further monitoring and oversight into AI systems in order to ensure that unwanted consequences (such as the exacerbation of social inequalities) are identified in time and responded to appropriately.73

So far, we have discussed how the use of an AI system can lead to unwanted consequences such as discrimination. This category of algorithmic unfairness assumes, however, that such consequences are unintentional and a result of failure to pay attention to harmful impacts. Therefore, solutions are sought that also consider the harmful impacts, for instance, by adding explainability as a design criterion. We are thus assuming that service providers are trustworthy, aware of the problem, and both willing and able to address it.
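The mechanism behind the proxy discrimination discussed above can be illustrated with a deliberately artificial sketch: a decision rule that never sees the protected attribute can still produce very different approval rates across groups, because it relies on a feature (here, a made-up "device type" standing in for the Mac/PC example) that is correlated with group membership. All data in the example are synthetic and the thresholds are arbitrary; this is not a model of any real lender's system.

import random

random.seed(0)

def make_applicant():
    """Synthetic applicant; group membership is never shown to the decision rule."""
    group = random.choice(["A", "B"])
    # The proxy feature is distributed differently across the two groups.
    device = "mac" if random.random() < (0.7 if group == "A" else 0.3) else "pc"
    income = random.gauss(50_000, 10_000)  # same income distribution for both groups
    return {"group": group, "device": device, "income": income}

def approve(applicant):
    """A 'group-blind' rule: it only looks at income and the proxy feature."""
    score = applicant["income"] / 10_000
    if applicant["device"] == "mac":
        score += 2.0  # stands in for a correlation a trained model would pick up
    return score >= 7.0

population = [make_applicant() for _ in range(10_000)]
for group in ("A", "B"):
    members = [a for a in population if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"approval rate, group {group}: {rate:.1%}")

Because the two groups have identical incomes in this toy population, the large gap in approval rates is produced entirely by the proxy feature, which is precisely why removing the protected attribute from the input data does not by itself make an algorithm fair.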

68. Pasquale, The Black Box Society.
69. Floridi et al., “AI4People,” 702.
70. Robbins, “A Misdirected Principle with a Catch.”
71. Dignum, Responsible Artificial Intelligence.
72. Robbins and Henschke, “The Value of Transparency,” 588.
73. Floridi et al., “AI4People,” 702.


A somewhat similar issue is presented by the capabilities that AI systems create to intentionally “capture and manipulate environments for the extraction of value” using optimization-based systems.74 Optimization practices (which are often utilitarian-based) can create their own ethical problems, such as issues of justice and fairness, as discussed in Chapter 3, and in AI applications they can lead to comparable problems. For instance, the use of crowdsourced optimization applications such as Waze can cause serious inconvenience for residents of towns and neighborhoods close to busy routes, since they can “automatically” direct heavy traffic away from highways and into residential streets. Too often, the damaged parties depend on the cooperation of those who have caused harm to decrease that same harm. In other words, if Waze did not cooperate by blocking certain newly busy routes in the residential areas, they would continue to get busier. Some authors have proposed “protective optimization technologies” as a tool to help the affected parties (such as the residents mentioned above) to “intervene from outside the system [so that they] do not require service providers to cooperate, and can serve to correct, shift, or expose harms that systems impose on populations and their environments.”75
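The underlying tension can be shown with a toy routing example: an objective that only minimizes the driver's travel time will send traffic down a residential shortcut, while an objective that also prices the burden imposed on residents will not. The road names, times, and weights below are invented for illustration and are not meant to describe how Waze or any protective optimization technology actually works.

# Two candidate routes for the same trip; every number is invented.
routes = {
    "highway":              {"minutes": 22, "residential_exposure": 0},
    "residential shortcut": {"minutes": 18, "residential_exposure": 120},
}

def driver_only_cost(route):
    # What a pure travel-time optimizer minimizes.
    return route["minutes"]

def social_cost(route, externality_weight=0.05):
    # Same objective plus a penalty for the burden shifted onto residents.
    return route["minutes"] + externality_weight * route["residential_exposure"]

for cost_fn in (driver_only_cost, social_cost):
    best = min(routes, key=lambda name: cost_fn(routes[name]))
    print(f"{cost_fn.__name__:16s} -> {best}")

The choice of the externality weight, and whether such a term appears in the objective at all, is made by the service provider; protective optimization technologies are proposed precisely for the cases in which it does not.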

5.4 “Trustworthy” and “Responsible” AI: Toward Meaningful Human Control

With the increasing use of AI systems in the public and private sectors (and within both military and civilian applications), there have been many initiatives to address the socio-technical and ethical issues associated with AI.76 The “requirements for trustworthy AI” as presented by the EU High-Level Expert Group on Artificial Intelligence are an interesting response to this. These criteria include, for example, designing to avoid unfair bias: “AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias,” and the continuation of such bias can lead to “unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalization.”77 Other

74. Kulynych et al., “POTs.”
75. Ibid.
76. See, e.g., European Commission, Artificial Intelligence, Robotics and “Autonomous” Systems.
77. These quotations are from the website of the High-Level Expert Group on Artificial Intelligence, where seven requirements have been formulated and spelled out. It should be mentioned that avoidance of unfair bias has been subsumed under the main requirement of diversity, nondiscrimination, and fairness. See https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1#Diversity (consulted December 6, 2019).


important criteria discussed in this and various similar policy-guiding documents are respect for human autonomy and the need for oversight of AI systems. Autonomy and agency are perhaps the most important and intricate ethical issues in AI and ethics. In conjunction with these, there is the recurring question of how to allocate responsibility when an autonomous system may potentially harm human beings. As discussed above, explainability is one response to these concerns, but it has little relevance to the allocation of responsibilities among different “agents,” whether human or mechanical. There have been concerns about the possibility of a “responsibility gap,” that is, “a situation where it is unclear who can justifiably be held responsible for an outcome.”78 As the argument goes, the manufacturer cannot predict or control a machine’s future behavior, and so it would be unfair to hold the manufacturer morally responsible for it.79 The responsibility gap is particularly problematic in relation to the use of AI in modern warfare and autonomous weapon systems. Robert Sparrow addresses this issue by asking who should be held responsible “when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime.”80 Sparrow identifies three parties which could potentially be held responsible: the designer of the weapon system, the commanding officer ordering its use, and the weapon system itself; however, he states that none of these could be satisfactorily held responsible for the war crime, and he therefore fears a responsibility gap. Hence, as he argues, the deployment of fully autonomous systems, or “killer robots,” is ethically unjustified.81 This argument is frequently encountered in the academic literature,82 and also in the policy world.83

78. Nyholm, “The Ethics of Crashes with Self-Driving Cars,” 3.
79. Matthias, “The Responsibility Gap,” 175; Sparrow, “Killer Robots.”
80. Sparrow, “Killer Robots,” 62.
81. Sparrow, “Killer Robots.”
82. Asaro, “On Banning Autonomous Weapon Systems”; Altmann et al., “Armed Military Robots”; Hevelke and Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles.”
83. In 2018 a resolution by the European Parliament was accepted that called for the urgent negotiation of “an international ban on weapon systems that lack human control over the use of force”; see www.europarl.europa.eu/doceo/document/A-8-2018-0230_EN.html?redirect. This was also a central topic of debates in the Meeting of Experts on Lethal Autonomous Weapons Systems in the UN Convention on Conventional Weapons in 2015; see www.unog.ch/80256EE600585943/(httpPages)/6CE049BE22EC75A2C1257C8D00513E26.


While the sentiment “we do not want weapons that could autonomously decide to take human lives” is widely shared, the philosophical rationale of a responsibility gap is worthy of further inquiry. The responsibility gap can, in principle, also arise in nonmilitary AI applications, for instance, when an autonomous vehicle is involved in an accident with possible casualties.84 Sven Nyholm is less worried about the responsibility gap, as he argues that the framing of this “gap” is problematic. Rather than assigning independent agency to AI systems, Nyholm argues, we should understand how to engage in “human–machine collaborations” where AI systems are “acting under the supervision and authority of the humans involved.”85 Therefore, responsibility remains at all times with humans since they initiate and supervise human–machine collaborations.86 This line of thinking resonates with recent discussions addressing the ethics of AI in terms of how to generate and maintain “meaning” in human–machine interactions.

5.4.1 “Meaningful Human Control”

In their thinking about the future of robotics and AI systems, and in conjunction with discussions of responsible research and innovation,87 some scholars have asked how we can meaningfully intervene and participate in, or meaningfully control, AI systems.88 This focus on “meaningful” relationships has also been highlighted in a large, influential campaign and in a

84. Hevelke and Nida-Rümelin, “Responsibility for Crashes of Autonomous Vehicles.”
85. Nyholm, “Attributing Agency to Automated Systems,” 1217.
86. The flip side of the issue of moral agency is the question of whether AI systems (such as robots) are also to be considered moral patients. That is, can these machines hold any claim to rights that must be respected in a moral sense?
87. Virginia Dignum has published a book on “responsible artificial intelligence,” which she presents in terms of meaningful interaction with the machine and a number of other criteria; see Dignum, Responsible Artificial Intelligence.
88. Ibid., 90; Santoni de Sio and van den Hoven, “Meaningful Human Control over Autonomous Systems.”

Morality and the Machine

135

series of open letters signed by many NGOs, prominent academics, and industry leaders calling for a ban on “offensive autonomous weapons beyond meaningful human control.”89 What exactly is meant by this notion, beyond the convincing argument against full autonomy for weapon systems, remains rather unclear, however. Filippo Santoni de Sio and Jeroen van den Hoven have philosophically investigated the notion of “meaningful human control” and have proposed two conditions for autonomous systems to remain meaningfully under control, namely “tracking” and “tracing,” which could be used as design criteria (in line for Design for Values).90 The tracking condition means that the system should “be able to respond to both the relevant moral reasons of the humans designing and . . . the relevant facts in the environment in which the system operates,” and the tracing condition states that it should have “the possibility to always trace back the outcome of its operations to at least one human” equipped with sufficient technical capabilities and moral awareness. Thus, this is not about causally tracing an action back, which is neither a necessary nor a sufficient moral condition.91 While the notion is primarily introduced and presented in regard to autonomous weapons, Santoni de Sio and van den Hoven argue that it has important implications in the ethics of AI and more broadly, robotics, because the issue of meaningful control over autonomous systems is highly significant in nonmilitary applications such as transportation and healthcare practices.92 Let us finish this chapter by returning to the opening example of the Uber car and by investigationg what it would mean if the car had been designed with meaningful human control conditions as design criteria. The tracking condition here suggests that we should not only sharpen the responsiveness of the car’s system for moral reasons, but also design the system or its environment to reduce and ideally eliminate the risk of encountering morally challenging situations. For instance, if a vehicle is not (yet) able to “safely interact with pedestrians and cyclists,” the traffic infrastructure should be designed in such a way “as to simply prevent the possibility of this 89

See for a list of examples: https://autonomousweapons.org/compilation-of-open-lettersagainst-autonomous-weapons/.

90

Santoni de Sio and van den Hoven, “Meaningful Human Control over Autonomous Systems.”

91

Ibid., 1 and 11.

92

Mecacci and Santoni de Sio, “Meaningful Human Control as Reason-Responsiveness.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.008

136

Ethics and Engineering

interaction, for instance, by providing separate lanes for autonomous and traditional vehicles.”93 The tracing condition assesses, among other things, “the reasonableness of the normative expectations attributed to the driver to perform certain tasks and supervise certain operations.”94 This could mean that certain gaps must be filled, for instance, by putting in place systems such as the “autopilot nag” discussed earlier in this chapter. It could also mean that new training courses and driving license examinations, and new laws and legislation that attribute liability in the case of an accident, are needed before before autonomous vehicles can be permitted in the traffic. Would the meaningful human control approach have prevented the Uber accident? It is very hard to make such a claim in hindsight, as even with a professionally trained driver, and the appropriate laws in place stipulating exactly who is responsible in the case of an accident, accidents can still happen. Meaningful human Control could, however, add something here. First, it would help us to avoid the responsibility gap, as extensively discussed in the case of the Uber car. Second, and in conjunction with the first issue, it would help us to be aware of the distribution of responsibilities. When responsibilities have been attributed more clearly, it is reasonable to expect that accidents will become less likely and less severe.

5.5 Summary We are moving toward the era of machines that are partially in charge of moral decisions, as in the case of self-driving cars. This chapter has reviewed the accident with the Uber self-driving car in Arizona in 2018, and discussed the complexities associated with assigning responsibilities when such an accident occurs. This is a typical “many hands” problem because it is difficult, if not impossible, to say precisely whose “hands” caused the accident. This problem is even more complicated because one of the pairs of hands involved is that of a machine. This is a typical situation that arises from a joint decision between human and machine, raising the question: Who is responsible for the accident? Can we ascribe any form of responsibility to the car, or is the responsibility solely with the car designer or manufacturer? 93

Santoni de Sio and Van den Hoven, “Meaningful Human Control over Autonomous Systems,” 12.

94

Ibid.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.008

Morality and the Machine

137

Who else might have contributed to the accident? The chapter has further reviewed crash optimization programs and how they include ethical considerations, while focusing more broadly on the ethics of self-driving cars. While previous discussions of the latter have mostly been in terms of the classic Trolley Problem in moral philosophy, I have also discussed the broader ethical issues, including the question of how to allocate responsibilities, how to deal with risk management decisions with ethical implications, and how to address unanticipated consequences, including the issues of misuse or abuse of data, privacy, and fairness. In line with the nonbinary focus of this book, I have not dealt here with the question of whether we should design and deploy AI. Instead, I have discussed the ethics of AI, focusing more specifically on the problems of agency and bias. If we are to assign any responsibility to an autonomous system, we must assume that it can exercise agency – that is, it can independently and autonomously decide how to act. This is problematic for fundamental and practical reasons. There is a tendency, particularly among scientists and engineers, to argue that computers could be the “moral saints” we humans can never be, because they don’t have biases and other human emotions I have shown, with examples from loan approval practices, why this is incorrect and highly problematic. Finally, I have discussed the notion of meaningful human control in autonomous technologies, in order to show a new and powerful way of looking at human–machine interactions from the perspective of active responsibilities.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.008


Part III

Engineering Ethics, Sustainability, and Globalization

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.009


6

Sustainability and Energy Ethics

6.1 Biofuel and a “Silent Tsunami” in Guatemala At the beginning of this century, a global food crisis gave rise to massive social unrest throughout the world. Between 2004 and 2008, global food prices for key crops such as corn (maize) and soy doubled; the prices kept going up, and in 2012 they reached an all-time high.1 The unrest took place in various countries in Asia, Africa, and Central America, but it was most painfully visible in a country such as Guatemala, where even before the food crisis, access to basic food had already been limited. With a corn-based diet, Guatemalans saw not only tortilla prices going rapidly up but also the prices of other essential foods such as eggs; corn is also used in chicken feed.2 While Guatemala is not considered a poor country per se – the World Bank lists it as a lower-middle-income country – it ranks as one of the lowest in Latin America in the Human Development Index.3 More importantly, almost half of its population cannot afford the cost of the basic food basket.4 Moreover, Guatemala is a multi-ethnic country with huge inequalities between ethnicities, and the rising food prices exacerbated those inequalities, especially for indigenous Guatemalans, who experienced “deprivation in multiple aspects of their lives, including food security.”5

1 Weinberg and Bakker, “Let Them Eat Cake.”
2 Rosenthal, “As Biofuel Demand Grows, So Do Guatemala’s Hunger Pangs.”
3 Tomei and Diaz-Chavez, “Guatemala.”
4 This information is adapted from the website of the UN World Food Program: www.wfp.org/countries/guatemala (consulted February 6, 2020). Rosenthal’s article in the New York Times that reports on this matter, “As Biofuel Demand Grows, So Do Guatemala’s Hunger Pangs,” dates from 2013, and the situation has not changed since then.
5 Quoted from the report of the UN World Food Program, www.wfp.org/countries/guatemala (consulted February 6, 2020).


An important question during the global food crisis was to what extent (if at all) the increasing demand for biofuels had affected the prices of edible crops such as corn. While the role of biofuels was at first played down in official governmental reports in the countries that were incentivizing their production, it eventually became clear that the increasing demands for biofuel were having a severe impact on the food supply. An official UK report published in 2008 mentioned, for the first time, that the competition for corn used in biofuels had played a “significant” role in the rise of global food prices.6 Later in 2008, a World Bank report, leaked to the Guardian, observed that biofuel had forced global food prices up by 75 percent, a figure that “emphatically contradicts the US government’s [earlier] claims that plant-derived fuels contribute less than 3% to food-price rises.”7 The UN World Food Program called this crisis a “silent tsunami” that was pushing more than 100 million people into hunger.8

At the beginning of the century, given the rising price of crude oil and growing awareness of the damaging impact of fossil fuel emissions on climate change, various laws and regulations were introduced to incentivize and sometimes even to require the use of biofuels (for instance as additives to fossil fuels). Policies in the US and Europe requiring suppliers of transportation fuel to blend their fossil fuels with biofuels guaranteed the biofuels industry and suppliers a minimum market. Moreover, various subsidies were put in place to incentivize such growth. Together, these disrupted the markets for several food crops,9 because it was often more beneficial for the European countries and the US to import food crops for biofuel production from other countries, such as Guatemala, that produced them more cheaply. This resulted in food shortages and food price spikes in many countries in Latin America and Africa when edible crops (such as corn) normally produced for food were exported for biofuel production. In addition, non-corn-based biofuel had a severe impact on corn prices. For instance in Guatemala, land that was previously used for domestic corn production was repurposed for growing sugarcane and African palm, which were exported mostly to the US as main ingredients of biofuels.10

6 Borger and Vidal, “New Study to Force Ministers to Review Climate Change Plan.”
7 Chakrabortty, “Secret Report.”
8 Quoted from UN News, https://news.un.org/en/story/2008/04/256842-global-food-crisis-silent-tsunami-threatening-over-100-million-people-warns-un (consulted February 6, 2020).
9 Pilgrim and Harvey, “Battles over Biofuels in Europe.”


Another important reason why Guatemala was hit particularly hard by the food crisis was that nearly half of the corn it needed was imported from the US, so when the US set a policy of using 40 percent of its crop to make biofuel, corn prices spiked in Guatemala.

6.1.1 The Development of Three Generations of Biofuel

“Biofuel” often refers to combustible liquid fuel derived from various types of biomass such as “biodegradable agricultural, forestry or fishery products, wastes or residues, or biodegradable industrial or municipal waste.”11 Biofuel is particularly important in transportation, because it is the only alternative to fossil fuel, not only in cars but also in aviation, shipping, and heavy-goods road transportation. There are two main types of biofuel, bioethanol and biodiesel. Bioethanol is produced by fermentation, and it can replace gasoline (petrol) or be used as an additive to it. Biodiesel is produced from naturally occurring plant or seed oils and can replace or be added to diesel fuel. Biofuel thus has a huge potential for reducing the need for fossil fuels in transportation while also diminishing greenhouse gas emissions. The development of biofuels has gone through three generations, and each successive generation has advanced the technology while addressing some ethical concerns associated with the earlier biofuels.

First-generation biofuels – also called “conventional” biofuels – are based on edible crops such as sugarcane and corn (for bioethanol production) and rapeseed, soybean, sunflower and palm oil (for biodiesel production). This was the generation that contributed to the “food versus fuel” dilemma and the ensuing global food crisis.12 Moreover, in order to provide for global biofuel demand, large areas of arable land were needed, creating a perverse incentive to cut down forests to make land available. Deforestation was an especially serious problem in “tropical countries such as Malaysia and Indonesia that account for about 80% of the world’s supply of palm oil.”13

10 Rosenthal, “As Biofuel Demand Grows, So Do Guatemala’s Hunger Pangs.”
11 Royal Academy of Engineering, Sustainability of Liquid Biofuels, 11.
12 Gui, Lee, and Bhatia, “Feasibility of Edible Oil vs. Non-Edible Oil vs. Waste Edible Oil as Biodiesel Feedstock.”


Thus, a technological solution that had been proposed to combat climate change not only caused damage to the environment and wildlife, but also contributed to the destruction of a crucial natural means of mitigation, since forests are important carbon sinks. The increasing demands for biofuel also led to land that was previously destined for food and feed crops being used for other crops that could be exported for biofuel production. For instance, during the food crisis, the Suchitepéquez province in Guatemala, which used to be a major corn-producing region, was turned into plantations growing sugarcane and African palm for export to bioethanol production plants in Europe.14

The second generation of biofuels was designed and developed to address several of these concerns. These biofuels, also referred to as advanced biofuels, are derived from non-food biomass and fall into two categories: (1) bioethanol based on crops such as switch grass, other lignocellulosic materials such as sawmill residues and wood waste, agricultural residues, and forest and wood wastes, as well as solid waste such as municipal waste; and (2) biodiesel based on animal fats and used cooking oils.15 Second-generation biofuels eliminate the competition between food and fuel, since their production does not require arable farmland.16 Yet their availability is fairly limited, as they perform poorly in cold temperatures (in the case of biodiesel), and there are biosafety concerns about potentially contaminated residues and animal fats.17

The third generation of biofuels is based on micro-algae. It too emerged as a response to the first generation, and also because of the “inefficiency and unsustainability of the use of food crops as a biodiesel source.”18 Micro-algae are efficient at producing biomass because of their fast reproduction, and they give a high yield of oil when used in biodiesel.19

13 Ahmad et al., “Microalgae as a Sustainable Energy Source for Biodiesel Production,” 586.
14 Rosenthal, “As Biofuel Demand Grows, So Do Guatemala’s Hunger Pangs.”
15 Royal Academy of Engineering, Sustainability of Liquid Biofuels, 11.
16 Pinzi et al., “The Ideal Vegetable Oil-Based Biodiesel Composition”; Leung, Wu, and Leung, “A Review on Biodiesel Production Using Catalyzed Transesterification.”
17 Singh and Singh, “Biodiesel Production through the Use of Different Sources and Characterization of Oils and Their Esters as the Substitute of Diesel”; Janaun and Ellis, “Perspectives on Biodiesel as a Sustainable Fuel.”
18 Ahmad et al., “Microalgae as a Sustainable Energy Source for Biodiesel Production,” 584.


This type of biofuel has received a lot of attention: some experts, for instance, have argued that algae-based third-generation biofuels are the only sustainable biofuels, because they “do not require agricultural land and potable water resources.”20 However, the question of exactly what “sustainability” means is problematic.21 For example, are there unified definitions and criteria that can help us assess the sustainability of a certain type of biofuel, for instance in terms of land and water use? There is also disagreement among environmental organizations about whether algae oil can be called sustainable if it is genetically modified (as it is in some industrial production), because this can “bring about disruptive ecological, economic and social risks.”22 This brings us to the question of what “sustainable biofuel” exactly means.

6.1.2 How Sustainable Is Biofuel?

Biofuel has many important advantages, such as being a natural resource that is renewable at a fast pace, but of course we cannot focus only on the naturalness of this type of fuel in calling it sustainable. As the development of the three generations has shown, there may very well be other important ethical concerns associated with biofuel. In Section 6.2, I will introduce the provocative argument that there is no such thing as “sustainable energy.” This is an argument against using the term “sustainability” in a merely binary mode. Here, let me first briefly discuss how the literature on biofuel conceives of its sustainable aspects.

Assessing sustainability issues is admittedly a complex task, and it brings with it a host of uncertainties.23 Since the low carbon footprint of biofuel (as compared with fossil fuels) is one of the key justifications for its use, many questions arise about how to calculate such a footprint. There are indeed complex issues involved at different scales of the analysis. At the micro-scale, “changes in soil carbon content over time” are significant, while at the macro-scale the extent and pace of “development of global biofuel supply chains” will also influence the outcome of any analysis.24

19 Mata, Martins, and Caetano, “Microalgae for Biodiesel Production and Other Applications”; Ahmad et al., “Microalgae as a Sustainable Energy Source for Biodiesel Production.”
20 Gajraj, Singh, and Kumar, “Third-Generation Biofuel,” 307.
21 Patil, Tran, and Giselrød, “Towards Sustainable Production of Biofuels from Microalgae”; Balat and Balat, “Progress in Biodiesel Processing.”
22 Asveld and Stemerding, Algae Oil on Trial, 6.
23 Asveld and Stemerding, “Social Learning in the Bioeconomy.”


These considerations are very much interconnected and can vary over time, which makes analysis particularly challenging. In addition to the carbon footprint, a number of other matters have to be considered, including water demands for crops, food-related issues (for biofuel based on edible crops), land use, and energy security.

Throughout the history of biofuel development, there have been both technological and policy responses to these problems that have aimed to eliminate or at least diminish some of the concerns. In Section 6.1.1, we discussed the evolution of biofuel in terms of the three generations and how each successive generation has tried to address the concerns of the earlier one(s). In policy-making there have also been attempts to formulate sustainability criteria in order to limit problematic consequences, and these have often served as import criteria. In the EU, for instance, where 20 percent of the total consumption of biofuel was imported between 2008 and 2010, sustainability criteria were established that required the importing countries to ensure compliance if their imports were to qualify for financial incentives.25 Sustainability in biofuel use, as conceptualized by the EU’s Renewable Energy Directive of 2009, required a 35 percent reduction in greenhouse gases, and the criteria placed a strong emphasis on land use, stating that biofuel “shall not be made from raw material obtained from land with high biodiversity value” such as forests and other wooded land, or from areas that are designated for “the protection of rare, threatened or endangered ecosystems or species.”26 Whether the criteria did indeed achieve what they had been designed for remains a subject of discussion. Julia Tomei has examined these criteria in the specific case of sugarcane-based ethanol produced in Guatemala, which is mostly exported to the European market.27 While the criteria successfully dealt with the issue of land use, and Guatemalan crops for biofuel were certified as sustainable, they failed to capture several other ethical issues, especially those relevant to marginalized communities.28
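To make the structure of such criteria concrete, the sketch below encodes a highly simplified compliance check loosely modeled on the two elements quoted above: a minimum greenhouse gas saving (35 percent in the 2009 Directive) and the exclusion of feedstock from land with high biodiversity value. The function names, the emission figures in the usage example, and the way the land categories are encoded are illustrative assumptions of mine, not values taken from the Directive or from this chapter.

```python
# Illustrative sketch only: a toy compliance check inspired by the two criteria
# discussed above (minimum GHG saving and land-use restrictions). Real
# assessments under the EU Renewable Energy Directive are far more detailed.

PROTECTED_LAND = {"primary forest", "other wooded land", "protected ecosystem"}

def ghg_saving(fossil_emissions: float, biofuel_emissions: float) -> float:
    """Relative emission saving of a biofuel against a fossil fuel comparator,
    both expressed in the same unit (e.g., gCO2eq per MJ of fuel)."""
    return (fossil_emissions - biofuel_emissions) / fossil_emissions

def meets_criteria(fossil_emissions: float, biofuel_emissions: float,
                   land_type: str, minimum_saving: float = 0.35) -> bool:
    """True if the batch clears the emission threshold and does not come
    from land with high biodiversity value."""
    if land_type in PROTECTED_LAND:
        return False
    return ghg_saving(fossil_emissions, biofuel_emissions) >= minimum_saving

# Hypothetical example: a sugarcane ethanol batch with assumed emission figures.
print(meets_criteria(fossil_emissions=83.8, biofuel_emissions=45.0,
                     land_type="existing cropland"))  # True: roughly a 46% saving
```

As the Guatemalan case shows, a check of this kind can be passed while the distributional issues discussed in the text remain entirely invisible to it.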

24 Royal Academy of Engineering, Sustainability of Liquid Biofuels, 2.
25 European Commission, “On the Promotion of the Use of Energy from Renewable Sources.”
26 Ibid., articles 17(2) and 17(3).
27 Tomei, “The Sustainability of Sugarcane-Ethanol Systems in Guatemala,” 94.


Like the technology, the policy criteria also kept evolving. The revised EU Renewable Energy Directive of 2018 had a strong focus on indirect land use change, which refers to the fact that biofuel production typically takes place on land previously used for other agricultural purposes, such as growing food or animal feed. The issue of indirect land use change is important because it can lead to the “extension of agriculture land into non-cropland, possibly including areas with high carbon stock such as forests, wetlands and peatlands.”29 While this can reduce the greenhouse gas emissions, it can also lead to higher food prices in the countries that are producing the crops, with a disproportionate impact on the marginalized communities of those countries.

Sometimes, sustainability assessments recommend some types of biofuels over others. For instance, Abdul Latif Ahmad et al. have compared the three generations of biofuel based on food and non-food crops as well as microalgae-derived biodiesel, concluding that only the latter type can be sustainable in the future.30 In their definition of sustainability, these authors were specifically referring to food security issues and the environmental impact of feedstock-based biofuel, especially palm oil. In another study, Tuazon et al. argue that a holistic assessment of sustainability must include economic, social, and environmental assessments. While all of these “may be present in the same study, the results are often presented exclusively of one another with little to no attempt to consider coupling effects.”31 Building on the same line of reasoning, I will argue in this chapter that such a holistic sustainability assessment will reveal instances where several important ethical considerations cannot be achieved simultaneously. In my approach to sustainability as an ethical framework, I will identify five interconnected values that, together, comprise this overarching framework of sustainability.

6.2 There Is No Such Thing as “Sustainable Energy”!

At the turn of the century, and in the wake of renewed interest in combatting dangerous climate change, biofuel began to receive increasing attention as the sustainable alternative to fossil fuel, since it could help to deal with two main problems that made fossil fuel unsustainable: its main resource for energy production is renewable, and it can help to reduce the emissions of greenhouse gases substantially.32

28 Ibid.
29 See https://ec.europa.eu/commission/presscorner/detail/en/MEMO_19_1656 (consulted March 7, 2020).
30 Ahmad et al., “Microalgae as a Sustainable Energy Source for Biodiesel Production.”
31 Tuazon and Gnansounou, “Towards an Integrated Sustainability Assessment of Biorefineries,” 259.


I argue that the black and white dichotomy of sustainable versus unsustainable fuel is unhelpful because it leaves out many other important “colors” that reflect the complexity of the energy issues. This requires some explanation. On the one hand, it seems fair to call biofuel sustainable and to contrast it with unsustainable fossil fuel based on nonrenewable hydrocarbon fuels, which cause serious long-term climate damage. On the other hand, it seems myopic to reduce sustainability to merely renewability and lower greenhouse gas emissions. As explained in the previous section, there may be a host of other important issues at play.

The notion of sustainability has been crucial in discussions of the future of energy systems, and it has been used by both proponents and opponents of different energy systems – from biofuel all the way to nuclear energy – mostly in the evaluative mode. In this sense, the term “sustainable” has often been used as a surrogate for “good” or “acceptable.” According to this definition, no one would be against a sustainable energy system, and calling any energy system unsustainable would amount to rejecting it. Thus, sustainability has often been used as a yardstick to help us judge the desirability of certain energy technologies. Against such simplistic use of the notion, I argue that the concept of sustainability can help us to understand important dilemmas of energy production and consumption only if we can escape the “sustainable versus unsustainable” dichotomy. In political and public debate, the question of whether or not a specific type of energy should be deployed is a very important one. But to answer the question, we must first understand the important ethical issues associated with that particular energy system, while also focusing on its different technological possibilities (e.g., the generations of biofuel).33 For instance, calling biofuel “sustainable” solely because of its renewability might obscure the fact that there are several other sustainable (or otherwise morally relevant) features that we need to take into account, such as land and water use.34

32 This and the next section partially draw on Taebi, “Sustainable Energy and the Controversial Case of Nuclear Power” and Kermisch and Taebi, “Sustainability, Ethics and Nuclear Energy.”
33 Grunwald and Rösch, “Sustainability Assessment of Energy Technologies”; Löfquist, “After Fukushima.”


It also overlooks the fact that different generations of biofuel technology address each issue differently. We therefore need to revisit the concept of sustainability in the context of each energy system. Sustainability should be conceptualized as an ethical framework in order to take account of the complexity of ethical issues associated with energy technologies, and to encourage proactive ethical reflection. We will then be able to understand how an individual technology can be sustainable in one specific sense but perhaps unsustainable in another sense. Exactly how this conflict is dealt with will be the main focus of an ethically informed sustainability framework.

6.3 Sustainability as an Ethical Framework

In the second half of the last century there was growing public awareness of the fact that the earth is a living space that we share not only with our ancestors but also with our children, our grandchildren, and future generations. The natural resources upon which our economies heavily depend seemed to be running out as a result of the ever rising world population and unstoppable industrialization. In addition, the accompanying pollution presented a serious problem; indeed, as early as 1972 we had been urged by the Club of Rome to consider “the limits to growth.”35 Thus, the technological progress that had once brought wealth and prosperity had come to create concerns for people living now and in the future. These discussions eventually culminated in a report published by a World Commission on Environment and Development with the revealing title Our Common Future. The first systematic definition of sustainable development proved to be an attempt to balance economic growth and industrialization on the one hand and environmental damage on the other. A concept of sustainable development as development that “meets the needs of the present without compromising the ability of future generations to meet their own needs” was named after the commission’s chairwoman, the then Norwegian prime minister, Gro Harlem Brundtland.36

34 Nuffield Council on Bioethics, Biofuel.
35 Meadows et al., The Limits to Growth.
36 World Commission on Environment and Development, Our Common Future, 43.


6.3.1 Ethical Roots of Sustainability: Social Justice

Brundtland’s concept of sustainability was founded on principles of social justice, viewed from two main angles: justice toward our contemporaries, or spatial justice, and justice toward future generations, or temporal justice.37 Spatially speaking, sustainability concerns how the natural resources and environmental benefits and burdens are distributed among the people currently living; this is also called intragenerational justice. It relates to matters such as how to rectify the inequalities between the so-called north and south, or between the industrialized and less industrialized or industrializing countries, as well as to questions of domestic justice within individual countries. Many of the sustainability considerations of biofuel have also to do with intragenerational issues, such as that of food versus fuel, that exacerbate the inequalities among the present generations between countries and also within individual countries. For instance, in Guatemala, indigenous people were disproportionately affected by the global food crisis.

Temporally speaking, sustainability is about the distribution of burdens and benefits between us and future generations. Yet precisely who those future generations are and just what it is that we should be sustaining for them remain somewhat ambiguous. A commonly accepted understanding of the temporal aspect of sustainability is the idea that equality between generations, or intergenerational justice, is desired. In this interpretation, roughly equal benefits between generations must be guaranteed, the idea being that “one generation may just enjoy certain benefits only if those advantages can be sustained for subsequent generations as well.”38 The question of how to balance the interests of different generations, however, remains a source of controversy. Some scholars, for instance, have argued that “it is difficult to see why one should attach crucial normative significance to the current level of welfare” and that “past generations seem to have survived with far less.”39 Against such skeptical views of sustainability, Brian Barry argues that “unless people in the future can be held responsible for the situation that they find themselves in, they should not be worse off than we are.”40

37 Sustainability has a third main theme, namely that of the relationship between human beings and their natural environment, which, again, has both a spatial and a temporal dimension. I therefore subsume that theme under the two themes summarized here.
38 Goodin, “Ethical Principles for Environmental Protection,” 13.
39 Beckerman, “Sustainable Development and Our Obligations to Future Generations,” 73.


In my approach to sustainability, I follow Barry’s reasoning: there are plenty of situations in which we might want to consider letting the interests of future generations trump those of contemporary people. Energy-related questions, especially those focusing on the use of nonrenewable resources, provide excellent examples of such situations.

6.3.2 Sustaining What, for Whom, and How?

While discussions of sustainability have both a spatial and a temporal dimension, most of the discussion (and contention) about sustainable energy is focused on the temporal, intergenerational aspects. Yet these are both conceptually more complex and morally more intricate than the spatial aspects. In order to clarify them, I will focus on the three key questions of sustainability: (1) What is it – morally speaking – that we should sustain? (2) For whom (and for how long) should we sustain it? and (3) How should we sustain it?

The notion of sustainability implies that there is something that needs to be sustained. To answer the question of what we should sustain, I will borrow from a seminal paper by Brian Barry in which he explicitly connects sustainability with intergenerational justice.41 Barry argues that in order to understand our relations with future generations (and the extent of our possible duties to them), we should start by examining the relations among our contemporaries and investigate whether and how they can be extended into the future. The premise he starts from is the fundamental equality of human beings, which leads him to define the “core concept of sustainability” as follows: “there is some X whose value should be maintained, in as far as it lies within our power to do so, into the indefinite future.”42 On the basis of the idea that a fundamental characteristic of human beings is “their ability to form their own conception of the good life,” Barry defines X as “the opportunity to live good lives according to their conception of what constitutes a good life.”43 Sustainability should then be understood as the requirement for us to provide future generations with such opportunities. In other words, as we are not in a position to foresee how future people will conceive of the good life, we should not narrow the range of opportunities open to them.

40 Barry, “Sustainability and Intergenerational Justice” (1999), 106.
41 Barry, “Sustainability and Intergenerational Justice” (1997).
42 Ibid., 50.
43 Ibid., 52.


Keeping equal opportunities open for future generations is associated with the requirements to preserve: (1) their vital interests (also called the “no harm” requirement) and (2) their opportunities for well-being. The first of these is a fundamental requirement for future generations if they are to be in a position to pursue their opportunities; the second specifically emphasizes the opportunity for a morally relevant value, their well-being. The question of for whom we should sustain these vital interests and opportunities for well-being ensues from this.44 In the same vein is the question of for how long we should sustain the vital interests of, and equal opportunities for well-being for, future generations. In this regard it is important to address a key distinction between the two requirements mentioned above. It is difficult to argue that sustaining well-being has a very long temporal reach, since we cannot reasonably assume what later generations, especially those in the distant future, will need for their well-being. We do know that they will need access to resources upon which well-being relies, but we cannot easily say what those resources will be a hundred years in the future. The “no harm” requirement, on the other hand, may have a much longer impact, because – in principle – if we leave long-lived waste behind, it may affect the lives of people in the future.45 In other words, one might argue that the first requirement, to sustain vital interests, is most relevant to the long-term considerations of the environment and, indeed, the public health issues associated with them, and that the second requirement, to preserve opportunities for well-being, is most relevant to the availability of natural resources, including those for the production of energy, assuming that access to energy is a basic requirement for ensuring well-being. If we consider the period from the industrial revolution until the present, it is fairly straightforward to conclude that the availability of energy resources has played a key role in augmenting and sustaining people’s well-being.

44 While this question has a strong temporal connotation, one could also read it in the spatial sense, as a situation in which actions in one place would have (undesired) consequences for people living in another place, for instance, when using biofuel leads to increasing food prices elsewhere. Yet most discussions about sustainable energy strongly focus on the intergenerational and temporal aspects. I will return to the intragenerational issues in Section 6.5, where I discuss energy ethics.
45 Here, I am not focusing on the extent of moral stringency of these duties. Interested readers can consult Taebi, “The Morally Desirable Option for Nuclear Power Production.”


For the third and last question of how to sustain vital interests and opportunities for well-being, we need to incorporate technologies into the analysis. In other words, ethically speaking, when we contemplate this question, it is useful to know what we can do right now, and what we suspect we will be able to do in the future on the basis of reasonable scientific expectations. The latter is highly relevant to questions of future energy provision and the place that different energy systems should occupy.

6.4 Sustainable Nuclear Energy: A Contradiction in Terms?

Discussions about the desirability of nuclear energy technology often give rise to serious public controversy. An important part of the debate concerns the question of whether the production of nuclear energy technology is sustainable. On the one hand, its proponents hail nuclear energy as a sustainable energy source when compared with fossil fuels, because it is produced from natural resources that will not be depleted any time soon, and no (or greatly reduced) greenhouse gases are emitted as a result of its production.46 Opponents argue, on the other hand, that the production of nuclear energy is utterly unsustainable, because of the long-term impact of nuclear waste upon people and the environment, and also because uranium is a finite resource.47 Some discussions have focused on a possible role that nuclear energy could play in facilitating a transition to sustainable energy technologies. Some scholars argue that it could serve as a bridge toward truly sustainable energy technologies in the future,48 for instance by enabling large-scale production of hydrogen that could serve as a fuel for transportation.49 Yet others believe that it might actually burn rather than build bridges toward sustainable energy policy, because nuclear energy requires large investments, and such investments would be better devoted to truly sustainable energy technologies.50

46 Brooks, “Sustainability and Technology”; Bonser, “Nuclear Now for Sustainable Development”; IAEA, Nuclear Power and Sustainable Development; Poinssot et al., “The Sustainability, a Relevant Approach for Defining the Roadmap for Future Nuclear Fuel Cycles.”
47 Greenpeace, Nuclear Power, Unsustainable, Uneconomic, Dirty and Dangerous; O’Brien and O’Keefe, “The Future of Nuclear Power in Europe.”
48 Bruggink and Van der Zwaan, “The Role of Nuclear Energy in Establishing Sustainable Energy Paths.”
49 Duffey, “Sustainable Futures Using Nuclear Energy”; Nowotny et al., “Towards Sustainable Energy.”


In many of these discussions, the idea of sustainability is thus used in the evaluative mode and for passing a verdict on the desirability of nuclear energy for either short-term use (i.e., facilitating an energy transition) or long-term use. It is often used narrowly to refer to the durability of natural resources or environmental consequences. As I have argued in the previous section, this does not do justice to the sophisticated analysis that a consideration of sustainability can offer, and with which we can compare different methods of nuclear energy production according to a larger set of criteria, which I will present here in terms of moral values. Following the principles of Design for Values (as discussed in Chapter 4), I conceive of sustainability as a framework that consists of several moral values. I will focus here on the key ethical questions of sustainability, namely what we should sustain, for whom, and how. In the previous section, I formulated the two requirements to sustain future people’s vital interests (the “no harm” requirement) and to sustain their equal opportunities for well-being, considered in terms of access to natural resources. The “no harm” requirement can be conceived in terms of the ethical values of safety, security, and environmental benevolence, and the requirement to sustain opportunities for well-being in terms of resource durability and economic viability.51 These five values (mentioned in Section 6.1.2 above) are interconnected, in that when we aim to change one, we need to assess how the others will change. In this framework, there are also important societal and ethical dilemmas. Let me first introduce the values in more detail.

6.4.1 The Open Fuel Cycle

I will introduce these values by focusing on nuclear energy production and more specifically the open fuel cycle.52 Nuclear energy is produced all around the world using two methods, the open and closed fuel cycles.

50 Butler and McGlynn, “Building or Burning the Bridges to a Sustainable Energy Policy”; Shrader-Frechette, What Will Work.
51 For a detailed discussion of these values and how they contribute to the central ethical questions of sustainability, readers should consult Taebi and Kadak, “Intergenerational Considerations Affecting the Future of Nuclear Power.”
52 This subsection and Section 6.4.2 draw on Taebi, “Moral Dilemmas of Uranium and Thorium Fuel Cycles.”


The open fuel cycle consists of five main steps.53 In Step 1, natural uranium is mined and milled: this process is similar to the mining of other metals, with the difference that uranium and its decay products emit ionizing radiation. Step 2 involves the chemical purification and enrichment of uranium. Natural uranium consists of the two main isotopes 235U and 238U. Only 235U is fissile and deployable as a fuel in currently operational light-water reactors (LWRs).54 However, this fissile uranium constitutes only 0.7 percent of all natural uranium. In order to produce a type of fuel that can be efficiently used in LWRs, the content of this isotope must be increased to 3–5 percent; this process is known as enrichment. Enriched uranium is converted into uranium dioxide and used to fabricate fuel (Step 3), which can be used in an LWR (Step 4). A typical fuel assembly will remain in the reactor for about four years; the remainder that is discharged from the reactor is called spent fuel. Spent fuel is not necessarily waste, but in the open fuel cycle it is disposed of as waste. Before final disposal underground and in deep geological repositories (Step 5), spent fuel must be temporarily stored and cooled in storage facilities for several decades.

Let us review how the five values mentioned above feature throughout these five steps. The International Atomic Energy Agency (IAEA) defines public (nuclear) safety as “the safety of nuclear installations, radiation safety, the safety of radioactive waste management and safety in the transport of radioactive material.”55 Safety as a value refers here to those concerns that pertain to the exposure of the human body to radiation and to the subsequent health effects. All phases of the open fuel cycle emit ionizing radiation from natural uranium, and it is important to consider the safety risks to workers, who are potentially exposed to such low-dose radiation in Steps 1 to 3. The high-dose radiation in the reactor is of a different type and can also carry serious risks because of the strong radioactive decay in the fuel; this radiation is shielded in the reactor. The spent fuel (Step 4) also emits radiation and has to be carefully isolated, since serious decay and heat production occur during the first years after fuel has been removed from the reactor.

53 I have adapted the description of the fuel cycles from ibid.
54 This is a Generation II type of reactor. Almost all operational nuclear energy reactors in the world are of this type.
55 IAEA et al., Fundamental Safety Principles, 5.


Spent fuel is often kept in pools on the reactor site for several years in order to cool it down. In conjunction with the longevity of nuclear waste, safety as a value specifically relates to future generations as well as the present generation, and this has been one of the concerns from the early days of nuclear power production.56 How we should protect future generations from the harmful effects of radiation remains the subject of an ongoing discussion, both in the technical literature (in relation to how and where to build repositories that best guarantee long-term protection) and in policy-related documents.57 Let me note that while there are references to the environment in many safety definitions, including the one given by the IAEA, in this chapter I am discussing environmental benevolence as a separate value, in order to distinguish the different issues associated with environmental sustainability. Safety, as I refer to it here, thus refers specifically to public health issues.

Whereas safety in this definition refers to unintentional adverse effects of ionizing radiation on health, security refers to intentional effects. In the IAEA’s Safety Glossary, a nuclear security risk is defined as “any deliberate act directed against a nuclear facility or nuclear material in use, storage or transport, which could endanger the health and safety of the public or the environment.”58 Even though safety and security apparently overlap to an extent, I shall keep the value of security separate so as to be able to distinguish between unintentional and intentional harm. Security here relates both to sabotage and theft and to concerns associated with the proliferation of nuclear weapons (and the knowledge of how to manufacture them). In the open fuel cycle, proliferation threats arise from the enrichment of uranium. Uranium needs to be enriched to 3–5 percent (and in some reactor types 20 percent) in order to generate power in a reactor. When the enrichment exceeds 20 percent, the application can only be for nuclear arms; the IAEA has well-developed inspection methods to detect such activity in any facility under its control. Highly enriched uranium is produced when the enrichment exceeds 70 percent, a level required only for the manufacture of nuclear weapons; the Hiroshima bomb dropped in 1945 was created from highly enriched uranium.
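Because the enrichment levels just mentioned drive both the resource and the proliferation concerns, it may help to see how much natural uranium a single unit of reactor fuel consumes. The sketch below applies the standard enrichment mass balance to the figures given in the text (0.7 percent fissile content in natural uranium, 3–5 percent in reactor fuel); the assumed tails assay of 0.25 percent and the function name are illustrative choices of mine, not figures from this chapter.

```python
# Minimal illustrative sketch: mass balance for uranium enrichment.
# feed * x_feed = product * x_product + tails * x_tails, with feed = product + tails.

def natural_uranium_feed(product_kg: float, x_product: float,
                         x_feed: float = 0.007, x_tails: float = 0.0025) -> float:
    """Kilograms of natural uranium needed to obtain `product_kg` of enriched
    uranium, given the fissile (235U) fractions of product, feed, and tails.
    The 0.25 percent tails assay is an assumed, typical value."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# One kilogram of 4 percent enriched fuel (within the 3-5 percent range in the text)
# requires roughly 8.3 kg of natural uranium under these assumptions.
print(round(natural_uranium_feed(1.0, 0.04), 1))  # ~8.3
```

Such a back-of-the-envelope figure is one input into the value of resource durability discussed below; it says nothing, of course, about safety, security, or the other values in the framework.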

56 National Research Council, Understanding Risk.
57 For a detailed discussion of repositories and how they can help protect the interests of future generations, see Taebi, “Intergenerational Risks of Nuclear Energy.”
58 IAEA, IAEA Safety Glossary, 133.


Security thus also involves the nonproliferation of nuclear weapons and fissile (weapon-usable) materials.59

The value of environmental benevolence relates to the accompanying radiological risks to the environment. Radiological risks express the possibility or rather the probability that radioactive nuclides may leak into the biosphere and harm both humans and the environment. Because issues of harm to human beings have already been discussed in the category of safety, I will deal here only with the effects of radiation on the environment and nonhuman animals. Whether we should protect the environment for what it means to human beings (anthropocentrism) or for its own sake (non-anthropocentrism) is the subject of a longstanding and still ongoing debate in environmental philosophy. I do not intend to take a stance on this matter here. Instead, I treat environmental benevolence as a separate value in order to allow a broader range of views to be reflected. Those who adhere to the anthropocentric approach will then simply merge this value with safety; the environment is important in this approach insofar as it has an impact on human health. On the other hand, those who adhere to the non-anthropocentric approach will specifically include in their analysis the risks and burdens that other species will be exposed to as a result of humanity’s nuclear power production and consumption, which may drastically change the ethical analysis. In recent discussions of radiological protection, there has been an explicit focus on such non-anthropocentric approaches.60

The value of resource durability is defined as the availability of natural resources for the future or the provision of equivalent alternatives (i.e., compensation). In the open fuel cycle, uranium and nuclear fuel are used only once. The remaining spent fuel then must, by law, be disposed of underground for several hundreds of thousands of years. Spent fuel contains various isotopes, including uranium and plutonium, that can also be used as fuel. Resource durability thus has not only a technical component (i.e., concerning the quantities of resources that remain) but also an inherent economic component (i.e., whether these resources are available at acceptable prices).

59 In various discussions about nuclear technology, a separate value is distinguished to emphasize this aspect, that is, the value of safeguards; for the purpose of the current chapter, I am subsuming safeguards under security.
60 ICRP, The 2007 Recommendations of the International Commission on Radiological Protection; Nolt, “Non-anthropocentric Nuclear Energy Ethics.”


For instance, to determine the availability of uranium in the future, we need to make assumptions not only about future consumption, but also about the prices at which uranium may be extracted. In other words, at what price does it make economic sense to extract uranium? Most natural uranium is to be found in seawater, but its separation has proved to be rather expensive. Hence, the value of resource durability is strongly related to the value of economic viability of the technology.

One might question whether economic issues have inherent moral relevance and whether it is justified to present economic viability as a moral value. We can safely assume that the safeguarding of the general well-being of society (including health issues) has moral relevance. However, in my interpretation of economic viability in this chapter, I refer only to those aspects of well-being that have to do with nuclear energy production and consumption. In this approach, economic aspects do not have inherent moral relevance; rather, it is what stands to be achieved from economic potential that determines its moral worth. This is why I present the value of economic viability in conjunction with other values. Economic viability should be considered in connection not only with resource durability, but also with safety and environmental benevolence. As we shall see below, certain future nuclear energy production methods that could potentially enhance resource durability or safety may well require extensive research and development and other investments before they can become industrial realities. In particular, new methods that are based on new types of reactors will require serious investment. Economic viability may, for instance, be a highly relevant value for new technology that can reduce the lifetime of nuclear waste, thereby protecting the safety and security of future generations. In general, economic viability is defined here as the economic potential to embark on a new technological activity and to ensure its continuation while upholding all the other values.

6.4.2 The Closed Fuel Cycle and Intergenerational Dilemmas

Thus far, I have briefly assessed the open fuel cycle on the basis of the five values of safety, security, environmental benevolence, resource durability, and economic viability. In this subsection, I will assess the closed fuel cycle as the alternative to the open cycle and in terms of the same moral values.


In this cycle, spent fuel is no longer viewed as waste but is recycled. As stated above, less than 1 percent of natural uranium consists of the fissile isotope 235U. The major isotope of uranium, 238U, is not fissile and must be converted into fissile plutonium (239Pu), which is deployable for energy production. In the closed fuel cycle, spent fuel undergoes a chemical recycling process, known as reprocessing, to extract the usable elements, including plutonium. During reprocessing, the uranium and plutonium isotopes in the spent fuel are isolated and recovered, and the remaining materials are immobilized in a glass matrix; this waste is known as high-level waste (HLW). There are two rationales for the closed fuel cycle. First, in terms of radiotoxicity, it can reduce the waste’s lifetime to ca. 10,000 years; simultaneously, the volume of the remaining HLW can be reduced by two-thirds. Second, the closed fuel cycle enables nuclear fuel to be used more efficiently, since recycled uranium can be added at the beginning of the cycle. The extracted plutonium must be used for manufacturing mixed oxide fuel (MOX), a nuclear fuel based on uranium and plutonium oxide. MOX fuel is deployable in LWRs.

I argue that the choice between the fuel cycles gives rise to an intergenerational dilemma. That is, the closed fuel cycle is beneficial from the perspective of future generations but less beneficial than the open fuel cycle from the perspective of the present generation. The closed fuel cycle (1) compromises short-term safety while it enhances long-term safety, (2) has more short-term security and proliferation concerns associated with plutonium, (3) considerably enhances resource durability for future generations, and (4) is less economically viable, owing to the expensive reprocessing. Let me review these four arguments.

First, the closed fuel cycle compromises short-term safety while it enhances long-term safety. Reprocessing has important long-term benefits, since it reduces the period of necessary care by a factor of twenty (from 200,000 years to 10,000 years). It does, however, introduce various short-term safety risks in comparison with the open fuel cycle. Reprocessing is a chemical process with radiological risks. It further produces more short-lived (but less radiotoxic) waste that needs to be disposed of. Moreover, reprocessing plants are available in only a handful of countries, and so the transporting of radiotoxic material, often across national boundaries, introduces additional risks. To elucidate: The closed fuel cycle is the favored cycle in Western Europe, and European countries that have opted for it need to transport their spent fuel to the reprocessing plants in La Hague (France) or Sellafield (UK).


HLW and in some cases separated uranium and plutonium must then be sent back to the country of origin because each country is responsible for disposing of its own waste; alternatively, uranium and plutonium can be sent to a third country for further use as fuel. It should be noted that the risk of large quantities of radioactive waste being released during the transporting of spent fuel and HLW is small; the countries concerned have extensive experience both with sea transportation and with rail transportation in Europe. In relation to the latter point, however, a 2006 US National Research Council report emphasized that the vulnerability of such transportation to terrorist attack needed to be examined.61 As stated above, security relates both to the dissemination of knowledge and technology on the manufacturing of nuclear weapons and to sabotage with radiotoxic materials.

Second, the closed fuel cycle carries further security concerns in relation to plutonium. In the closed fuel cycle, the remaining uranium in spent fuel, along with various isotopes of plutonium, is removed for reuse as fresh fuel, but of course the extracted plutonium also carries proliferation risks, and these are by far the most important concern in reprocessing. The issue of security is one of the main reasons why the US, which has about a quarter of the world’s nuclear reactors, does not reprocess. The US has major stockpiles of plutonium deriving from the nuclear warheads that were dismantled after the Cold War era, and the idea of producing more plutonium in the US is generally considered to be highly undesirable.

Third, reprocessing creates important benefits in terms of resource durability as well. Instead of the uranium fuel being used only once, both the remaining uranium in spent fuel and the plutonium produced can be reused. In the first years of commercial nuclear power development in the 1960s, all countries considered reprocessing. In the light of current knowledge on the abundance of uranium resources, the resource durability argument seems to have become less persuasive, but it still plays an important part in countries that do not have access to other types of energy resources. For instance in Japan, a country with no fossil fuel resources, there is strong interest in reprocessing.

61 National Research Council, Going the Distance?


Japan is the only country without nuclear weapons that has a reprocessing plant within its own borders, in Rokkasho. This plant has, however, never been put into operation, and following the Fukushima Daiichi accident, it does not seem likely that Japan will expand its nuclear energy production (for which this reprocessing plant was built).

Fourth, the closed fuel cycle would probably score badly on economic viability. Reprocessing plants are very expensive chemical facilities, and this fact, along with the proliferation concerns, is the main reason why the US is not in favor of the closed cycle option. It is also the reason why small producers of nuclear power that do adhere to the closed cycle system, such as the Netherlands, prefer to transport their spent fuel to commercial reprocessing plants in other countries rather than build their own plants.

6.4.3 The Value-Based Approach and Reflection on Future Nuclear Technologies

The value-based approach to sustainability is useful in that it can help us to choose between the two existing fuel cycles, by pinpointing the important ethical dilemmas at hand. But it can also be used when we reflect on which type of nuclear energy should be preferred if nuclear energy is to be part of a future energy mix. This is again very much in line with the Design for Values approach, which can help us not only to understand the values at stake in existing designs, but also to design future technologies (and technological systems) for those values. Moreover, when designing for a specific value, we need to investigate how the rest of the set of interconnected values will change, bearing in mind that in improving one value, we may very well compromise another one. Sustainability as an ethical framework may help to focus discussion of the ethical relevance of these values, and of their relations to each other. Let me explain this by focusing on an example of a technology that improves the value of long-term safety. In nuclear energy technology development, a lot of attention has been understandably devoted to developing new nuclear technologies that can reduce the lifetime of nuclear waste. One might, for instance, argue that the only ethically acceptable nuclear energy is the one that can most reduce the safety and environmental concerns for future generations, or that leaves behind the fewest burdens for the future.62


behind the fewest burdens for the future.62 If we follow this reasoning, it is desirable to have a fuel cycle that can help us to reduce the lifetime of waste substantially. A specific type of fuel cycle that can help us achieve this in the future is the extended closed fuel cycle, also called partitioning and transmutation (P&T), which involves separating (partitioning) the waste left after recycling so that it can be eliminated (transmuted); the waste remaining after P&T is radiotoxic for 500–1,000 years. P&T brings with it certain costs and burdens that are unequally distributed over generations and further sharpen the intergenerational dilemmas of the closed fuel cycle, as discussed in the previous subsection. First of all, this technology, although scientifically feasible, requires years or rather decades of development and investment before it can become an industrial reality.63 The scientific and economic burdens of this will predominantly fall upon present generations. Second, the additional public safety burdens of ultimately deploying the technology will also mainly be borne by the current and the very next generations, as P&T involves further nuclear activity. Thus, public safety is compromised in the short term, while the same value is enhanced in the long run because a shorter waste lifetime means a shorter period of necessary care. It is therefore clear that the costs of deploying P&T will predominantly fall upon the shoulders of present generations while the benefits will be for future generations, all of which gives rise to intergenerational conflicts of interest.64 Some of the costs and burdens are related to the development of new types of reactors in order to deactivate isotopes that the conventional LWRs cannot deactivate, and of new multiple reprocessing technologies, which differ from the one-time reprocessing used in the existing closed fuel cycle.

6.5 Energy Ethics

Thus far, I have argued that the issue of sustainability – when moved beyond the yes/no dichotomy – is a valuable framework that can help us assess many ethical issues associated with energy technologies and systems. In this

62 In fact, I have defended this argument elsewhere: Taebi, “The Morally Desirable Option for Nuclear Power Production.”

63 IAEA, Implications of Partitioning and Transmutation in Radioactive Waste Management.

64 Taebi and Kloosterman, “To Recycle or Not to Recycle?”


section, I will focus on the ethics of energy technologies, also called energy ethics. While sustainability and energy ethics have a great deal in common, especially when both environmental and energy resource issues are involved, I argue that it may be more beneficial to focus on energy ethics, as it can cover a larger number of societally and ethically relevant issues associated with energy systems. Let me elaborate on this by returning to the discussion of sustainability and nuclear energy and explain how it differs from the perspectives that the ethics of nuclear energy can offer. In the wake of the Fukushima Daiichi nuclear accident, it was popularly believed that nuclear energy was dying a slow death. A closer look showed, however, that nuclear energy production was still very much ongoing, with many new nuclear technologies being developed and several countries (mostly developing countries and emerging economies) halfway to introducing nuclear energy into their grids for the first time. In the light of this, and with the aim of reviving the field of the ethics of nuclear energy, a few of my colleagues started a project at TU Delft, inviting a group of scholars – from fields ranging from science and engineering to social sciences and humanities – to join us in thinking about the socio-technical and ethical issues of nuclear energy production and waste management in the post-Fukushima era.65 Three main themes that were identified to shape the discussion were risk, policy, and justice. These themes are of course not mutually exclusive, and there are various overlaps between them, as I will show later in this section. As regards a focus on risk, questions were raised about safety cultures in the nuclear industry,66 responsible risk communication in light of “moral emotions,”67 the role of gender in risk perception,68 and the need to decide on radioactive waste management issues in conjunction with affected com-

65 This project was conducted by Sabine Roeser, Ibo van de Poel, and myself. Two publications emerged from the project: Taebi and Roeser, The Ethics of Nuclear Energy; and Taebi and Van de Poel, “Socio-Technical Challenges of Nuclear Power Production and Waste Disposal in the Post-Fukushima Era.”

66 Kastenberg, “Ethics, Risk, and Safety Culture.”

67 Fahlquist and Roeser, “Nuclear Energy, Responsible Risk Communication and Moral Emotions.”

68 Henwood and Pidgeon, “Gender, Ethical Voices, and UK Nuclear Energy Policy in the Post-Fukushima Era.”


munities.69 There were further discussions about the limitations of risk assessment methods,70 and about citizens’ rights to be informed about and to consent to the consequences of nuclear accidents.71 Matters of policy have significant overlaps with matters of risk, because various legal and regulatory decisions on acceptable levels of radiation risks have a strong bearing on the ethical issues underlying them. More specifically, in this project questions were addressed regarding the long-term management of nuclear waste,72 for instance in the long-term governance of repositories,73 but also regarding the role of nuclear energy in a future energy landscape,74 its legitimacy as a climate mitigation strategy in view of its proliferation risk,75 and whether nuclear energy can be a solution for the energy needs of developing countries.76 As regards justice, there were discussions focusing on two types of justice: procedural justice issues, including issues of power, participation, and community empowerment – which clearly overlap with policy issues – and distributive justice, which approaches justice in terms of distributions of burdens and benefits.77 The issue of justice to a great extent resembles a recent yet powerful interdisciplinary concept in the literature known as energy justice. According to its pioneers, energy justice can serve as a conceptual and analytical tool for decision-making about energy systems,78 and it aims to apply the principles of justice to energy policy in relation to energy production, provision, and consumption.79 Energy justice is usually conceptualized in the three categories of distributive justice, procedural justice, and recognition. It has its roots in discussions of environmental justice from the

69 Bergmans et al., “The Participatory Turn in Radioactive Waste Management.”

70 Downer, “The Unknowable Ceiling of Safety.”

71 Shrader-Frechette, “Rights to Know and the Fukushima, Chernobyl, and Three Mile Island Accidents.”

72 Bråkenhielm, “Ethics and the Management of Spent Nuclear Fuel.”

73 Landström and Bergmans, “Long-Term Repository Governance.”

74 Hillerbrand, “The Role of Nuclear Energy in the Future Energy Landscape: Energy Scenarios, Nuclear Energy, and Sustainability.”

75 Lehtveer and Hedenus, “Nuclear Power as a Climate Mitigation Strategy.”

76 Gardoni and Murphy, “Nuclear Energy, the Capability Approach, and the Developing World.”

77 Krütli et al., “Distributive versus Procedural Justice in Nuclear Waste Repository Siting”; Panikkar and Sandler, “Nuclear Energy, Justice and Power.”

78 Sovacool and Dworkin, Global Energy Justice.

79 Jenkins et al., “Energy Justice,” 174.


1970s onward, which aimed to deal with issues of environmental degradation and the unequal distribution of burdens and benefits from a social justice point of view.80 Many of these case studies have focused on laying bare unequal environmental protections, arguing that communities of color and low-income communities are often exposed to heavier environmental burdens than other groups.81 Thus, scholarship in the areas of environmental justice and energy justice focuses explicitly on the issue of recognition – that is, the contention that discussions of justice should start by recognizing to whom considerations of justice are due. Indeed, these concepts are interrelated to each other, and they have been defined and categorized in various ways. In a book on ethics and energy, Benjamin Sovacool, a pioneering figure in energy justice scholarship, identifies eight interrelated factors that jointly comprise energy ethics – including affordability, availability, due process, transparency, and intergenerational justice – as a first step toward conceptualizing energy justice.82 Earlier in this chapter, I discussed some (but not all) of these as values that jointly comprise sustainability as an ethical framework. It is of course not my intention here to nitpick about definitions and concepts; instead, I aim to emphasize that discussion of energy ethics can provide us with a more fine-grained analysis of the broader ethical issues associated with energy. Let me illustrate this by briefly discussing a few examples of the new insights which energy ethics can add to the debate. While sustainability issues have a strong connection with justice, and more specifically intergenerational justice, the broader scope of energy ethics can add important insights from the perspective of recognition and procedural justice, including issues of transparency, due process, and participation. Moreover, while the values of safety, security, and environmental benevolence concern risk for people and the environment, it is clear that energy ethics will bring new insights to such discussions, including those associated with the ethical acceptability of risk, risk communications, gender perspectives, and safety regulations, to mention just a few. Finally, policy discussions about the desirability of nuclear energy can be framed in terms of an individually based capability approach, or “genuine opportunities [that]

80 Dobson, Justice and the Environment.

81 Walker, “Beyond Distribution and Proximity.”

82 Sovacool, Energy and Ethics.


individuals are free to achieve.”83 In their analysis of nuclear energy in terms of the capability approach, Paolo Gardoni and Colleen Murphy present sustainability as one of the main underlying themes in the broader discussion of the moral justifiability of nuclear energy.84 Rafaela Hillerbrand follows a similar line and discusses a desirable energy mix in terms of the capability approach, showing how current sustainability assessments cannot offer a complete ethical analysis of an energy mix and the role of nuclear energy in that mix.85 I build on the same line of reasoning: Energy ethics does not dismiss sustainability, but it amends it by bringing new insights to the table. Let me conclude this section by returning to the opening case study of this chapter on biofuel, to see whether energy ethics can offer additional insights there. A seminal report on the ethical issues associated with biofuel, published by the Nuffield Council on Bioethics in 2011 (in the midst of the food crisis), formulates several ethical principles for biofuel production,86 including respect for human rights and a commitment to solidarity with vulnerable populations, because biofuel production “endangers local food security or displaces local populations from the land they depend on for their daily subsistence.”87 The principles further require that burdens and benefits are shared equitably so that “burdens are not laid upon the most vulnerable in society,” and that biofuel production leads to lower greenhouse gas emissions.88 Clearly, these ethical principles coincide with the principles of sustainability discussed earlier in this chapter; one of the principles is even formulated in terms of respecting environmental sustainability because “[r]apid upscaling of production driven by current biofuels targets also poses the danger of leading to serious harm for the environment.”89 Nevertheless, a discussion of the ethics of biofuel clearly adds important insights here in terms of respecting people’s fundamental rights and ensuring equitable distributions of burdens and benefits.

83 Gardoni and Murphy, “Nuclear Energy, the Capability Approach, and the Developing World,” 216.

84 Gardoni and Murphy, “Nuclear Energy, the Capability Approach, and the Developing World.”

85 Hillerbrand, “The Role of Nuclear Energy in the Future Energy Landscape.”

86 Nuffield Council on Bioethics, Biofuel.

87 The principles are quoted from a brief publication based on the content of the Nuffield report; see Buyx and Tait, “Biofuels,” 633.

88 Ibid.

89 Ibid., 634.


6.6 Summary

In this chapter, I have argued against viewing sustainability in a dichotomous mode. In a binary view, a discussion of sustainability is likely to lead to excessive haste in dismissing or endorsing an energy system without fully understanding its technical specificities. This is unsatisfactory because we need first to be aware of the different technological possibilities of each system – for instance, the existing and future methods of energy production and the social and ethical implications of each method. Indeed, this kind of comparison of technological possibilities is the first step toward answering questions about social desirability or whether a particular energy production method should be considered for the future. However, if we ignore the complex nature of sustainability, it can easily be (mis)used for greenwashing and window-dressing purposes, leading to potential ideological and political manipulation. I have conceptualized sustainability, as it relates to nuclear energy technologies, as an ethical framework in terms of the five interconnected values of safety, security, environmental benevolence, resource durability, and the economic viability of the technology. However, it would be naive to assume that simply doing this will remove all of the ambiguity surrounding sustainability. On the contrary, questions will then arise as to how to deal with conflicts and trade-offs. Such conflicts occur when a technology is sustainable in one specific sense and unsustainable in another sense; what is believed to constitute sustainability determines how these conflicts are dealt with. The trade-offs can also have a temporal dimension. Moreover, an awareness of interconnected values can help us to understand the effects of changing individual values: for instance, if we improve long-term safety in nuclear energy production by introducing new technologies, we may compromise other values, such as short-term security, environmental benevolence, and also short-term safety. Let me close the chapter with three remarks. First, although the sustainability framework has been described in this chapter with respect to nuclear energy production, this is, of course, not an endorsement of nuclear energy technologies. Instead, it aims to facilitate comparison between different types of existing and future nuclear energy production. The outcome of such an analysis may very well be that – in view of the important values at stake – no current or future nuclear energy production will qualify. The analysis


should, however, also include other types of energy production and compare them according to the same criteria. That goes beyond the scope of this book. Second, while the framework has been presented for nuclear energy technologies, its basic rationale is also applicable to other energy technologies. Indeed, we need to carry out a similar analysis for each energy technology, understanding the different ethical issues at stake, in order to make the framework suitable for that specific technology. Third, while the value of sustainability can help us to address some of the ethically relevant issues associated with energy technology, energy ethics does provide us with a better lens through which to look at the ethical issues and to make them more relevant to policy-making.


7 Engineering Ethics in the International Context: Globalize or Diversify?

7.1 Earthquakes and Affordable Housing in Iran

In the evening of November 12, 2017, an earthquake with a magnitude of 7.3 on the Richter scale hit the province of Kermanshah in Iran. More than 500 people were killed, more than 9,000 were injured, and about 70,000 were made homeless.1 While the quake was among the strongest in that region, the amount of damage caused was shocking, given that Iran, as one of the world’s most earthquake-prone countries, has very stringent seismic construction standards.

What was particularly striking was that government-subsidized newly built houses appeared to have been disproportionately affected. These buildings belonged to the so-called Mehr Housing Plan, a massive affordable housing project which provided two million houses for low-income households. Pictures of the damaged government-built houses circulated widely on social media, giving rise to public outrage. One picture that went viral was a photograph taken in the city of Sarpol-e-Zahab, the worst-affected city in that region, showing a moderately damaged regular building next to a severely damaged government-built Mehr building. In the public debate that followed, structural engineers received most of the blame for the damage caused. Shortly after the quake, Iranian President Rouhani emphasized that corrupt practices in construction contracts had caused the collapse of these buildings.2 While visiting Sarpol-e-Zahab, he promised that whoever had not upheld the safety standards for construction would be held accountable, echoing the public sentiment regarding engineers’ responsibilities.3 Wild stories floated around on social media of

1 “Iran Quake Survivors Plead for Help.”

2 “Collapsed State Housing in Iranian Quake Shows Corruption.”

3 “Rouhani Vows Action over Quake Collapses.”


engineers who were in charge of structural safety “selling” their signatures to construction contractors, who could thus get approval for their work more easily. “If there are any problems with the construction, the individuals who were negligent must answer for their deeds,” the chief prosecutor in the province of Kermanshah said, echoing public sentiment. In addition to the individual engineers, in the public debate a lot of attention was paid to the role of the professional engineering organization. In Iran, engineers who plan to work independently in construction engineering – which includes disciplines such as architecture, civil engineering, electrical engineering, urban design, and more – must be members of the Iranian Construction Engineering Organization (IRCEO).4 To gain membership, a construction engineer must first pass the state examinations, after which they may apply to the Ministry of Roads and Urban Development for a license. Depending on their length of relevant experience, the license may be at one of four levels; the higher the level of license, the greater the gross floor area of a building an engineer can be involved in designing, developing, or constructing. It is, in this light, understandable that the public blamed not only individual engineers but also the IRCEO for the damage caused by the earthquake. Indeed, the IRCEO fully acknowledged that the individual engineers who had made mistakes should be held accountable. While there is no justification for a supervisory engineer lending their name and signature to a contractor, the IRCEO warned that focusing only on such incidents would obscure the more fundamental shortcomings underlying the safety problems in construction.5 First, the IRCEO stated that while a building project of this kind in Iran normally involves many engineers (including those who design the buildings and those in charge of quality assurance and compliance with safety requirements during the construction phase), the contractors who execute the project and carry out the construction are not subjected to the same scrutiny. This has to do with different pieces of legislation passed at different times,

4 See www.irceo.net/.

5 The following three issues and, more generally, the information included in Section 7.1, are partly based on my impression of a heated public debate that took place in the Iranian media, as well as on my exchanges with members of the IRCEO in the First National Congress on Engineering Ethics in Iran, organized by the IRCEO in 2018. Elsewhere in this section, I will return to this congress and discuss my experiences in more detail.


and the rationale behind it is, presumably, that the contractors have no authority to approve the design or the completed construction. While the engineers in charge of the approvals are required to be licensed, contractors are only recommended but not required to be licensed.6 The IRCEO argued that many things went wrong with the contractors. Second, the IRCEO pointed to the fact that certain bureaucratic steps were reduced or even removed in order to speed up the construction process: for instance, for many of the buildings, no licensed engineers were involved in overseeing compliance with safety standards. The Mehr Housing Plan started in the last two years of President Ahmadinejad’s term, and while it was indeed based on a correctly observed societal need, there was a strong political interest in speeding up the construction process, and this had caused certain bureaucratic safety steps to be eased throughout the process. Third, the IRCEO emphasized that although passing the examinations gives an engineer access to the professional organization and entitles them to receive a license, it is no guarantee of the quality of education that the engineer has received. In the decade preceding the earthquake, there had been a proliferation of private universities throughout the country, and there were serious doubts about the academic credentials of some of those universities and their ability to teach engineering knowledge and skills at the required level. A further factor that exacerbated this problem was that the political desirability of speeding up the process created a high demand for contractors to build the two million houses, and as a result, contractors were forced to hire engineers with little (or sometimes no) practical experience.

7.1.1 An Increasing Interest in Engineering Ethics

Shortly after the earthquake, in December 2017, I was invited to teach a crash course on engineering ethics at the First National Congress on Engineering Ethics in Iran, co-organized by the IRCEO. Excited about teaching in the country I grew up in, I accepted this invitation. The congress was planned before the earthquake, but understandably, the recent events had made discussions about the responsibilities of engineers for the collapsed buildings very topical. In the years prior to the congress, engineering ethics

6 A disclaimer is in order. This is my understanding of the situation, based on the underlying pieces of legislation that are about ten years apart.


was receiving increasing attention in Iran, in two related areas. First, in 2016, the Codes for Professional Ethical Conduct of Construction Engineers were introduced in Iran.7 Although the IRCEO had been established twenty years earlier, this professional organization had never formulated codes of conduct. Second, there was a growth in the teaching of engineering ethics, both to engineering students at universities of technology and to professional engineers. Facilitating courses on ethics was also important for practicing engineers to be able to relate to the proposed Codes for Professional Ethical Conduct. In fact, my lectures at the congress were presented as a “learning on the job” course, as a part of the package of courses that licensed engineers were required to take in order to retain or upgrade their licenses.8 A fascinating discussion took place at the congress about the question of whether the phenomenon of engineering ethics – as a growing field in Iran – should be understood globally or only at the national level. Both approaches had fervent defenders. One group insisted that the basic concepts of engineering are essentially shared by all engineers, regardless of which country they are based in; why should the ethical issues of engineering be different in different countries? As a participant put it: Safety and seismic standards have no nationality; why should the associated ethical issues be any different?9 Another group argued that the information for determining and enforcing those safety standards was very much context-dependent and included – in addition to broadly shared engineering methods and standards – complex regulatory, social, and economic issues; thus, the ethical issues could vary from country to country. An important issue that made these discussions particularly interesting was the very existence of the codes of conduct in Iran. On the one hand, these codes were for many participants in the congress the proof that the IRCEO was already a member of the global engineering community. At many points in the discussion, a comparison was drawn between the IRCEO’s Codes for Professional Ethical Conduct and the codes of ethics of the American Society

7 These codes were introduced by the Iranian Ministry of Roads and Urban Development; they can be found in Persian at: http://inbr.ir/wp-content/uploads/2016/08/akhlagh-herfei.pdf (consulted March 30, 2020).

8 The course concluded with a written examination, which was organized and graded as a state examination, similar to other written examinations for acquiring a license from the IRCEO.

9 I am paraphrasing an idea that I came across among some participants in this congress.


for Civil Engineers (ASCE). In fact, many articles of the IRCEO’s codes – in their essence – resembled the ASCE’s codes of ethics:10 for instance, the articles emphasizing that engineers must prioritize public health and safety, or those emphasizing the need for competence and expertise when they accept engineering jobs. What made the Iranian codes different was that they were developed by the Ministry of Roads and Urban Development (with input from the IRCEO). Perhaps more importantly, they were legally binding and enforceable: failure to abide by the codes could lead to serious court-imposed sentences. Both aspects were criticized by the IRCEO on the grounds that codes need to be developed and agreed upon by engineers themselves in a bottom-up process, as was the case with the ASCE’s codes. Likewise, the NSPE’s “Code of Ethics for Engineers” was also the result of deliberations between members of professional organizations who sought to formulate ethical standards for the engineering profession.11 The IRCEO codes were clearly developed through a top-down process. Moreover, in their legal character they confused the idea of ethical codes of conduct with legally enforceable safety standards and requirements, with fines and sanctions for noncompliance. In the same vein, assessing the moral rightness of an action is a much more demanding task than monitoring compliance with legal requirements such as safety regulations; it also raises the question of who is capable of assessing compliance with such ethical codes.12 Let me make an important remark about the case study before turning to the structure of this chapter. As I have mentioned in footnotes throughout the description, most of the information included here on the case study is based on my impressions of a heated public debate that took place in the Iranian media after the earthquake. Part of this debate concerned the question of whether the issue of safety was, since the Mehr Housing Project

10 See here for the ASCE codes of ethics: www.asce.org/uploadedFiles/About_ASCE/Ethics/Content_Pieces/Code-of-Ethics-July-2017.pdf (consulted March 30, 2020).

11 See www.nspe.org/resources/ethics/code-ethics.

12 Again, I am recapitulating this critique on the basis of exchanges I have had with individual engineers during and after this congress and in various Persian-language resources in the public media. I wish to particularly thank Ali Dizani, an Iranian engineer who is undoubtedly one of the pioneers of engineering ethics in Iran and who has helped me to better grasp these issues. It goes without saying that any unintended misrepresentations or mistakes in describing the case are my responsibility.


extended over two different administrations and two different presidents. It is not my intention to take a stance in this complex debate; nor do I intend to blame any organization. I include this case study to show the relevance of engineering ethics in a country that is not typically discussed in engineering ethics textbooks, in order to pave the way for a discussion of engineering ethics in the international context. In Section 7.2, I will first discuss two main strands in the literature on engineering ethics in the international context: first, the ethical issues associated with technology transfer from Western to non-Western countries, and second, somewhat related, the dilemmas that a Western engineer may encounter when working in non-Western countries with different ethical standards. I will argue that discussions of international approaches deserve to be broader than only these two strands. In Section 7.3, I will argue that engineering ethics should not be considered a predominantly Western phenomenon; nor should we take the Western perspective as the point of departure for discussions of engineering ethics generally. In Section 7.4, building on recent work by Colleen Murphy and colleagues,13 I will review the reasons why we need to incentivize international approaches in thinking about and teaching engineering ethics, while in Section 7.5 I will discuss two main approaches to the latter, globalizing and diversifying engineering ethics, while reviewing some of the pitfalls of each approach.

7.2 Moving beyond the Dilemmas of Western Engineers in Non-Western Countries

In the literature on engineering ethics, a lot of attention has been given to the international aspects, focusing mostly on two specific and related issues. First, there has been a strong focus on the transfer of technology from Western to non-Western countries with perhaps different levels and types of expertise and different environmental and safety legislation. The Bhopal disaster of 1984 is discussed as an important case study in various textbooks on engineering ethics. This was a major industrial accident in the city of Bhopal in India, where the American Union Carbide Corporation and its Indian subsidiary corporation had built a plant to produce the chemical methyl isocyanate (MIC), a basic ingredient of pesticide. In this industrial

13 Murphy et al., Engineering Ethics for a Globalized World.


accident, which was one of the largest of its kind, MIC was released into the air, causing the deaths of around 20,000 people in the surrounding neighborhood.14 The literature on business ethics and corporate social responsibility discusses similar ethically complex situations (not only related to engineering), mostly from the perspective of multinational enterprises working in an increasingly international environment, but also from the perspective of an individual engineer encountering ethically complex situations.15 In the international context, this second strand of thinking about engineering ethics takes the perspective of Western engineers working in non-Western countries. It has some similarity to the first strand, that of technology transfer, but differs from it in that it mostly focuses on individual engineers’ roles, responsibilities, and dilemmas rather than the responsibilities of the firms. In the international context, engineers can encounter different ethical standards from their own, which in turn can give rise to ethical dilemmas. Surgrue and McCarthy argue that engineers are constantly dealing with three categories of ethical standards and constraints that are not always compatible: those imposed by “the society from which the professionals come,” those imposed by “the host country [or] the country that is the target of the professional intervention,” and “a specific profession’s norms, responsibilities, and limitations.”16 Charles Harris et al. focus specifically on the problems encountered by “US and other Western engineers whose employment takes them to non-US and particularly non-Western countries with different ethical standards.”17 When confronted with such a “boundary crossing” problem, a Western engineer can choose either an “absolutist solution,” which involves following the ethical rules and standards of their home country, or a “relativist solution,” which entails following the ethical standards of the host country.18 When encountering problems such as bribery,

14 There are often major controversies about the numbers of casualties of large accidents. In the case of the Bhopal disaster, estimates vary between 3,000 and 30,000 deaths, and 20,000 is a figure often mentioned in writings on the subject. See for a detailed description McGinn, The Ethical Engineer, 92–93.

15 See, e.g., Seeger and Steven, “Legal versus Ethical Arguments”; Wong, “Revisiting Rights and Responsibility”; Ladd, “Bhopal”; McGinn, The Ethical Engineer.

16 Surgrue and McCarthy, “Engineering Decisions in a Global Context and Social Choice,” 80.

17 Harris et al., Engineering Ethics, 192.

18 Ibid.; all emphases in original.


“grease” payments, and nepotism in non-Western countries, however, the Western engineer cannot solely follow either of the two solutions satisfactorily. Harris et al. have crafted a set of ethical resources for use by Western engineers facing such problematic situations.19 Both strands in the literature pinpoint important ethical problems, but the ethical issues that technology raises in different parts of the world, especially in emerging economies, extend far beyond the problems of technology transfer or those that a conflicted Western engineer encounters when working in a country with different ethical standards. I argue that we should not take the Western perspective as the sole point of departure for discussing ethical issues in engineering in the international context, because doing so can obscure broader international perspectives.

7.3 Is Engineering Ethics a Western Phenomenon?

It goes without saying: Applied ethics is deeply rooted in the thinking of mostly European philosophers. Moreover, the literature on engineering ethics has been predominantly developed in the US, and later in Europe, on the basis of the problems that engineers encounter in those parts of the world. However, I argue that neither of these two issues makes the existing field purely Western, especially if the second is to be interpreted as unfit for or inapplicable to other parts of the world. Framing it as solely a Western undertaking could contribute to the unpopularity of the field in some countries. Let me first outline why I think that the “West versus the rest” dichotomy is problematic when it comes to discussing ethical issues in engineering. There is little that non-Western countries share (culturally, economically, or otherwise) by virtue of not belonging to the West. Moreover, “the West” as a demarcation is rather ambiguous, as it can mean different things geographically, economically, geopolitically, and culturally. Geographically, for instance, “the West” refers to the Western Hemisphere, which includes the Americas and the western part of Europe as well as parts of Africa, Antarctica, and Asia,20 while culturally, it often means North America and

19 Ibid.

20 This definition is adopted from the Encyclopedia Britannica; see www.britannica.com/place/Western-Hemisphere (consulted March 20, 2020).


Europe (sometimes only Northern and Western Europe). Geopolitically, the West is often used as a surrogate for developed or industrialized countries as opposed to developing countries that are subsumed under the heading of “non-Western.” Do these geopolitical demarcations reflect homogeneity within each category when it comes to engineering problems and their ethical issues? They do not. Let me clarify this in relation to an important issue in engineering that has many ethical implications: the fossil-fuel-based combustion engine. While the combustion engine is among the most important of the inventions that have contributed to the improvement of global well-being, it has also contributed to air pollution and greenhouse gas emissions, which in turn contribute to climate change. These problems have been dealt with differently around the world, and it is not difficult to spot these differences. For instance, while many European countries respond by making combustion engines more efficient, switching to alternative personal transportation (for instance, hybrid or fully electric cars) or other modes of transportation (for instance, enabling more public transportation), or even banning certain types of fossil-fuel-based cars (especially diesel cars) altogether, the US has followed a pro-fossil-fuel agenda, especially under former President Trump, who “doesn’t believe in climate change.”21 The differences of approach among the non-Western countries are certainly not any smaller. What may differ are the impacts in different parts of the world, which sometimes have to do with the economic situations in the affected countries. Climate change, for instance, may affect certain countries in the world more drastically than others, and low-lying countries and those with extreme weather conditions (either warm or cold) or extreme levels of precipitation are the most vulnerable. In poor countries, the effects are likely to be felt more. Air pollution has a similar impact: that is, it affects populations in low-income cities the most drastically, especially because, in addition to fossil-fuel-based transport, it is exacerbated by a host of other sources, including “household fuel and waste burning, coal-fired power plants, and industrial activities.”22 According to the air quality database of the World Health

21 “Trump Dismisses US Climate Change Report.”

22 This is based on an air quality model initiated by the WHO; see www.who.int/en/newsroom/detail/27-09-2016-who-releases-country-estimates-on-air-pollution-exposure-andhealth-impact (consulted March 28, 2020).


Organization (WHO), “97% of cities in low- and middle-income countries with more than 100,000 inhabitants do not meet WHO air quality guidelines” compared with 49 percent in high-income countries.23 In sum, it is not my intention to dismiss the terminology of “Western versus non-Western” per se, because it can help to remind us of the different types of problems faced by different global regions, not only in that engineering may have different impacts in different parts of the world, but in the sense that the nature of an ethical problem may be interpreted or construed differently because of social and cultural differences. At the very least, the categories of “Western” and “non-Western” countries can be useful heuristically, as long as we view neither as a homogeneous whole.24

7.4 The Need to Consider Engineering Ethics in the International Context

Engineering is globalizing at a fast pace: It increasingly takes place across national and cultural borders and in a global context. This globalization has changed and complicated the issues involved in engineering and in engineering ethics.25 In a recently published edited volume, Murphy and colleagues have reinvigorated discussion of engineering ethics in a global context by investigating how “increased global interdependence has fundamentally transformed the environment in which engineers learn and practice.”26 Building on work published in their volume, I will summarize three reasons why we should consider engineering ethics in an international context: (1) a global shift in technological innovations;27 (2) a continuing internationalization of engineering education, and the need to follow the same path in engineering ethics education;28 and (3) the need to establish competence and expertise in the international context, and to determine whether international standards and licensure can help.29 Let me review these three reasons in more detail.

23 See www.who.int/airpollution/data/cities/en/ (consulted March 28, 2020).

24 I owe this suggestion to Pak-Hang Wong.

25 Luegenbiehl and Clancy, Global Engineering Ethics.

26 Murphy et al., “Introduction,” 3.

27 Lynn and Salzman, “Engineers, Firms and Nations.”

28 Murrugarra and Wallace, “A Cross Cultural Comparison of Engineering Ethics Education.”

29 Murphy et al., “Introduction.”

Engineering Ethics in the International Context

179

First, engineering is growing fast in many emerging economies and developing countries and, as a result, Murphy and colleagues argue, there is a “lag between rapid technological advancement and the development of appropriate professional and ethical standards.”30 This is particularly problematic because an “innovation shift” is taking place, which challenges the “longstanding views about the persistence of inherent innovation advantages of advanced industrial nations and of particular regions, such as Silicon Valley.”31 As engineering becomes increasingly international, emerging economies are increasingly making innovations. It seems unlikely that in the long term any nation or region in the world can retain absolute dominance in all areas of technological innovation.32 Part of this shift is due to multinational enterprises opening branches to other parts of the world. In a sense, the ethical issues associated with this are more or less the same as those discussed as the ethics of technology transfer in Section 7.2. Another part of the “innovation shift” is due to important developments in large emerging economies such as China and India. Their unique mix of demand, culture, talent, and expertise makes them “spawning grounds for innovation.”33 It is also important to note that innovation takes place in both high-end and low-end technologies. Smartphones provide an interesting example. While premium, high-end smartphones are indeed a growth area, there is also an increasing demand for low-cost smartphones that larger proportions of the world’s population can afford. This also constitutes an important push for innovation.34 Such innovations have an impact on technology flow, which mirrors technology transfer from advanced economies to developing and emerging economies. Consideration of the ethical issues associated with innovative and emerging technologies cannot be limited to the Western world. Second, engineering ethics education is becoming more and more international. Student populations at the world’s large universities are becoming increasingly diverse and international, and many of these universities include an ethics component in their programs. Moreover, in acknowledgment of the importance of creating awareness of ethical issues in engineering, there is a growing interest in teaching engineering ethics at modern

30 33

31

Ibid., 4. Ibid.

34

Lynn and Salzman, “Engineers, Firms and Nations,” 17.

32

Ibid., 18.

Lynn and Salzman, “Engineers, Firms and Nations.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.011

180

Ethics and Engineering

universities of technology throughout the world.35 Both the teaching materials and the teaching methods must be understandable by and relatable for international groups of students, which means that localized content may sometimes need to be developed to emphasize problems in a specific setting. Moreover, cross-cultural and international perspectives also need to be more thoroughly cultivated in teaching materials. In China, for instance, five major technical universities are teaching ethics in science and technology courses. They combine traditional Chinese ethics with non-Chinese ethics and use teaching methods that aim to stimulate students’ ability to solve ethical problems in the real world.36 Another interesting example is provided by Chile, where a typical engineering ethics course developed for Western engineers was adapted for a Chilean classroom by translating some of the material into Spanish and including other publications in Spanish; this facilitated a cross-cultural comparison.37 There have been further efforts to make engineering ethics education more suitable for the international context, for instance by offering courses on the responsible conduct of research; these may help the development of a global engineering ethics curriculum with universal standards, while taking cultural and local contexts into account.38 Third, in a context in which engineers are becoming increasingly mobile and engineering firms are becoming increasingly international, how can we create a methodology for mutually recognized qualifications in engineering? More specifically, the question is whether engineering licenses are the right solution for this.39 Several countries, including the US and Canada, already have a longstanding tradition of issuing licenses to establish competence. In the US, for instance, an engineer needs to be licensed as a Professional Engineer (PE). “To a client, it means you’ve got the credentials to earn their trust. To an employer, it signals your ability to take on a higher level of responsibility. Among your colleagues, it demands respect. To yourself, it’s a

35

Taebi, van den Hoven, and Bird, “The Importance of Ethics in Modern Universities of Technology.”

36

Wang and Yan, “Development of Ethics Education in Science and Technology in

37

Technical Universities in China.” Murrugarra and Wallace, “A Cross Cultural Comparison of Engineering Ethics Education.”

38

Jordan and Gray, “Engineers, Firms and Nations.”

39

Weil, “Professional Standards.”

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.011

Engineering Ethics in the International Context

181

symbol of pride and measure of your own hard-won achievement.”40 With a PE license an engineer can prepare, sign, and seal engineering plans for approval by a public authority; sometimes a PE license is even legally required. License mechanisms also help educators to prepare students for their future in engineering. There is a growing list of countries with license mechanisms, including large emerging economies such as Brazil, Russia, India, and China, but also other countries such as Iran, as discussed in the opening case study of this chapter. Yet many countries still do not have such licensing mechanisms in place. This is not a sign of less advanced engineering per se, but it may also be a deliberate choice in some cases. For instance, in the Netherlands – which can certainly pass as an engineering-advanced country – there is no licensing mechanism for professional engineers: Engineers are hired only on the basis of their engineering degrees along with their expertise and other credentials. Moreover, for the countries that do have such systems, it remains a challenge to compare licenses in the international context.41 Mutual and multilateral recognition of licenses is not only a practical problem – i.e., are the assessment methods and what they assess similar enough in all countries? – but also a fundamental problem, since it requires uniform standards that engineers in all countries are expected to follow.42 Standards often reflect local custom and moral norms, which vary internationally.43 Sometimes the different regions of a country have different regulatory frameworks and different licenses. In the US, for instance, to be a PE requires an engineering graduate to earn the license of the specific state’s licensure board,44 and licenses are not automatically transferable between states.

7.5 Globalizing or Diversifying Engineering Ethics? In the previous section, I argued that the impacts of globalization provide several reasons for broadening the scope of engineering ethics. We can think 40

The quotation is from the website of the National Society of Professional Engineering (NSPE) in the US: www.nspe.org/resources/licensure/what-pe (consulted August 27, 2020).

41

Loyalka et al., “Factors Affecting the Quality of Engineering Education in the Four

42

Largest Emerging Economies.” Murphy et al., “Introduction,” 4.

44

43

Harris et al., Engineering Ethics.

Information from the website of the NSPE: www.nspe.org/resources/licensure/what-pe (consulted August 27, 2020).

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.011

182

Ethics and Engineering

about this broadening in two different ways: in terms of the globalization and the diversification of engineering ethics. Let me review each approach while also focusing on the potential pitfalls.

7.5.1 Globalizing Engineering Ethics While Remaining Sensitive to Contextual Differences There have been two distinct approaches to the globalization of engineering: (1) adjusting engineering curricula – engineers need to be educated to be sensitive to and able to function in the global environment – and (2) seeking principles and codes that can be shared by engineers globally. The first approach has led to a number of educational programs and activities around the world that aim to make engineering students more sensitive to global differences. An interesting proposal presented by Gary Lee Downey and colleagues is to educate the “globally competent engineer.”45 This approach has a strong focus on the increasingly global and multicultural nature of engineering work: “the key achievement in the often-stated goal of working effectively with different cultures is learning to work effectively with people who define problems differently than oneself.”46 This approach is not necessarily a matter of ethics but concerns engineering education more broadly; it underscores the global nature of engineering by creating awareness and increasing engineers’ ability to better understand and relate to how different engineers in different cultures “interpret, address, and engage aspects of the world in particular ways.”47 Ethics is implicitly assumed in the creation of awareness about cross-cultural differences in the global setting. The second approach focuses on establishing commonalities in engineering between different countries and across cultures in order to formulate “global ethics codes” for engineering.48 This approach has met with some criticism, perhaps most prominently from Michael Davis, who argues that such efforts would resemble “re-inventing the wheel.”49 Davis argues that global standards for engineering already exist, as do sufficient resources for dealing with the ethical dimensions of engineering in a global context. Engineering culture and the engineering community are independent of 45

Downey et al., “The Globally Competent Engineer.”

48

Zhu and Jesiek, “Engineering Ethics in Global Context.”

49

Davis, “Global Engineering Ethics.”

46

Ibid., 107.

47

Ibid., 110.

Downloaded from https://www.cambridge.org/core. University of Virginia Libraries, on 22 Jul 2021 at 03:19:44, subject to the Cambridge Core terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781316822784.011

Engineering Ethics in the International Context

183

people’s cultural backgrounds: “engineering has a global culture, its own ways of doing things wherever around the globe those things are to be done.”50 This view seemed to resonate strongly with the first group of Iranian engineers at the 2018 congress, which I discussed in Section 7.1.1. To these engineers, engineering did not depend on national and cultural boundaries, or as one engineer eloquently put it: “safety and seismic standards have no nationality.” Yet there have been many efforts in the literature to explicitly spell out codes that reflect the commonalities in global engineering. To be sure, these efforts do not contradict Davis’s rationale that engineering rests on global standards. Quite the contrary: they build on the same rationale. However, they differ from it in that they explicitly seek to formulate codes that show these commonalities. Proponents of such thinking refer to successful transnational attempts to formulate commonalities in specific geographic regions. For example, in East Asia, the Chinese Academy of Engineering (CAE) and its Japanese and South Korean counterparts issued a “Declaration on Engineering Ethics” that included “Asian Engineers’ Guidelines for Ethics.”51 These guidelines emphasized “cherishing the Asian cultural heritage of harmonious living with neighboring people and nature, and were intended to assist practicing engineers in those countries that shared cultural values of Confucian ethics.”52 The question that immediately arises is whether we can also identify commonalities in countries that do not necessarily share our fundamental cultural values. In an interesting analysis, Saif alZahir and Laura Kombo investigated the compatibility of the IEEE code of ethics with the codes of ethics of professional engineering societies in thirty-two different countries across Africa, Asia, Australia, Europe, and Latin America.53 They found that while only four countries had adopted the complete IEEE code of ethics, the other twenty-eight had endorsed some version of the same code, adopting parts of the code and adding new parts that were more in line with their individual national settings.54 The authors argue that since commonalities that are endorsed by culturally different countries in to different global regions, a global professional code of ethics must be conceivable.

50 Ibid., 73.
51 Zhu and Jesiek, "Engineering Ethics in Global Context."
52 Zhu, "Engineering Ethics Studies in China," 94.
53 AlZahir and Kombo, "Towards a Global Code of Ethics for Engineers."
54 Ibid., 1.

Building on the same logic, Heinz Luegenbiehl argues that engineers should be considered a community with a shared set of values, "whatever their nationality or cultural background."55 He presents six "Foundational Principles of Engineering Ethics" that he deems applicable to engineering, regardless of the cultural context: public safety, respect for human rights, environmental and wildlife preservation, engineering competence, and openness and honesty.56 However, Luegenbiehl's foundational principles run into two related difficulties. First, they might unjustifiably disregard cultural context, overlooking the fact that some of these principles may mean morally different things in different cultures. "Honesty," for instance, might have more pragmatic connotations in Confucian ethics than in Western ethical theories; it is the context of its application that puts flesh on the bones of this principle.57 Second, the list of principles does not say anything about their relative moral relevance when they conflict, which again depends on the cultural, geographic, or sociopolitical context. The principle of environmental and wildlife preservation, for instance, might easily become subordinate to the principle of public safety when the two clash in a flood-prone area in a developing country.

Yet these critiques do not entirely undermine the existence of foundational principles such as those proposed by Luegenbiehl. One could argue that foundational principles do not necessarily offer unequivocal answers to moral quandaries in all situations. Even within the confines of one culture and one country, codes of ethics or any other foundational principles for engineering require contextual information before they can be applied, according to the specificities of the engineering and the (sometimes diverging) interests of different stakeholders, along with social boundaries. For example, upholding public health and safety as paramount without taking additional contextual information into account might conflict with other important ethical provisions and considerations.

In a later book, Global Engineering Ethics (2017), Heinz Luegenbiehl and Rockwell Clancy extend the same argument by proposing an approach that does not prioritize any culture or nationality.58 They start with an important observation: Engineering ethics has its roots in the US; it has been developed mostly in the US and Europe; and it has some unique American or Western features, such as "an emphasis on [Western] ethical theory and the ideal of professions,"59 some of which are neither readily nor appropriately adaptable to other parts of the world. The approach is based on rethinking engineering ethics "from the ground up . . . without any cultural presuppositions"; it would then be in principle acceptable in any specific culture.60 While Luegenbiehl and Clancy's proposed approach is intuitively very appealing and sympathetic as a move toward cultural and ethical inclusiveness, Pak-Hang Wong warns of an unwanted side effect: Such global engineering ethics codes could create an exclusivity for those who use them. As a result, the "globality" of engineering might be confined to cultures and societies with shared values.61 This could be particularly harmful to the interests of developing countries as newcomers in engineering fields if, in order to be recognized in this "global" engineering community, their engineers are expected to adopt the "global" standards as well as their underlying cultural and ethical values, which are often based on those values already established by developed nations. To address this challenge, Wong proposes a proactive approach to global engineering ethics, which focuses on the promotion of well-being through engineering but also offers a critical reflection on the values embedded in engineering products and practices. His approach requires a profound understanding of, and respect for, norms and values in the cross-cultural context.62

55 Luegenbiehl, "Ethical Principles for Engineers in a Global Environment," 149.
56 Ibid., 154–58.
57 Zhu and Jesiek, "Engineering Ethics in Global Context," 6.
58 Luegenbiehl and Clancy, Global Engineering Ethics, 9.

7.5.2 Diversifying Engineering Ethics While Avoiding Cultural Relativism

Another group of scholars argues that we need to understand certain issues in engineering within the context of a specific country, a culture, or perhaps a geographic region with shared cultural values. For instance, in order to understand why engineering ethics courses were not developed in France until the late 1990s, you need to be aware of the cultural and historical context of how French engineers organized themselves.63 Similarly, there have been discussions about the rise of engineering ethics in the US,64 and about recent developments in China and in Europe.65 I will discuss these efforts in the category of diversification of engineering ethics. Other diversification approaches include comparisons between engineering ethics education in different countries, for instance, China and the US,66 or Japan and Malaysia.67

Some work has further focused on addressing important issues and concepts from different perspectives in order to create a greater degree of sensitivity to cultural differences in engineering ethics. Qin Zhu and Brent Jesiek call this the "cultural studies approach" to engineering ethics in the global context.68 This approach emphasizes that certain concepts central to Western thinking about engineering ethics (such as autonomy) might be wrongly considered necessary to global engineering ethics. In countries that have a Confucian cultural heritage as the backbone of their ethical thinking – specifically, China, Japan, South Korea, and Singapore – autonomy might even be discouraged in professional practice and in society.69 Ethical decision-making in these countries always includes some kind of consideration of one's relation to others. Engineers are certainly not exempted from this thinking, and giving too much weight to autonomy in global engineering ethics might therefore disproportionately favor Western ethics. This critique resembles the previously mentioned critique that global engineering ethics codes can favor Western thinking because it is the best developed, both in the scholarly literature and in engineering practices (i.e., codes of conduct) throughout the world.

How do we avoid giving too much weight to Western ethics and thereby disadvantaging other ethical thinking? Diversification is in a sense a response to this critique, but the flip side of diversification is the problem of cultural relativism. As Wong argues, if an analysis results in the mere observation that different cultures might think differently about ethical concepts, it can be interpreted as an open invitation to endorse ethical relativism.70 This removes the possibility of cross-cultural reflection, which is a cornerstone of ethics as a field of study. While people's cultures do indeed play an essential role in their ethical decision-making, engineering communities from different cultures need to be enabled "to communicate norms and values to each other, to respect – or, at least, tolerate – their differences."71 Teaching cultural differences from such a perspective could contribute to a fruitful exchange of opinions and shared normative understandings of ethical issues in engineering. This is, of course, no guarantee that a normative consensus can be found. However, even if finding shared ground is impossible in some circumstances, the approach could promote mutual understanding among engineers and help them arrive at culturally informed choices. Shan Jing and Neelke Doorn facilitate such cross-cultural comparison by comparing how the notion of responsibility has been perceived in Confucianism and in Aristotelian virtue ethics, with the aim of investigating what a Confucian perspective can contribute to the existing literature on engineering ethics.72 They argue that while the two approaches have clear differences, Confucian and Western virtue-based ethics also have much in common, and that this commonality "allows for a promising base for culturally inclusive ethics education for engineers."73

59 Ibid.
60 Ibid.
61 Wong, "Responsible Innovation for Decent Nonliberal Peoples"; Wong, "Global Engineering Ethics."
62 Wong, "Global Engineering Ethics."
63 Didier, "Engineering Ethics in France."
64 Weil, "The Rise of Engineering Ethics."
65 For China, see, e.g., Fan, Zhang, and Xie, "Design and Development of a Course in Professionalism and Ethics for CDIO Curriculum in China"; Wang and Yan, "Development of Ethics Education in Science and Technology in Technical Universities in China." For Europe, see Didier, "Engineering Ethics: European Perspectives."
66 Cao, "Comparison of China–US Engineering Ethics Educations in Sino-Western Philosophies of Technology."
67 Balakrishnan, Tochinai, and Kanemitsu, "Engineering Ethics Education."
68 Zhu and Jesiek, "Engineering Ethics in Global Context."
69 Luegenbiehl, "Ethical Autonomy and Engineering in a Cross-Cultural Context"; Zhu and Jesiek, "Engineering Ethics in Global Context."
70 Wong, "Global Engineering Ethics."
71 Ibid., 625.
72 Jing and Doorn, "Engineers' Moral Responsibility," 233.
73 Ibid.

7.6 Summary

In this chapter, I have focused on engineering ethics in the international context. I have first explored two main strands in the literature, namely the ethical issues associated with technology transfer from Western to non-Western countries, and those associated with the dilemmas that a Western engineer can encounter when working in non-Western countries with different ethical standards. Discussions of international approaches deserve to be much broader than just these two strands. I have argued that engineering ethics should not be predominantly considered a Western phenomenon; nor should we take the Western perspective as the point of departure for discussions of engineering ethics. I have further reviewed the "why" and the "how" questions of international thinking in engineering ethics. I have discussed three reasons why we need to incentivize international approaches in thinking about and teaching engineering ethics: the global shift in technological innovation, the continuing internationalization of engineering and engineering ethics education, and the need to establish competence and expertise in the international context.

We need to cultivate thinking about engineering in the broad international context. We can do this by following either of the two strategies of globalization and diversification, but we should remain vigilant about the pitfalls of each approach. Globalization focuses on establishing the commonalities in engineering between different countries and cultures, in order to formulate global ethics codes. While the idea of globalizing engineering ethics is intuitively compelling, it must go forward in ways that do not damage the interests of newcomers in the engineering fields, namely the emerging economies and developing countries. In order to be recognized in this "global" engineering community, the developing nations will have to adopt the global standards and their associated values, as already established by the developed nations. Diversification in engineering ethics focuses on acknowledging the cultural and contextual differences between countries and cultures. In this approach, we must ensure that the analysis does not end with mere observations that different cultures might think differently about ethical concepts; the latter is an open invitation to endorse relativism. Diversification is most helpful if it facilitates cross-cultural dialogue and reflection, rather than merely stating that cultures are different. At the least, it can increase mutual understanding and respect among engineers and encourage cultural sensitivity, thus contributing to culturally informed decisions. Both globalization and diversification, if done properly, can lead to more inclusive thinking in engineering ethics.


Bibliography

Abe, K. “Tectonic Implications of the Large Shioya-Oki Earthquakes of 1938.” Tectonophysics 41, no. 4 (August 31, 1977): 269–89. https://doi.org/10.1016/ 0040-1951(77)90136-6. Ackerman, F., and L. Heinzerling. Priceless: On Knowing the Price of Everything and the Value of Nothing. New York: The New Press, 2004. Acton, J. M., and M. Hibbs. Why Fukushima Was Preventable. Nuclear Policy. Washington, DC: Carnegie Endowment for International Peace, March, 2012. Ahmad, A. L., N. H. Mat Yasin, C. J. C. Derek, and J. K. Lim. “Microalgae as a Sustainable Energy Source for Biodiesel Production: A Review.” Renewable and Sustainable Energy Reviews 15, no. 1 (January 1, 2011): 584–93. https://doi.org/ 10.1016/j.rser.2010.09.018. Ahn, J., C. Carsen, M. Jensen, K. Juraku, S. Nagasaki, and S. Tanaka, eds. Reflections on the Fukushima Daiichi Nuclear Accident: Toward Social-Scientific Literacy and Engineering Resilience. Heidelberg: Springer Open, 2015. www.springer.com/ engineering/energy+technology/book/978-3-319-12089-8. Akhtarkhavari, A. Global Governance of the Environment: Environmental Principles and Change in International Law and Politics. Cheltenham, UK: Edward Elgar, 2010. Alba, Davey. “Amazon Rekognition Falsely Matched 28 Members of Congress with Arrest Mugshots.” BuzzFeed News, July 26, 2018. www.buzzfeednews .com/article/daveyalba/amazon-rekognition-facial-recognition-congress-false. Albrechts, L. “Strategic (Spatial) Planning Reexamined.” Environment and Planning B: Planning and Design 31, no. 5 (October 1, 2004): 743–58. https://doi.org/ 10.1068/b3065. Ale, B. J. M. “Tolerable or Acceptable: A Comparison of Risk Regulation in the United Kingdom and in the Netherlands.” Risk Analysis 25, no. 2 (2005): 231–41. https://doi.org/10.1111/j.1539-6924.2005.00585.x. Allen, C., W. Wallach, and I. Smit. “Why Machine Ethics?” IEEE Intelligent Systems 21, no. 4 (July, 2006): 12–17. https://doi.org/10.1109/MIS.2006.83. 189


Altmann, Jürgen, Peter Asaro, Noel Sharkey, and Robert Sparrow. “Armed Military Robots: Editorial.” Ethics and Information Technology 15, no. 2 (June 1, 2013): 73–76. https://doi.org/10.1007/s10676-013-9318-1. alZahir, S., and L. Kombo. “Towards a Global Code of Ethics for Engineers.” IEEE International

Symposium on Ethics in Science, Technology, and

Engineering, Chicago, 2014. Amazeen, Michelle. “Gap (RED): Social Responsibility Campaign or Window Dressing?” Journal of Business Ethics 99, no. 2 (2011): 167–82. https://doi.org/ 10.1007/s10551-010-0647-2. Anderson, M., and S. L. Anderson. “Robot Be Good: A Call for Ethical Autonomous Machines.” Scientific American 303, no. 4 (2010): 15–24. Annema, J. A. “The Use of CBA in Decision-Making on Mega-projects: Empirical Evidence.” In International Handbook on Mega-projects, edited by H. Primus and B. Van Wee 291–312. Cheltenham and Northampton: Edward Elgar, 2014. Annema, Jan Anne, Niek Mouter, and Jafar Razaei. “Cost-Benefit Analysis (CBA), or Multi-criteria Decision-Making (MCDM) or Both: Politicians’ Perspective in Transport Policy Appraisal.” Transportation Research Procedia 10 (January 1, 2015), 788–97. https://doi.org/10.1016/j.trpro.2015.09.032. Asaro, Peter. “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making.” International Review of the Red Cross 94, no. 886 (June, 2012): 687–709. https:// doi.org/10.1017/S1816383112000768. Asveld, L., and D. Stemerding. Algae Oil on Trial: Conflicting Views of Technology and Nature. The Hague: Rathenau Instituut, 2016. “Social Learning in the Bioeconomy: The Case of Ecover.” In Experimentation beyond the Laboratory: New Perspectives on Technology in Society, edited by I. Van de Poel, L. Asveld, and D. Mehos, 103–24. London: Routledge, 2018. Asveld, L., and S. Roeser, eds. The Ethics of Technological Risk. London: Earthscan, 2009. Asveld, L., R. van Dam-Mieras, T. Swierstra, S. Lavrijssen, K. Linse, and J. van den Hoven, eds. Responsible Innovation 3: A European Agenda? Cham: Springer, 2017. Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563, no. 7729 (November, 2018): 59–64. https://doi.org/10.1038/s41586-018-0637-6. Backer, L C. “Transparency and Business in International Law: Governance between Norm and Technique.” In Transparency in International Law, edited


by A. Bianchi and A. Peters, 477–501. Cambridge: Cambridge University Press, 2012. Balakrishnan, Balamuralithara, Fumihiko Tochinai, and Hidekazu Kanemitsu. “Engineering Ethics Education: A Comparative Study of Japan and Malaysia.” Science and Engineering Ethics 25, no. 4 (2019): 1069–83. https://doi.org/10.1007/ s11948-018-0051-3. Balat, Mustafa, and Havva Balat. “Progress in Biodiesel Processing.” Applied Energy 87, no. 6 (June 1, 2010): 1815–35. https://doi.org/10.1016/j. apenergy.2010.01.012. Barfod, M. B., K. B. Salling, and S. Leleur. “Composite Decision Support by Combining Cost–Benefit and Multi-criteria Decision Analysis.” Decision Support Systems 51, no. 1 (April 1, 2011): 167–75. https://doi.org/10.1016/j. dss.2010.12.005. Barry, B. “Sustainability and Intergenerational Justice.” In Fairness and Futurity: Essays on Environmental Sustainability and Social Justice, edited by A. Dobson, 93–117. New York: Oxford University Press, 1999. “Sustainability and Intergenerational Justice.” Theoria 45, no. 89 (1997): 43–65. Basta, Claudia. “Siting Technological Risks: Cultural Approaches and CrossCultural Ethics.” Journal of Risk Research 14, no. 7 (2011): 799–817. BBC. “France Scraps Controversial Airport Plan.” Europe, BBC News, January 17, 2018. www.bbc.com/news/world-europe-42723146. Beauchamp, T. L., and J. F. Childress. Principles of Biomedical Ethics. 6th ed. New York and Oxford: Oxford University Press, 2009. Beckerman, W. “Sustainable Development and Our Obligations to Future Generations.” In Fairness and Futurity: Essays on Environmental Sustainability and Social Justice, edited by A. Dobson, 71–92. New York: Oxford University Press, 1999. Bentham, J. An Introduction to the Principles of Morals and Legislation. Garden City, NY: Doubleday, 1961. First published in 1789. Berg, T., V. Burg, A. Gombović, and M. Puri. On the Rise of the FinTechs: Credit Scoring Using Digital Footprints. Federal Deposit Insurance Corporation, Center for Financial Research, 2018. Bergmans, Anne, Göran Sundqvist, Drago Kos, and Peter Simmons. “The Participatory Turn in Radioactive Waste Management: Deliberation and the Social–Technical Divide.” Journal of Risk Research 18, nos. 3–4 (2015): 347–63. Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The Social Dilemma of Autonomous Vehicles.” Science 352, no. 6293 (June 24, 2016): 1573–76. https://doi.org/10.1126/science.aaf2654.


Bonser, D. “Nuclear Now for Sustainable Development.” Nuclear Energy 42, no. 1 (2003): 51–54. Borger, Julian, and John Vidal. “New Study to Force Ministers to Review Climate Change Plan.” Environment, Guardian, June 18, 2008. www.theguardian .com/environment/2008/jun/19/climatechange.biofuels. Bråkenhielm, Carl Reinhold. “Ethics and the Management of Spent Nuclear Fuel.” Journal of Risk Research 18, no. 3 (2015): 392–405. https://doi.org/ 10.1080/13669877.2014.988170. Brenner, David J. “Are X-ray Backscatter Scanners Safe for Airport Passenger Screening? For Most Individuals, Probably Yes, but a Billion Scans per Year Raises Long-Term Public Health Concerns.” Radiology 259, no. 1 (April 1, 2011): 6–10. https://doi.org/10.1148/radiol.11102347. Brey, P. “Ethical Aspects of Behavior-Steering Technology.” In User Behavior and Technology Development: Shaping Sustainable Relations between Consumers and Technologies, edited by P. P. Verbeek and A. Slob, 357–64. Berlin: Springer, 2006. Brinke, L., and J. Faber. Review of the Social Cost–Benefit Analysis of Grand Ouest Airport: Comparison with Improvements of Nantes Atlantique. Delft: CE Delft, 2011. Brooks, H. “Sustainability and Technology.” In Science and Sustainability: Selected Papers on IIASA’s 20th Anniversary, 29–60. Laxenburg: IIASA, 1992. Bruggink, J. J. C., and B. C. C. Van der Zwaan. “The Role of Nuclear Energy in Establishing Sustainable Energy Paths.” International Journal of Global Energy Issues 18, no. 2 (2002): 151–80. Bryson, J. “Robots Should Be Slaves.” In Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issue, edited by Y. Wilks, 63–74. Amsterdam: John Benjamins Publishing, 2008. Butler, G., and G. McGlynn. “Building or Burning the Bridges to a Sustainable Energy Policy.” In Nuclear or Not? Does Nuclear Power Have a Place in a Sustainable Energy Future?, edited by D. Elliott, 53–58. Basingstoke, Hampshire: Palgrave Macmillan, 2007. Buyx, A. M., and J. Tait. “Biofuels: Ethics and Policy-Making.” Biofuels, Bioproducts and Biorefining 5, no. 6 (2011): 631–39. Calvert, Simeon C., Giulio Mecacci, Bart van Arem, Filippo Santoni de Sio, Daniel D. Heikoop, and Marjan Hagenzieker. “Gaps in the Control of Automated Vehicles on Roads.” IEEE Intelligent Transportation Systems Magazine (2020): 1–8. https://doi.org/10.1109/MITS.2019.2926278. Cao, G. H. “Comparison of China–US Engineering Ethics Educations in SinoWestern Philosophies of Technology.” Science and Engineering Ethics 21, no. 6 (2015), 1609–35. https://doi.org/10.1007/s11948–014-9611-3.


Caro, R. A. The Power Broker: Robert Moses and the Fall of New York. New York: Random House, 1974. Cavoukian, A. Security Technologies Enabling Privacy (STEPs): Time for a Paradigm Shift. Toronto, Canada: Information & Privacy Commissioner of Ontario, 2002. Whole Body Imaging in Airport Scanners: Building in Privacy by Design. Toronto, Canada: Information & Privacy Commissioner of Ontario, 2009. Chakrabortty, Aditya. “Secret Report: Biofuel Caused Food Crisis.” Environment, Guardian, July 3, 2008. www.theguardian.com/environment/2008/jul/03/ biofuels.renewableenergy. Chockalingam, Sabarathinam, Dina Hadžiosmanović, Wolter Pieters, André Teixeira, and Pieter van Gelder. “Integrated Safety and Security Risk Assessment Methods: A Survey of Key Characteristics and Applications.” In Critical Information Infrastructures Security, edited by Grigore Havarneanu, Roberto Setola, Hypatia Nassopoulos, and Stephen Wolthusen, 50–62. Lecture Notes in Computer Science. Cham: Springer International Publishing, 2017. https://doi.org/10.1007/978-3-319-71368-7_5. Chung, K. “Nuclear Power and Public Acceptance.” IAEA Bulletin 32, no. 2 (1990): 13–15. Coeckelbergh, Mark. AI Ethics. Cambridge, MA: MIT Press, 2020. “Collapsed State Housing in Iranian Quake Shows Corruption: Rouhani.” Reuters, November

15, 2017. www.reuters.com/article/us-iran-quake-rouhani-

idUSKBN1DF1O1. Collingridge, D. The Social Control of Technology. London: Frances Pinter, 1980. Cooke, Roger M. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford, New York, and Toronto: Oxford University Press, 1991. Correljé, A., E. Cuppen, M. Dignum, U. Pesch, and B. Taebi. “Responsible Innovation in Energy Projects: Values in the Design of Technologies, Institutions and Stakeholder Interactions.” In Responsible Innovation 2: Concepts and Approaches, edited by E. J. Koops, I. Oosterlaken, H. A. Romijn, T. E. Swierstra, and J. van den Hoven, 183–200. Berlin: Springer International Publishing, 2015. Council of Europe. Discrimination, Artificial Intelligence, and Algorithmic DecisionMaking. Strasbourg: Council of Europe, 2018. Covello, V. T., and M. W. Merkhofer. Risk Assessment Methods: Approaches for Assessing Health and Environmental Risks. New York: Plenum Press, 1993. Cowell, R., G. Bristow, and M. Munday. “Acceptance, Acceptability and Environmental Justice: The Role of Community Benefits in Wind Energy Development.” Journal of Environmental Planning and Management 54, no. 4 (May 1, 2011): 539–57.


Cruess, Sylvia R., Sharon Johnston, and Richard L. Cruess. “‘Profession’: A Working Definition for Medical Educators.” Teaching and Learning in Medicine 16, no. 1 (January 1, 2004): 74–76. https://doi.org/10.1207/ s15328015tlm1601_15. Cuppen, Eefje, Olivier Ejderyan, Udo Pesch, Shannon Spruit, Elisabeth van de Grift, Aad Correljé, and Behnam Taebi. “When Controversies Cascade: Analysing the Dynamics of Public Engagement and Conflict in the Netherlands and Switzerland through ‘Controversy Spillover.’” Energy Research & Social Science 68 (October 1, 2020): 101593. https://doi.org/ 10.1016/j.erss.2020.101593. Cuppen, Eefje, Udo Pesch, Sanne Remmerswaal, and Mattijs Taanman. “Normative

Diversity, Conflict and Transition: Shale Gas in the

Netherlands.” Technological Forecasting and Social Change 145 (August, 2019): 165–75. https://doi.org/10.1016/j.techfore.2016.11.004. Davies, Alex. “The Unavoidable Folly of Making Humans Train Self-Driving Cars.” Wired, June 22, 2018. www.wired.com/story/uber-crash-arizona-humantrain-self-driving-cars/. Davis, Michael. “‘Global Engineering Ethics’: Re-Inventing the Wheel?” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, 69–78. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. “Is Engineering a Profession Everywhere?” Philosophia 37, no. 2 (June 1, 2009): 211–25. https://doi.org/10.1007/s11406–008-9125-9. “Thinking Like an Engineer: The Place of a Code of Ethics in the Practice of a Profession.” Philosophy and Public Affairs 20, no. 2 (1991): 150–67. Devine-Wright, P. Renewable Energy and the Public: From NIMBY to Participation. London: Earthscan, 2011. Didier, Christelle “Engineering Ethics: European Perspectives.” In Ethics, Science, Technology, and Engineering: A Global Resource, edited by J. B. Holbrook, 87–90. Farmington Hills, MI: MacMillan Reference, 2015. “Engineering Ethics in France: A Historical Perspective.” Technology in Society 21, no. 4 (1999): 471–86. https://doi.org/10.1016/S0160–791X(99) 00029-9. Dignum, M., A. Correljé, E. Cuppen, U. Pesch, and B. Taebi. “Contested Technologies and Design for Values: The Case of Shale Gas.” Science and Engineering Ethics 22, no. 4 (2016): 1171–91. Dignum, Virginia. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Artificial Intelligence: Foundations, Theory, and Algorithms.


Cham: Springer International Publishing, 2019. https://doi.org/10.1007/9783-030-30371-6. Dimitriou, H. T., E. J. Ward, and P. G. Wright. “Mega Transport Projects – Beyond the ‘Iron Triangle’: Findings from the OMEGA Research Programme.” Progress in Planning 86 (November 1, 2013): 1–43. https://doi.org/10.1016/j. progress.2013.03.001. Dobson, A. Justice and the Environment: Conceptions of Environmental Sustainability and Theories of Distributive Justice. New York: Oxford University Press, 1998. www .questia.com/PM.qst?a=o&d=49013236. Doorn, Neelke. “How Can Resilient Infrastructures Contribute to Social Justice? Preface to the Special Issue of Sustainable and Resilient Infrastructure on Resilience Infrastructures and Social Justice.” Sustainable and Resilient Infrastructure 4, no. 3 (2019): 99–102. https://doi.org/10.1080/23789689.2019. 1574515. “Resilience Indicators: Opportunities for Including Distributive Justice Concerns in Disaster Management.” Journal of Risk Research 20, no. 6 (2017): 711–31. Doorn, Neelke, and S. O. Hansson. “Design for the Value of Safety.” In Handbook of Ethics, Values, and Technological Design, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 491–511. Dordrecht: Springer, 2015. “Should Probabilistic Design Replace Safety Factors?” Philosophy & Technology 24, no. 2 (2011): 151–68. Doorn, Neelke, D. Schuurbiers, I. Van de Poel, and M. E. Gorman, eds. Early Engagement and New Technologies: Opening Up the Laboratory. Dordrecht: Springer, 2014. Downer, J. “Disowning Fukushima: Managing the Credibility of Nuclear Reliability Assessment in the Wake of Disaster.” Regulation & Governance 8, no. 3 (2014): 287–309. https://doi.org/10.1111/rego.12029. “The Unknowable Ceiling of Safety: Three Ways That Nuclear Accidents Escape the Calculus of Risk Assessments.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 35–52. Cambridge: Cambridge University Press, 2015. Downey, Gary Lee, Juan C. Lucena, Barbara M. Moskal, Rosamond Parkhurst, Thomas Bigley, Chris Hays, Brent K. Jesiek, et al. “The Globally Competent Engineer: Working Effectively with People Who Define Problems Differently.” Journal of Engineering Education 95, no. 2 (2006): 107–22. https:// doi.org/10.1002/j.2168-9830.2006.tb00883.x. Duffey, Romney B. “Sustainable Futures Using Nuclear Energy.” Progress in Nuclear Energy 47, no. 1 (January 1, 2005): 535–43. https://doi.org/10.1016/j. pnucene.2005.05.054.


Efrati, Amir. “Uber Finds Deadly Accident Likely Caused by Software Set to Ignore

Objects on Road." The Information, May 7, 2018. www

.theinformation.com/articles/uber-finds-deadly-accident-likely-caused-bysoftware-set-to-ignore-objects-on-road. Elsinga, Marja, Joris Hoekstra, Mohamad Sedighi, and Behnam Taebi. “Toward Sustainable and Inclusive Housing: Underpinning Housing Policy as Design for Values.” Sustainability 12, no. 5 (January, 2020): 1920. https://doi.org/ 10.3390/su12051920. Engel-Hills, Penelope, Christine Winberg, and Arie Rip. “Ethics ‘Upfront’: Generating an Organizational Framework for a New University of Technology.” Science and Engineering Ethics 25, no. 6 (2019): 1705–20. https:// doi.org/10.1007/s11948–019-00140-0. European Commission. Artificial Intelligence, Robotics and “Autonomous” Systems. Brussels: European Commission – European Group on Ethics in Science and New Technologies, 2018. “On the Promotion of the Use of Energy from Renewable Sources and Amending and Subsequently Repealing Directives 2001/77/EC and 2003/30/ EC.2009.04.23.” Directive 2009/28/EC, 2009. https://eur-lex.europa.eu/legalcontent/EN/TXT/?uri=CELEX%3A52016PC0767. European Environment Agency. Late Lessons from Early Warnings: Science, Precaution, Innovation. EEA Report no. 1/2013. Copenhagen: European Environment Agency, 2013. Evans, Leonard. “Death in Traffic: Why Are the Ethical Issues Ignored?” Studies in Ethics, Law, and Technology 2, no. 1 (2008), art. 1. https://doi.org/10.2202/19416008.1014. Ewing, Jack, and Jad Mouawad. “Directors Say Volkswagen Delayed Informing Them of Trickery.” International Business, New York Times, October 23, 2015. www.nytimes.com/2015/10/24/business/international/directors-say-volkswa gen-delayed-informing-them-of-trickery.html. Fahlquist, Jessica Nihlén, and Sabine Roeser. “Nuclear Energy, Responsible Risk Communication and Moral Emotions: A Three Level Framework.” Journal of Risk

Research 18, no. 3 (2015): 333–46. https://doi.org/10.1080/

13669877.2014.940594. Fan, Yinghui, Xingwei Zhang, and Xinlu Xie. “Design and Development of a Course in Professionalism and Ethics for CDIO Curriculum in China.” Science and Engineering Ethics 21, no. 5 (2014): 1381–89. https://doi.org/ 10.1007/s11948–014-9592-2.


Fischhoff, Baruch. “The Realities of Risk–Cost–Benefit Analysis.” Science 350, no. 6260 (October 30, 2015): aaa6516. https://doi.org/10.1126/science.aaa6516. Flanagan, M., D. C. Howe, and H. Nissenbaum. “Embodying Values in Technology. Theory and Practice.” In Information Technology and Moral Philosophy, edited by J. van den Hoven and J. Weckert, 322–53. Cambridge: Cambridge University Press, 2008. Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. “AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” Minds and Machines 28, no. 4 (December 1, 2018): 689–707. https://doi.org/10.1007/s11023–018-9482-5. Flynn, R. “Risk and the Public Acceptance of New Technologies.” In Risk and the Public Acceptance of New Technologies, edited by R. Flynn and P. Bellaby, 1–23. New York: Palgrave Macmillan, 2007. Fogg, B. J. “Now Is Your Chance to Decide What They Will Persuade Us to Do – and How They’ll Do It.” Communications of the ACM 42, no. 5 (1999): 27–29. Foot, P. “The Problem of Abortion and the Doctrine of Double Effect.” Oxford Review 5 (1967): 5–15. Friedman, B. “Value-Sensitive Design.” Interactions 3, no. 6 (1996): 16–23. Friedman, B., P. H. Kahn, Jr. and A. Borning. Value Sensitive Design: Theory and Methods. University of Washington Technical Report 02-12-01. Seattle: University of Washington, 2002. Gajraj, Randhir S., Gajendra P. Singh, and Ashwani Kumar. “Third-Generation Biofuel: Algal Biofuels as a Sustainable Energy Source.” In Biofuels: Greenhouse Gas Mitigation and Global Warming: Next Generation Biofuels and Role of Biotechnology, edited by Ashwani Kumar, Shinjiro Ogita, and Yuan-Yeu Yau, 307–25. New Delhi: Springer India, 2018. https://doi.org/10.1007/978-81-3223763-1_17. Gardoni, P., and C. Murphy. “Nuclear Energy, the Capability Approach, and the Developing World.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 216–30. Cambridge: Cambridge University Press, 2015. Gips, J. “Toward the Ethical Robot.” In Android Epistemology, edited by K. M. Ford, C. Glymour, and P. Hayes, 243–52. Cambridge, MA: MIT Press, 1994. Godard, O. “Justification, Limitation, and ALARA as Precursors of the Precautionary Principles.” In Ethics and Radiological Protection, edited by G. Eggermont and B. Feltz, 133–46. Louvain-la-Neuve: Academia-Bruylant, 2008.


Goldberg, S. M., and R. Rosner. Nuclear Reactors: Generation to Generation. Cambridge, MA: American Academy of Arts and Sciences, 2011. Goodall, Noah J. “Away from Trolley Problems and toward Risk Management.” Applied Artificial Intelligence 30, no. 8 (September 13, 2016): 810–21. https://doi. org/10.1080/08839514.2016.1229922. “Ethical Decision Making during Automated Vehicle Crashes.” Transportation Research Record 2424, no. 1 (January 1, 2014): 58–65. https://doi.org/10.3141/ 2424-07. Goodin, R. “Ethical Principles for Environmental Protection.” In Environmental Philosophy, edited by R. Elliot and A. Gare, 411–26. St. Lucia: University of Queensland Press, 1983. Greenpeace. “Nuclear Power, Unsustainable, Uneconomic, Dirty and Dangerous: A Position Paper.” UN Energy for Sustainable Development, Commission on Sustainable Development, CSD-14, New York, 2006. Gruen, Lori. Ethics and Animals: An Introduction. Cambridge: Cambridge University Press, 2011. Grunwald, A. “Technology Assessment and Design for Values.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 67–86. Dordrecht: Springer, 2015. “Technology

Assessment for Responsible Innovation." In Responsible

Innovation 1: Innovative Solutions for Global Issues, edited by J. van den Hoven, N. Doorn, T. Swierstra, B. J. Koops, and H. Romijn, 15–31. Cham: Springer, 2014. “Technology Policy between Long-Term Planning Requirements and ShortRanged

Acceptance Problems: New Challenges for Technology

Assessment.” In Vision Assessment: Shaping Technology in 21st Century Society, edited by J. J. Grin, 99–147. Berlin: Springer, 2000. Grunwald, A., and C. Rösch. “Sustainability Assessment of Energy Technologies: Towards an Integrative Framework.” Energy, Sustainability and Society 1, no. 1 (2011): 1–10. Gui, M. M., K. T. Lee, and S. Bhatia. “Feasibility of Edible Oil vs. Non-Edible Oil vs. Waste Edible Oil as Biodiesel Feedstock.” Energy 33, no. 11 (November 1, 2008): 1646–53. https://doi.org/10.1016/j.energy.2008.06.002. Gunkel, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA: MIT Press, 2012. Halkes, Job. “De A4, het duurste stukje snelweg van Nederland.” Metro Nieuws, November 11, 2015. www.metronieuws.nl/binnenland/rotterdam/2015/11/ de-a4-het-duurste-stukje-snelweg-van-nederland.


Hammer, W. Product Safety Management and Engineering. Englewood Cliffs, NJ: Prentice-Hall, 1980. Hanemann, W. M. “Willingness to Pay and Willingness to Accept: How Much Can They Differ?” The American Economic Review 81, no. 3 (1991): 635–47. Hansson, S. O. “An Agenda for the Ethics of Risk.” In The Ethics of Technological Risk, edited by L. Asveld and S. Roeser, 11–23. London: Earthscan, 2009. “Ethical Criteria of Risk Acceptance.” Erkenntnis 59, no. 3 (2003): 291–309. “Informed Consent out of Context.” Journal of Business Ethics 63, no. 2 (2006): 149–54. “Philosophical Problems in Cost–Benefit Analysis.” Economics and Philosophy 23, no. 2 (2007): 163–83. “The Precautionary Principle.” In Handbook of Safety Principles, edited by N. Möller, S. O. Hansson, J. E. Holmberg, and C. Rollenhagen, 258–83. John Wiley & Sons Inc., 2018. “Risk and Safety in Technology.” In Philosophy of Technology and Engineering Sciences, edited by A. Meijers, 1069–1102. Amsterdam: Elsevier, 2009. Harrington, Jason Edward. “Dear America, I Saw You Naked.” Politico Magazine, January 30, 2014. www.politico.com/magazine/story/2014/01/tsa-screenerconfession-102912.html. Harris, C. E., M. Pritchard, M. J. Rabins, R. James, and E. Englehardt. Engineering Ethics: Concepts and Cases. 5th ed. Boston: Wadsworth Cengage Learning, 2014. Hawkins, Andrew J. “Serious Safety Lapses Led to Uber’s Fatal Self-Driving Crash, New Documents Suggest.” Verge, November 6, 2019. Heikoop, Daniël D., Marjan Hagenzieker, Giulio Mecacci, Simeon Calvert, Filippo Santoni de Sio, and Bart van Arem. “Human Behaviour with Automated Driving Systems: A Quantitative Framework for Meaningful Human Control.” Theoretical Issues in Ergonomics Science 20, no. 6 (November 2, 2019): 711–30. https://doi.org/10.1080/1463922X.2019.1574931. Henwood, K., and N. Pidgeon. “Gender, Ethical Voices, and UK Nuclear Energy Policy in the Post-Fukushima Era.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 67–85. Cambridge: Cambridge University Press, 2015. Hevelke, Alexander, and Julian Nida-Rümelin. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21, no. 3 (2015): 619–30. https://doi.org/10.1007/s11948–014-9565-5. Hillerbrand, R. “The Role of Nuclear Energy in the Future Energy Landscape: Energy Scenarios, Nuclear Energy, and Sustainability.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 231–49. Cambridge: Cambridge University Press, 2015.


Hollnagel, E. “Resilience : The Challenge of the Unstable.” In Resilience Engineering: Concepts and Precepts, edited by E. Hollnagel, D. D. Woods, and N. C. Leveson, 9–18. Aldershot: Ashgate, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu: diva-38166. Horn, Michael. Testimony of Michael Horn, President and CEO of Volkswagen Group of America, Inc. The House Committee on Energy and Commerce Subcommittee on Oversight and Investigations, 2015. https://docs.house.gov/ meetings/IF/IF02/20151008/104046/HHRG-114-IF02-Wstate-HornM20151008.pdf?rel=mas. IAEA. The Fukushima Daiichi Accident Report by the Director General. Vienna: IAEA, 2015. IAEA. IAEA Safety Glossary: Terminology Used in Nuclear Safety and Radiation Protection. Vienna: IAEA, 2007. Implications of Partitioning and Transmutation in Radioactive Waste Management. Vienna: IAEA, 2004. www.iaea.org/publications/7112/implications-of-parti tioning-and-transmutation-in-radioactive-waste-management. Nuclear Power and Sustainable Development. Vienna: IAEA, 2006. Safety Related Terms for Advanced Nuclear Plants. Vienna: IAEA, 1991. IAEA, Euratom, FAO, IAEA, ILO, IMO, OECD-NEA, PAHO, UNEP, and WHO. Fundamental Safety Principles. IAEA Safety Standards Series, vol. SF1. Vienna: Euratom, FAO, IAEA, ILO, IMO, OECD-NEA, PAHO, UNEP, and WHO, 2006. ICRP. The 2007 Recommendations of the International Commission on Radiological Protection. ICRP Publication 103. Annals of the ICRP 37, nos. 2–4 (2007). Cost–Benefit Analysis in the Optimization of Radiation Protection. ICRP Publication 37. Annals of the ICRP 10, nos. 2–3 (1983). Ethical Foundations of the System of Radiological Protection. ICRP Publication 138. Annals of the ICRP 47, no. 1 (2018). Protection of the Environment under Different Exposure Situations. ICRP Publication 124. Annals of the ICRP 43, no. 1 (2014). 1990 Recommendations of the International Commission on Radiological Protection. ICRP Publication 60. Annals of the ICRP 21, nos. 1–3 (1991). IJsselsteijn, Wijnand, Yvonne de Kort, Cees Midden, Berry Eggen, and Elise van den Hoven. “Persuasive Technology for Human Well-Being: Setting the Scene.” In Persuasive Technology, edited by Wijnand A. IJsselsteijn, Yvonne A. W. de Kort, Cees Midden, Berry Eggen, and Elise van den Hoven, 1–5. Lecture Notes in Computer Science. Berlin Heidelberg: Springer, 2006. “Iran Quake Survivors Plead for Help.” Middle East, BBC News, November 14, 2017. www.bbc.com/news/world-middle-east-41977714. International Risk Governance Council. Planning Adaptive Risk Regulation. Lausanne: International Risk Governance Council, 2016.


Irvine, D. “Airport Officials Get X-ray Vision.” CNN, March 6, 2007. http://edition .cnn.com/2007/TRAVEL/03/06/bt.backscatterxray/index.html. Ishizuka, H. “Official Storage of Contaminated Soil Begins in Fukushima.” Asahi Shimbun, October 29, 2017. Jacquet, J., and D. Jamieson. “Soft but Significant Power in the Paris Agreement.” Nature Climate Change 6, no. 7 (July, 2016): 643–46. https://doi.org/10.1038/ nclimate3006. Jamieson, D. Ethics and the Environment: An Introduction. Cambridge: Cambridge University Press, 2008. Janaun, Jidon, and Naoko Ellis. “Perspectives on Biodiesel as a Sustainable Fuel.” Renewable and Sustainable Energy Reviews 14, no. 4 (May 1, 2010): 1312–20. https://doi.org/10.1016/j.rser.2009.12.011. Jenkins, K., D. McCauley, R. Heffron, H. Stephan, and R. Rehner. “Energy Justice: A Conceptual Review.” Energy Research & Social Science 11 (2016): 174–82. Jenkins, Kirsten E. H., Shannon Spruit, Christine Milchram, Johanna Höffken, and Behnam Taebi. “Synthesizing Value Sensitive Design, Responsible Research and Innovation, and Energy Justice: A Conceptual Review.” Energy Research & Social Science 69 (November 1, 2020): 101727. https://doi.org/ 10.1016/j.erss.2020.101727. Jing, Shan, and Neelke Doorn. “Engineers’ Moral Responsibility: A Confucian Perspective.” Science and Engineering Ethics 26, no. 1 (2020): 233–53. https:// doi.org/10.1007/s11948–019-00093-4. Johnson, Deborah G. “Computer Systems: Moral Entities but Not Moral Agents.” Ethics and Information Technology 8, no. 4 (November 1, 2006): 195–204. https:// doi.org/10.1007/s10676–006-9111-5. Jordan, S. R., and P. W. Gray. “Responsible Conduct of Research Training for Engineers: Adopting Research Ethics Training for Engineering Graduate Students.” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, 213–28. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. Kasperkevic, Jana, and Dominic Rushe. “Head of VW America Says He Feels Personally Deceived.” Business, Guardian, October 8, 2015. www.theguardian .com/business/live/2015/oct/08/head-of-vw-america-testifies-us-congressemissions-live-updates. Kastenberg, W. E. “Ethics, Risk, and Safety Culture: Reflections on Fukushima and Beyond.” Journal of Risk Research 18, no. 3 (2015): 304–16. https://doi. org/10.1080/13669877.2014.896399.


Keller, Paul E., Douglas L. McMakin, David M. Sheen, A. D. McKinnon, and Jay W. Summet. “Privacy Algorithm for Airport Passenger Screening Portal.” In Applications and Science of Computational Intelligence III, edited by Kevin L. Priddy, Paul E. Keller, and David B. Fogel, 4055:476–83. Orlando, FL: SPIE, 2000. https://doi.org/10.1117/12.380602. Keller, W., and M. Modarres. “A Historical Overview of Probabilistic Risk Assessment Development and Its Use in the Nuclear Power Industry: A Tribute to the Late Professor Norman Carl Rasmussen.” Reliability Engineering & System Safety 89, no. 3 (2005): 271–85. Kelman, S. “Cost–Benefit Analysis: An Ethical Critique.” Regulations (January– February, 1981): 33–40. Kermisch, C, and B. Taebi. “Sustainability, Ethics and Nuclear Energy: Escaping the Dichotomy.” Sustainability 9, no. 3 (2017): 446. Klein, A. “Credit Denial in the Age of AI.” Brookings Institution, 2019. www .brookings.edu/research/credit-denial-in-the-age-of-ai/. Klinke, A., and O. Renn. “Adaptive and Integrative Governance on Risk and Uncertainty.” Journal of Risk Research 15, no. 3 (2012): 273–92. https://doi. org/10.1080/13669877.2011.636838. Kockelman, Kara Maria, and Young-Jun Kweon. “Driver Injury Severity: An Application of Ordered Probit Models.” Accident Analysis & Prevention 34, no. 3 (May 1, 2002): 313–21. https://doi.org/10.1016/S0001–4575(01)00028-8. Kroes, P., and P. P. Verbeek. “Introduction: The Moral Status of Technical Artefacts.” In The Moral Status of Technical Artefacts, edited by P. Kroes and P. P. Verbeek, 1–9. Dordrecht: Springer, 2014. Krütli, P., K. Törnblom, I. Wallimann-Helmer, and M. Stauffacher. “Distributive versus Procedural Justice in Nuclear Waste Repository Siting.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 119–40. Cambridge: Cambridge University Press, 2015. Kultgen, John, and Robin Alexander-Smith. “The Ideological Use of Professional Codes.” Business & Professional Ethics Journal 1, no. 3 (1982): 53–73. Kulynych, B., R. Overdorf, C. Troncoso, and S. Gürses. “POTs: Protective Optimization Technologies.” ArXiv (December, 2019). Ladd, John. “Bhopal: An Essay on Moral Responsibility and Civic Virtue.” Journal of Social Philosophy 22, no. 1 (1991): 73–91. https://doi.org/10.1111/j.14679833.1991.tb00022.x. “The Quest for a Code of Professional Ethics: An Intellectual and Moral Confusion.” In Ethical Issues in Engineering, edited by D. G. Johnson, 130–36. Englewood Cliffs, NJ: Prentice Hall, 1991.


Lambert, Fred. “Tesla’s Latest Autopilot Update Comes with More ‘Nag’ to Make Sure Drivers Keep Their Hands on the Wheel.” Electrek (blog), June 11, 2018. https://electrek.co/2018/06/11/tesla-autopilot-update-nag-hands-wheel/. Landström, C., and A. Bergmans. “Long-Term Repository Governance: A SocioTechnical Challenge.” Journal of Risk Research 18, nos. 3–4 (2015): 378–91. https://doi.org/10.1080/13669877.2014.913658. Laufer, William S. “Social Accountability and Corporate Greenwashing.” Journal of Business Ethics 43, no. 3 (2003): 253–61. https://doi.org/10.1023/ A:1022962719299. Lehtveer, M., and F. Hedenus. “Nuclear Power as a Climate Mitigation Strategy – Technology and Proliferation Risk.” Journal of Risk Research 18, no. 3 (2015): 273–90. https://doi.org/10.1080/13669877.2014.889194. Leung, Dennis Y. C., Xuan Wu, and M. K. H. Leung. “A Review on Biodiesel Production Using Catalyzed Transesterification.” Applied Energy 87, no. 4 (April 1, 2010): 1083–95. https://doi.org/10.1016/j.apenergy.2009.10.006. Liao, Kuei-Hsien. “A Theory on Urban Resilience to Floods – A Basis for Alternative Planning Practices.” Ecology and Society 17, no. 4 (2012). www .jstor.org/stable/26269244. Lin, P. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving. Technical, Legal and Social Aspects, edited by M. Maurer, J. C. Gerdes, B. Lenz, and H. Winner, 69–85. Berlin and Heidelberg: Springer Open, 2015. Löfquist, L. “After Fukushima: Nuclear Power and Societal Choice.” Journal of Risk Research

18,

no.

3

(2015):

291–303.

https://doi.org/10.1080/

13669877.2013.841730. Loyalka, Prashant, Martin Carnoy, Isak Froumin, Raffiq Dossani, J. B. Tilak, and Po Yang. “Factors Affecting the Quality of Engineering Education in the Four Largest Emerging Economies.” Higher Education 68, no. 6 (December 1, 2014): 977–1004. https://doi.org/10.1007/s10734-014-9755-8. Luegenbiehl, H. C. “Ethical Autonomy and Engineering in a Cross-Cultural Context.” Techné: Research in Philosophy and Technology 8, no. 1 (March 1, 2004): 57–78. https://doi.org/10.5840/techne20048110. “Ethical Principles for Engineers in a Global Environment.” In Philosophy and Engineering, edited by I. Van de Poel and D. Goldberg, 147–59. Dordrecht: Springer, 2010. Luegenbiehl, H. C., and R. F. Clancy. Global Engineering Ethics. Oxford: Elsevier Butterworth-Heinemann, 2017. Lynn, Leonard, and Hal Salzman. “Engineers, Firms and Nations: Ethical Dilemmas in the New Global Environment.” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir,


Charles E. Harris, Jr., and Eyad Masad, 15–33. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. Malle, Bertram F., Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. “Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents.” In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human–Robot Interaction, 117–24. New York: ACM, 2015. https://doi.org/10.1145/2696454.2696458. Martin, W. M., and R. Schinzinger. Ethics in Engineering. New York: McGraw-Hill, 1989. Mata, Teresa M., António A. Martins, and Nidia S. Caetano. “Microalgae for Biodiesel Production and Other Applications: A Review.” Renewable and Sustainable Energy Reviews 14, no. 1 (January 1, 2010): 217–32. https://doi.org/10.1016/j.rser.2009.07.020. Matthias, Andreas. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology 6, no. 3 (September 1, 2004): 175–83. https://doi.org/10.1007/s10676-004-3422-1. McGinn, Robert. The Ethical Engineer: Contemporary Concepts and Cases. Princeton, NJ, and Oxford: Princeton University Press, 2018. Meadows, D. H., D. L. Meadows, J. Randers, and W. W. Behrens. The Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind. New York: Universe Books, 1972. Mecacci, Giulio, and Filippo Santoni de Sio. “Meaningful Human Control as Reason-Responsiveness: The Case of Dual-Mode Vehicles.” Ethics and Information Technology (December 6, 2019). https://doi.org/10.1007/s10676-019-09519-w. Mill, J. S. “On Liberty.” In On Liberty and Other Essays, edited by J. Gray, 1–128. New York: Oxford University Press, 1998. First published in 1859. “Utilitarianism.” In On Liberty and Other Essays, edited by J. Gray, 129–201. New York: Oxford University Press, 1998. First published in 1861. Minarick, J. W., and C. A. Kukielka. Precursors to Potential Severe Core Damage Accidents, 1969–1979: A Status Report. Oak Ridge, TN: US Nuclear Regulatory Commission, Oak Ridge National Laboratory, 1982. Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3, no. 2 (December 2016): 1–21. https://doi.org/10.1177/2053951716679679. Möller, Niklas, and Sven Ove Hansson. “Principles of Engineering Safety: Risk and Uncertainty Reduction.” Reliability Engineering & System Safety 93, no. 6 (June 1, 2008): 798–805. https://doi.org/10.1016/j.ress.2007.03.031.


Morimoto, Risako, John Ash, and Chris Hope. “Corporate Social Responsibility Audit: From Theory to Practice.” Journal of Business Ethics 62, no. 4 (2005): 315–25. https://doi.org/10.1007/s10551-005-0274-5. Morose, Gregory. “The 5 Principles of ‘Design for Safer Nanotechnology.’” Journal of Cleaner Production 18, no. 3 (February 1, 2010): 285–89. https://doi.org/10.1016/j.jclepro.2009.10.001. Mouter, Niek, Jan Anne Annema, and Bert van Wee. “Attitudes towards the Role of Cost–Benefit Analysis in the Decision-Making Process for Spatial-Infrastructure Projects: A Dutch Case Study.” Transportation Research Part A: Policy and Practice 58 (December 1, 2013): 1–14. https://doi.org/10.1016/j.tra.2013.10.006. Mulder, Max, Clark Borst, and Marinus M. van Paassen. “Improving Operator Situation Awareness through Ecological Interfaces: Lessons from Aviation.” In Computer–Human Interaction Research and Applications, edited by Andreas Holzinger, Hugo Plácido Silva, and Markus Helfert, 20–44. Cham: Springer International Publishing, 2019. https://doi.org/10.1007/978-3-030-32965-5_2. Murphy, Colleen, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, eds. Engineering Ethics for a Globalized World. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. “Introduction.” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, 1–11. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. Murrugarra, R. I., and W. A. Wallace. “A Cross Cultural Comparison of Engineering Ethics Education: Chile and United States.” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, 189–211. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. National Research Council. Going the Distance? The Safe Transport of Spent Nuclear Fuel and High-Level Radioactive Waste in the United States. Washington, DC: National Research Council (NRC), 2006. www.nap.edu/openbook.php?record_id=4912&page=R1. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Research Council, National Academy Press, 1996. New Zealand Government. “Guide to Social Cost Benefit Analysis.” The Treasury, New Zealand Government, July 27, 2015. https://treasury.govt.nz/publications/guide/guide-social-cost-benefit-analysis.


Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018. Nolan, D. P. Handbook of Fire and Explosion Protection Engineering Principles: For Oil, Gas, Chemical and Related Facilities. 3rd ed. Amsterdam: William Andrew, 2014. Nolt, J. “Non-anthropocentric Nuclear Energy Ethics.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 157–75. Cambridge: Cambridge University Press, 2015. “Notre-Dame-des-Landes: Does France Really Need a New Airport Hub in the West?” The Local, January 17, 2018. www.thelocal.fr/20180117/does-france-really-need-a-new-airport-hub-in-the-west. Nowotny, Janusz, Tsuyoshi Hoshino, John Dodson, Armand J. Atanacio, Mihail Ionescu, Vanessa Peterson, Kathryn E. Prince, et al. “Towards Sustainable Energy: Generation of Hydrogen Fuel Using Nuclear Energy.” International Journal of Hydrogen Energy 41, no. 30 (August 10, 2016): 12812–25. https://doi.org/10.1016/j.ijhydene.2016.05.054. NSPE. NSPE Ethics Reference Guide. Alexandria, VA: National Society of Professional Engineers (NSPE), 2017. Nuclear Accident Independent Investigation Commission. The National Diet of Japan: The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission. Executive Summary. Japan: Nuclear Accident Independent Investigation Commission (NAIIC), 2012. Nuclear Regulatory Commission. Reactor Safety Study. An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. WASH-1400-MR; NUREG-75/014MR. Washington, DC: Nuclear Regulatory Commission, 1975. Safety Goals for the Operations of Nuclear Power Plants: Policy Statement; Republication. 51 FR 30028. Washington, DC: Nuclear Regulatory Commission (NRC), 1986. Nuffield Council on Bioethics. Biofuel: Ethical Issues. Abingdon, Oxfordshire: Nuffield Council on Bioethics, 2011. Nyborg, Karine. “Some Norwegian Politicians’ Use of Cost–Benefit Analysis.” Public Choice 95, no. 3 (June 1, 1998): 381–401. https://doi.org/10.1023/A:1005012509068. Nyholm, Sven. “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci.” Science and Engineering Ethics 24, no. 4 (2018): 1201–19. https://doi.org/10.1007/s11948-017-9943-x. “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II.” Philosophy Compass 13, no. 7 (2018): e12506. https://doi.org/10.1111/phc3.12506.


O’Brien, G., and P. O.’ Keefe. “The Future of Nuclear Power in Europe: A Response.” International Journal of Environmental Studies 63, no. 2 (April 1, 2006): 121–30. https://doi.org/10.1080/00207230600661619. Odgaard, T., C. E. Kelly, and J. Laird. “Current Practice in Project Appraisal in Europe: Deliverable 1 of the HAETCO Project,” 2005. https://eprints.whiterose.ac.uk/2502/1/Current_practice_in_project_appraisal_uploadable.pdf. OECD. Addressing Societal Challenges Using Interdisciplinary Research. Paris: Organization for Economic Co-operation and Development (OECD), 2020. Cost–Benefit Analysis and the Environment: Further Development and Policy Use. Paris: OECD Publishing, 2018. https://read.oecd-ilibrary.org/environment/costbenefit-analysis-and-the-environment_9789264085169-en#page9. Cost–Benefit Analysis and the Environment: Recent Developments. Paris: OECD Publishing, 2006. https://doi.org/10.1787/9789264010055-en. Guidelines for Resilience Systems Analysis. Paris: Organization for Economic Cooperation and Development (OECD), 2014. OECD Guidelines for Multinational Enterprises. Paris: Organization for Economic Co-operation and Development (OECD), 2008. OECD Principles of Corporate Governance. Paris: OECD, 2004. Ohnsman, Alan. “Waymo CEO on Uber Crash: Our Self-Driving Car Would Have Avoided Pedestrian.” Forbes, March 24, 2018. www.forbes.com/sites/alanohns man/2018/03/24/waymo-ceo-on-uber-crash-our-self-driving-car-would-haveavoided-pedestrian/. O’Kane, Sean. “Volkswagen America’s CEO Blames Software Engineers for Emissions Cheating Scandal.” Verge, October 8, 2015. www.theverge.com/ 2015/10/8/9481651/volkswagen-congressional-hearing-diesel-scandal-fault. “VW Executive Given the Maximum Prison Sentence for His Role in Dieselgate.” Verge, December 6, 2017. www.theverge.com/2017/12/6/ 16743308/volkswagen-oliver-schmidt-sentence-emissions-scandal-prison. Oosterlaken, Ilse. “Applying Value Sensitive Design (VSD) to Wind Turbines and Wind Parks: An Exploration.” Science and Engineering Ethics 21, no. 2 (2015): 359–79. Owen, R., J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and D. Guston. “A Framework for Responsible Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 27–50. Chichester: John Wiley & Sons, 2013. Panikkar, B., and R. Sandler. “Nuclear Energy, Justice and Power: The Case of Pilgrim Nuclear Power Station License Renewal.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 141–56. Cambridge: Cambridge University Press, 2015.


Pasquale, F. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, 2015. Patil, Vishwanath, Khanh-Quang Tran, and Hans Ragnar Giselrød. “Towards Sustainable Production of Biofuels from Microalgae.” International Journal of Molecular Sciences 9, no. 7 (July, 2008): 1188–95. https://doi.org/10.3390/ ijms9071188. Perrow, C. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ: Princeton University Press, 1999. Pesch, U., A. Correljé, E. Cuppen, and B. Taebi. “Energy Justice and Controversies: Formal and Informal Assessment in Energy Projects.” Energy Policy 109 (2017): 825–34. Pidgeon, N., K. Parkhill, A. Corner, and N. Vaughan. “Deliberating Stratospheric Aerosols for Climate Geoengineering and the SPICE Project.” Nature Climate Change 3, no. 5 (May, 2013): 451–57. Pilgrim, S., and M. Harvey. “Battles over Biofuels in Europe: NGOs and the Politics of Markets.” Sociological Research Online 15, no. 3 (2010): 45–60. https://doi.org/ 10.5153/sro.2192. Pinzi, S., I. L. Garcia, F. J. Lopez-Gimenez, M. D. Luque de Castro, G. Dorado, and M. P. Dorado. “The Ideal Vegetable Oil-Based Biodiesel Composition: A Review of Social, Economical and Technical Implications.” Energy & Fuels 23, no. 5 (May 21, 2009): 2325–41. https://doi.org/10.1021/ef801098a. Poinssot, C., S. Bourg, S. Grandjean, and B. Boullis. “The Sustainability, a Relevant Approach for Defining the Roadmap for Future Nuclear Fuel Cycles.” Procedia Chemistry 21 (2016): 536–44. Posner, E. A., and D. Weisbach. Climate Change Justice. Princeton, NJ: Princeton University Press, 2010. Priemus, H., B. Flyvbjerg, and B. van Wee, eds. Decision-Making on Mega-projects: Cost–Benefit Analysis, Planning and Innovation. Cheltenham and Northampton: Edward Elgar, 2008. Proske, D. “Comparison of Computed and Observed Probabilities of Failure and Core Damage Frequencies.” In 14th International Probabilistic Workshop, edited by Robby Caspeele, Luc Taerwe, and Dirk Proske, 109–22. Cham: Springer International Publishing, 2017. https://doi.org/10.1007/978-3-319-47886-9_8. Puzzanghera, J., and J. Hirsch. “VW Exec Blames ‘a Couple of’ Rogue Engineers for Emissions Scandal.” Los Angeles Times, October 28, 2015. www.latimes .com/business/autos/la-fi-hy-vw-hearing-20151009-story.html. Radder, H. “Why Technologies Are Inherently Normative.” In Philosophy of Technology and Engineering Sciences, edited by A. Meijer, 9:887–921.


Handbook of the Philosophy of Science. Amsterdam and Boston: Elsevier, 2009. https://doi.org/10.1016/B978-0-444-51667-1.50037-9. Randall, Alan. Risk and Precaution. Cambridge: Cambridge University Press, 2011. Raz, J. The Morality of Freedom. Oxford: Oxford University Press, 1986. Reason, James. “Safety Paradoxes and Safety Culture.” Injury Control and Safety Promotion 7, no. 1 (March 1, 2000): 3–14. https://doi.org/10.1076/1566-0974 (200003)7:1;1-V;FT003. Robaey, Z., and A. Simons. “Responsible Management of Social Experiments: Challenges for Policymaking.” In Responsible Innovation 2: Concepts and Approaches, edited by E. J. Koops, I. Oosterlaken, H. A. Romijn, T. E. Swierstra, and J. van den Hoven, 87–103. Berlin: Springer International Publishing, 2015. Robbins, Scott. “A Misdirected Principle with a Catch: Explicability for AI.” Minds and Machines, October 15, 2019. https://doi.org/10.1007/s11023–019-09509-3. Robbins, Scott, and Adam Henschke. “The Value of Transparency: Bulk Data and Authoritarianism.” Surveillance & Society 15, nos. 3–4 (August 9, 2017): 582–89. https://doi.org/10.24908/ss.v15i3/4.6606. Rosenthal, Elisabeth. “As Biofuel Demand Grows, So Do Guatemala’s Hunger Pangs.” Science, New York Times, January 5, 2013. www.nytimes.com/2013/ 01/06/science/earth/in-fields-and-markets-guatemalans-feel-squeeze-of-bio fuel-demand.html. Rosen-Zvi, I. “You Are Too Soft: What Can Corporate Social Responsibility Do for Climate Change.” Minnesota Journal of Law, Science & Technology 12 (2011): 527–70. “Rouhani Vows Action over Quake Collapses.” Middle East, BBC News, November 14, 2017. www.bbc.com/news/world-middle-east-41988176. Royal Academy of Engineering. Sustainability of Liquid Biofuels. London: Royal Academy of Engineering, 2017. Royal Society. Machine Learning: The Power and Promise of Computers That Learn by Example. London: Royal Society, 2017. Rudolph, Jared. “Consequences and Limits: A Critique of Consequentialism.” Macalester Journal of Philosophy 17, no. 1 (March 28, 2011): 64–76. Sandin, P. “Dimensions of the Precautionary Principle.” Human and Ecological Risk Assessment: An International Journal 5, no. 5 (1999): 889–907. https://doi.org/10 .1080/10807039991289185. Santoni de Sio, Filippo, and Jeroen van den Hoven. “Meaningful Human Control over Autonomous Systems: A Philosophical Account.” Frontiers in Robotics and AI 5, art. 15 (2018): 1–14. https://doi.org/10.3389/frobt.2018.00015.


Scheffler, S. “The Role of Consent in the Legitimation of Risky Activity.” In To Breathe Freely: Risk, Consent, and Air, edited by M. Gibson, 75–88. Totowa, NJ: Rowman & Littlefield, 1985. Schmidtz, D. “A Place for Cost–Benefit Analysis*.” Philosophical Issues 11, no. 1 (2001): 148–71. https://doi.org/10.1111/j.1758-2237.2001.tb00042.x. Schofield, Hugh. “Battle for a New Nantes Airport.” Europe, BBC News, November 27, 2012. www.bbc.com/news/world-europe-20502702. Schulz, T. L. “Westinghouse AP1000 Advanced Passive Plant.” Nuclear Engineering and Design 236, no. 14–16 (2006): 1547–57. Seeger, M. W., and J. H. Steven. “Legal versus Ethical Arguments: Contexts for Corporate Social Responsibility.” In The Debate over Corporate Social Responsibility, edited by Steven K. May, George Cheney, and Juliet Roper, 155–66. Oxford and New York: Oxford University Press, 2007. Shapiro, S. The Evolution of Cost–Benefit Analysisin U.S. Regulatory Decision-Making. Jerusalem Papers in Regulation & Governance, Working Paper no. 5. New Brunswick, NJ: Jerusalem Forum on Regulation & Governance, 2010. Sharkey, Amanda. “Can Robots Be Responsible Moral Agents? And Why Should We Care?” Connection Science 29, no. 3 (July 3, 2017): 210–16. https://doi.org/10 .1080/09540091.2017.1313815. Shrader-Frechette, K. “Rights to Know and the Fukushima, Chernobyl, and Three Mile Island Accidents.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 53–66. Cambridge: Cambridge University Press, 2015. Shrader-Frechette, K. What Will Work: Fighting Climate Change with Renewable Energy, Not Nuclear Power. New York: Oxford University Press, 2011. Singh, S. P., and Dipti Singh. “Biodiesel Production through the Use of Different Sources and Characterization of Oils and Their Esters as the Substitute of Diesel: A Review.” Renewable and Sustainable Energy Reviews 14, no. 1 (2010): 200–16. Sinnott-Armstrong, W. “Consequentialism.” In The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), edited by E. N. Zalta. https://plato.stanford .edu/archives/win2015/entries/consequentialism. Snow, Jacob. “Amazon’s Face Recognition Falsely Matched 28 Members of Congress with Mugshots.” American Civil Liberties Union, July 26, 2018. www .aclu.org/blog/privacy-technology/surveillance-technologies/amazons-facerecognition-falsely-matched-28. Sovacool, B. K. Energy and Ethics: Justice and the Global Energy Challenge. New York: Palgrave Macmillan, 2013. Sovacool, B. K., and M. H. Dworkin. Global Energy Justice: Problems, Principles, and Practices. Cambridge: Cambridge University Press, 2014.


Spahn, Andreas. “And Lead Us (Not) into Persuasion . . . ? Persuasive Technology and the Ethics of Communication.” Science and Engineering Ethics 18, no. 4 (2012): 633–50. https://doi.org/10.1007/s11948–011-9278-y. Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62–77. Steijn, W. M. P., J. N. Van Kampen, D. Van der Beek, J. Groeneweg, and P. H. A. J. M. Van Gelder. “An Integration of Human Factors into Quantitative Risk Analysis Using Bayesian Belief Networks towards Developing a ‘QRA+.’” Safety Science 122 (February 1, 2020): 104514. https://doi.org/10.1016/j.ssci .2019.104514. Stewart, Jack. “Tesla’s Self-Driving Autopilot Involved in Another Deadly Crash.” Wired, March 31, 2018. www.wired.com/story/tesla-autopilot-self-drivingcrash-california/. Stilgoe, Jack. Who’s Driving Innovation? New Technologies and the Collaborative State. Cham: Palgrave Macmillan, 2020. Stilgoe, Jack, Richard Owen, and Phil Macnaghten. “Developing a Framework for Responsible Innovation.” Research Policy 42, no. 9 (November 1, 2013): 1568–80. https://doi.org/10.1016/j.respol.2013.05.008. Sunstein, C. R. “Cost Benefit Analysis and the Environment.” Ethics 115 (2005): 351–85. Surgrue, Noreen M., and Timothy G. McCarthy. “Engineering Decisions in a Global Context and Social Choice.” In Engineering Ethics for a Globalized World, edited by Colleen Murphy, Paolo Gardoni, Hassan Bashir, Charles E. Harris, Jr., and Eyad Masad, 79–90. Philosophy of Engineering and Technology. Cham: Springer International Publishing, 2015. Synolakis, C., and U. Kânoğlu. “The Fukushima Accident Was Preventable.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 373, no. 2053 (October 28, 2015): 20140379. https://doi .org/10.1098/rsta.2014.0379. Taebi, Behnam. “Bridging the Gap between Social Acceptance and Ethical Acceptability.” Risk Analysis 37, no. 10 (2017): 1817–27. “Intergenerational Risks of Nuclear Energy.” In Handbook of Risk Theory. Epistemology, Decision Theory, Ethics and Social Implications of Risk, edited by S. Roeser, R. Hillerbrand, P. Sandin, and M. Peterson, 295–318. Dordrecht: Springer, 2012. “Justice and Good Governance in Nuclear Disasters.” In Ethics and Law for Chemical, Biological, Radiological, Nuclear & Explosive Crises, edited by D. O’Mathúna and I. de Miguel Beriain, 65–74. The International Library of Ethics, Law and Technology. Cham: Springer, 2019.


“Moral Dilemmas of Uranium and Thorium Fuel Cycles.” In Social and Ethical Aspects of Radiation Risk Management, edited by D. Oughton and S. O. Hansson, 259–80. Amsterdam: Elsevier, 2013. “The Morally Desirable Option for Nuclear Power Production.” Philosophy & Technology 24, no. 2 (2011): 169–92. “Sustainable Energy and the Controversial Case of Nuclear Power.” In Sustainability Ethics: 5 Questions, edited by R. Raffaelle, W. Robison, and E. Selinger, 233–42. Copenhagen: Automatic Press, 2010. Taebi, Behnam, A. Correljé, E. Cuppen, M. Dignum, and U. Pesch. “Responsible Innovation and an Endorsement of Public Values: The Need for Interdisciplinary Research.” Journal of Responsible Innovation 1, no. 1 (2014): 118–24. Taebi, Behnam, Jeroen van den Hoven, and Stephanie J. Bird. “The Importance of Ethics in Modern Universities of Technology.” Science and Engineering Ethics 25, no. 6 (2019): 1625–32. https://doi.org/10.1007/s11948-019-00164-6. Taebi, Behnam, and A. C. Kadak. “Intergenerational Considerations Affecting the Future of Nuclear Power: Equity as a Framework for Assessing Fuel Cycles.” Risk Analysis 30, no. 9 (2010): 1341–62. Taebi, Behnam, and J. L. Kloosterman. “Design for Values in Nuclear Technology.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 805–29. Dordrecht: Springer, 2015. “To Recycle or Not to Recycle? An Intergenerational Approach to Nuclear Fuel Cycles.” Science and Engineering Ethics 14, no. 2 (2008): 177–200. Taebi, Behnam, J. Kwakkel, and C. Kermisch. “Governing Climate Risks in the Face of Normative Uncertainties.” WIREs Climate Change 11, no. 5 (2020): e666. https://doi.org/10.1002/wcc.666. Taebi, Behnam, S. Roeser, and I. Van de Poel. “The Ethics of Nuclear Power: Social Experiments, Intergenerational Justice, and Emotions.” Energy Policy 51 (2012): 202–06. Taebi, Behnam, and S. Roeser, eds. The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era. Cambridge: Cambridge University Press, 2015. Taebi, Behnam, and A. Safari. “On Effectiveness and Legitimacy of ‘Shaming’ as a Strategy for Combatting Climate Change.” Science and Engineering Ethics 23, no. 5 (2017): 1289–1306. Taebi, Behnam, and I. Van de Poel. “Socio-Technical Challenges of Nuclear Power Production and Waste Disposal in the Post-Fukushima Era: Editors’ Overview.” Journal of Risk Research 18, no. 3 (2015): 267–72.


Thaler, R. H., and C. R. Sunstein. Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT, and London: Yale University Press, 2008. Thompson, Dennis F. “Moral Responsibility of Public Officials: The Problem of Many Hands.” American Political Science Review 74, no. 4 (December, 1980): 905–16. https://doi.org/10.2307/1954312. Tomei, Julia. “The Sustainability of Sugarcane-Ethanol Systems in Guatemala: Land, Labour and Law.” In “Implementing Sustainable Bioenergy Systems: Insights from the 2014 RCUK International Bioenergy Conference.” Special issue, Biomass and Bioenergy 82 (November 1, 2015): 94–100. https://doi.org/10 .1016/j.biombioe.2015.05.018. Tomei, Julia, and Rocio Diaz-Chavez. “Guatemala.” In Sustainable Development of Biofuels in Latin America and the Caribbean, edited by Barry D. Solomon and Robert Bailis, 179–201. New York: Springer, 2014. https://doi.org/10.1007/ 978-1-4614-9275-7_8. “Trump Dismisses US Climate Change Report.” US & Canada, BBC News, November 26, 2018. www.bbc.com/news/world-us-canada-46351940. Tuazon, D., and E. Gnansounou. “Towards an Integrated Sustainability Assessment of Biorefineries.” In Life-Cycle Assessment of Biorefineries, edited by Edgard Gnansounou and Ashok Pandey, 259–301. Amsterdam: Elsevier, 2017. https://doi.org/10.1016/B978–0-444-63585-3.00010-3. Van de Poel, I. “Can We Design for Well-Being?” In The Good Life in a Technological Age, edited by Philip Brey, Adam Briggle, and Edward Spence, 295–306. New York: Routledge, 2012. “Changing Technologies: A Comparative Study of Eight Processes of Transformation of Technological Regimes” (PhD dissertation, University of Twente, 1998). “A Coherentist View on the Relation between Social Acceptance and Moral Acceptability of Technology.” In Philosophy of Technology after the Empirical Turn, edited by M. Franssen, P. E. Vermaas, P. Kroes, and A. W. M. Meijers, 177–93. Philosophy of Engineering and Technology, vol. 23. Switzerland: Springer, 2016. https://doi.org/10.1007/978-3-319-33717-3_11. “Conflicting Values in Design for Values.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 89–116. Dordrecht: Springer, 2015. “Design for Value Change.” Ethics and Information Technology (June 26, 2018): 1–5. https://doi.org/10.1007/s10676–018-9461-9. “An Ethical Framework for Evaluating Experimental Technology.” Science and Engineering Ethics 22, no. 3 (2016): 667–86. https://doi.org/10.1007/s11948– 015-9724-3.


“Morally Experimenting with Nuclear Energy.” In The Ethics of Nuclear Energy: Risk, Justice and Democracy in the Post-Fukushima Era, edited by B. Taebi and S. Roeser, 179–99. Cambridge: Cambridge University Press, 2015. “Nuclear Energy as a Social Experiment.” Ethics, Policy & Environment 14, no. 3 (2011): 285–90. “Translating Values into Design Requirements.” In Philosophy and Engineering: Reflections on Practice, Principles and Process, edited by D. P. Michelfelder, N. McCarthy, and D. E. Goldberg, 253–66. Dordrecht: Springer, 2014. “Values in Engineering Design.” In Philosophy of Technology and Engineering Sciences, edited by A. Meijer, 973–1006. Amsterdam: Elsevier, 2009. Van de Poel, I., and Z. Robaey. “Safe-by-Design: From Safety to Responsibility.” NanoEthics 11, no. 3 (December 1, 2017): 297–306. https://doi.org/10.1007/ s11569–017-0301-x. Van de Poel, I., and L. M. M. Royakkers. Ethics, Technology and Engineering. An Introduction. Chichester, West Sussex: Wiley-Blackwell, 2011. Van de Poel, I., L. Royakkers, and S. D. Zwart. “Introduction.” In Moral Responsibility and the Problem of Many Hands, edited by I. Van de Poel, L. Royakkers, and S. D. Zwart, 1–11. New York and London: Routledge, 2015. Van de Poel, I., L. Royakkers, and S. D. Zwart, eds. Moral Responsibility and the Problem of Many Hands. New York and London: Routledge, 2015. Van den Hoven, J. “Responsible Innovation: A New Look at Technology and Ethics.” In Responsible Innovation 1: Innovative Solutions for Global Issues, edited by J. van den Hoven, N. Doorn, T. Swierstra, B. J. Koops, and H. Romijn, 3–13. Cham: Springer, 2014. “Value Sensitive Design and Responsible Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 75–83. Chichester: John Wiley and Sons, 2013. Van den Hoven, J., K. Jacob, L. Nielsen, F. Roure, L. Rudze, and J. Stilgoe. Options for Strengthening Responsible Research and Innovation: Report of the Expert Group on the State of Art in Europe on Responsible Research and Innovation. Brussels: European Commission, 2013. Van den Hoven, J., G.-J. Lokhorst, and I. Van de Poel. “Engineering and the Problem of Moral Overload.” Science and Engineering Ethics 18, no. 1 (2012): 143–55. https://doi.org/10.1007/s11948–011-9277-z. Van den Hoven, J., P. Vermaas, and I. Van de Poel. “Design for Values: An Introduction.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 1–6. Dordrecht: Springer, 2015.


Van den Hoven, J., P. Vermaas, and I. Van de Poel,, eds. Handbook of Ethics and Values in Technological Design: Sources, Theory, Values and Application Domains. Dordrecht: Springer, 2015. Van Gelder, P. “On the Risk-Informed Regulation in the Safety against External Hazards.” In Handbook of Safety Principles, edited by N. Möller, S. O. Hansson, J. E. Holmberg, and C. Rollenhagen, 417–33. Hoboken, NJ: John Wiley & Sons Inc., 2018. Van Wee, B. “How Suitable Is CBA for the Ex-ante Evaluation of Transport Projects and Policies? A Discussion from the Perspective of Ethics.” Transport Policy 19, no. 1 (January 1, 2012): 1–7. https://doi.org/10.1016/j .tranpol.2011.07.001. Transport and Ethics: Ethics and the Evaluation of Transport Policies and Projects. Cheltenham and Northampton: Edward Elgar, 2011. Vasen, F. “Responsible Innovation in Developing Countries: An Enlarged Agenda.” In Responsible Innovation 3: A European Agenda?, edited by L. Asveld, R. van Dam-Mieras, T. Swierstra, S. Lavrijssen, K. Linse, and J. van den Hoven, 93–109. Cham: Springer, 2017. Verbeek, P. P. Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press, 2011. Voiklis, John, Boyoung Kim, Corey Cusimano, and Bertram F. Malle. “Moral Judgments of Human vs. Robot Agents.” In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 775–80, 2016. https://doi.org/10.1109/ROMAN.2016.7745207. Volkswagen. Volkswagen Group Code of Conduct. Version 10/2017. Wolfsburg, Germany: Volkswagen, 2017. Von Schomberg, R. Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields. Brussels: European Commission, 2011. “A Vision for Responsible Research and Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 51–74. Chichester: John Wiley & Sons, 2013. Walker, G. “Beyond Distribution and Proximity: Exploring the Multiple Spatialities of Environmental Justice.” Antipode 41, no. 4 (September 1, 2009): 614–36. https://doi.org/10.1111/j.1467-8330.2009.00691.x. Wallach, W., and C. Allen. Moral Machines. Teaching Robots Right from Wrong. Oxford and New York: Oxford University Press, 2009. Wang, Qian, and Ping Yan. “Development of Ethics Education in Science and Technology in Technical Universities in China.” Science and Engineering Ethics 25, no. 6 (2019): 1721–33. https://doi.org/10.1007/s11948–019-00156-6.


Watson, M., and H. Bulkeley. “Just Waste? Municipal Waste Management and the Politics of Environmental Justice.” Local Environment 10, no. 4 (2005): 411–26. Weil, V. M. “Professional Standards: Can They Shape Practice in an International Context?” Science and Engineering Ethics 4, no. 3 (1998): 303–14. https://doi.org/ 10.1007/s11948-998-0022-1. “The Rise of Engineering Ethics.” Technology in Society 6, no. 4 (January 1, 1984): 341–45. https://doi.org/10.1016/0160-791X(84)90028-9. Weinberg, A. M., and I. Spiewak. “Inherently Safe Reactors and a Second Nuclear Era.” Science 224 (1984): 1398–1402. Weinberg, Joe, and Ryan Bakker. “Let Them Eat Cake: Food Prices, Domestic Policy and Social Unrest.” Conflict Management and Peace Science 32, no. 3 (2015): 309–26. Wever, R., and J. Vogtländer. “Design for the Value of Sustainability.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. Vermaas, and I. Van de Poel, 513–49. Dordrecht: Springer, 2015. White, Geoff. “Police ‘Miss’ Chances to Improve Face Tech.” Technology, BBC News, May 13, 2019. www.bbc.com/news/technology-48222017. Wigley, D. C., and K. Shrader-Frechette. “Environmental Justice: A Louisiana Case Study.” Journal of Agricultural and Environmental Ethics 9, no. 1 (1996): 61–82. Willsher, Kim. “France Abandons Plan for €580m Airport and Orders Squatters Off Site.” World News, Guardian, January 17, 2018. www.theguardian.com/ world/2018/jan/17/france-abandons-plan-for-580m-airport-in-west-ofcountry. Winner, Langdon. “Do Artifacts Have Politics?” Daedalus 109, no. 1 (1980): 121–36. Wong, Loong. “Revisiting Rights and Responsibility: The Case of Bhopal.” Social Responsibility Journal 4, nos. 1–2 (2008): 143–57. Wong, P.-H. “Global Engineering Ethics.” In Routledge Handbook of Philosophy of Engineering, edited by D. M. Michelfelder and N. Doorn, 620–29. New York: Routledge, 2021. “Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?” Journal of Responsible Innovation 3, no. 2 (May 3, 2016): 154–68. https://doi.org/10.1080/ 23299460.2016.1216709. World Commission on Environment and Development. Our Common Future. Oxford: World Commission on Environment and Development, 1987. Wüstenhagen, R., M. Wolsink, and M. J. Bürer. “Social Acceptance of Renewable Energy Innovation: An Introduction to the Concept.” Energy Policy 35, no. 5 (2007): 2683–91.


Wynsberghe, Aimee van, and Scott Robbins. “Critiquing the Reasons for Making Artificial Moral Agents.” Science and Engineering Ethics 25, no. 3 (2019): 719–35. Yap, B. “Blowback over Japanese Plan to Reuse Tainted Soil From Fukushima.” Bloomberg Environment, June 14, 2018. www.bna.com/blowback-japanese-plan-n73014476527/. Zhu, Q. “Engineering Ethics Studies in China: Dialogue between Traditionalism and Modernism.” Engineering Studies 2, no. 2 (August 1, 2010): 85–107. https://doi.org/10.1080/19378629.2010.490271. Zhu, Q., and B. K. Jesiek. “Engineering Ethics in Global Context: Four Fundamental Approaches (Paper ID #19721).” In Proceedings of the 124th ASEE Annual Conference and Exposition, 2017. https://peer.asee.org/28252.


Index

9/11, 81
A4 highway, 99
Aarhus Convention, 38
AI. See artificial intelligence
alcohol interlock, 101–3
  CBA and, 102
algorithms, 83, 122, 125, 127, 129–31
American Society for Civil Engineers (ASCE), 173
artificial intelligence, 6
  agency, 19, 127, 133
  autonomy, 38, 90–91, 101, 125, 130
  bias, 19
  definition, 125
  responsibility gap, 133–34
  trustworthy, 132
Artificial Moral Agents (AMA), 126
autonomous vehicles, 119–25, 131, 136
  Cadillac, 117
  situation awareness, 117
  Tesla, 113, 117
  Uber, 111–16, 118, 135–37
  Waymo, 115
autonomous weapons, 6, 133–35
aviation, 116
Barry, Brian, 150–52
Bentham, Jeremy, 41, 61
Bhopal disaster, 174
biofuel, 141–49, 166
  energy ethics and, 166
  first generation (conventional), 143
  intergenerational justice and, 150
  intragenerational justice and, 150
  second generation (advanced), 144
  sustainability and, 145–49
  third generation (algae-based), 144
Boiling Water Reactor (BWR), 24
Brundtland report, 149
Cadillac, 117
CE Delft report on Nantes Airport, 55–59
chemical industry, 16
Chinese Academy of Engineering (CAE), 183
climate change, 14, 42, 65–66, 76, 108, 142, 144, 147, 177
Club of Rome, 149
Codes of Conduct. See codes of ethics
codes of ethics, 9–12, 172, 183–84
  corporate, 11
  professional, 3, 7, 9–11, 172–73, 175, 179, 183, 186
Collingridge Dilemma, 16, 42, 105
complacency
  autonomous systems and, 114, 116
Concorde, 53
consequentialism, 17, 41, 60, 63
Corporate Social Responsibility (CSR), 12–15, 175
Cost Benefit Analysis
  topic selection, 66
Cost Benefit Analysis (CBA), 17, 59–60
  calculating costs and benefits, 68–72
  dealing with problems of, 73–78
  identifying consequences, 67
  limitations, 63
  objections, 64–72
  roots in utilitarianism, 59–64


  Sven Ove Hansson on, 66, 68, 70
  value conflicts and, 102
crash optimization, 19, 119–23
cultural relativism, 185–87
Davis, Michael, 7, 11, 182
Deepwater Horizon, 11
Delta Plan, 99
deontology
  school of thinking, 38, 75, 120
  test, 75
Design for Values (DfV), 18, 46, 94–98, 101, 109, 116, 135, 154, 161
Dieselgate, 1–3, 12
  design phase, 18
Dutch shale gas controversy, 66, 106–9
economic viability, 161, 167
energy
  ethics, 163, 165
  sustainable technologies, 37
engineer
  profession, 8
  responsibilities, 7–15
engineering
  assessment and evaluation, 16
  biases about ethics and, 4–7
  intergenerational thinking, 19, 41
  law and, 4
  license, 180–81
  moral ideal, 9
  moral issues, 4
  qualifications, 180
  safety, 43
engineering corporations
  responsibilities, 12–15
engineering design
  designing out the conflict, 18, 98–100
  ethical issues, 18
  neutrality thesis, 87, 90
  nudging, 87
  values, 91
engineering ethics
  diversification, 20, 182, 185–88
  education, 179, 188
  globalization, 20, 178–85
  international context, 174
  Iran, 171–74
  Western vs. non-Western, 174–78
engineering practice, xii
  assessment method, xii
  ethics, 15–20
  macro-ethical issues, xiv
Engineer’s Creed, 9–11
environmental benevolence, 154, 156–58, 165, 167
Environmental Protection Agency (EPA), 1, 70
ethics
  artificial intelligence and, 125–37
  engineer and, 7–12
  engineering corporations and, 12–15
  engineering practice and, 15–20
  moral brake on innovation, xi, 6
  non-binary, xii
  nuclear energy and, 7
  risk, 85
  technological risk, 37
  technology transfer, 19, 174–76, 178–79
ethics up-front
  definition of approach, xii, 15
facial recognition software, 97
Ford Pinto, 5
fossil fuel, 142, 148, 177
fracking, 99–105
Fukushima Daiichi, 23–36, 38–39, 45, 160, 163
  Boiling Water Reactor (BWR), 24
  informed consent, 38
  radiation, 24–26, 38–39
  radiation protection, 25, 75, 157
global food crisis, 141–47
Grand Ouest airport, 53–59, 65, 67, 69, 77
greenwashing, 14, 19
Guatemala, 141–44, 146, 150
Hansson, Sven Ove, 40, 61, 68, 70
High Level Waste, 158–60
Hippocratic Oath, 9


incommensurability, 70, 74, 77
India, 83, 174
Indirect Land Use Change (ILUC), 146
informed consent, 38
  critique of, 38–40
innovation shift, 179
intergenerational dilemmas, 158–62
intergenerational justice, 42, 150, 165
intergenerational neutrality, 72
Intergovernmental Panel on Climate Change (IPCC), 50
International Atomic Energy Agency, 155–57
Iran
  earthquake, 169
  engineering ethics in, 171–74
Iranian Construction Engineering Organization (IRCEO), 169–73
  Mehr Housing Plan, 169, 171, 173
Japan. See Fukushima Daiichi
justice
  distributive, 42, 164
  intergenerational, 42, 150, 165
  intragenerational, 150, 153, 162–66
  social, 150–53, 165
  spatial, 41, 150–51
  temporal, 41, 71, 73, 76, 150–53
Justification Principle, 75
Kant, Immanuel, 38
killer robots. See autonomous weapons
legislation lagging behind technology, 5
Light Water Reactor (LWR), 155
machine ethics, 126
  objections, 126
Many Hands, Problem of, 11, 111
Meaningful Human Control, 132–36
Mill, John Stuart, 38, 61, 101
Moses, Robert, 86
Multi-Criteria Analysis, 76–78
Nantes Atlantique airport, 54–55, 65, 67–68, 77
National Society for Professional Engineers (NSPE), 9–10
National Traffic Safety Board (NTSB), 112, 118
no harm requirement, 151–54
normal accidents, 35, 51
nuclear energy, 29, 36, 163–68
  closed fuel cycle, 158–61
  ethics, 163–66
  open fuel cycle, 154–58
  Partitioning & Transmutation (P&T), 162
  reprocessing, 158–61
  sustainable, 153–55
nuclear power reactors, 45–46
  Boiling Water Reactor (BWR), 24
  Light Water Reactor (LWR), 155
nuclear waste, 155
nuclear weapons, 156, 160
operationalization, 92, 94
Optimization Principle, 75
Organization for Economic Co-operation and Development (OECD), 14, 50, 73
Paradox of Safety, 31
Perrow, Charles, 35, 52
persuasive technology, 87–90, 117
  criticism of, 89
Poel, Ibo van de, 11, 47, 51, 112
policy-making, 28–29, 33, 60, 168
Precautionary Principle (PP), 6, 48–50
  Per Sandin’s approach, 49
privacy, 82–84
Probabilistic Risk Assessment (PRA), 27, 29–30, 44
Probabilistic Safety Assessment (PSA). See Probabilistic Risk Assessment (PRA)
problem of distribution, 73–74
profession
  definitions, 8
public health, 74, 101, 152, 156. See also safety


racist overpasses, 86
radiation, 24–27, 38–39, 85, 154–57, 164
  protection, 75, 157
Rasmussen Report, 29
reliability assessments
  limitations, 33
reprocessing, 158–61
resilience engineering, 50
resource durability, 157–61
responsibility, 111
  categories, 112
  gap, 133
  Many Hands, Problem of, 11, 111
Responsible Innovation (RI), 103, 108
Responsible Research and Innovation (RRI), 104–5, 134
Rio Declaration on Environment and Development, 48
risk
  probabilistic vs. deterministic approach, 44
  taxonomy, 42
Risk Analysis, 23–51
risk assessment, 27–28, 50
  consequence-based, 41–42
risk assessment methods, 16–17, 27, 118
  Human Error Probability (HEP), 33
  limitations, 36
risk reduction, 43
Safe-by-Design (SbD), 46, 127
safety, 84, 89
  autonomy vs., 101
  paradox, 31
  value, 91
scenario uncertainty, 43, 47
science-policy interface, 28–30
seatbelts, 89, 101
security
  airport, 81–85
social acceptance vs. ethical acceptability, 36–37
Social Control of Technology. See Collingridge Dilemma
Social Cost Benefit Analysis (SCBA), 54–60
station blackout, 23, 32
sustainability
  holistic assessment, 153
  value of, 91
technological risk, 36
  uncertainty, 42, 45–48
Technology Assessment (TA), 97, 106
TEPCO, 28, 31
Tesla (car), 113, 117
Three Mile Island, 29
Trolley Problem, 119–24
  autonomous vehicles and, 121–24
  objections, 120–22
Uber (car), 111–16, 118, 135–37. See also autonomous vehicles
  software and false positives, 113–16
UN Global Compact (UNGC), 13
uncertainty, 43, 63
  technological risk, 42, 48
utilitarianism, 17, 59–64, 75
Value Hierarchy, 94–96, 107
values
  balancing, 18, 103
  conflicts, 96–103
  definition, 91
Value-Sensitive Design (VSD), 92–94, 98
Volkswagen (VW), 1, 12–14
Waymo, 115
Waze, 132
whole-body scanners, 81–87, 93, 98, 104
  privacy filters, 84, 99
  techniques, 82
Willingness to Accept (WTA), 70, 74
Willingness to Pay (WTP), 71–72, 76
Wingspread Statement, 49
Winner, Langdon, 86
X-ray scanners, 81, 85, 95, 104
