Connected and Automated Vehicles: Integrating Engineering and Ethics (Studies in Applied Philosophy, Epistemology and Rational Ethics, 67) [1st ed. 2023] 3031399900, 9783031399909

This book reports on theoretical and practical analyses of the ethical challenges connected to driving automation.


English · 209 [204] pages · 2023


Table of contents:
Foreword
Introduction
Contents
Minding the Gap(s): Different Kinds of Responsibility Gaps Related to Autonomous Vehicles and How to Fill Them
1 Introduction
2 What is a Self-driving Car and What is Responsibility?
3 Responsibility Gaps in General
4 Responsibility Gaps Related to Self-driving Cars
5 Should We Mind the Gaps?
6 Suggestions About How/Whether Responsibility Gaps Can Be Filled
7 Concluding Remarks
References
Designing Driving Automation for Human Autonomy: Self-determination, the Good Life, and Social Deliberation
1 Introduction
2 Ethics of Technology and Driving Automation
3 Human Autonomy and Driving Automation
4 One Concept, Three Dimensions
5 Driving Automation and the Self-determination of Driving Tasks
6 Driving Automation and the Good Life
7 Driving Automation and Independent Policy-Making
8 Conclusions
References
Contextual Challenges to Explainable Driving Automation: The Case of Machine Perception
1 Introduction
2 Explainability and Driving Automation: An Ethical Perspective
3 The Contextual Nature of Explainability
3.1 Content
3.2 CAV Stakeholders
3.3 Explainability in Driving Automation: A Working Definition
4 Machine Perception in Driving Automation
5 Contextual, Human-Centred Explainability and Machine Perception
5.1 CAV Users
5.2 Developers
5.3 Legal Professionals
6 Conclusion
References
Design for Inclusivity in Driving Automation: Theoretical and Practical Challenges to Human-Machine Interactions and Interface Design
1 Introduction
2 Inclusivity, Transportation, and Driving Automation
3 From Automation to Human–Machine Interactions
4 From “Agent” to “Agents”
5 Design for Inclusivity: A Case Study
5.1 Preliminary Methodological Considerations
5.2 Case Study: Setting the Stage
5.3 Physical Impairments
5.4 Cognitive Impairments and Diversity
6 Conclusion and Future Outlooks
References
From Prototypes to Products: The Need for Early Interdisciplinary Design
1 Introduction
2 Prototypes, Products, and Design Ethics
3 A Case Study
3.1 Setting the Stage: Driving Automation and Communication
3.2 Detecting Collision Risk at Intersections: A Hypothetical Scenario
3.3 From Prototypes to Products: Ethical Challenges
4 Start Early! Insights from Design Ethics
5 Conclusion
References
Gaming the Driving System: On Interaction Attacks Against Connected and Automated Vehicles
1 Introduction
2 Cybersecurity in Driving Automation: Hacking, Sensor Interference, and Interaction Attacks
3 Safety-Oriented Trajectory Planning
3.1 Robust Approaches
3.2 Probabilistic Approaches
3.3 Trajectory Planning and Interaction Attacks
4 Possible Solutions and Related Hurdles
4.1 Secrecy
4.2 Diversity
5 Conclusion
References
Automated Driving Without Ethics: Meaning, Design and Real-World Implementation
1 Introduction
2 The Semantics of Automated Driving
2.1 The Problem with Machine Ethics
2.2 Addressing the AV’s Problem
3 Implementing Ethically-Aware Decision-Making
3.1 The Scope of Ethical Deliberation
3.2 Ethical Valence Theory
3.3 The Technical Implementation of Ethical Decision-Making
3.4 Deliberation in Dilemma Situations
4 Contractarian Approaches
5 Utilitarian Approaches
6 Egalitarian Approaches
7 Conclusion
References
Thinking of Autonomous Vehicles Ideally
1 Introduction
2 The Moral Machine Experiment from an Empirical Perspective
3 The Moral Machine Experiment from an Ideal Perspective
4 Making the Ideal Perspective Work
5 Ideality Guiding (Harsh) Reality
6 Emerging Algorithmic Technologies, Globalisation and Super-Abstraction
References
Thinking About Innovation: The Case of Autonomous Vehicles
1 Introduction
2 Existing Taxonomies and Their Limitations
3 A New Typology of Technological Innovation
4 The Case of Autonomous Vehicles
5 Conclusion
References
Autonomous Vehicles, Artificial Intelligence, Risk and Colliding Narratives
1 Introduction
2 Part One: Framing the AV Narrative as a Business Tool and Strategy
2.1 The Digital Narrative Turn
2.2 The AV Narrative Context
2.3 Framing Narrative Value
2.4 Growing AV Narrative Tensions
2.5 AV Narrative and the Question of Risk Communication
2.6 Narrative Risk
3 Part Two: The Relationality of the AV and AI Narratives
3.1 The AI Creator Narrative
3.2 The AI Prosperity Narrative
3.3 The AI Creator Counter Narrative
3.4 The AI Unknown Impacts Counter Narrative: Science Technology Studies, Technology Ethics, AI Ethics, AI Risk and AV Ethics
4 Part Three: Deconstructing the Human Intelligence Innovation Narrative. A World Built on Transportation and Human-Driving Intelligence
4.1 Unpacking AV Risk Narrative
4.2 Narrative Versus AV Reality
4.3 Anticipating and Investigating AV Impacts Research
5 Concluding Remarks
References

Studies in Applied Philosophy, Epistemology and Rational Ethics

Fabio Fossa · Federico Cheli, Editors

Connected and Automated Vehicles: Integrating Engineering and Ethics

Studies in Applied Philosophy, Epistemology and Rational Ethics, Volume 67

Editor-in-Chief Lorenzo Magnani, Department of Humanities, Philosophy Section, University of Pavia, Pavia, Italy Editorial Board Atocha Aliseda, Universidad Nacional Autónoma de México (UNAM), Mexico, Mexico Giuseppe Longo, CNRS - Ecole Normale Supérieure, Centre Cavailles, Paris, France Chris Sinha, University of East Anglia, Norwich, UK Paul Thagard, University of Waterloo, Waterloo, Canada John Woods, University of British Columbia, Vancouver, Canada Advisory Editors Akinori Abe, Faculty of Letters, Chiba University, Chiba, Japan Hanne Andersen, Centre for Science Studies, Aarhus University, Aarhus, Denmark Selene Arfini, Department of Humanities, Philosophy Section and Computational Philosophy Laboratory, University of Pavia, Pavia, Italy Cristina Barés-Gómez, Fac Filosofía, Lógica F. de la Ciencia, Universidad de Sevilla, Sevilla, Sevilla, Spain Otávio Bueno, Department of Philosophy, University of Miami, Coral Gables, FL, USA Gustavo Cevolani, IMT Institute for Advanced Studies Lucca, LUCCA, Lucca, Italy Daniele Chiffi, Department of Architecture and Urban Studies, Politecnico di Milano, MILANO, Italy Sara Dellantonio, Dipartimento di Scienze, Università degli Studi di Trento, Rovereto (TN), Italy Gordana Dodig Crnkovic, Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg, Sweden Matthieu Fontaine, Department of Philosophy, Logic, and Philosophy of Science, University of Seville, Seville, Sevilla, Spain Michel Ghins, Institut supérieur de philosophie, Université Catholique de Louvain, Louvain-la-Neuve, Belgium Marcello Guarini, Department of Philosophy, Univeristy of Windsor, Windsor, ON, Canada Ricardo Gudwin, Computer Engineering and Industrial Automation, State University of Campinas, Campinas, Brazil Albrecht Heeffer, Sarton Centre for the History of Science, Ghent University, Gent, Belgium Mireille Hildebrandt, Erasmus MC, Rotterdam, The Netherlands Michael H. G. Hoffmann, School of Public Policy, Georgia Institute of Technology, Atlanta, GA, USA Jeroen van den Hoven, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, Zuid-Holland, The Netherlands Gerhard Minnameier, Goethe-Universität Frankfurt am Main, Frankfurt, Hessen, Germany Yukio Ohsawa, School of Engineering, The University of Tokyo, Tokyo, Japan Sami Paavola, Faculty of Educational Sciences, University of Helsinki, HELSINKI, Finland Woosuk Park, Humanities and Social Sciences, KAIST, Daejeon, Korea (Republic of) Alfredo Pereira, Institute of Biosciences, Universidade de São Paulo, São Paulo, Brazil Luís Moniz Pereira, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Caparica, Portugal Ahti-Veikko Pietarinen, Department of Philosophy, University of Helsinki, Helsinki, Finland Demetris Portides, Department of Classics and Philosophy, University of Cyprus, Nicosia, Cyprus Dagmar Provijn, Centre for Logic and Philosophy, Ghent University, Ghent, Belgium Joao Queiroz, Institute of Arts and Design, Universidade Federal de Juiz de Fora, Juiz de Fora, Brazil Athanassios Raftopoulos, Psychology, University of Cyprus, Nicosia, Cyprus Ferdie Rivera, Department of Mathematics and Statistics, San Jose State University, San Jose, CA, USA Colin T. 
Schmidt, Ensam ParisTech and Le Mans University, Laval, France Gerhard Schurz, Department of Philosophy, DCLPS, Heinrich Heine University, Düsseldorf, Nordrhein-Westfalen, Germany Nora Schwartz, Department of Humanities, Universidad de Buenos Aires, Buenos Aires, Argentina Cameron Shelley, Centre for Society, Technology and Values, University of Waterloo, Waterloo, ON, Canada Frederik Stjernfelt, Center for Semiotics, University of Aarhus, Aarhus C, Denmark Mauricio Suárez, Logic and Philosophy of Science, Complutense University, Madrid, Spain Peter-Paul Verbeek, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands Riccardo Viale, Economics, Management and Statistics, University of Milano-Bicocca, Milan, Italy Marion Vorms, Pantheon-Sorbonne University, Paris, France Donna E. West, Department of Modern Languages, State University of New York College, Cortland, NY, USA

Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE) publishes new developments and advances in all the fields of philosophy, epistemology, and ethics, bringing them together with a cluster of scientific disciplines and technological outcomes: ranging from computer science to life sciences, from economics, law, and education to engineering, logic, and mathematics, from medicine to physics, human sciences, and politics. The series aims at covering all the challenging philosophical and ethical themes of contemporary society, making them appropriately applicable to contemporary theoretical and practical problems, impasses, controversies, and conflicts. Book proposals can be sent to the Series Editor, Prof. Lorenzo Magnani at [email protected]. Please include: A short synopsis of the book or the introduction chapter A Table of Contents The CV of the lead author/editor(s) For more information, please contact the Editor-in-Chief at [email protected]. Our scientific and technological era has offered “new” topics to all areas of philosophy and ethics – for instance concerning scientific rationality, creativity, human and artificial intelligence, social and folk epistemology, ordinary reasoning, cognitive niches and cultural evolution, ecological crisis, ecologically situated rationality, consciousness, freedom and responsibility, human identity and uniqueness, cooperation, altruism, intersubjectivity and empathy, spirituality, violence. The impact of such topics has been mainly undermined by contemporary cultural settings, whereas they should increase the demand of interdisciplinary applied knowledge and fresh and original understanding. In turn, traditional philosophical and ethical themes have been profoundly affected and transformed as well: they should be further examined as embedded and applied within their scientific and technological environments so to update their received and often old-fashioned disciplinary treatment and appeal. Applying philosophy individuates therefore a new research commitment for the 21st century, focused on the main problems of recent methodological, logical, epistemological, and cognitive aspects of modeling activities employed both in intellectual and scientific discovery, and in technological innovation, including the computational tools intertwined with such practices, to understand them in a wide and integrated perspective. Studies in Applied Philosophy, Epistemology and Rational Ethics means to demonstrate the contemporary practical relevance of this novel philosophical approach and thus to provide a home for monographs, lecture notes, selected contributions from specialized conferences and workshops as well as selected Ph.D. theses. The series welcomes contributions from philosophers as well as from scientists, engineers, and intellectuals interested in showing how applying philosophy can increase knowledge about our current world. Indexed by SCOPUS, zbMATH, SCImago, DBLP. All books published in the series are submitted for consideration in Web of Science.

Fabio Fossa · Federico Cheli Editors

Connected and Automated Vehicles: Integrating Engineering and Ethics

Editors: Fabio Fossa, Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy

Federico Cheli, Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy

ISSN 2192-6255 ISSN 2192-6263 (electronic) Studies in Applied Philosophy, Epistemology and Rational Ethics ISBN 978-3-031-39990-9 ISBN 978-3-031-39991-6 (eBook) https://doi.org/10.1007/978-3-031-39991-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Foreword

Transportation has always been deeply entangled with social and ethical values. Safety as the protection and promotion of road users' physical integrity has long been acknowledged as a pivotal objective in transport engineering. As a result, and even if much remains to be achieved, efforts have been made to embed it into both technologies and engineering practices. But the social and ethical relevance of transportation extends well beyond safety. Social justice, equality, and fairness are also deeply influenced by access to transportation options, with tangible effects on well-being and quality of life. The worrisome environmental effects of transportation in terms of emissions, pollution, consumption of non-renewable resources, land use, and so on also raise thorny ethical, social, and regulatory challenges. Addressing transport in isolation from these and similar problems is no longer acceptable. New approaches aimed at integrating ethical and technical expertise are needed to face the complex challenges of future mobility.

Technological innovation is a critical factor in the endeavour of transitioning towards a transportation system that better satisfies relevant social needs and promotes social well-being. Innovation offers unprecedented opportunities to tackle this challenge, while at the same time raising new risks that need to be carefully identified and handled. Driving automation fits this framework perfectly. On the one hand, much of the public and financial attention that the sector has attracted in recent years has also been justified by insisting on the ethical promises of the technology in terms of safety, environmental sustainability, accessibility, and traffic efficiency—so that a reference to ethical values and social well-being, in a sense, has always been part of the discourse. On the other hand, great expectations have thus been fuelled, and meeting them will require much more than solving the technical puzzles of driving automation. Moreover, new risks must be carefully determined and measured, so as to responsibly foresee their impacts and minimise their harmful effects.

Here as elsewhere, engineers are called to face issues that they are only partially used—or trained—to deal with. Even though social and ethical values are inextricably entangled with transport technologies and infrastructure, engineering curricula do not normally dedicate much space and resources to building the skills necessary to
identify socio-ethical issues and manage them responsibly. This is not just an apparent problem. It is also a missed opportunity. Acknowledging the relevance of socioethical challenges to the design of resilient technologies and providing engineers with the interdisciplinary support necessary to become aware of and manage them could contribute greatly to the socially beneficial development, deployment, and use of transportation technologies. Much there is to gain from the integration of engineering and ethics. Sure enough, significant efforts are necessary to make the integration of engineering and ethics work on both individual and institutional levels. Many barriers need to be overcome—just think of the profound diversity in languages, methodologies, styles of thinking, skills, expertise, research contexts, and cultures that differentiate engineers from philosophers and social scientists. Even when the obstacles are successfully tamed, the problems to be addressed are extremely complex and multilayered. Building a shared framework through which to rigorously and effectively deal with such thorny issues is certainly difficult. It is definitely a matter that requires plenty of energy, time, and resources. It is, however, also a matter of responsibility. Responsibility for building better technologies, and for building technologies better. Politecnico di Milano believes that this responsibility must not be waived. On the contrary, we have an obligation to acknowledge and practice it. This book showcases the commitment of Politecnico di Milano to fostering a culture of scientific innovation that puts individual, social, and environmental values at the forefront of its mission. As part of a wider project intended to pursue this goal, and with the support of Fondazione Silvio Tronchetti Provera, a research position in ethics of driving automation has been funded at our Department of Mechanical Engineering. Thanks to this, engineering researchers working on different aspects of driving automation were given the opportunity to interact with philosophers and social scientists with the aim of bringing technological and socio-ethical requirements closer to each other and explore the many challenges of designing driving automation technologies in alignment with relevant ethical values. Several chapters of this volume testify to both the importance and complexity of interdisciplinary collaboration in the field of driving automation. Moreover, contributions by renowned international experts enlarge and enrich the debate, showing the relevance of the issues here brought into focus and setting the stage for a cutting-edge dialogue on such timely concerns. The opportunities opened by these studies are as clear as the challenges that the future of transportation poses. Bringing engineering and human sciences closer to each other is fundamental to living up to the responsibilities associated with technological innovation. The new, hybrid culture we need to navigate through the agitated water of the technological age cannot but rest on interdisciplinary cooperation. Let us build it together. May 2023

Ferruccio Resta
Politecnico di Milano
Milan, Italy

Introduction

Connected and Automated Vehicles (CAVs) are often presented as an important step towards safer and more sustainable mobility. Thanks to this innovative technology, for instance, the number of road injuries and casualties is expected to be substantially reduced. In fact, road accidents are caused for the most part by inadequate human behaviours—such as negligence, carelessness, drunk driving, overtiredness, and so on—and driving automation could provide a solid solution to avoid human error. Also, driving automation is expected to allow for better traffic management, which might lead to both social and environmental benefits. Finally, CAVs revolutionise infrastructure needs, opening new possibilities to minimise the impact of transportation on nature and animal life. In sum, it seems there are strong ethical reasons in support of developing such technologies.

This does not mean, however, that CAVs come without any ethical risk. It would be even less appropriate to belittle or hide these risks so as not to undermine the good reputation of driving automation. To reap all its foreseen ethical benefits, it is instead crucial to carefully identify and clarify the risks raised by CAVs while thoroughly differentiating between ethically promising and worrisome deployment and adoption scenarios. Indeed, such a critical analysis is essential to anticipate and manage ethical concerns, develop effective regulatory frameworks, and establish best practices to minimise risks or, at least, mitigate their effects. Addressing ethical worries with commitment and care is fundamental to creating the conditions for the technology to be firmly, fairly, and optimally embedded in transportation systems. Even though such considerations might sound overly cautious, they are going to play a central role in the future of driving automation.

Integrating engineering and ethics—as our subheading reads—is a powerful means to secure an essential ingredient for sustainable innovation, i.e., justified user trust in technology. Earning justified social trust in the set of human actors involved in the design, development, production, validation, regulation (and so on) of CAVs will be crucial to foster their widespread adoption and, thus, reap the expected benefits while responsibly managing the related risks. A necessary step to get there is to think critically about the implications of the technology and to incorporate relevant ethical values in the scientific, technological, and social practices underlying the domain of driving automation.

Philosophy and applied ethics can provide valuable support in this respect. Joint efforts between engineers and philosophers are thus particularly recommended to pursue the goal of developing CAVs worthy of justified social trust. This book aims at exploring theoretical and ethical issues concerning driving automation from different perspectives, so as to provide a rich overview of such a timely subject matter. In doing so, it takes part in a lively debate that since the mid-2010s has been massively contributing to the identification, study, and discussion of the many ethical challenges raised by the automation of driving and the deployment of CAVs. As proof of its vitality, in the last few months alone, two volumes of collected essays—Autonomous Vehicle Ethics. The Trolley Problem and Beyond edited by R. Jenkins, D. Cerny, and T. Hribek for Oxford University Press; and Test-Driving the Future. Autonomous Vehicles and the Ethics of Technological Change edited by D. Michelfelder for Rowman and Littlefield—and a monograph—Ethics of Driving Automation. Artificial Agency and Human Values by F. Fossa for Springer—have been published, adding new voices and arguments to the dispute.

The distinguishing character of the present volume resides in the fact that it gathers several contributions authored by both philosophers and engineers. Its main objective is to foster a multidisciplinary approach according to which philosophy, ethics, and engineering are productively integrated, rather than just juxtaposed. Its scope reflects the belief that applied ethics issues such as those here under scrutiny can be appropriately discussed only in an interdisciplinary fashion committed to bridging gaps between theory and practice. The need for philosophers and engineers to work together in such a context is tangible. While engineers bring the necessary technical knowledge to the table, philosophers can help clarify relevant concepts and shed light on thorny ethical problems. Both ingredients are required to merge theory and practice and turn ethical discourses into actual efforts.

The contributions collected in this volume exhibit a two-fold, although deeply interwoven, nature. On the one hand, they present philosophical inquiries into the conceptual, scientific, social, and regulative relevance of driving automation, thus setting the stage for a wide critical appraisal of its socio-technical system. On the other hand, and against this background, they discuss more applied ethical issues concerning the design of CAVs, showcasing the results of interdisciplinary research teamwork aimed at tackling problems lying at the intersection of engineering and ethics. More particularly, the book addresses many of the challenges posed by the concrete application of the recent European guidelines advanced in the 2020 report Ethics of Connected and Automated Vehicles: Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility (https://op.europa.eu/en/publication-detail/-/publication/5014975b-fcb8-11ea-b44f-01aa75ed71a1/). Indeed, principles are rather abstract notions, and guidelines only go so far in translating them into more concrete measures. As a result, it is not always easy to grasp how to apply them in the everyday practical contexts of technological research and development, social deployment, and use. The risk, then, is for them to remain dead letters, words incapable of producing any
appreciable practical effect. What is even worse, one might feel morally satisfied just by drafting a list of ethical principles and asking stakeholders to comply with them. Practitioners, on the other hand, might react to the difficulty of translating ethical values and guidelines into best practices and technical requirements by dismissing the framework as altogether inapplicable or settling for ultimately inconsistent measures. The risk run here is to render the ethical effort ineffective, relegating it to superficial actions. The objective, on the contrary, is to substantially modify the social practices linked to the design, development, deployment, and use of CAVs. To do so, a clear understanding of the magnitude and quality of the challenges that lie ahead is the most necessary step. In line with the general objectives of the volume, and following the framework proposed in the European report, the book opens with in-depth discussions of the many difficulties that complicate the adoption of the ethical values relevant to driving automation in the practical contexts of the design, deployment, use, and regulation of CAVs. In the first chapter “Minding the Gap(s): Different Kinds of Responsibility Gaps Related to Autonomous Vehicles and How to Fill Them”, Sven Nyholm addresses the critical value of responsibility, analysing its multifaceted profile and tackling the worrisome problem of responsibility gaps. The second chapter “Designing Driving Automation for Human Autonomy: Self-determination, the Good Life, and Social Deliberation”, by Fabio Fossa and Filippo Santoni de Sio, focuses on the value of personal autonomy, identifying and discussing hurdles in its concretisation throughout various levels of driving automation and by reference to other noteworthy ethical values. The thorny task of implementing explainability into CAV perception systems is tackled in the third chapter “Contextual Challenges to Explainable Driving Automation: The Case of Machine Perception”, which presents the results of an interdisciplinary study by Matteo Matteucci, Simone Mentasti, Viola Schiaffonati, and Fabio Fossa. Subsequent chapters also report the outcomes of similar interdisciplinary research experiences. In the fourth chapter “Design for Inclusivity in Driving Automation: Theoretical and Practical Challenges to Human-Machine Interactions and Interface Design”, Selene Arfini, Pierstefano Bellani, Andrea Picardi, Ming Yan, Fabio Fossa, and Giandomenico Caruso consider the value of inclusivity and the many challenges that need to be proactively faced for its meaningful integration into driving automation technologies. The fifth chapter “From Prototypes to Products: The Need for Early Interdisciplinary Design”, by Stefano Arrigoni, Fabio Fossa, and Federico Cheli, discusses privacy and cybersecurity shortcomings that might emerge in the transition from prototyping to production of CAV communication systems, thus stressing the importance of engaging in ethically informed interdisciplinary practices since the very outset of the design process. In the sixth chapter “Gaming the Driving System: On Interaction Attacks Against Connected and Automated Vehicles”, Luca Paparusso, Fabio Fossa, and Francesco Braghin examine the cybersecurity risk of interaction attacks—i.e., potential situations where CAVs are intentionally designed to maliciously influence the driving behaviour of other CAVs, thus exerting indirect control over their operations— and consider possible countermeasures. 
The seventh chapter "Automated Driving Without Ethics: Meaning, Design and Real-World Implementation", authored by
Katherine Evans, Nelson de Moura, Stéphane Chauvier, and Raja Chatila, tackles the much-discussed problem of unavoidable collisions by proposing a machine ethics algorithm based on the Ethical Valence Theory—a theoretical perspective that seeks to fully acknowledge the relevance of contextual factors and frames the issue in terms of claim mitigation.

Applied research is fundamental to moving from theory to practice and fostering the integration of ethical values into driving technologies, but it represents only one side of the whole endeavour. More general, high-level inquiries into driving automation are all too necessary to inform practical work and regulatory angles. The last three chapters offer valuable insights in this regard, thus enriching the perspective on the ethics of CAVs developed in the volume. In the eighth chapter "Thinking of Autonomous Vehicles Ideally", Simona Chiodo criticises the empirical framing of the unavoidable collision problem and proposes an analysis of the issue by drawing on the philosophical tools of abstraction and idealisation. The ninth chapter "Thinking About Innovation: The Case of Autonomous Vehicles", by Daniele Chiffi and Luca Zanetti, discusses CAVs by reference to the literature on innovation and tries to determine to which category of innovative technology they belong. Finally, in the tenth chapter "Autonomous Vehicles, Artificial Intelligence, Risk and Colliding Narratives", Martin Cunneen takes a closer look at the ethical narratives in which CAVs have been embedded, clarifying the economic interests behind such characterisations and calling for a thorough criticism of their main components.

Taken together, we believe that the contributions collected in this volume point to two main claims. First, much work is still needed to set the right path to the integration of engineering and ethics in the domain of driving automation. Narratives that salute CAVs as game-changing technologies destined to drive us into a more ethical transportation future grossly underestimate the amount of engineering, social, and regulatory effort needed to convert promises into reality. Driving automation might very well offer valuable opportunities to steer the transportation domain towards less harmful directions and bring about the conditions for fairer, more inclusive mobility. However, seizing these opportunities requires much more than putting CAVs on the streets. Any ethical benefit of driving automation can only be achieved proactively by raising social awareness, promoting widespread involvement, and supporting stakeholders in its intentional pursuit. Second, the challenges to be faced in this process of theoretical awareness and practical endorsement are both massive and extremely intricate. Interdisciplinarity is key in this sense. Forming diverse research and design teams is a necessary condition to deal with these challenges at an adequate level of granularity. And yet, most chapters evidently raise more questions than answers, and applicability remains a puzzle to crack. Nevertheless, suggestions and insights on which to ground further inquiries are not missing either. Methodologies and research tools aimed at tackling its manifold aspects are proposed and preliminarily scrutinised, which opens promising avenues for future research and practice. Moreover, a disenchanted and thorough measurement of the complexity of the matter is also to be regarded as valuable information with reference to the wider socio-political goal of establishing
a more ethical transportation system—to which the contributions of CAV technologies are yet to be precisely determined. That being said, and however formidable the challenge of integrating engineering and ethics in the domain of driving automation might appear, a shared commitment to the task runs through the following pages and represents their most noticeable fil rouge. The interdisciplinary nature of the book corresponds to the interdisciplinarity of its intended audience, which comprises philosophers of technology, ethicists, engineers, developers, manufacturers, producers, mobility experts, regulators, politicians, and so on. Its approach is meant to increase the opportunity to exchange ideas, anticipate potential risks, bring ethical issues into focus, study their potential social impacts, and think of possible solutions already at the design stage. It is not to be forgotten that making driving automation a factor in the ethical improvement of the transport system is primarily a social and political mission—one to which engineers can contribute greatly. By introducing ethical reflection directly into the social venues where technological innovation unfolds, we might have one more promising chance to appropriately manage ethical issues in driving automation, thus earning justified social trust and securing the benefits that the adoption of CAVs is often supposed to yield. We hope that this book might serve as a useful tool to bridge ethics and engineering in this field, thus contributing to the great challenge of designing, developing, deploying, and using CAVs in sustainable ways.

The preparation of this volume would have been much more difficult without the help of many colleagues and friends. First, we would like to thank the authors who accepted our invitation to participate in the book project for their cooperation and thoughtfulness. We are grateful to Paolo Bory, Simona Chiodo, Francesco Corea, Shreias Kousik, Giulio Mecacci, Francesca Foffano, Guglielmo Papagni, Luca Possati, Steven Umbrello, and Mario Verdicchio for offering insightful comments on previous versions of the chapters. Many thanks also to the members of the Polimi META Research group—Simona Chiodo and Daniele Chiffi in particular—for encouraging the project since the very outset and keeping an attentive eye on its development. We would also like to extend our gratitude to the former Rector of Politecnico di Milano Ferruccio Resta for agreeing to contribute a foreword to this volume as a much-appreciated token of support to the mission of integrating engineering and ethics in our institution. Finally, we are deeply grateful to Lorenzo Magnani for accepting our book proposal with enthusiasm, and to Leontina De Cecco and Shakila Sundarraman for the editorial and administrative assistance throughout the publication process.

Milan, Italy

Fabio Fossa
Federico Cheli

Contents

Minding the Gap(s): Different Kinds of Responsibility Gaps Related to Autonomous Vehicles and How to Fill Them
Sven Nyholm, p. 1

Designing Driving Automation for Human Autonomy: Self-determination, the Good Life, and Social Deliberation
Filippo Santoni de Sio and Fabio Fossa, p. 19

Contextual Challenges to Explainable Driving Automation: The Case of Machine Perception
Matteo Matteucci, Simone Mentasti, Viola Schiaffonati, and Fabio Fossa, p. 37

Design for Inclusivity in Driving Automation: Theoretical and Practical Challenges to Human-Machine Interactions and Interface Design
Selene Arfini, Pierstefano Bellani, Andrea Picardi, Ming Yan, Fabio Fossa, and Giandomenico Caruso, p. 63

From Prototypes to Products: The Need for Early Interdisciplinary Design
Stefano Arrigoni, Fabio Fossa, and Federico Cheli, p. 87

Gaming the Driving System: On Interaction Attacks Against Connected and Automated Vehicles
Luca Paparusso, Fabio Fossa, and Francesco Braghin, p. 105

Automated Driving Without Ethics: Meaning, Design and Real-World Implementation
Katherine Evans, Nelson de Moura, Stéphane Chauvier, and Raja Chatila, p. 123

Thinking of Autonomous Vehicles Ideally
Simona Chiodo, p. 145

Thinking About Innovation: The Case of Autonomous Vehicles
Daniele Chiffi and Luca Zanetti, p. 161

Autonomous Vehicles, Artificial Intelligence, Risk and Colliding Narratives
Martin Cunneen, p. 175

Minding the Gap(s): Different Kinds of Responsibility Gaps Related to Autonomous Vehicles and How to Fill Them

Sven Nyholm

Abstract When autonomous vehicles crash and people are injured or killed, the question of who should be held responsible arises. Some commentators have worried that there might be gaps in responsibility. If the "driver" was not driving—and was unable to directly control or predict what the car would do—then how can the driver be responsible? That is one argument that is sometimes made, and similar arguments have been made in relation to developers of self-driving cars. On the flip side, if there were a responsibility gap after a crash, this might mean that there is also a form of forward-looking responsibility gap just before a crash happens: it might be unclear who the main moral agent is whose responsibility it is to react in certain ways in risky traffic scenarios. The aim of this chapter is to identify different possible forward-looking and backward-looking responsibility gaps related to autonomous vehicles and to assess different strategies for how to fill these gaps. The focus will be on moral responsibility, but questions of legal responsibility will also be briefly discussed.

Keywords Autonomous vehicles · Responsibility gaps · Negative responsibility · Positive responsibility · Forward-looking responsibility · Backward-looking responsibility

S. Nyholm (B)
Faculty for Philosophy, Philosophy of Science, and Religious Studies, LMU Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_1

1 Introduction

This chapter discusses questions of responsibility related to autonomous vehicles, also known as self-driving cars. It discusses different aspects of autonomous driving in relation to which questions of responsibility arise. It pays special attention to worries that self-driving cars might create gaps in responsibility.

One aim of this chapter is to illustrate that while most academic discussions about potential responsibility gaps related to self-driving cars have been about one
important type of gap, there are also other general types of responsibility gaps that are worth discussing. The type of gap the existing literature has tended to focus on is what I will call backward-looking, negative responsibility gaps: gaps related to responsibility for bad things that have happened. We should also reflect on other possible kinds of gaps. These other types of gaps can be related to what is sometimes called forward-looking responsibility. They can also be related to what might be called positive responsibility. This yields the following four possibilities, which will be explained in Sects. 2 and 3: backward-looking negative responsibility gaps, backward-looking positive responsibility gaps, forward-looking negative responsibility gaps, and forward-looking positive responsibility gaps. Each general type of responsibility gap identified in this chapter can be sub-divided into further types of possible gaps within the four broad categories I will identify in what follows [for further discussion, see 12]. Here, though, the discussion will be kept at a fairly general level. Another aim of this chapter is to explore and evaluate different possible strategies for filling these gaps. A third, and final, aim is to review some different stances authors have taken in the academic literature about how one should assess the observation that the introduction of autonomous technologies might create gaps in responsibility. As we will see, there is disagreement about how serious a problem this is.

Notably, the question about who is responsible for how self-driving cars behave in traffic and any damage they might cause is one of the first topics that researchers interested in social and ethical aspects of autonomous driving started discussing. Legal theory researchers started discussing this topic around 2012, whereas moral philosophers started just a little later, around 2014 [13]. At the beginning of the discussion, this was all hypothetical. Researchers were asking "what if a self-driving car would be involved in a crash, and somebody would be hurt—who should then be held responsible for this?" But in 2015, this went from being a thought experiment to becoming a real-world issue, with around 20 minor crashes involving experimental self-driving cars in that year. Those were all blamed on drivers of regular cars, who accidentally bumped into the self-driving cars. However, on Valentine's Day in 2016, a self-driving car—an experimental Google car—was for the first time recognized as having caused an accident. The self-driving car crashed into a bus. And Google admitted that their car was "partially responsible" for the crash. Just a few months later in the same year, for the first time a person riding in a self-driving car—a Florida man riding in a Tesla Model S in "Autopilot" mode—was killed when his car crashed into a big white truck that the artificial intelligence in the car did not recognize as being a dangerous obstacle. And in March of 2018, a pedestrian—a woman named Elaine Herzberg—was for the first time hit and killed by an experimental self-driving car, operated by Uber in Tempe, Arizona. In both of these fatal cases, there were reports that the "drivers" of the cars were distracted and not paying sufficient attention to the road.
Quite quickly, then, questions about who is responsible for how self-driving cars behave in traffic went from being an academic question for legal theorists and moral philosophers to becoming a real-world problem that society has to deal with [for more discussion and an overview of the early academic literature on responsibility for crashes involving self-driving cars, see 13].

2 What is a Self-driving Car and What is Responsibility?

Other chapters in this book discuss the concept of autonomous driving at greater length than this chapter will do. Here, I will simply note the following. As I am using the term "self-driving car" in this chapter, any car able to perform driving tasks typically performed by a human being, so that the human occupant(s) can for certain periods of time hand control over to the technologies in the car, at least in certain traffic situations, qualifies as a self-driving car. This could be what is often called a level 5 car able to perform all driving tasks in all circumstances.1 Or it could be a car with a lower level of automation capabilities, which is only able to operate autonomously in certain limited traffic situations. I will here only be talking about those times when the human driver is not actively doing the driving and the car is operating in an automated mode. Accordingly, a Tesla Model S that can operate in "Autopilot" mode counts as a self-driving car according to this way of using the term "self-driving car", as do future level 5 cars that we do not yet have.

1 Information about the oft-quoted SAE taxonomy and definitions of terms related to driving automation systems for motor vehicles can be found here: https://www.sae.org/standards/content/j3016_202104/ (accessed on 18 November 2022).

What I will say a little more about here is the notion of responsibility, since that is the main topic of this chapter. The first thing to note is that a lot can be said about what responsibility is [9, 13]. There are so many distinctions that legal theorists and philosophers have drawn concerning the notion of responsibility, and so many questions that can be asked about responsibility, that I will not be able to cover all the basics here. Instead, I will limit myself to aspects of this notion that are relevant to the discussion in this chapter.

One key distinction is between legal responsibility and moral responsibility. Legal responsibilities are specified by the traffic rules and laws pertaining to driving within a particular jurisdiction. Moral responsibility refers to what people are responsible for (e.g., what they are blameworthy or praiseworthy for) from the point of view of some set of moral norms and values, which may or may not match up with the legal responsibilities specified within some specific set of laws. Before any laws have been made about or re-interpreted to apply to a new form of technology, for example, there might not be any clear answers to questions about legal responsibility as they apply to that technology. But it might still be possible to answer questions about moral responsibility related to that new technology. Here, I am primarily focusing on moral
responsibility.2 But a lot of what I say will hopefully be of relevance both for legal theory and moral theory.

2 In this discussion, I understand moral responsibility in a broad way. As I understand the idea of moral responsibility here, it includes not only what is blameworthy or praiseworthy with respect to narrowly moral duties or norms related to interpersonal obligations (e.g., duties not to harm, disrespect, or otherwise do injustice to others), but also more broadly ethical ideals about ways in which we can excel as human beings. Accordingly, just as one might—for example—be morally responsible (and blameworthy) for having harmed somebody, one might be morally responsible (and praiseworthy) for having developed one's talents or skills, or displayed excellence in some other way as a human being.

When it comes to both moral theory and legal theory, the philosophers Wulf Loh and Janina Loh [9] point out that we can ask questions such as (1) who is responsible for (2) what, (3) in response to whom, (4) for whose sake, (5) according to what norms? For example, (1) person A might have (2) crashed a car, and therefore (3) needs to answer to a judge, (4) because of the damages their car crash caused to person B, all while (5) being held up against the traffic regulations and moral norms in a certain society. This picks up on the idea that being responsible for something often means that one has to be able to respond to questions or criticisms of what one has done, or that one is being held to account and one has to answer for oneself. In such situations, being held responsible is likely to be associated with potentially being punished or blamed for something that one has done, allowed to happen, failed to prevent, and so on.

Responsibility can also be of a more positive nature [12]. If there is a good outcome—e.g., somebody's life is saved—there can be a question of who is responsible for bringing about this good outcome. That person might then be praised or rewarded and in that way held responsible for this good outcome. Human beings do not only care about responsibility related to negative outcomes or behaviors; we also tend to find it important to give recognition where recognition is due, e.g., when somebody does something good or excels in some way, or when some good outcome can be traced back to somebody's efforts or talents. Accordingly, both negative and positive forms of responsibility need to be taken into consideration when we reflect on what it means for human society to start handing over more and more tasks that were previously performed by human beings to technologies with artificial intelligence. I will refer to responsibility related to bad outcomes, where blame and punishment might be called for, by the general expression "negative responsibility". When there are good outcomes or somebody is doing something well, and there might be talk of praise and rewards or other forms of recognition, I will use the expression "positive responsibility". This distinction between responsibility in positive and negative senses is one of the key distinctions I am interested in relating to automated driving in this chapter.

Another key distinction is between responsibility in a backward-looking sense and responsibility in a forward-looking sense [12]. After something good or bad has happened, we can look back and ask who was responsible for this good or bad thing. We might ask this because we think somebody should be praised or blamed. This is responsibility in the backward-looking sense. In contrast, if there is some good
thing or some bad thing that might happen, we can ask whether somebody should be assigned, or perhaps be viewed as already having, the task or obligation to make sure that the good thing happens or that the bad thing is avoided. Parents, for example, are typically thought to have a forward-looking responsibility to make sure that their children will do well in life. They discharge this responsibility by taking good care of their children and raising them well. A driver of a regular car, in turn, has a forward-looking responsibility to make sure that potential accidents are avoided as he or she is driving from A to B. With these distinctions between positive and negative responsibility and backward-looking and forward-looking responsibility, we can identify four general kinds of responsibilities, as illustrated in this classification matrix:

Backward-looking negative responsibility: being responsible for something bad that has happened.
Backward-looking positive responsibility: being responsible for something good that has happened.
Forward-looking negative responsibility: being responsible for trying to make sure that something bad will be avoided.
Forward-looking positive responsibility: being responsible for trying to make sure that something good will be achieved.

When it comes to driving, here are some examples of these different kinds of responsibilities. These examples are intended to help to put the driving of vehicles into a larger societal perspective.

After there have been injuries or deaths involving cars, the question always arises whether it was a pure accident, or whether it was somebody's fault. Did the driver cause the crash? Had the car manufacturers failed to make the car safe enough? Did somebody else—e.g., somebody behaving irresponsibly around the car—cause this to happen? Who should be blamed or punished? These are all questions related to backward-looking negative responsibility. Typically, these questions are answered with reference to who was, or should have been but wasn't, able to predict, prevent, or otherwise be in control over the functioning of the car.

Consider next forward-looking negative responsibility. Car manufacturers have a forward-looking negative responsibility to design and install safety features that prevent avoidable car crashes from happening. They might not be able to predict or control for specific future events (since they have yet to happen). But they have responsibilities to think about what kinds of situations the cars will be in and enable the cars to handle such situations in safe ways. Drivers, in turn, have responsibilities to be on the look-out for dangerous situations, to try to steer away from them, and to more generally drive in ways that avoid harm to others. Other people have responsibilities to not behave in ways that might cause the driver to crash their car—e.g., randomly stepping in front of the car as it is driving along the road.

While car manufacturers are developing new types of cars or improving their old models, they might be thought to have a forward-looking positive responsibility to create cars that promote valuable goals like traffic safety, sustainability, or other
positive outcomes. People who drive cars—perhaps especially professional drivers— can have forward-looking responsibilities to drive in ways that lead to good outcomes. In general, people have responsibilities to behave in traffic in ways that are mutually beneficial to all members of society. Speaking of professional drivers, this is a large group of people who contribute positively to society through the work they do as professional drivers. After carrying out their job as a driver in a professional and good way, they may be praised or rewarded for a job well done and thereby recognized for the contribution they make to society. This is a kind of positive responsibility that might be associated with driving. Another kind is associated with displaying excellence in driving, as some do who compete in motor sports, and who may be admired by others for their driving skills. This is another domain in which people might be held responsible in a positive sense for what they have done—i.e., their excellent driving while competing in the motor sport (e.g., Formula 1 or whatever).3 Or, to go back to car manufacturers, if traffic safety has been increased, the car companies (or their engineers) who have created safer cars might be praised and held responsible in the backward-looking positive way.

3 As noted in footnote 2 above, I am here not only considering moral responsibility in the narrow sense of what is related to interpersonal duties, but also in the sense of what is involved in excelling as a human being, which might involve things such as mastery or virtuosity in our use of tools and instruments—which could involve things such as mastery over musical instruments, but also mastery or virtuosity with respect to something such as driving cars.

3 Responsibility Gaps in General

Worries about gaps in responsibility are discussed in two different contexts. One is the context of large organizations or other collectives, which might cause problems, but where it might be hard to identify who exactly is responsible. The other context is the context of technologies with autonomous functionality and/or artificial intelligence. In the latter context, the term "responsibility gap" was coined by Andreas Matthias in his oft-cited 2004 article about what he called "learning automata" [11]. Matthias was discussing different forms of AI technologies involving "machine learning, neural networks, genetic algorithms and agent architectures", which he thinks create "a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it" [11: 175].

As Matthias describes this idea in more detail, he explains that he thinks that responsibility only makes sense when the human agent being held responsible has control over what they are responsible for. And Matthias thinks that AI technologies involving machine learning will not be under our control because we cannot predict or explain the patterns according to which these technologies start operating once they are up and running. Matthias, then, thinks that predictability and explainability are aspects of control, and he thinks that having control over something is a condition
of being responsible for it. (We will see below that there are those who agree that control is a condition of responsibility, but who claim that control can be related to other things than predictability and explainability). The kind of responsibility that Matthias is talking about is backward-looking, negative responsibility. He thinks that machine learning systems may undermine the possibility of blaming and punishing manufacturers or operators of autonomous technologies for harm these technologies cause. Yet he thinks that what he calls “our sense of justice” will make us feel that somebody should be held responsible. Accordingly, there is a “responsibility gap”. It seems that somebody should be held responsible, but there is nobody who it is right or fair or otherwise correct to hold responsible. Or so Matthias argues. We can call what Matthias discusses a “backward-looking, negative responsibility gap”. But there are also forward-looking responsibilities of both negative and positive kinds, as well as backward-looking, positive responsibility. These might also involve gaps. So, it is possible to distinguish among the following four general kinds of potential responsibility gaps [12]: Responsibility gaps

Backward-looking, negative responsibility gaps: nobody can rightly be held responsible for something bad that has happened, though it seems that somebody should be held responsible.

Backward-looking, positive responsibility gaps: nobody can rightly be held responsible for something good that has happened, though it seems like somebody should be held responsible.

Forward-looking, negative responsibility gaps: nobody can be seen as being responsible for making sure that something bad is avoided, though it seems that somebody should be responsible for it.

Forward-looking, positive responsibility gaps: nobody can be seen as being responsible for making sure that something good is achieved, though it seems that somebody should be responsible for it.

Just as Matthias’ discussion is about backward-looking negative responsibility gaps, most other discussions about responsibility gaps have also been about that general type of gap. But there is also some work in AI ethics and the ethics of technology more generally that has been about the other kinds of responsibility gaps, if not explicitly, then at least implicitly. When it comes to forward-looking negative responsibility gaps, I have elsewhere discussed what I call “obligation gaps”. Such a gap is created when there is some agent that faces a choice, where one option might be better than other options, but where the agent in question (e.g., some AI system) is not a responsible agent that is able to have moral obligations. Notably, authors like Carissa Véliz [18] argue that AI systems cannot be moral agents who are responsible for what they do. At the same time, others—like Luciano Floridi [5]—argue that while AI systems cannot be moral agents of a sort that can be responsible, they are nevertheless a form of
minimal moral agents: they can act in the world, and their acts can have morally good or bad effects. If Matthias is right that we cannot fully predict or explain the behavior of some of these agents, we might have a situation where there is a form of forward-looking, negative responsibility gap here, or what I have elsewhere called “obligation gaps” [14]. That is, there is a potentially dangerous AI agent operating in the world, but it cannot be responsible for its own dangerous actions. And perhaps no human is fully responsible for what it does either. If so, there might be nobody whose forward-looking responsibility it is to make sure that the AI agent avoids causing bad outcomes. Consider next forward-looking positive responsibility gaps. If there are certain very positive outcomes that could be caused by, say, powerful AI systems, it might seem that somebody should take it upon themselves to create these positive powerful AI systems. But nobody might wish to assume this responsibility. And, more importantly, there might not be anybody who is both a clear candidate for shouldering that responsibility and who would have the ability to do so. If so, there might be a forward-looking, positive responsibility gap here [12]. Consider next backward-looking, positive responsibility gaps. John Danaher and I have discussed what we call “achievement gaps”, which we think is a form of backward-looking, positive responsibility gaps [3]. The idea is that if more and more tasks normally performed by human beings—e.g., in workplaces like hospitals—are taken over by AI systems and other technologies, there might be less and less room for human achievement in the domains in question. If good outcomes are increasingly achieved by the technologies we use, and many of us do not fully understand these technologies and also lack control over them, there might be less room for many of us to claim credit for good outcomes that are brought about in society. As more and more tasks previously performed by human beings are automated or taken over by artificially intelligent technologies, there might come to be fewer opportunities for human beings—such as people working in different fields—to contribute positively to society or to excel in their domains of activity. If this all happens, more positive responsibility gaps might be created by the technologies taking over positive tasks previously performed by human beings. The technologies might create good outcomes, on the one hand. But they might at the same time rob human beings of opportunities to contribute positively to society, on the other hand.

4 Responsibility Gaps Related to Self-driving Cars

Matthias wrote his well-known article in 2004. Self-driving cars were not commonly discussed in ethics articles on artificial intelligence and responsibility back then. So, Matthias did not apply his arguments to the possibility of responsibility gaps related to self-driving cars. However, an early article on self-driving cars and responsibility by the philosophers Alexander Hevelke and Julian Nida-Rümelin from 2015 presents a number of intriguing arguments related to this issue [7].

Those arguments are not explicitly put in terms of the distinctions I am drawing. But they can be restated in the terms introduced above. Hevelke and Nida-Rümelin base some of their arguments on legal arguments made by the legal theorists Gary Marchant and Rachel Lindor, who wrote one of the earliest influential legal analyses of this topic in 2012 [10], but they also provide various arguments of their own. Let us start, however, with one of the arguments they attribute to Marchant and Lindor. This argument concerns whether we should blame car manufacturers for crashes involving self-driving cars. The idea is that if self-driving cars have a potential to become safer—perhaps much safer—than regular cars, then we should be careful about blaming car manufacturers for any crashes that these cars might be involved in, so as to avoid discouraging car manufacturers from developing and introducing self-driving cars into traffic. A related argument here—of a perhaps less pragmatic nature—is that if car developers have indeed managed to create much safer cars, which crash far less often than regular human-driven cars do, then they have taken great care to promote the avoidance of crashes. This can be taken to mean that their responsibility for specific crashes that do still happen diminishes as the cars they develop become safer and safer. In other words, the safer the self-driving cars that car manufacturers develop become, the less it might seem that it makes sense to blame them when a crash with one of those very safe cars does occur. After all, they have taken great pains to create a safer form of driving. It is just that even extremely safe cars will—unless they drive extremely slowly and far away from all other cars or other obstacles—sometimes crash, even if only rarely. Putting this in terms of forward-looking and backward-looking forms of negative responsibility, one could say, firstly, that while manufacturers of self-driving cars have a forward-looking responsibility to prevent the occurrence of predictable and probable types of crashes, it makes less sense to view them as having a forward-looking, negative responsibility to prevent the occurrence of any particular unpredictable and improbable actual crashes that might occur. So, after the fact—unless the crash was of a type that the car manufacturer should have and could have predicted and guarded against—that particular crash might be seen as having been outside of their control, and beyond what they could have been expected to predict, foresee, and prevent. The other obvious candidate to be held responsible is the person using the self-driving car. With respect to this candidate, Hevelke and Nida-Rümelin present (i) one argument that can be read as suggesting that people using self-driving cars often do not have a forward-looking negative responsibility to prevent specific crashes and (ii) another argument suggesting that if a crash has occurred, then the people in the self-driving car will typically not have a backward-looking, negative responsibility for the crash. The first argument can be put like this. Initially, people might be thought to have a duty—or, if you will, a forward-looking, negative responsibility—to be alert and on the lookout for possible accidents and, along with that, a duty to take back control over the car if an accident scenario suddenly comes about. However, supposing that the car is generally driving in a safe way—and indeed perhaps even in a safer way than
a human-driven car would be—then dangerous situations will occur too seldomly for it to be reasonable to expect the human occupant of the car to be sufficiently alert and sufficiently able to quickly step in and take control over the operation of the vehicle. Presumably, it is easier to be alert and to pay attention if one is actually driving a car. Then one has to constantly pay attention to the road and react to what is happening all the time anyway. But, in Hevelke and Nida-Rümelin’s assessment, if one is not actually driving the car, it is too much to expect of the person in the car that they are on the lookout all the time for possible accidents. Accordingly, one could say that Hevelke and Nida-Rümelin, in providing this argument, are formulating a reason to deny that users of self-driving cars have a forward-looking, negative responsibility to be in charge of making sure that their self-driving cars do not cause accidents. If the self-driving car does cause an accident—and here comes the second argument alluded to above—then Hevelke and Nida-Rümelin think that it is a matter of bad luck on the part of the person(s) in the car that their self-driving car caused an accident. They may not have been doing anything different from what people in other self-driving cars, which were not causing any accidents, were doing. They were not being reckless. They may even have chosen to use a self-driving car because they believed (perhaps with good reason) that it was safer than a regular car that they would have to drive themselves. Therefore, if they are unlucky, and their self-driving car causes harm to somebody else, it can seem wrong and unfair to hold them responsible for this harm. Or so Hevelke and Nida-Rümelin argue. If we accept these arguments—or some version of them—we might need to start worrying about both backward-looking and forward-looking negative responsibility gaps in relation to crashes and harms caused by self-driving cars. At least we might need to do so if (1) we follow the legal theorists Marchant and Lindor and the philosophers Hevelke and Nida-Rümelin in assuming that self-driving cars will only be widely used if they have proven to be safer (and perhaps much safer) than regular cars and (2) we do not take seriously the possibility that the self-driving car itself might be seen as being responsible for its behavior on the road. I mention this second idea here—which might strike some readers as crazy—because there are actually some who have taken that idea seriously, as we will see below. Let us first, however, consider whether there might be potential responsibility gaps having to do with selfdriving cars that are related to forward-looking or backward-looking positive types of responsibility. We can start by relating that question to responsibilities on the part of car manufacturers. Let us first assume again that self-driving cars have proven themselves to be safer—and perhaps even much safer—than regular cars. This might then still leave room for even greater safety improvements. That is, it might be possible to imagine self-driving cars that would be even safer than the really, really safe self-driving cars that we are assuming are available. Let us suppose, for example, that the self-driving cars that are available are 50% safer than human-driven cars. But let us also assume that it is imaginable that newer models might be developed that were even 75% safer, or 80, 90, or 95% safer. That would be a really good outcome. 
It might seem that it would be great if somebody—e.g., some car manufacturer—were to take upon themselves the responsibility of creating this even safer form of
self-driving car, which would be even safer than the already very safe self-driving cars that they have developed, which we are assuming are much safer than regular cars. A problem here might be that nobody might wish to accept this responsibility of creating that even safer form of self-driving car. And the developers of self-driving cars might plausibly be able to say “hold on, we have already developed a much safer form of car than the types of cars we had before. So we cannot be said to have a forward-looking, positive responsibility to create an even safer form of self-driving car.” This might, in other words, be the sort of positive outcome that it would be great if it were to come about. But there might not be anybody who can be seen as having a forward-looking positive responsibility for bringing it about—even if it would be nice if somebody had such a responsibility. So, there might be a form of forward-looking gap in positive responsibility here [12]. Let us lastly consider one more type of potential positive responsibility gap related to self-driving cars. As noted above, lots of people currently earn their living—and have opportunities for contributing positively to society—by working as different forms of drivers. (This could be taxi-drivers, ambulance drivers, truck drivers, private chauffeurs, and so on.) If the tasks they are performing are increasingly being automated, so that the services of this large group of people are not needed anymore—or at least not to the same extent—then the opportunities all these people have to contribute positively to society might in effect be diminished. Where they were previously able to be positively responsible for the work that they did as drivers, they would no longer have those same opportunities if most of the driving that is being done is taken over by the AI systems within self-driving cars. And there might not be enough other jobs available for these people where they can get the kind of recognition that they were able to get in their role as drivers. So their jobs’ being automated away might create positive responsibility gaps in the form of what Danaher and I have called “achievement gaps” [3]. In general, if the activity of driving is automated, this might be one less domain of activity in which there is room or a need for human excellence. Accordingly, unless the people in question find other areas in which they can excel, gaps might emerge with respect to where they can receive recognition or in other ways be treated as positively responsible for good outcomes that are being achieved (e.g., that goods or people are taken safely and comfortably from one location to another). Having considered some different possible kinds of responsibility gaps that might arise in relation to autonomous driving, let us now consider different possible responses to these gaps. We will both consider the question of whether and to what extent we should mind the gaps and the question of how we might try to fill them, if possible.

5 Should We Mind the Gaps?

When the philosophers Thomas Simpson and Vincent Müller discuss responsibility gaps in another context—namely, responsibility gaps that might exist in relation to autonomous weapons systems—they suggest that if two conditions hold, then we might have reason to tolerate responsibility gaps [16]. They are discussing “killer robots”—i.e., artificially intelligent weapons that act autonomously. They suggest that it could come to be the case that these technologies make war safer for all whose safety should be protected, such as certain soldiers and civilians. That is the first condition that might make responsibility gaps tolerable in cases where the wrong people are in fact killed. The second condition is that the autonomous weapons systems have been made as safe as possible. In a similar way, one could imagine that somebody might argue that responsibility gaps—e.g., backward-looking, negative responsibility gaps—related to self-driving cars could be seen as tolerable if the following two conditions hold. First condition: if the introduction of self-driving cars into traffic makes traffic, on the whole, safer for everyone (even if there is still the occasional deadly accident). And second condition: if self-driving cars have been made as safe as we can currently make them. If those two conditions obtain, somebody taking inspiration from Simpson and Müller’s argument about autonomous weapons systems might argue that we should tolerate gaps in negative responsibility on those occasions that self-driving cars cause harm. An even more controversial view is suggested by the legal theorist and philosopher John Danaher [1]. He suggests that we should welcome some responsibility gaps. He does so in a more general discussion about responsibility gaps. But his argument too could perhaps be applied to self-driving cars. Why does Danaher think that the creation of (some) responsibility gaps should be welcomed? It can be burdensome, he argues, to be held responsible for “tragic choices” that we sometimes have to make in life. And so it would be good, at least sometimes, to be able to outsource these tragic choices to AI systems, so that human beings do not have to bear the responsibility, and there is instead a gap in responsibility. Danaher is also a skeptic about whether human beings have the sort of free will that can make us ultimately responsible for our actions and the outcomes of our actions [2]. That is another reason, he thinks, to welcome responsibility gaps created by AI systems. In the same way, perhaps some people might argue that we should welcome responsibility gaps introduced by self-driving cars in particular. On the opposite side of the spectrum of possible views about how we should feel about responsibility gaps is a view put forward by the philosopher Christian List [8]. As List sees things, gaps in responsibility are, as we might put it, intolerable. List suggests that when we introduce new forms of agents into “high-stakes settings”, we must either find ways of filling responsibility gaps or refrain from introducing these new forms of agents into the high-stakes settings in question. List applies this both to organizations of human beings who might function as risky “group agents” and to AI systems that might function as risky “artificial agents”.

List thinks that society should only allow people to create potentially risky or dangerous organizations, which might harm others, if either (a) they have a representative who is willing and able to take responsibility for what the group agent does or (b) the group agent can itself be responsible on an organizational level. Similarly, List argues that we should only allow risky artificial agents—such as self-driving cars—which might cause harms if (a) some human being(s) can be a representative who takes responsibility or (b) if the artificial agent itself can somehow be responsible for its actions. Otherwise, we cannot and should not tolerate that these dangerous novel forms of agents operate in our societies. I will not discuss these different views about to what extent we should or should not mind responsibility gaps in detail here. Instead, I will simply note the following. If we were to rank these views with respect to how likely they are to be convincing to people in general and to those who reflect on responsibility gaps in their academic research, my estimation is that the view that would seem most convincing is List’s view, viz. the view that in general, we should not tolerate the introduction of new forms of agents that create responsibility gaps unless we can come up with ways of filling those gaps. Next in line in terms of how convincing people are likely to find these views is most likely Simpson and Müller’s view that if the two conditions they describe obtain, then we could be to some degree tolerant of some responsibility gaps. Danaher’s view is likely to be the most controversial view, which would meet with the most resistance. This is not a knock-down argument against Danaher’s view. Perhaps, if we all reflect long and carefully on this, we would come to accept Danaher’s view, or some view that leans in that direction. However, given that most people tend to find the practice of holding each other responsible for things we do and things that happen to be an important social practice, let us explore ways in which we might try to fill responsibility gaps. I also wish to highlight here that when the authors whose views we just considered discuss how we should feel about potential responsibility gaps, they primarily focus on backward-looking, negative responsibility gaps. But, as argued in the sections above, there can be different possible forms of responsibility gaps. So, we should not only be considering how to respond to one type of gap. For example, it might be that some people would agree with Danaher that not having to be blamed or punished for bad things that happen might be something to be welcomed. But those same people might not welcome the prospect of not being praised, rewarded, or otherwise recognized as responsible for good things that happen if certain valuable tasks previously performed by them are taken over by AI systems. In other words, people are sometimes motivated to take responsibility for things that happen, especially when it comes to good things that happen or that we bring about by acting in certain ways. Let us therefore start our brief review of different strategies for filling responsibility gaps by considering whether that option—i.e., having people step forward to volunteer themselves as responsible—might be a way of filling potential responsibility gaps related to self-driving cars.

6 Suggestions About How/Whether Responsibility Gaps Can Be Filled

Interestingly, some car companies have said that once they have self-driving cars on the market, they will themselves take responsibility—at least in the sense of accepting legal liability—for any crashes that their cars will cause. Volvo and Audi have both made such claims [13]. Back when they made those claims, however, they did not yet have self-driving cars on the market. Meanwhile, companies that do already have self-driving cars on the market or who are actively trying to create self-driving car ride services—like Tesla and Uber—seem less inclined to take responsibility for crashes their cars cause, as was seen in the cases mentioned in the introduction. Instead, they have denied responsibility for crashes involving their self-driving cars. In general, the strategy of trying to fill responsibility gaps by having people (or organizations) step forward to volunteer themselves as being willing to take responsibility seems to face the following problem, which has to do with asymmetries in people’s motivation to take responsibility. In general, people (and organizations) tend to be much more motivated to take responsibility for good things that have happened than to take responsibility for bad things that have happened. So, when it comes to filling backward-looking responsibility gaps, it might be much easier to find people or organizations that are willing to fill backward-looking, positive responsibility gaps than to find people or organizations willing to fill backward-looking, negative responsibility gaps by voluntarily taking responsibility. This is to be expected. After all, taking responsibility for bad things that have happened involves making oneself liable to blame or punishment (which is costly), whereas taking responsibility for good things that have happened might involve reaping benefits, such as praise or rewards or other forms of recognition. At the same time, while people and organizations might be strongly motivated to take responsibility for good things that have already happened (thereby volunteering to fill backward-looking, positive responsibility gaps), they are often likely to be less motivated to volunteer to fill forward-looking, positive responsibility gaps. Accepting the responsibility to make sure that good things will happen in the future can be costly, just as taking responsibility for bad things that have already happened can be. People and organizations typically only accept forward-looking positive responsibilities when they stand in a special relationship to some individual or individuals (e.g., the relation of a parent to their child or a doctor to their patient). So, closing responsibility gaps by hoping or expecting people or organizations to voluntarily take responsibility is not likely to be a successful strategy other than when it comes to filling backward-looking positive responsibility gaps [12]. What other strategies are there? As noted above, there are those who suggest that the self-driving cars themselves could potentially be held responsible for what they do or problems they cause. This idea has been defended in different ways. For example, List argues that while current AI systems lack capacities that enable them to be morally responsible agents, there is no in-principle reason to reject the possibility
that future AI systems—including self-driving cars—could perform the functions we commonly associate with responsible moral agents [8]. To motivate this general idea, List first argues that groups of people can form organizations that are morally responsible, because they are able to perform the sorts of functions responsible moral agents perform: some groups can be said to have aims, to make normative judgments, and understand what choices they are facing. In fact, as List sees things, a well-functioning organization of people is a form of artificial intelligence. In such an organization, there is a form of artificially created intelligence at the group level, which is a type of social technology that is not identical to the natural intelligence of the different people who are members of the organization. Its hardware is people, rather than computers. AI systems made up by technologies, in turn, can also become able to make moral judgments, to understand that they face choices, and to do other things associated with being a moral agent. Or so List argues. But at the same time, he suggests that current AI systems cannot yet do so. So if that is right, then it also follows from List’s logic that current self-driving cars cannot yet be morally responsible agents: they lack the capacities needed to qualify as a responsible moral agent. And perhaps it will take very long before any AI agents come to possess the capabilities List discusses. There are, however, also those who do not focus on the capacities of the selfdriving cars themselves, but who instead focus on other considerations that could enable them to be considered as responsible for what they do or cause. Such arguments have been made in the context of legal responsibility for crashes. The legal expert Jacob Turner, for example, argues that a self-driving car can be made into a legal person, with, among other things, the right to hold property [17]. If that self-driving car injures or kills a human being, it can use its property to pay damages to the injured party or to family members who have lost a loved one. Another legal expert, Jaap Hage, suggests that the only question we should ultimately be asking is a pragmatic one: we should be asking whether the practice of holding the self-driving car responsible for its behavior could have good consequences [6]. If holding a self-driving car or any other AI system responsible for their behavior would somehow have good consequences, then there is nothing stopping us from regarding a self-driving car as being responsible for its behavior, Hage argues. This might be seen as another way of filling some responsibility gaps. However, it is not clear whether treating self-driving cars as if they are responsible for what they do would have good consequences. Another view is defended by the philosophers Loh and Loh, who were already mentioned earlier on [9]. They suggest that while a self-driving car cannot be an independent responsible agent, it can be part of what they call a “responsibility network”. This phrase refers to a network of different kinds of agents (individual agents, organizations, artificial agents, and whatever else it might be) that can be seen as having different forms of responsibilities distributed within these networks. On such a view, there might be some sense in which the self-driving car has a share—if only a small share—in the responsibility for what happens when the selfdriving car is operating on public roads. 
But the main responsibility would remain in the hands of human beings.

Relatedly, I have suggested in some of my previous work that we can regard human beings and self-driving cars as forming a kind of human–machine teams, where some of the human beings involved can be seen as having the roles of managers or supervisors [13, 14]. If this is how we think about the interaction between human agency and machine behavior, we can think of human beings as having what is sometimes called “command responsibility” over the behavior of the self-driving cars (i.e., responsibility of the sort military commanders might have over soldiers under their command). The idea here is that technologies like self-driving cars are not operating independently. They are always operating under our supervision. And we can stop using them, or update them, or otherwise change the way we interact with these technologies over time. That goes a long way, I have argued, towards making us responsible for what happens when we use these technologies. One last way of attempting to resolve responsibility gap worries that I will mention is found in the work of the philosophers Filippo Santoni de Sio and Giulio Mecacci [15]. In effect, they argue that while Matthias might be right that we might not be able to fully predict and explain all the behaviors of AI systems, we can have other forms of control over AI systems. We can have what they call “meaningful human control”, which they argue does not require the ability to predict and fully explain the behavior of AI systems. So long as two conditions—the “tracking” and “tracing” conditions—hold, Santoni de Sio and Mecacci think that meaningful human control is achieved. The “tracking” condition is that the behavior of the AI system should track what Santoni de Sio and Mecacci call our human reasons. As I understand this idea, it is similar to what is sometimes called “value alignment” in AI ethics—i.e. the AI system should behave in a way that aligns with human values and goals [12]. The “tracing” condition, in turn, is that there should be some person or persons who understands two things: (i) how the technology in question works and (ii) what kinds of ethical issues might be associated with using this technology in society. If these “tracking” and “tracing” conditions hold, Santoni de Sio and Mecacci think that we have meaningful human control over the technologies in question—which might be self-driving cars. And the presence of this form of meaningful human control makes many responsibility gaps disappear. Or so these researchers argue.4

4 What about Hevelke and Nida-Rümelin? How do they suggest that we solve the problems that they identify in their above-cited article? Their suggestion is that we can hold all users of self-driving cars collectively responsible for what these cars do. We can do so, they argue, by having a mandatory tax that all users of self-driving cars must pay. This money could then be used to pay damages to those negatively affected by any problems that self-driving cars might cause [7].

7 Concluding Remarks

This is not the place for an in-depth discussion of these different suggestions about how to fill responsibility gaps related to self-driving cars. Instead, I will end this chapter by highlighting some general challenges associated with the just-considered suggestions. One issue is that most of these views are not responses to all four broad kinds of responsibility gaps identified above. They are rather primarily formulated as responses to worries about backward-looking, negative responsibility gaps, which is the main kind of gap that is discussed in most of the literature on this topic [12]. However, as I have argued, given that responsibility also has forward-looking as well as positive dimensions, we need ways of filling the other three kinds of responsibility gaps as well [3]. A second issue is that some of the above-mentioned suggestions for how to fill responsibility gaps work best in contexts in which people work together in an organized way. For example, when Loh and Loh claim that self-driving cars can be part of responsibility networks, where most of the responsibility can be distributed among certain humans, they are making a suggestion that seems to work best in a context in which some organization (e.g., the military or the police force) is using self-driving cars, and in which distributing responsibilities among different members of the organization is a manageable task. This is harder to do when the different parties involved are not all “on the same team”, so to speak [4]. A third and last issue I will mention is that while the above-considered suggestions for how to fill responsibility gaps point to general considerations that are surely of great relevance to filling these gaps, it is not straightforward how to apply these general theoretical ideas in practice to specific cases [12]. I think that this applies both to the suggestion I myself have made in some of my own work and to the suggestion Santoni de Sio and Mecacci make in relation to what they call meaningful human control. For example, whose goals, values, or reasons should the self-driving car’s behavior track? Those of the people riding in the car, those of members of society in general, or somebody else’s? And who is it that should have an understanding of the technology and its potential ethical issues—the same people as those whose values, reasons, or goals are hopefully being tracked? Or could it be different people? If so, why would certain specific people be responsible for how the cars are behaving in society? In conclusion, I submit that while there are many interesting contributions to the discussion of responsibility gap issues related to self-driving cars in the existing literature, both from legal theorists and moral philosophers, there is more work to do here. Moreover, we should not forget, when we do this further work, that questions about responsibility are not only backward-looking and focused on harm, blame, and punishment. We also need to be discussing what I have been calling positive responsibility gaps, including both forward-looking and backward-looking, positive responsibility gaps related to self-driving cars.

Acknowledgements Many thanks to the editors of this volume and an anonymous reviewer. My work on this article was part of the research program Ethics of Socially Disruptive Technologies, which is funded through the Gravitation program of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).

References

1. Danaher, J.: Tragic choices and the virtues of techno-responsibility gaps. Philos. Technol. 35(2), 1–26 (2022)
2. Danaher, J.: Robots and the future of retribution. In: Edmonds, D. (ed.) Future Morality, pp. 93–101. Oxford University Press, Oxford (2021)
3. Danaher, J., Nyholm, S.: Automation, work and the achievement gap. AI Ethics 1(3), 227–237 (2021)
4. De Jong, R.: The retribution-gap and responsibility-loci related to robots and automated technologies: a reply to Nyholm. Sci. Eng. Ethics 26(2), 727–735 (2020)
5. Floridi, L.: On the morality of artificial agents. In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics, pp. 184–212. Cambridge University Press, Cambridge (2011)
6. Hage, J.: Theoretical foundations for the responsibility of autonomous agents. Artif. Intell. Law 25(3), 255–271 (2017)
7. Hevelke, A., Nida-Rümelin, J.: Responsibility for crashes of autonomous vehicles: an ethical analysis. Sci. Eng. Ethics 21(3), 619–630 (2015)
8. List, C.: Group agency and artificial intelligence. Philos. Technol. 34(4), 1213–1242 (2021)
9. Loh, W., Loh, J.: Autonomy and responsibility in hybrid systems: the example of autonomous cars. In: Lin, P., Abney, K., Jenkins, R. (eds.) Robot Ethics 2.0, pp. 35–50. Oxford University Press, Oxford (2017)
10. Marchant, G., Lindor, R.: The coming collision between autonomous vehicles and the liability system. Santa Clara Law Rev. 52(4), 1321–1340 (2012)
11. Matthias, A.: The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf. Technol. 6(3), 175–183 (2004)
12. Nyholm, S.: Responsibility gaps, value alignment, and meaningful human control over artificial intelligence. In: Placani, A., Broadhead, S. (eds.) Risk and Responsibility in Context. Routledge, London (in press)
13. Nyholm, S.: The ethics of crashes with self-driving cars: a roadmap, II. Philos. Compass 13(7), e12506 (2018)
14. Nyholm, S.: Humans and Robots: Ethics, Agency, and Anthropomorphism. Rowman & Littlefield, London (2020)
15. Santoni de Sio, F., Mecacci, G.: Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos. Technol. 34(4), 1057–1084 (2021)
16. Simpson, T.W., Müller, V.C.: Just war and robots’ killings. Philos. Q. 66(263), 302–322 (2016)
17. Turner, J.: Robot Rules: Regulating Artificial Intelligence. Springer, Berlin (2019)
18. Véliz, C.: Moral zombies: why algorithms are not moral agents. AI Soc. 36(2), 487–497 (2021)

Designing Driving Automation for Human Autonomy: Self-determination, the Good Life, and Social Deliberation

Filippo Santoni de Sio and Fabio Fossa

Abstract The present chapter analyses the complex ways in which driving automation affects human autonomy with the aim of raising awareness of the design and policy challenges that must be faced to effectively align future transportation systems to this ethical value. Building on the European report Ethics of Connected and Automated Vehicles, we consider three dimensions of the relation between human autonomy and driving automation: autonomy as self-determination of driving decisions; autonomy as freedom to pursue a good life through mobility; and, finally, autonomy as the capacity and opportunity to influence social deliberation concerning transportation policies and planning. In doing so, the chapter shows that delegating driving tasks to CAVs might both infringe and support user autonomy, thus calling for a reconsideration of widespread frameworks concerning the role of humans and technological systems in this domain. Moreover, it stresses the importance of promoting inclusive and participatory decision-making processes on transportation policies and planning, so as to avoid situations where the development and adoption of transport innovations are led by agents willing to respond only to a limited set of stakeholders’ needs.

Keywords Driving automation · Human autonomy · Self-determination · Transport policy and planning · Inclusive social deliberation

F. Santoni de Sio Section Ethics/Philosophy of Technology, Delft University of Technology, TBM Building, 2628BX Delft, The Netherlands e-mail: [email protected] F. Fossa (B) Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milan, Italy e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_2


1 Introduction

The purpose of the present chapter is to analyse the complex ways in which driving automation affects the ethical value of human autonomy. In doing so, it aims at raising awareness of the design and policy challenges that must be faced to effectively align future transportation systems to relevant claims grounded on this value. The intersection of human autonomy and technological automation is notoriously controversial, with technical automation concurrently opening possibilities for supporting and restricting the enjoyment of autonomy. As such, it lies at the heart of several ethical quandaries across various AI-based applications [30]—e.g., Autonomous Weapon Systems [54] and Recommender Systems [64]. Connected and Automated Vehicles (CAVs) are no exception [6]. In the context of driving automation, threats to and opportunities for human autonomy are so numerous and deeply entangled with each other that much philosophical work is needed to clarify how this value is to be effectively pursued and promoted. Building on the analysis proposed in the European report Ethics of Connected and Automated Vehicles [26], the chapter considers three dimensions of the relation between human autonomy and driving automation. First, we analyse how CAVs impact the self-determination of driving decisions and actions such as swerving, speeding, or choosing routes. Secondly, we examine how driving automation impinges on human autonomy as freedom to pursue a good life through mobility. Finally, we assume a wider perspective and investigate how current narratives surrounding driving automation (see also Chap. 10) affect human autonomy intended as free and open social deliberation and policy-making. For each case, we discuss the challenges of aligning driving automation to the demands of human autonomy. Our analysis, then, is based on the presupposition that the promotion of human autonomy as an ethical value is not to be understood only as a desired side-effect of CAV adoption. Rather, we intend to explore how driving automation can be a means to support this value in the practical domain of transportation. Therefore, the scope of human autonomy is neither defined by reference to the SAE levels of automation nor by taking the widespread adoption of CAVs for granted. Both levels of automation and modes of social adoption are instead assessed on the basis of their compatibility with relevant claims grounded on human autonomy. To be clear, we do not assume that autonomy is the only or most important value to be promoted by CAVs. In line with the European report, we acknowledge that the design, development, and use of CAVs should comply with or directly promote a number of other values such as safety, justice, responsibility, and so on. In this chapter, however, our focus is exclusively on human autonomy as one important value in itself and as an example of the way in which human values should be analysed in relation to technological development. The remainder of this chapter is structured as follows. Section 2 introduces some general remarks on the ethics of technology, thus sketching the theoretical background of our analysis. Section 3 discusses how the ethical value of human autonomy
has been brought to bear on driving automation in the context of the European report Ethics of Connected and Automated Vehicles [26]. Section 4 draws attention to the composite nature of the definition of autonomy there proposed. Three dimensions of human autonomy as affected by driving automation are thus specified: the selfdetermination of driving tasks, the freedom to pursue a good life through mobility, and the openness of social deliberation on transportation planning. The subsequent sections discuss the alignment of driving automation to these three dimensions of human autonomy. Section 5 shows how upholding the selfdetermination of driving tasks suggests supporting conditional forms of driving automation. However, Sect. 6 argues that important benefits in terms of freedom to pursue a good life through mobility can only be reaped through full automation. This sets the stage for future design challenges aimed at moving beyond the SAE levels of automation framework to strike new balances between such seemingly contradictory claims. Finally, Sect. 7 tackles issues at the intersection of driving automation and transportation policy-making—the very context from which the European Commission Report and other analogous documents have originated. Finally, Sect. 8 summarises the results of the analysis and presents some conclusive remarks.

2 Ethics of Technology and Driving Automation Ethical inquiries surrounding CAVs evidently belong to the wider field of the ethics of technology [22, 43]. As an instance of applied ethics [1], the ethics of technology is characterised by the exposure to a highly multifaceted set of challenges [17]. On a conceptual level, ethical values and notions relevant to the case at hand must be identified, determined, and discussed [2, 57]. On a practical level, the various needs, interests, expectations, perspectives, and aims of different stakeholders must be considered and agreement must be sought out of inclusive social deliberation processes [45, 47, 66]. On a technical level, both theoretical and practical insights must be operationalised so to be understandable and actionable for engineers, designers, and other professionals involved in the development of technological products, which come with their own share of complexity [29, 61]. Moreover, each level iteratively affects the others, contributing to an open (and necessarily fuzzy) process of critical appraisal, refinement, and specification [16]. At the same time, however, adherence to the criteria of applicability, effectiveness, and usability pulls for the provision of concrete recommendations, methodologies, and best practices. Such a composite blend of theoretical, practical, ethical, social, political, legal, policy, and engineering ingredients is distinctive of applied ethics efforts and lies at the basis of their intricacy. Faced with the compound composition of socio-technical systems, ethical efforts endeavour to align the technical development, social deployment, and use of technological products to the relevant ethical values. In more concrete terms, these efforts aim at minimising harm caused by technology and putting it at the service of the social good [23].

Clearly, similar objectives can only be pursued through interdisciplinary collaboration [63]. Finding a common ground on which to build shared languages, methodologies, and practices is critical to the mission that the ethics of technology is called to carry out. Concurrently, a truly informed debate can only arise from the sharing of thorough disciplinary perspectives. Up to a point, then, disciplinary analysis must be pursued also separately, by recurring to the epistemological resources and tools of the relative research fields. For what concerns ethics, the identification, definition, and practical application of relevant values constitute perhaps the most remarkable task to attend to—and one that must also be executed with philosophical means of inquiry. Building on these considerations, the present chapter intends to focus on human autonomy as an ethical value that requires to be adequately integrated to the driving automation domain. Driving automation promises to reorganise the way in which many transport decisions and actions are taken, shifting the boundaries of human autonomy on different levels. These reconfigurations of the balance between autonomy and automation in road transport call for further analysis and clarification. The importance of such clarification should not be underestimated. Critically examining how widespread conceptions of human autonomy apply to the case of CAVs is key to realise relevant ethical opportunities and risks. Effective design choices and policy decisions considerably depend on it.

3 Human Autonomy and Driving Automation From an ethical perspective, the importance of aligning the design, deployment, and use of CAVs to the value of human autonomy could hardly be belittled. Surely, the philosophical status of human autonomy is controversial and intricately intertwined with thorny notions such as free will, freedom, self-determination, agential causation, subjectivity, and so on [28]. As a key ethical value, however, it enjoys widespread socio-political recognition (see, e.g., the UN Universal Declaration of Human Rights). In this sense, the value of autonomy characterises human beings as self-determining entities who deserve respect and protection. Moreover, at least within Western philosophy and culture, it buttresses fundamental components of the moral life, such as the exercise of responsible behaviour and the full enjoyment of human dignity [7, 11]. Given its relevance, human autonomy evidently qualifies as a value to be pursued through technological innovation as well. The European approach to the ethics of driving automation confirms the latter statement. In 2020, an interdisciplinary group of fourteen experts appointed by the European Commission authored the report Ethics of Connected and Automated Vehicles. Recommendations on Road Safety, Privacy, Fairness, Explainability and Responsibility [26]. The document establishes an ethical framework for CAVs and offers concrete recommendations aimed at guiding stakeholders in the effort of aligning driving automation to relevant ethical values.

In close connection with the European approach to trustworthy Artificial Intelligence [25], the report starts by identifying and describing the basic normative cornerstones of the framework. Acknowledging its relevance, the authors indicate human autonomy as one of the eight overarching ethical principles for driving automation along with non-maleficence, beneficence, dignity, responsibility, justice, solidarity, and inclusive deliberation [51]. According to the report, the principle of autonomy states that human beings are to be conceived as “free moral agents” [26: 22] whose right to self-determination ought to be respected. In relation to driving automation, the principle of autonomy demands that CAVs are designed so to “protect and promote human beings’ capacity to decide about their movements and, more generally, to set their own standards and ends for accommodating a variety of conceptions of a ‘good life’” [26: 22]. As such, autonomy plays a crucial role in several recommendations, ranging from the protection of privacy rights and the promotion of user choice to reducing opacity and enhancing explainability (see Chap. 3). The insistence on protecting and promoting human autonomy in the context of driving automation rests on solid grounds. Evidence from the ethics of technology and innovation clearly stresses the importance of upholding the value of human autonomy throughout the technological domain. Bypassing individual decision-making through technical means might risk leading to situations where personal decisions are taken by actors (e.g., designers, engineers, manufacturers, policy-makers) who, however, have no right nor particular competence to do so [38]. On a more social level, unduly constraining public deliberation processes might lead to harmful or short-sighted decisions [15], and in any case to limiting the people’s contribution to determining their future. These states of affair are evidently incompatible with the individual and social right to self-determination and should be carefully avoided when designing, deploying, and using CAVs. The protection of autonomy in driving automation is also critical to support other important ethical values. Consider, for instance, the case of responsibility (see Chap. 1). Qualifying human beings as free moral agents by principle means, at the same time, holding them to be responsible agents as well to the extent that they can exercise such freedom. This is a necessary presupposition to establishing who is responsible, and why, when harmful consequences follow from the design, deployment, and use of CAVs [42]. The value of human autonomy, then, is vital to the ethics of driving automation for many reasons. On the one hand, CAVs designed, deployed, and used in ways that promote autonomy will meet claims grounded on the protection of human dignity, thus supporting social acceptance and trust. On the other hand, upholding human autonomy is key to distributing responsibility in a clear and fair way while, at the same time, encouraging responsible behaviour. But how is driving automation to be concretely aligned to the demands of human autonomy?

4 One Concept, Three Dimensions Whilst the ethical relevance of human autonomy to driving automation is evident, it is difficult to specify how the value is to be operationalised into specific socio-technical systems. Driving automation, after all, consists in the delegation of driving tasks from human agents to technical systems. Arguably, this amounts to a reduction of human (driving) autonomy. What needs to be further clarified, then, is how to automate driving functions without impacting too negatively on human autonomy. This raises thorny practical questions. What aspects of human autonomy are relevant to driving automation? Which of them should be prioritised? What model of driving automation should be promoted through design and policy decisions? What guidelines should be offered to practitioners in this sense? To answer such questions, it is first necessary to identify what aspects of driving are relevant for moral autonomy. These aspects, in turn, would serve as tangible constraints to driving automation: CAVs should be developed and deployed in ways that allow for their exercise. In sum, defining the boundaries of human autonomy in driving automation is a necessary step towards providing effective guidelines to its operationalisation [14]. In this sense, we propose an approach that differs from the way in which the relation between human autonomy and driving automation has been commonly debated. Building on SAE levels of automation [48], human autonomy has been primarily conceived in terms of what remains to be handled once a particular driving system is deployed (see Chap. 9). In other words, human autonomy is defined by reference to which driving tasks are delegated to the driving system and which remain competence of the human driver. In what follows, we overturn the order of the terms. Instead of inducing the scope of human autonomy by moving from the scope of driving automation, we ask how different forms of driving automation can serve different ethically valuable aspects of human autonomy. Accordingly, the ethical worthiness of CAV development, deployment, and use will be assessed on the basis of their potential in enabling human autonomy within future transportation systems. By doing so, we wish to stress the priority of the ethical value of human autonomy over choices concerning driving automation and its levels. Integrating ethics and technology, we contend, does not only require ensuring that new products comply with relevant ethical values. It also (and, perhaps, most importantly) requires determining how products should be designed, deployed, and used as means to proactively support fundamental ethical values. Therefore, our ethical analysis of the different dimensions of autonomy cuts across and does not coincide with the engineering definition of the levels of driving automation. And it is not necessarily the case that higher levels of automation will necessarily cause lower levels of human autonomy (even though this may sometimes be the case). As it will become clearer in the remaining of the chapter, the relationship between human autonomy and driving automation is more complex than this.


The discussion of human autonomy provided in the European report represents a good starting point. Three dimensions of the relation between human autonomy and driving automation can be found throughout the report: (a) autonomy as self-determination of driving decisions; (b) autonomy as freedom to pursue a good life through mobility; and (c) autonomy as the capacity and opportunity to influence social deliberation concerning transportation policies and planning. Let us take a closer look at these three dimensions of human autonomy as they are affected by driving automation.

(a) Autonomy as self-determination of driving decisions concerns the exercise of individual control by human drivers/operators over decisions that pertain to driving behaviour—i.e., to the ways in which the vehicle reaches its destination from its starting point. To use traffic psychologist John Michon’s control framework [34, 35; see 33], the scope of (a) includes tactical and operational levels of control over driving systems. At the tactical level, drivers exercise control over the undertaking of traffic manoeuvres such as overtaking, turning, stopping, and adjusting speed. The operational level pertains to the physical activation of driving controls that is necessary to execute traffic manoeuvres: pushing pedals, turning the steering wheel, and so on. In the European report, this component is referred to when the authors recommend designing, deploying, and using CAVs so as to “protect and promote human beings’ capacity to decide about their movements” [26: 22]. In this sense, respecting human autonomy would mean letting users exercise some sort of control over those system operations, if any, that impact on their moral sphere.

(b) Autonomy as freedom to pursue a good life through mobility exhibits a wider scope. It refers to the freedom of pursuing happiness and to mobility as a noteworthy enabler of what makes life worth living. In Michon’s terms [34: 5–6], this aspect of human autonomy pertains to the strategic level of control, which comprises “the planning stage of a trip, incorporating the determination of trip, goal, route and vehicle choice, and evaluation of the costs and risks involved”. In this sense, driving automation is not considered only as supporting a given experience—a journey between two points in space—but rather as one important element for a good life. As stated in the European report, upholding human autonomy also means to “protect and promote human beings’ capacity to (…) set their own standards and ends for accommodating a variety of conceptions of a ‘good life’” [26: 22]. From this perspective, aligning driving automation with the principle of human autonomy would mean envisioning CAVs as means to support the individual pursuit of personal flourishing and well-being—i.e., as means to realise significant transportation needs grounded on the value of autonomy.

(c) This third dimension of autonomy acknowledges the relevance of policy decisions concerning transport solutions and their massive impacts on people’s well-being—including the operationalisation of (a) and (b). Indeed, policy-makers
(along with users and designers) figure as one of the three stakeholder classes to which the report is mainly addressed. Moreover, inclusive deliberation—the obligation to promote a broadly participatory debate on the design, deployment, and use of CAVs—is endowed with the status of a fundamental principle. By insisting on the necessity of inclusive deliberation processes and participation, the authors of the report stress the importance of promoting open, well-informed, unbiased, and independent policy-making when it comes to transportation planning. Shedding light on the ways in which widespread visions of driving automation as a resource to improve the ethical conditions of current transportation systems affect social decision-makers’ autonomy is yet another aspect of our problem that requires careful examination. In what follows, the ethical challenges related to these three dimensions of human autonomy in driving automation are outlined. The results of the analysis can be used to ground more concrete design and policy initiatives to promote autonomy in the context of driving automation.
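To keep the terminology introduced above easy to track in the following sections, the sketch below (Python, purely illustrative; the names, the task examples, and the mapping between dimensions and control levels are our own glosses on the frameworks cited above, not part of them) encodes Michon's three control levels and the three dimensions of autonomy side by side.

from enum import Enum, auto

class ControlLevel(Enum):
    """Michon's three levels of driver control [34, 35]."""
    STRATEGIC = auto()    # trip planning: goals, route and vehicle choice, costs and risks
    TACTICAL = auto()     # manoeuvres: overtaking, turning, stopping, adjusting speed
    OPERATIONAL = auto()  # physical actuation: pedals, steering wheel, indicators

class AutonomyDimension(Enum):
    """The three dimensions of human autonomy discussed in this chapter."""
    SELF_DETERMINATION = auto()   # (a) control over driving decisions
    GOOD_LIFE = auto()            # (b) mobility as an enabler of a good life
    SOCIAL_DELIBERATION = auto()  # (c) influence on transport policy-making

# Illustrative association between autonomy dimensions and Michon's levels.
# Dimension (c) operates at the level of policy rather than in-vehicle control,
# so it is mapped to no control level here.
DIMENSION_TO_LEVELS = {
    AutonomyDimension.SELF_DETERMINATION: {ControlLevel.TACTICAL, ControlLevel.OPERATIONAL},
    AutonomyDimension.GOOD_LIFE: {ControlLevel.STRATEGIC},
    AutonomyDimension.SOCIAL_DELIBERATION: set(),
}

# Hypothetical driving tasks tagged with the control level they belong to.
EXAMPLE_TASKS = {
    "choose destination and route": ControlLevel.STRATEGIC,
    "overtake a slower vehicle": ControlLevel.TACTICAL,
    "keep a safety distance to a cyclist": ControlLevel.TACTICAL,
    "press the brake pedal": ControlLevel.OPERATIONAL,
}

if __name__ == "__main__":
    for task, level in EXAMPLE_TASKS.items():
        touched = [d.name for d, levels in DIMENSION_TO_LEVELS.items() if level in levels]
        print(f"{task} ({level.name}) -> autonomy dimension(s): {touched}")

Nothing in the analysis depends on this particular encoding; it merely shows how a design team could tag candidate automation features with the control levels, and hence the autonomy dimensions, they touch.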

5 Driving Automation and the Self-determination of Driving Tasks

Let us start by considering how human autonomy as in (a) could be promoted through driving automation. In this sense, human autonomy partakes in driving automation mostly as a threatened individual value that needs to be adequately safeguarded. Particular care is required since the exercise of human autonomy at the operational and strategic levels of control is variously constrained by driving automation [67]. At the same time, the experience of manual driving is a complex one, composed of a myriad of decisions. Some of these decisions might have a considerable impact on the moral sphere. Taking ethically relevant driving decisions out of users’ hands might lead to infringements of their autonomy.

Indeed, much of what is valued in driving stems from control over our vehicles and, through it, our movements. Steering, accelerating, and braking according to personal traits, attitudes, and needs offer a tangible medium for the expression of one’s inner self [49, 60]. More importantly, driving is a moral experience. Different driving styles convey various moral values such as safety, respect for others, care for vulnerable road users, environmental friendliness, and so on. Recklessness, aggressiveness, carelessness, and negligence are signs of unethical driving attitudes that all stem from how—in Michon’s terms presented above—tactical and operational control is exercised. The relation between human driving and ethical values is significantly determined by the element of control that drivers exercise over their vehicles. The delegation of driving tasks to automated systems poses the risk of bypassing human judgment in this ethically-laden domain, thus restricting the scope of human autonomy.


Of course, not every instance of driving task delegation is to be understood as a threat to autonomy. However, on some occasions driving automation might bypass users’ moral judgment on matters that could be construed as lying within the purview of user autonomy. At low levels of automation, for instance, a speed control system that could not be overridden by human intervention even in case of emergency might be considered problematic with reference to human autonomy [44, 53, 56]. At the opposite extreme, suppose that fully autonomous vehicles will be able to distribute harm during unavoidable collisions according to given ethical values (see Chaps. 7 and 8). In this case, it might be problematic in terms of human autonomy if said values were set not by passengers themselves, but rather by other stakeholders [9, 36, 37]—which would also create a potentially problematic shift of responsibility away from the drivers. Considering less futuristic scenarios involving high levels of automation, automated features concerning ethical driving behaviour—e.g., regarding the safety distance to be accorded to vulnerable road users or traffic etiquette at pedestrian crossings—might qualify as constraints on the exercise of human autonomy. Finally, relying on CAVs would restrict the possibility of taking timely decisions concerning routes, which could variously impact the execution of self-determined intentions—e.g., staying away from given roads to protect one’s privacy [4].

In light of the above, it seems reasonable to conclude that the rush towards full automation—be it in the name of road safety, efficiency, or sustainability—should not obscure the value of drivers’ control over driving tasks, at least when this would serve the legitimate expression of human autonomy. As suggested, for instance, by the Meaningful Human Control approach [52], if there are solid ethical reasons for driving decisions to be left to users, then CAVs should be designed, developed, and regulated to allow for their autonomy to be expressed [33].1 Suppose, for instance, that users could legitimately lay claim to ethical decisions regarding CAV behaviour during unavoidable collisions, as Millar suggests [36].2 As a consequence, they would have to be put in a position to decide how to shape CAV behaviour during these traffic situations. Admittedly, in a context of full automation user control over operational and tactical levels could only be accomplished indirectly—e.g., through the setting of user preferences. It is at least uncertain, however, whether this form of indirect control over system operations would satisfy the demands of the principle of human autonomy. More likely, (a) seems to encourage the development of automated features that leave enough space for the exercise of user autonomy—as happens in conditional automation, where control over driving tasks is shared with the system rather than fully delegated to it.

1 The authors do not endorse the claim that drivers should remain in control, but rather that some human actors should. Their framework however clearly explains how ethical choices about the protection of the moral agency and responsibility of different actors should be reflected in design and policy choices.
2 For a different opinion, see e.g. [20].
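As a purely hypothetical illustration of the indirect, preference-based form of control mentioned above, the sketch below (Python; every parameter name, value, and bound is invented for the example and does not correspond to any existing CAV interface or regulation) shows how user-set, ethically relevant driving preferences could be represented and clamped to externally fixed limits before being handed to a tactical planner.

from dataclasses import dataclass

@dataclass
class EthicalDrivingPreferences:
    """Hypothetical user-set preferences touching ethically relevant driving behaviour."""
    min_gap_to_vulnerable_road_users_m: float = 1.5  # lateral clearance when passing cyclists or pedestrians
    yield_style_at_crossings: str = "cautious"       # "cautious" or "assertive"
    avoid_privacy_sensitive_roads: bool = False      # e.g., keep away from certain areas for privacy reasons
    accepted_speed_reduction: float = 0.2            # fraction of the legal limit the user accepts to give up

# Bounds that user preferences may not override (placeholders, not real regulatory values).
LEGAL_MIN_GAP_M = 1.0
ALLOWED_YIELD_STYLES = {"cautious", "assertive"}

def clamp_to_admissible(prefs: EthicalDrivingPreferences) -> EthicalDrivingPreferences:
    """Return a copy of the preferences restricted to admissible ranges."""
    gap = max(prefs.min_gap_to_vulnerable_road_users_m, LEGAL_MIN_GAP_M)
    style = prefs.yield_style_at_crossings if prefs.yield_style_at_crossings in ALLOWED_YIELD_STYLES else "cautious"
    reduction = min(max(prefs.accepted_speed_reduction, 0.0), 0.5)
    return EthicalDrivingPreferences(gap, style, prefs.avoid_privacy_sensitive_roads, reduction)

if __name__ == "__main__":
    requested = EthicalDrivingPreferences(min_gap_to_vulnerable_road_users_m=0.4, yield_style_at_crossings="reckless")
    print(clamp_to_admissible(requested))  # out-of-range requests are clamped before reaching the planner

Whether this kind of indirect, preference-mediated control would actually satisfy the demands of the principle of human autonomy is precisely the open question raised at the end of this section.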


6 Driving Automation and the Good Life

The claim according to which human autonomy would be better served by conditional automation—or other design choices that would protect the drivers’ control over driving operations—may be challenged, however, if human autonomy is understood as in (b): the freedom to pursue a good life. Higher levels of driving automation can arguably have beneficial impacts on human autonomy as the freedom to pursue a good life. At least two opportunities stand out: inclusive transportation and the improvement of travellers’ overall well-being. Both importantly enable the fulfilment of personal needs and desires, thus increasing the chances of living a good life.

First, CAVs could massively enhance the autonomy of social categories that are currently excluded from manual driving because of physical and cognitive impairments. Independent access to transportation is critical for pursuing personal well-being and leading a satisfying social life [13, 46, 55]. Private and independent mobility, moreover, is commonly considered the best solution in cases where various impairments stemming from conditions such as disability and old age make it harder to rely on public transport [5, 10]. However, current technical and societal limitations exclude many individuals from getting a driver license or otherwise being able to drive and, thus, from manual driving. As a result, those who need assistance to enjoy the freedom of road transport experience unjustly limited and expensive access to it. In turn, this reduces opportunities for social interaction, political involvement, and employment, while hindering access to important services such as education and medical care. Hence, failures in providing inclusive transport options heavily affect the personal well-being of already vulnerable social groups (e.g., [3]). If designed, funded, introduced, and regulated with this goal in mind, driving automation might play a considerable role in changing this situation. Since driving tasks would be automated, physical and cognitive impairments would no longer constitute an insurmountable barrier to the autonomous use of road vehicles [31].

Second, driving automation could support the self-determined pursuit of a good life by creating the conditions for a better travel experience. First of all, CAVs would allow users to reclaim travel time. Freed from the burden of driving themselves, CAV users would be able to employ travel time as they prefer. In addition, robust driving assistance might substantially reduce negative externalities associated with the driving experience such as stress, fatigue, and road rage, thus enhancing psychological well-being and contributing to the enjoyment of a good life [55]. Finally, autonomous decision-making on matters that importantly impact on individual well-being would also be supported. For instance, decisions about where to live would be less constrained by work locations and other circumstantial factors [24].

In both cases above, human autonomy benefits entirely depend on higher levels of automation. As a matter of fact, individuals excluded from manual driving would be poor candidates for shared control as well [19]. Similarly, full delegation is necessary for CAV users to freely engage in other, more satisfying activities. As the accidents
involving Tesla and Uber automated vehicles demonstrate [40, 41], current shared control design does not allow for user distraction—even when it includes tools to prompt users to remain focused on supervision tasks [21]. In order to support autonomy as freedom to pursue one’s own conception of a good life, then, it seems as if human intervention and supervision should be increasingly automated away.

The conclusion just reached could, however, be challenged both by remaining within the scope of (b) and by comparing it to claims based on (a). As for the first case, it has been argued that full driving automation would hinder access to significant sources of well-being for those who attach value to manual driving. Indeed, manual driving is valued by many as a source of pleasure and a powerful medium for enjoying meaningful aspects of human autonomy such as freedom and self-determination [39, 58]. Moreover, manual driving might be valued by those who distrust technological innovation in general or the peculiar configuration of CAVs—which, e.g., might expose not just CAV users, but all road users to privacy risks they might be unwilling to face [18, 27].

On the other hand, it is evident that (b) and (a) lead to conflicting conclusions. Benefits related to inclusivity and user well-being are strongly dependent on high or full levels of automation. However, these forms of driving automation would leave little space for the exercise of users’ autonomy at tactical and operational levels of control. If considerations on (a) and (b) are taken together, support for both partial and full automation can be advocated. As a result, compliance with the ethical principle of human autonomy would steer in directions that are difficult to harmonise. Within the framework of SAE levels of automation, then, it is hard to see how protecting the exercise of user self-determination over driving decisions can go hand in hand with protecting the right to a self-determined good life pursued through mobility.3 This ambiguity, which stems from the complexity of the notion of autonomy and competing claims on driving automation, poses a challenge to designers and engineers. Perhaps moving beyond the SAE framework to explore new forms of cooperation between human beings and driving systems might help open other avenues to design for autonomy in this field [59].

For now, the previous analysis has shown that design for autonomy requires a clear understanding of the nuanced ways in which different dimensions of this value can be served by driving automation. Moreover, it requires a thorough study of the tensions that might arise when design solutions aimed at supporting these nuances are introduced. Fine-grained knowledge on the intersection between user autonomy and driving automation is necessary to strike acceptable trade-offs in this sense. In the spirit of [63], it may even be said that truly innovative solutions in driving automation should strive to loosen the existing tension between these different interpretations of the value of autonomy, by designing new socio-technical systems that make it possible to promote (more of) both of them. Moreover, such knowledge would provide a solid ground for political and policy decision-making, which arguably represents the most adequate context where choices concerning similar trade-offs are to be made. Here lies the relevance to our analysis of the third dimension of autonomy, (c) autonomy as the capacity and opportunity of people to influence social deliberation concerning transportation policies and planning. The next section is dedicated to it.

3 As discussed in [26], Chap. 2, it is also important to consider the extent to which data-based transportation systems may create new forms of discrimination or domination, as is already the case with many other digital, data-based services.

7 Driving Automation and Independent Policy-Making

Consider again the so-called moral dilemmas with self-driving cars—exceptional hypothetical situations in which a fully automated vehicle faces an unavoidable crash where the only options open are (seriously) harming one group of agents or another. In Sect. 5 we presented the claim put forward by some ethicists and policy-makers according to which such choices should be left—directly or indirectly—to the vehicles’ drivers, to protect their moral autonomy. However, one could also take a different ethical approach. These choices, it may be argued, will crucially affect the distribution of risks and harms in the public space, and should therefore be regulated according to some principles or norms of social justice, rather than just being left to the free interaction of the individual choices of drivers [26, Chap. 1]. It would arguably not be fair for certain categories of people—say, elderly persons, cyclists, people from minority ethnic groups, etc.—to be systematically penalised in the allocation of crash harms. This may happen as a result of the aggregation of drivers’ preferences4 (see Chap. 8), and/or because other road users are unable to influence the decision-making process and to have their interests and rights protected [32].

At a more procedural level, this seems to suggest that the ultimate authority to decide on such cases should be given not so much to individual users as to some collective agency, typically parliamentary or governmental, entrusted with the legitimate power to represent the interests of all the people [50]. This shows the importance of the third dimension of autonomy (c): the possibility for, ideally, all people to (indirectly) influence the decision-making process that will determine the design, development, and regulation of CAVs, and to make sure that their interests and rights are sufficiently reflected in the process. It is, as it were, autonomy as democratic freedom or power.

This form of autonomy is very important beyond the relatively marginal issue of decision-making in dilemmatic crash-avoidance scenarios. Consider, for instance, the issue of inclusivity discussed in the previous section. Driving automation has the potential to give more people the possibility to independently use motor vehicles, by providing them with specific forms of driving assistance, or even fully automated driving capabilities. This may dramatically enhance their capacity and opportunity to freely pursue a good and meaningful life, that is, autonomy in our second sense (b).

4 By the way, some of these preferences would even be openly discriminatory, and their implementation in the driving system would therefore simply be illegal in many jurisdictions (imagine, e.g., a vehicle programmed to systematically hit women or people of colour).


However, to realise this potential it is crucial that the development and introduction of automated driving happen with these stakeholders and their interests and values in mind. To the extent that the development of driving automation technology is guided, for instance, by big (luxury) car manufacturers and tech companies, with the goal of embedding these technological features in vehicles designed for wealthy, able-bodied, neurotypical drivers, the promise of inclusiveness and autonomy enhancement is not likely to be realised. If, on the contrary, the interests and values of a broader range of stakeholders are seriously considered, not only in words, and embedded in the development process of the technology from its early stages, then more well-being and autonomy for more people can be expected.

This leads to an even more general point. In his 1980 book The Social Control of Technology [8], the social scientist David Collingridge reflected on the ways in which technological development can help achieve broad societal goals, as opposed to just delivering new technical functionalities. Collingridge was quite sceptical that this could be achieved by leaving technological development only in the hands of scientists and engineers. He wrote:

Ask technologists to build gadgets which explode with enormous power or to get men to the moon, and success can be expected, given sufficient resources, enthusiasm and organization. But ask them to get food for the poor; to develop transport systems for the journeys which people want; to provide machines which will work efficiently without alienating the men who work them; to provide security from war, liberation from mental stress, or anything else where the technological hardware can fulfil its function only through interaction with people and their society, and success is far from guaranteed. [8: 15].

After Collingridge, a whole strand of academic and policy studies has emerged under the name of Responsible Innovation. According to this approach, to ensure that the technological process is societally beneficial, at least three forms of “responsiveness” should be pursued: (i) between innovators and stakeholders [45, 65]; (ii) between the innovation process, the changing information environment, and changing values (adaptivity; cf. [12, 45, 62]); and (iii) among stakeholders [65]. Different aspects are emphasised by different authors in this tradition. But the general point is that for technology to be responsive to a broader range of interests and needs of a broader range of stakeholders, stakeholders’ interests and values should somehow be firmly embedded at the different levels of the technological process.

This is needed for both epistemological and political reasons. On an epistemological level, it is to be noted that the knowledge required to develop a societally beneficial technology is distributed across society. “Experts”, be they engineers or policy-makers, cannot be expected to possess all the relevant knowledge. Their decisions should therefore be supported and guided by the knowledge of other stakeholders as well. From a political viewpoint, it is important to stress that bearers of the relevant knowledge and interests should have sufficient power to counteract the dominant approaches enforced by more powerful actors in the technology and policy game. We do not want to fall back into “the notorious example” of road building, where citizens were asked what the best route for the new road would be, but not whether the road was needed at all [8: 191]. The same goes for driving automation: we should ensure that we give different stakeholders the possibility and power to contribute to
the question not only of how much automation they want, but also of which automation, for whom, and to achieve which values and goals—and possibly, sometimes, of whether, considering all other available technological and policy options to promote their interests and needs, they need any driving automation at all.

8 Conclusions

Human autonomy and technological automation intertwine in complex and often multifaceted ways. If the ethical values of freedom and self-determination are to be preserved and supported at both individual and societal levels, it is necessary to examine the ways in which our interactions with technological systems impact the scope and possibilities of our agency. The present chapter has proposed an analysis of this problem as it applies to the domain of driving automation. It has argued that delegating driving tasks to CAVs might both infringe upon and support user autonomy, thus calling for a reconsideration of widespread frameworks concerning the roles of humans and driving systems. Moreover, it has highlighted the importance of designing CAVs by keeping in mind their impacts on human autonomy—along with other relevant ethical values. Finally, it has focused attention on decision-making processes concerning transportation policy and identified possible limitations of democratic freedom, such as in cases where the development and adoption of transport innovations are led by agents who respond only to a limited set of stakeholders’ values, needs, and interests. In light of what has been discussed, we conclude that further clarification of how human autonomy is constrained, served, and transformed by driving automation is essential to guide the development of the technology in ethically acceptable directions and, in so doing, contribute to improving the conditions of future transportation systems.

References

1. Beauchamp, T.: The nature of applied ethics. In: Frey, R.G., Wellman, C.H. (eds.) A Companion to Applied Ethics, pp. 1–16. Blackwell, Malden (2015) 2. Bednar, K., Spiekermann, S.: Eliciting values for technology design with moral philosophy: an empirical exploration of effects and shortcomings. Sci. Technol. Hum. Values (2022). https://doi.org/10.1177/01622439221122595 3. Bennett, R., Vijaygopal, R., Kottasz, R.: Willingness of people who are blind to accept autonomous vehicles: an empirical investigation. Transp. Res. F: Traffic Psychol. Behav. 69, 13–27 (2019). https://doi.org/10.1016/j.trf.2019.12.012 4. Boeglin, J.A.: The costs of self-driving cars: reconciling freedom and privacy with tort liability in autonomous vehicle regulation. Yale J. Law Technol. 17(4), 171–203 (2015) 5. Bradshaw-Martin, H., Easton, C.: Autonomous or ‘driverless’ cars and disability: a legal and ethical analysis. Euro. J. Curr. Legal Issues 20(3), 1–17 (2014). http://webjcli.org/index.php/webjcli/rt/printerFriendly/344/471


6. Chiodo, S.: Human autonomy, technological automation (and reverse). AI Soc. 37, 39–48 (2022). https://doi.org/10.1007/s00146-021-01149-5 7. Christman, J.: Autonomy in moral and political philosophy. In: Zalta, E. (ed.) The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). https://plato.stanford.edu/archives/fall2020/ entries/autonomy-moral/ 8. Collingridge, D.: The Social Control of Technology. St. Martin’s Press, New York (1980) 9. Contissa, G., Lagioia, F., Sartor, G.: The Ethical Knob: ethically-customisable automated vehicles and the law. Artif. Intell. Law 25, 365–378 (2017). https://doi.org/10.1007/s10506-0179211-z 10. Crayton, T.J., Meier, B.M.: Autonomous vehicles: developing a public health research agenda to frame the future of transportation policy. J. Transp. Health 6, 245–252 (2017). https://doi. org/10.1016/j.jth.2017.04.004 11. Darwall, S.: The value of autonomy and autonomy of the will. Ethics 116(2), 263–284 (2006) 12. de Saille, S.: Innovating innovation policy: the emergence of ‘Responsible Research and Innovation’. J. Respons. Innov. 2(2), 152–168 (2015). https://doi.org/10.1080/23299460.2015.104 5280 13. Epting, S.: Automated vehicles and transportation justice. Philos. Technol. 32, 389–403 (2019). https://doi.org/10.1007/s13347-018-0307-5 14. Fossa, F., et al.: Operationalizing the ethics of connected and automated vehicles: an engineering perspective. Int. J. Technoethics 13(1), 1–20 (2022). https://doi.org/10.4018/IJT.291553 15. Foxon, T.J.: Technological lock-in and the role of innovation. In: Atkinson, G., Dietz, S., Neumayer, E., Agarwala, M. (eds.) Handbook of Sustainable Development. Edward Elgar Publishing, Cheltenham (2014). https://doi.org/10.4337/9781782544708.00031 16. Friedman, B., Kahn, P.H.: Human values, ethics, and design. In: Sears, A., Jacko, J.A. (eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, II, pp. 1177–1201. L. Erlbaum Associates Inc., New York (2008) 17. Friedman, B., Kahn, P.H., Borning, A.: Value sensitive design and information systems. In: Himma, K.E., Tavani, H.T. (eds.) The Handbook of Information and Computer Ethics, pp. 69– 101. Wiley, Hoboken (2008) 18. Glancy, D.J.: Privacy in autonomous vehicles. Santa Clara Law Rev. 52(4), 3, 1171– 1239 (2012). https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=2728&context= lawreview&httpsredir=1&referer= 19. Goggin, G.: Disability, connected cars, and communication. Int. J. Commun. 13, 2748–2773 (2019). https://ijoc.org/index.php/ijoc/article/view/9021 20. Gogoll, J., Müller, J.F.: Autonomous cars: in favor of a mandatory ethics setting. Sci. Eng. Ethics 23, 681–700 (2017). https://doi.org/10.1007/s11948-016-9806-x 21. Hancock, P.A.: Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics 62(4), 479–495 (2019). https://doi.org/10.1080/00140139.2018.1498136 22. Hansson, S.O. (ed.): The Ethics of Technology. Methods and Approaches. Rowman and Littlefield, London (2017) 23. Harris, C.E.: Engineering ethics: from preventive ethics to aspirational ethics. In: Michelfelder, D., McCarthy, N., Goldberg, D. (eds.) Philosophy and Engineering: Reflections on Practice, Principles and Process. Philosophy of Engineering and Technology, vol. 15. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-7762-0_14 24. Heinrichs, D.: Autonomous driving and urban land use. In: Maurer, M., Gerdes, J., Lenz, B., Winner, H. (eds.) Autonomous Driving, pp. 213–231. Springer, Berlin-Heidelberg (2016). 
https://doi.org/10.1007/978-3-662-48847-8_11 25. High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI (2019). https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustwort hy-ai. Accessed 3 May 2022 26. Horizon Commission expert group to advise on specific ethical issues raised by driverless mobility (E03659). Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility (2020). https://op.europa.eu/en/pub lication-detail/-/publication/89624e2c-f98c-11ea-b44f-01aa75ed71a1/language-en. Accessed 3 May 2022


27. Jannusch, T., David-Spickermann, F., Shannon, D., Ressel, J., Völler, M., Murphy, F., Furxhi, I., Cunneen, M., Mullins, M.: Surveillance and privacy—Beyond the panopticon. An exploration of 720-degree observation in level 3 and 4 vehicle automation. Technol. Soc. 66, 101667 (2021). https://doi.org/10.1016/j.techsoc.2021.101667 28. Kane, R. (ed.): The Oxford Handbook of Free Will, II Oxford University Press, Oxford (2011) 29. Kroes, P., van de Poel, I.: Design for Values and the definition, specification, and operationalization of values. In: van den Hoven, J., Vermaas, P., van de Poel, I. (eds.) Handbook of Ethics, Values, and Technological Design. Springer, Dordrecht (2015). https://doi.org/10.1007/97894-007-6970-0_11 30. Laitinen, A., Sahlgren, O.: AI systems and respect for human autonomy. Front. Artif. Intell. 4, 705164, 1–14 (2021). https://doi.org/10.3389/frai.2021.705164 31. Lim, H.S.M., Taeihagh, A.: Autonomous vehicles for smart and sustainable cities: an in-depth exploration of privacy and cybersecurity implications. Energies 11(5), 1062 (2018). https://doi. org/10.3390/en11051062 32. Liu, H.Y.: Irresponsibilities, inequalities and injustice for autonomous vehicles. Ethics Inf. Technol. 19, 193–207 (2017). https://doi.org/10.1007/s10676-017-9436-2 33. Mecacci, G., Santoni de Sio, F.: Meaningful human control as reason-responsiveness: the case of dual-mode vehicles. Ethics Inf. Technol. 22, 103–115 (2020). https://doi.org/10.1007/s10 676-019-09519-w 34. Michon, J.A: Dealing with danger. Technical Report nr. VK 79-01. Traffic Research Centre, University of Groningen (1979). https://jamichon.nl/jam_writings/1979_dealing_with_danger. pdf 35. Michon, J.A.: Human Behavior and Traffic Safety. Springer, Boston (1985). https://doi.org/10. 1007/978-1-4613-2173-6. 36. Millar, J.: An ethics evaluation tool for automating ethical decision-making in robots and self-driving cars. Appl. Artif. Intell. 30(8), 787–809 (2016). https://doi.org/10.1080/08839514. 2016.1229919 37. Millar, J.: Ethics settings for autonomous vehicles. In: Lin, P., Abney, K., Jenkins, R. (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, pp. 20–34. Oxford University Press, Oxford (2017). https://doi.org/10.1093/oso/9780190652951.003.0002 38. Mitcham, C.: Justifying public participation in technical decision making. IEEE Technol. Soc. Mag. 16(1), 40–46 (1997). https://doi.org/10.1109/44.584650 39. Müller, J.F., Gogoll, J.: Should manual driving be (eventually) outlawed? Sci. Eng. Ethics 26, 1549–1567 (2020). https://doi.org/10.1007/s11948-020-00190-9 40. National Highway Traffic Safety Administration: Special Crash Investigations: On-Site Automated Driver Assistance System Crash Investigation of the 2015 Tesla Model S 70D. DOT HS812 481. National Highway Traffic Safety Administration, Washington, DC (2018) 41. National Transportation Safety Board: Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018. Accident Report NTSB/HAR-19/03PB2019-101402 (2019). https://www.ntsb.gov/investigations/ accidentreports/reports/har1903.pdf 42. Nyholm, S.: The ethics of crashes with self-driving cars: a roadmap II. Philos. Compass 13(7), e12506 (2018). https://doi.org/10.1111/phc3.12506 43. Nyholm, S.: This is Technology Ethics: an Introduction. Wiley Blackwell, New York (2022) 44. Nyholm, S., Smids, J.: Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Ethics Inf. Technol. 
22, 335–344 (2020). https://doi.org/ 10.1007/s10676-018-9445-9 45. Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E., Guston, D.: A framework for responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (eds.) Responsible Innovation: managing the Responsible Emergence of Science and Innovation in Society, pp. 27–50. Wiley, Chichester (2013). https://doi.org/10.1002/9781118551424.ch2 46. Parviainen, J.: Kinetic values, mobility (in)equalities, and ageing in smart urban environments. Ethical Theory Moral. Pract. 24, 1139–1153 (2021). https://doi.org/10.1007/s10677-021-102 49-6


47. Pinch, T.J., Bijker, W.E.: The social construction of facts and artefacts: or how the sociology of science and the sociology of technology might benefit each other. Soc. Stud. Sci. 14(3), 399–441 (1984) 48. SAE International: J3016. (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. Superseding J3016 JUN2018 (2021) 49. Sagberg, F., Selpi, G., Bianchi Piccinini, F., Engström, J.: A review of research on driving styles and road safety. Hum. Factors 57(7), 1248–1275 (2015). https://doi.org/10.1177/001872 0815591313 50. Santoni de Sio, F.: Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory Moral Pract. 20, 411–429 (2017). https://doi.org/10.1007/s10677-017-9780-7 51. Santoni de Sio, F.: The European commission report on ethics of connected and automated vehicles and the future of ethics of transportation. Ethics Inf. Technol. 23, 713–726 (2021). https://doi.org/10.1007/s10676-021-09609-8 52. Santoni De Sio, F., van den Hoven, J.: Meaningful human control over autonomous systems: a philosophical account. Front. Robot. AI 5(15), 1–14 (2018). https://doi.org/10.3389/frobt. 2018.00015 53. Schoonmaker, J.: Proactive privacy for a driverless age. Inf. Commun. Technol. Law 25, 1–33 (2016). https://doi.org/10.1080/13600834.2016.1184456 54. Sharkey, A.: Autonomous weapons systems, killer robots and human dignity. Ethics Inf. Technol. 21, 75–87 (2019). https://doi.org/10.1007/s10676-018-9494-0 55. Singleton, P.A., De Vos, J., Heinen, E., Pud¯ane, B.: Chapter Seven—Potential health and well-being implications of autonomous vehicles. In: Milakis, D., Thomopoulos, N., van Wee, B. (eds.) Advances in Transport Policy and Planning, vol. 5, pp. 163–190. Academic Press, Cambridge (2020). https://doi.org/10.1016/bs.atpp.2020.02.002 56. Smids, J.: The moral case for intelligent speed adaptation. J. Appl. Philos. 35, 205–221 (2018). https://doi.org/10.1111/japp.12168 57. Smits, M., Ludden, G.D.S., Peters, R., Bredie, S.J.H., van Goor, H., Verbeek, P.P.: Values that matter: a new method to design and assess moral mediation of technology. Des. Issues 38, 39–54 (2022) 58. Sparrow, R., Howard, M.: When human beings are like drunk robots: driverless vehicles, ethics, and the future of transport. Transp. Res. Part C: Emerg. Technol. 80, 206–215 (2017). https:// doi.org/10.1016/j.trc.2017.04.014 59. Stayton, E., Stilgoe, J.: It’s time to rethink levels of automation for self-driving vehicles. IEEE Technol. Soc. Mag. 39(3), 13–19 (2020). https://doi.org/10.1109/MTS.2020.3012315 60. Taubman Ben Ari, O., Yehiel, D.: Driving styles and their associations with personality and motivation. Accid. Anal. Prevent. 45, 416–422 (2012). https://doi.org/10.1016/j.aap.2011. 08.007 61. van de Poel, I.: Translating values into design requirements. In: Michelfelder, D., McCarthy, N., Goldberg, D. (eds.) Philosophy and Engineering: reflections on Practice, Principles and Process. Philosophy of Engineering and Technology, vol. 15. Springer, Dordrecht (2013). https://doi. org/10.1007/978-94-007-7762-0_20 62. van den Hoven, J.: Value sensitive design and responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (eds.) Responsible Innovation: managing the Responsible Emergence of Science and Innovation in Society, pp. 75–83. Wiley, Chichester (2013). https://doi.org/10.1002/978 1118551424.ch4 63. van den Hoven, J., Vermaas, P., van de Poel, I. (eds.).: Handbook of Ethics, Values, and Technological Design. Springer, Dordrecht (2015). 
https://doi.org/10.1007/978-94-007-6970-0 64. Varshney, L.R.: Respect for Human Autonomy in Recommender Systems (2020). https://arxiv.org/abs/2009.02603v1


65. von Schomberg, R.: Towards responsible research and innovation in the information and communication technologies and security technologies fields (2011). https://ssrn.com/abstract= 2436399 66. Winner, L.: Do artefacts have politics? Daedalus 109(1), 121–136 (1980) 67. Xu, W.: From automation to autonomy and autonomous vehicles. Interactions, 49–53 (2021). http://dl.acm.org/ft_gateway.cfm?id=3434580&type=pdf&dwn=1

Contextual Challenges to Explainable Driving Automation: The Case of Machine Perception

Matteo Matteucci, Simone Mentasti, Viola Schiaffonati, and Fabio Fossa

Abstract As happens with many Artificial Intelligence systems, explainability has been acknowledged as a relevant ethical value for driving automation as well. This chapter discusses the challenges raised by applying explainability to driving automation. Indeed, designing explainable automated vehicles is not a straightforward task. On the one hand, technical constraints must be considered to integrate explainability without impairing the overall performance of the vehicle. On the other hand, explainability requirements vary depending on the human stakeholders and the technological functions involved, thus further complicating its integration. The goal of the chapter is thus to investigate what explainability means with reference to driving automation. To this aim, we focus on machine perception and explore the related explainability requirements and challenges. We argue that explainability is a multifaceted concept that needs to be articulated differently in different contexts. Paying due attention to the contextual aspects of explainability, in particular the content of the explanations and the stakeholders they are addressed to, is critical to serving the ethical values it is supposed to support.

Keywords Driving automation · Ethics · Machine perception · Explainability · Context

M. Matteucci (B) · S. Mentasti · V. Schiaffonati
Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio, 34/5, 20133 Milan, Italy

F. Fossa
Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milan, Italy

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_3


1 Introduction

Explainability is widely acknowledged as a fundamental ethical value in the domain of Artificial Intelligence (AI). As such, it needs to be duly acknowledged in the field of driving automation as well. However, designing explainable Connected and Automated Vehicles (CAVs) is not a straightforward task. The goal of the present chapter is to explore the application of explainability to driving automation—with a particular focus on machine perception—and to offer a preliminary discussion of some of the related challenges. We believe that our work might provide a solid methodological basis to carry out reflective inquiries into explainability issues in driving automation and might promote effective implementations of such a critical ethical value.

The chapter is structured as follows. Section 2 provides a discussion of the ethical significance of explainability aimed at clarifying its main conceptual components and the moral reasons justifying its pursuit. The value of explainability is presented as instrumental in supporting human autonomous and responsible behaviour. Particular attention is dedicated to its contextual nature, which entails that explainability requirements must be carefully specified on a case-by-case basis. This is different from how most of the discussions are currently framed. Section 3 further explores the contextual nature of explainability along two dimensions: content and stakeholders. This ultimately leads us to define it as the obligation to provide relevant stakeholders with the system information they need to behave autonomously and responsibly. In order to better substantiate our claims, we then focus on machine perception to show how greatly explainability requirements vary across different contextual settings. Accordingly, Sect. 4 presents machine perception as a functional domain of driving automation to which explainability considerations apply. Section 5 provides an illustrative analysis of how explainability challenges can be identified and assessed at two different levels of driving automation—L3 and L5. Finally, Sect. 6 offers some concluding remarks.

2 Explainability and Driving Automation: An Ethical Perspective

As the magnitude of the social effects brought about by AI technologies has been increasingly realised, explainability has emerged as an ethical value to be pursued and enforced. Commitment to explainability is needed as a response to the opacity of algorithmic decision-making—i.e., roughly, the inability to reasonably account for given system outputs or behaviour. The recent spread throughout societies of AI applications based on opaque Machine Learning (ML) techniques has ultimately contributed to the full establishment of explainability as one of the most crucial ethical values for the information age.


Accordingly, explainability enjoys wide institutional, legal, and technical acknowledgment. Explainability has been recognised as one of five principles for AI in society common to several sets of ethical principles for AI [10]. The European approach to the ethics of algorithms in particular demands automated decision-making to be “to the extent possible—explainable to those directly or indirectly affected” [15: 13]. Furthermore, the “right to an explanation” plays a central role in current legal frameworks such as the European GDPR, even if some scholars have doubted that the right to be informed at the core of the GDPR fully entails the right to explanation [43]. Finally, eXplainable Artificial Intelligence (XAI) has grown into what is now a flourishing new field of research, where different approaches and solutions are advanced aimed at reducing algorithmic opacity. Different communities have dealt with this problem in different ways. As a result, different methodologies have been developed [1, 3, 6, 13, 19–21].

Such an across-the-board endorsement arguably stems from the tangible ethical importance of explainability. Indeed, the capacity to make sense of the behaviour of a system is crucial to supporting two of our most cherished and shared ethical values, i.e., autonomy (see Chap. 2) and responsibility (see Chap. 1). Factually grounded understanding of what is going on, why, and what is about to happen represents a necessary condition for stakeholders to exercise their autonomy and fulfil their responsibility [4, 27]. On the one hand, autonomous decision-making and agency heavily depend on being provided in a timely manner with relevant system information conveyed in understandable ways. On the other hand, responsible behaviour can be fully exercised only if relevant information about the system and its functioning is accessible to involved parties.

Consider, for instance, the case of an airplane pilot overseeing automated flying operations. Being the human in charge, the pilot is responsible for the safety and well-being of the passengers. Fulfilling her responsibilities depends on being in a position to autonomously decide whether intervention is needed. Such decisions, however, can be taken only on the basis of information concerning system status and operations. Communicating relevant system information to the pilot in ways that she can understand and act upon in time is thus a necessary condition for her to exercise autonomous and responsible behaviour. If relevant system information were communicated ambiguously, or failed to be communicated at all, her capacity for autonomous and responsible behaviour would be severely hindered.

As the previous considerations suggest, the ethical relevance of explainability is of an instrumental kind. The value of explainability consists ultimately in its potential to support human autonomous and responsible behaviour. In other words, explainability is ethically important not in itself, but as a means to protect and foster further ethical goods—i.e., autonomy and responsibility—that are valued for their own sake (see, e.g., [42]). As such, its justification is not absolute, but rather depends on the extent to which it serves the intrinsic ethical values it is conducive to. Analogous considerations have been found to apply to the field of driving automation as well [28]. Moreover, the ethical importance of explainability is being increasingly acknowledged in this context not just to foster autonomy and responsibility, but also to promote widespread acceptance—a necessary condition to reap purported
ethical benefits in terms of, e.g., safety and sustainability [38]. Accordingly, the 2020 European report on the Ethics of Connected and Automated Vehicles [16] includes two recommendations on explainability. Recommendation 14 encourages reducing opacity in algorithmic decision-making through user-centred interfaces and methods, stressing the importance of resorting to XAI techniques, accessible and transparent vocabulary, and intelligible system explanations and justifications. Recommendation 15 shifts attention to education and public participation, highlighting the need to provide the public with the knowledge, opportunities, and skills necessary to adequately understand their interactions with CAVs, be aware of the risks involved, and be able to fully exercise their rights.

This broad call for explainability in driving automation has yet to be properly addressed. Most of the current literature on explainability in driving automation tackles a much narrower issue—i.e., the increased adoption of neural networks to solve driving automation tasks and the related need to cope with their intrinsic opacity. As already mentioned, the recourse to ML techniques raises worries about algorithmic opacity and invites research into XAI [31, 37, 44]. Providing the technical means to “open the black box” and generate explanations of machine behaviour is an essential component of what explainability requires. However, other aspects stand in need of further clarification as well. Even though we, too, understand explainability mainly as a countermeasure to algorithmic opacity, in this chapter we argue that it requires not only technical fixes, but also ethical considerations grounded on a context-based approach [11].1 Indeed, technical explanations acquire meaning only in relation to the humans who require, interpret, and act on them. Without shedding light on what is to be explained, why, to whom, and how [32], XAI techniques can only do so much. Since explanations must fit contextual needs, it would be a mistake to expect that XAI techniques will be enough to meet all explainability challenges. Rather, technical approaches must be combined with fine-grained analysis of contextual elements such as the content of explanations and the stakeholders they are addressed to.

3 The Contextual Nature of Explainability

By stating that explainability exhibits a contextual nature we wish to stress that its challenges cannot be fully faced without paying due attention to the situational aspects in which explanations are demanded and offered. As anticipated, the contextual nature of explainability can be delineated along at least two different but interrelated dimensions: the content of explanations and the stakeholders they are addressed to. Indeed, fine-grained questions concerning what needs to be (made) explainable necessarily lead to further questions concerning whom the explanations are addressed to, for what reason, in what circumstances, and so on. In the following subsections we elaborate on these two contextual dimensions of explainability with reference to CAVs. To show their importance concretely, we then discuss in greater detail the case of machine perception in driving automation (Sect. 4).

1 Similar claims have recently been gaining traction in other AI fields as well—for instance, medical AI [29, 40].

3.1 Content

Let us start with content—i.e., with the question concerning which system operations should be made explainable. The insistence on XAI techniques might suggest that explainability is good in itself and should be pursued for all system operations independently of any contextual consideration. However, the ethical analysis elaborated in Sect. 2 has suggested otherwise: explainability is valuable only insofar as it supports autonomous and responsible behaviour. Arguably, different functions executed at different levels of automation are likely to impact autonomy and responsibility in heterogeneous ways. As a consequence, decisions concerning what to include in, and exclude from, explanatory efforts should be sensitive to contextual factors related to levels of automation and system operations.

Let us consider levels of automation first (see Chap. 9). Which system operations must be made explainable to foster autonomy and responsibility could hardly be determined independently of the broader technological setting in which functions are embedded. Indeed, what is needed to support autonomy and responsibility varies at varying levels of automation. In other words, different driving functions are bound to raise different explainability challenges at different levels of automation. For instance, the motivations for and ways towards explainability at a conditional level of automation (L3)—where user supervision and intervention are expected—are notably different from those relevant to high (L4) or full (L5) levels of automation—where user supervision and intervention are no longer necessary. Specifying the diverse ways in which explainability is supposed to support autonomy and responsibility at different levels of automation is crucial to implementing it effectively. Hence, levels of automation qualify as contextual factors that cannot be abstracted from when researching strategies to implement explainability in CAVs.

At each level of automation, functions as diverse as path planning, obstacle detection, speed control, lane changing, emergency braking, and fault detection are likely to affect the human exercise of autonomy and responsibility in different ways. Hence, they are likely to pose heterogeneous explainability challenges. It would then be a mistake to assume that every function executed by a CAV should be equally explainable to all possible stakeholders. One-size-fits-all approaches are ill-suited to determine what the content of explanatory effort should be. Some operations will need to be made explainable to support stakeholders’ autonomous and responsible behaviour. Other operations would probably raise no ethical concerns if some stakeholders are unable to access or understand them. Some task-related information that could engender misunderstandings and risky reactions, thus impairing user autonomy and
responsibility, might even be better left opaque [32]. For instance, hiding misclassifications of road objects that are inconsequential in terms of CAV safety might avoid unnecessary and possibly dangerous user intervention, thus upholding users’ autonomy and responsibility. This does not mean, of course, that some system operations must never be explainable. Rather, it means that, given certain contextual factors, some operations might be better left opaque. Indiscriminate access to all system functions for all stakeholders could in fact hinder the exercise of autonomous and responsible behaviour, ending up being ethically counterproductive. As the European report also stresses, explainability is to be provided only for “relevant CAV applications of algorithm and/or machine learning based operational requirements and decision making” [16: 48]. Explanations, then, need not be provided for all system operations. Rather, they must be provided only for those system operations that are “relevant” with reference to autonomy and responsibility—i.e., the ethical values that explainability is supposed to serve.

The relevance of levels of automation and system operations in determining what deserves to be explained makes it clear that considerations concerning content are deeply related to considerations concerning stakeholders. Presenting various forms of human–machine collaboration, levels of automation already incorporate a reference to the human context of driving automation. Depending on the level of automation, autonomy and responsibility are distributed differently among relevant stakeholders. Since the purpose of explainability is to support human autonomous and responsible behaviour, determining what needs to be explained could hardly be done without considering to whom explanations are due. Let us now explore human-related factors of explainability by turning to stakeholders.
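To give a concrete, if highly simplified, flavour of the point about content selection made above, the following sketch (Python; the event types, fields, and relevance criterion are our own illustrative assumptions, not drawn from the report or from any deployed system) filters candidate explanation events so that only those bearing on the addressee's autonomy or responsibility are surfaced, while inconsequential perception glitches are left unreported.

from dataclasses import dataclass
from typing import List

@dataclass
class SystemEvent:
    """A loggable system occurrence that could, in principle, be explained to someone."""
    description: str
    safety_relevant: bool       # does it change the vehicle's safety envelope?
    requires_user_action: bool  # does the addressee need to decide or intervene?

def select_for_explanation(events: List[SystemEvent]) -> List[SystemEvent]:
    """Keep only events whose explanation plausibly supports autonomous or responsible behaviour."""
    return [e for e in events if e.safety_relevant or e.requires_user_action]

if __name__ == "__main__":
    candidates = [
        SystemEvent("misclassified a roadside bin as a bush (no trajectory change)", False, False),
        SystemEvent("planned handover of control in 30 seconds", False, True),
        SystemEvent("emergency braking triggered by an occluded pedestrian", True, False),
    ]
    for event in select_for_explanation(candidates):
        print("explain:", event.description)

On this toy criterion, the inconsequential misclassification is filtered out, echoing the suggestion that such information might be better left opaque.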

3.2 CAV Stakeholders Queries concerning the contents of explainability in driving automation cannot be entirely solved without referencing to further contextual factors. To be properly answered, they must be paired with considerations concerning the target of explanatory efforts—i.e., stakeholders. In what follows, contextual factors related to stakeholders are categorised into two classes: reasons and constraints. First, different stakeholders require explainability for different reasons [35]. Specifying the legitimate reasons behind each stakeholder’s explainability demands is a necessary step towards appropriately satisfying them. Secondly, stakeholders access explanations under given constraints. Constraints mostly stem from two sources: the practical situations in which explanations are requested, provided, received, and acted upon; and the individual peculiarities determining stakeholders’ capability of accessing, understanding, and acting upon explanations. Depending on these contextual elements, algorithmic opacity is differently perceived as obstructing autonomous and responsible agency. Therefore, accounting for stakeholder-related factors is essential to figure out whether a system


Therefore, accounting for stakeholder-related factors is essential to figuring out whether a system operation executed at a given level of automation is relevant vis-à-vis explainability and what can be done to effectively comply with explainability demands. As already noted, the relevance of a function in terms of explainability is arguably to be determined with reference to the impact it exerts on stakeholders' autonomous and responsible behaviour. These impacts are bound to be multifarious and to vary significantly across classes of stakeholders. In other words, given the particular role they play, stakeholders can legitimately demand explainability for different reasons. For example, CAV users would require explanations to autonomously and responsibly accept system behaviour, personalise it, exercise privacy rights, and meaningfully oppose system decisions [45]. Other road users would need explanations to make sense of CAV behaviour and adjust their own actions accordingly. Lawmakers would demand explainability to hold parties accountable in a fair way [17], while developers would require it to correct, validate, and improve systems [8]. In sum, the relevance of given system operations in terms of explainability varies greatly depending on the different reasons why stakeholders demand it—reasons that rest on their specific pursuit of autonomy and responsibility. Without information about stakeholders' reasons, the ethical demands of explainability could hardly be met.

Moreover, some constraints in the process of sharing explanations must also be considered. First, reasons are always affirmed in the context of specific practical situations, the characteristics of which massively influence the timing, methods, and kinds of information to communicate. Some situations might require fast and easy-to-interpret explanations. Other situations might allow for lengthy, nuanced explanations. For instance, in courts of law and engineering labs time is usually available, although in different measures, so that explanations can be detailed and built on vast amounts of information. CAV users, on the contrary, do not have much time to make a decision, so explanations must be quickly understandable. Moreover, their attention span is arguably going to be limited, so due care should be taken not to overwhelm users, which would impair their ability to exercise responsible and considered judgment [20, 41]. Since different practical situations put different constraints on several important aspects of explainability, it is essential not to overlook their diversity.

Stakeholder diversity does not concern only reasons and situational contexts. It also concerns individual peculiarities. Different stakeholders require information to be made available in different ways. Individual characteristics importantly define the conditions under which explanations will actually be understood and mark the necessity to modulate explanations accordingly. What makes an explanation correctly understood and acted upon cannot be determined abstractly. It significantly depends on the individual peculiarities of those to whom the explanation is addressed [6]. The set of features relevant in this sense is extremely varied and includes, among others, cognitive abilities, cultural backgrounds, levels of expertise, and familiarity with technological artefacts. For instance, CAV users suffering from visual or auditory impairments require explanations to be attuned to their capabilities (see Chap. 4).
Since getting explanations right is a necessary condition to the exercise of autonomy and responsibility, contextual inputs concerning stakeholders’ individual peculiarities must also be factored in when implementing explainability.


Taken together, information concerning practical situations and individual peculiarities poses crucial constraints on explanatory efforts. The most appropriate modalities of explanation are to be determined on their basis. Decisions concerning timing, language, degree of granularity, level of technicality, and so on can only be adequately made by identifying and carefully satisfying these constraints. In sum, explainability cannot be reduced to a mere technical problem. It exhibits an irreducible human and contextual component. Satisfying the ethical demands of explainability importantly entails providing stakeholders with explanations they can meaningfully relate to and act upon [15: 18]. It is critical, then, not to lose sight of its contextual, human-centred nature. Explanations must be carefully tailored to suit each stakeholder's reasons. Moreover, practical situations and individual peculiarities place constraints on explanatory efforts that must be carefully identified for stakeholders' autonomy and responsibility to be adequately supported.

3.3 Explainability in Driving Automation: A Working Definition

What has been said so far can be condensed into a working definition of the ethical value of explainability in driving automation. In this chapter we understand explainability as the obligation to provide relevant stakeholders with the system information they need to behave autonomously and responsibly. The obligation implies that information must be provided in adequate quantity and quality. On the side of content, only functions that are relevant for stakeholders to exercise autonomous and responsible behaviour must be explained. Furthermore, information must be presented to stakeholders according to their reasons and constraints. We believe that the debate on explainability in driving automation has yet to provide a systematic discussion of these critical conditions for its endorsement. In what follows, we wish to submit a first contribution to this line of research. To do that, we focus on a specific driving automation function—machine perception—at two levels of automation: conditional (L3) and full (L5) [34]. Section 4 offers an introduction to machine perception in driving automation and highlights its significance vis-à-vis explainability. Building on our context-based notion of explainability, the remaining part of the chapter offers a preliminary discussion of machine perception explainability requirements with reference to three classes of stakeholders: CAV users, developers, and legal professionals.

4 Machine Perception in Driving Automation

To provide a concrete example of the concepts introduced so far, this section highlights the role of machine perception in driving automation.


Machine perception refers to the set of techniques that allow a machine to interpret data so as to infer the context in which it is required to operate. For autonomous systems, this is one of the key elements of 'autonomy' as defined in robotics and AI. Indeed, an accurate representation of the surroundings is required for a CAV to be able to plan future driving behaviour in an unknown and dynamic environment. In a nutshell, this representation is obtained by means of algorithms that are in charge of processing the stream of information provided by vehicle sensors—such as, e.g., LiDAR sensors, cameras, and radars. Machine perception algorithms are the first element of a chain which allows, for instance, the detection of an unexpected item on the roadway, the decision to avoid it by a take-over manoeuvre, and the execution of the manoeuvre itself, according to the classical Sense-Plan-Act paradigm in autonomous agents [33]. A failure in machine perception could result in a collision with the unexpected item, since the CAV would have no means of taking any decision under these conditions.

Taken together, the sensors with which a CAV is endowed constitute its sensor setup. Although sensor setups can vary between manufacturers, they all share some basic features. First, they aim at complete coverage in all directions, so as to ensure that all relevant environmental information is detected and collected. Moreover, and most importantly, they present redundancies—i.e., different sensors able to perceive the same object—so as to prevent missed detections due to single-sensor faults. Let us briefly introduce the most widely adopted sensor types in driving automation by looking at the vehicle in Fig. 1 as an example. In most cases, one or more high-definition LiDAR (Light Detection And Ranging) sensors are mounted on the vehicle roof to guarantee 360° coverage. LiDAR sensors return a set of measurements in the form of 3D points, called a point cloud (Fig. 2). One or more lower-resolution lasers can also be mounted on the lower area of the vehicle for close-range detections. Furthermore, a variable number of cameras are mounted facing each direction, with particular attention to the forward-facing ones. As they cover a critical area for CAV operations, forward-facing cameras can be redundant and of higher resolution than the other cameras. Finally, radars are mounted in the front and rear bumpers. Being less prone to failure than LiDAR and cameras under fog, rain, and snow, radars are tasked with providing obstacle speed information and detection in adverse conditions. Figure 3 offers a visual representation of a common sensor setup.

Gathering information on CAV surroundings, however, is not enough. Indeed, raw data coming from all sensors need to be interpreted by means of proper algorithms in order to provide CAV driving systems with a rich and consistent representation of the surroundings—including, to name a few, the presence and position of obstacles, drivable areas, pedestrians and their intent, signs and street markings, and so on. Data from multiple sensors can also be combined to complement each other in providing a more complete perspective via sensor fusion techniques. These fusion and interpretation processes can happen in multiple ways. Traditionally, however, processing techniques are grouped into two categories: machine (deep) learning-based methods and geometrical/computer vision-based methods. These two categories are also known as data-driven and model-based approaches, respectively.
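To make the place of perception in this chain more concrete, the following minimal sketch—written in Python, with purely illustrative function names, types, and values of our own—shows one iteration of a Sense-Plan-Act loop: the perception stage turns raw sensor streams into a world model, on which planning and actuation then operate.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Obstacle:
    position_m: Tuple[float, float]    # (x, y) in the vehicle frame, metres
    velocity_mps: Tuple[float, float]

@dataclass
class WorldModel:
    obstacles: List[Obstacle]

def sense(lidar_points, camera_frame, radar_tracks) -> WorldModel:
    """Perception stage (stub): fuse raw sensor streams into a world model.
    A real system would run detection, classification, and sensor fusion here."""
    return WorldModel(obstacles=[Obstacle((12.0, 0.5), (0.0, 0.0))])

def plan(world: WorldModel) -> str:
    """Planning stage: choose a manoeuvre based on the perceived world."""
    if any(o.position_m[0] < 15.0 for o in world.obstacles):
        return "brake"        # something lies close ahead on the planned path
    return "keep_lane"

def act(command: str) -> None:
    """Actuation stage (stub): forward the command to the low-level controllers."""
    print(f"executing: {command}")

# One iteration of the Sense-Plan-Act loop
act(plan(sense(lidar_points=None, camera_frame=None, radar_tracks=None)))
```

A failure in the sense() stage would propagate directly to planning and actuation, which is why perception sits at the root of any explanation of CAV behaviour.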


Fig. 1 The TEINVEIN Vehicle developed by Politecnico di Milano

Fig. 2 An example of a point cloud from a LiDAR sensor matched with images from a camera

Data-driven approaches rely on a neural network trained on data to process each sensor stream and perform one task among detection, classification, or segmentation.2 With these methods, the neural network must be adequately trained, using a pre-labelled dataset possibly covering all scenarios the CAV will encounter while driving—e.g., daylight, night, rain, fog, and so on.

Footnote 2: Detection refers to the capability of isolating a region of the image containing a specific object. Classification identifies the semantic content of an image or a part of it. Segmentation provides a semantic label for each pixel in the image.


Fig. 3 A visual representation of a common sensor setup

Conversely, model-based approaches rely on strong theoretical priors to derive rule-based algorithms with few hand-tuned parameters—e.g., deriving the distance of objects in the scene by triangulating a pair of images in a stereo vision system. In this case, no training is required and the developers are in control of all system parameters. Being based on strong theoretical priors, model-based approaches provide the basis for more intuitive justification of failings and easier root cause analysis, while machine learning-based methods present the characteristic opacity often associated with data-driven approaches. For instance, distance perception can also be obtained via neural networks from monocular images [12], but in this case we do not have parameters such as the distance between the cameras in the stereo pair or the sensor resolution, which describe analytically how the accuracy of the system degrades as a function of distance [14]. For this reason, model-based approaches are often reckoned to better support explainability. However, the theoretical assumptions and inevitable simplifications they are based upon do not always hold, negatively impacting efficiency. On the contrary, machine learning-based methods have recently shown better performance and robustness when dealing with the real world [30].
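As a minimal illustration of the analytic transparency a model-based approach can offer, the sketch below computes depth from disparity for a rectified stereo pair and propagates a disparity error to depth; the numerical parameters are purely illustrative and do not refer to any specific vehicle. The same closed-form relation Z = f·B/d that yields the estimate also shows how its uncertainty grows roughly quadratically with distance.

```python
def stereo_depth_m(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from its disparity in a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error_m(depth_m: float, focal_px: float, baseline_m: float,
                  disparity_error_px: float = 1.0) -> float:
    """First-order depth uncertainty: |dZ| ~= Z^2 * |dd| / (f * B)."""
    return depth_m ** 2 * disparity_error_px / (focal_px * baseline_m)

f_px, baseline = 1000.0, 0.5              # illustrative focal length (pixels) and baseline (metres)
for disparity in (50.0, 10.0, 5.0):       # smaller disparity = farther object
    z = stereo_depth_m(disparity, f_px, baseline)
    print(f"disparity {disparity:5.1f} px -> depth {z:6.1f} m "
          f"(± {depth_error_m(z, f_px, baseline):.1f} m for a 1 px disparity error)")
```

No comparably compact account of accuracy degradation is available for a learned monocular depth network, which is precisely the asymmetry in explainability discussed above.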


As an example of such perception architectures and their associated algorithms, let us briefly introduce the automated vehicle developed by Politecnico di Milano as part of the project TEINVEIN (TEcnologie INnovative per VEicoli INtelligenti: see [2] and Fig. 1). The vehicle is equipped with a 16-plane LiDAR sensor on the roof, radars on the front and back bumpers, and a stereo forward-facing camera. This can be considered a minimal setup to drive an automated vehicle, as data from multiple sensors can be combined to guarantee both 360° coverage and redundancy. From a processing perspective, this setup is a good example of a mixed architecture combining machine learning- and geometrical/computer vision-based methods. Data from the camera are processed using a state-of-the-art convolutional neural network (CNN) to extract the position of the obstacles and the drivable area [7]. LiDAR data are instead analysed with a geometrical approach due to the low resolution of the employed sensors, which did not allow the training of data-hungry neural networks [18]. Finally, radar data, already pre-processed inside the sensors, are returned as a list of objects with their velocities. All the information is then combined with sensor-fusion techniques to provide control algorithms with a fully characterised list of obstacles based on the different data provided by the sensors [9].

CAV driving decisions are ultimately taken by looking at the outcome of the vehicle perception pipeline. Thus, understanding what the system is perceiving, and why it is perceiving as such, is the first step in explaining the behaviour of a CAV. Suppose, for instance, that a CAV is cruising on a motorway. Suddenly, its trajectory is obstructed by an unexpected obstacle—e.g., a jaywalking pedestrian, an unmapped structural element, or some debris. Before the decision to swerve around it or stop in front of it can even be taken, the presence of such an obstacle must be detected and categorised—which are tasks for the CAV perception system. Once the obstacle has been identified, several pieces of information could be made available to interested stakeholders in order to meet their explainability claims. First, fine-grained information concerning the status of the object, ranging from class/category to position, velocity, size, and so on. Secondly, information concerning the algorithm that has detected the obstacle—i.e., the one processing radar data, LiDAR data, camera data, or their fusion. This information can be highly accurate, explaining why the algorithm has triggered the detection, or less accurate, simply consisting of the output of the algorithm. As such, it might be more or less easy to interpret, as the difference between deep learning and geometric algorithms discussed above shows. Finally, and most basically, raw data—i.e., the stream of data coming from the sensors, such as images or point clouds.

Since control logics massively rely on information coming from perception systems in order to function properly, access to perception data is critical to satisfying explainability demands. Supporting stakeholders' autonomous and responsible behaviour is impossible without determining how, when, and to whom to provide access to machine perception data and related information. In order to do so, however, it is necessary to inquire into the reasons and constraints that specify different stakeholders' explainability claims. The next section takes a closer look at this issue and tries to clarify how to deal with the explainability demands of three classes of stakeholders: users, developers, and legal professionals.
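The sketch below illustrates, in schematic form, the layered nature of the information just described—fused object state, per-sensor algorithm outputs, and (by reference) raw data. The data structures, class names, and values are of our own invention and do not reproduce the interfaces of any deployed perception stack.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensorDetection:
    sensor: str        # e.g. "lidar", "camera_front", "radar_front"
    label: str         # semantic class proposed by that sensor's algorithm
    confidence: float  # classification confidence in [0, 1]

@dataclass
class FusedObstacle:
    label: str                          # class after sensor fusion
    confidence: float
    position_m: Tuple[float, float]     # (x, y) in the vehicle frame
    velocity_mps: Tuple[float, float]
    sources: List[SensorDetection] = field(default_factory=list)

def explain(obstacle: FusedObstacle, detail: str = "basic") -> str:
    """Report the same detection at different levels of granularity."""
    if detail == "basic":   # output only: what was detected and where
        return f"{obstacle.label} at {obstacle.position_m} m"
    # detailed: which sensor/algorithm triggered the detection, and how confidently
    per_sensor = ", ".join(f"{d.sensor}: {d.label} ({d.confidence:.2f})" for d in obstacle.sources)
    return (f"{obstacle.label} ({obstacle.confidence:.2f}) at {obstacle.position_m} m; "
            f"sources: {per_sensor}")

pedestrian = FusedObstacle(
    label="pedestrian", confidence=0.87, position_m=(18.0, -1.2), velocity_mps=(0.4, 0.0),
    sources=[SensorDetection("lidar", "pedestrian", 0.91),
             SensorDetection("camera_front", "pedestrian", 0.83)])
print(explain(pedestrian, detail="detailed"))
```

Which of these layers should actually be exposed, to whom, and in what form is precisely the contextual question addressed in the next section.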


5 Contextual, Human-Centred Explainability and Machine Perception

As the previous section shows, machine perception plays a critical role in driving automation. By detecting the elements of the environment and building a representation of the surroundings, machine perception technologies collect and handle data that could variously support the autonomous and responsible decision-making processes of multifarious stakeholders. However, each stakeholder requires explanations for different reasons and experiences different constraints, such as counting on different cultural backgrounds and expertise, having more or less time available to make an informed decision, and so on. In what follows, we offer a preliminary analysis of the explainability challenges connected to machine perception with reference to three classes of stakeholders: users, developers, and legal professionals.3 Each stakeholder, we argue, pursues responsibility and autonomy through explainability for their own reasons. As a first step, then, the autonomy and responsibility reasons backing explainability must be specified for each stakeholder. In this regard, we propose the following general characterisation:

a. Users need to be in the condition of recognising safety-critical situations and acting accordingly.4 Moreover, they need to be in the condition of making autonomous mobility decisions.

b. Developers, being in charge of the design and production of CAVs, need to be in the condition of validating system operations, correcting system errors, and improving system performance in terms of safety (both ex ante and ex post).

c. Legal professionals need to be in the condition of distributing liability according to the law when any harm or damage is caused by CAVs.

As a second step, we propose to consider the level of automation of the involved CAVs. As already discussed, different levels of automation pose significantly different challenges to stakeholders. Hence, it makes sense to specify the relevant level of automation up front, before addressing the explainability requirements of each stakeholder.

Footnote 3: Users, developers, and legal professionals are not the only social actors with a stake in the explainability of machine perception operations, as we note in the conclusion of the chapter. We decided nonetheless to focus our analysis only on these stakeholders since they appear to us as the main involved parties. Moreover, considering their cases allows us to show how interestingly explainability requirements change among stakeholders and at different levels of automation.

Footnote 4: It is to be noted that users might also demand explanations for reasons that exceed safety. For example, users might want explanations of machine perception operations to feel more comfortable while using CAVs. However, comfort is not evidently relevant from an ethical point of view: its role as enabler of autonomous and responsible behaviour is rather limited. That being said, we do not wish to exclude that there might be further ethically relevant reasons supporting legitimate requests for explanations with reference to machine perception operations. Given space limitations, however, we decided to consider reasons based on safety—an ethical value of clear importance that has long been acknowledged in both engineering ethics and the transportation domain. Similar considerations apply to the case of developers as well.


In what follows, we consider two levels of automation that are exemplary in terms of the challenges they pose: conditional automation (L3), where interaction between users and systems is critical for safety; and full automation (L5), where this problem no longer exists and different issues must be tackled. Accordingly, the analysis deals with explainability requirements at L3 first and, subsequently, at L5.

5.1 CAV Users

Let us start by considering explainability with reference to machine perception for users at L3. In a context of shared control, users must always be attentive and take control of the vehicle should something go wrong. In this respect, explainability demands that all information necessary to support autonomous and responsible decision-making concerning take-over manoeuvres be provided to users in ways that are appropriate to their cognitive capabilities, cultural background, expertise, and time availability (see Chap. 4). Arguably, explanations concerning the system's representation of the surroundings are important for users to figure out whether they should take over. For instance, in the case of an unexpected obstacle the system should communicate the clearance status of the roadway as it is perceived. Machine perception, therefore, must become explainable to users. Moreover, users' cultural background and expertise can vary significantly, so that few assumptions can be made concerning the degree of complexity that users will be able to manage. As a result, it is important that information is conveyed in easily understandable ways. In addition, take-over decisions must be made on the spot, which means that time availability is limited. Explanations must then be quickly interpretable, so as to support fast but informed decision-making.

These preliminary considerations might suggest that informational feedback could be a valuable option to provide users with the information they need in the ways they need it. A monitor showing a representation of the surroundings as perceived by the system, the position of the CAV in it, and the planned course of action might constitute an easy-to-read and quickly interpretable interface to support autonomous and responsible decision-making concerning take-over manoeuvres. Informational feedback, however, requires making choices concerning what to include in and exclude from the representation of the surroundings. A trade-off must be struck between granularity and relevance. Similar choices have to be made due to the already considered constraints in terms of user cognitive capabilities and time availability. Nonetheless, they might have a controversial impact on user autonomy. Consider, for instance, whether to display information about the classification of obstacles—i.e., whether the perception system classifies the motorcycle rumbling ahead of your CAV as actually a motorcycle and not, say, as a bicycle or a car. Object perception exhibits a statistical nature. Each classification comes with varying degrees of confidence. Misclassifications can occur. However, some misclassifications could be inconsequential in terms of safety risks and, thus, should not prompt decisions to take over.


For instance, a driving system could be programmed to apply the same safety measures when driving in the vicinity of a car or of a motorcycle as a way to enhance the protection of more vulnerable road users. Misclassification of a motorcycle as a car, then, would have no significant effect on safety. However, if classification data were shown, misclassifications that are irrelevant in terms of safety might be misinterpreted as relevant by users. A misclassification of a motorcycle as a car would arguably be a source of concern for the regular user, who might lose confidence in the system's capacity to handle the situation and decide to take over even if no actual threat was present. As a result, it seems reasonable to claim that object classification data would be irrelevant (if not detrimental) to the task of supporting responsible decision-making on taking over control at L3. Hence, they should remain opaque. The relevant information to be provided to users would just be, roughly, whether and where an obstacle has been detected. For instance, objects could be represented just as boxes instead of bicycles, cars, trucks, and so on. In this case, an element of opacity would be introduced to serve explainability, i.e., to comply with its constraints and align with its purpose.

Supporting responsibility in this way, however, might be perceived as a limitation of autonomy. Decisions concerning what to show (and how) and what to hide (and why) are bound to raise controversy. Design choices based on users' expected ability to make sense of a situation in appropriate ways would be made beforehand, partially bypassing user autonomy. Different stances on how to weigh user autonomy demands vis-à-vis public well-being will lead to different outcomes. For instance, one could hypothesise that societies where individual autonomy is highly cherished would leave users the freedom and responsibility to personalise the quantity and quality of information displayed on the monitor—even if, on some occasions, this would lead to unnecessary and potentially dangerous take-over manoeuvres. On the contrary, social configurations where considerations supporting public well-being tend to trump individual autonomy demands would probably lean towards standardised design solutions, either not letting users change the settings or allowing only limited options.
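A minimal sketch of how such a display filter might work is given below. The class names, the safety-equivalence set, and the personalisation flag are purely illustrative assumptions of ours, not features of any existing interface.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PerceivedObject:
    label: str                       # class assigned by the perception system
    confidence: float
    position_m: Tuple[float, float]

# Classes to which the planner applies the same protective safety margins,
# so that distinguishing them adds nothing to a take-over decision.
SAFETY_EQUIVALENT = {"car", "motorcycle", "bicycle", "truck"}

def to_display_item(obj: PerceivedObject, show_classes: bool = False) -> dict:
    """Map a perceived object to what the L3 user interface actually shows.

    By default, safety-equivalent classes are rendered as a generic box: the
    user sees that and where an obstacle was detected, but not a classification
    whose possible errors are irrelevant to safety and might trigger unnecessary
    take-overs. The show_classes flag stands for a personalisation option that a
    design privileging user autonomy could expose."""
    shown = obj.label if (show_classes or obj.label not in SAFETY_EQUIVALENT) else "obstacle"
    return {"shown_as": shown, "position_m": obj.position_m}

print(to_display_item(PerceivedObject("motorcycle", 0.62, (25.0, 3.0))))
print(to_display_item(PerceivedObject("motorcycle", 0.62, (25.0, 3.0)), show_classes=True))
```

Whether the default, the flag, or neither is the right design choice is exactly the ethical trade-off between responsibility and autonomy discussed above.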


When investigating explainability requirements at L5, considerations change dramatically. The scope of user agency is much more limited in this case. Possibilities in terms of autonomous and responsible behaviour are equally limited. Since control over driving tasks is by definition delegated to the system, users are in the position of neither autonomously deciding anything about driving behaviour nor being responsible for it. The scope of their autonomy, which also determines the scope of their responsibility, pertains exclusively to commanding a vehicle stop if needed. Vehicle stops could be necessary for many reasons. Some reasons can be framed in terms of personal autonomy and self-determination. Users might want to stop because of car sickness, to stretch their legs, or to take pictures of a fascinating view. Allowing users to express their self-determination in these and analogous ways appears to be required. Vehicle stops could also be requested for ethically relevant reasons—for instance, to provide assistance to road users involved in an accident. To use a well-known technological trope, users must be put in the position to press the red button and stop the vehicle.

What needs to be underlined here is that stop requests like those just discussed do not stem from safety concerns. Safety worries do not belong to the set of reasons motivating stop requests at L5. In fact, at this level of automation safety aspects are entirely handled by the system. Since system operations fall outside the purview of user autonomy and responsibility, there is arguably no ethical need to relay information about machine perception to users anymore. Indeed, they could do little or nothing to act on this information. More importantly, it is not their responsibility to ensure safe driving. It follows that explainability requirements at L5 are substantially different from the L3 case. Leaving machine perception functions entirely opaque would not have any negative effects on the exercise of user autonomy and responsibility at L5. Perhaps some informational feedback ensuring that stop requests have been correctly registered by the system and are being executed would be useful to support user autonomy. High-level informational feedback assuring users that all subsystems are working well could also be useful to manage trust, comfort, and acceptability.5 However, this extends beyond the domain of machine perception and, to an extent, beyond the ethical domain of explainability as well.

To conclude, discussing explainability with reference to users' autonomous and responsible behaviour shows how massively different forms of automation can impact requirements and obligations. Since L3 and L5 present different combinations of human agency and automation, what needs to be made explainable in order to support users' autonomy and responsibility varies greatly. Without due attention to contextual factors, this difference would be rather hard to appreciate.

5.2 Developers

In the case of developers, the main ethical reason supporting explainability demands rests on the value of safety. Being in charge of the design and production of CAVs, developers are evidently responsible for the quality of their operations. Satisfying safety standards, verifying that safety requirements are met, and improving safety performance are all obligations lying at the heart of responsible engineering. In order to autonomously and responsibly fulfil these obligations, a plethora of information concerning system operations is necessary. In a word, system operations must be (made) explainable to developers so that due care can be fully exercised.

Footnote 5: It might be argued that the impossibility of intervening would not amount to a sufficient reason to leave machine perception functions opaque. Rather, respect for user dignity requires information to be made available even though no possibility of action is granted. However, we claim that from an ethical point of view the obligation to provide informational feedback to users depends on their scope of action. If users cannot actually act responsibly and autonomously on the information provided, we believe that this information could be left opaque without raising severe ethical problems.


Given the role it plays in driving automation, machine perception obviously belongs to the set of system operations that developers must be able to access, scrutinise, interpret, understand, and correct when necessary. Inquiries into the operations of machine perception systems are likely to be slow and take time. However, time availability does not (or, better, should not) represent a pressing constraint in this case. Even though CAV design and development do take place under pressure to deliver results within deadlines, responsibility requires using all the time necessary to carry out safety-related research and interventions diligently and exhaustively. Finally, these stakeholders are endowed with a high level of technical competence and expertise, which implies that information does not need to be translated into easy-to-understand explanations. Actually, making sense of system behaviour based on technical data is precisely what is expected from developers and experts.6

Now that reasons and constraints have been spelled out, let us focus on machine perception and clarify what kinds of operations are relevant. As in the previous case, L3 driving automation will be considered first. To begin with, developers are likely interested in validating, verifying, and improving the reliability of machine perception systems in terms of spotting and recognising objects. In this sense, access must be provided to a wide variety of data. Readable data concerning perceived objects and corresponding estimations of the uncertainty associated with their classification must be accessible. As already mentioned, machine perception algorithms exhibit a statistical nature, so that outputs are bound to incorporate some error percentage. Improving algorithms both in terms of successful identification of objects and precise estimation of classificatory uncertainty would have beneficial effects on safety performance. Therefore, meaningful access to data that allow such improvements arguably falls within the scope of explainability.

A further layer of complexity to underline here is that machine perception is better described not as a system, but as a system of systems. Representations of the environment result from fusing together different data coming from sources as diverse as radar sensors, LiDAR sensors, and cameras. Safety verifications and improvements are likely to follow from studying both data gathered by single sensors and representations elaborated by sensor fusion algorithms. In other words, explainability requirements extend to both pre- and post-sensor-fusion stages. Access to information concerning both stages must be allowed. Indeed, information about the internal functioning of perception algorithms is crucial at design time, in validation, and in ex post analysis to infer possible issues or bottlenecks, trace them back to their causes, and improve the system accordingly. For the same reason, raw sensor data can also be of interest for verification and ex post analysis in case of failures.

In contrast to what happens in the case of users, explainability requirements do not change substantially as we move from L3 to L5.

Footnote 6: This is not to say that developers and experts do not need support and specific expertise to interpret technical data correctly. The interpretation process is challenging and often requires the use of complicated tools, which could lead to misunderstandings and mistakes. That being said, it is arguably part of developers' and experts' responsibility to learn how to deal with this challenge effectively.


The main reasons for pursuing autonomous and responsible behaviour remain the same, as do time constraints and the level of expertise. By and large, the contents do not vary significantly either. However, the properties of full automation put much more pressure on safety performance. While at L3 human fallback is always a possibility—in a sense, it works as a background safety measure—L5 automation must do without it. As a consequence, safety requirements are much more demanding and require taking more detailed care of a wider set of aspects. Pressure on the reliability of machine perception increases tangibly. Structuring systems in ways that allow developers to access perceptual data, validate and verify safety-related aspects, and improve performance arguably sets the stage for more challenging objectives to accomplish.
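By way of illustration only—the format, field names, and values below are our own and do not reproduce any actual engineering toolchain—a developer-oriented log might retain both pre- and post-fusion outputs, together with their confidence estimates, so that failures can later be traced back to a single subsystem or to the fusion step:

```python
import json
import time

def log_perception_frame(log_file, frame_id: int, per_sensor: dict, fused: list) -> None:
    """Append one perception frame to a developer log in JSON Lines format.

    per_sensor: pre-fusion outputs keyed by sensor name, each a list of
                {"label", "confidence", "position_m"} dictionaries.
    fused:      the post-fusion object list actually passed on to planning."""
    record = {"timestamp": time.time(), "frame": frame_id,
              "pre_fusion": per_sensor, "post_fusion": fused}
    log_file.write(json.dumps(record) + "\n")

with open("perception_log.jsonl", "a") as f:
    log_perception_frame(
        f, frame_id=1042,
        per_sensor={
            "lidar": [{"label": "unknown", "confidence": 0.55, "position_m": [22.0, 0.3]}],
            "camera_front": [{"label": "pedestrian", "confidence": 0.78, "position_m": [21.6, 0.4]}],
        },
        fused=[{"label": "pedestrian", "confidence": 0.81, "position_m": [21.8, 0.35]}],
    )
```

The same kind of record would serve both L3 and L5 development, with L5 simply requiring broader coverage and stricter quality demands on what is logged.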

5.3 Legal Professionals

Explainability requirements with reference to legal professionals also do not change essentially whether L3 or L5 CAVs are considered. Roughly, explainability serves legal professionals' autonomy and responsibility by providing them with the information they need to carry out their job. In this case, the reason supporting explainability would consist in a legitimate claim to access all the information needed to lawfully distribute liability among involved parties when harm is caused by CAVs. Carrying out this task might include different obligations. Compliance and conformance assessments must be executed to verify whether negligence or malpractice is identifiable on the part of developers. In-depth studies of the situations at issue must also be carried out to evaluate the part that other stakeholders might have played. User behaviour must be reconstructed, assessed, and judged. Other factors that might have played a role in an accident should also be investigated and their contribution to the harm caused carefully measured. Evaluations regarding liability allocation, then, can be autonomously and responsibly produced only if a wide variety of data is made available and explainable to both litigants and decision makers.

As happens for developers, time constraints do not represent a major concern here. Legal discussion will take the time that is necessary to gather all relevant information leading to a fair judgment. Technical expertise cannot be presupposed as in the case of developers. However, it is not completely lacking as in the case of users. Legal professionals obviously cannot be expected to be able to make sense of explanations and information conveyed in technical language. However, expert opinion can be summoned, and the expertise gap can be adequately bridged in court. Obtaining understandable explanations of the technical aspects relevant to taking decisions concerning liability allocation is an integral step of the legal process.

Legal professionals' explainability reasons and constraints imply that all machine perception data needed to verify compliance and establish what actually happened must be made available, so as to properly inform decision-making. It is difficult to specify precisely what kinds of machine perception operations must be made explainable to serve this purpose.


Each particular case likely requires different information to be made available in order to determine the role that each stakeholder played in the unfolding of the events. At a first approximation, it could be said that data pertaining to technical performance in terms of the perception of the environment—at subsystem and system levels—and data concerning user behaviour are both of interest with reference to liability allocation. Not just information about system behaviour, then, but also Human–Machine Interaction information—such as, e.g., logs of user interfaces and infotainment devices and data from cabin sensors (cameras, eye-tracking devices, heart rate monitoring sensors, and so on)—are crucial to support fair and lawful liability allocation among involved stakeholders.

As a matter of fact, the explainability of machine perception data has already played a crucial role in litigation surrounding liability allocation for harm caused by CAVs. An accident involving a Tesla Model S vehicle running on Autopilot in 2016 in Florida was caused by the machine perception system failing to detect the side of a white tractor trailer against the bright sky, which led the driving system to collide with it [22–24, 39].7 The accident resulted in the death of the Tesla occupant. In another accident in 2018, an Uber test vehicle in Tempe, Arizona, failed to classify a pedestrian walking a bicycle across a four-lane highway around 10 pm and ran over her, causing her death [5, 25, 26, 36].8 In both cases, users were supposed to supervise system functioning and intervene if necessary to ensure safety. Data pertaining to their level of engagement and attention proved instrumental in determining liability.

Footnote 7: Interestingly enough, the authors of [22: 4] offer the following clarification in a note to their study: "Object classification algorithms in the Tesla and peer vehicles with AEB technologies are designed to avoid false positive brake activations. The Florida crash involved a target image (side of a tractor trailer) that would not be a "true" target in the EyeQ3 vision system dataset and the tractor trailer was not moving in the same longitudinal direction as the Tesla, which is the vehicle kinematic scenario the radar system is designed to detect". Similarly, authors at [24: 15–16] write that "there was no record indicating that the Tesla's automation system identified the truck that was crossing in the car's path or that it recognized the impending crash. Because the system did not detect the combination vehicle—either as a moving hazard or as a stationary object—Autopilot did not reduce the vehicle's speed, the FCW [Forward Collision Warning] did not provide an alert, and the AEB [Automatic Emergency Braking] did not activate. All recorded data were consistent with the FCW and AEB systems being enabled when the crash occurred".

Footnote 8: The reconstruction of the accident offered by the NTSB preliminary report [25: 2] highlights the importance of accessing understandable machine perception data in order to allocate liability fairly: "According to data obtained from the self-driving system, the system first registered radar and LiDAR observations of the pedestrian about 6 s before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 s before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator." Particularly relevant, as noted also by Stilgoe [36], is the addition contained in the final report by NTSB [26: 16], according to which "the system never classified her as a pedestrian—or correctly predicted her path—because she was crossing N. Mill Avenue at a location without a crosswalk, and the system design did not include consideration for jaywalking pedestrians".


Therefore, in addition to data concerning machine perception systems, data concerning user states, conditions, and actions are also necessary to allocate liability fairly in similar cases. However, different levels of driving automation do have an impact on the relevance of data. At L3, given the supervising role attached to users, data concerning their state and behaviour are as fundamental as data concerning the technical aspects of machine perception. Responsible user behaviour being the most transversal safety measure, determining whether the conditions for its full exercise were reasonably satisfied is essential to fairly distributing liability among stakeholders. Arguably, at L5 data concerning user behaviour are unnecessary—unless, of course, they show gross misconduct on the users' part that directly led the risky situation to arise—since automated driving systems are to handle safety without relying in any sense on human fallback. Data concerning system operations—machine perception data included—thus play a more significant role. It follows that strategies to make these data accessible and understandable to legal professionals are critical for distributing liability fairly both at L3 and at L5, but perhaps particularly so in the latter case. Indeed, system opacity would render liability allocation utterly difficult. On the contrary, making data concerning users accessible and explainable to legal professionals has a clear function at L3, but loses much of its significance at L5.
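As a purely illustrative sketch—field names and the L3/L5 split below are our own schematic rendering of the considerations above, not an existing event-recorder standard—the record an event data recorder retains for legal analysis could be structured as follows:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IncidentRecord:
    """Data an event recorder might retain to support liability allocation."""
    automation_level: str                          # "L3" or "L5"
    perception_log: List[dict]                     # system and subsystem perception data around the event
    hmi_log: Optional[List[dict]] = None           # user interface and alert history
    user_monitoring: Optional[List[dict]] = None   # cabin camera / eye-tracking summaries

def relevant_fields(level: str) -> List[str]:
    """Which parts of the record matter for liability, depending on the level of automation."""
    if level == "L3":
        # supervising users: both system data and user state/behaviour are relevant
        return ["perception_log", "hmi_log", "user_monitoring"]
    # L5: safety is handled by the system; user data matter only in cases of gross misconduct
    return ["perception_log"]

print(relevant_fields("L3"))
print(relevant_fields("L5"))
```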

6 Conclusion

As summarised in Tables 1 and 2, living up to explainability requirements in the CAV domain is no easy task. Depending on the stakeholders involved, different reasons and varying contextual constraints massively influence explainability efforts. Time availability, cognitive capabilities, and technical expertise considerably shape the task of providing explanations. A narrow focus on implementing XAI solutions would tackle just a limited aspect of the whole challenge. Decisions concerning what to explain to whom, why, and how can only be pondered by paying attention to aspects that extend beyond computational techniques.

In this chapter, we have argued for a normative interpretation of explainability as an instrumental ethical value supporting the exercise of autonomous and responsible behaviour on the part of stakeholders. Furthermore, we have specified the contextual nature of explainability by concentrating on both technical aspects and involved stakeholders—their reasons, available expertise, cognitive capabilities, and time availability. Our approach has been put to the test by applying it to the analysis of explainability requirements with reference to machine perception at L3 and L5. Among the possible stakeholders one could consider, we have focused our attention on CAV users, developers, and legal professionals. As our discussion showed, significant differences characterise the three situations due to the variety of contexts in which the value of explainability is brought to bear. Moreover, moving from L3 to L5 scenarios also introduces noteworthy variations in the meaning and context of explainability.


Table 1 Explainability requirements for machine perception (L3)

Users
Reasons: Autonomously promoting their own and other road users' safety (take-over manoeuvres)
Content: Perceived surroundings; ego-vehicle position; obstacles; system-level view
Contextual constraints: Little time available to take decisions in case intervention is needed; lack of specific expertise, cultural background, degree of familiarity with technologies

Developers
Reasons: Verifying that the system satisfies safety requirements; improving system safety
Content: Perceived objects and surroundings with an estimate of the uncertainty of their perception; partial (each sensor's) results of the algorithms used for perception (pre- and post-sensor fusion + subsystem level); status and output of single subsystems
Contextual constraints: No time constraints besides those required by release roadmaps; technical expertise

Legal Professionals
Reasons: Allocating liability according to the law; assessing compliance and conformance
Content: Information and data needed to verify compliance and establish what happened (system and subsystem level), on both the technical and the HMI level (was the user in the position of exercising autonomy and responsibility?); logs of user interface and user behaviour and data
Contextual constraints: No time constraints besides those required by legal procedures; mediated technical expertise; legal framework

In light of the results of our inquiry, we suggest that the debate on explainability in driving automation would benefit from a more context-oriented attitude aimed at specifying different requirements for given situations. Moreover, we believe that decisions on how to handle opacity as a means to serve the purpose of explainability—i.e., supporting autonomous and responsible behaviour—should be carefully scrutinised and the related ethical trade-offs attentively discussed. Future research should pursue the exploration of explainability issues in such a context-aware fashion also in relation to other important stakeholders—e.g., insurers and policy-makers—and to other CAV functions—e.g., route and trajectory planning algorithms. We are convinced that fine-grained knowledge concerning the instrumental nature of explainability, technical configurations, and the specific features of involved stakeholders is essential to the academic, technical, regulatory, and public debate on explainability in driving automation.


Table 2 Explainability requirements for machine perception (L5)

Users
Reasons: Autonomously handling non-traffic emergency situations (stop vehicle)
Contents: None?
Contextual constraints: Lack of specific expertise, cultural background, degree of familiarity with technologies

Developers
Reasons: Verifying that the system satisfies safety requirements; improving system safety. But: more demanding in terms of system robustness and Operational Design Domains (ODDs) compared to L3
Contents: Perceived objects and surroundings with corresponding uncertainty; partial (each sensor's) results of the algorithms used for perception (pre- and post-sensor fusion + subsystem level); status and output of single subsystems. But: more tasks to be executed, more demanding in terms of data quantity and quality compared to L3
Contextual constraints: No time constraints besides those required by release roadmaps; technical expertise

Legal Professionals
Reasons: Allocating liability according to the law; compliance and conformance assessment
Contents: Information and data needed to verify compliance and establish what happened (system and subsystem level); exclusively on the technical level (logs of user interface + user behaviour and data no longer necessary)
Contextual constraints: No time constraints besides those required by legal procedures; mediated technical expertise; legal framework

References 1. Angelov, P.P., Soares, E.A., Jiang, R., Arnold, N.I., Atkinson, P.M.: Explainable artificial intelligence: an analytical review. Wiley Interdiscip. Rev.: Data Mining Knowl. Discov. 11(5), e1424 (2021). https://doi.org/10.1002/widm.1424 2. Arrigoni, S., Mentasti, S., Cheli, F., Matteucci, M., Braghin, F.: Design of a prototypical platform for autonomous and connected vehicles. In: AEIT International Conference on Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), pp. 1–6 (2021). https:// doi.org/10.23919/AEITAUTOMOTIVE52815.2021.9662926 3. Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012 4. Baum, K., Mantel, S., Schmidt, E., Speith, T.: From responsibility to reason-giving explainable artificial intelligence. Philos. Technol. 35(12) (2022). https://doi.org/10.1007/s13347-022-005 10-w


5. Bonnefon, J.F.: Chapter 18: The uber accident. In: The Car That Knew Too Much. Can a Machine Be Moral? MIT Press, Cambridge (2021) 6. Confalonieri, R., Coba, L., Wagner, B., Besold, T.R.: A historical perspective of explainable artificial intelligence. WIREs Data Min. Knowl. Discovery 11(1), 1–21 (2021). https://doi.org/ 10.1002/widm.1391 7. Cudrano, P., Mentasti, S., Matteucci, M., Bersani, M., Arrigoni, S., Cheli, F.: Advances in centerline estimation for autonomous lateral control. In: 2020 IEEE Intelligent Vehicles Symposium (IV), pp. 1415–1422. IEEE (2020) 8. Cultrera, L., Seidenari, L., Becattini, F., Pala, P., Del Bimbo, A.: Explaining autonomous driving by learning end-to-end visual attention. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1389–1398. IEEE (2020). https://doi.org/ 10.1109/CVPRW50498.2020.00178 9. Dahal, P., Mentasti, S., Arrigoni, S., Braghin, F., Matteucci, M., Cheli, F.: Extended object tracking in curvilinear road coordinates for autonomous driving. IEEE Trans. Intell. Vehicles (2022) 10. Floridi, L., Cowls, J.: A unified framework of five principles for AI in society. Harvard Data Sci. Rev. 1(1) (2019). https://doi.org/10.1162/99608f92.8cd550d1 11. Fossa, F., Arrigoni, S., Caruso, G., Cholakkal, H.H., Dahal, P., Matteucci, M., Cheli, F.: Operationalizing the ethics of connected and automated vehicles: an engineering perspective. Int. J. Technoethics 13(1), 1–20 (2022). https://doi.org/10.4018/IJT.291553 12. Garg, R., Bg, V.K., Carneiro, G., Reid, I.: Unsupervised CNN for single view depth estimation: geometry to the rescue. In: European Conference on Computer Vision, pp. 740–756. Springer, Cham (2016) 13. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 93–142 (2018) 14. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2004) 15. High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI. European Commission (2019). https://op.europa.eu/en/publication-detail/-/publication/ d3988569-0434-11ea-8c1f-01aa75ed71a1 16. Horizon 2020 Commission Expert Group to advise on specific ethical issues raised by driverless mobility (E03659): ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility (2020). https://op.europa.eu/en/pub lication-detail/-/publication/89624e2c-f98c-11ea-b44f-01aa75ed71a1/language-en 17. Krontiris, I., Kalliroi, G., Kalliopi, T., Zacharopoulou, M., Tsinkitikou, M., Baladima, F., Sakellari, C., Kaouras, K.: Autonomous vehicles: data protection and ethical considerations. In: Computer Science in Cars Symposium (CSCS ‘20). ACM, Feldkirchen (2020). https://doi. org/10.1145/3385958.3430481 18. Mentasti, S., Matteucci, M., Arrigoni, S., Cheli, F.: Two algorithms for vehicular obstacle detection in sparse pointcloud. In: 2021 AEIT International Conference on Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE), pp. 1–6. IEEE (2021) 19. Meske, C., Bunde, E., Schneider, J., Gersch, M.: Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Inf. Syst. Manag. 39(1), 53–63 (2022). https:// doi.org/10.1080/10580530.2020.1849465 20. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019). 
https://doi.org/10.1016/j.artint.2018.07.007 21. Minh, D., Wang, H.X., Li, Y.F., Nguyen, T.F.: Explainable artificial intelligence: a comprehensive review. Artif. Intell. Rev. 55, 3503–3568 (2022). https://doi.org/10.1007/s10462-021-100 88-y 22. National Highway Traffic Safety Administration: PE 16-007 (2017). https://static.nhtsa.gov/ odi/inv/2016/INCLA-PE16007-7876.PDF 23. National Highway Traffic Safety Administration: Special Crash Investigations: On-Site Automated Driver Assistance System Crash Investigation of the 2015 Tesla Model S 70D. DOT HS812 481. Washington (2018)


24. National Transportation Safety Board: Collision Between a Car Operating With Automated Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston, Florida, Accident Report NTSB/HAR-17/02 PB2017-102600 (2017). https://www.ntsb.gov/investigations/acc identreports/reports/har1702.pdf 25. National Transportation Safety Board: Preliminary Report Highway HGW 18MH010 (2018) 26. National Transportation Safety Board: Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, Accident Report NTSB/ HAR-19/03PB2019-101402 (2019). https://www.ntsb.gov/investigations/accidentreports/rep orts/har1903.pdf 27. Nihlén Fahlquist, J.: Responsibility analysis. In: Hansson, S.O. (ed.) The Ethics of Technology. Methods and Approaches, pp. 129–142. Rowman and Littlefield, London (2017) 28. Nunes, A., Reimer, B., Coughlin, J.F.: People must retain control of autonomous vehicles. Nature 556, 169–171 (2018). https://doi.org/10.1038/d41586-018-04158-5 29. Nyrup, R., Robinson, D.: Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Ethics Inf. Technol. 24(13) (2022). https://doi.org/10.1007/s10676-022-096 32-3 30. O’Mahony, N., Campbell, S., Carvalho, A., Harapanahalli, S., Hernandez, G. V., Krpalkova, L., Riordan, D., Walsh, J.: Deep learning versus traditional computer vision. In: Science and Information Conference, pp. 128–144. Springer, Cham (2019) 31. Pan, H., Wang, Z., Zhan, W., Tomizuka, M.: Towards better performance and more explainable uncertainty for 3D object detection of autonomous vehicles. In: 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–7. IEEE (2020). https://doi. org/10.48550/arXiv.2006.12015 32. Rosenfeld, A., Richardson, A.: Explainability in human–agent systems. Auton. Agent. MultiAgent Syst. 33, 673–705 (2019). https://doi.org/10.1007/s10458-019-09408-y 33. Russell, S.J., Norvig, P.: Artificial Intelligence. A Modern Approach. Pearson Education, Upper Saddle River (2010) 34. SAE International: J3016. (R) Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. Superseding J3016 JUN2018 (2021) 35. Setchi, R., Dehkordi, M.B., Khan, J.S.: Explainable robotics in human-robot interactions. Procedia Comput. Sci. 176, 3057–3066 (2020). https://doi.org/10.1016/j.procs.2020.09.198 36. Stilgoe, J.: Who killed Elaine Herzberg? In: Who’s driving innovation?, pp. 1–6. Palgrave MacMillan, Cham (2020). https://doi.org/10.1007/978-3-030-32320-2_1 37. Suchan, J., Bhatt, M., Varadarajan, S.: Driven by commonsense. On the role of human-centered visual explainability for autonomous vehicles. In: Giacomo, D.G., et al. (eds). ECAI 2020, pp. 2939–2940. IOS Press (2020). https://doi.org/10.3233/FAIA200463 38. Tang, C., Srishankar, N., Martin, S., Tomizuka, M.: Grounded relational inference: domain knowledge driven explainable autonomous driving (2021). https://doi.org/10.48550/arXiv. 2102.11905 39. Tesla: a tragic loss (2016). https://www.tesla.com/blog/tragic-loss 40. Theunissen, M., Browning, J.: Putting explainable AI in context: institutional explanations for medical AI. Ethics Inf. Technol. 24(23) (2022). https://doi.org/10.1007/s10676-022-09649-8 41. Umbrello, S., Yampolskiy, R.V.: Designing AI for explainability and verifiability: a value sensitive design approach to avoid artificial stupidity in autonomous vehicles. Int. J. Soc. Robot. 14, 313–322 (2021). 
https://doi.org/10.1007/s12369-021-00790-w 42. Van de Poel, I.: Values in engineering design. In: Meijers, A. (ed.) Philosophy of Technology and Engineering Sciences. Handbook of the Philosophy of Science, vol. 9, pp. 973–1006. North Holland, Burlington-Oxford-Amsterdam (2009). https://doi.org/10.1016/B978-0-444-51667-1.50040-9


43. Wachter, S., Mittelstadt, B., Floridi, L.: Why a right to explanation of automated decisionmaking does not exist in the general data protection regulation. Int. Data Privacy Law 7(2), 76–99 (2017) 44. Xu, Y., Yang, X., Gong, L., Lin, H.-C., Wu, T.-Y., Li, Y., Vasconcelos, N.: Explainable objectinduced action decision for autonomous vehicles. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9520–9529. IEEE (2020). https://doi.org/10.1109/ CVPR42600.2020.00954 45. Zablocki, É., Ben-younes, H., Pérez, P., Cord, M.: Explainability of vision-based autonomous driving systems: review and challenges (2021). https://doi.org/10.48550/arXiv.2101.05307

Design for Inclusivity in Driving Automation: Theoretical and Practical Challenges to Human-Machine Interactions and Interface Design Selene Arfini , Pierstefano Bellani, Andrea Picardi , Ming Yan , Fabio Fossa , and Giandomenico Caruso

Abstract This chapter explores Human–Machine Interaction (HMI) challenges to inclusivity in driving automation by focusing on a case study concerning an automated vehicle for public transportation. Connected and Automated Vehicles (CAVs) are widely considered a tremendous opportunity to provide new mobility options to many who currently cannot drive: the young, the elderly, people suffering from cognitive or physical impairments, and so on. However, the association between driving automation and inclusivity rests too much on the mere automation of driving tasks. More attention should instead be dedicated to the HMI field for CAVs to be suitable for multiple users with different needs. From this perspective, inclusivity is an utterly complicated objective to accomplish—one that requires a new definition of human agents, fine-grained methodological discussions, innovative design solutions, and extensive testing. Based on these considerations, we present a case study and discuss the challenges that need to be faced when designing inclusive CAVs.

S. Arfini
Dipartimento di Studi Umanistici - Sezione di Filosofia, Università di Pavia, Piazza Botta, 6, 27100 Pavia, PV, Italy
e-mail: [email protected]

P. Bellani · A. Picardi · F. Fossa · G. Caruso (B)
Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milan, Italy
e-mail: [email protected]
P. Bellani e-mail: [email protected]
A. Picardi e-mail: [email protected]
F. Fossa e-mail: [email protected]

M. Yan
Department of Design, Politecnico di Milano, Via Giovanni Durando, 10, 20158 Milan, Italy
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_4


Keywords Automated vehicles · Design ethics · Human–machine interaction · Interface design · Inclusivity

1 Introduction

Inclusivity is one of the most invoked social and ethical values in the context of driving automation. Many have appealed to it as an example of the tangible benefits Connected and Automated Vehicles (CAVs) could introduce in modern transportation systems, thus supporting morality and social justice. As in the cases of safety and environmental sustainability, inclusivity is often mentioned as a significant reason to frame CAVs as ethically desirable technologies (see Chap. 10). According to proponents, driving automation would allow social categories currently excluded from private road transport to enjoy the freedom of travelling autonomously without requiring help from anybody else. We claim, however, that it would be a mistake to assume that achieving higher levels of driving automation would simultaneously solve inclusivity issues. Although this applies to a wide range of users, it is even more critical in the case of people suffering from disabilities or impairments. Even if full automation were available, severe problems in the realm of Human–Machine Interaction (HMI) would still need to be tackled.

In this chapter, we maintain that it is inadequate to consider inclusivity as a byproduct of driving automation. Rather, inclusivity is an ethical objective that must be intentionally and proactively pursued through interface design as well—i.e., as an HMI challenge. The design of effective interfaces that accommodate diverse user needs and promote meaningful collaboration between passengers and systems is a fundamental component of inclusive driving automation. To this aim, ethical analysis and design practices must be integrated so that socio-ethical problems and possible design solutions can be adequately identified and assessed.

The following pages explore the issue of inclusivity from both an ethical and technical perspective. First, the normative profile of inclusivity in driving automation is clarified, and practical guidelines for its application are proposed. Based on this, a case study concerning the deployment of an automated shuttle on a university campus is analysed and discussed. The main purpose of our research is to stress the importance of the HMI dimension and interface design to enable inclusive driving automation. Moreover, we aim at developing a clearer picture of the many obstacles that lie ahead. We are confident that the inquiry will shed a more fine-grained light on the inclusivity prospects of driving automation and set the course for future applied HMI research in this domain.

The chapter is structured as follows. Section 2 outlines the ethical value of inclusivity in the context of transportation and presents how driving automation has been framed as an enabler of inclusive mobility. Section 3 claims that inclusivity cannot be understood as a by-product of driving automation. Rather, critical issues in HMI need to be specifically addressed. To this aim, however, a more fine-grained notion of “agent” is required. Section 4 suggests integrating diversity into the notion of


“agent” to promote a more inclusive vision of potential CAV users or road users who will interact with CAVs. Section 5 explores inclusive design as a practical methodological approach to face inclusivity issues in driving automation and interface design. Its main result is to underline the contextual nature of inclusivity, which could help manage its complexity and better identify the challenges to be tackled. Based on this, we present the case study and discuss it from the perspective of inclusivity. Finally, Sect. 6 summarises the most significant points of the chapter and offers some conclusive remarks.

2 Inclusivity, Transportation, and Driving Automation

Inclusivity is a widely acknowledged ethical value in both the Artificial Intelligence and transportation communities. Essentially, it demands that new technologies be accessible to as many people as possible, paying special attention to those who would most benefit from using them [37]. Its vast social and political endorsement is paired with solid normative support. As proposed in the European Commission’s report Ethics of Connected and Automated Vehicles [32], inclusivity directly stems from one of the fundamental principles of CAV ethics—i.e., Beneficence. The principle of Beneficence mandates the obligation to design CAVs to “contribute positively to the welfare of individuals, including future generations, and other living beings.” In this sense, the “primary purpose” of driving automation “should be to enhance mobility opportunities and bring about further benefits to persons concerned, including enhancing the mobility opportunities of persons with special needs” [32: 21]. This is in line with the UN Sustainable Development Goals framework, which also acknowledges the ethical relevance of inclusivity in the transportation domain. As part of Goal 11—Make cities and human settlements inclusive, safe, resilient and sustainable—target 11.2 encourages innovating transport systems “with special attention to the needs of those in vulnerable situations, women, children, persons with disabilities and older persons” (https://sdgs.un.org/goals/goal11).

As the previous considerations suggest, access to transportation is indeed fundamental to living a satisfying social life and exercising one’s self-determination [55, 56]. Transportation scholars have long insisted on the crucial role transportation plays in human flourishing and social well-being [27, 46, 63, 66]. The possibility of easily reaching critical facilities such as hospitals, schools, work locations, museums, theatres, shops, and so on through adequate transportation options contributes greatly to determining the quality of life. Moreover, transport factors significantly impact the possibility of creating and nourishing meaningful social relationships. Similarly, participation in political experiences and the exercise of political and civil rights can also be considerably constrained by the scarcity of suitable means of transportation.



If this is true for people in general, it is even truer for those who experience difficulty in accessing transport solutions autonomously. Cognitive and physical impairments stemming from various conditions—e.g., disability or old age—often make it harder to rely on public transport. Hearing impairments or loss, visual impairments or blindness, reduced mobility in the upper or lower limbs, and neurodiversity (e.g., Autism) all variously affect the possibility of autonomously using public means of transportation. In these instances, private and independent mobility is therefore considered the best (if not only) transportation option [12, 23]. However, in most cases, these very same conditions also exclude individuals from getting a driver’s license or, more generally, from manual driving. As a result, those who need assistance to enjoy the freedom of road transport also experience unjustly limited and expensive access to it.2 In turn, this reduces opportunities for social interaction, employment, and political involvement while hindering the recourse to essential services such as education and medical care. In sum, failure to provide inclusive transport options heavily affects the personal well-being of vulnerable social groups. It is of utmost ethical importance, then, to provide comfortable and user-friendly means of transportation to social categories that cannot fully enjoy the freedom of mobility. Innovative solutions that might help break this vicious circle of social isolation would be more than welcome. In this sense, inclusivity has been frequently invoked to promote CAVs as an intrinsically ethical innovation. According to this view, not only would driving automation solve mobility issues, enhance sustainability, and improve road safety [49, 71]. It would also bridge inequalities that currently divide potential drivers from people who may temporarily or permanently be unable to drive [6, 29, 44]—a diversified set of agents including, e.g., people suffering from various physical or cognitive disabilities, the elderly, and minors. In other words, driving automation is often construed as precisely what we need to bring about a more inclusive transportation system. Theoretically, driving automation could indeed contribute to contrasting access inequality in the domain of transportation [69]. By allowing the delegation to digital systems of driving tasks that used to require full-blown, unimpaired human skills, driving automation could render various physical and cognitive obstacles irrelevant to individual road transport. Potentially, then, CAVs could offer an adequate means of private transportation to the mentioned parties, as social justice demands. This line of reasoning rests on the idea that high or full levels of driving automation would inherently beget inclusivity. Indeed, those excluded from manual driving because of physical limitations or cognitive impairments would arguably be poor candidates for the monitoring or supervising activities requested up until high levels of driving automation [30, 39; see Chap. 9]. L4 or L5 CAVs, however, would entirely free users from any driving burden and, thus, accommodate those who are currently considered unfit to drive safely. If vehicles could autonomously handle all traffic 2

situations in all relevant Operational Design Domains (ODDs), everyone would have access to autonomous private mobility regardless of their individual conditions. Inclusivity, then, would be a by-product of driving automation.3

2 Even if it surely represents a relevant controversy in the inclusive design literature, we will not explicitly address the issue of costs related to inclusive design choices in the main text of the chapter. Our avoidance of this issue is not a way to trivialise it: we simply maintain, as other authors do [77], that designing CAVs from the outset with inclusivity issues in mind can cost less than adapting them later to be used by a wide and diverse range of people.

3 Even if reflections regarding benefits for a diverse range of people emerge especially when considering Level 5 CAVs, it is important to consider inclusive solutions also in the transition from partially to fully automated vehicles. Adhering early to inclusive design (see Chap. 5) would not just provide new ways to access automated resources and alternative means for HMI to all users, but it would also motivate people who now cannot drive due to degrees of constrained agency (we explain in detail what we mean by this term in Sect. 3) to start approaching and trusting partially automated vehicles. Their involvement as potential users of these technologies will in turn affect the research and possibly suggest new inclusive solutions, thus furthering technological advancements.

3 From Automation to Human–Machine Interactions

High or full driving automation, however, is just one of the requirements that must be satisfied to make CAVs accessible to users with special needs. Making manual driving and human supervision unnecessary does not imply solving all inclusivity problems. As van Loon and Martens [74: 3282] rightly notice, “fully automated does not imply the complete absence of the human element from the equation”. The entire domain of HMI remains to be addressed.

The field of HMI examines how human users interact with technological artefacts with the aim of improving the quality of human–machine communication and cooperation [10]. Depending on the technological systems involved, the field of HMI can be further specified in various subdisciplines, which, however, remain closely connected to each other. For instance, the cornerstones of the discipline were laid down in the context of Human–Computer Interaction studies (e.g., [34, 62]), while more recently, Human–Robot Interaction studies have gained momentum [48, 67]. In the contexts of manual driving [40] and driving automation [41], HMI refers to interactions and transmission of information between users and driving systems. Information is commonly exchanged through an interface or dashboard that connects the user to the machine, system, or device and regulates the interactions between the two [9]. In recent years, several studies have explored the challenge of making the collaboration between users and driving systems more flexible and efficient by designing and validating human–machine CAV interfaces (e.g., [11, 53]).

An example might help realise why high or full automation is not enough to satisfy the demands of inclusivity and why HMI and interface design issues need to be specifically addressed. Consider the case of visually impaired people. According to many studies, automated mobility solutions could help this category of agents out of the social isolation and loneliness that sometimes result from their condition [8, 52]. At the same time, CAVs could support their autonomy and alleviate the frustration caused by having to depend on others [24]. Indeed, the possibility to move around autonomously would massively impact visually impaired people’s sense of independence and, in turn, their quality of life [5]. Highly or fully automated vehicles,


it appears, could revolutionise the way in which these agents perceive mobility. As a matter of fact, the interest of the category in inclusive driving technologies is tangible. In 2004, the US National Federation of the Blind launched the Blind Driver Challenge to support research in this field. As driving automation progressed, it was increasingly endorsed as a solution worth exploring. However, the sole automation of driving tasks would accomplish little to satisfy the needs of social categories such as that of visually impaired people. Tangible results can only be obtained by pairing it with interfaces, keeping users in the loop and informed about relevant system operations. For instance, pick-up and drop-off manoeuvres require smooth interactions between users and systems, which can be accomplished only through well-designed interfaces (e.g., [15]). The successful use of a CAV, then, not only requires that driving tasks are adequately automated. It also requires users to interact with the system safely and effectively, which is what reliable interfaces allow. Hence, as already stated, driving automation per se does not come with inclusive features. Instead, inclusivity must be proactively pursued on the level of interface design. The relevance of the HMI dimension makes it necessary to obtain fine-grained knowledge of the human agents potentially involved. If high levels of driving automation were enough to solve all the inclusivity issues, there would be no need to retrieve detailed information about the specific needs of involved agents. Conversely, the design of effective interfaces depends on what is known about the various senso-motorial and cognitive needs of those involved. Consider, for instance, the issue of explainability. As Chap. 3 discusses in detail, the value of explainability demands that relevant automated decisions performed by the onboard AI should be explainable, to the extent possible, to affected stakeholders. Part of the challenge consists in providing the necessary information in a fashion that is appropriate to the specific conditions of these stakeholders. In the case of end-users, information flows must then be conveyed through interfaces in understandable ways. What understandability means at different levels of automation, however, clearly changes depending on the peculiar characteristics of different users. A message conveyed through text displayed on a visual interface can be perfectly understandable for a user who can read the local language and has familiarity with the service. Users who suffer from physical or cognitive impairments that impinge negatively on their ability to read, users who do not understand the local language, illiterate users, or users with little knowledge of the service are likely to find the same message, conveyed in the same way, rather incomprehensible—which would obstruct the path towards smooth and effective HMI. The previous considerations show that diversity must be included in the notion of “agent” that informs the ethical approach to driving automation. In what follows, we claim that a more nuanced and granular definition of “agent” would help identify and manage inclusivity challenges. Based on this notion, guidelines on how to deal with inclusivity issues in driving automation are then discussed.


4 From “Agent” to “Agents”

Even though the importance of inclusivity in driving automation has been variously highlighted, recent literature shows that practical endorsement is still limited and controversial [43, 70, 79]. For example, research on CAV design is often based on studies made with participants who hold a driver’s licence [7, 31, 60, 61], thus typically excluding those who would significantly benefit from the introduction of CAVs as private means of transportation. In light of this, we believe that integrating diversity in the notion of “agent” associated with potential CAV users (or road users who will interact with CAVs) could help. Let us consider the European Commission’s report again. The definition of “agent” there provided is the following:

AGENT: A human individual with the power to act on the basis of intentions, beliefs, and desires. In this report, the term “agent” (and associated terms such as “agency” and “human agent”) is used in this philosophical sense and not in the legal sense of a person who acts on behalf of another. In this philosophical sense, agency is typically understood to be a prerequisite for moral capacity and responsibility. The term is only used in relation to humans and is not used to refer to artificial agents or autonomous systems [32: 12].

The benefit of adopting this definition is that it emphasises humans’ agency, which can be defined as “the socioculturally mediated capacity to act” [2: 112], and autonomy, which could be described as “the capacity for self-determination and self-government: the ability or the condition of living in accordance with reasons, motives, and goals that are one’s own rather than imposed by external forces” ([25: 132]; see Chap. 2). The received definition implicitly suggests that these traits should be highlighted in designing CAV technology and that its adoption should aim at protecting and improving them. The not-too-subtle problem that emerges in considering only this definition to depict all types of users of a technological artefact is that the degrees of agency and autonomy some human agents have and can employ depend heavily on how that technology is designed. As Rothberg [65: 125] comments: “When we discuss human-centered design, we can approach it as designing for the most common cases or designing for the most extreme cases. […] Users with disabilities present use cases that often mimic the needs of non-disabled users in disabling situations”. To consider CAV design in an inclusive way, thus, we need to discuss how adopting that technology could preserve all users’ ordinarily held agency and autonomy while improving the agency and autonomy of all specific cases, granting them access to available and integrated compensatory tools.

One way to elaborate this perspective is to provide granularity in the definition of “agent”, reflecting on the intrinsic diversity that shapes different users’ experiences and affects the degrees of agency and autonomy that users ordinarily hold. Indeed, potential CAV users range from people enjoying ordinary degrees of agency and autonomy to users whose actions are constrained by temporary or permanent conditions. Constrained autonomy, for example, affects children [51], people


with dementia [78], organic mental disorder inpatients [16], and, temporarily, people whose judgment is impaired by alcohol or drugs [17]. From the point of view of HMI design, the travel experience should conform to different standards of safety depending on the users’ degree of autonomy. For example, a delegation of autonomy is in some cases attributed by default to legal guardians, who are nominated to compensate for the lesser degree of autonomy presented by these users. In other cases, as for the temporary conditions of impairment caused by intoxication, CAV policies and design should account for different types of agents as passengers. In some cases, moreover, potential CAV users may be affected by different degrees of agency, experiencing a limited capacity to act due to physiological conditions. Into this category would fit, for instance, visually impaired [13], hearing impaired [65], and speech impaired (e.g., stammering [20]) people, people on the autistic spectrum [50], people who have a sensory-motor impairment [78], people who suffer from light-sensitivity or other perceptual sensitivities, and so on. These users need inclusive design features to access or get out of the vehicle safely and express their autonomy regarding their travel destination and experience. In sum, the definition of “agent” proposed in the European Commission’s report can start the conversation on which users’ rights must be preserved and accounted for in developing CAVs. Being closely tied to HMI, however, inclusivity needs more diversity than the proposed definition considers. Accounting for a range in the degrees of autonomy and agency users may hold means recognising that various types of agents will approach and rely on CAVs differently and that this difference is crucial to address inclusivity issues.

5 Design for Inclusivity: A Case Study

5.1 Preliminary Methodological Considerations

Now that we have offered a way to include diversity into the notion of “agent”, further practical questions need to be addressed: How do CAVs need to be designed to satisfy inclusivity demands? How much diversity is to be taken into consideration? Should every CAV design consider all possible special needs? Alternatively, should only a subset of special needs be addressed? How is this subset to be determined? As these important practical questions show, inclusivity challenges are at least as complex as the kinds and degrees of diversity they are supposed to cope with. Integrating diversity into the notion of “agent” is theoretically required but might lead to a controversial practical outcome. If abstractly assumed, the notion would imply that each CAV design should address each special need of each possible user. This would amount to setting an unreachable (and unreasonable) threshold, condemning inclusivity to inconsistency.

Arguably, however, inclusivity does not require technologies to be designed with all possible instances of diversity in mind. CAVs need not be designed considering


the special needs of all possible agents. The abstract variety of user specificities should be pruned depending on the contextual features of the case at hand. How, where, and to which goals a particular CAV is used also determines which agents, with different degrees of agency and autonomy, will probably use it or interact with it. In other words, inclusivity is context-specific: the suitable needs to be considered are not those of possible agents but rather of probable ones. Relevant methodological implications follow from this. For inclusivity to be practically pursued, contextual analysis specifying the diversity relevant to the technology under development is required. The design of inclusive CAVs needs methodologies to identify probable users and their special needs. The inclusive design scholarship offers valuable insights in this sense. This design approach aims at envisioning products, services, or environments in ways that make them usable for as many people as reasonably possible without requiring specialised adaptions [37]. In addition to the peculiar needs of users suffering from cognitive and physical disabilities [21], inclusive design considers a wide range of aspects constituting human diversity, including language, culture, gender, and age. As such, it represents a solid viewpoint to shed light on implementing inclusivity in driving automation. Inclusive design involves engaging with probable users, understanding their particular needs, and informing design choices accordingly. However, identifying the right stakeholders and correctly representing their needs is a rather complicated matter [58]. Misrepresentations are easy to form. Frequently discussed methodologies to handle such a delicate task include steps such as carefully considering contextual factors, developing empathy for the needs of potential users, forming highly diverse research teams, creating and testing multiple solutions, encouraging dialogue regarding different design options, and using structured processes that guide conversations toward productive outcomes [1]. Systematic self-observation, user observation, interviews, user trials, questionnaires, checklists, simulations, and expert appraisals are all possible strategies to gather fine-grained information about probable users and their needs—each of which comes with its share of merits and problems [18, 57]. Table 1 lists commonly used methodologies that are possible to use for HMI design with some examples or case studies from the relevant literature. Inclusive design methods can be variously applied to CAV design to better grasp the probable agents involved and their needs. Based on this knowledge, inclusive design choices can be informed appropriately. As previously clarified, detailed information concerning probable users and agents who are likely to interact with the system is necessary to design for inclusivity—i.e., to design interfaces that will effectively structure interactions between systems and the diverse set of agents who will probably interact with them. Contextual knowledge of the transport tasks executed by the CAV and the specific needs of the probable agents involved thus represent the two main empirical conditions that must be considered to design inclusive interfaces and, through them, inclusive CAVs.

Table 1 Commonly used methodologies for HMI design with some examples from relevant literature

User-centred design
– Definition: it attempts to optimise user interfaces around users’ working processes rather than forcing users to change their usage habits to suit the software development designers’ thoughts.
– Related methods: card sorting; interviews; focus groups; questionnaire; survey; personas; prototyping; usability testing; evaluation, etc.
– Example/case studies: [64], [22], [42], [38].
– Objectives: to present a user-centred, iterative approach for HMI design for highly automated truck driving; to highlight the lack of inclusive concepts in the design of external communication for CAVs, especially for people with vision impairments (VIP); to understand how effectively HMI can be designed to provide the driver with safe control; to collect insights and feedback from various sources, including interviews and surveys.
– Design process/methods: analysis, design, implementation and deployment, using expert workshops, qualitative methods (heuristic evaluation, thinking aloud) and quantitative methods (questionnaires); personas; semi-structured interviews; focus groups; surveys; field studies; design workshop stage; workshop with VIP (concept evaluation; requirement analysis); VR simulation.
– Conclusion: user-centred design can use qualitative data to represent the user’s perspective and cater to the broader non-mainstream population by specifying the importance of including diversity in sampled groups.

Participatory design
– Definition: it attempts to actively involve all stakeholders (e.g., employees, partners, customers, citizens, and end users) in the design process to help ensure the result meets their needs.
– Related methods: observation; interviews; focus groups; journey mapping; role-playing; ethnography; prototyping; usability testing, etc.
– Example/case studies: [33], [14], [59], [75].
– Objectives: to design HMIs for autonomous vehicles with a visually impaired co-designer; to satisfy the experiential needs of blind and low-vision users; to explore users’ expectations for future automotive technology; to engage the local community and public transport service providers in a participatory endeavour to facilitate the development of design solutions.
– Design process/methods: semi-structured interviews; group brainstorming sessions; establishing system task flow using the Lucidchart diagramming tool; drawing; collaging; interviews; prototyping; participatory workshops; Wizard of Oz approach; participatory forums; surveys; usability testing.
– Conclusion: participatory design has proven useful when designing for users who represent a population with unique needs. Input from the co-designer made the design selection clearer. It helped capture the end user’s perspective and fed back on earlier iterations of the prototype.

System centred design
– Definition: it focuses on context and is particularly appropriate for complex problems. The system (that is, the people, computers, objects, devices, and so on) is the centre of attention, while the users’ role is to set the system’s goals.
– Related methods: scenarios; modelling; use cases; verification, etc.
– Example/case studies: [26], [80].
– Objectives: to identify and categorise the information used by the driver to make “transparent” a Level 3 autonomous system; to ensure that the system and the driver had a common understanding of the context of overall transportation systems involving CAVs.
– Design process/methods: Cognitive Work Analysis (CWA): (1) cognitive work analysis application; (2) algorithm building; (3) interface specifications; (4) user tests for interface evaluation; System Modelling Language (SysML); defining scenarios; use cases; verification (driving simulator).
– Conclusion: the system-centred design focuses on organising the functionality of the system and building the product in the designer’s own interpretation and implementation of systems thinking. It can define the functions that satisfy the requirements of HMI.


5.2 Case Study: Setting the Stage

These methodological considerations can be applied to explore inclusivity demands in terms of interface design in relation to a case study involving an automated shuttle transporting users within a university campus. Here is the setting of our case study. A shared CAV is to be deployed for public transportation within a university campus—e.g., the Bovisa Campus of the Politecnico di Milano, Italy. The vehicle to be adopted is an EasyMile EZ10 shuttle (https://easymile.com/), designed to run autonomously and safely in mixed traffic scenarios (i.e., bicycles, pedestrians, low-speed autos) and under different weather conditions (such as heat, snow, rain, etc.). More specifically, the transport task of the case study involves the transportation of a group of ten people (six seated and four standing) from one point on campus to another. As its context of use immediately suggests, the shuttle is likely to be utilised by different agent categories, ranging from students and employees to more generic visitors. The set of probable agents to be considered is thus very diverse. Due to the multiple classes of people moving on the campus, design for inclusivity requires explicitly exploring various issues raised by agent diversity.

Evidently, our transportation task can only partially be accomplished in an inclusive way by relying exclusively on the automation of driving tasks. In fact, it includes several high-level activities that fall into the category of HMI and must be dealt with through interface design. Consider, for instance, the following user needs:

– receiving general information about the service: when it is available, how it works, and where it stops;
– receiving assistance in getting in and out of the shuttle, also in the presence of specific conditions or impairments;
– receiving information about the context of use: the ride through the campus, the lesson schedules, the location of special events, etc.;
– while on board, receiving information about the system status, destination, and arrival;
– communicating and handling emergencies, whether caused by shuttle malfunctioning or passengers’ needs;
– understanding and handling possible safety-critical offboard events, such as stop requests by pedestrians or other vehicles.

Adopting a design for inclusivity approach would mean supporting these and similar needs in ways that allow access to the service by the greatest possible number of probable campus users according to their capabilities. This last aspect is significant for a university campus where multiple users with different ages, genders, levels of impairments, cultural backgrounds, and technical expertise share the same space and service. In fact, they will do so under different conditions and according to different needs. These agent features determine various levels of autonomy in the use of the service or during interactions with the shuttle across the campus. The different kinds of



communication and assistance that these agents require directly affect how interfaces are to be envisioned and designed. Interface limitations, both in sensorial and cognitive terms, will reduce the potential of the system to support the mobility of diverse users. The inclusivity of our shuttle massively depends on how its interfaces will be designed to cope with user diversity. To guide the inclusive design of the shuttle interfaces, the first step consists in specifying user diversity as clearly as possible. As already noted, the set of probable agents to be considered in this case is broad and multifarious. A wide variety of diversity must be considered. To do so, we build on the work of Allu et al. [3], which deals with the accessibility of interfaces for personal automated vehicles. According to their study, visual, hearing, speech, upper and lower extremities impairments have a considerable impact on interface design in driving automation. Being kinds of impairments that probable campus users are likely to present, they deserve our attention. Moreover, further considerations concerning our use context have led us to include other sources of diversity, namely cognitive impairments, language, and culture. In what follows, we discuss these sources of user diversity and their impact on interface design in an attempt to devise solutions that improve the overall inclusivity of the shuttle.
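To make the discussion more concrete, the following sketch shows one way the diversity dimensions just listed (visual, hearing, speech, upper and lower extremity, and cognitive impairments, plus language and familiarity with the service) could be encoded when specifying probable shuttle users. It is a minimal, purely illustrative Python example; the names AgentProfile and Impairment are ours and hypothetical, not part of any existing system or of the case study as implemented.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Impairment(Enum):
    """Sources of user diversity considered for the campus shuttle."""
    VISUAL = auto()
    HEARING = auto()
    SPEECH = auto()
    UPPER_EXTREMITY = auto()
    LOWER_EXTREMITY = auto()
    COGNITIVE = auto()


@dataclass
class AgentProfile:
    """A probable shuttle user described along the diversity dimensions
    discussed in the text: impairments, language, and familiarity with the service."""
    impairments: set = field(default_factory=set)
    languages: tuple = ("it",)          # languages the agent understands
    literate: bool = True
    familiar_with_service: bool = False


# Example: a visually impaired visitor who does not speak the local language.
visitor = AgentProfile(
    impairments={Impairment.VISUAL},
    languages=("en",),
)
print(visitor)
```

A profile of this kind could then be queried to decide which interfaces and information channels a given rider can actually rely on, along the lines discussed in the following subsections.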

5.3 Physical Impairments

Our analysis starts with visual impairments, which are arguably the most challenging to address. Performing a transportation task like the one we are considering requires exchanging high-level information by means of the vehicle’s on- and off-board interfaces, which commonly rely on graphical elements. Consequently, meaningful collaboration with the system could be very difficult for visually impaired agents. For instance, a visually impaired agent is arguably entitled to receive critical information relative to the vehicle’s operability, such as confirmation of the journey, potential issues about the route, surrounding points of interest (POI), upcoming stops, start and stop actions, doors opening, etc. Relying exclusively on graphical elements would fail to support visually impaired users’ agency and autonomy. On the contrary, interfaces and interaction modalities should consider the limitations and peculiarities of visually impaired people together with the needs of other agents.

Audio-based interfaces might prove a viable alternative to satisfy the needs of visually impaired agents. Moreover, their use is widespread in public transportation, and their implementation can be rather simple, requiring only a voice synthesis engine and an audio system. Since the amount of information conveyed in this modality is limited, the content of audio feedback must be carefully designed and tested. Indeed, too much information given in a small amount of time can rapidly overload users’ cognitive capabilities [68]. In addition, different information should reach the right agents without blending. Consider, for instance, information on the direction and status of the shuttle during the ride and its stop position relative to other points of reference and interest. This information is fundamental for visually impaired agents


to navigate the space around themselves [13]. However, it would be redundant for other agent categories, who would perceive it as annoying or confusing background noise.

Concerning input modality, no effective interface has yet been implemented and widely adopted for visually impaired agents. An artificial conversational agent endowed with advanced speech recognition and dialogue capabilities could represent a possible option. However, speech recognition is not an optimal solution in a public environment with high noise levels and privacy constraints. Another option could be tactile input devices such as braille readers and keyboards. Due to the public use of the shuttle, touchless haptic interfaces would arguably be more appropriate. An example of this device is the Ultra-Leap Stratos [19], which can generate mid-air haptic feedback by imitating the feeling of physical touch on actual surfaces. Since it can also generate braille text [54], the device could be a viable candidate as an input/output interface for visually impaired agents. However, the technology is still in its initial development phase and currently lacks reliable, proven commercial solutions.

Hearing-impaired agents present less complicated challenges. In this case, visual interfaces can be used as the main means of interaction. To accommodate the needs of colour-blind users as well, these interfaces should contain large, clear text and a well-contrasted colour palette [36, 76]. Input modalities could include a combination of capacitive touchscreens, physical buttons, and keyboards to complement the visual interfaces of the vehicle. The implementation of touchless interfaces, as mentioned above, is also a viable possibility. The recourse to audio-based artificial conversational agents, which already suffer from the limitations discussed above, would prove useless to hearing-impaired agents and agents with speech impairments, whose only peculiar need arguably consists of avoiding interactions with speech recognition systems.

Finally, for agents with mobility impairments, the position of the interfaces assumes particular importance since it heavily impacts their accessibility. Therefore, detachable or wireless controls could help. We also need to bear in mind that some of these agents—e.g., upper extremity-impaired users—might be unable to use tactile interfaces. A mix of visual, auditory, and speech interfaces is likely to better meet their needs. As input modalities, we should consider three further options: eye tracking, gesture recognition, and a live remote operator. Eye-tracking technology records the gaze location and movements of the users’ eyes through cameras and then uses these data to control the interface directly. Even if this technology can be suitable for people with severe motor problems, it can raise different issues in terms of setup and usability [73] and excludes visually impaired people. Similarly, gesture recognition uses the tracking of the user’s hand movements to enable interaction [28]. However, besides possible tracking issues, this technique tends to exclude people with upper arm impairments. A live remote operator, instead, is a company employee who is designated to complete tasks for users when they are unable to do so themselves. Highly impaired people can communicate with the interface through a remote operator, who can show/fulfil the user’s demands by remotely managing the interface or obtaining database information.
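The requirement that different information should reach the right agents without blending can be illustrated with a small routing sketch. The example below is purely hypothetical: it assumes that passengers can register a preference for detailed orientation audio (for instance, via a paired personal device), and it simply separates generic cabin announcements from detailed cues so that riders who do not need the latter are not exposed to them.

```python
from dataclasses import dataclass, field


@dataclass
class Passenger:
    name: str
    needs_orientation_audio: bool = False   # e.g. registered as visually impaired
    received: list = field(default_factory=list)


def route_announcement(text, detail_level, passengers, cabin_log):
    """Send generic service messages to the shared cabin speakers, but deliver
    detailed orientation cues (heading, stop position, door side) only to the
    passengers who asked for them, so other riders do not hear them as noise."""
    if detail_level == "generic":
        cabin_log.append(text)              # played once over the cabin speakers
    else:
        for p in passengers:
            if p.needs_orientation_audio:
                p.received.append(text)     # e.g. via a paired personal device


cabin = []
riders = [Passenger("A", needs_orientation_audio=True), Passenger("B")]
route_announcement("Next stop: main building", "generic", riders, cabin)
route_announcement("Doors open on the right; kerb 20 cm below", "orientation", riders, cabin)
print(cabin, riders[0].received, riders[1].received)
```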


A general overview of how the previous observations affect the interface design for the case study is provided in Tables 2 and 3. These tables, which have been elaborated drawing on the work of Allu et al. [3], show the possible design options for the different interfaces considered so far.

Table 2 Input and output interfaces allow users to enter and receive information through different sensory modalities

Interface | Input | Output
Visual | Eye-tracking; Gesture recognition | Display
Auditory/speech | Voice recognition (with or without voice assistant); Live remote operator | Speakers (verbal/audio feedback)
Tactile | Capacitive touchscreen; Physical button/keyboard; Wired/wireless connected mobile device | Braille reader; Touch-less haptic interface

Table 3 Different types of physical abilities and how they influence how users give and receive information through the interface (* the use of the interface could be limited according to the single agent’s impairments)

Interface | Visually impaired | Hearing impaired | Speech impaired | Upper extremity impaired
Eye tracking | – | X | X | X
Gesture recognition | X | X | X | *
Voice recognition | X | – | – | X
Remote operator | X | – | – | X
Capacitive touchscreen | – | X | X | *
Physical button/keyboard | * | X | X | *
Wired/wireless connected mobile device | * | X | X | *
Display | – | X | X | X
Speaker | X | – | X | X
Braille reader | X | X | X | *
Touch-less haptic interface | X | X | X | *
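One way to operationalise the information behind Tables 2 and 3 during interface selection is to encode the compatibility relations as data and filter the available interfaces against an agent’s impairment profile. The sketch below is illustrative only: it uses a simplified subset of exclusions suggested by the discussion above, not a validated mapping, and the dictionary keys are hypothetical labels of our own.

```python
# Interfaces that a given impairment plausibly rules out (illustrative subset
# of the compatibility relations discussed above, not a validated mapping).
UNSUITABLE = {
    "visual": {"eye tracking", "display", "capacitive touchscreen"},
    "hearing": {"speaker", "voice recognition"},
    "speech": {"voice recognition"},
    "upper_extremity": {"gesture recognition", "physical button/keyboard"},
}

ALL_INTERFACES = {
    "eye tracking", "gesture recognition", "voice recognition", "remote operator",
    "capacitive touchscreen", "physical button/keyboard", "display", "speaker",
    "braille reader", "touch-less haptic interface",
}


def candidate_interfaces(impairments):
    """Return the interfaces not excluded by any of the agent's impairments."""
    excluded = set().union(*(UNSUITABLE.get(i, set()) for i in impairments))
    return ALL_INTERFACES - excluded


print(sorted(candidate_interfaces({"visual"})))
print(sorted(candidate_interfaces({"visual", "upper_extremity"})))
```

A filter of this kind only narrows down candidates; whether the remaining interfaces actually support a given agent still has to be verified through the usability testing discussed in Sect. 6.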


5.4 Cognitive Impairments and Diversity

Addressing HMI inclusivity also implies considering cognitive impairments and diversity. Indeed, able-bodied agents suffering from cognitive impairments (e.g., dyslexic agents) or experiencing cognitive limitations (e.g., inability to understand the local language) might also have various difficulties in effectively collaborating with our shuttle. Interaction failures might be engendered by visual interfaces that include too much text or difficult words. While some cognitive impairments, such as dyslexia, might be coped with by implementing interfaces that make use of verbal and auditory modalities [72], cognitive limitations such as language diversity or illiteracy would thus remain unaddressed.

Language diversity and illiteracy pose hard challenges to interface design. Language, whether spoken or written, is the most common medium to exchange information and massively contributes to the ways in which information is processed and acted upon. Indeed, it has been noticed that people from different cultures approach problems differently, learn differently, and have different cognitive skills and strategies for processing information [72]. Creating the conditions for successful collaboration between artificial systems and illiterate agents or users who do not speak the local language proves to be complicated. Passengers who cannot understand, speak, or read the language used in the shuttle are difficult to reach. In a multicultural environment, knowledge of the local language cannot be presupposed, and due to the wide diversity of potential users, all languages cannot be considered. All agents, however, should have the possibility to request information and be able to give control instructions in case of need, danger, or emergency. For this reason, the on- and off-board interfaces should be straightforward to read and understand. Regarding illiteracy—an arguably rather improbable case among university campus users—relying on audio feedback, either through speech recognition technology or resorting to a live operator, might help better convey the information [45]. For what concerns cultural diversity, presenting the same message in multiple languages might prove effective when possible. Another solution could be the creation of graphic user interfaces based exclusively on (hopefully) self-explanatory pictorial elements, such as icons and illustrations [47]. Moreover, mixing different outputs (auditory, visual, tactile…) to stress the same message might also be of help.

Memory loss due to dementia or Alzheimer’s disease, as well as autism spectrum conditions, are examples of disorders or neurodiversity for which the design of an HMI requires particular attention. Categorising these kinds of agents is difficult due to the differences in the reactions that the same impairment can produce (see https://www.alzheimers.org.uk/), and this makes it harder to design an HMI suitable to their disorder. A general design guideline should include HMI simplification by grouping its functions. In this way, agents with memory loss will have a smaller pool of function keys to choose from at any one time, resulting in shorter messages with fewer details to memorize [81]. Having difficulties in remembering the elements’ position could disorient and frustrate these


agents if they fail [4]. Similarly, agents with Parkinson’s disease, where the neurological impairment also causes motor disorder, require a longer time to complete an action. Therefore, increasing time-outs [4] before closing a page, a pop-up, a form, or any other element could be a viable solution. In addition, agents with dementia may have issues with colour, shapes, and movements, which are important elements of the current graphical interfaces. That being said, it is to be noted that these categories of agents are unlikely to visit a University campus without further assistance. Conversely, agents in the autism spectrum exhibit different behaviours, the peculiarities of which also require dedicated attention. For instance, these agents have been found to communicate better through graphic elements rather than text [35]. A further, quite straightforward solution could be to enable the connection between the shuttle information system and the agents’ own devices (e.g., smartphones), thus allowing a new layer of personalisation and control over the vehicle. According to [15], this solution might be welcomed by some categories of physically impaired agents, such as blind and low-vision agents, who expressed interest in the possibility of interacting with the vehicle through their smartphones and the voiceover capabilities of their devices. However, this solution comes with its own share of problems, as the interfacing of external personal devices poses various conceptual, ethical, and technical challenges.
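Several of the adjustments mentioned in this subsection (grouped functions with fewer options per screen, longer time-outs, icon-first and multilingual messages) can be thought of as a per-profile HMI configuration. The following sketch illustrates the idea in Python; the profile tags and numeric values are invented for illustration and would need to be established empirically, not taken as recommended design parameters.

```python
from dataclasses import dataclass


@dataclass
class HmiSettings:
    max_items_per_screen: int = 8       # function keys shown at any one time
    timeout_seconds: float = 10.0       # delay before pop-ups and forms auto-close
    icon_first_labels: bool = False     # prefer pictorial elements over text
    multilingual_text: bool = False     # repeat key messages in several languages


def adapt_for(profile_tags):
    """Adjust baseline HMI settings for some of the needs discussed above.
    Tags and values are illustrative, not validated design parameters."""
    s = HmiSettings()
    if "memory_loss" in profile_tags:
        s.max_items_per_screen = 4              # grouped functions, fewer choices
        s.timeout_seconds = 60.0                # longer time-outs before elements close
    if "parkinsons" in profile_tags:
        s.timeout_seconds = max(s.timeout_seconds, 60.0)
    if "autism_spectrum" in profile_tags:
        s.icon_first_labels = True              # graphic elements over text
    if "non_local_language" in profile_tags:
        s.icon_first_labels = True
        s.multilingual_text = True
    return s


print(adapt_for({"memory_loss", "non_local_language"}))
```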

6 Conclusion and Future Outlooks

As a way to apply a design for inclusivity approach, the last section has looked into several possible interfaces that could be used on our shuttle to cope with some of the impairments and limitations of probable users. Our analysis is far from comprehensive: it only showcases the methodology that could be adopted to explore inclusivity challenges. Even such a cursory discussion is enough to realise that while every proposed solution presents both advantages and disadvantages, no single interface can fully accommodate all agents’ needs. A one-size-fits-all option is not available. Consequently, the need for integrating multiple interfaces becomes inescapable.

This need, however, takes time to satisfy. A wide variety of interfaces is required to cover the most significant portion of probable agents. Such a design strategy might easily generate a confusing and informationally noisy environment, which would be detrimental in terms of HMI. The use of multiple interfaces should then be paired with modalities to distribute information differentially, i.e., to provide it only to those who need it, when they need it, and in the ways in which they need it.

This is where theoretical inquiries must leave the stage for more practical studies. Once the various layers of complexity inherent to inclusive design have been identified, and reasoning on HMI, user categories, and interfaces has been structured, evidence on fine-tuning interactions between the driving system and its users must come from the field. Theoretical inquiries can only inform design choices concerning different interfaces and their setups, which must then be tested for usability. Usability verifications of various interface setups for our shuttle will provide further evidence


of its inclusivity prospects. By integrating theory and practice, our approach presents the potential to guide the design of CAVs toward inclusivity in a fine-grained and viable manner.

As an example of the further stages of our research, we conclude this chapter by sketching a usability test to be conducted in the future by focusing on a specific agent category: visually impaired people. Not only do visually impaired users represent one of the most challenging agent classes for designing effective HMI in driving automation; interface technologies used for accommodating their needs—in particular, haptic and gesture recognition technologies—can also be effectively applied to other impairments. Indeed, working with impaired agents is an opportunity to discover new interaction patterns and paradigms that may benefit all shuttle users. The study will involve a group of generic users and a group of visually impaired agents to compare a traditional touchscreen interface with a touchless haptic interface. Interactions between generic users and the shuttle will be observed to gather information on the usability and effectiveness of the two interfaces in the absence of identifiable peculiar needs. Interactions between the visually impaired group and the shuttle, instead, will be assessed along with user satisfaction to verify the suitability of tactile input devices for this peculiar category of agents. Moreover, the test aims to compare the two cases and gather initial information about how diverse agents cope with different interfaces. We hope that evidence gathered through this field study will help us better understand how to build effective interfaces for the user category that, in our opinion, poses the greatest challenges to the value of inclusivity in driving automation.
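As a rough illustration of how such a test could be organised, the sketch below enumerates the planned comparison as a small condition matrix (two user groups crossed with two input interfaces), together with measures that might be logged per session. The specific measures listed are assumptions on our part, not a finalised protocol.

```python
from itertools import product

# Hypothetical outline of the planned comparison: two user groups crossed with
# two input interfaces; the measures listed are assumptions for illustration.
groups = ["generic users", "visually impaired users"]
interfaces = ["capacitive touchscreen", "touch-less haptic interface"]
measures = ["task completion", "completion time", "errors", "reported satisfaction"]

sessions = [
    {"group": g, "interface": i, "measures": measures}
    for g, i in product(groups, interfaces)
]

for s in sessions:
    print(f"{s['group']:>25}  x  {s['interface']}")
```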

References

1. Afonso, V.A., Calisto, M.D.: Innovation in experiential services: trends and challenges. In: Carvalho, L. (ed.) Handbook of Research on Internationalization of Entrepreneurial Innovation in the Global Economy, pp. 390–401. IGI Global, Hershey (2015). https://doi.org/10.4018/978-1-4666-8216-0.ch019 2. Ahearn, L.M.: Language and agency. Annu. Rev. Anthropol. 30, 109–137 (2001). https://doi.org/10.1146/annurev.anthro.30.1.109 3. Allu, S., Jaiswal, A., Lin, M., Malik, A., Ozay, L., Prashanth, T., Duerstock, B.S.: Access to personal transportation for people with disabilities with autonomous vehicles (2017). https://docs.lib.purdue.edu/ugcw/1 4. Ancient, C., Good, A.: Issues with designing dementia-friendly interfaces. In: Stephanidis, C. (ed.) HCI International 2013—Posters’ Extended Abstracts. HCI 2013. Communications in Computer and Information Science, vol. 373, pp. 192–196. Springer, Berlin-Heidelberg (2013). https://doi.org/10.1007/978-3-642-39473-7_39 5. Azenkot, S., Prasain, S., Borning, A., Fortuna, E., Ladner, R.E., Wobbrock, J.O.: Enhancing independence and safety for blind and deaf-blind public transit riders. In: CHI’11. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3247–3256. ACM, New York (2011). https://doi.org/10.1145/1978942.1979424


6. Banerjee, S.: Autonomous vehicles: a review of the ethical, social and economic implications of the AI revolution. Int. J. Intell. Unmanned Syst. 9(4), 302–312 (2021). https://doi.org/10. 1108/IJIUS-07-2020-0027 7. Bennett, R., Vijaygopal, R., Kottasz, R.: Willingness of people who are blind to accept autonomous vehicles: an empirical investigation. Transport. Res. F: Traffic Psychol. Behav. 69, 13–27 (2020). https://doi.org/10.1016/j.trf.2019.12.012 8. Bezyak, J.L., Sabella, S.A., Gattis, R.H.: Public transportation: an investigation of barriers for people with disabilities. J. Disabil. Policy Stud. 28(1), 52–60 (2017). https://doi.org/10.1177/ 1044207317702070 9. Bischoff, S., Ulrich, C., Dangelmaier, M., Widlroither, H., Diederichs, F.: Emotion recognition in user-centered design for automotive interior and automated driving. In: Binz, H., Bertsche, B., Bauer, W., Spath, D., Roth, D. (eds.) Stuttgarter Symposium Für Produktentwicklung, SSP 2017, Produktentwicklung im disruptiven Umfeld, Stuttgart, pp. 1–10. Fraunhofer IAO, Stuttgart (2017) 10. Boy, G.A. (ed.): The Handbook of Human-Machine Interaction. A Human-Centered Design Approach. CRC Press, Boca Raton (2011) 11. Bradley, M., Langdon, P.M., Clarkson, P.J.: An inclusive design perspective on automotive HMI trends. In: Antona, M., Stephanidis, C. (eds.) Universal Access in Human-Computer Interaction. Users and Context Diversity. UAHCI 2016. Lecture Notes in Computer Science, vol. 9739, pp. 548–555. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40238-3_ 52 12. Bradshaw-Martin, H., Easton, C.: Autonomous or ‘driverless’ cars and disability: a legal and ethical analysis. Euro. J. Curr. Legal Issues 20(3), 1–17 (2014). http://webjcli.org/index.php/ webjcli/rt/printerFriendly/344/471 13. Brinkley, J., Huff, E.W., Posadas, B., Woodward, J., Daily, S.B. and Gilbert, J.E.: Exploring the needs, preferences, and concerns of persons with visual impairments regarding autonomous vehicles. ACM Trans. Access. Comput. 13(1), Article 3 (2020). https://doi.org/10.1145/337 2280 14. Brinkley, J., Posadas, B., Sherman, I., Daily, S.B., Gilbert, J.E.: An open road evaluation of a self-driving vehicle human-machine interface designed for visually impaired users. Int. J. Hum.-Comput. Interact. 35(11), 1018–1032 (2019). https://doi.org/10.1080/10447318.2018. 1561787 15. Brinkley, J., Posadas, B., Woodward, J., Gilbert, J.E.: Opinions and preferences of blind and low vision consumers regarding self-driving vehicles: results of focus group discussions. In: Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ‘17), pp. 290–299. ACM, New York (2017). https://doi.org/10.1145/3132525. 3132532 16. Brunnauer, A., Buschert, V., Segmiller, F., Zwick, S., Bufler, J., Schmauss, M., Messer, T., Möller, H.-J., Frommberger, U., Bartl, H., Steinberg, R., Laux, G.: Mobility behaviour and driving status of patients with mental disorders—An exploratory study. Int. J. Psychiatry Clin. Pract. 20(1), 40–46 (2016). https://doi.org/10.3109/13651501.2015.1089293 17. Brunnauer, A., Herpich, F., Zwanzger, P., Laux, G.: Driving performance under treatment of most frequently prescribed drugs for mental disorders: a systematic review of patient studies. Int. J. Neuropsychopharmacol. 24(9), 679–693 (2021). https://doi.org/10.1093/ijnp/pyab031 18. Cardoso, C., Keates, S., Clarkson, J.: Assessment for inclusive design. In: Clarkson, J., Keates, S., Coleman, R., Lebbon, C. (eds.) Inclusive Design, pp. 454–474. 
Springer, London (2003). https://doi.org/10.1007/978-1-4471-0001-0_28 19. Carter, T., Seah, S.A., Long, B., Drinkwater, B., Subramanian, S.: UltraHaptics: multi-point mid-air haptic feedback for touch surfaces. In: Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST ‘13), pp. 505–514. ACM, New York (2013). https://doi.org/10.1145/2501988.2502018 20. Clark, L., Cowan, B.R., Roper, A., Lindsay, S., Sheers, O.: Speech diversity and speech interfaces: considering an inclusive future through stammering. In: Proceedings of the 2nd Conference on Conversational User Interfaces, pp. 1–3. ACM, New York (2020). https://doi.org/10. 1145/3405755.3406139

82

S. Arfini et al.

21. Clarkson, J.P., Coleman, R.: History of inclusive design in the UK. Appl. Ergon. 46(Part B), 235–247 (2015). https://doi.org/10.1016/j.apergo.2013.03.002 22. Colley, M., Walch, M., Gugenheimer, J., Askari, A., Rukzio, E.: Towards inclusive external communication of autonomous vehicles for pedestrians with vision impairments. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–14. ACM, New York (2020). https://doi.org/10.1145/3313831.3376472 23. Crayton, T.J., Meier, B.M.: Autonomous vehicles: developing a public health research agenda to frame the future of transportation policy. J. Transp. Health 6, 245–252 (2017). https://doi. org/10.1016/j.jth.2017.04.004 24. Crudden, A., McDonnall, M.C., Hierholzer, A.: Transportation: an electronic survey of persons who are blind or have low vision. J. Vis. Impair. Blind. 109(6), 445–456 (2015) 25. Davy, L.: Philosophical inclusive design: intellectual disability and the limits of individual autonomy in moral and political theory. Hypatia 30, 132–148 (2015). https://doi.org/10.1111/ hypa.12119 26. Debernard, S., Chauvin, C., Pokam, R., Langlois, S.: Designing human-machine interface for autonomous vehicles. IFAC-PapersOnLine 49(19), 609–614 (2016). https://doi.org/10.1016/j. ifacol.2016.10.629 27. Epting, S.: Automated vehicles and transportation justice. Philos. Technol. 32, 389–403 (2019). https://doi.org/10.1007/s13347-018-0307-5 28. Funes, M.M., Trojahn, T.H., Fortes, R.P.M., Goularte, R.: Gesture4all: a framework for 3D gestural interaction to improve accessibility of web videos. In: Proceedings of the ACM Symposium on Applied Computing, pp. 2151–2158. ACM, New York (2018) 29. Goggin, G.: Disability, connected cars, and communication. Int. J. Commun. 13, 2748–2773 (2019). https://ijoc.org/index.php/ijoc/article/view/9021/2691 30. Gurney, J.K.: Crashing into the unknown: an examination of crash-optimization algorithms through the two lanes of ethics and law. Albany Law Rev. 79(1), 183–267 (2016). http://www. albanylawreview.org/Articles/vol79_1/183%20Gurney%20Production.pdf 31. Hardman, S., Berliner, R., Tal, G.: Who will be the early adopters of automated vehicles? Insights from a survey of electric vehicle owners in the United States. Transp. Res. Part D: Transp. Environ. 71, 248–264 (2019). https://doi.org/10.1016/j.trd.2018.12.001 32. Horizon 2020 Commission Expert Group to advise on specific ethical issues raised by driverless mobility: Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility (2020). https://op.europa.eu/en/publicationdetail/-/publication/89624e2c-f98c-11ea-b44f-01aa75ed71a1/language-en 33. Huff, E.W., Lucaites, K.M., Roberts, A., Brinkley, J.: Participatory design in the classroom: exploring the design of an autonomous vehicle human-machine interface with a visually impaired co-designer. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 64(1), 1921–1925 (2020) 34. Jacko, J.A.: Human Computer Interaction Handbook. Fundamentals, Evolving Technologies, and Emerging Applications, 3rd edn. CRC Press, Boca Raton (2012). https://doi.org/10.1201/ b11963 35. Kamaruzaman, M.F., Rani, N.M., Nor, H., Azahari, M.H.H.: Developing user interface design application for children with Autism. Procedia. Soc. Behav. Sci. 217, 887–894 (2016). https:// doi.org/10.1016/j.sbspro.2016.02.022 36. Karagol-Ayan, B.: Color Vision Confusion. (2001). http://www.co-bw.com/DMS_color_ vsion_confusion.htm 37. 
Keates, S.: Design for the value of inclusiveness. In: van den Hoven, J., Vermaas, P., van de Poel, I. (eds.) Handbook of Ethics, Values, and Technological Design, pp. 383–402. Springer, Dordrecht (2015). https://doi.org/10.1007/978-94-007-6970-0_15 38. Kong, P., Cornet, H., Frenkler, F.: Personas and emotional design for public service robots: a case study with autonomous vehicles in public transportation. In: 2018 International Conference on Cyberworlds, pp. 284–287. IEEE Press, Piscataway (2018). https://doi.org/10.1109/CW. 2018.00058 39. Kumfer, W.J., Levulis, S.J., Olson, M.D., Burgess, R.A.: A human factors perspective on ethical concerns of vehicle automation. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60(1), 1844–1848 (2016). https://doi.org/10.1177/1541931213601421

Design for Inclusivity in Driving Automation: Theoretical and Practical …

83

40. Kun, A.L.: Human-machine interaction for vehicles: review and outlook. Found. Trends Hum.Comput. Interact. 11(4), 201–293 (2018). https://doi.org/10.1561/1100000069 41. Kun, A.L., Boll, S., Schmidt, A.: Shifting gears: user interfaces in the age of autonomous driving. IEEE Pervasive Comput. 15(1), 32–38 (2016). https://doi.org/10.1109/MPRV.2016.14 42. Langdon, P., Politis, I., Bradley, M., Skrypchuk, L., Mouzakitis, A., Clarkson, J.: Obtaining design requirements from the public understanding of driverless technology. In: Stanton, N. (ed.) Advances in Human Aspects of Transportation. AHFE 2017. Advances in Intelligent Systems and Computing, vol. 597, pp. 749–759. Springer, Cham (2018). https://doi.org/10. 1007/978-3-319-60441-1_72 43. Large, D.R., Clark, L., Burnett, G., Harrington, K., Luton, J., Thomas, P., Bennett, P.: “It’s small talk, Jim, but not as we know it”: engendering trust through human-agent conversation in an autonomous, self-driving car. In: Proceedings of the 1st International Conference on Conversational User Interfaces—CUI ‘19, pp. 1–7. ACM, New York (2019). https://doi.org/ 10.1145/3342775.3342789 44. Lim, H.S.M., Taeihagh, A.: Autonomous vehicles for smart and sustainable cities: an in-depth exploration of privacy and cybersecurity implications. Energies 11(5), 1062 (2018). https://doi. org/10.3390/en11051062 45. Mahadevan, K., Somanath, S., Sharlin, E.: Communicating awareness and intent in autonomous vehicle-pedestrian interaction. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ‘18), pp. 1–12. ACM, New York (2018). https://doi.org/10.1145/ 3173574.3174003 46. Martens, K.: Transportation Justice. Designing Fair Transportation Systems. Routledge, London (2017) 47. Medhi, I., Sagar, A., Toyama, K.: Text-free user interfaces for illiterate and semi-literate users. In: 2006 International Conference on Information and Communication Technology and Development, ICTD2006, pp. 72–82. IEEE Press, Piscataway (2006). https://doi.org/10.1109/ICTD. 2006.301841 48. Mutlu, B., Roy, N., Šabanovi´c, S.: Cognitive human–robot interaction. In: Siciliano, B., Khatib, O. (eds.) Springer Handbook of Robotics, pp. 1907–1934. Springer, Cham (2016). https://doi. org/10.1007/978-3-319-32552-1_71 49. Nees, M.A.: Safer than the average human driver (who is less safe than me)? Examining a popular safety benchmark for self-driving cars. J. Safety Res. 69, 61–68 (2019). https://doi. org/10.1016/j.jsr.2019.02.002 50. Nguyen, P., d’Auria, V., Heylighen, A.: Detail matters: exploring sensory preferences in housing design for autistic people. In: Langdon, P., Lazar, J., Heylighen, A., Dong, H. (eds.) Designing for Inclusion, pp. 132–139. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-438654_14 51. Noggle, R.: Special agents: children’s autonomy and parental authority. In: Archard, D., Macleod, C.M. (eds.) The Moral and Political Status of Children, pp. 97–117. Oxford University Press, Oxford (2002). https://doi.org/10.1093/0199242682.003.0006 52. O’Day, B.L., Killeen, M., Iezzoni, L.I.: Improving health care experiences of persons who are blind or have low vision: suggestions from focus groups. Am. J. Med. Qual. 19(5), 193–200 (2004). https://doi.org/10.1177/106286060401900503 53. Oliveira, L., Luton, J., Iyer, S., Burns, C., Mouzakitis, A., Jennings, P., Birrell, S.: Evaluating how interfaces influence the user interaction with fully autonomous vehicles. 
In: Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ‘18), pp. 320–331. ACM, New York (2018). https://doi.org/10. 1145/3239060.323906 54. Paneva, V., Seinfeld, S., Kraiczi, M., Müller, J.: HaptiRead: reading braille as mid-air haptic information. In: Proceedings of the 2020 ACM Designing Interactive Systems Conference, pp. 13–20. ACM, New York (2020). https://doi.org/10.1145/3357236.3395515 55. Parviainen, J.: Kinetic values, mobility (in)equalities, and ageing in smart urban environments. Ethical Theory Moral Pract. 24, 1139–1153 (2021). https://doi.org/10.1007/s10677-021-102 49-6

84

S. Arfini et al.

56. Pereira, R.H.M., Schwanen, T., Banister, D.: Distributive justice and equity in transportation. Transp. Rev. 37(2), 170–191 (2017). https://doi.org/10.1080/01441647.2016.1257660 57. Petrie, H., Bevan, N.: The Evaluation of accessibility, usability, and user experience. In: Stephanidis, C. (ed.) The Universal Access Handbook, Chapter 20. CRC Press, Boca Raton (2009) 58. Petrie, H., Hamilton, F., King, N., Pavan, P.: Remote usability evaluations with disabled people. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘06), pp. 1133–1141. ACM, New York (2006). https://doi.org/10.1145/1124772.1124942 59. Pettersson, I., Karlsson, I.C.M.: Setting the stage for autonomous cars: a pilot study of future autonomous driving experiences. IET Intel. Transp. Syst. 9(7), 694–701 (2015). https://doi. org/10.1049/iet-its.2014.0168 60. Politis, I., Langdon, P., Adebayo, D., Bradley, M., Clarkson, P.J., Skrypchuk, L., Mouzakitis, A., Eriksson, A., Brown, J.W.H., Revell, K., Stanton, N.: An evaluation of inclusive dialogue-based interfaces for the takeover of control in autonomous cars. In: 23rd International Conference on Intelligent User Interfaces, pp. 601–606. ACM, New York (2018). https://doi.org/10.1145/317 2944.3172990 61. Politis, I., Langdon, P., Bradley, M., Skrypchuk, L., Mouzakitis, A., Clarkson, P.J.: Designing autonomy in cars: a survey and two focus groups on driving habits of an inclusive user group, and group attitudes towards autonomous cars. In: Di Bucchianico, G., Kercher, P.F. (eds.) Advances in Design for Inclusion. Advances in Intelligent Systems and Computing, pp. 161– 173. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-60597-5_15 62. Reeves, B., Nass, C.I.: The Media Equation: how People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, Cambridge (1996) 63. Richardson, B.C.: Transportation ethics. Transp. Q. 49(2), 117–126 (1995) 64. Richardson, N.T., Lehmer, C., Lienkamp, M., Michel, B.: Conceptual design and evaluation of a human machine interface for highly automated truck driving. In: 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 2072–2077. IEEE Press, Piscataway (2018). https://doi.org/10. 1109/IVS.2018.8500520 65. Rothberg, M.A.: Designing for inclusion: ensuring accessibility for people with disabilities. In: Edmunds, M., Hass, C., Holve, E. (eds.) Consumer Informatics and Digital Health, pp. 125–143. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-96906-0_7 66. Santoni de Sio, F.: The European Commission report on ethics of connected and automated vehicles and the future of ethics of transportation. Ethics Inf. Technol. 23, 713–726 (2021). https://doi.org/10.1007/s10676-021-09609-8 67. Sheridan, T.B.: Human-robot interaction: status and challenges. Hum. Factors 58(4), 525–532 (2016). https://doi.org/10.1177/0018720816644364 68. Shneiderman, B.: The limits of speech recognition. Commun. ACM 43(9), 63–65 (2000). https://doi.org/10.1145/348941.348990 69. Singleton, P.A., De Vos, J., Heinen, E., Pud¯ane, B.: Potential health and well-being implications of autonomous vehicles. In: Milakis, D., Thomopoulos, N., van Wee, B. (eds.) Advances in Transport Policy and Planning, vol. 5, pp. 163–190. Elsevier Academic Press, Cambridge-San Diego-Oxford-London (2020). https://doi.org/10.1016/bs.atpp.2020.02.002 70. Smyth, J., Jennings, P., Birrell, S.: Are you sitting comfortably? 
How current self-driving car concepts overlook motion sickness, and the impact it has on comfort and productivity. In: Stanton, N. (eds.) Advances in Human Factors of Transportation. AHFE 2019. Advances in Intelligent Systems and Computing, vol. 964, pp. 387–399. Springer, Cham (2019). https://doi. org/10.1007/978-3-030-20503-4_36 71. Stilgoe, J.: How can we know a self-driving car is safe? Ethics Inf. Technol. 23, 635–647 (2021). https://doi.org/10.1007/s10676-021-09602-1 72. Thies, I.M.: User interface design for low-literate and novice users: past, present and future. Found. Trends Hum.-Comput. Interact. 8(1), 1–72 (2015). https://doi.org/10.1561/1100000047 73. Valtakari, N.V., Hooge, I.T.C., Viktorsson, C., Nyström, P., Falck-Ytter, T., Hessels, R.S.: Eye tracking in human interaction: possibilities and limitations. Behav. Res. 53, 1592–1608 (2021). https://doi.org/10.3758/s13428-020-01517-x

Design for Inclusivity in Driving Automation: Theoretical and Practical …

85

74. van Loon, R.J., Martens, M.H.: Automated driving and its effect on the safety ecosystem: how do compatibility issues affect the transition period? Procedia Manuf. 3, 3280–3285 (2015). https://doi.org/10.1016/j.promfg.2015.07.401 75. Verma, H., Pythoud, G., Eden, G., Lalanne, D., Evéquoz, F.: Pedestrians and visual signs of intent: towards expressive autonomous passenger shuttles. Proc. ACM Interact. Mob. Wear. Ubiquitous Technol. 3(3), 1–31 (2019). https://doi.org/10.1145/3351265 76. WAI—Web Accessibility Initiative: Web Content Accessibility Guidelines [WCAG] 2 (2022). https://www.w3.org/WAI/standards-guidelines/wcag/. Accessed 29 Mar. 2022 77. Waller, S., Bradley, M., Hosking, I., Clarkson, P.J.: Making the case for inclusive design. Appl. Ergon. 46, 297–303 (2015). https://doi.org/10.1016/j.apergo.2013.03.012 78. Yang, J., Coughlin, J.F.: In-vehicle technology for self-driving cars: advantages and challenges for aging drivers. Int. J. Automot. Technol. 15, 333–340 (2014). https://doi.org/10.1007/s12 239-014-0034-6 79. Young, M., Magassa, L., Friedman, B.: Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics Inf. Technol. 21, 89–103 (2019). https://doi.org/10.1007/s10676-019-09497-z 80. Yun, S., Teshima, T., Nishimura, H.: Human-machine interface design and verification for an automated driving system using system model and driving simulator. IEEE Consum. Electron. Mag. 8(5), 92–98 (2019). https://doi.org/10.1109/MCE.2019.2923899 81. Zajicek, M.: Interface design for older adults. In: Proceedings of the 2001 EC/NSF Workshop on Universal Accessibility of Ubiquitous Computing: providing for the Elderly (WUAUC’01), pp. 60–65. ACM, New York (2001). https://doi.org/10.1145/564526.564543

From Prototypes to Products: The Need for Early Interdisciplinary Design
Stefano Arrigoni, Fabio Fossa, and Federico Cheli

Abstract Aligning the design of Connected and Automated Vehicle (CAV) technologies with relevant ethical values raises many challenges. In this regard, a particularly delicate step is the transition from prototypes to products. Research and testing on prototypes usually put ethically relevant aspects on hold to focus on more pressing functional requirements. When prototypes are transformed into products, however, ethical issues can no longer be set aside and must be responsibly assessed and dealt with. Nevertheless, the consideration of ethical requirements only at the stage of product design might lead to results that are far from optimal. Aligning technologies with relevant values at a later stage might imply massive repercussions necessitating profound design revisions. Undesirable trade-offs might thus surface between technical and socio-ethical requirements. This chapter explores pitfalls that might hinder compliance with important ethical values in the design of CAV technologies during the delicate transition from prototypes to products. After some introductory considerations, it discusses the issue by presenting a hypothetical case study on privacy and cybersecurity problems in the prototypical design of a communication system connecting CAVs and smart infrastructure devices. Insights from the field of design ethics are examined to stress the importance of supporting interdisciplinary collaboration from the very outset of the design process.

Keywords Driving automation · Design ethics · Prototype · Product · Cybersecurity

S. Arrigoni (B) · F. Fossa · F. Cheli Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa, 1, 20156 Milan, Italy e-mail: [email protected] F. Fossa e-mail: [email protected] F. Cheli e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_5


1 Introduction

Influencing design decisions and processes is a key objective for the ethics of technology. Even though its scope extends far beyond design practices, focusing on them is critical to ensure alignment between technological products and legitimate moral expectations. Indeed, choices concerning the configuration of a technological product significantly affect its ethical profile—i.e., the ethical values and disvalues it mediates once deployed. Incorporating the right ethical values, and in the right ways, into the configuration of technological products considerably depends on the time and effort allocated during the design process to identify and tackle ethical issues. Supporting interdisciplinary design practices aimed at duly accounting for ethical concerns is then of utmost importance. The present chapter explores difficulties in embedding ethical values into the technical configuration of Connected and Automated Vehicles (CAVs) stemming from the internal complexity of design processes. By concentrating attention on the delicate stage of prototype design, we believe that our work offers a comparatively unique contribution to research on design ethics in driving automation.

Here is an overview of the chapter. Section 2 takes a closer look at design as an object of ethical study and claims that a distinction between prototype design and product design is relevant to the issue of embedding ethical values into technologies. In short, we argue that prototype design proves more resistant to ethical considerations, which are usually postponed to later design stages. However, integrating modifications required by ethical concerns at the production stage might necessitate significant rework, which would put decision-makers in highly uncomfortable positions. To better substantiate our claims, Sect. 3 presents a case study concerning privacy and cybersecurity requirements in communication between automated vehicles and smart infrastructure devices. Building on the case study, Sect. 4 draws on insights from the debate on design ethics to insist on the importance of tackling ethical concerns from the very beginning—i.e., already at the prototype stage. Section 5 summarises the results of the inquiry and discusses avenues for future research.

2 Prototypes, Products, and Design Ethics

In the past decades, scholars and practitioners interested in the ethical dimension of technology have increasingly focused their attention on design practices [9, 21, 26, 27]. The process through which new technologies are imagined, conceptualised, detailed, and realised is one of the most crucial circumstances where ethical values and technological artefacts intertwine [7, 30, 32]. Through design, ethical values and disvalues get embedded in technological artefacts [14, 31, 34]. Depending on which design choices are explicitly or implicitly taken, technological products are endowed with features that can either promote or hinder compliance with given moral objectives. While functioning or being used, artefacts mediate the values thus
consciously or unconsciously embedded in them. Since designers’ choices and practices contribute to determining the ethical profile of their outcomes, design is then to be philosophically understood as a value-laden activity. The realisation that ethical values are embedded into technological artefacts through design processes immediately calls for the exercise of responsibility on the part of practitioners (e.g., [17, 18]). Given the severity of its potential social effects, value embedment through design must be explicitly tackled. Designers have an obligation to make sure that technological products align with the legitimate moral expectations of involved stakeholders. Sure enough, different degrees of ethical compliance by design can be pursued. Stated in negative terms, the obligation demands to verify that features producing ethically problematic outcomes are carefully identified and appropriately modified. A more proactive attitude would ensure that relevant ethical values are embedded into technological artefacts, so that their accomplishment is positively supported [11]. Accordingly, various research approaches—e.g., privacy-by-design, safetyby-design, inclusive design, and so on—have been developed to help practitioners explore how to integrate given ethical values into the configuration of technological products by making the right design choices. Consciously and reflectively managing value embedment is no easy task [19]. Many obstacles can obstruct awareness and ethically adequate value implementation (e.g., [2, 4, 22, 28, 33]). For example, ethical values and disvalues might be transferred to artefacts implicitly or unwillingly. Moreover, relevant values might be hard to realise and describe, so that their embedment might be overlooked or misunderstood. Even if correctly identified, ethical values could conflict with each other or with other relevant criteria such as effectiveness, efficiency, and cost reduction, thus generating thorny trade-offs. Translating moral values into measurable thresholds and quantitative requirements is also bound to raise controversy. Finally, identifying and acknowledging relevant stakeholders, so that their values and claims can be duly factored in, often prove extremely difficult. A further difficulty we would like to stress is that different stages of the design process might be more or less receptive to ethical inputs. Design ethics research has always highlighted the importance of introducing moral analysis early on in the design process. Yet, different design stages might raise different challenges in this sense. We surmise that a distinction between prototype design and product design might be relevant here.1 We believe it is important to address this distinction explicitly in the design ethics debate. Assuming that the moral obligations of the engineering profession are acknowledged, ethical considerations related to the social adoption of technology can hardly be bracketed during product design. Indeed, the closeness between production and 1

In the literature on Value Sensitive Design, prototyping has been recently acknowledged as a crucial step to be carried out before moving on to the production stage in order to assess how well a given design is able to meet value-based requirements and whether unexpected outcomes occur (e.g., [9, 23]). In this sense, prototyping is already understood as part of an ethically aware design process. Our research tackles a slightly, but importantly different object: prototyping as a technically oriented design stage where ethical considerations are arguably easier to marginalise.

90

S. Arrigoni et al.

deployment brings to the forefront the urgency of responsibly accounting for potential ethical issues. Conversely, ethical considerations are arguably bound to exert a less pressing influence during prototype design. When working on prototypes, considerations concerning social impacts are necessarily more remote. Contrary to products, prototypes are generally envisioned as early configurations to be tested in highly artificial settings mostly populated by trained experts and briefed experimental subjects. Their roll out is remote enough to incline practitioners to simplify conditions of use and, thus, postpone considerations about various particulars—ethical worries included. Moreover, being mostly aimed at proving the effectiveness and efficiency of particular solutions, functional criteria are likely to be prioritised over socio-ethical requirements. Both considerations suggest that the penetration of ethical analysis into prototype design is harder to obtain. In other words, prototype design is likely to be more resistant to ethics. The claim according to which prototype design might be more resistant to the implementation of ethical concerns has yet to be fully identified in the design ethics domain. However, a recent paper by Vakkuri and colleagues [24] suggests that concerns in this sense might be well-founded. As the authors argue, claiming that the object of research is “just a prototype” functions as a justification to dodge ethical objections or worries and put off such issues to later design stages. The article showcases the results of an empirical study aimed at assessing the extent to which ethical considerations were taken into account during the design of prototypical, proof-of-concept AI-based applications for the healthcare sector. The studied design environment is “startup-like”: a context characterised by “agile methods”, “notable time pressure”, and “scarce resources”, which captures many of the features of prototype design processes ([24], p. 200). Semi-structured interviews with developers and managers revealed that the prototypical nature of the projects was indeed perceived as a valid justification not to consider ethical issues, thus leading to lack of commitment and dismissal. Responsibility was mostly felt towards technical aspects, such as meeting project goals, finding bugs, and handling errors. As the authors conclude, In our data, the reason given by multiple respondents for not actively considering ethical issues was that they were developing a prototype. However, prototypes do influence the final product or service based on them (…). AI ethical issues should be tackled during earlier stages of development as well, seeing as many of them are higher-level design decisions (…) which can be difficult to undo later. ([24], p. 207)

We believe that similar considerations could apply even beyond the domain of AI-based healthcare applications and be relevant to driving automation as well. Even though the goals and challenges are different, the circumstantial factors that describe prototype design—agile methods, scarce resources, time pressure, focus on technical performances and functional aspects—are not exclusive to the context considered by Vakkuri and colleagues. Indeed, they arguably apply to prototype design in general. Hence, similar barriers to the consideration of ethical issues are to be expected in other areas of prototype design too—CAV prototype design included. Their identification and discussion are critical to realise how to move past them and promote the integration of ethical values into technological products before it is too late.


In what follows, we substantiate our intuitions on CAV prototype design and ethical concerns by presenting a case study where the dismissal of ethical considerations at the prototype stage would require massive modifications to satisfy privacy and cybersecurity requirements at the production stage. We argue that a situation where decision-makers must choose between living up to reasonable moral expectations at the cost of significant technical revisions or proceeding with production at the cost of ethical risks is already to be framed as a moral failure, regardless of the actual choice made. Indeed, the dilemma should be prevented from surfacing to begin with. Section 4 draws on the debate on design ethics to explore how to integrate ethical values into the design of technological prototypes, so as not to require massive reconfiguration and put decision-makers in the difficult position of trading off efficiency and ethics.

3 A Case Study

The case study we are about to introduce discusses a possible difficulty in the ethically compliant design of communications between CAVs and infrastructural devices. Before detailing the traffic situation we have in mind and the reasons why ethical issues might arise in the transition from prototypes to products, let us briefly introduce the importance of communication as a resource to improve road safety through driving automation.

3.1 Setting the Stage: Driving Automation and Communication

In the last couple of decades, both industry and academia have been extensively focusing on the emerging field of driving automation. The delegation of dynamic driving tasks to automated vehicles promises to relieve users from driving stress as well as substantially improve safety and energy efficiency, thus potentially contributing to ethically desirable road traffic objectives. At the same time, connected vehicles (United States) or cooperative Intelligent Traffic Systems (ITS, Europe) have been massively developed, adding a new functionality to automated driving systems. These concepts rely on data communication among vehicles (V2V) and/or between vehicles and infrastructure (V2I/I2V) in order to enable the implementation of ITS applications [10]. The entanglement between driving automation and communication is expected to grow tighter and tighter in the future. Data provided through connectivity have in fact proved to be crucial to enable safe and robust vehicle automation. Indeed, connectivity and communication are expected to significantly improve road safety, driver assistance, traffic management, and energy use [1, 12].


Among the advantages of allowing information exchange between vehicles and other systems, a particularly important role is played by supporting perception tasks through infrastructural devices. As thoroughly explained in Chap. 3, machine perception is a fundamental ingredient of driving automation. In order for planning and control algorithms to compute instructions and activate actuators, detailed data concerning the vehicle environment are needed. Environmental data are gathered mainly through onboard sensors such as LiDAR (Light Detection And Ranging), GPS (Global Positioning System), IMU (Inertial Measurement Unit) for self-localisation purposes and LiDAR, RADAR (RAdio Detection And Ranging) and cameras for object recognition and tracking. Data thus gathered are then fused into representations of the environment which, in turn, inform planning and control decision-making algorithms. Given the role played by machine perception in the overall performance of automated driving systems, it is necessary to ensure that perceptual data actually contain relevant information about the road environment. Even though significant advances have been made in the last years, however, environmental sensors do present several limitations in term of reliability, feasibility, and efficiency. For instance, failing to detect an obstacle or misidentify a pedestrian as a tree might lead to considerable safety hazards. Furthermore, sensors can be affected by false positives—e.g., the detection of an obstacle where there is none—or false negative—e.g., failing to detect an obstacle. Moreover, data provided by sensors might not contain enough information to take a correct decision, since relevant environmental aspects might pass undetected. For example, a speeding vehicle approaching a city intersection might remain hidden to other vehicles if the lane where it rides is closely surrounded by thick building walls. Furthermore, sensors have restricted perception capacities and operating conditions due to their intrinsic features. As an example, fading lane lines or broken traffic lights can cause misidentification of critical environmental features and, as a consequence, lead to wrong control decisions and actions. Also, sensors generally suffer from blind spots, which might obviously threaten the safe operation of automated vehicles. Moreover, weather conditions can variously affect the correct functioning of sensors. For example, the reflective properties of the road might reduce LiDAR accuracy with sunny weather, while given light conditions might disturb the capabilities of cameras. Finally, onboard sensors have a limited range, which constrains their functionality to apply only to the immediate vicinity of the CAV. Therefore, extremely valuable information regarding more distant traffic situations like traffic jams or accidents would remain undetected until they are within range, which is arguably too late to intelligently adapt CAV behaviour accordingly. These limitations have induced developers to explore redundant solutions, integrating an increasing number of sensors together. However, this solution is not free from risks, since it increases the complexity of the machine perception system and makes it more computationally demanding. At the same time, redundancy dramatically increases the overall costs of the system, while complicating the evaluation of the behaviour of the algorithms. 
Given the crucial importance of machine perception for driving automation and the limitations of environmental sensors, solutions to
support the task of feeding planning and control algorithms with relevant information cannot but be welcomed. This is what connectivity and communication can provide. Communication technologies can compensate for sensor limits by making reliable data available to driving systems in feasible and efficient ways. This could be obtained not only by allowing CAVs to communicate with each other and exchange valuable traffic information. CAVs could also be connected with infrastructural devices gathering data on distant stretches of road or on road sections that CAV sensors could not “perceive”, which would offer the possibility to plan routes and govern CAV behaviour in a much safer and more optimised way. Defining and testing technologies enabling information flows between CAVs and smart infrastructure devices could then serve many ethical traffic objectives—from safety to reduced energy consumption to less personal time wasted in traffic jams. Building on these considerations, the case study presented in the next section revolves around the design of a prototypical communication system between CAVs and smart infrastructure devices.
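To make the idea of extending onboard perception with infrastructure data more concrete, here is a minimal Python sketch of how detections from the vehicle and from a roadside camera might be merged into a single environment model. The message fields, class names, and the distance-based duplicate check are illustrative assumptions rather than part of any system described in this chapter; a real perception stack would use probabilistic track association and filtering instead.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    """A single detected road user (hypothetical message format)."""
    obj_id: str
    obj_type: str      # e.g. "car", "bicycle", "pedestrian"
    x_m: float         # position in a shared map frame, metres
    y_m: float
    speed_mps: float
    heading_deg: float
    source: str        # "onboard" or "infrastructure"

def merge_environment(onboard, infrastructure, match_radius_m=2.0):
    """Naively fuse onboard and infrastructure detections.

    Infrastructure tracks lying close to an onboard track are treated as
    duplicates; the remaining ones extend the vehicle's field of view
    beyond occlusions and sensor range.
    """
    merged = list(onboard)
    for infra_obj in infrastructure:
        duplicate = any(
            (infra_obj.x_m - o.x_m) ** 2 + (infra_obj.y_m - o.y_m) ** 2
            <= match_radius_m ** 2
            for o in onboard
        )
        if not duplicate:
            merged.append(infra_obj)
    return merged

if __name__ == "__main__":
    onboard = [TrackedObject("o1", "car", 12.0, 3.5, 8.0, 180.0, "onboard")]
    # A vehicle hidden behind a building: only the roadside camera sees it.
    infra = [TrackedObject("i7", "car", 35.0, -20.0, 13.9, 90.0, "infrastructure")]
    for obj in merge_environment(onboard, infra):
        print(obj)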

3.2 Detecting Collision Risk at Intersections: A Hypothetical Scenario

To better exemplify the importance of communication, let us consider a more concrete domain of application: intersections. Also due to the limited visibility conditions they often present, intersections pose remarkable safety challenges to manual driving. According to [35], in Europe up to 38% of accidents with at least serious injury outcomes occur at intersections. Driving automation is bound to face similar issues. An automated vehicle approaching an intersection where the view is obstructed by angular building walls would not be able to gather information on incoming traffic. As a consequence, it would be forced to either risk occupying the intersection or slow down and stop even if the intersection is clear. Enabling communication between vehicles and infrastructure would tangibly change the situation. Connectivity and communication are indeed expected to both significantly reduce crashes at intersections and manage traffic more efficiently [13, 15, 16]. A communication system signalling the presence, direction, and velocity of vehicles approaching the intersection would allow CAVs to safely slow down and stop only when necessary, thus enhancing safety while allowing for smoother traffic, decreasing vehicle wear and tear, and saving energy. Consider, then, a hypothetical scenario involving communication between CAVs and smart infrastructure devices aimed at detecting collision risk at intersections. The scenario is supposed to exemplify a typical research activity in the field of automated and connected vehicle development and, thus, is analysed and discussed from an engineering perspective. More specifically, what follows describes the typical workflow of activities that are necessary to design and realise a proof of concept by means of a prototype.
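Before turning to that workflow, the kind of signal just mentioned, announcing the presence, direction, and velocity of approaching vehicles, can be pictured with a small sketch of a hypothetical infrastructure-to-vehicle payload. The field names, units, topic of the message, and values are invented for illustration and do not follow any standardised ITS message set.

import json
import time

def make_i2v_message(obj_id, obj_type, lane, speed_mps, heading_deg,
                     distance_to_intersection_m):
    """Build a hypothetical I2V payload describing one approaching road user."""
    return {
        "timestamp": time.time(),              # when the camera produced the track
        "object_id": obj_id,
        "object_type": obj_type,               # "car", "truck", "bicycle", ...
        "lane": lane,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
        "distance_to_intersection_m": distance_to_intersection_m,
    }

if __name__ == "__main__":
    msg = make_i2v_message("cam01-0042", "car", "east-approach", 13.9, 270.0, 45.0)
    # In a prototype, this JSON string would be pushed to approaching CAVs,
    # for example over a lightweight publish/subscribe channel.
    print(json.dumps(msg, indent=2))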


The main goal of the research activity we are about to introduce consists in minimising collision risk at intersections by enabling communication between CAVs and smart infrastructure devices—in our case, cameras. Sure enough, one might object that a traffic light could accomplish our goal just fine. However, traffic lights would be a less efficient solution in terms of time, energy use, emission, and vehicle wear and tear. In addition to safety, then, a solution that would allow to intelligently and flexibly manage traffic at intersections might impact positively on other socio-ethical values relevant to road transport. Enabling communication between CAVs and smart infrastructure devices could minimise collision risk while avoiding pitfalls associated with traffic lights. If considered in isolation from smart infrastructural devices and other vehicles, CAV safety performances at intersections would suffer from several constraints. Given the limitations discussed above, onboard sensors might be unable to gather relevant data concerning other road users approaching the intersection—not just cars or trucks, but also bicyclists and motorcyclists—and estimate their expected behaviour. The information contained in these data, then, are vital to handle the traffic situation safely. If they could be fed to CAVs, the risk of collisions would be computed much more reliably and, consequently, traffic situations could be handled more safely and efficiently. To feed driving systems with the data they need, but cannot gather by themselves, the perception system of the vehicle could be extended by giving it access to data generated by cameras installed in vantage points at the intersection. In this way, the driving system approaching the crossroad would be able to factor in information coming from smart infrastructure devices as well and plan future behaviour accordingly. More specifically, it would be able to actuate the steering wheel and control the velocity so to minimise collision risk while, at the same time, avoiding unnecessary braking and stops. Figure 1 details the traffic scenario that we have in mind. The red vehicle approaching the intersection from the bottom is driving on a straight line. Due to range limitations and the building walls on its right blocking perception, onboard sensors—cameras, LiDAR, RADAR—are unable to detect the yellow vehicle, which is also approaching the intersection from the right. If the red CAV had to rely exclusively on its sensors, uncertainty concerning the incoming presence of other potential road users would recommend decreasing speed—and, perhaps, stopping—in order to minimise collision risk. In fact, onboard sensors would be unable to detect the threat in time to activate any evasive manoeuvre. However, braking and stopping would be unnecessary were the intersection clear. In this case, braking and stopping for safety reasons would engender the same pitfalls that we briefly discussed with reference to traffic lights. Conversely, enabling the red CAV to access data generated by smart infrastructure devices would circumvent the problem. As showed by its detection range, the smart camera monitoring traffic at the intersection is placed in the best position to detect and track vehicles incoming from the right. Were it designed to communicate information on incoming traffic to approaching vehicles, the capability of driving systems to
maximise safety while, at the same time, handling traffic situations efficiently would be significantly improved. To catch this opportunity, a reliable communication system connecting CAVs and smart infrastructure devices with each other must be set up. The task of the research activity we analyse in the next subsection consists precisely in designing a prototypical communication system between CAVs and smart cameras. As will be shown, integrating ethical considerations into its design might prove harder than anticipated.

Fig. 1 Representation of the system
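As a rough illustration of how the red CAV could exploit the camera's track of the yellow vehicle, the following Python sketch compares the two vehicles' estimated arrival times at the conflict point and flags the crossing as risky when they would reach it within a small time gap of each other. All numbers, names, and the two-second threshold are hypothetical and chosen only for illustration; an actual planner would reason over full trajectories and uncertainty estimates.

def time_to_conflict(distance_m: float, speed_mps: float) -> float:
    """Time for a vehicle to reach the conflict point at constant speed."""
    return float("inf") if speed_mps <= 0 else distance_m / speed_mps

def crossing_is_risky(ego_dist_m, ego_speed_mps,
                      other_dist_m, other_speed_mps,
                      safety_gap_s=2.0):
    """Flag a potential conflict when both vehicles would occupy the
    intersection within `safety_gap_s` seconds of each other."""
    t_ego = time_to_conflict(ego_dist_m, ego_speed_mps)
    t_other = time_to_conflict(other_dist_m, other_speed_mps)
    return abs(t_ego - t_other) < safety_gap_s

if __name__ == "__main__":
    # Red CAV: 40 m from the conflict point at roughly 30 km/h (8.3 m/s).
    # Yellow vehicle, reported by the smart camera: 45 m away at 50 km/h (13.9 m/s).
    if crossing_is_risky(40.0, 8.3, 45.0, 13.9):
        print("Yield: slow down before the intersection")
    else:
        print("Intersection clear: proceed at current speed")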

3.3 From Prototypes to Products: Ethical Challenges

In order to structure communication between devices connected in a network, a common design approach is to define information flows and split the overall process into subtasks. Figure 2 exhibits the whole information flow and related subtasks proper to the case under study. Three main components are considered: the smart camera, an online server, and the vehicle.

Fig. 2 Subtasks and information flows

The information flow is articulated as follows. From left to right, images are first acquired by the camera. As a second step, either (i) detection and tracking algorithms running on the camera computing unit process input images and send tracking data (such as position, speed, direction, and type of object detected) to an online server; or (ii) unprocessed images are directly sent to the online server for further elaboration or human inspection (e.g., insurers gathering data on a specific situation or
engineers seeking to improve the performance of the algorithms). Finally, tracking data is shared with the vehicle driving system. The control unit matches the information on incoming traffic with localisation data in order to properly react to possible threatening situations and manage longitudinal and lateral control accordingly. Since decisions concerning system behaviour at intersections must be both reliable and swift, the speed and straightforwardness of information exchange are particularly important functional criteria to evaluate the efficiency of a communication solution. In the prototype design stage, getting high performances in terms of efficiency is critical. To better highlight this, prototype developers would likely opt for solutions that align with said design goals. For instance, they might opt for a solution that uploads real-time images to the online server “as they are”, in their entirety. Moreover, absent prior arrangements, they would probably rely on public online servers and unencrypted IoT communication protocols to set up and test their network. Thanks to their immediate availability, good performance, and ease of implementation, similar solutions allow control engineers and computer scientists—i.e., those who possess the technical expertise to handle detection, tracking, and control design tasks—to set up a network even if the fine-grained expertise proper to communication and network engineers is unavailable, only partially accessible, or too time-consuming to resort to. Since, in the end, the technology under development is “just a prototype”, the solutions selected must be those that best fit functional design goals. In the end, one might think, only prototypes that have demonstrated their technical value have the chance to morph into products. Requirements other than efficiency, it is then supposed, are best addressed later on. Relying on raw images, public online servers, and open IoT protocols fits the design goal of providing easily accessible, fast, and straightforward information exchange between devices. From the technical point of view, the prototype will likely perform satisfactorily and be potentially adopted. It will incorporate the desired functional features, will allow for efficient communication, and will be presented as a viable solution to the traffic situation under discussion. However, crucial ethical requirements need to be factored in as commercial implementation, production stage, and social deployment approximate. Suppose that the prototype is adopted for implementation in automated vehicles and road infrastructure. When a technology transitions from prototype stage to product stage, ethical considerations cannot be further postponed. An ethical indepth analysis is thus carried out and, as a result, further requirements emerge. Privacy concerns speak against collecting, sharing, and storing raw images, since
they might contain personal information such as bodily traits and plate numbers, which can directly or indirectly lead to the identification of individuals. Cybersecurity worries caution against reliance on public servers and open IoT communication protocols. The vast majority of publicly available solutions—e.g., a MQTT protocol and a public online MQTT broker—cannot guarantee a sufficient level of security from illegitimate data access and malevolent manipulation. In light of what the analysis of the ethical aspects suggests, viable fixes must be implemented in the communication architecture. A possible solution to privacy worries could be the design or implementation of real-time automatic obfuscation algorithms that would render personal information unobtainable. For what concerns servers and IoT communication protocols, encryption techniques and/or the adoption of ad-hoc Virtual Private Networks (VPN) would be required. Living up to legitimate ethical expectations in terms of privacy and cybersecurity at such a late stage, however, would likely have severe impacts on the overall design of the communication system. Once the simplified assumptions taken at prototype design stage stop being tenable, required modifications could massively influence system efficiency. Running feature obfuscation algorithms might exceed the computing power originally estimated for the camera computing unit. This would pose the need of redesigning the hardware and facing tangible changes in term of costs, dimensions, and power consumption. Moreover, machine learning algorithms executing recognition tasks would have to be re-trained in order to process partially obfuscated images—a different kind of input data—with possible drops in term of speed and accuracy. Similarly, the execution of encryption programs would require additional computational power for what concerns both smart cameras and vehicles. Ultimately, the massive changes that would be needed to adequate the design to relevant ethical values could make the overall final solution not feasible for real implementation. Consider further, for example, the problem of latency. All the possible fixes that have been mentioned in the previous lines would necessarily require more time for messages to be sent and received. Encryption and decryption processes would slow down the exchange of information between the nodes of the network. The use of VPN would imply that data would be routed to an external server before reaching the desired destination, thus significantly increasing communication latency. In turn, increases in communication latency might cause the control system response time to grow significantly, to the point that it might end up failing to meet the safety requirements initially defined and fulfilled by the prototype performances. Let us assume, for example, that the necessary measures would cause a delay of just 0.1 s for a vehicle travelling at 50 km/h: this means a delay in taking a decision that can be quantified into around 1.4 m! When ethical concerns are raised—as, in our case, privacy and cybersecurity worries—opting for prototypical solutions that boost technical efficiency at the cost of postponing more complex considerations does not appear to be a viable strategy. From an ethical perspective, framing prototype design as a purely technical enterprise suffers from at least two main moral failures. First, it fails to duly acknowledge the relevance of socio-ethical issues, the urgency of which can only problematically
be tempered by insisting on the remoteness of social deployment. Secondly, postponing ethical design to the production stage puts too much pressure on decision-makers, whose choice for ethical compliance would entail massive project disruption in terms of design, costs, and time. Arguably, putting decision-makers in such an uncomfortable position should already be understood as a moral failure. Design ethics should prevent similar situations from occurring in the first place. The integration of ethical values into the design of technologies should not require revolutionary interventions. Rather, it should be pursued providently and systematically throughout the design process. The challenges that prototype design poses to the responsible identification and management of ethical problems are worthy of being carefully examined. As a final step, let us now draw on the literature discussing design ethics and look for suggestions on how to properly face them.
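Before turning to design ethics, the latency point made above can be checked with a small worked example. The sketch below converts an added communication delay into extra distance travelled before a decision can be taken and compares the total delay against a response-time budget. The baseline latency, the 0.2 s budget, and the assumption that obfuscation, encryption, and VPN routing together add about 0.1 s are hypothetical figures used only to reproduce the chapter's 50 km/h example.

def extra_travel_m(added_latency_s: float, speed_kmh: float) -> float:
    """Distance covered during the additional communication delay."""
    return added_latency_s * speed_kmh / 3.6

def within_budget(baseline_latency_s: float, added_latency_s: float,
                  decision_budget_s: float) -> bool:
    """Check whether the hardened pipeline still meets the response-time
    budget assumed when the prototype's safety margins were validated."""
    return baseline_latency_s + added_latency_s <= decision_budget_s

if __name__ == "__main__":
    # The chapter's example: an extra 0.1 s at 50 km/h means roughly 1.4 m
    # of travel before the vehicle can react.
    print(round(extra_travel_m(0.1, 50.0), 2))   # -> 1.39
    # Hypothetical budget: the prototype was validated with 0.15 s end-to-end
    # latency against a 0.2 s decision budget; the privacy and security fixes
    # add a further 0.1 s and break the budget.
    print(within_budget(0.15, 0.1, 0.2))         # -> False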

4 Start Early! Insights from Design Ethics

The case study discussed in the previous section confirms that the importance of integrating ethical considerations already at the prototype stage cannot easily be belittled. Even though remoteness from actual deployment might induce practitioners to delay more complex considerations concerning ethical impacts and requirements, there are good reasons to claim that prototype design should already be sensitive to these considerations. Ethical analysis can ensure that moral values are duly taken into consideration only if it starts early and is carried out consistently throughout the entire design process. Postponing decisions concerning ethical compliance to the production stage is morally unsatisfactory. Even though a focus on the specific features that make prototype design more resistant to the penetration of the ethical component is yet to emerge in the debate, some valuable insights on how to bind technical and ethical considerations in design processes can inform our discussion.

A good starting point for our remarks is the attention that has been dedicated to the notion of iteration. Iteration is a widely acknowledged principle in design ethics [8, 33]. The task of aligning technologies with relevant ethical values is not to be understood as one single step in the design process. Rather, it must be carried out consistently. If necessary, previous steps must also be reconsidered in light of emerging ethical information. Later stages in product design might uncover new evidence concerning the ethical profile of a product. Real-life testing and user creativity might easily bring about hardly predictable situations that lead to ethically significant outcomes. It is therefore critical not to envision ethics as just a stage in the design process. On the contrary, ethical analysis and discussion must be engaged in over and over again along the entire design process, so that countermeasures can be taken in a timely manner when new ethically important information surfaces. The ethical element in design practices must not be understood on a one-time basis. It must accompany design throughout the entire process and, if necessary, push it back to the beginning.

From Prototypes to Products: The Need for Early Interdisciplinary Design

99

Going back to the beginning, however, comes with obvious costs. As the case study shows, responsibly living up to legitimate moral expectations as a piece of technology transitions from its prototypical form to production might imply massive modifications, which would considerably disrupt the ability to meet functional requirements and project goals. Facing this choice, decision-makers might see support to the moral alternative shrinking. Indeed, introducing moral considerations only at production stage might lead to trade-offs between considerable design changes and ethical compliance. Aligning product design to ethically relevant constraints could imply severe costs in term of time and resources—and, thus, put ethical alignment in jeopardy. The emergence of thorny trade-offs between key criteria such as ethical values, functional parameters, and business-related interests at such a later stage is arguably to be framed as an ethical failure in itself—a situation that design ethics should prevent to occur. Countermeasures must be taken so that technical requirements meant to capture relevant ethical values are already part of the process when technologies transition from prototypes to products. Iteration, then, truly works only if ethical analysis and discussion is already a systematic part of the design process. If carried out inconsistently, going back to previous design stages could imply extremely high costs and disruptions—which would arguably represent a significant hindrance to moral compliance. For iteration to fulfil its function, ethics must be there since the beginning. Hence, prototype design is the most adequate phase in which to take high-level decisions based on ethical requirements. Ideally, adopting ethics methodologies during prototype design would help create the best possible conditions for relevant ethical values to be duly accounted for since the outset, thus minimising the risk of substantial ethical problems emerging at later design stages and requiring massive modifications. As it turns out, the design stage that is perceived as the most impermeable to ethical considerations plays a most crucial role in the integration of ethics into design practices. These considerations invite to strongly oppose the claim that ethical considerations could or should be bracketed during prototype design. On the contrary, design ethics methodologies must be adopted since the very outset of technology conceptualisation. This way, the risk of economic or functional considerations getting ahead of ethical compliance in later design stages can be minimised. To avoid decisionmaking conundrums where socio-ethical requirements must necessarily be balanced with technical and economical constraints, interdisciplinary collaboration aimed at tackling ethical challenges should be pursued since the prototype stage. Even if our problem has not been fully discussed in the debate, current design ethics frameworks provide solid support to the extension of ethical analysis to prototype design. In particular, the necessity to form interdisciplinary design teams is widely recognised as critical (e.g., [3, 29]). Interdisciplinary design is necessary since the complex normative profile of technological design requires diversified competences and skills to be properly tackled. On the one hand, the multifaceted technological nature of advanced systems such as CAVs require multidisciplinary engineering teams to partake in the design process. 
To get back to our example one last time, control engineers, computer scientists, and automotive engineers should work side
by side with communication and networking engineers. However, the social implication of driving automation also requires that a variety of non-engineering experts are included in the design process. Insights from ethicists, social scientists, legal experts, and policy scholars are key to steer design towards ethically and socially adequate directions. Identifying potential issues, conceptualising relevant values, specifying ensuing norms, and setting related requirements are all fundamental steps in design ethics that fully depend on interdisciplinary collaboration [28]. Involving the right professionals into the prototype design process is crucial for ethical compliance [5]. If the composition of the team properly reflects design needs, problems can be timely spotted and addressed. Given the remoteness of social adoption, the focus on functionality, and the simplification of deployment conditions, the penetration of the principle of interdisciplinary into prototype design practices might be less obvious. As a result, prototype design teams are likely to be highly specialised and less diverse in terms of competences and skills, which could lead to overlooking critical ethical challenges. Established methodologies—e.g., (Constructive) Technological Assessment or analogous practices integrated in wider design ethics frameworks such as Value Sensitive Design [8, 9, 20] or Design for Values [26]—should be adopted early on as tools to guide the composition of adequately diverse design teams. At the same time, compliance with known ethical values could also be pursued through technological mediation. In this way, the burdensome moral duty of identifying ethical issues and taking design decisions accordingly would not weigh solely on individual agency but could rather be off-loaded to the wider socio-technical system within which innovation processes occur [6, 25]. For example, institutional support could be given to the design of off-the-shelf system components and devices commonly used in CAV prototype building that already implement relevant ethical requirements. In our case, industrial standards or laws mandating cameras for the automotive sectors to come by-default with obfuscation algorithms might help practitioners comply with privacy requirements even at early prototype stages. Transforming ethical requirements into technical conditions rather than eventualities, this solution would ease the moral burden placed on designers and contribute to the embedding of relevant ethical values into prototype design.
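One way to picture the kind of by-default privacy feature mentioned above is a camera-side masking step applied before a frame ever leaves the device. The sketch below assumes that bounding boxes for faces or licence plates are already available from the camera's own detector; the function name, the box coordinates, and the masking strategy are purely illustrative and do not describe an existing product feature. It requires only NumPy.

import numpy as np

def obfuscate_regions(image: np.ndarray, boxes) -> np.ndarray:
    """Irreversibly mask the given (x, y, w, h) regions, e.g. faces or licence
    plates, before the frame is shared. Each region is replaced by its mean
    colour, so scene geometry is preserved while identifying detail is lost."""
    out = image.copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w]
        out[y:y + h, x:x + w] = region.mean(axis=(0, 1)).astype(out.dtype)
    return out

if __name__ == "__main__":
    # Stand-in for a camera frame (height x width x RGB).
    frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
    # Hypothetical detector output: one licence plate and one face.
    private_boxes = [(300, 350, 120, 40), (100, 80, 60, 60)]
    safe_frame = obfuscate_regions(frame, private_boxes)
    print(safe_frame.shape)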

5 Conclusion

In this chapter, the importance of actively pursuing the integration of ethical analysis into prototype design has been advocated. A hypothetical case study in automated driving communication has been discussed to better substantiate the issues under examination. Addressing the transition from prototypes to products from the perspective of design ethics has led to the following results.

First, prototype design is acknowledged as inherently more resistant to ethical analysis. Given its distance from deployment and the mainly technical nature of the aims pursued through prototypes, practitioners might easily be inclined to put off ethical considerations so as to serve more pressing functional objectives. Postponing ethical compliance to product design, however, presents noteworthy ethical risks. Fixes required for socio-ethical reasons might imply massive design modifications, which would confront professionals with very delicate choices. Such decision-making conundrums arguably represent a considerable source of risk—one that design ethics should prevent from occurring.

Opposing the view that ethical worries should be bracketed and advocating the adoption of design ethics methodologies during prototype design could help promote the implementation of ethical values into technological products. In particular, paying attention to the formation of interdisciplinary design teams already during prototype design is likely to have an ethically positive cascade effect on the entire design process. Accordingly, the adoption of design ethics during prototype design should be supported in terms of methodological tools, education, and best practices. Finally, embedding ethical requirements already in the design of off-the-shelf system components and devices might, at least in some cases, help manage designers' moral overload and promote a smoother transition towards ethically adequate technological configurations.

To conclude, practitioners should be encouraged to raise awareness of the importance of the prototype stage in setting the ethical tone of a technology. "It's just a prototype" should no longer be advanced as an excuse to dodge ethical commitment. On the contrary, the exercise of responsibility at the prototype design stage should be both demanded and promoted.

References

1. Alalewi, A., Dayoub, I., Cherkaoui, S.: On 5G–V2X use cases and enabling technologies: a comprehensive survey. IEEE Access 9, 107710–107737 (2021). https://doi.org/10.1109/ACCESS.2021.3100472
2. Bednar, K., Spiekermann, S.: Eliciting values for technology design with moral philosophy: an empirical exploration of effects and shortcomings. Sci. Technol. Hum. Values Online First 1–35 (2022). https://doi.org/10.1177/01622439221122595
3. Bisconti, P., Orsitto, D., Fedorczyk, F., Brau, F., Capasso, M., De Marinis, L., Eken, H., Merenda, F., Forti, M., Pacini, M., Schettini, C.: Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. AI Soc. Online First 1–10 (2022). https://doi.org/10.1007/s00146-022-01518-8
4. Davis, J., Nathan, L.P.: Value sensitive design: applications, adaptations, and critiques. In: van den Hoven, J., Vermaas, P., van de Poel, I. (eds.) Handbook of Ethics, Values, and Technological Design, pp. 11–40. Springer, Dordrecht (2015). https://doi.org/10.1007/978-94-007-6970-0_3
5. Devon, R., van de Poel, I.: Design ethics: the social ethics paradigm. Int. J. Eng. Educ. 20, 461–469 (2004)
6. Frank, L.E.: What do we have to lose? Offloading through moral technologies: moral struggle and progress. Sci. Eng. Ethics 26, 369–385 (2020). https://doi.org/10.1007/s11948-019-00099-y
7. Friedman, B., Kahn, P.H.: Human values, ethics, and design. In: Sears, A., Jacko, J.A. (eds.) The Human-Computer Interaction Handbook: fundamentals, Evolving Technologies and Emerging Applications, pp. 1242–1266. Taylor and Francis, New York-London (2008)
8. Friedman, B., Kahn, P.H., Borning, A.: Value sensitive design and information systems. In: Himma, K.E., Tavani, H.T. (eds.) The Handbook of Information and Computer Ethics, pp. 69–101. Wiley, Hoboken (2008)
9. Friedman, B., Hendry, D.G.: Value Sensitive Design. Shaping Technology with Moral Imagination. MIT Press, Cambridge (2019)
10. Hameed Mir, Z., Filali, F.: C-ITS applications, use cases and requirements for V2X communication systems—Threading through IEEE 802.11p to 5G. In: Pathan, A.S.K. (ed.) Towards a Wireless Connected World: Achievements and New Technologies, pp. 261–285. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-04321-5_11
11. Harris, C.E.: Engineering ethics: from preventive ethics to aspirational ethics. In: Michelfelder, D., McCarthy, N., Goldberg, D. (eds.) Philosophy and Engineering: reflections on Practice, Principles and Process. Philosophy of Engineering and Technology, vol. 15, pp. 177–187. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-7762-0_14
12. Karagiannis, G., Altintas, O., Ekici, E., Heijenk, G., Jarupan, B., Lin, K., Weil, T.: Vehicular networking: a survey and tutorial on requirements, architectures, challenges, standards and solutions. IEEE Commun. Surv. Tutor. 13(4), 584–616 (2011). https://doi.org/10.1109/SURV.2011.061411.00019
13. Khayyat, M., Arrigoni, S., Cheli, F.: Development and simulation-based testing of a 5G-connected intersection AEB system. Veh. Syst. Dyn. 60(12), 4059–4078 (2022). https://doi.org/10.1080/00423114.2021.1998558
14. Latour, B.: Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker, W.E., Law, J. (eds.) Shaping Technology/Building Society: studies in Sociotechnical Change, pp. 225–258. MIT Press, Cambridge (1992)
15. Malinverno, M., Avino, G., Casetti, C., Chiasserini, C.F., Malandrino, F., Scarpina, S.: Performance analysis of C-V2I-based automotive collision avoidance. In: 2018 IEEE 19th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1–9. IEEE, Piscataway (2018). https://doi.org/10.1109/WoWMoM.2018.8449772
16. Miller, R., Huang, Q.: An adaptive peer-to-peer collision warning system. In: Vehicular Technology Conference. IEEE 55th Vehicular Technology Conference. VTC Spring 2002 (Cat. No.02CH37367), pp. 317–321. IEEE, Piscataway (2002). https://doi.org/10.1109/VTC.2002.1002718
17. Nissenbaum, H.: How computer systems embody values. Computer 34(3), 120–119 (2001). https://doi.org/10.1109/2.910905
18. Roeser, S.: Emotional engineers: toward morally responsible design. Sci. Eng. Ethics 18, 103–115 (2012). https://doi.org/10.1007/s11948-010-9236-0
19. Schiaffonati, V.: Future reflective practitioners: the contributions of philosophy. In: Michelfelder, D., McCarthy, N., Goldberg, D. (eds.) Philosophy and Engineering: reflections on Practice, Principles and Process. Philosophy of Engineering and Technology, vol. 15, pp. 79–90. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-7762-0_7
20. Simon, J.: Value-sensitive design and responsible research and innovation. In: Hansson, S.O. (ed.) The Ethics of Technology. Methods and Approaches, pp. 219–235. Rowman & Littlefield, London-New York (2017)
21. Taebi, B.: Ethics and Engineering. An Introduction. Cambridge University Press, Cambridge (2021)
22. Umbrello, S.: Imaginative value sensitive design: using moral imagination theory to inform responsible technology design. Sci. Eng. Ethics 26, 575–595 (2019). https://doi.org/10.1007/s11948-019-00104-4
23. Umbrello, S., van de Poel, I.: Mapping value sensitive design onto AI for social good principles. AI Ethics 1, 283–296 (2021). https://doi.org/10.1007/s43681-021-00038-3
24. Vakkuri, V., Kemell, K.K., Jantunen, M., Abrahamsson, P.: "This is just a prototype": how ethics are ignored in software startup-like environments. In: Stray, V., Hoda, R., Paasivaara, M., Kruchten, P. (eds.) Agile Processes in Software Engineering and Extreme Programming. XP 2020. Lecture Notes in Business Information Processing, vol. 383, pp. 195–210. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-49392-9_13
25. van den Hoven, J., Lokhorst, G.J., van de Poel, I.: Engineering and the problem of moral overload. Sci. Eng. Ethics 18, 143–155 (2012). https://doi.org/10.1007/s11948-011-9277-z
26. van den Hoven, J., Vermaas, P.E., van de Poel, I.: Handbook of Ethics, Values, and Technological Design. Sources, Theory, Values and Application Domains. Springer, Dordrecht (2015)
27. van den Hoven, J., Miller, S., Pogge, T. (eds.): Designing in Ethics. Cambridge University Press, Cambridge (2017)
28. van de Poel, I.: Translating values into design requirements. In: Michelfelder, D., McCarthy, N., Goldberg, D. (eds.) Philosophy and Engineering: reflections on Practice, Principles and Process. Philosophy of Engineering and Technology, vol. 15, pp. 253–265. Springer, Dordrecht (2013). https://doi.org/10.1007/978-94-007-7762-0_20
29. van Wynsberghe, A., Robbins, S.: Ethicist as designer: a pragmatic approach to ethics in the lab. Sci. Eng. Ethics 20, 947–961 (2014). https://doi.org/10.1007/s11948-013-9498-4
30. Verbeek, P.-P.: What Things Do. Philosophical Reflections on Technology, Agency, and Design. Translated by Crease, R.P. The Pennsylvania State University Press, University Park (2005)
31. Verbeek, P.-P.: Materializing morality: design ethics and technological mediation. Sci. Technol. Hum. Values 31(3), 361–380 (2006). https://doi.org/10.1177/0162243905285847
32. Verbeek, P.-P.: Moralizing Technology. Understanding and Designing the Morality of Things. Chicago University Press, Chicago (2011)
33. Winkler, T., Spiekermann, S.: Twenty years of value sensitive design: a review of methodological practices in VSD projects. Ethics Inf. Technol. 23, 17–21 (2021). https://doi.org/10.1007/s10676-018-9476-2
34. Winner, L.: Do Artifacts have politics? Daedalus 109(1), 121–136 (1980). http://www.jstor.org/stable/20024652
35. Wisch, M., Hellmann, A., Lerner, M., Hierlinger, T., Labenski, V., Wagner, M., Feifel, H., Robescu, O., Renoux, P., Groult, X.: Car-to-car accidents at intersections in Europe and identification of use cases for the test and assessment of respective active vehicle safety systems. In: Proceedings of the 26th International Technical Conference on the Enhanced Safety of Vehicles (ESV): technology: enabling a Safer Tomorrow, Eindhoven, Netherlands, Paper Number: 190176-O. National Highway Traffic System Administration, Washington (2019). https://www-esv.nhtsa.dot.gov/Proceedings/26/26ESV-000176.pdf

Gaming the Driving System: On Interaction Attacks Against Connected and Automated Vehicles

Luca Paparusso, Fabio Fossa, and Francesco Braghin

Abstract Cybersecurity risks represent a significant obstacle to driving automation. Like any other computing device, Connected and Autonomous Vehicles (CAVs) are intrinsically exposed to numerous vulnerabilities and may thus be hacked. Even though cybersecurity attacks are usually understood as implying software manipulation or sensor interference, the behaviour of CAVs can be influenced through interaction as well. Knowledge of the behavioural patterns of the driving system might make it possible to 'game the system'—i.e., to influence or control system choices and behaviour by purposefully interacting with it and artfully creating the conditions for it to behave in desired ways. The risks posed by such an indirect attack on CAVs could potentially be significant, ranging from massive traffic disruptions to assaults. However, strategies to contain them are difficult to pursue and have considerable side effects. The present chapter shows how knowledge concerning safety-oriented trajectory planning might be abused to manipulate system behaviour not through code but rather through interactions. We consider different ways in which such knowledge can be obtained and possible countermeasures to protect its diffusion. However, defensive strategies all come with relevant costs, so the problem of developing CAVs that can reliably resist interaction attacks remains open.

Keywords Connected and autonomous vehicles · Cybersecurity · Safety · Trajectory planning · Interactions


1 Introduction

Connected and Automated Vehicles (CAVs) are complex digital systems. As such, they are bound to suffer from cybersecurity vulnerabilities. Communication channels can be exploited by hackers to sneak in and take control of vehicle operations such as steering and speed control. Sensor functions can be manipulated through the emission of disturbing signals or through intentional modifications of the environment (e.g., manipulating road signs). Given the evident risks that attacks would cause in terms of safety and traffic disruption, tackling cybersecurity issues responsibly figures as a clear moral obligation for the driving automation socio-technical system.

The present chapter discusses a possible kind of attack on CAVs that does not intend to exploit computational vulnerabilities and sensor limitations, but rather aims at exerting control over system operations indirectly—i.e., by 'gaming' the driving system. Building on extant literature, we argue that knowledge of the rules governing CAV behaviour might allow for intelligently manipulating it through interactions. Accordingly, we define this form of malicious takeover as an interaction attack and explore possible ways to reduce the related risks.

The chapter is structured as follows. Section 2 introduces the ethical debate on cybersecurity in driving automation and zooms in on interaction attacks, detailing their main characteristics and arguing that they should be acknowledged as a form of cybersecurity risk. Section 3 takes a more systematic look at the technical aspects that cause the risk of interaction attacks to emerge. In this sense, safety-oriented trajectory planning algorithms are discussed as a necessary element of driving automation but, at the same time, as a vulnerability in terms of interaction attacks. Finally, Sect. 4 considers two possible strategies to minimise the risk, i.e., secrecy and diversity. Both of them, however, are fraught with noteworthy complications. Section 5 summarises the results of the inquiry and explores future research directions.

2 Cybersecurity in Driving Automation: Hacking, Sensor Interference, and Interaction Attacks

As an innovative vehicle technology combining mechanical engineering and computer science, driving automation must cope with safety issues present in both fields. In addition to more traditional considerations concerning reliability and crashworthiness, liabilities proper to digital systems must be adequately addressed. In particular, CAVs raise several cybersecurity issues. Just like any other online computing device, CAVs are susceptible to hacking [9]. The complexity of autonomous driving systems—which could perhaps be better defined as systems of systems—is bound to present numerous vulnerabilities. Their malicious exploitation could pose tremendous safety threats. Through cyberattacks, vehicles could be remotely tampered with or taken control of and hijacked to cause massive traffic disruption or harm passengers and other road users [4, 8, 13, 18, 36]. Such events would pose significant threats to public safety. Concurrently, they would likely undermine social trust in the technology and lead to widespread rejection [27, 30].

Being so closely connected to the protection of road users' physical integrity, cybersecurity is of evident ethical interest. Safety is a well-established value in the realm of transportation. Stakeholders have a clear obligation to reduce safety risks by any reasonable means. Actually, the ethical significance of ensuring high levels of cybersecurity for CAVs is so widely acknowledged that little need has been felt in the literature to justify its moral relevance. The identification, prevention, and mitigation of cybersecurity risks are commonly recognised as crucial obligations that engineers, designers, programmers, manufacturers, and other stakeholders have a duty to satisfy. Key safety-enhancing cybersecurity values such as robustness, resilience, and integrity have been adopted in the context of CAVs as well [21, 26]. Accordingly, various cybersecurity risks proper to CAVs are being systematically identified, and strategies are being introduced to prevent them or to mitigate negative outcomes [19].

In line with the focus on the digital element of cybersecurity, most of the identified threats involve attacks on various system components through software manipulation or sensor interference. For instance, Rizvi and colleagues [33] have shown that DoS attacks can allow for taking remote control of brakes, acceleration, and steering. Moreover, sensors like radar, LiDAR, and cameras are variously vulnerable to the emission of signals intentionally aimed at impairing their functionality or deceiving them into perceiving non-existent objects [31]. Furthermore, systems can also be attacked more indirectly by intentionally tampering with elements of the environment. For instance, ultrasonic sensors are vulnerable to cloaking attacks, where obstacles covered with sound-absorbing materials are made undetectable [25]. Similarly, traffic signs can be modified to confuse machine vision algorithms and, possibly, influence CAV behaviour in predetermined ways [10]. Blind spots can also be exploited by placing obstacles where sensors have difficulty perceiving them [25]. In all these cases, the system is directly targeted in an attempt to assume control of its operations or force its malfunction. Countermeasures to increase system security and sensor resilience, although extremely complicated in practice, are theoretically adequate to deal with the cybersecurity issues discussed so far.

In addition to attacking the system directly or intentionally modifying elements of the environment, however, system behaviour can be manipulated through interactions as well. Interaction attacks cannot be dealt with by resorting to the abovementioned countermeasures. On the contrary, even the most secure software paired with the most resilient set of sensors would still be liable to interaction attacks. As a matter of fact, these attacks assume that the system keeps working as it is supposed to. They count on it. Control over system behaviour is achieved, so to speak, by playing by its own rules. Once knowledge is obtained concerning how a system works, it can be gamed. The logic controlling its behaviour can be intelligently bent according to a plan. This way, control over system behaviour could be exerted indirectly, by exposing the system to circumstances that will elicit the desired reactions.
Interaction attacks need not violate system security or tamper with sensors to make them malfunction or produce false positives or negatives. On the contrary, system behaviour would be manipulated precisely because the logic behind it is known. By controlling the circumstances in which a CAV is operated, attackers can thus influence its behaviour according to their own agenda. For these reasons, we believe that interaction attacks could pose a significant threat to driving automation safety.

While—at least to our knowledge—the problem is yet to be fully identified and acknowledged in cybersecurity terms, similar problems concerning the interaction between road users and CAVs have been pointed out in the literature on the ethics of driving automation [11, 13, 28, 35, 36]. More specifically, scholars have noted that if CAV safety features were known, pedestrians and other road users might exploit them to get an unfair advantage with respect to the right of way. Moreover, relying on the efficiency of these systems, road users might engage in behaviours that would otherwise be considered dangerous. For example, pedestrians could start stepping abruptly and inattentively onto the street to cross it, knowing that CAVs will detect them and brake. Also, cyclists or motorcyclists could start occupying crossroads regardless of the right of way, knowing that CAVs will put safety first and yield. Such cases are of clear ethical interest, since they could lead to severe safety risks and stand in contrast with shared claims concerning traffic rights and duties.

The problem is analysed in detail by Millard-Ball [29]. In his article, street crossing is framed as a "game of chicken" where right of way is ultimately determined by an equilibrium between the payoffs that each involved party gets as a result of a given choice. When deciding whether to cross the street, pedestrians consider both the prospective benefits of doing so and the risks they expose themselves to. Risks, of course, are due to the fact that drivers might fail to stop and hit them. Even if it is in the drivers' best interests to yield, they could be distracted, intoxicated, tired, or aggressively asserting right of way. Drivers also carry out similar evaluations to decide whether or not to allow pedestrians to pass. Such a decision-making process must account for many individual and environmental aspects. For instance, considerations concerning the behaviour of the other party must be combined with expectations based on implicit norms and explicit traffic rules. Different areas might in fact be regulated differently. For example, pedestrians and drivers might exercise different degrees of attention or adopt different behaviours in a busy city centre or in a small country village. As a result of this bargaining, either pedestrians or drivers get to pass first.

Driving automation would have a dramatic effect on this equilibrium. Since CAV control logic would implement risk-averse safety features, the risk of pedestrians being hit would plunge substantially, even when jaywalking. Passengers would not even be able to assert right of way, since the driving system would automatically yield to protect pedestrians' safety. As a result, CAVs would lose much of their attractiveness as a means of urban transportation, where interaction with pedestrians and other human road users could hardly be excluded. Therefore, Millard-Ball suggests, policy and regulatory frameworks must take this eventuality into consideration and introduce countermeasures to disincentivise over-assertive behaviour on the part of human road users.
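The "game of chicken" framing can be illustrated with a toy payoff model. The numbers below are purely hypothetical and are not taken from Millard-Ball's own analysis in [29]; they are only meant to show how an always-yielding automated driver shifts the pedestrian's best response towards assertive crossing.

```python
# Toy "crossing game": illustrative payoffs only, not Millard-Ball's actual model.
# Keys: (pedestrian action, driver action); values: (pedestrian, driver) payoffs.
payoffs = {
    ("cross", "yield"):  (3, -1),     # pedestrian saves time, driver is delayed
    ("cross", "assert"): (-20, -10),  # collision risk: very bad for both
    ("wait",  "yield"):  (0, -1),
    ("wait",  "assert"): (0, 2),
}

def pedestrian_best_response(p_driver_asserts):
    """Expected-payoff best response of the pedestrian, given the probability
    that the driver asserts right of way instead of yielding."""
    ev_cross = ((1 - p_driver_asserts) * payoffs[("cross", "yield")][0]
                + p_driver_asserts * payoffs[("cross", "assert")][0])
    ev_wait = ((1 - p_driver_asserts) * payoffs[("wait", "yield")][0]
               + p_driver_asserts * payoffs[("wait", "assert")][0])
    return "cross" if ev_cross > ev_wait else "wait"

# A human driver who asserts right of way 20% of the time deters jaywalking;
# a risk-averse CAV that never asserts makes assertive crossing the best response.
print(pedestrian_best_response(0.20))  # -> "wait"
print(pedestrian_best_response(0.0))   # -> "cross"
```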

Even though the abovementioned cases offer a good basis to conceptualise our problem, they tackle a slightly different issue. What is at stake, in fact, is the emergence of behaviours that endanger those who engage in them and that could potentially disrupt automated traffic. System features are maliciously exploited to gain an advantage in terms of road use. Aggressive road users, however, do not have any interest in putting CAV passengers in danger. If anything, they expose themselves to the risk of suffering harm should the system be unable to handle the situation safely. System behaviour, then, is manipulated only momentarily, with no further goals in mind. Nonetheless, this situation could represent the first step towards more articulated manipulation plans, where a series of known system reactions is intelligently stimulated to force the CAV to behave as desired.

In light of the above, there are sound reasons to claim that interaction attacks should be included in the set of risks that cybersecurity should minimise. The prospect of malicious agents manipulating system behaviour by exploiting the safety features of driving automation might have significant effects on both physical integrity and acceptance. In what follows, we focus on safety-oriented trajectory planning to show how detailed knowledge concerning the logic according to which trajectories are computed could expose systems to interaction attacks.

3 Safety-Oriented Trajectory Planning

A general overview of cyberattacks and their ethical implications has been provided in the previous section. Cyberattacks may take advantage of the many and varied features composing a CAV software stack to cause harm. In particular, among the different types of attack, it has been mentioned how knowledge of the interaction-aware behaviour of CAVs—i.e., their ability to adapt to how other agents navigate traffic—can be exploited to maximise the damage of an attack and propagate it to the whole driving scenario. This section aims to support and detail the previous statements, providing the reader with the fundamental technical knowledge to understand how a CAV interaction-aware trajectory planning algorithm is typically designed. This makes it possible to investigate how design choices can dually and antithetically affect both safety and vulnerability.

The research focus in driving automation, or more generally in autonomous robotic navigation, has changed drastically in the last decade. Earlier, experts considered how to control the vehicle so as to replicate reference trajectories—the so-called trajectory tracking problem (e.g., [24]). More recently, instead, the fundamental question revolves around how the vehicle should plan its future trajectories. This research question is per se not entirely new to the robotics community. Historically, it goes under the name of the trajectory planning problem. Driving automation, however, just like collaborative robotics, belongs to a spectrum of applications for which trajectory planning has not only to satisfy technical and performance requirements, but also to provide safety guarantees.


CAVs will potentially drive in highly populated environments alongside other dynamic agents with stochastic behaviour. An agent's behaviour is usually defined as stochastic when the decision logic for selecting future actions is neither deterministic nor predictable with total certainty. In other words, starting from the same initial situation, an agent with stochastic behaviour may take different paths, with some degree of randomness. A stochastic behaviour can then be described through a probability distribution. As an example, a pedestrian standing in front of a crosswalk with a traffic light exhibits stochastic behaviour. Supposing that the traffic light is red, there is still a chance that the pedestrian might decide to cross the street anyway. However, it is more likely that the pedestrian will stop until the traffic light turns green, complying with traffic rules and common sense. If one also considers that the possible transition to CAVs will happen gradually, many manually driven vehicles will still be present. For this reason, it becomes even more important for a CAV to "comprehend" the surrounding environment and adapt to it.

In light of the above, the current research interests represent a major change with respect to traditional robotics, which typically assumes static environments and the absence of surrounding agents [1, 7]. The key question scientists and engineers must answer, thus, becomes how to generate safety-oriented manoeuvres for the ego-vehicle considering the environment. The obvious answer—that is, by forecasting the intent and future behaviour of surrounding agents—itself opens up a number of further questions. What does forecasting mean, in this case? How, and to what extent, is it possible to anticipate what others will do in the future?

Two main approaches try to answer the previous questions. The first one is the robust approach, which is based on the worst-case paradigm. The robust approach assumes that it is not important to forecast what others will exactly do in the future. Indeed, safety of operations is ensured by keeping the vehicle away from all the situations in which a collision might happen. This often translates into overly conservative trajectory planning, which guarantees safety at the expense of reduced performance—e.g., longer travel times. To overcome these limitations, the probabilistic approach tries to learn the underlying cause-effect relationships between the agents and attempts to describe the future of the scene as a spectrum of possible outcomes, each with an associated probability of happening. Explaining this concept through the previous pedestrian-traffic light example, a probabilistic approach would ideally learn to output a bimodal¹ probability distribution at each future timestep, representing the position of the pedestrian. The first mode, with a lower probability value, corresponds to the trajectory that the pedestrian would follow by deciding to cross the street. The second mode, with a higher probability value, represents the case in which the pedestrian does not cross the street.

¹ The probability density of a bimodal distribution has two peaks, representing two different behavioural modes of the system.
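The bimodal pedestrian forecast described above can be made concrete with a minimal numerical sketch. The mode probabilities, velocities, and noise level below are arbitrary illustrative values, not parameters of any actual predictor.

```python
# Minimal sketch of a bimodal forecast for the pedestrian at a red light.
# Probabilities and motion parameters are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

modes = [
    {"p": 0.85, "velocity": np.array([0.0, 0.0])},  # mode 1: wait at the kerb
    {"p": 0.15, "velocity": np.array([0.0, 1.4])},  # mode 2: cross anyway (m/s)
]

def sample_future_positions(start, horizon_s=3.0, dt=0.5, n_samples=5):
    """Draw future position sequences from the two-mode distribution,
    with small Gaussian noise standing in for within-mode uncertainty."""
    steps = int(horizon_s / dt)
    samples = []
    for _ in range(n_samples):
        mode = rng.choice(len(modes), p=[m["p"] for m in modes])
        v = modes[mode]["velocity"]
        positions = [start + v * dt * (k + 1) + rng.normal(0.0, 0.05, size=2)
                     for k in range(steps)]
        samples.append((mode, np.stack(positions)))
    return samples

for mode, traj in sample_future_positions(start=np.array([0.0, 0.0])):
    print("mode", mode, "final position", np.round(traj[-1], 2))
```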

In what follows, a basic technical introduction to the robust and probabilistic approaches for safety-oriented trajectory planning is provided.² This will be useful to critically evaluate their practical implications in cyberattacks.

3.1 Robust Approaches

The robust approach is based on reachability analysis. Let us introduce the key concepts through a simple example. Consider the case in which our CAV, commonly named the ego-vehicle, and another vehicle drive in the same environment. Reachability analysis aims at reconstructing the set of relative initial conditions³ between the two vehicles for which collision is inevitable in the future assuming the worst-case scenario, i.e., that the antagonist vehicle does its best to provoke a collision. To take full advantage of the possibility of controlling the ego-vehicle, it is also assumed that the ego-vehicle can react at each time instant to the manoeuvres of the antagonist vehicle. The previous problem can be translated into an optimisation problem, and the set of dangerous initial conditions can be computed. During trajectory planning, the CAV can be controlled in such a way that it never enters the set of dangerous initial conditions with other agents, thus preserving navigation safety [2, 23, 38, 39]. Given that the approach is based on a worst-case behavioural model of the other vehicle, it is evident that the planned trajectories will either be collision-free or, at least, the ego-vehicle will not be at fault in a collision.⁴

The robust approach, then, provides safety guarantees at the expense of a decrease in performance. In fact, the computed set of dangerous initial conditions is overly conservative and does not provide a realistic representation of what surrounding agents really do. With reference to the aforementioned pedestrian-traffic light example, if CAVs implemented a similar approach, they would stop at any crosswalk governed by a traffic light, regardless of the colour of the traffic light itself. This is because there exists a possibility that the pedestrian might cross the street against the red light. Robust approaches do not include the notion of probability, and therefore take into account the entirety of the possible outcomes, with no distinction between more and less likely scenarios. In a more realistic setting, however, the surrounding agents themselves try to avoid collision, or at least react to the actions performed by the ego-vehicle. Based on this, the limitations of the robust approach can be overcome through probabilistic approaches.

² We clarify that the high-level division of the algorithms into the robust and probabilistic families is a simplification adopted to facilitate reading and focus the attention of the reader only on the fundamental technicalities that are necessary to support our main thesis. For a more in-depth review, the reader can refer to [3, 12, 15, 39].
³ The relative initial conditions are a set of variables that characterise the initial situation of the scenario, e.g., the vehicles' relative velocity, position, etc.
⁴ Some reachability analysis methods consider as safe the cases in which the ego-vehicle cannot avoid collision due to traffic rules. As an example, a vehicle stopped at a red traffic light cannot accelerate to avoid a following vehicle travelling at high speed.
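A rough intuition of the worst-case logic can be conveyed with a brute-force sketch. Real reachability analysis propagates sets of states with dedicated tools; the code below merely enumerates a coarse grid of antagonist accelerations along one lane and rejects an ego plan if any of them closes the gap below a threshold. The dynamics, bounds, and thresholds are illustrative assumptions, not values used in the literature cited above.

```python
# Naive stand-in for worst-case (reachability-style) safety checking along one lane.
# Point-mass longitudinal dynamics and all thresholds are illustrative assumptions.
import numpy as np

DT, HORIZON, SAFE_GAP = 0.2, 4.0, 2.0  # timestep [s], horizon [s], minimum gap [m]

def rollout(x0, v0, accels):
    """Integrate 1-D point-mass motion (no reversing) for a sequence of accelerations."""
    x, v, xs = x0, v0, []
    for a in accels:
        v = max(0.0, v + a * DT)
        x += v * DT
        xs.append(x)
    return np.array(xs)

def plan_is_robustly_safe(ego_accels, gap0, ego_v0, other_v0):
    """Reject the ego plan if ANY antagonist acceleration profile in a coarsely
    discretised admissible set brings the lead vehicle closer than SAFE_GAP."""
    steps = len(ego_accels)
    ego_x = rollout(0.0, ego_v0, ego_accels)
    for a_other in np.linspace(-6.0, 3.0, 10):     # admissible antagonist accelerations
        other_x = rollout(gap0, other_v0, [a_other] * steps)
        if np.any(other_x - ego_x < SAFE_GAP):     # some worst case closes the gap
            return False
    return True

steps = int(HORIZON / DT)
print(plan_is_robustly_safe([1.0] * steps, gap0=30.0, ego_v0=10.0, other_v0=8.0))   # False
print(plan_is_robustly_safe([-2.0] * steps, gap0=30.0, ego_v0=10.0, other_v0=8.0))  # True
```

The accelerating plan is rejected because a hard-braking lead vehicle would make collision unavoidable, while the braking plan survives every enumerated antagonist behaviour: exactly the conservative flavour described above.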

3.2 Probabilistic Approaches

Probabilistic approaches are based on modelling the future system evolution as stochastic. In other words, the future motion of the agents is regarded as uncertain and is described by means of probability distributions. The purpose is to represent the probability that a given future may occur given the initial conditions of the agents. Differently from the robust approach, in which one is only interested in the worst-case behaviour of the surrounding agents, the aim of the probabilistic method is to comprehend and model in detail the process that brings the agents to plan their motion in space.

Modelling uncertainty through probability distributions is a fundamental ingredient in automated driving applications, and it is supported by two main arguments. First, the surrounding agents can be human—e.g., pedestrians or manually driven vehicles. Human behaviour is in fact stochastic and multimodal.⁵ Second, the complexity of automated driving scenarios does not allow one to completely model the behaviour of the system, which is why what the model is not able to describe is represented as uncertainty. It is arduous to manually tune⁶ probabilistic models so as to replicate the agents' interactions, especially in complex and multi-agent scenarios (such as those of automated driving). For this reason, the parameters of the distributions and, more generally, the system model are learned from data. That is, by observing the behaviour of the agents in different situations, machine learning algorithms fit the model to the recorded data. Therefore, data are a key ingredient for developing realistic predictors.

⁵ The probability density of a multimodal distribution has more than one peak, representing different behavioural modes of the system.
⁶ Probabilistic models are typically defined in a parametric form. This means that the shape of the probability distributions depends on the values of some parameters. As an example, to fully define the shape of a 1-dimensional Gaussian distribution, it is necessary to specify its mean and variance. Tuning a probabilistic model means selecting the values of its parameters such that the model represents the real system in the best possible way.

Among the probabilistic approaches, deep-learning frameworks have gained popularity in recent years for their high representation capabilities, even in complex scenarios. For the purposes of the present inquiry, the most popular methods for trajectory predictors can be briefly detailed as follows:

(a) History-based predictors. State-of-the-art trajectory predictors have been developed through deep learning models. These models are trained to minimise the difference between their predictions and the future trajectories of all of the other agents. The standard version of these algorithms, which we refer to as history-based predictors, takes as input the past motion of the agents in the scene and tries to forecast their future evolution in space [4, 6, 16, 17, 20, 22, 37, 40, 41].

(b) History-future-based predictors. More advanced versions of the previously mentioned deep-learning frameworks also take into account how the future motion of the surrounding agents is conditioned on the future ego-vehicle actions. This new feature is extremely important, as it allows one to generate predictors that are aware of the impact that the ego-vehicle can have on the environment. This knowledge can then be used to design smart trajectory planners [32, 34].

The importance of history-future-based predictors deserves to be emphasised through an example. Let us consider the classical scenario of an ego-vehicle approaching a pedestrian crossing. We also suppose that the pedestrian has almost completed the crossing, and that the vehicle is far enough from the pedestrian to be able to avoid decelerating. In such a situation, an optimal trajectory planner would choose not to decelerate, as doing so is not required for guaranteeing safety. But the pedestrian might perceive the act of not decelerating as a source of possible danger. The reaction of the pedestrian might therefore be different from the expected one, compromising safety. As a possible outcome, the pedestrian could hesitate in completing the crossing. This example illustrates the importance of accounting for the potential effect of a CAV's planned actions on other road users. As investigated in the remainder of this chapter, however, the same algorithms can lend themselves to malicious exploitation.
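As an illustration of the interface such predictors expose, the following sketch outlines a toy history-future-based model: one agent's past trajectory is encoded with a GRU, concatenated with an encoding of a candidate ego plan, and mapped to a small set of weighted future modes. The architecture, dimensions, and class name are illustrative assumptions and do not reproduce any specific published model.

```python
# Toy history-future-based predictor: illustrative architecture only.
import torch
import torch.nn as nn

class ToyConditionalPredictor(nn.Module):
    """Predicts K weighted future modes for one surrounding agent, conditioned
    on that agent's past track and on a candidate ego-vehicle plan."""

    def __init__(self, hist_len=8, fut_len=12, n_modes=3, hidden=64):
        super().__init__()
        self.fut_len, self.n_modes = fut_len, n_modes
        self.history_encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.ego_encoder = nn.Linear(fut_len * 2, hidden)
        self.head = nn.Linear(2 * hidden, n_modes * (fut_len * 2 + 1))

    def forward(self, agent_history, ego_plan):
        # agent_history: (B, hist_len, 2); ego_plan: (B, fut_len, 2)
        _, h = self.history_encoder(agent_history)            # h: (1, B, hidden)
        ego = torch.relu(self.ego_encoder(ego_plan.flatten(1)))
        joint = torch.cat([h.squeeze(0), ego], dim=-1)
        out = self.head(joint).view(-1, self.n_modes, self.fut_len * 2 + 1)
        trajectories = out[..., :-1].view(-1, self.n_modes, self.fut_len, 2)
        mode_probs = torch.softmax(out[..., -1], dim=-1)      # (B, n_modes)
        return trajectories, mode_probs

model = ToyConditionalPredictor()
hist = torch.randn(1, 8, 2)        # past positions of one surrounding agent
plan = torch.zeros(1, 12, 2)       # candidate ego plan (e.g., "do not decelerate")
trajs, probs = model(hist, plan)   # predicted responses to that particular plan
print(trajs.shape, probs.shape)    # torch.Size([1, 3, 12, 2]) torch.Size([1, 3])
```

The key point is the second input: changing the ego plan changes the predicted responses, which is precisely what makes such models useful for planning and, as discussed below, potentially attractive to attackers.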

3.3 Trajectory Planning and Interaction Attacks

So far, we have discussed the two predominant approaches to safe trajectory planning adopted by the research community in recent years. Robust approaches have been described as solutions that guarantee maximum safety at the expense of a lower—and often insufficient—performance in terms of travel times. To cope with these shortcomings, probabilistic approaches allow one to describe the behaviour of interacting agents without excessive conservativeness, generating a detailed representation of the driving scenario. These models are then employed to predict the future evolution of the driving scenario and design efficient collision-safe trajectory planners. At the same time, probabilistic approaches accept the risk that the predictor may fail.

Discussing robust approaches and related safety methods in trajectory planning has highlighted why data-driven probabilistic approaches are so important for the automated driving community. From this point on, we will mainly focus on data-driven probabilistic approaches to investigate how these methods can be antithetically exploited to guarantee safety or to create threats. For the sake of simplicity, we will also assume that data-driven probabilistic approaches provide exact forecasts of the agents' motion in all cases.

The capability to perfectly describe a system's evolution represents the desideratum of any engineering model, including control systems. For example, a collaborative robot can perform every task with maximum efficiency if the future motion of the human operator is known precisely (or, more realistically, with high probability). Analogously, an automated vehicle can move more rapidly and in total safety if the future motion of the other agents is known in advance. This is why motion predictors are employed to improve the overall motion of the ego-vehicle and its operational safety. However, knowledge of the interactions and cause-effect relationships between the agents in a scene may instead have catastrophic impacts if data-driven predictors are misused—e.g., for interaction attacks.

As previously detailed, modern probabilistic predictors can learn to output different outcomes as a function of the ego-vehicle's actions. Cyberattackers can then potentially exploit the learned model to maximise damage in a driving automation scenario. Let us explain this through a simple example. Figure 1 depicts illustrative cases of a standard cyberattack (a) and an interaction attack (b). In case (a), the cyberattack aims at causing a collision between the ego-vehicle (black) and a surrounding vehicle (yellow) coming from the opposite lane. This is, in fact, the quickest solution to provoke damage. Hackers would provoke the impact by, e.g., exploiting vulnerabilities in the communication system of the black CAV and taking control of steering. Instead, in case (b) it is assumed that the cyberattack is based on a model of the interactions—i.e., an interaction-aware intent predictor. Knowing that different actions of the ego-vehicle lead to different responses of the surrounding agents, represented with different colours, the interaction attack can provoke damage indirectly. In our example (b), the ego-vehicle is not damaged in the attack and can keep on manipulating traffic to provoke further damage.

Fig. 1 Standard versus interaction cyberattacks: (a) standard cyberattack; (b) interaction attack

As the example shows, knowledge concerning trajectory planning rules can be exploited in interaction cyberattacks to exert indirect control over CAV behaviour and cause collisions or traffic disruption. The risks linked to the malicious manipulation of system behaviour are probable and potentially severe enough to be taken seriously. In the next section, we consider possible solutions to decrease the vulnerability of future CAVs to interaction attacks. Each option comes with considerable complications, opening up several avenues for future research.
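Before turning to possible solutions, the dual-use character of interaction-aware prediction can be condensed into a single selection loop. In the benign case, candidate ego plans are scored by the clearance they are predicted to preserve and the safest is selected; the chapter's concern is that an actor holding an equally accurate model could score plans by the disruption they are predicted to cause instead. The sketch below is a conceptual illustration with a dummy predictor and arbitrary numbers; it does not describe any deployed planner.

```python
# Conceptual sketch of interaction-aware plan selection. The dummy predictor
# stands in for a learned history-future-based model; numbers are illustrative.

def min_predicted_gap(ego_plan, predicted_responses):
    """Smallest distance between the ego plan and any predicted trajectory."""
    return min(
        ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
        for traj in predicted_responses
        for (px, py), (qx, qy) in zip(ego_plan, traj)
    )

def select_plan(candidate_plans, scene_history, predictor, objective):
    """Score each candidate ego plan by the responses it is predicted to elicit
    and return the plan with the highest objective value."""
    return max(candidate_plans,
               key=lambda plan: objective(plan, predictor(scene_history, plan)))

# Dummy predictor: the other agent slows down less if the ego plan is assertive.
def dummy_predictor(scene_history, ego_plan):
    speed = 1.0 if ego_plan[-1][0] < 5 else 0.5   # predicted response depends on ego plan
    return [[(10.0 - speed * t, 2.0) for t in range(len(ego_plan))]]

plans = [[(t * 1.0, 0.0) for t in range(8)],      # assertive: advance quickly
         [(t * 0.3, 0.0) for t in range(8)]]      # cautious: creep forward

safe = select_plan(plans, None, dummy_predictor, min_predicted_gap)
print("benign, clearance-maximising choice advances to x =", round(safe[-1][0], 1))
# The worry raised in this chapter: replacing the objective with one that rewards
# predicted disruption turns the very same loop into an interaction-attack tool
# for anyone holding an accurate model of how CAVs respond to interactions.
```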

4 Possible Solutions and Related Hurdles

As the previous section shows, attackers interested in exerting indirect control over CAVs could exploit the features of safety-oriented trajectory planning algorithms to their advantage. If the rules governing the behaviour of CAVs are known, then they can be intelligently bent to serve malicious purposes, potentially putting passengers and road users at risk. The threat is tangible and requires countermeasures to be elaborated and applied.

When thinking about possible solutions to minimise the risk of interaction attacks, a good starting point is to focus on what allows the risk to occur in the first place. The research community has an opportunity to prevent or oppose interaction attacks by addressing the vulnerabilities that malicious agents can exploit to exert indirect control over CAVs. Two vulnerabilities stand out: transparency and homogeneity. First, transparency—intended here as the release of trajectory planning algorithms for external scrutiny—can be exploited by malicious agents to retrieve the knowledge they need to carry out interaction attacks. Second, homogeneity in trajectory planning algorithms across vehicles, models, and brands would facilitate the prediction of CAV behaviour and, therefore, the exercise of indirect control.

We identify the following possible solutions to each vulnerability. Transparency vulnerabilities could be handled by keeping trajectory planning algorithms secret. Homogeneity vulnerabilities could be tackled by promoting diversity and allowing for different trajectory planning algorithms to be implemented. In what follows, we take a closer look at both possibilities and highlight their limitations. Solutions based on secrecy and diversity come with significant pitfalls and are insufficient to cope with the risk of interaction attacks satisfactorily.

4.1 Secrecy

Interaction attacks are possible only under the condition that malicious agents have a model of CAV behaviour. Fine-grained knowledge of trajectory planning algorithms is required to predict how a CAV would react if exposed to given stimuli. Since indirect control can be exercised only through interaction, knowledge concerning the logic governing driving behaviour is a necessary ingredient in this sort of attack. Regarding our problem, whatever facilitates the acquisition of knowledge concerning trajectory planning algorithms is to be understood as a vulnerability. Intuitively enough, openness and transparency represent obvious weaknesses. Unrestricted access to the lines of code determining CAV driving behaviour would make it extremely easy for malicious agents to get the knowledge they need and execute effective interaction attacks. As a consequence, information about trajectory planning algorithms should not be made widely accessible. It should be kept as hidden as possible.

Following this reasoning, secrecy could represent a viable fix to the abovementioned vulnerabilities. Placing trajectory planning algorithms under the seal of secrecy would likely help protect CAVs from interaction attacks. If the logic behind CAV driving functions is unknown, it would be more difficult to intentionally exploit it to exert indirect control over the vehicle. Therefore, limiting access to the rules governing trajectory planning algorithms might appear as a promising solution.

However, keeping trajectory planning algorithms secret would come with its share of significant shortcomings. First of all, the practical efficacy of the strategy can be challenged. It is to be noted that most of the algorithms presented in this chapter are, in fact, available online and may be trained by cyberattackers for specific tasks. In the same way, many open-source datasets are available online and can be downloaded for free.⁷ The actual state of affairs, then, does not seem particularly responsive to secrecy needs. Moreover, even if secrecy were adopted as a policy, malicious agents could circumvent it and still get the knowledge they need. As a matter of fact, trajectory planning algorithms could be reverse-engineered. With enough time and resources, a CAV could be exposed to many traffic situations—or, more likely, to a selected subset of them—and the rules governing trajectory planning inferred from observation. Even though inferring trajectory planning specifics ex post would arguably require significant effort, the possibility should not be dismissed lightly.

From an ethical perspective, furthermore, secrecy would risk leading to undesired outcomes. Arguably, secrecy would increase opacity in the ways in which driving functions are executed. Increased opacity, however, would collide with the pursuit of both explainability and transparency—two ethical values that are broadly acknowledged as extremely relevant with regard to AI applications, CAVs included [14]. In fact, keeping the logic behind trajectory planning algorithms secret would make it impossible to promote explainability and transparency (see Chap. 3). For instance, it would become even more problematic to provide CAV passengers with factual information about the vehicle's present state and future planned behaviour so as to promote their autonomous and responsible agency along with trust and acceptance. Similarly, divulging trajectory planning algorithms during legal litigation where liability is to be distributed fairly would be risky. More generally, external scrutiny of trajectory planning algorithms would be more difficult, which could also lower the chances of identifying errors and vulnerabilities.

Building on this last point, secrecy concerning trajectory planning algorithms would also have a chilling effect on promoting an open data approach to driving automation. A further conflict between ethical values emerges here. Sharing data concerning driving algorithms could potentially enable the accomplishment of multifarious results of noteworthy ethical significance. Sustainability benefits often associated with driving automation, such as safety enhancements, traffic optimisation, energy use reduction, and emission minimisation, all more or less depend on the ability to reliably predict CAVs' future behaviour at a system level. However, reliable predictions are exactly what malicious agents need to carry out interaction attacks. As a matter of fact, placing trajectory planning algorithms under the seal of secrecy aims precisely at making predictions less reliable—a necessary step to oppose interaction attacks. As a side effect, however, tangible ethical opportunities that could be seized through driving automation would be lost, which might be perceived as too high a price to pay.

Moreover, making trajectory planning algorithms transparent and open to external scrutiny would further assure ethically desirable features such as safety and compliance. A culture of transparency and openness is widely acknowledged as fostering responsibility and compliance with regulation and ethical standards. Introducing opacity and secrecy as a remedy to interaction attacks would imply, at least partially, losing the guarantees of care and responsibility associated with openness. Again, the ethical price for opposing interaction attacks in this way might be perceived as too high.

In light of the above, responding through secrecy to the threat of malicious agents obtaining knowledge of trajectory planning algorithms does not seem to accomplish desirable results. Having to waive what explainability, open data, and transparency could offer in ethical terms to obstruct interaction attacks might be perceived as too much to ask. However, a similar result in the fight against interaction attacks could be obtained by promoting diversity in trajectory planning algorithms. Let us now consider this second option and see whether it fares better than the one just discussed.

⁷ Berkeley DeepDrive: https://bdd-data.berkeley.edu/; Level 5: https://level-5.global/data/; nuScenes: https://www.nuscenes.org/; Oxford Robotcar Dataset: https://robotcar-dataset.robots.ox.ac.uk/; PandaSet: https://scale.com/open-datasets/pandaset; Waymo Open Dataset: https://waymo.com/open/.

4.2 Diversity

In addition to knowledge, homogeneity in trajectory planning solutions also qualifies as a vulnerability with respect to interaction attacks. Implementing similar trajectory planning algorithms on all CAVs might massively increase the scope and ease of interaction attacks. By acquiring knowledge of one driving model, indirect control could potentially be exercised over every CAV. To counter this outcome, one might think, diversity should be promoted. The more diversity is implemented in CAV control logic, the more malicious agents would be disincentivised from investing time and resources in reverse engineering driving models. Except in the eventuality that attackers knew in advance precisely which vehicle to target, diversity in trajectory planning algorithms would make CAVs' future behaviour much more difficult to predict and control.

Depending on the perceived severity of the threat, diversity can be encouraged at varying levels. For instance, diversity in trajectory planning algorithms could be required at brand level—i.e., different car manufacturers could be required to develop and implement sensibly different control algorithms. If the results were not satisfactory enough, diversity could be further implemented at model level and, perhaps, even at vehicle level. In sum, introducing or mandating variations in the ways in which CAVs react to given stimuli might help prevent or resist interaction attacks without necessarily resorting to secrecy.

Promoting or mandating diversity to counter interaction attacks would also raise significant challenges. In what follows, we consider how this solution would fare with reference to the same values that emerged in the previous subsection: sustainability, safety, explainability, and transparency. To the extent that the accomplishment of sustainability goals depends on reliable predictions, diversity would surely complicate the task but would not render it entirely impossible. If diversity were pursued at brand or model level, private or institutional research centres might still have access to the technology and computing power necessary to handle such complexity and carry out reliable predictions. On the contrary, we surmise that attackers would be less likely to count on such powerful means, except perhaps in the case of cyberwarfare. However, if diversity were pursued at vehicle level, reliable predictions would likely be impossible, and the purported ethical benefits associated with them would be lost.

An analogous point, however, should be raised in terms of safety. Would CAVs be able to handle the complexity of extremely varied driving behaviour? As discussed in the previous section, the many difficulties in handling the complexity of mixed traffic are often taken to represent significant obstacles to the deployment of CAVs. Human driving behaviour is often erratic and does not lend itself to being easily modelled. Therefore, it is rather difficult to reliably predict what a human driver or road user will do. The situation is expected to improve when only CAVs are deployed since—it is supposed—their behaviour will be much more consistent and, thus, easier to predict. Mandating variability in trajectory planning algorithms to oppose interaction attacks would challenge this presupposition, possibly leading to a less safe traffic environment.

In principle, variability would only partially affect explainability and external scrutiny. Since explanations are tied to individual vehicles, variations in trajectory planning algorithms should not pose any further difficulty. Each vehicle would implement algorithms that could be made explainable without increasing the risk of interaction attacks. Similarly, since no secret would cover or limit access to trajectory planning algorithms, the ethical benefits of external scrutiny and transparency would still stand. In practice, however, diversity would pose tremendous challenges to both explainability and scrutiny. The amount of work necessary to explain, review, double-check, and scrutinise an ever-growing number of trajectory planning algorithms would likely be overwhelming, paving the way to loopholes and malpractice.

It is arguably for this reason that homogeneity is usually pursued by professional and regulatory institutions. Setting standards and shared legal frameworks for everyone to comply with reduces variability but helps reach important thresholds in terms of safety, sustainability, and other ethically relevant aspects. Concurrently, validation and certification processes enforce due compliance with standards, thus promoting social well-being through homogeneity. In sum, standards and certifications help manage the unpredictability of innovation processes by ensuring that highly valuable social desiderata are translated into hard constraints. As a result, a degree of homogeneity is inevitable and should be expected.

Moreover, one might also add that trajectory planning algorithms can only differ from each other so much. Even though some variations are possible, the task to be fulfilled and the conditions for its successful fulfilment mark shared constraints that necessarily limit variability in many respects. Furthermore, differences in how trajectory planning algorithms instantiate values such as safety or comfort will inevitably impact user acceptance, so manufacturers might be unwilling to put variability before features that would make their products more appealing to potential buyers. If the achievable degree of variability is low, however, good enough driving models would be inferable by resorting to open-source trajectory planning algorithms and publicly available datasets, thus defeating the purpose of promoting variability in the first place. In light of all these constraints, it is uncertain whether the actual scope of trajectory planning variability would allow for the implementation of solutions diverse enough to confound and discourage attackers. Such uncertainty casts doubt on the viability of the option.

5 Conclusion

The possibility of manipulating CAV behaviour through interactions poses worrisome cybersecurity risks to driving automation. In this chapter, we have introduced and detailed the notion of interaction attack. Moreover, we have shown how safety-oriented trajectory planning algorithms could become vulnerabilities to be exploited in interaction attacks. Since knowledge concerning the rules governing CAV behaviour or its most common patterns is precisely what allows interactive manipulation, we have considered secrecy as a possible strategy to minimise risk. We also considered diversity in trajectory planning algorithms at brand, model, and vehicle levels as a way to discourage driving model reverse engineering and, thus, interaction attacks. Although they tackle what is arguably the main enabler of this type of attack, both strategies come with significant shortcomings, which caution against their application. Considering the severity of the risks associated with interaction attacks, we believe that future research should be dedicated to devising more viable solutions at technical, policy, regulatory, and social levels.


Automated Driving Without Ethics: Meaning, Design and Real-World Implementation

Katherine Evans, Nelson de Moura, Stéphane Chauvier, and Raja Chatila

Abstract The ethics of automated vehicles (AV) has received a great amount of attention in recent years, specifically in regard to their decisional policies in accident situations in which human harm is a likely consequence. After a discussion about the pertinence and cogency of the term 'artificial moral agent' to describe AVs that would accomplish these sorts of decisions, and starting from the assumption that human harm is unavoidable in some situations, a strategy for AV decision-making is proposed that uses only pre-defined parameters to characterize the risk of possible accidents and that integrates the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation, into multiple possible decision rules to determine the most suitable action given the specific environment and decision context. The goal of this approach is not to define how moral theory requires vehicles to behave, but rather to provide a computational approach that is flexible enough to accommodate a number of humans' moral positions concerning what morality demands and what road users may expect, thereby offering an evaluation tool for the social acceptability of an automated vehicle's decision-making.

Keywords Automated vehicles · Ethics · Unavoidable collisions · Artificial moral agent · Ethical valence theory

K. Evans IRCAI, Jožef Stefan Institute, Jamova Cesta 39, 1000 Ljubljana, Slovenia e-mail: [email protected] N. de Moura INRIA, 2 Rue Simone IFF, 75012 Paris, France e-mail: [email protected] S. Chauvier Sciences, Normes, Démocratie, Sorbonne Université, UFR de Philosophie-1 Rue Victor Cousin-F-75230, Cedex 5 Paris, France e-mail: [email protected] R. Chatila (B) ISIR, Sorbonne Université, 4 Place Jussieu - Pyramide -Tour 55 - CC 173, 75005 Paris, France e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 F. Fossa and F. Cheli (eds.), Connected and Automated Vehicles: Integrating Engineering and Ethics, Studies in Applied Philosophy, Epistemology and Rational Ethics 67, https://doi.org/10.1007/978-3-031-39991-6_7


1 Introduction

In recent years, the actions and decisions of Artificial Intelligent Systems (AIS) have not only become a regular feature of many human social contexts, but have begun to threaten, if not directly impact, long-standing moral values such as human welfare, dignity, autonomy and trust. This capacity for topical and visceral moral impact has catapulted the field of machine ethics from its prospective and relatively experimental origins into a new role as a critical body of literature and perspectives that must be consulted in the real-world design of many AIS. At this critical juncture, machine ethicists must therefore reassess many of the common assumptions of the field, not least what we might call the standard view of machine ethics, one which paints the route to morally acceptable behaviour in machines as the product of an often-complex simulation of human moral reasoning, moral theories, or substantive components of such capacities (i.e., consciousness, emotions, subjectivity) in these systems. Our aim in this chapter is not only to negate the viability of the simulated ethical decision-making that this approach prescribes, but to offer an alternative account that removes the need to place the seat of moral reasoning in the machines themselves. In effect, machines do not need to be robust artificial moral agents, nor moral partners of humans, in order to avoid making moral mistakes across their interactions with human agents. Rather, the original justification for moral decision-making in AIS—namely, that safety-related design concerns might include morally salient criteria as a result of the increasing automation of different aspects of the human social sphere—can perhaps be better satisfied by the creation of ethically sensitive or minimally-harming machines that only borrow some structural aspects of the normative theories or decision procedures of human morality.

The first two sections of this chapter address misconceptions of machine morality, artificial morality, and a machine's capacity for ethical decision-making directly, taking a strong deflationist stance on the proper course of machine ethics in application to real-world machines. The final section offers a positive account and application of this deflationist attitude, introducing the Ethical Valence Theory alongside an application case of ethical decision-making in automated vehicles. It also tangentially addresses an important distinction in the state of the art of machine ethics, between what we will call structural ethics (known to many as AI principles, such as accountability, privacy or transparency), which focuses on the compliance of machines and their larger sociotechnical systems with certain principles or moral standards; and decisional ethics, which aims to ensure that the decisional output of machines aligns with the moral expectations of its end-user(s), or society at large.


2 The Semantics of Automated Driving

Before broaching the discussion concerning whether an automated vehicle (AV) might be endowed with ethically-aware decision-making, it is necessary to clarify some misconceptions about the goals and limitations of machine ethics given the current technological state of the art, and how such ideas can be implemented when applied to the AV domain. The first two sections of this chapter address these misconceptions directly, and the final section introduces a positive account of ethically-aware decision-making in automated vehicles.

2.1 The Problem with Machine Ethics

There exists a philosophical stance concerning machine ethics, one which has precipitated a whole stream of research and publications, that considers machines endowed with artificial intelligence capacities as being able to make deliberate moral decisions pertaining to what is right or wrong, good or evil. In the media and among the general public, this has translated into the belief that such machines do indeed have moral agency. A logical consequence of this attitude is to consider that such machines are responsible and accountable for their actions. A further consequence is to claim that they should be endowed with a legal, if not a social, form of personhood, possibly with related rights and duties.

Approaches to "artificial morality" are based on formalizing values, on translating deontic, consequentialist, or other moral theories into mere algorithms, and on applying formal logic reasoning or function optimization to make decisions. These systems implement automated decision-making. They are sometimes dubbed "autonomous" (another confusing concept that we will not address here) when, in a given domain and for given tasks, they are capable of accomplishing the specified tasks on their own despite some changes in that domain's environment. While the above briefly summarizes a typical approach to implementing ethical decision-making in machines, we maintain that this approach is flawed. Indeed, as we will argue next, these systems do not perform any ethical deliberation.

By definition, a computer can only run algorithms. A computational intelligent system is a set of algorithms using data to solve more or less complex problems in more or less complex situations. The system might include the capability of improving its performance based on learning, the currently dominant stream in Artificial Intelligence. Learning—be it supervised or unsupervised—is a computational process using a set of algorithms that classify data features to provide statistical models from the data. These models are then used to predict future situation outcomes and to make decisions accordingly. Reinforcement learning is a method to sequentially optimize an objective function, which is a mathematical expression of a reward function, evaluating previous decisions to achieve more efficient ones, i.e., those increasing the total expected reward.

Machines operate at the computational, syntactic level. They have no understanding of the real-world meaning of the expressions, the formulae or the numerical values they manipulate. Semantics in their representations are provided by programmers and users. An image labelled "cat" in a dataset doesn't provide any knowledge to the machine about what a real cat is. A statistical model built by machine learning, or a model using hand-made semantic descriptors, still does not carry any meaning about reality to the system. This is why we affirm that machines actually have no semantics. In addition, machines have no knowledge of the reasons behind what mattered for the development of the algorithms and the design choices made by programmers. In systems based on statistical machine learning, there is no causal link between inputs and outputs, merely correlations between data features. Fundamentally, algorithms, and therefore machines, lack semantics and contextual awareness.

Human values such as dignity or justice (fairness, equality, …) are abstract and complex concepts. Dignity, for example, which is intrinsically attached to human beings, is an intricate concept and is not computable. It cannot be explicitly described or taught to machines. The respect of such values as a basis for human action is not reducible to simple deductive computations, or to statistics about previous situations. Human ethical deliberation is grounded in history, in lifelong education and in reflections about these concepts, about a substantive view of the "good life", society, and the actual context of action. Understanding a context is in itself not reducible to a set of numerical parameters. A context takes into account the past, the evolving present and the future. A situation for moral deliberation has a spatial and temporal dimension, and uses abstract notions related to the societal context, all of which are absent from the data fed to machines and from their algorithmic operations, and a fortiori cannot even be described meaningfully in these terms. Thus, machines cannot determine ethical values and cannot make ethical decisions, but this does not preclude them from performing actions that have ethical impacts from the perspective of humans.

This perspective stands in sharp contrast to earlier espousals of artificial morality, which often point to the need for relatively robust simulations of human moral reasoning in machines [35], or, more broadly, hold that instilling general ethical principles in machines, and thus developing artificial moral agents, is both possible and necessary for successful human-machine interaction [1]. Van Wynsberghe and Robbins rightly criticize these views [33]. We position ourselves closer to Fossa's "Discontinuity Approach" [13]. Humans tend to naturally, but wrongly, project moral agency onto machines. Our approach is to dispossess machines of this agency, to make it clear that morality is in this case only a human conception and interpretation of a machine action which has no moral dimension.


2.2 Addressing the AV's Problem

If we adopt the metaphysically deflationist attitude that we have just delineated and reject the mythology of artificial moral agency, does it mean that we also have to give up any idea of machine ethics? Is machine ethics a simple by-product of the mythology of artificial moral agency? Of course not, since even if we cease to view an AV as an exotic mind located in a coach-building, an AV is still much more than a technological tool whose ethical regulation falls entirely on the human designer/provider/user side. Even if there is something mythological in the idea of an autonomous artificial agent or of artificial autonomy, even if there is no such thing as self-purposing agency in a machine, there is still a crucial difference between a machine that delivers its effect as a response to a command, even if the machine can calibrate the effects it delivers according to the context of their delivery (auto-regulative machines), and a machine that can act spontaneously in the social context in which it is designed to deliver its effects. Without entering into the classical metaphysical debate regarding human free will, denying a creature any capacity for self-determination does not imply denying it any capacity for spontaneous acting. Spontaneity does not mean self-determined acting, but self-originated acting.

Let us illustrate this idea by imagining a spontaneous coffee-delivery machine. It does not deliver coffee as a response to a command (the introduction of a coin), but delivers coffee to whomever its program instructs it to. The in-want-of-coffee consumer has to stand in front of the machine and look at it intensely, and perhaps the machine will deliver him a cup of coffee, perhaps not. He does not know why, but that is the way the machine functions. Of course, there is nothing mysterious inside the machine, no soul, no mind, no free will. There is only an algorithm that realizes a calculus whose result is that sometimes the machine delivers a coffee, sometimes not. It is obvious that we are here in what we could call the circumstances of ethics. For suppose that we discover, by observing the machine, that it spontaneously delivers coffee to young men but never to old women. Of course, the machine does not have any notion of gender and age differences, no philosophy of such differences, no value judgments concerning men and women, young and old persons; but if it is designed to spontaneously deliver effects that are meaningless to the machine, these effects have an important meaning for us: they can be viewed as an instance of discriminatory action.

This small imaginary case aims to illustrate that a specific ethical problem arises from what we can label spontaneously acting machines (SAM): because these machines spontaneously deliver effects that can have an ethical resonance for the human addressees of these effects, we cannot implement these machines into the social world without including, in the program governing their functioning, an algorithmic regulation that constrains the range of effects they can spontaneously deliver. We have to make the machine a never-harming machine or, at least, a minimally-harming machine. The question is not to give the machine a reflexive capacity to morally evaluate its operations. Machine ethics is not the ethics of the machine, the ethics that the machine itself follows. Machine ethics is the non-harming behaviour that we have to impose on the SAM, and its technical translation lies in including in the program—or what some might consider the "soul" of the machine—a way of calculating its optimal effects which guarantees that these effects will be, for us, minimally harming or, even preferably, harm-free. This is the core idea of the Ethical Valence Theory (EVT), which applies this general idea to the regulation of the social behaviour of an AV. EVT does not aim to make an AV a moral partner of human drivers, as if, in a not-too-distant future, we will drive surrounded by genuinely Kantian or Millian cars. It only means that the program of optimization that is the "soul" of the AV includes a set of restrictions and mitigation rules, inspired by Kantian or Millian ethics, which will guarantee that the operations of the car (braking, accelerating, etc.) will be viewed by us as minimally harming when effectuated in a social context including other vehicles, passengers, pedestrians, etc.
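To fix ideas, here is a deliberately toy sketch of this division of labour, written in Python purely for illustration: the class, the customer fields, and the "paying customer" rule are hypothetical placeholders, not part of the EVT or of any system discussed in this chapter. The point is only that the regulation sits outside the SAM's own opaque calculus.

```python
import random


class SpontaneousCoffeeMachine:
    """Toy spontaneously acting machine (SAM): it delivers or withholds
    coffee on its own initiative, via an internal calculus that carries
    no meaning for the machine itself."""

    def propose_effect(self, customer: dict) -> str:
        # Opaque internal calculus: no concept of age is involved, yet the
        # pattern of effects may still look discriminatory to human observers.
        score = random.random() + (0.4 if customer["age"] < 30 else -0.4)
        return "deliver" if score > 0.5 else "withhold"


def regulate(effect: str, customer: dict) -> str:
    """Designer-imposed regulation: constrain the range of effects the SAM
    may deliver so that, from the human point of view, its behaviour is
    minimally harming. The rule below is a hypothetical placeholder."""
    if effect == "withhold" and customer.get("paid", False):
        return "deliver"  # a paying customer may not be arbitrarily refused
    return effect


if __name__ == "__main__":
    sam = SpontaneousCoffeeMachine()
    customer = {"age": 72, "paid": True}
    print(regulate(sam.propose_effect(customer), customer))
```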

3 Implementing Ethically-Aware Decision-Making

It has been established in the previous section that machines that act in the social world cannot be deployed without being able to mitigate the moral consequences of their actions and to minimize or eliminate the risk of harm for human agents. This section sets the scope for these decisions (Sect. 3.1), presents a proposal for claim mitigation in dilemma situations using the Ethical Valence Theory (EVT, Sect. 3.2) for the non-supervised driving task, and describes its implementation (Sects. 3.3 and 3.4), considering a generic AV architecture.

3.1 The Scope of Ethical Deliberation

Initially, the deflationism implied in the conceptual shift from artificial moral agent to simple spontaneously acting machine (SAM) may appear to leave no room for machine ethics, or a fortiori moral theory, in the design of automated vehicles. Instead, the design goal of a minimally-harming machine might rather fall under the purview of product safety [5], and more precisely still, the minimization of physical harm to any human agent with whom the AV interacts. Under this view, if ethics applies at all, it does so only when evaluating certain structural design choices in the larger sociotechnical system of which physical automated vehicles are a relatively small part: we may, for instance, find it unethical that an AV manufacturer fails to disclose the full breadth (or limitations) of an AV's functions to its end user, or that certain AV service providers collect road user data for applications that are opaque or vaguely defined, or use this data to infer new facts about these users without their consent. These ethical concerns are typically captured by so-called 'ethical design principles' such as accountability, transparency, responsibility, or privacy [10, 16], and have played a central role in the current normative landscape of intelligent system design. Let us call this particular application of machine ethics structural,¹ since it yields normative criteria that impact the design structure of the technical artifacts themselves, but also the collaborative structure of the different human stakeholders involved in the artifact's design, deployment and larger product lifecycle.

¹ Among the now numerous ontologies of machine ethics, our concept of structural machine ethics coincides with, but is not limited to, 'Ethics of Designers' [10], since it focuses exclusively on the actions of human agents in sociotechnical systems, but expands this class of scrutable agents to include other stakeholders such as service providers and regulators.

By the lights of these distinctions and those made in Sect. 2.2, it would then seem that automated vehicles are wholly the concern of structural ethics in regular driving conditions. This is so because, in contradistinction to our biased coffee machine example, the automated vehicle does not make critical discriminatory decisions which impact multiple human agents during normal function within its Operational Design Domains (ODDs). It is possible, in other words, for the vehicle's tactical planning to harm no one, since the claims to safety of all road users are jointly satisfiable by the vehicle. In ODDs such as an open highway or a private industrial road, this would imply that the vehicle need only satisfy the AV occupant's claim to safety, while in more complex ODDs such as peri-urban driving, this might imply observing additional safety standards, or additional constraints to respect alongside the satisfaction of the occupant's claim, such as ensuring longer following distances or lower speeds when overtaking vulnerable road users such as cyclists. In the terms of our biased coffee machine example, this would be akin to satisfying the claims every customer has to coffee. Structural machine ethics allows us to appraise and eventually improve how the machine treats users, and especially end-users in practice, ensuring that it does so transparently, responsibly and with respect for the user's privacy, but its purview does not extend past these bright lines.

Consider now, however, what structural ethics recommends in situations where automated vehicles must contend with multiple claims. Importantly, this does not exclusively mean that the machine must consider the welfare of multiple end users; it also, intriguingly, covers the more common case of a machine which, in pursuit of the welfare of one human agent, may indirectly cause harm to another in ways which appear ethically salient to this second user, other human agents, or society at large. More succinctly, harm to human agents other than the end user is an external effect of the pursuit of the end user's safety, but each agent holds a moral expectation as to how a balance should be struck. Rather lyrically, this might occur when our coffee machine must decide whether to drop its last remaining dose of sugar into the cup of its very thirsty end user, or into that of a person with dangerously low blood sugar levels. More pragmatically, this could occur in abnormal driving conditions in automated vehicles, such as when an inevitable collision or risky manoeuvre pits passenger safety against that of any other road user.


In both cases, beyond trivially recommending that the resultant decisions of both machines be transparent, responsible, accountable, and private where applicable, structural ethics fails to recommend a solution as to how to mitigate these competing claims; it cannot itself prescribe how to distribute safety or sugar in situations where not everyone can receive a share. Instead, the clear ethical salience of these latter types of cases suggests the need for a second and conceptually separate application of ethics, what we will call decisional ethics. Here, the goal is not to ensure that the structure of intelligent system design begets accountable or privacy-respecting systems, but rather that the resultant decisions of these systems align with the moral expectations of the individuals whom they affect. Decisional ethics provides normative criteria and decision procedures which impact or constrain the tactical planning of a given machine in contexts of ethical salience. In such situations, these criteria allow the machine to recognize and minimize the moral harm brought about by the pursuit of its practical goals, and to mitigate the competing moral claims individuals hold over the behaviour of the machine when not all claims can be jointly satisfied. Decisional ethics may recommend, for instance, that an automated coffee machine operate on a 'first come, first served' basis when distributing limited resources to multiple customers, or that it not distribute caffeinated drinks to children, and never more than 6 cups per day per customer. In the case of automated vehicles, decisional ethics might recommend that the safety of child pedestrians should always be prioritized over passenger safety or comfort, or that the vehicle should always privilege the safety of the most vulnerable road user during unavoidable collisions. In sum, decisional ethics ensures that machines behave as if they were moral agents in situations where moral agency is likely required or expected in the eyes of society.

Rather than creating artificial moral agents which simulate human processes of ethical reasoning and judgment to achieve this end, decisional ethics achieves ethical responsiveness in machine behaviour via a process of moral contextualization of the machine's operational design domain. During the design process, human experts first define the relevant marks of ethical salience which occur naturally in the ODD. These are facts or features of a given design context which may carry ethical weight, such as the fact that a given individual is the AV's passenger, or that caffeine consumption can be harmful to children. Given these marks of ethical salience, human experts then devise what we can call profiles: various decision procedures that take these moral facts into account and provide different methods for mitigating the degree of the machine's responsiveness to each claim in its environment, for instance 'first come, first served for caffeinated drinks, unless the customer is a child', or 'privilege passenger safety in unavoidable collisions, unless this action is lethal to another road user'. While the ontological structure, axiology or founding principles of these different profiles may beneficially take inspiration from established moral theories such as utilitarianism, contractarianism, or Kantian ethics, these profiles need not robustly or exhaustively track the types of decisional processes or resulting normative recommendations these theories typically yield. Instead, this process of moral contextualization aims at generating profiles which track common sense morality, the content of which is ideally informed by descriptive ethics, or empirical studies into acceptability.
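As an illustration only, such a profile can be written down as an explicit decision procedure. The sketch below is hypothetical (the field names, the age threshold, and the daily limit are ours, chosen to mirror the examples above) and involves no simulation of moral reasoning: it simply filters and orders claims.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CoffeeClaim:
    customer_id: str
    age: int
    caffeinated: bool
    arrival_order: int
    cups_today: int


def coffee_profile(claims: List[CoffeeClaim]) -> Optional[CoffeeClaim]:
    """Hypothetical decisional-ethics profile: 'first come, first served,
    unless the customer is a child asking for a caffeinated drink, and
    never more than 6 cups per day per customer'."""
    admissible = [
        c for c in claims
        if c.cups_today < 6 and not (c.caffeinated and c.age < 16)
    ]
    if not admissible:
        return None
    return min(admissible, key=lambda c: c.arrival_order)
```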
The next section briefly introduces one such approach to decisional ethics, the Ethical Valence Theory (EVT), contextualized for unavoidable collision scenarios in automated vehicle driving.

3.2 Ethical Valence Theory

The approach behind the Ethical Valence Theory is best understood as a form of moral claim mitigation, the fundamental assumption being that any and every road user in the vehicle's environment holds a certain claim on the vehicle's behaviour, as a condition of their existence in the decision context. Conceptually, the EVT paints automated vehicles as a form of ecological creature [14, 15], whose agency is directly influenced by the claims of its environment. Given the particulars of a decision context, these claims can vary in strength: a pedestrian's claim to safety may be stronger than the passenger's if the former is liable to be more seriously injured as a result of an impact with the AV.² The goal of the AV is to maximally satisfy as many claims as possible as it moves through its environment, responding proportionally to the strength of each claim. When a conflict across claimants occurs, the vehicle will adopt a specific mitigation strategy (or profile) to decide how to respond to these claims, and which claims to privilege.

² Analytically, individual claims can be understood as contributory or pro tanto reasons for the vehicle's acting in a certain way [7, 26].

Within the structure of the Ethical Valence Theory, the role of claim mitigation is to capture the contribution that moral theory could make to automated vehicle decision-making. Claims, in other words, allow the vehicle to ascertain what morality requires in ethically salient contexts, by tracking how fluctuations in individual welfare affect the rightness or wrongness of an AV's actions. More specifically, the EVT takes analytical inspiration, in some respects, from both the 'competing claims' model popular in distributive ethics [25, 34] and Scanlonian contractualism [29]. These theories, far from providing the objectively 'right' answer to AV ethics, provide fruitful starting points for decisional ethics in automated vehicles for one simple reason: they allow multiple normatively relevant factors (such as agent-relative constraints and options [19]) to play a role in what matters morally. Since the goal of decisional ethics is to track and eventually satisfy public moral expectations as to AV behaviour, it is important that the use of moral theory does not frustrate this aim by revising common sense morality, or more specifically, the kinds of factors or features that can ground claims. In this sense, if a passenger feels that she holds a certain claim to moral partiality over the actions of the vehicle, for instance in virtue of the fact that she owns or has rented the AV, or views it as a type of moral proxy for her actions [18, 20], it is important that moral partiality feature in the marks of ethical salience originally identified by human experts, and thus in the resultant claim mitigation strategy. While some moral theories such as contractualism naturally accommodate these sorts of factors, viewing them as legitimate sources of moral obligation, others, such as utilitarianism, do not. For a more exhaustive discussion of the interaction between the EVT and moral theory, see [11].

Importantly, the correlation between the recommendations of even contractualist-type moral theories and public expectations as to AV behaviour only goes so far. To this end, and in order to take public expectations seriously, moral theory must be complemented by empirical accounts of acceptability. Within the structure of the Ethical Valence Theory, this is accomplished through the concept of a valence: a weight added to the strength of each individual's claim in the vehicle's environment, which fluctuates in relation to how that individual's identity corresponds to a set of acceptability-specific criteria, such as different age groups, genders, or road-user types. These criteria are then organized into categories, and eventually hierarchies, delineating various strengths of valences. Here, we must be careful not to confuse these sorts of acceptability criteria with simple societal biases, and thereby view a valence as something which represents a given individual's 'popularity' in society (as was arguably the case within the Moral Machine Experiment [3]). The aim of valences is not to thwart important moral principles such as equality or impartiality by picking out the darlings of a society in the AV's decision context. Rather, it is to ensure that important normative factors and features (such as relative user vulnerability, fairness or even liability) effectively impact AV decision-making, tipping the scale, so to speak, in situations where the assessment of physical harm alone may not result in an acceptable decision on the part of the AV. Put another way, it is only through the inclusion of valences in claim mitigation that we arrive at decision recommendations with that casuistic flavour so characteristic of common-sense morality, such as 'prioritize the safety of the most vulnerable road user, unless this action results in the death of a child'.

The final conceptual piece of the Ethical Valence Theory is the notion of a moral 'profile': a specific decision procedure or method which mitigates the different claims and valences of road users. Essentially, each moral profile provides a specific criterion of rightness: a maxim or rule which decides the rightness or wrongness of action options. In this way, a moral profile also dictates which claims the AV is sensitive to and when, and how those claims are affected by a given individual's valence strength. While there are surely many ways to organize the mitigation process between valences and claims, one way to reflect the aforementioned concept of harm as an external effect of the pursuit of end-user safety is to make a preliminary categorial separation between those users inside the AV and those outside of it, thereby painting claim mitigation as the balancing of the passengers' claims against those of users in the AV's surrounding environment. There are then a number of potential mitigations across these two interest groups: a risk-averse altruist moral profile would, for example, privilege the user who has the highest valence in the event of a collision, so long as the risk to the AV's passenger is not severe. Intuitively, this type of profile supports the view that the passenger of the AV may be willing to incur some degree of harm in order to respond to the claims of other users in the traffic environment, but not so much that he or she will die or suffer seriously debilitating injuries as a result.
In the end, there is no single profile that will once and for all resolve the moral and social dilemma of automated vehicle behaviour. Instead, the choices of moral profile and of valence criteria exist as different entry points for human control in automated decision-making. Importantly, these profiles, claims and valences need not be atemporal, universal or unilateral. Rather, they can and likely ought to be continuously reviewed and updated, as AV adoption, public acceptability and legal constraints evolve within each operational design domain.
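For illustration, the sketch below encodes claims, valences, and a 'risk-averse altruist' profile as a selection rule over candidate manoeuvres. The valence hierarchy, the harm scale, and the severity threshold are hypothetical values chosen for the example; they are not figures proposed by the authors, and a deployed system would take them from the review process just described.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical valence hierarchy: higher means a stronger weight on that user's claim.
VALENCE = {"child_pedestrian": 4, "pedestrian": 3, "cyclist": 3,
           "passenger": 2, "car_occupant": 1}


@dataclass
class Claim:
    user_type: str
    expected_harm: float  # estimated injury severity in [0, 1] under this manoeuvre
    inside_av: bool


def claim_strength(c: Claim) -> float:
    # A claim grows with expected harm and is weighted by the user's valence.
    return VALENCE[c.user_type] * c.expected_harm


def risk_averse_altruist(manoeuvres: List[List[Claim]], severe: float = 0.8) -> int:
    """Pick the manoeuvre that best protects the strongest external claim,
    unless doing so exposes the AV's passenger to severe harm."""
    def passenger_risk(claims: List[Claim]) -> float:
        return max((c.expected_harm for c in claims if c.inside_av), default=0.0)

    def external_burden(claims: List[Claim]) -> float:
        return max((claim_strength(c) for c in claims if not c.inside_av), default=0.0)

    admissible = [i for i, cl in enumerate(manoeuvres) if passenger_risk(cl) < severe]
    admissible = admissible or list(range(len(manoeuvres)))
    return min(admissible, key=lambda i: external_burden(manoeuvres[i]))
```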

3.3 The Technical Implementation of Ethical Decision-Making

The human driver's decision-making process under normal conditions can be divided into three phases [23]: strategic, tactical and operational (represented by the Plan block of Fig. 1). This structure is used in most implementations of decision-making algorithms for AVs (except in end-to-end planning [30]), and its use in real prototypes dates back to [31]. It is the tactical layer that holds the responsibility of accounting for the behaviour of local road users and, thus, of dealing with any dangerous situations that may arise which impose some risk distribution concerning the AV's actions towards the other users in the environment.

Fig. 1 AV’s decision-making structure used in [31]


A plethora of different algorithms have been used to implement the tactical layer, from the common Markovian approaches [9, 27, 32] and end-to-end learning [17] to more low-level methods such as RRT [12] and fuzzy approaches [6]. The work done in [9] will be used as a typical decision-making algorithm and modified, in both deliberation under normal situations and deliberation in dilemma scenarios, to adhere to the approach presented in this chapter to treat the AV's problem.

In the original version of the Markov decision process (MDP) proposed in [9], during the evaluation of rewards for each state, according to the action considered and the next state predicted, all actions that resulted in a collision were removed from the viable action set for the policy optimization but left as an option during the evaluation of Bellman's Eq. (1). The cost (negative reward) was fixed and equal for every collision, and therefore resulted in an equal degree of repulsion from the collision state, independently of the road user involved.

$$V_{t+1}(s_t) = \max_{a \in A}\left[ R(s_t, a, s_{t+1}) + \gamma \cdot \sum_{s_{t+1}} P(s_{t+1} \mid s_t, a)\, V_t(s_{t+1}) \right] \quad (1)$$
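A minimal numerical sketch of this backup, under one plausible reading of the procedure just described, is given below. The state space, action set, and transition model are hypothetical placeholders; only the structure of Eq. (1) and the exclusion of collision actions from the final policy are meant to be illustrated.

```python
def value_iteration(states, actions, P, R, collides, gamma=0.95, iters=100):
    """Bellman backup of Eq. (1) with a fixed collision cost.

    P[s][a]        -> dict {s_next: probability}  (hypothetical transition model)
    R(s, a, s2)    -> scalar reward, as in Eq. (2)
    collides(s, a) -> True if taking action a in state s leads to a collision
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: max(
                sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in P[s][a].items())
                for a in actions
            )
            for s in states
        }
    # Policy extraction: collision actions are removed from the viable set,
    # even though their fixed negative cost was used during the evaluation above.
    policy = {}
    for s in states:
        viable = [a for a in actions if not collides(s, a)] or list(actions)
        policy[s] = max(
            viable,
            key=lambda a: sum(
                p * (R(s, a, s2) + gamma * V[s2]) for s2, p in P[s][a].items()
            ),
        )
    return V, policy
```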

This procedure originates from the consideration that, in normal situations, the decisional process should not be polluted by the ethical evaluation of the current environment configuration. However, as said in Sect. 2, clear constraints on the risk that the AV's actions might bring to other road users are needed in all situations. Equation (2) represents this previously defined reward at a state $s_t$, given that an action $a_t$ will be executed, leading the AV to the state $s_{t+1}$:

$$R(s_t, a_t, s_{t+1}) = \begin{cases} s_{\mathrm{perf}} + s_{\mathrm{conseq}}, & \text{if there are no collisions} \\ c_{\mathrm{col}}, & \text{otherwise} \end{cases} \quad (2)$$

The term $s_{\mathrm{conseq}}$ is composed of a cost³ related to adherence to the traffic code and another related to the physical proximity between the AV and other road users, evaluated by Eq. (3). This quantity measures the risk of the action to other road users given the AV's current state, by evaluating how much the AV's trajectory converges towards the predicted trajectories of others (assuming that proximity means an increased risk of injury). Again, the cost calculated by Eq. (3) (always negative) does not account for the nature of the road user in question, while the term $s_{\mathrm{conseq}}$ represents the sum of each $s_{\mathrm{prox}}$ calculated by comparing each possible AV-road user pair.

$$s_{\mathrm{prox}} = c_{st} + w_v \cdot \left[ \Delta v^{\mathrm{proj}}_{t+1} - \Delta v^{\mathrm{proj}}_{t} \right] \quad (3)$$

³ Cost and reward are used interchangeably to represent the quantity (negative or positive, respectively) which will determine the AV's optimal value.
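A hedged sketch of how these reward terms might be assembled in code is shown below; the collision cost value, the traffic-code cost, and the velocity-projection inputs are hypothetical stand-ins for quantities the chapter only specifies at the level of Eqs. (2) and (3).

```python
def s_prox(c_st: float, w_v: float, dv_proj_next: float, dv_proj_now: float) -> float:
    """Proximity cost of Eq. (3): penalise the AV's trajectory converging
    towards another road user's predicted trajectory (always negative)."""
    return c_st + w_v * (dv_proj_next - dv_proj_now)


def reward(collision: bool, s_perf: float, traffic_code_cost: float,
           prox_terms: list, c_col: float = -1000.0) -> float:
    """Reward of Eq. (2): performance plus consequence terms in the
    collision-free case, a fixed negative cost c_col otherwise."""
    if collision:
        return c_col
    s_conseq = traffic_code_cost + sum(prox_terms)  # one s_prox per AV-road-user pair
    return s_perf + s_conseq
```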

Given that the EVT considers that each road user has a claim to safety that might vary in intensity, this should be reflected in the risk measurement as another parameter of the cost definition. As such, the $s_{\mathrm{conseq}}$ term would be given by Eq. (4), where the weight $w_{\mathrm{EVT}_i}$ represents the strength relationship between the ethical valence of the AV (its passengers) and that of the other road user being considered. Such a parameter does not change with time or with any other property; it depends only on the valence of each road user being considered. The same parameter would be valid for the cost given to a collision, formulating the new reward function as Eq. (5) shows.

$$\sum_i$$