The International Library of Ethics, Law and Technology 24
Giovanni Rubeis
Ethics of Medical AI
The International Library of Ethics, Law and Technology Volume 24
Series Editors: Bert Gordijn, Ethics Institute, Dublin City University, Dublin, Ireland; Sabine Roeser, Philosophy Department, Delft University of Technology, Delft, The Netherlands

Editorial Board Members: Dieter Birnbacher, Institute of Philosophy, Heinrich-Heine-Universität, Düsseldorf, Nordrhein-Westfalen, Germany; Roger Brownsword, Law, King's College London, London, UK; Paul Stephen Dempsey, University of Montreal, Institute of Air & Space Law, Montreal, Canada; Michael Froomkin, Miami Law, University of Miami, Coral Gables, FL, USA; Serge Gutwirth, Campus Etterbeek, Vrije Universiteit Brussel, Elsene, Belgium; Bartha Knoppers, Université de Montréal, Montreal, QC, Canada; Graeme Laurie, AHRC Centre for Intellectual Property and Technology Law, Edinburgh, UK; John Weckert, Charles Sturt University, North Wagga Wagga, Australia; Bernice Bovenkerk, Wageningen University and Research, Wageningen, The Netherlands; Samantha Copeland, Technology, Policy and Management, Delft University of Technology, Delft, Zuid-Holland, The Netherlands; J. Adam Carter, Department of Philosophy, University of Glasgow, Glasgow, UK; Stephen M. Gardiner, Department of Philosophy, University of Washington, Seattle, WA, USA; Richard Heersmink, Philosophy, Macquarie University, Sydney, NSW, Australia; Rafaela Hillerbrand, Karlsruhe Institute of Technology, Karlsruhe, Baden-Württemberg, Germany; Niklas Möller, Stockholm University, Stockholm, Sweden; Jessica Nihlén Fahlquist, Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden; Sven Nyholm, Philosophy and Ethics, Eindhoven University of Technology, Eindhoven, The Netherlands; Yashar Saghai, University of Twente, Enschede, The Netherlands; Shannon Vallor, Department of Philosophy, Santa Clara University, Santa Clara, CA, USA; Catriona McKinnon, Exeter, UK; Jathan Sadowski, Monash University, Caulfield South, VIC, Australia
Technologies are developing faster and their impact is bigger than ever before. Synergies emerge between formerly independent technologies that trigger accelerated and unpredicted effects. Alongside these technological advances, new ethical ideas and powerful moral ideologies have appeared which force us to consider the application of these emerging technologies. In attempting to navigate utopian and dystopian visions of the future, it becomes clear that technological progress and its moral quandaries call for new policies and legislative responses. Against this backdrop, this book series from Springer provides a forum for interdisciplinary discussion and normative analysis of emerging technologies that are likely to have a significant impact on the environment, society and/or humanity. These include, but are by no means limited to, nanotechnology, neurotechnology, information technology, biotechnology, weapons and security technology, energy technology, and space-based technologies.
Giovanni Rubeis Biomedical and Public Health Ethics Karl Landsteiner University of Health Sciences Krems an der Donau, Austria
ISSN 1875-0044  ISSN 1875-0036 (electronic)
The International Library of Ethics, Law and Technology
ISBN 978-3-031-55743-9  ISBN 978-3-031-55744-6 (eBook)
https://doi.org/10.1007/978-3-031-55744-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.
Contents

Part I  Foundations

1  Introduction ... 3
   1.1  Dawn of a New Age in Medicine ... 3
   1.2  The Big Transformation ... 6
        1.2.1  Doing Things: Practices ... 7
        1.2.2  Changing the Game: Relationships ... 8
        1.2.3  Transforming the Basic Structure: Environments ... 8
   1.3  Structure of This Book ... 9
   1.4  Scope, Objective, and Limitations ... 11
   References ... 11

2  Artificial Intelligence: In Search of a Definition ... 15
   2.1  Defining AI ... 16
        2.1.1  AI as a Simulation of Human Intelligence ... 16
        2.1.2  The Behavioral Turn: Robotic AI vs. Symbolic AI ... 18
   2.2  Instead of a Definition: Crucial Features ... 19
   2.3  “Artificial Intelligence”: A Critical Perspective ... 19
   References ... 21

3  MAI: A Very Short History and the State of the Art ... 23
   3.1  The Rise of MAI ... 24
   3.2  Machine Learning and Deep Learning ... 26
   3.3  Further Developments ... 29
        3.3.1  Big Data Approach ... 30
        3.3.2  Clinical Decision Support Systems (CDSS) ... 34
        3.3.3  Natural Language Processing (NLP) ... 37
        3.3.4  Computer Vision ... 38
        3.3.5  Internet of Things (IoT) ... 39
        3.3.6  Robotics ... 43
   3.4  The Shape of Things to Come ... 45
   References ... 47

4  Ethical Foundations: Medical Ethics and Data Ethics ... 55
   4.1  Medical Ethics: Definitions ... 56
   4.2  Medical Ethics: Topics ... 56
        4.2.1  Autonomy ... 58
        4.2.2  Therapeutic Relationship ... 60
        4.2.3  Trust and Empathy ... 62
        4.2.4  Confidentiality and Privacy ... 65
        4.2.5  Justice and Equity ... 67
        4.2.6  Avoiding Harm: Patient Safety ... 69
   4.3  Ethics of AI and Big Data ... 70
        4.3.1  AI Ethics ... 71
        4.3.2  Big Data Ethics ... 77
   4.4  Conclusion ... 80
   References ... 81

Part II  Ethical Analysis

5  Practices ... 91
   5.1  Collecting Data ... 91
        5.1.1  Confidentiality and Informational Privacy ... 92
        5.1.2  Informational Privacy and Autonomy ... 94
        5.1.3  New Perspectives on Informed Consent ... 97
        5.1.4  Technical Solutions: Blockchain and Federated Learning ... 101
        5.1.5  Non-technical Alternatives: Data Ownership Models ... 105
        5.1.6  Non-technical Alternatives: Regulatory Models ... 108
   5.2  Operationalizing Data ... 109
        5.2.1  Reductionism ... 111
        5.2.2  Bias ... 117
        5.2.3  Justifying Decisions ... 133
   References ... 141

6  Relationships ... 151
   6.1  Therapeutic Relationship ... 153
        6.1.1  Empathy in a MAI Setting ... 153
        6.1.2  Empathetic MAI ... 157
        6.1.3  Shared Decision-Making and Autonomy ... 159
        6.1.4  Responsibility and Liability ... 161
        6.1.5  The Role of Doctors ... 165
        6.1.6  The Role of Patients ... 168
        6.1.7  The Role of MAI ... 170
        6.1.8  Models of a MAI-Enhanced Therapeutic Relationship ... 172
        6.1.9  What About Democratization? ... 174
   6.2  The Nurse-Patient Relationship ... 175
        6.2.1  The MAI-Enhanced Nursing Gaze ... 176
        6.2.2  Nursing Robots and the Human-Machine Interaction ... 179
        6.2.3  The Role of Nurses ... 185
   6.3  The Therapeutic Relationship in Mental Healthcare ... 187
        6.3.1  Nature of the Therapeutic Relationship ... 189
        6.3.2  Mental Health Disorders and Patients ... 189
        6.3.3  The Therapeutic Project ... 192
        6.3.4  Impact of MAI on the Therapeutic Relationship ... 194
   References ... 203

7  Environments ... 213
   7.1  Work Environments ... 216
        7.1.1  Digital Literacy ... 217
        7.1.2  Integration of MAI into Clinical Practice ... 218
        7.1.3  Artificial Agents as Colleagues ... 220
        7.1.4  Replacement of Healthcare Professionals by MAI ... 222
   7.2  Personal Environments ... 224
        7.2.1  Ecosystems of Care ... 227
        7.2.2  Medicalization and Healthism ... 230
        7.2.3  A Question of Agency ... 236
   7.3  Urban Environments ... 237
        7.3.1  Ethical Implications ... 239
   References ... 241

8  Instead of a Conclusion: Seven Lessons for the Present and an Outlook ... 247
   8.1  Seven Lessons for the Present ... 248
   8.2  Outlook ... 251
   References ... 252

Index ... 253
Part I
Foundations
Chapter 1
Introduction
Abstract  Medicine is unique in that it is an art and a science. How MAI will affect medicine's unique nature is still an open question. I argue that in order to assess the ethical implications of MAI, we have to focus on its impact in terms of transforming medicine and healthcare as we know it. In this chapter, I provide my basic understanding of MAI as a transformative force that changes what we do, how we encounter each other, and the structures we act in. Hence, the ethical analysis in the following chapters will focus on three impact areas of MAI, which are practices, relationships, and environments. The chapter closes with an overview of the structure, topics, and limitations of the book.

Keywords  Artificial intelligence · Big data · Clinical practice · Digital transformation · Disruption · Evidence-based medicine (EBM) · Healthcare · Hype cycle
1.1 Dawn of a New Age in Medicine
At the dawn of modern medicine in the late nineteenth century, Sir William Osler, a Canadian physician, wrote the now famous sentence "Medicine is a science of uncertainty and the art of probability" (Bean, 1954). What Osler, who is today celebrated as one of the fathers of modern medicine, wanted to express is the distinct nature of medical knowledge that sets it apart from other fields. On the one hand, medical knowledge is never purely theoretical. Although it contains a lot of theoretical information, for example on anatomy or physiological processes, it does not simply describe the biology of the human body. Instead, medical knowledge is always applied knowledge, meaning that it is meant to inform actions. On the other hand, medical knowledge, although based on strictly scientific methods, is often ambiguous. Physicians always struggle with the contrast between the functioning of the human body as such, which mostly means the average individual, and the actual individual they are treating. What makes medicine an art and a science is that clinical reasoning, although based on scientific knowledge, always entails an individual component, which is the discretion of the physician. There is an epistemic gap between the empirical evidence, say from a large cohort study on thousands of
patients, and the characteristics of the individual patient (Feinstein & Horwitz, 1997). Analyzing data and weighing information against the backdrop of an individual patient also means going beyond vital functions and lab results. It means viewing the patient as a person who is embedded in a web of social relations and shaped by social determinants like age, gender, and socio-economic status. Clinical reasoning thus implies using the best available evidence to decide in an individual case and treating a patient as a person. The crucial challenge here is to base this decision on objective scientific facts and at the same time contextualize these facts with the individual situation of a specific person. Hence the nature of medicine as art and science. Throughout medical history, attempts have been made to improve the knowledge base of clinical reasoning to enable better decision-making by physicians. The latest manifestation of this development can be seen in the implementation of evidence-based medicine (EBM) as the leading paradigm in the medical field. The concept of EBM, developed in the 1970s (Cochrane, 1972), was established as the gold standard of clinical reasoning in the 1990s (EBM Working Group, 1992; Sackett et al., 1996). Its basic idea is that clinical practice should be informed by the best available empirical evidence. EBM defines a hierarchy of medical knowledge, with reviews and meta-analyses at the top, followed by randomized controlled trials (RCTs). Further down are guidelines by medical associations and the expertise of specialists. The lowest level of evidence is individual experience, a fact that has been criticized since the introduction of EBM as the new paradigm in medicine (Tonelli, 1999). In a way, EBM may be seen as the embodiment of Osler's definition, the ideal method for dealing with uncertainty and probability to ensure best practice (Rysavy, 2013). However, one could also criticize EBM for ignoring the aspect of art and overemphasizing the science aspect (Tonelli, 1999). Following this line of reasoning, one could argue that medical knowledge consists of three elements: knowledge from experience, pathophysiologic knowledge, and scientific evidence (Tonelli, 2017). EBM is nevertheless without a doubt the touchstone when it comes to medical knowledge. Clinical reasoning and decision-making, however, do not rest solely on medical knowledge. They also require integrating information about the patient's values, goals, and experiences. The skill, experience, and intuition of doctors are what achieve this, which adds an aspect of art to medicine as science. The important point here is that the standardization and quantification that EBM implies must not be understood as a reductionism. There has to be room for the creative reasoning skills of the individual physician. Otherwise, patients would be reduced to quantifiable health data and medical treatment would lose its quality as a human endeavor. The debate between those who see EBM as the fulfillment of Osler's point of view and those who consider it too limited still continues. And yet, this debate might already be obsolete. According to some commentators, we are now at the threshold of a new era in medicine that offers the ultimate technology for realizing EBM and a more personalized treatment at the same time (Topol, 2019): Medical artificial intelligence (MAI) is supposed to be the technology that unlocks the potential of medical information. Media reports enthusiastically
describe machines outperforming doctors in diagnosing cancer or detecting irregularities in X-ray images. Smart wearables offer new ways of collecting and processing data from the everyday life of individuals, thus giving them control over their own health and making healthcare services more accessible. Smart robots could support or even replace human labor from the operating table to the nursing home. Behind all of these ground-breaking applications is MAI as the enabling technology. One of the characteristics of the debate is that MAI is not only considered a new and more sophisticated tool for specific purposes, but the universal enabler of a radically new medicine. In other words, MAI will not only improve certain medical practices, but revolutionize medicine itself. Going back to the quote by Osler, MAI offers the possibility to optimize the science of uncertainty and perfect the art of probability. Whenever such high hopes are uttered, one has to critically ask whether this supposed revolution is really taking place or whether we are dealing with a hype. Gartner, a US-based tech firm, introduced the model of hype cycles in 1995 (Dedehayir & Steinert, 2016; Greenhill & Edmunds, 2020). The Gartner Hype Cycle Model describes the timeline for the adoption of new technologies in five phases: Phase 1 is defined by the innovation trigger that sparks interest in the new technology. Phase 2 is called the peak of inflated expectations and signifies a period of enthusiasm and exaggerated expectations. This is followed by phase 3, the trough of disillusionment, a kind of hangover phase after the high expectations have been shattered by failed implementation or mass production. Phase 4 is the slope of enlightenment, signifying a clearer understanding of the benefits and values of the technology and a rekindled interest by the industry. In phase 5, the plateau of productivity, mainstream adoption starts as the technology enters the market and continues to expand from a niche product to a broader implementation. Some commentators believe that we are entering phase 5, the plateau of productivity, right now, meaning that artificial intelligence technologies are about to become part of our everyday lives and work environments (Greenhill & Edmunds, 2020). It is easy to name numerous examples that underline this assumption, from speech assistants in the home environment, like Siri and Alexa, that order food or regulate the temperature on user command, to the chatbots users interact with when contacting their mobile phone provider, or the algorithm on a shopping website that knows user preferences and suggests which book to order or TV show to watch next. But what about medicine and healthcare? Commentators see a growing acceleration of research in MAI, but also accelerated deployment and implementation (Kaul et al., 2020). There are technological factors that enable this process, especially the growing speed of computational power, the availability of vast amounts of data, and the adoption of cloud computing that provides huge capacities for data storage and exchange (Agrawal & Prabakaran, 2020; Greenhill & Edmunds, 2020). Besides these aspects, there are other driving forces at work for the propagation of MAI.
Following the leading experts in the field as well as the results from numerous empirical studies, MAI has the potential to usher in a new kind of medicine that makes better use of available health-related data, is more precise, more efficient in terms of saving time, costs, and personnel, and is better suited for a personalized
treatment tailored to an individual's health needs (Alonso et al., 2019; Mishra, 2022; Topol, 2019). Based on these assumptions it is safe to say that we are on the brink of the immersion of MAI in everyday clinical practice. This certainly implies a transformation of said practice and the structures and circumstances in which it takes place. In order to analyze the ethical aspects of MAI, we have to consider the advantages and challenges connected to this transformative process. If we look at the advantages and challenges MAI provides, we may ask what this means for medicine and healthcare. The answer is that medicine and healthcare are no exception to the general trend in society where AI is a form of smart agency that reshapes how people do things, their interactions, and the structures they do it in (Floridi et al., 2018). Accordingly, it is the fundamental hypothesis of this book that we can best understand the spectrum of ethical implications of MAI if we start from the impact the technology will have and already has on practices, relationships, and environments in the medical field. A lot has been said about the impact of MAI, its transformative power, and disruptive potential. Some commentators use terms like revolution to denote the fundamental changes that the further development and implementation of MAI will bring about (Benet & Pellicer-Valero, 2022; Coppola et al., 2021; Swan, 2012; Topol, 2019). What is revolutionary about MAI is its potential to completely transform the way we practice medicine, including research, prevention, diagnosis, and therapy, as well as the patient-doctor relationship. Some believe that patients will benefit most from this revolution since MAI will improve patient-related services and outcomes, unlock more time for the doctor-patient relationship by reducing time-consuming data analysis, and empower patient autonomy by providing tools for self-monitoring and self-management (Topol, 2019). Another term commentators often use to describe the transformation triggered by MAI is disruption (El Khatib et al., 2022; Mazurowski, 2019; Patel et al., 2009; Galmarini & Lucius, 2020). In the context of technology, the term disruption refers to the way an innovation replaces hitherto used technologies, established practices, and structures (Christensen, 1997). It is characteristic of a disruption that it happens abruptly and sometimes in a destructive manner, unlike an evolutionary process that implies a careful integration over time. A disruption might bring about change for the better, but can also bring negative consequences for those involved. In the context of MAI's impact, commentators see the major disruption with regard to the workflow and practices of healthcare professionals. A crucial issue here is the question of whether MAI systems will replace human labor in the healthcare sector on a large scale.
1.2 The Big Transformation
Whether the mostly positively connoted term revolution or the term disruption, which also carries negative implications, is the right one is an open question. What is certain, however, is that MAI can be seen as a transformative technology (Phillips,
2007): it brings about a quantitative as well as a qualitative change, meaning that by using MAI, we can do things we have always done in a better (more efficient, more personalized) manner and we can also do things we could not have done before, like integrating and analyzing huge amounts of data in real-time. That means that change and transformation are the key phenomena we are witnessing right now in the context of MAI, and this will surely continue throughout the coming years. But change and transformation of what? Into what? Before we can even begin to assess whether the transformation by MAI will mean a change for the better or a turn to the worse, we have to determine what exactly is the object of this transformation. What is disrupted, revolutionized, reshaped, or obliterated by MAI?
1.2.1 Doing Things: Practices
It seems obvious that AI in general profoundly affects the way we do things. This can easily be understood by looking at the way AI changes the business world and also our daily lives. The fact that companies are able to track our online behavior and build algorithms that recommend us products tailored to our (supposed) individual preferences has changed practices in marketing and retail. Using navigation apps, automated lawn mowers, smart home systems, and, in the very near future, even self-driving cars changes some of our daily activities for good by providing us with more possibilities, more comfort, and, hopefully, less risk. When looking at MAI, it is therefore reasonable to start with analyzing how this technology will affect practices in medicine and healthcare. What sets MAI apart from other healthcare technologies is its potential to critically enhance or constrain the practices of healthcare professionals (Gama et al., 2022). The whole idea of collecting large amounts of data and organizing them in a specific way to optimize their informational value aims at changing the practices of doctors, nurses, therapists, administrative personnel, and public health professionals. MAI is supposed to make clinical practice more precise and efficient by making better use of the vast amounts of complex data that are linked to an individual's health. One goal is to personalize health services by tailoring them to individual needs. Another perspective is to optimize the workflow of clinicians as well as the organizational structures in healthcare institutions, which in the end serves the goals of improving the patient experience and reducing costs. Given this spectrum of possible applications and objectives, it is crucial to analyze the impact of MAI on practices in order to figure out the ethical implications. The focal point of the ethical analysis of practices will be the impact MAI has on clinical reasoning, decision-making, and action. Since this means the integration of big data approaches with MAI, I call the resulting practices smart data practices. The fact that medical practices become more and more data-driven and enhanced by MAI requires an analysis of how individual patient data is collected and operationalized. What are the data security and data safety issues connected to monitoring and surveillance technologies, smart wearable sensors, mobile health
(mHealth) applications, and the electronic health record (EHR)? How does the design and functioning of algorithms enable bias and the datafication of patients? How can patient safety and transparency of information be maintained given the black-box character of many MAI systems? How does the use of predictive analytics and clinical decision support systems (CDSS) affect clinical epistemology, decision-making, and the autonomy of patients?
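The bias question in particular can be made concrete with a small, purely illustrative sketch. It uses synthetic data and the scikit-learn library (my own assumptions for the sake of illustration, not tools discussed in this book) to show how a model trained on data in which one patient group is underrepresented can systematically fail that group, even though the code never refers to group membership:

```python
# Illustrative sketch only: synthetic data, hypothetical features.
# Shows how underrepresentation in training data can yield biased predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_patients(n, group):
    """Toy vitals; the feature-disease relationship is reversed in group 1."""
    x = rng.normal(size=(n, 3))
    signal = 0.8 if group == 0 else -0.8
    y = (signal * x[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Group 1 is heavily underrepresented in the training data.
x0, y0 = make_patients(2000, group=0)
x1, y1 = make_patients(100, group=1)
model = LogisticRegression().fit(np.vstack([x0, x1]), np.concatenate([y0, y1]))

# Balanced test sets reveal the performance gap.
for group in (0, 1):
    x_test, y_test = make_patients(1000, group)
    print(f"group {group} accuracy: {accuracy_score(y_test, model.predict(x_test)):.2f}")
```

Nothing in the model itself mentions the groups; the disparity stems entirely from whose data dominates the training set. This is one reason why bias in MAI is an ethical problem and not merely a technical one.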
1.2.2 Changing the Game: Relationships
AI already impacts and transforms social relationships in many contexts. The fact that by now we have become used to interacting with chatbots instead of a human employee when trying to reach our mobile phone service provider may serve as an example here. Since optimization of practices, workflows, and structures is one of the main goals of applying AI, it is no surprise that it continuously replaces human labor. One result of this development is the increasing human-machine interaction, a highly complex type of interaction, since we are not dealing with simply a new kind of machine here. A distinguishing aspect of MAI (and AI in general) is that it is not a simple, passive tool like a stethoscope or an X-ray machine. Rather, MAI can interact and communicate with human agents, act autonomously, and make decisions, which transcends the passivity of other machines. When interacting with an AI application, we interact with a non-human, artificial agent, which fundamentally differs from using a simple tool. We are faced with an entity that may act with various degrees of autonomy and even make decisions on its own. Interacting with a non-human and even non-biological intelligent agent is a unique situation for which we are totally unprepared. It is not only the direct interaction in the shape of a human-AI dyad that we have to consider, but also the impact of artificial agents on human relationships. The most obvious example in medicine is the doctor-patient relationship. How will the presence of a non-human intelligent agent affect this relationship? How will this MAI-enhanced setting affect trust, a crucial component of this relationship? How will it transform the roles of health professionals and patients?
1.2.3 Transforming the Basic Structure: Environments
When we talk about practices, be they our daily activities or clinical practices, we also have to consider the circumstances and conditions for performing these actions. AI does not only affect the way we do things but also reshapes the basic environments in which we do them. First and foremost, this will impact physical environments, for example when we talk about smart homes where sensor and surveillance technology is omnipresent. Another astonishing example would be the future vision of smart cities where AI systems regulate all infrastructure-related aspects, from
traffic to power to water supply. In medicine and healthcare, this would imply the integration of more computerized systems, Internet of Things (IoT) applications like monitoring devices, and robots into the workflow. Hence, one crucial challenge is to integrate MAI applications into the material infrastructure of medical institutions such as hospitals, e.g. by providing the necessary technological environment. In addition, MAI will also affect what could be called the immaterial infrastructure of such institutions, such as work organization. Instead of exclusively focusing on implementing different technological solutions, one also has to consider how healthcare professionals and MAI may interact in a given environment in order to create value, either in terms of enhancing the patient experience or benefitting other stakeholders. The introduction of MAI systems as agents will mean a fundamental change in work organization. Healthcare professionals will increasingly use semi-automated or even automated systems. That means that they will delegate some tasks to these machines and also have to conduct new tasks in supervising and maintaining them. This could mean that some healthcare professions might be radically transformed. Some jobs in the healthcare sector might even be taken over by MAI-powered machines, be they computer systems in administration, computer vision systems in radiology, or smart robots in nursing care. If this occurs, it will have an impact on work relations and the self-image of healthcare professionals, especially those whose jobs rely heavily on MAI systems or are made obsolete by them. Furthermore, MAI systems and the practices related to them will also shape the immaterial aspects of home environments. Behaviors and identities shape and are shaped by the environments in which they are enacted. When technologies restructure the home, e.g. by introducing mHealth and IoT applications to the hitherto private realm, this also affects the behaviors and identities that constitute its immaterial networks. By entering and transforming our homes, be it in the form of stationary devices or smart wearables attached to our bodies, MAI will therefore affect our privacy. If we take both material and immaterial aspects together, we can say that MAI will fundamentally reshape the environments in which we act. Therefore, the impact MAI has and will have on these environments in medicine and healthcare is another crucial focus for any ethical analysis.
1.3 Structure of This Book
In the following, I build my ethical analysis of MAI on the aforementioned three aspects: practices, relationships, and environments. I analyze each of these areas of impact separately, although overlaps will necessarily occur, since we are dealing with interwoven phenomena here. For example, the same ethical issue, let us say autonomy, may be a topic in the context of practices, but also relevant regarding relationships and environments.
Before I can conduct the ethical analysis, I first have to build the foundations in terms of introducing the technologies in question as well as the ethical approach I aim to use. In the next chapter (Chap. 2), I try to give a definition of AI and MAI. I do not claim to come up with a concluding definition that is all-encompassing or even unique. My aim is twofold: On the one hand, I want to introduce a definition of MAI that fits the objectives of this book. The purpose of this definition is to delimit the scope of technologies and applications that are relevant in the field of medicine and healthcare. On the other hand, I want to raise awareness for the problematic nature of the term artificial intelligence, which has been rightly criticized for several reasons. Following that, I briefly outline the history of MAI and give an overview of the current state of the art (Chap. 3). I can only do this in broad strokes of course, without any claims to completeness. The aim of the historical account is to explain why MAI, a technology that has been around since the 1960s, has become a major topic only in recent years. The historical overview also serves another purpose, since I use it to introduce some of the crucial concepts in AI in general and MAI in particular and give a short introduction to the state of the art in MAI. I focus on the most important technologies and their current as well as future applications. Since I am not an expert in informatics, data science, computer science, or engineering, and do not expect the reader to be, I cannot go into the technical details. My goal here is to provide a grasp of the basics of MAI that enables a thorough understanding of the ethical implications these technologies have. In the following chapter (Chap. 4), I introduce some basic concepts from medical ethics and data ethics that underlie my ethical analysis. This has a twofold purpose. First, I aim to give the reader some kind of orientation on the ethics of medicine in general. Second, I want to introduce a new approach for dealing with the ethics of MAI in particular. This approach integrates concepts from data ethics and critical data studies into the medico-ethical analysis. In my view, this broadening of the spectrum of medical ethics is crucial for being able to deal with the unique implications that MAI brings. This will conclude the first part of this book. The second part contains the ethical analysis of MAI's transformational impact. I outline the ethical implications connected to the transformation of practices (Chap. 5), relationships (Chap. 6), and environments (Chap. 7). I discuss the specific ethical aspects that arise from the use of MAI in each of these three areas, although several of those aspects will overlap. Discussing the ethics of MAI with a focus on its areas of impact offers the opportunity of a more clearly structured approach when compared to looking at each possible application separately. This approach also offers a more concrete perspective on the ethical issues than focusing on abstract ethical principles in a top-down manner. In a final step (Chap. 8), I outline strategies for dealing with these ethical implications. In this concluding section, I provide seven lessons from the ethical analysis for a successful and beneficial design, implementation, and use of MAI as well as an outlook.
1.4 Scope, Objective, and Limitations
It is important to outline the scope, objectives, and limitations of the book in order to show the reader what they can expect, what the book covers, and what it cannot cover. This book is written for healthcare professionals, policy-makers, engineers, as well as researchers and students of medicine, the health sciences, nursing science, social sciences, philosophy, and ethics. It aims to give a coherent mapping of the field of ethical implications of MAI, explores the challenges of the transformative impact this technology will have on medicine, and discusses several strategies for dealing with these challenges. Due to the book's introductory character, several aspects, issues, and topics cannot be explored in detail. Especially philosophical, i.e. primarily ontological and epistemological, dimensions of the topic would need a more in-depth investigation. I am not able to sufficiently do this here for two reasons: first, the aforementioned introductory character of the book; second, its practice-oriented focus on applied ethics. This means that the book aims to analyze crucial issues in MAI and also discuss possible solutions. Of course, a thorough philosophical investigation and an applied ethics approach are not mutually exclusive. However, since the focus of this book is on applied ethics, the deeper philosophical implications can only be dealt with cursorily. There are many crucial texts on the philosophy of AI and big data as well as the philosophy of medicine that already provide investigations of ontological and epistemological aspects relevant to MAI. I will make use of them wherever necessary, but cannot provide new insights in this field. However, I hope that this book may provide a mapping of the ethical implications of MAI that allows and enables more in-depth philosophical investigations of specific aspects in future research. As I outline in Chap. 4, I will not follow any unifying theoretical framework or reduce my analysis to a single theoretical perspective. Since my objective is to give the first mapping of the field in book form, I want the perspective to be as broad as possible. However, I use certain concepts for a deeper understanding of the crucial ethical aspects in MAI. These concepts stem mostly from the field of critical theory and critical data studies, as explained in Chap. 4. Instead of sticking to one particular theory or school of thought, I will make use of what can be called epistemic lenses from these critical approaches. The reader will notice that the critical lenses give the ethical analysis a sharper focus. I chose these lenses because a framework for interpreting data-intensive technologies cannot focus solely on technical aspects or sorting out risks and benefits. Rather, it has to reflect on the social practices and power asymmetries that shape said technologies. Hence, concepts from critical theory, especially critical data studies, will serve as epistemic lenses for the analysis and can be seen as the common methodological thread of the book.
References

Agrawal, R., & Prabakaran, S. (2020). Big data in digital healthcare: Lessons learnt and recommendations for general practice. Heredity, 124, 525–534. https://doi.org/10.1038/s41437-020-0303-2
Alonso, S. G., de la Torre Díez, I., & Zapiraín, B. G. (2019). Predictive, personalized, preventive and participatory (4P) medicine applied to telemedicine and eHealth in the literature. Journal of Medical Systems, 43, 140. https://doi.org/10.1007/s10916-019-1279-4
Bean, W. B. (1954). Sir William Osler: Aphorisms from his bedside teachings and writings. BJPS, 5, 172–173.
Benet, D., & Pellicer-Valero, O. J. (2022). Artificial intelligence: The unstoppable revolution in ophthalmology. Survey of Ophthalmology, 67, 252–270. https://doi.org/10.1016/j.survophthal.2021.03.003
Christensen, C. (1997). The innovator's dilemma: When new technologies cause great firms to fail. Harvard Business School Press.
Cochrane, A. L. (1972). Effectiveness and efficiency: Random reflections on health services. Nuffield Provincial Hospitals Trust.
Coppola, F., Faggioni, L., Gabelloni, M., de Vietro, F., Mendola, V., Cattabriga, A., Cocozza, M. A., Vara, G., Piccinino, A., Lo Monaco, S., Pastore, L. V., Mottola, M., Malavasi, S., Bevilacqua, A., Neri, E., & Golfieri, R. (2021). Human, all too human? An all-around appraisal of the "artificial intelligence revolution" in medical imaging. Frontiers in Psychology, 12, 710982. https://doi.org/10.3389/fpsyg.2021.710982
Dedehayir, O., & Steinert, M. (2016). The hype cycle model: A review and future directions. Technological Forecasting and Social Change, 108, 28–41.
EBM Working Group. (1992). Evidence-based medicine: A new approach to teaching the practice of medicine. JAMA, 268, 2420–2425. https://doi.org/10.1001/jama.1992.03490170092032
El Khatib, M., Hamidi, S., Al Ameeri, I., Al Zaabi, H., & Al Marqab, R. (2022). Digital disruption and big data in healthcare: Opportunities and challenges. ClinicoEconomics and Outcomes Research, 14, 563–574. https://doi.org/10.2147/CEOR.S369553
Feinstein, A. R., & Horwitz, R. I. (1997). Problems in the 'evidence' of 'evidence-based medicine'. The American Journal of Medicine, 103, 529–535. https://doi.org/10.1016/s0002-9343(97)00244-1
Floridi, L., Cowls, J., Beltrametti, M., Gallo, U., Rossi, F., Schafer, B., Valcke, P., Vayena, E., et al. (2018). AI4People: An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5
Galmarini, C. M., & Lucius, M. (2020). Artificial intelligence: A disruptive tool for a smarter medicine. European Review for Medical and Pharmacological Sciences, 24, 7462–7474. https://doi.org/10.26355/eurrev_202007_21915
Gama, F., Tyskbo, D., Nygren, J., Barlow, J., Reed, J., & Svedberg, P. (2022). Implementation frameworks for artificial intelligence translation into health care practice: Scoping review. Journal of Medical Internet Research, 24, e32215. https://doi.org/10.2196/32215
Greenhill, A. T., & Edmunds, B. R. (2020). A primer of artificial intelligence in medicine. TIGE, 22, 85–89.
Kaul, V., Enslin, S., & Gross, S. A. (2020). History of artificial intelligence in medicine. Gastrointestinal Endoscopy, 92, 807–812. https://doi.org/10.1016/j.gie.2020.06.040
Mazurowski, M. A. (2019). Artificial intelligence may cause a significant disruption to the radiology workforce. JACR, 16, 1077–1082. https://doi.org/10.1016/j.jacr.2019.01.026
Mishra, S. (2022). Artificial intelligence: A review of progress and prospects in medicine and healthcare. JEEEMI, 4(1), 1–23. https://doi.org/10.35882/jeeemi.v4i1.1
Patel, V. L., Shortliffe, E. H., Stefanelli, M., Szolovits, P., Berthold, M. R., Bellazzi, R., & Abu-Hanna, A. (2009). The coming of age of artificial intelligence in medicine. Artificial Intelligence in Medicine, 46, 5–17. https://doi.org/10.1016/j.artmed.2008.07.017
Phillips, P. W. B. (2007). Governing transformative technological innovation: Who's in charge? Edward Elgar Publishing.
Rysavy, M. (2013). Evidence-based medicine: A science of uncertainty and an art of probability. Virtual Mentor, 15, 4–8. https://doi.org/10.1001/virtualmentor.2013.15.1.fred1-1301
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. BMJ, 312, 71–72. https://doi.org/10.1136/bmj.312.7023.71
Swan, M. (2012). Health 2050: The realization of personalized medicine through crowdsourcing, the quantified self, and the participatory biocitizen. Journal of Personalized Medicine, 2(3), 93–118. https://doi.org/10.3390/jpm2030093
Tonelli, M. R. (1999). In defense of expert opinion. Academic Medicine, 74, 1187–1192. https://doi.org/10.1097/00001888-199911000-00010
Tonelli, M. R. (2017). Case-based (casuist) decision-making. In R. Bluhm (Ed.), Knowing and acting in medicine. Rowman & Littlefield International Inc.
Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books, Inc.
Chapter 2
Artificial Intelligence: In Search of a Definition
Abstract  In this chapter, I revisit some of the most influential definitions of AI from the last decades. No single definition of AI has been agreed upon so far, and I do not aim to come up with a concluding definition. My aim is to better understand what AI entails, what it is supposed to be and do, by examining existing definitions. This enables a better understanding of the ethical aspects linked to AI in general and MAI in particular. I also briefly discuss why the term "artificial intelligence" is highly problematic.

Keywords  Algorithms · Artificial intelligence · Big data · History of technology · Human intelligence · Machine learning · Neural networks · Turing test

"If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have." (McCarthy et al., 2006, p. 13)

The above quote, despite or rather because of its simplicity, may serve as a first step towards defining AI. It is taken from a two-page project proposal for a conference at Dartmouth College to be held in 1956 in which the term artificial intelligence was coined (McCarthy et al., 2006). It is quite unusual that a simple project proposal is widely cited decades later, but there are important reasons for this. McCarthy and colleagues outline nothing less than a new field of study that would eventually lead to the AI applications that we use today. AI has come a long way, from a focus on symbolic models and reasoning, early neural networks, and expert systems between the 1950s and 1970s, to neural networks in the 1980s, intelligent agents in the 1990s, big data and deep neural networks in the 2000s, and our everyday use of smart technologies today (Emmert-Streib et al., 2020; Russell & Norvig, 2010). It is not my intention to give a full account of the history of AI, since it is not relevant to the topic of this book (for a concise history of AI see Russell & Norvig, 2010, pp. 16–28). What is relevant, however, is some kind of definition of AI. I say some kind of definition, since no universally agreed-upon definition exists, which is the case with most crucial terms or concepts in any given field. Also, AI presents
itself in a diverse manner, from a narrow technical field to almost an ideology or world view, and also has a strong business side, which is why AI is often praised as the ultimate problem solver (Collins, 2021). This makes a unified definition even more difficult. What is important to note in regard to the emergence and development of AI is that it has been considered a science as well as an engineering discipline from the start. We are used to referring to software, computer systems, or robots as artificial intelligence, which would initially have been considered the products of AI, not AI itself. This confusion makes forming a definition even harder.
2.1 Defining AI
Several authors have collected various definitions of AI from almost seven decades (Emmert-Streib et al., 2020; Legg & Hutter, 2007; Russell & Norvig, 2010; Wang, 2008). I do not aim to come up with the ultimate definition here. What I want to achieve is to derive the essence of what AI entails, what it is supposed to be and do, from existing definitions and concepts. This is crucial for a deeper understanding of the specific ethical aspects that are connected to AI (and also MAI) and especially why AI is an object of ethical reflection in the first place. Hence, I do not choose one particular definition or formulate a new one. No single definition of AI has been agreed upon so far and I do not think that this will change in the near future. Discussing the various existing definitions serves as a characterization of AI technologies, their different functions, and possible fields of application. These are important insights that the reader needs to be familiar with in order to grasp the ethical implications of MAI. A conclusive definition is less important in this regard. In the following, I examine several definitions of AI that have had some impact on research as well as the surrounding discourse, without claims to completeness. Along the way, I explain crucial concepts, methods, and approaches in AI. I do not go into details when it comes to technical explanations, i.e. the detailed concepts from informatics, data science, and engineering. An analysis of ethical aspects requires a basic understanding of how AI works, but the intricacies of the mathematical, statistical, and engineering concepts needed for a deeper understanding of the technology are beyond the scope of this book.
2.1.1 AI as a Simulation of Human Intelligence
In their proposal, McCarthy and colleagues start with the basic assumption that we can describe learning and every other feature of human intelligence in such a way that it can be simulated by a machine. This is a hypothesis and at the same time the goal of the enterprise they refer to as "artificial intelligence". The objective is to create machines that are able to use language, form abstractions and concepts, solve
problems that have hitherto only been the domain of human intelligence, and also improve themselves (McCarthy et al., 2006). Later approaches echo McCarthy and colleagues, defining AI as an effort to construct machines that behave in a way that we would consider intelligent if observed in a human (Feigenbaum, 1963). Winston and Brown (1984) define making machines smarter, understanding what intelligence is, and making machines more useful as crucial goals of this field of study. Winston (1992) defines AI as a research field where the object of study is computations that enable perception, reasoning, and action. AI is therefore part engineering, part science: The engineering part is solving real-world problems, whereas the science part is conceptualizing knowledge, its representation and use. Crucial tasks for AI are solving analysis problems, helping in designing new devices, and learning from examples. Ginsberg (1993) defines the goal of AI as constructing an artefact that is able to pass the Turing test. In this thought experiment introduced by Alan Turing, true artificial intelligence would be achieved if a machine could communicate with a human being without the latter being aware that they are interacting with a non-human (Turing, 1950). Whereas some authors see the Turing test as a crucial instrument (Legg & Hutter, 2007; Russell & Norvig, 2010), others object, claiming that an artificial agent based on a sophisticated technology could also just pretend to be intelligent (Collins, 2021; Emmert-Streib et al., 2020). Another characteristic of AI is that the computational rationality involved builds upon inferential processes for perceiving, predicting, learning, and reasoning under uncertainty (Gershman et al., 2015). AI is about selecting the best action by making predictions about different outcomes. In this understanding, AI also entails maximizing the expected utility of an action by an agent. An artificial agent should not only be able to solve one isolated problem, but to develop an overall strategy for problem-solving throughout its history (Wang, 2008). Yet, a problem in defining artificial intelligence from the beginning is its relation to human intelligence. The lack of a universally accepted definition of intelligence in humans makes it difficult to define intelligence in computer systems (Emmert-Streib et al., 2020). As we have seen, Winston and Brown (1984) even claim that artificial intelligence as a field of study could be a perspective to better understand what human intelligence is. Nevertheless, the concept of intelligence one starts with has a profound impact on AI research. Different schools of AI follow different definitions or concepts of human intelligence. As Legg and Hutter (2007) put it, not only is there disagreement on solutions for designing intelligent machines between these schools, but also on what the basic problem is. This raises the question of what skills an intelligent machine is supposed to master. From the original Dartmouth proposal on, commentators have understood problem-solving and learning as crucial features of intelligence in general, and the objective has been to create machines that are able to perform these actions. The idea is that in order to achieve artificial intelligence, machines should be designed to simulate human intelligence (Legg & Hutter, 2007). Following this approach, AI is an attempt to reproduce mental abilities of humans in computer systems or to duplicate human intelligence.
Both humans and intelligent machines perceive their environments and
perform actions. Thus, machines would have to be able to build adequate models of the world as a foundation for their actions. These machines must be capable of turning sensory inputs into efficient representations of their environment (Mnih et al., 2015). In addition to that, intelligent machines should be able to map input into output, i.e. perceptions into actions (Wang, 2008).
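What mapping perceptions into actions amounts to can be sketched in a few lines of code. The following is my own minimal illustration of the agent scheme described above, not an implementation taken from any of the cited authors: the agent turns raw input into an internal representation, selects the action with the highest predicted utility, and stores the outcome as experience.

```python
# Minimal agent scheme: perceptions are mapped into actions via an internal
# model, and outcomes are stored as experience. Purely illustrative.
from typing import Any, Callable, List, Tuple

class SimpleAgent:
    def __init__(self, actions: List[str], utility: Callable[[Any, str], float]):
        self.actions = actions          # the agent's repertoire of actions
        self.utility = utility          # predicted utility of an action in a state
        self.experience: List[Tuple[Any, str]] = []

    def perceive(self, raw_input: Any) -> Any:
        """Turn sensory input into an internal representation (here: identity)."""
        return raw_input

    def act(self, raw_input: Any) -> str:
        """Select the action with the highest expected utility for the percept."""
        state = self.perceive(raw_input)
        action = max(self.actions, key=lambda a: self.utility(state, a))
        self.experience.append((state, action))  # learning would update the model here
        return action

# Example: a trivial 'utility' that prefers to alert when a reading is high.
agent = SimpleAgent(
    actions=["alert", "wait"],
    utility=lambda reading, a: reading if a == "alert" else 5.0,
)
print(agent.act(8.2))  # -> "alert": a reading of 8.2 outweighs waiting
print(agent.act(3.1))  # -> "wait"
```

A real system would, of course, replace the identity function in perceive with a learned representation and update its utility model from the stored experience; the point of the sketch is only the input-output structure itself.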
2.1.2 The Behavioral Turn: Robotic AI vs. Symbolic AI
The robot-centered approach to AI by Brooks changes this paradigm (Brooks, 1991), stating that intelligence is not primarily based on representations, i.e. symbolic constructs or models of the environment, but emerges from interactions with the environment. This behavioral turn in AI has been decisive for later developments, since it focuses on intelligent machines as agents, which implies the ability to perform tasks in a given environment. Later approaches define intelligent machines primarily as agents that are capable of adapting their behavior in order to perform tasks in different environments and meet goals when faced with problems (Fogel, 2006; Legg & Hutter, 2007). Hence, AI is the discipline that creates computer systems for conducting tasks that are typically associated with cognition, learning, and adapting to new information and changing conditions (Solomonides et al., 2022). As a consequence, learning becomes a crucial feature of intelligent machines. Winston defines learning in terms of "reasoning about new experiences in the light of commonsense knowledge" (Winston, 1992, p. 10). Generalizing past experience is key here to be able to adapt to new situations (Mnih et al., 2015). It soon becomes clear that the cognitive abilities of computer systems are limited to specific domains. Ginsberg (1993) speaks of brittle AI, which refers to computer systems that are capable of performing very specific tasks under certain conditions. He names computer vision as an example. Computer systems may be very good at recognizing objects in images, but when the picture is turned sideways, they often fail completely. However, in these specific tasks, intelligent agents may go beyond human faculties in being able to integrate and make use of extensive amounts of information, combine data from different sources, and make decisions based on them (Solomonides et al., 2022). We can therefore define AI as a computational agent that builds models representing its environment, makes predictions about the outcomes of different actions when faced with a problem, and learns from experience. Such an agent is able to generalize past experience and thus adapt to new situations. The aim of AI research and development is to create an intelligent agent that possesses a broad spectrum of skills for solving a wide range of problems. Nowadays, the ability to fulfill specific tasks, e.g. image recognition, is referred to as weak or narrow intelligence (artificial narrow intelligence – ANI). A human-like intelligence that masters different skills, from driving a car to recognizing faces in pictures and interpreting a text, is called artificial general intelligence (AGI).
Whereas ANI has become part of our everyday lives (e.g. speech assistants like Siri and Alexa), AGI is still a futuristic concept. Some authors also speak of a third type of AI that not only simulates but surpasses human cognitive abilities, which they call artificial superintelligence (ASI) (Bostrom, 2014; Hibbard, 2002). Some even expect that AI-systems may undergo an evolutionary process with ASI as an outcome. Whether this expectation is correct or what the ethical implications might be are surely interesting questions, but they go beyond the scope of my investigation. Hence, I will not discuss AGI or ASI technologies and their possible impact. What we will deal with in this book is ANI in the field of medicine and healthcare.
2.2 Instead of a Definition: Crucial Features
Wang (2008) introduced a characterization of AI that is widely accepted by experts (Monett & Lewis, 2018) and also fits the purposes of this book. Wang defines AI by structure, behavior, capability, function, and principle. Regarding structure, AI imitates the human brain, resulting in a brain-like structure of neural networks that performs tasks that are typical for human intelligence. The aspect of behavior refers to the potential ability of an artificial agent to pass the Turing test. Capability means that an artificial agent is capable of solving complex problems. Function refers to the ability of an artificial agent to map input data into output data and thus base actions on perceptions. Principle implies that an artificial agent is capable of problem-solving and rationality, which means developing an overall strategy to solve problems based on past experience and learning. As we will see in the following chapters, all of these features have some relevance for MAI. Therefore, instead of giving a conclusive definition, I will outline several features that are relevant for MAI. MAI mostly refers to artificial agents in the field of medicine and healthcare, i.e. virtual or embodied computer systems that are capable of performing tasks of medical reasoning, problem-solving, decision-making, or mechanical tasks related to clinical or nursing practices. MAI systems use big data approaches, which means that they learn from large amounts of data and build models for analytic or predictive purposes based on them. Due to the high precision and efficiency of many MAI applications, they are able to imitate or sometimes even outperform human agents. MAI may therefore be understood as a subfield of ANI within the healthcare domain.
2.3 “Artificial Intelligence”: A Critical Perspective
Although I will use the term MAI as outlined above, I want to point out several issues with the concept of artificial intelligence in general.
First, the term artificial intelligence is very prominent in the public debate, but from the perspective of informatics, computer science, and engineering, it is too unspecific. In this context, AI is an umbrella term that denotes a variety of techniques to organize, process, and analyze data. As we will see in the following chapter, these techniques have evolved over the last seven decades, and with them our understanding of what artificial intelligence is, can do, and should be used for. When talking about AI today, we mostly refer to applications of machine learning, which I outline in the next chapter. But that does not necessarily mean that AI has been or still is synonymous with machine learning in all regards. I am aware of this problem and the confusion that may arise from it. Therefore, I try to use the term MAI carefully and only when I speak about general aspects that all of the technologies involved share. Since not all MAI-technologies have the same characteristics or the same ethically relevant aspects, I will specify the particular application and its implications wherever necessary. For example, computer vision in radiology, CDSS, and patient monitoring through smart wearables are all applications of MAI, although they are based on different technologies and machine learning techniques, which each have their own specific ethical aspects. It is crucial to distinguish between these different applications in order to be able to understand the specific ethical implications attached to them. Second, the term artificial intelligence has been criticized for another reason besides its superficiality and fuzziness. Both parts of the term are incorrect and misleading according to this critique. The set of technologies usually referred to by this term is not artificial, since these technologies rely on material resources and human labor. A huge number of low-paid workers, mostly from the Global South, are involved in building, maintaining, and testing AI applications (Crawford, 2021). These so-called click workers train algorithms by labelling data or filtering out unwanted content. This human-fueled automation (Irani, 2016) contradicts the common perception of an autonomous system, which may rest on a misunderstanding of the concept of self-learning. The other "myth" involved here concerns the definition of intelligence. Following this critique, we cannot call any agent intelligent without taking embodiment, relations, and ecology into account (Crawford, 2021). Claiming that machines can be intelligent in the same way human beings are means detaching intelligence from social, cultural, political, and historical aspects. Reducing intelligence to a disembodied form of rationality is not the only problem in this regard. In fact, AI cannot be considered fully rational, since it is dependent on social and political frameworks. Crawford (2021, p.8) calls AI a "registry of power", since it is designed to benefit those who have economic and/or political superiority. In this view, AI systems are not neutral and objective computing machines, but mirror as well as produce a certain interpretation of the world and with it a specific type of power relations. Hence, in order to fully understand AI and its impact, one has to consider the economic and political forces that shape the technology. Since the focus of this book is ethical aspects of AI in medicine and healthcare, I cannot comprehensively analyze this crucial critique of AI in general. However, as
I will outline in the chapter on the ethical foundations of my analysis (Chap. 4), I follow the premise of regarding MAI against its socio-political and socio-economic backdrop. By using epistemic lenses from critical theory, especially critical data studies, I aim to contextualize the specific ethical aspects of MAI within the bigger picture, i.e. the social practices and power asymmetries that shape the development, implementation, and use of MAI.
References

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Brooks, R. A. (1991). New approaches to robotics. Science, 253, 1227–1232. https://doi.org/10.1126/science.253.5025.1227
Collins, H. (2021). The science of artificial intelligence and its critics. Interdisciplinary Science Reviews, 46, 53–70. https://doi.org/10.1080/03080188.2020.1840821
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Emmert-Streib, F., Yli-Harja, O., & Dehmer, M. (2020). Artificial intelligence: A clarification of misconceptions, myths and desired status. Frontiers in Artificial Intelligence, 3, 524339. https://doi.org/10.3389/frai.2020.524339
Feigenbaum, E. A. (1963). Artificial intelligence research. IEEE Transactions on Information Theory, 9, 248–253.
Fogel, D. B. (2006). Defining artificial intelligence. In D. B. Fogel (Ed.), Evolutionary computation: Toward a new philosophy of machine intelligence (pp. 1–32). Wiley-IEEE Press. https://doi.org/10.1002/0471749214.ch1
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349, 273–278.
Ginsberg, M. (1993). Essentials of artificial intelligence. Morgan Kaufmann.
Hibbard, B. (2002). Super-intelligent machines. Kluwer Academic/Plenum Publishers.
Irani, L. (2016). The hidden faces of automation. XRDS, 23, 34–37. https://doi.org/10.1145/3014390
Legg, S., & Hutter, M. (2007). Universal intelligence: A definition of machine intelligence. Minds and Machines, 17, 391–444. https://doi.org/10.1007/s11023-007-9079-x
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27, 12. https://doi.org/10.1609/aimag.v27i4.1904
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518, 529–533. https://doi.org/10.1038/nature14236
Monett, D., & Lewis, C. (2018). Getting clarity by defining artificial intelligence: A survey. In V. C. Müller (Ed.), Philosophy and theory of artificial intelligence (pp. 212–224). Springer.
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach. Prentice-Hall.
Solomonides, A. E., Koski, E., Atabaki, S. M., Weinberg, S., McGreevey, J. D., Kannry, J. L., Petersen, C., & Lehmann, C. U. (2022). Defining AMIA's artificial intelligence principles. Journal of the American Medical Informatics Association, 29, 585–591. https://doi.org/10.1093/jamia/ocac006
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. https://doi.org/10.1093/mind/LIX.236.433
Wang, P. (2008). What do you mean by "AI"? In Proceedings of the first AGI conference (pp. 362–373). IOS Press.
Winston, P. H. (1992). Artificial intelligence. Addison-Wesley.
Winston, P. H., & Brown, R. H. (1984). Artificial intelligence: An MIT perspective. MIT Press.
Chapter 3
MAI: A Very Short History and the State of the Art
Abstract This chapter gives a short overview of the history of MAI and describes its crucial contemporary applications. The aim is not to give a complete list of technologies, but to highlight the main areas of application of MAI and to focus on its transformative power. In this chapter, I explain some of the fundamental concepts in MAI and discuss some major opportunities as well as challenges in clinical practice. I aim to provide a basic understanding of the technological aspects as a prerequisite for the ethical analysis in part II.

Keywords Artificial intelligence · Big data · Computer vision · Deep learning · Internet of things (IoT) · Machine learning · Mobile health (mHealth) · Neural networks

In this chapter, I aim to give a short overview of the history of MAI as well as its numerous current fields of application. The historical perspective helps to understand why AI, although it has been used in medicine in one form or another from the 1950s onwards, has become a relevant topic only in recent years. The historical development of MAI may thus illuminate the specific transformation process that is associated with this technology, setting it apart from other technological developments in medicine. In looking at the state of the art, I cannot give a complete list of technologies and areas of application. Not only would that be tedious for the reader, but it is also hardly possible to adequately include all relevant applications. What I will try instead is to introduce the main currents in MAI and focus on their transformative aspect. That means that I will give a kaleidoscopic view of what technology is out there, how it may transform or already transforms medicine and healthcare, and why it is relevant from an ethical point of view. It will be necessary to go into some detail concerning the technical aspects, but I am neither qualified nor willing to give an in-depth explanation of the intricacies involved in terms of informatics, data science, and computer science. What I aim to do is to give a brief account of those technical aspects, i.e. basic working principles as well as functions, goals, and basic terminology, that are necessary to understand the fundamental difference between MAI and conventional approaches to turning medical data into knowledge. Understanding these basics is
fundamental for being able to grasp the impact and transformative power of MAI, which in turn determines its ethical relevance.
3.1 The Rise of MAI
Early MAI in the 1950s and 1960s was mostly based on inference, the solving of logical problems, or decision-making by applying formal logic, abilities that were hitherto considered exclusively human (Kaul et al., 2020; Quinn et al., 2022). Early MAI followed a rule-based approach of "if-then" rules. The result was knowledge-based systems providing expert knowledge analogous to a cookbook (Quinn et al., 2022). Rule-based systems were used as support in clinical reasoning, e.g. interpreting electrocardiograms (ECGs) or diagnoses, and choosing treatment options (Yu et al., 2018). One example is ELIZA, a program developed in 1964 by Joseph Weizenbaum at MIT (Weizenbaum, 1966). ELIZA was based on an early form of natural language processing (NLP). It could interpret simple commands and react to them, thus simulating a conversation. ELIZA would look for certain key words in the input that triggered a fitting response, following a simple script that told the program how to answer or what questions to ask. The program was used to simulate a psychotherapeutic encounter and is considered an early version of what today is called a chatbot. The 1970s saw a substantial shift in the field of AI towards real-world applications (Nilsson, 2009). Earlier approaches in AI focused on what has been called toy problems, meaning that the systems worked mostly under laboratory conditions in highly controlled settings. A shift in funding policies led to a more practice-oriented approach. Hence, efforts towards developing real-world applications increased during this time. This also affected MAI, where further attempts to emulate clinical reasoning were made, which resulted in early expert systems (Kavasidis et al., 2023). MYCIN was a system that contained information on bacterial pathogens and could provide recommendations for antibiotics (Quinn et al., 2022). CASNET (causal-associational network), developed in 1976, contained information on disease and provided advice for clinicians (Kaul et al., 2020). Another important development was the introduction of clinical informatics databases and medical record systems like the Medical Literature Analysis and Retrieval System (MEDLARS), developed in the 1960s, and its later online successor PubMed. These can be considered the first steps towards digitizing data in medicine on a larger scale (Kaul et al., 2020). Although early MAI was able to perform several simple tasks, it came with crucial disadvantages. The systems were costly, and rules had to be programmed manually, defined explicitly, and updated regularly (Yu et al., 2018). Early MAI was not capable of probabilistic reasoning, integrating higher order knowledge from different sources, or solving more complex problems (Quinn et al., 2022). Curation by medical experts was needed, as well as the formulation of robust decision rules (Yu et al., 2018).
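To illustrate the keyword-script mechanism described above, consider the following minimal, purely illustrative Python sketch. The keywords and canned responses are invented here and are, of course, far simpler than Weizenbaum's actual script; the point is only how shallow pattern matching can simulate a conversation.

```python
# Minimal, illustrative ELIZA-style keyword matching.
# Keywords and canned responses are invented for illustration.

RULES = {
    "mother": "Tell me more about your family.",
    "sad": "I am sorry to hear you are sad. Why do you feel this way?",
    "always": "Can you think of a specific example?",
}
DEFAULT_RESPONSE = "Please go on."

def respond(user_input: str) -> str:
    """Return the canned response for the first keyword found in the input."""
    text = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return DEFAULT_RESPONSE

print(respond("I always argue with people"))  # -> Can you think of a specific example?
```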
These problems were not specific to MAI, but affected AI in general at that time. As a consequence, the first so-called "AI winter" set in at the end of the 1970s (Kaul et al., 2020). Research funding and efforts were dwindling, and AI seemed to have reached an impasse. An increase in research and development in the 1980s was short-lived, although some progress was made with computer vision and natural language processing, accompanied by improved network architectures as well as processing and storage technologies (Nilsson, 2009). During this time, some efforts were made to improve MAI. Examples are INTERNIST-1, an algorithm that ranked diagnoses (Quinn et al., 2022), and DXplain, introduced in 1986, a decision support system that generated differential diagnoses when fed with information on symptoms (Kaul et al., 2020). However, these systems still were not able to solve complex problems, maintenance was costly, and the transition to industry therefore proved difficult (Quinn et al., 2022). Winter was coming once again. After the second AI winter in the late 1980s, things took a turn (Kaul et al., 2020). Up to then, AI systems had used either forward reasoning, following manually programmed if-then rules from data to conclusions, or backward reasoning from conclusions to data. The leading paradigm in this era was symbolic AI, which constituted what is nowadays called "good old-fashioned artificial intelligence" (GOFAI) or classical AI (Garnelo & Shanahan, 2019). This approach is based on symbols as representations for objects. It suggests a propositional relation between the symbol and the object it refers to. When given the corresponding inference rules, the system is able to infer from one relation between symbol-as-representation and object to another and so on. The downside of this approach is that the representational elements have to be programmed, meaning that a human has to define each of them. This makes symbolic AI elaborate and time-consuming. Furthermore, the symbolic approach is unable to solve more complex problems, especially when dealing with uncertainty. These were some of the reasons why early expert systems in the 1980s, despite high hopes, never succeeded in becoming an essential tool for clinical practice (Kautz, 2022). The elaborate requirements of training medical professionals to use these systems were another crucial reason. In the 1990s and 2000s, a new paradigm changed the game in AI, which also had a tremendous impact on its usability in medicine. The machine learning approach was able to surpass the symbolic approach of GOFAI and with it knowledge-based and rule-based systems (Quinn et al., 2022). Machine learning is based on non-linear functions learned directly from data, meaning that no prior domain knowledge is necessary. This opened the door to a variety of applications, especially since machine learning is much more efficient and needs much less maintenance by humans. Although machine learning became the new paradigm, symbolic approaches have not disappeared and are still used, e.g. in natural language processing (NLP). Today, some commentators state that both paradigms can be reconciled, given their advantages in different contexts and regarding different tasks (Garnelo & Shanahan, 2019).
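To make the forward reasoning of such rule-based systems concrete, the following minimal Python sketch fires hand-written if-then rules until no new conclusions emerge. The facts and rules are invented for illustration and are, of course, a caricature of systems like MYCIN; they are not medical advice.

```python
# Minimal sketch of forward chaining over hand-written if-then rules,
# illustrating the symbolic reasoning of early expert systems.
# All facts and rules are invented for illustration only.

rules = [
    ({"fever", "productive_cough"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection", "penicillin_allergy"}, "avoid_penicillin"),
]

def forward_chain(facts):
    """Repeatedly fire rules whose conditions hold until no new facts emerge."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "productive_cough", "penicillin_allergy"}))
# derives: suspect_bacterial_infection, then avoid_penicillin
```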
3.2 Machine Learning and Deep Learning
Obermeyer and Emanuel (2016) introduced a convincing distinction between conventional expert systems and machine learning applications in medicine. They compare an expert system to a medical student who learns to apply medical knowledge to an individual patient by following general principles. In contrast, machine learning applications resemble a doctor in residency who learns general rules from individual patients. In other words, whereas expert systems need to be programmed with rules and principles, machine learning implies deriving these rules and principles from the data itself (Obermeyer & Emanuel, 2016). Regarding its main purpose or function, one can understand machine learning as a set of mathematical and statistical techniques for data analytics with the aim of building predictive models (Camacho et al., 2018). One of its main advantages is that it can provide a higher-level analysis of complex data sets. In biological research and biomedicine, one mainly deals with large amounts of often heterogeneous data, especially in molecular medicine and genomics. Machine learning methods analyze this data by finding patterns and associations. First, the system processes existing input data. Then, software developers train the algorithm using the appropriate learning method. Finally, the resulting model can be used to make predictions on new data. Important in this context are the labels and features that the input data includes (Camacho et al., 2018). Features are the measurements that the data contains; labels are the entities that form the output of a model. Machine learning thus means using an appropriate set of model parameters in order to translate features from input data into predictions of labels in the output data. Whether an algorithm is able to perform well depends on the quality of the input data. This data needs to be properly formatted, cleaned, and normalized in order to avoid overfitting and underfitting. Overfitting occurs when the algorithm learns from data that is too complex or noisy, meaning that it contains useless information; the model then fits these idiosyncrasies of the training data and is therefore unable to make correct predictions on new data. Take the example of training an algorithm in computer vision to detect melanoma in pictures. If one uses pictures that do not focus on a skin area, but also contain other features like clothing or background objects, the algorithm might be distracted by these irrelevant features. Underfitting occurs when the model fails to capture the relevant patterns in the first place, for instance because the training data is too specific or not complex enough. If we only use pictures of people with white skin to train the algorithm, it will not be able to detect melanoma in pictures with darker skin tones. In both cases, the consequence might be a false or exaggerated assessment of model accuracy and of the real-world performance of models (Obermeyer & Emanuel, 2016). Machine learning entails three main types: supervised, unsupervised, and reinforcement learning (Yu et al., 2018). In supervised learning, algorithms are fed with existing data in order to make predictions about future events (Manickam et al., 2022). Supervised learning thus uses training data, e.g. medical images, as input. The crucial task here is to define output labels, e.g. the presence of certain anomalies in
medical images. Learning the correct output for a given input means finding patterns in input-output pairs. The machine learning application is shown images with the relevant feature, for example a melanoma. It produces an output with a certain score: The better the system analyzes the image, i.e. the higher the precision in detecting a melanoma, the higher the score. In order for the system to learn, a function is computed for the error margin between the desired score and the actual output score (LeCun et al., 2015). The system then adjusts its parameters in order to reduce the error. The next step is to generalize associations between input-output scores in training cases in order to apply them to new cases and build models and predictions. Another task is to evaluate the generalizability and accuracy of predictions, i.e. the performance of the model, by comparing predicted outcomes with actual outcomes (Yu et al., 2018). Two types of algorithms are relevant here, regression and classification algorithms (Manickam et al., 2022). Regression algorithms are used when there is a continuous relation between input and output variables. Classification algorithms categorize output variables into groups (yes or no, true or false), depending on their input variables. Unsupervised learning aims at finding patterns in unlabeled data by building sub-clusters, detecting outliers, and building low-dimensional representations of the data (Yu et al., 2018). Whereas supervised learning requires labeled data, unsupervised learning can process raw data. Unsupervised learning therefore does not require human intervention and is directed at finding hidden correlations in large data sets. However, supervised learning tends to be more accurate, due to human supervision in validating the results. Reinforcement learning uses positive or negative feedback to form a strategy for problem-solving (Hamet & Tremblay, 2017). This learning by trial and error may also include expert feedback and is especially useful in those medical contexts where a demonstrated task is to be learnt, for example suturing wounds in robotic surgery (Esteva et al., 2019). Depending on the nature of the data, algorithms mainly perform two tasks, clustering and association (Manickam et al., 2022). Clustering means that data are grouped together on the basis of their similarities. This can, for example, be used for epidemiological purposes, e.g. for grouping individuals with similar symptoms under one disease entity. Association algorithms detect patterns and relations regarding variables in different data sets and derive rules from them. Although a machine learning approach offers several advantages, there are also challenges. There is a need for high-quality data free of noise. Furthermore, data labeling is crucial for the many machine learning methods that rely on supervised learning. Extracting features from labeled data means that the machine learning algorithm needs to be told what to look for. That implies that data labels have to be handcrafted by humans, which in turn requires domain-specific knowledge (Mishra, 2022). In classical machine learning, algorithms are unable to extract patterns directly from raw data. Humans have to extract features and engineer representations from which the algorithm can detect patterns (LeCun et al., 2015). Not only is this a labor-intensive and time-consuming process, it also makes human error a possibility. Erroneous feature definition or data labeling might thus negatively impact the functionality and accuracy of machine learning algorithms (Mishra, 2022).
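The supervised workflow just described, defining features and labels, fitting a model on training data, and checking how well it generalizes to held-out data, can be made concrete with a minimal sketch, assuming scikit-learn. The "features" and labels below are random stand-ins for real, expert-labeled medical data; only the workflow is the point.

```python
# Minimal supervised-learning sketch, assuming scikit-learn.
# Synthetic features and labels stand in for expert-labeled medical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 samples, 5 extracted features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy label: 1 = "anomaly present"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# A large gap between these scores would suggest overfitting;
# low scores on both would suggest underfitting.
print("training accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```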
A sub-field of machine learning is deep learning, which differs from machine learning in one crucial respect: Whereas in machine learning, algorithms rely on pre-defined features that tell them what to look for, deep learning algorithms extract features and build representations by themselves (Mishra, 2022). Deep learning is a type of representation learning that discovers representations directly in the raw data (Esteva et al., 2019; LeCun et al., 2015). It uses neural networks that imitate the decentralized functioning of the human brain (Manickam et al., 2022). It applies the aforementioned types of machine learning for solving complex problems by analyzing multi-layered data, which requires considerable computing power. In deep learning, neural networks build different levels of representation from different layers of data, varying in complexity, beginning with raw data as input (LeCun et al., 2015). The different layers usually consist of simple nonlinear operations, and the complexity increases with each subsequent layer (Esteva et al., 2019). Take the aforementioned example of a picture showing a melanoma. The system would first analyze the array of pixel values to determine the presence or absence of edges at the first layer. In the second layer, the system would try to detect more complex shapes, e.g. combinations of edges, and so on for further layers, increasing the complexity of shapes with each layer. This learning process does not require human engineering, feature extraction, or data labelling, which makes deep learning not only more accurate and precise, but also more efficient than conventional machine learning techniques (LeCun et al., 2015). The chief advantage in the medical context is the ability of deep learning systems to process large amounts of disparate data from different sources and continuously improve accuracy and precision (Esteva et al., 2019). One of the disadvantages of deep learning, besides the considerable computing power required, is the need for very large high-quality data sets (Camacho et al., 2018). Since such data sets are often hard to come by, deep learning might not always be an available solution despite its superiority with regard to certain tasks. Machine learning and deep learning were important steps in the development of MAI (Kaul et al., 2020). Due to their abilities to integrate large amounts of data from different sources, detect patterns within the data, and build predictive models, these approaches allow a stronger focus on predictive and preventive perspectives as well as on personalized medicine (Hamet & Tremblay, 2017). The crucial advantage of machine learning, especially in the medical context, is the ability to identify patterns in data without the need for detailed sets of rules for each task (Yu et al., 2018).
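The layer-by-layer build-up of representations can be illustrated with a minimal sketch of a small convolutional network, assuming PyTorch. The model below is untrained and fed a random image; it only shows how simple filters are stacked into increasingly complex feature detectors, in the spirit of the melanoma example above.

```python
# Minimal sketch of stacked representation layers, assuming PyTorch.
# Untrained and fed a random image, it only illustrates the architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # layer 1: edge-like features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # layer 2: combinations of edges
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                  # output: melanoma yes/no (toy)
)

image = torch.randn(1, 3, 64, 64)  # one 64x64 RGB image (random stand-in)
logits = model(image)
print(logits.shape)  # torch.Size([1, 2])
```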
A side note is necessary at this point. When talking about machine learning in medicine and healthcare, we must be aware that we are dealing with different methods and different kinds of algorithms. For example, classification algorithms in supervised learning can be categorized into support vector machine (SVM) algorithms, discriminant analysis, naïve Bayes, and so on. Each of these methods has its specific uses and downsides (for a good overview in the medical context see Manickam et al., 2022, pp. 5–7). For the purpose of this book, these classifications and sub-divisions are not important. However, it is important to note that the term
"machine learning" should be used with caution. Not all arguments concerning risks and benefits of machine learning fit with each machine learning method. For an ethical analysis, it is therefore important to be careful when generalizing risks and benefits. One should at least consider whether one deals with supervised or unsupervised learning when assessing MAI-systems. The possible applications of machine learning in medicine and healthcare are manifold. In pathophysiology, machine learning techniques can be applied to gain new insights into disease biology (Camacho et al., 2018). Machine learning may be used in combination with a network biology approach that integrates the complex interactions between different factors that shape the disease phenotype. Since such an approach requires the integration of large amounts of disparate data, machine learning techniques could provide the appropriate tools. The main target here is so-called "omics data", i.e. data from genomics, microbiomics, proteomics, metabolomics, the interactome, pharmacogenome, and diseasome. In microbiomics research, for example, machine learning approaches could be used to better understand microbiome-host interactions. Another area where multi-omics are of crucial relevance and where a machine learning-enhanced network biology approach could be applied is drug discovery (Camacho et al., 2018). However, several challenges arise. In order for clinicians to rely on the support provided by MAI, transparent models are of crucial importance (Quinn et al., 2022). Some authors claim that in order to be transparent, MAI-models have to be intrinsically interpretable by clinicians. The black box phenomenon is therefore one of the major challenges of implementing MAI in clinical practice (Camacho et al., 2018): An AI-system is considered a black box when it cannot be explained how it produces its results, e.g. predictive models. Although the performance of the system might be exactly what the system designers were striving for, they cannot account for the exact process. The black box phenomenon is a serious issue when it comes to the explainability and trustworthiness of MAI applications, as we will see in later chapters. Another aspect concerns the validation of MAI performance in the clinical setting. Predictive performance and clinical utility are not necessarily the same (Quinn et al., 2022). Clinical validation is needed in order to clarify whether a certain application benefits patients and has clinical efficacy. Another issue is scalability: it is unclear how systems that have shown positive results in a limited setting may be adapted to large-scale healthcare providers. Scalability is also problematic with regard to smaller healthcare providers that might not have the financial or personnel resources to implement MAI-systems.
3.3 Further Developments
Five further developments from the 1990s onwards are crucial to MAI: big data approach, natural language processing (NLP), computer vision, IoT, and robotics.
3.3.1 Big Data Approach
In order to understand the impact of the big data approach in its entirety, one first has to distinguish between big data and big data analytics (Batko & Ślęzak, 2022). Big data signifies the volume and complexity of large data sets, whereas big data analytics describes the methods for transforming data into knowledge. Following this distinction, I will refer to the data sets as big data, the techniques for analyzing them as big data analytics, and the practices that result from combining both for achieving a certain set of goals as the big data approach. The term big data was first used by NASA scientists in the late 1990s to signify large volumes of data that cannot be sufficiently stored, managed, or analyzed by conventional methods (Austin & Kusumoto, 2016; Cox & Ellsworth, 1997). In medicine, big data refers to large volumes of health-related data generated by patients or populations (Austin & Kusumoto, 2016; Hulsen et al., 2019; Krumholz, 2014). Accordingly, the European Commission defines big data in the health context as "large routinely or automatically collected datasets, which are electronically captured and stored" (European Commission, 2016). In medicine, the "big" in big data refers to the vast amount, diversity, and complexity of data obtained from an individual or a population that cannot be properly analyzed by conventional statistical methods (Mishra, 2022). Diversity here means both the context in which data is generated – pathology lab results, clinical trials, smart wearables – as well as the format in which data is provided, like doctor's notes, EHRs, or various kinds of medical images (Adibuzzaman et al., 2017; Alonso et al., 2019). Big data in medicine thus refers to omics data as well as environmental and behavioral data and data from the EHR (Alonso et al., 2019; Ristevski & Chen, 2018; Riba et al., 2019). Heterogeneous omics data is mostly stored in different formats and possesses high dimensionality, meaning that the number of relevant features within the data is higher than the number of data samples (Ristevski & Chen, 2018). The EHR may encompass clinical notes, diagnoses, administrative data, charts, tables, prescriptions, procedures, lab tests, medical images, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) data. It usually consists of structured, semi-structured, or unstructured data: Structured data is shaped by pre-defined sets of answers or choices, usually organized in a drop-down menu (e.g. lab results, age, ICD-codes). This type of data is searchable and standardized, which makes data analysis easy (Fessele, 2018). Unstructured data consists of free text notes, e.g. handwritten text in visit notes, or images (Fessele, 2018). Semi-structured data encompasses data in a somewhat organized or hierarchical form that is not standardized, such as flowsheets or free text fields in the EHR (Klang et al., 2021). Commentators often define big data by the six Vs that describe its crucial characteristics and benefits: value, volume, velocity, variety, veracity, and variability of data (Ristevski & Chen, 2018). Value may signify the possibility to derive economic profit from data (Johnson et al., 2021), but also the possibility to discover hidden knowledge (Batko & Ślęzak, 2022) or the re-usability of data in different contexts and for different purposes (Mayer-Schönberger & Ingelsson, 2018). Volume refers to the large amount of data that is
used or generated (Batko & Ślęzak, 2022; Fessele, 2018). Velocity means the speed of data generation and sharing (Johnson et al., 2021). Variety describes the different data types (text, images, voice recordings, numerical data, structured, unstructured, and semi-structured data) (Johnson et al., 2021). Veracity signifies the quality and accuracy of data, which defines the level of trust in its predictive value (Johnson et al., 2021; Fessele, 2018). Variability implies that data sets may contain various kinds of information and meaning that do not necessarily have to be consistent (Fessele, 2018). Although different numbers of "Vs" for describing the benefits of the big data approach exist (Austin & Kusumoto, 2016; Batko & Ślęzak, 2022; Hamet & Tremblay, 2017), and although the Vs-based definition has been criticized (Mayer-Schönberger & Ingelsson, 2018), it may help to understand the hype surrounding big data in medicine. In biomedical research as well as in the clinical setting, the big data approach may bring about a methodological shift that transforms the way we integrate knowledge into medical practice (Hulsen et al., 2019; Mayer-Schönberger & Ingelsson, 2018; Riba et al., 2019). Big data analytics refers to techniques for integrating and analyzing large data sets in order to identify correlations and build predictive models (Batko & Ślęzak, 2022). Hence, the big data approach means using large amounts of data to find associations, patterns, trends, and outcomes that would otherwise not be detectable (Hulsen et al., 2019; Ristevski & Chen, 2018). The great advantage of big data analytics is the ability to integrate these disparate data types from various sources and perform quality control, analysis, modeling, interpretation, and validation of data (Ristevski & Chen, 2018). The main task of big data analytics is to build clusters and detect correlations in or between data sets, which in turn make it possible to generate predictive models (Ristevski & Chen, 2018). Collecting and processing health data from various sources makes it possible to create models of an individual's health that not only depict the momentary status, but predict future developments. Predictive models in turn can be tools for determining risk, e.g. for disease onset or progression, and implementing early interventions. This means a shift in clinical practice from the past and the present of an individual towards the future, from diagnosis and therapy to prognosis and prevention. Furthermore, the integration of diverse data from different contexts may imply a more holistic approach (Alonso et al., 2019). The integrative analysis of omics data makes it possible to obtain a systematic and complete view of the human body, may improve the understanding of disease mechanisms, and may enable personalized and applicable treatments (Alonso et al., 2019; Riba et al., 2019). As an example, when fed with the genomic data of an individual, deep neural networks may detect pathogenic genetic variants (Yu et al., 2018). This result can be contextualized with behavioral data, e.g. eating or smoking habits, to determine the risk for the onset of a disease under these concrete conditions. Such an approach may especially improve the diagnosis and risk assessment of complex diseases (Yu et al., 2018). Thus, integrating various omics data with data from other sources such as the EHR or sensor technologies may enable a more personalized diagnosis, treatment, and prevention (Riba et al., 2019).
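As an illustration of the clustering task described above, the following minimal sketch, assuming scikit-learn, groups synthetic "patients" by two invented features. Real big data analytics would of course operate on far larger and more heterogeneous data sets; the sketch only shows the mechanism of finding sub-groups without labels.

```python
# Minimal sketch of unsupervised clustering, assuming scikit-learn.
# The two synthetic features (e.g. age, a lab value) are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two artificial "patient groups" with different feature profiles
group_a = rng.normal(loc=[40.0, 1.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[65.0, 3.0], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster assignments
print(kmeans.cluster_centers_)                  # prototypical feature profiles
```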
One could describe the big data approach in terms of a paradigm change: Whereas hitherto biomedical research and clinical practice relied heavily on randomized data sampling, big data analytics deals with huge amounts of data (Mayer-Schönberger & Ingelsson, 2018). Instead of inferring insights from (hopefully) representative samples and generalizing them, big data analytics aims to find trends, patterns, and associations in large data sets. In addition, the big data approach implies another methodological shift. Finding patterns or trends in large data sets means first and foremost to focus on correlations. This is at the same time an advantage and a limiting factor of the big data approach, since it does not tell us anything about causation (Austin & Kusumoto, 2016; Mayer-Schönberger & Ingelsson, 2018; Obermeyer & Emanuel, 2016). In other words, by using machine learning algorithms for clustering or associating features in large data sets, we do not get answers to the questions of "why?" and "how?" (Austin & Kusumoto, 2016). Answering those questions requires further analysis and investigation. What the big data approach can do is enable us to ask the right questions, provide insights into correlations that have hitherto eluded us, and help us generate new hypotheses (Mayer-Schönberger & Ingelsson, 2018). The idea behind the big data approach is to integrate diverse data from various sources in order to get more precise insights into an individual's health status. The goal is to enable a more personalized and at the same time more precise treatment that is tailored specifically to an individual. Within recent years, reductions in the cost of sequencing approaches, computational power, and storage, in combination with improved machine learning techniques, have enabled the translation of big data approaches from a promising perspective to clinical reality (Agrawal & Prabakaran, 2020). Furthermore, the accessibility of data has immensely improved, especially due to data-sharing via the internet and the widespread diffusion of mobile devices (Austin & Kusumoto, 2016). Another important driving force in terms of data sharing is cloud computing, i.e. the use of huge data repositories that are combined with platform services or other software infrastructures (Austin & Kusumoto, 2016). Cloud computing allows smaller agents to make use of big data without the need of investing in large-scale and costly infrastructures by sharing operational costs. Also, cheaper IoT and other monitoring devices now allow continuous data collection and data analysis in real-time. Hence, the big data approach can be seen as a tool of precision medicine, i.e. providing the fitting treatment at the right time by considering the individual characteristics of the patient (Ashley, 2016; Hulsen et al., 2019). One example in this regard is biomarker discovery (Yu et al., 2018). The aim of this approach is to detect correlations between measurements and certain phenotypes. Machine learning approaches are used for identifying molecular patterns in diseases and thus predicting disease phenotypes. The accuracy of data-driven biomarkers depends on the availability of very large amounts of data and advanced machine learning methods. Biobanks are an important factor in this context (Agrawal & Prabakaran, 2020; Hulsen et al., 2019; Mayer-Schönberger & Ingelsson, 2018; Ristevski & Chen, 2018). Since biobanks contain genotypical and phenotypical information and biological samples (blood and tissue) from a large number of
individuals, usually hundreds of thousands, they are indispensable for big data generation in the medical context and open access to biomedical data (Agrawal & Prabakaran, 2020). Several national initiatives have been implemented such as the UK Biobank (Bycroft et al., 2018) or the 100,000 Genomes Project and All of Us in the USA (Agrawal & Prabakaran, 2020). Some of these projects, like All of Us, combine the stored biomedical data with data from EHRs, including behavioral, and family data, to create patient profiles (Agrawal & Prabakaran, 2020). Other programs have been implemented in China and several EU countries (Agrawal & Prabakaran, 2020). The big data approach is key for the so-called P4 medicine, where the four Ps stand for predictive, personalized, preventive, and participative aspects (Alonso et al., 2019; Ristevski & Chen, 2018; Mishra, 2022). The P4-model was developed in the early 2000s in the context of systems biology and systems medicine (Hood et al., 2004; Weston & Hood, 2004). Initially, researchers developed systems biology as a way to understand biological system from a global, holistic perspective. Whereas biology up to this point had been reductionist, literally and epistemologically dissecting organisms into ever smaller parts, this new approach aims to understand the bigger picture by figuring out how the different parts of an organism interact with each other and the environment. This perspective generates large and disparate data that need to be integrated. The prospective use of systems biology for medicine became clear at an early stage. Integrating data on different aspects of the human body and understanding how their dynamic interaction with each other as well as the environment enables a more in-depth understanding of processes and possible future developments. What started as a mere vision soon become a reality, powered by ever improving big data technologies (Flores et al., 2013). P4 medicine implies that we may improve healthcare services based on an improved predictive analysis and modeling. This may encompass analysis and modelling of the spreading of disease, offer insights into disease mechanisms, and provide techniques to better monitor the quality of healthcare institutions and providers (Ristevski & Chen, 2018). To some, the big data approach thus implies a transition in healthcare from a reactive approach that focusses on treating diseases to prevention, mainly through large-scale screening and monitoring (Agrawal & Prabakaran, 2020). Some commentators speak of “precision health” in contrast to “precision medicine” (Ashley, 2016). The latter describes an omics-based big data approach for finding and applying the fitting treatment at the right time for an individual patient. Precision health on the other hand signifies a wider approach that also includes health prevention and promotion and focusses on the individual as an active participant in the process. This means a shift from the traditional paradigm of the average patient in evidence-based medicine to the individual characteristics of a particular human being (Mayer-Schönberger & Ingelsson, 2018). Thus, the big data approach is a possibility to change the disease-centered focus in medicine to a patient-centered focus (Batko & Ślęzak, 2022). Focusing on the individual patient might also have an empowering effect and enhance patient autonomy. “Smart patients” (Chen et al., 2017; Riba et al., 2019) may play a more active and self-determined role in treatment
as well as in preventive contexts, since they can access their own individual health data and make decisions as well as perform self-management tasks on this informational basis. However, several challenges have to be overcome to achieve the goals of P4 medicine, precision medicine, and precision health. Unlocking the full six-V potential of big data in medicine and healthcare first and foremost requires a successful integration of disparate data. One of the main challenges in this respect is the unique nature of health-related data. Compared with the economic sector, where users share their data openly (either willingly or unwillingly) and companies have easy access, health-related data is considered sensitive and is therefore specifically protected (Hulsen et al., 2019). The fact that data is often siloed in static repositories without a possibility for central sharing and access hampers efforts to unlock the six Vs (Austin & Kusumoto, 2016). Another issue here is the comparability and consistency of data (Riba et al., 2019). Enabling interoperability between different systems requires unified data formats (Johnson et al., 2021) as well as the standardization of laboratory protocols and values (Ristevski & Chen, 2018). A lack of standardized data hampers the shareability of data within or between institutions or sectors (Adibuzzaman et al., 2017). Furthermore, EHR data is often noisy and incomplete, meaning that it either contains useless and distracting information or lacks relevant information, which negatively affects the validity of data models (Ristevski & Chen, 2018). Other obstacles include access by relevant stakeholders (Agrawal & Prabakaran, 2020) and privacy issues (Adibuzzaman et al., 2017). All technologies outlined in the following are only useful in the context of big data. Conversely, big data is only of use when machine learning techniques are applied to it (Obermeyer & Emanuel, 2016). It is important to note that for any application of AI in general and MAI in particular, the big data approach is crucial. In a way, AI technologies can be understood as the engine, whereas big data is the fuel. This is also significant from an ethical point of view, since it implies that whenever we talk about MAI in ethics, we also have to talk about big data. This crucial connection between MAI and big data shapes the methods and theoretical basis of my ethical analysis in part II.
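To make the earlier point about correlations concrete, the following minimal sketch, assuming pandas, screens a small synthetic table of invented health variables for pairwise correlations. As emphasized above, such output can suggest hypotheses, but it says nothing about causation.

```python
# Minimal sketch of correlation screening on tabular health data,
# assuming pandas. All column names and values are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "smoking_years": rng.integers(0, 40, n),
    "systolic_bp": rng.normal(120, 15, n),
})
# A toy outcome that is associated with both variables by construction
df["risk_score"] = (0.5 * df["smoking_years"]
                    + 0.2 * df["systolic_bp"]
                    + rng.normal(0, 5, n))

# Pairwise correlations: a starting point for hypotheses, not causal claims
print(df.corr())
```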
3.3.2 Clinical Decision Support-Systems (CDSS)
Clinical decision support-systems (CDSS) are a perfect example of the synergy between the big data approach and MAI. CDSS based on machine learning techniques have been discussed for two decades now (Sim et al., 2001; Berner, 2007; Osheroff et al., 2007; Middleton et al., 2016; Sutton et al., 2020). These systems may perform various clinical tasks like image diagnosis, pathological diagnosis, clinical treatment decision-making, prognosis analysis, and drug screening (Wang et al., 2023; Craddock et al., 2022). CDSS can be divided into two groups (Sutton et al., 2020): Knowledge-based CDSS follow simple if-then rules based on data sources as input (such as research literature or hand-crafted information). The rules have to be
programmed, which is a time-consuming and labor-intensive process. Non-knowledge-based CDSS use machine learning approaches to analyze the input data, e.g. finding patterns within or similarities and associations across data sets. Before the machine learning era, knowledge-based CDSS were not sufficiently integrated into the workflow of clinicians and were used mostly for academic purposes. Modern CDSS based on machine learning are suitable for use at the point of care, especially since they may use input from web applications, IoT applications, and the EHR (Sutton et al., 2020). In diagnostics, the main benefit of CDSS is that they integrate heterogeneous types of data and information to build predictive models and provide specific recommendations regarding therapeutic steps (Craddock et al., 2022). That means that CDSS are not only a sophisticated tool for analyzing and modeling data, but also support and possibly enhance decision-making and action on behalf of clinicians by organizing clinical knowledge and patient data (Middleton et al., 2016). This goes to show that the big data approach can unlock the full potential of CDSS, just as CDSS based on machine learning may make the best use of big data. In oncology, for example, one of the main tasks for such a system would be to determine the best treatment scheme for different molecular phenotypes, thus operating in the sense of precision medicine (Wang et al., 2023). A CDSS in oncology may analyze individual patient data, including non-medical aspects such as the financial status and medical insurance type of the patient, to evaluate the efficacy of a specific drug, assess product accessibility, and check for adverse reactions. Based on the outcome, the system may suggest individualized treatment steps and support clinicians in optimizing treatment plans (Wang et al., 2023). Another field of application for CDSS is the administrative branch of hospitals and other healthcare providers (Sutton et al., 2020). The systems may, for example, perform diagnostic code selection or automated documentation and thus reduce the workload of providers. By optimizing the scheduling of lab use or other utilization of resources, CDSS may also enable cost containment within institutions. Furthermore, CDSS can support clinical pharmacists in hospitals in performing medication reviews by combining patient data with pharmaceutical knowledge (Marcilly et al., 2023). This may reduce prescription errors and improve patient safety as well as prevent drug-related hospital readmissions. These are just a few examples of the various uses of CDSS. What is important here is the possibility of organizing information to optimize decision-making and workflow, thus enabling more precise, safer, and more cost-efficient healthcare delivery. Digital twins could be seen as the peak of CDSS technology, combining the big data approach with a variety of smart technologies. The concept of a digital twin was first developed in production management at the beginning of the century (Fuller et al., 2020; Wright & Davidson, 2020). It signifies the modelling of a virtual product and the simulation of its product cycle by applying machine learning techniques, especially data mining, to make sense of the various data involved. The idea was to derive insights from this simulation for optimizing the production steps in the real world. This concept was later adopted in other sectors,
among them healthcare (Coorey et al., 2021). The purpose of a digital twin in healthcare is to simulate a single organ, a physiological system, or the whole human body in a virtual model (Björnsson et al., 2019; Gkouskou et al., 2020; Kamel Boulos & Zhang, 2021). This model is fed and continuously updated with health-related data. Following a big data approach, diverse data from multiple sources can be integrated, e.g. patient history and the EHR, lab results, as well as sensor and monitoring data from IoT and mHealth applications. By using machine learning techniques like data mining or deep learning, this data can be synthesized into a virtual model that enables clinicians to conduct diagnostic tests and predictive analyses. One could, for example, simulate the cardiovascular system of a patient with a cardiac disease to predict events and disease progression, make risk assessments, and implement preventive measures. The main benefit of a digital twin is its dynamic and bidirectional nature (Liu et al., 2021). Dynamic means that data can be fed into the virtual model continuously and in real-time, e.g. from IoT devices or smart wearables that monitor the patient's heart rate and other vital signs. This allows a longitudinal collection of data and shows ongoing processes, whereas relying on isolated diagnostic tests, e.g. blood lab results, only provides a snapshot of the patient's health condition. Thus, clinicians can make better predictions and react to changes in the patient's condition in real-time. Bidirectionality implies a recurring data loop between the digital twin and the real world. The digital twin is fed with data, which it analyzes and builds predictions from. The clinicians respond by implementing therapeutic measures, which in turn change the health condition of the patient, thus generating new data that flows into the digital twin. Whereas conventional predictive modelling in medicine is rather static, with only a limited capability of integrating data and processing it in real-time, digital twins allow a dynamic virtual model that depicts processes and events as they occur. This way, clinicians can see the results of their actions in real-time without the need for repeated diagnostic testing, which is time-consuming and may be stressful for the patient. A good example here is drug testing (Björnsson et al., 2019; Liu et al., 2021). Going back to our example of a patient with a cardiac disease, clinicians could use a digital twin of the cardiovascular system to test different drugs and dosages. The virtual model would show them instantly which drug and dosage work in which way. Compared to the conventional method of trial and error, in silico drug testing on the digital twin saves time and spares the patient the risk of a wrong dosage or a drug incompatibility. In this regard, digital twins can be seen as a means of precision medicine, enabling clinicians to provide the fitting treatment at the right time (Kamel Boulos & Zhang, 2021). Another major benefit is the personalization aspect (Björnsson et al., 2019). Instead of, for example, relying on evidence for drug effectiveness and dosage from large cohort studies showing only statistical probabilities, the digital twin models the reactions of the individual patient. Thus, digital twins are another example of the potential benefits in terms of personalization and precision medicine that the synergy between the big data approach and MAI may yield.
It is an especially interesting example since it may include various smart technologies, from machine learning software to computer vision
and IoT. At the moment, digital twin technology is at a very early stage, and only a few applications have made it from bench to bedside. However, the technology clearly has enormous potential to fulfill the promises of P4 medicine.
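To make the dynamic, bidirectional data loop described above more tangible, the following is a minimal Python sketch. It is purely illustrative: the class name, the rolling heart-rate window, and the risk formula are hypothetical stand-ins for the mechanistic and machine learning models a real digital twin would combine.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CardiacDigitalTwin:
    """Toy virtual model of a patient's cardiovascular state.

    A rolling window of heart-rate readings stands in for the
    continuously updated model state of a real digital twin.
    """
    window: int = 60
    readings: list[float] = field(default_factory=list)

    def ingest(self, heart_rate: float) -> None:
        # Dynamic aspect: sensor data flows in continuously.
        self.readings.append(heart_rate)
        self.readings = self.readings[-self.window:]

    def predict_risk(self) -> float:
        # Placeholder predictive analysis: elevated resting heart
        # rate as a crude proxy for cardiovascular risk (0..1).
        if not self.readings:
            return 0.0
        avg = mean(self.readings)
        return min(max((avg - 80.0) / 60.0, 0.0), 1.0)

def wearable_stream():
    # Stand-in for an IoT feed, e.g. a smartwatch heart-rate sensor.
    yield from [72, 75, 110, 118, 122, 96, 88]

twin = CardiacDigitalTwin(window=5)
for hr in wearable_stream():
    twin.ingest(hr)
    if twin.predict_risk() > 0.5:
        # Bidirectional aspect: the prediction triggers a real-world
        # intervention, which changes the data the twin sees next.
        print("elevated risk -> alert clinician, adjust therapy")
```

The design point is the loop itself: sensor data updates the model, the model's prediction triggers an intervention, and the intervention changes the next readings the twin receives.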
3.3.3 Natural Language Processing (NLP)
Basically, natural language processing (NLP) aims at enabling machines to understand and interpret human language (Wang et al., 2020). In the medical context, NLP is a tool for extracting and classifying information from medical documents like health and medical records. These documents contain valuable information, e.g. chief symptoms, diagnostic information, drugs administered, and adverse reactions, but not always in a structured way. NLP can be used as a means for transforming the unstructured information in these documents into structured data. The process begins with information extraction, i.e. identifying and labeling entities like name, place, and time, and detecting grammatical or semantic relationships (a minimal sketch follows at the end of this section). This method can be used on unstructured, structured, or semi-structured texts, from doctor's notes to EHRs and even social media posts. NLP also entails syntactic analysis, using syntactic rules to determine the function of each element of a sentence. Based on the acquired and analyzed information, NLP can be applied for information retrieval, i.e. searching related documents in order to find relevant or similar content. Finally, machine translation as part of NLP signifies the automated translation of speech or single words.

A major area of application of NLP is CDSS. A CDSS based on NLP can extract and integrate information in the form of natural language from various sources like the EHR, combine it with data from lab results or scientific studies, and suggest treatment options (Yu et al., 2018). The main advantage here is the ability to analyze unstructured information, integrate structured and semi-structured data, and define the most probable course of action (Emani et al., 2022). Furthermore, NLP is an important tool for clinical outcome prediction and patient monitoring. NLP applications allow the extraction of data from the EHR to predict mortality, readmission, and length of hospital stay, or to classify cancer patients with regard to their responses to chemotherapy. This can be the basis for a more personalized therapy (Yu et al., 2018). The main benefits of NLP are quality improvement and reduced operating costs, especially since automated note taking and language translation services may optimize the clinician's workflow (Quinn et al., 2022).

One highly publicized example of the possible performance of NLP in CDSS is IBM's Watson for Oncology (WFO) (Craddock et al., 2022; Liu et al., 2018). WFO uses NLP to generate evidence-based treatment options for cancer patients. The system was trained on thousands of real cases, and when fed with patient data, it needs a median of 40 s to analyze all relevant data, contextualize it with the available oncological knowledge, and generate a treatment suggestion (Somashekhar et al., 2016). Various retrospective studies have shown a high concordance rate of WFO with the actual decisions of the cancer boards involved.
Yue and Yang (2017), for example, found a concordance rate of 95%, including 98% for breast cancer, 96% for colon cancer, 93% for stomach cancer, and 87% for lung cancer.

Currently, mental health is among the most prolific fields of application of NLP. The technique is used for training algorithms to identify specific mental disorders (Wang et al., 2020). NLP is also the core technology of chatbots, which have already been implemented in many mental health contexts. Chatbots are conversational agents that are able to converse with humans in spoken or written form (Abd-Alrazaq et al., 2019; Vaidyam et al., 2019). The applications range from non-embodied forms, such as text input-and-reply applications, to embodied forms, where the system mimics a human being in appearance and behavior. In the mental health context, chatbots may be used for diagnostic or even therapeutic purposes. Besides mental health, chatbots are also used for patient interviews in primary care (Ni et al., 2017) or for providing doctors and parents with information on generic medicines for children (Comendador et al., 2014).
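The information-extraction step described above, i.e. turning an unstructured note into structured data, can be illustrated with a minimal sketch. The gazetteers and the dose pattern here are toy stand-ins for the trained named-entity-recognition models a clinical NLP pipeline would actually use.

```python
import re

# Tiny gazetteers stand in for the trained entity-recognition
# models of a real clinical NLP pipeline.
SYMPTOMS = {"chest pain", "dyspnea", "fatigue"}
DRUGS = {"metoprolol", "aspirin"}

DOSE_PATTERN = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|ml|g)\b", re.I)

def extract(note: str) -> dict:
    """Turn an unstructured doctor's note into structured data."""
    text = note.lower()
    return {
        "symptoms": sorted(s for s in SYMPTOMS if s in text),
        "drugs": sorted(d for d in DRUGS if d in text),
        "doses": [f"{amount} {unit}"
                  for amount, unit in DOSE_PATTERN.findall(text)],
    }

note = ("Patient reports chest pain and dyspnea on exertion. "
        "Started metoprolol 50 mg daily; aspirin continued.")
print(extract(note))
# {'symptoms': ['chest pain', 'dyspnea'],
#  'drugs': ['aspirin', 'metoprolol'], 'doses': ['50 mg']}
```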
3.3.4 Computer Vision
A further crucial development in the modern MAI era is the advancement of computer vision (Kaul et al., 2020). Especially the development of image classifiers in the 2010s, software that assigns labels to entire images, signifies an important step in the improvement of image-based diagnosis (Yu et al., 2018). The processing of medical images requires large amounts of labelled data, which is why large-scale studies and biobanks are crucial sources in this respect. Deep learning approaches are crucial here, since they allow the extraction of information from images and the modelling of complex relations between input and output data by using multiple layers of labeled data. Transfer learning is an important tool in this field, whereby neural networks that have been trained on large amounts of non-medical images are fed with medical images for fine-tuning (see the sketch below). One example of the possibilities of computer vision is CardioAI, an application that analyzes cardiac MRIs to detect anomalies for cardiovascular risk prediction (Kaul et al., 2020).

One of the earliest fields of application for computer vision is radiology. Computer vision is applied in almost all sub-fields, like X-ray radiography, computed tomography (CT), magnetic resonance imaging (MRI), and positron-emission tomography (Yu et al., 2018). Image analysis is an important tool for diagnostic purposes and for monitoring disease progression, e.g. chest radiography for detecting various types of lung disease and mammography scans for detecting breast cancer. These applications support doctors in making sense of images by extracting relevant data, identifying relevant features, and evaluating them (Alsuliman et al., 2020). An application may, for example, detect an abnormal feature in a CT, draw borders around it and label it, and predict the health risk attached to it. Especially in these fields, AI can perform on a par with or even surpass human experts (Killock, 2020).
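The transfer-learning approach can be sketched with PyTorch: a network pretrained on non-medical images is frozen, and only a new classification head is fine-tuned on medical images. The task, the data loader, and the hyperparameters are hypothetical; this is a schematic sketch under those assumptions, not a validated diagnostic model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on ImageNet (non-medical images)...
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# ...frozen, so only the new task-specific head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head for a binary medical task, e.g.
# "anomaly" vs. "no anomaly" on chest radiographs (hypothetical).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(dataloader, epochs: int = 3):
    # `dataloader` is assumed to yield (image_batch, label_batch)
    # pairs of labelled medical images, e.g. from a biobank.
    model.train()
    for _ in range(epochs):
        for images, labels in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```

The design choice of freezing the backbone reflects the scarcity of labelled medical images: the general visual features learned from millions of non-medical images are reused, and only the final layer must be learned from the smaller medical dataset.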
In dermatology, computer vision applications can detect malignant skin lesions and distinguish them from benign ones (Yu et al., 2018). In this field too, neural networks have proven at least as accurate in detecting skin lesions as dermatologists and have in some cases outperformed them (Du-Harpur et al., 2020). Smartphone dermatology apps based on computer vision can be used by anyone to detect skin abnormalities and send pictures to doctors (Ouellette & Rao, 2022).

In ophthalmology, fundus photography, i.e. images of the retina, the optic disc, and the macula, may serve as the basis for detecting several diseases like diabetic retinopathy, glaucoma, neoplasms of the retina, or age-related macular degeneration (Alsuliman et al., 2020). Computer vision also enables practitioners to identify causes of preventable blindness or conduct preventive diabetic retinopathy scans (Ouellette & Rao, 2022). This technology can go beyond the abilities of human doctors when applied to detect hitherto unknown associations between patterns in retinal images and factors like gender, age, blood pressure, or adverse cardiac events (Yu et al., 2018), and it has proven more efficient than human practitioners in detecting specific retinopathies (Lim et al., 2023).

In pathology, computer vision technologies may be used for the histopathological assessment of biopsies or tissues, a task that has hitherto been hardly scalable (Yu et al., 2018). By providing a more objective analysis, these applications may also overcome the issue of discrepancies between assessments by different pathologists. In combination with neural networks, this technology can detect breast cancer metastases in lymph nodes, identify prostate cancer in tissue samples, or measure and track tumors (Alsuliman et al., 2020).

Although computer vision is already well-established in several clinical fields, various issues remain. The high performance of some of these systems is remarkable, and so is their ability to outperform human clinicians. However, it should be noted that clinicians usually do not analyze images alone when making clinical decisions. Rather, a sound decision often requires integrating information from other sources, such as other diagnostic tests or the patient history (Habuza et al., 2021). Thus, image recognition on its own should not be overrated, since it performs only an isolated diagnostic task.
3.3.5 Internet of Things (IoT)
The Internet of Things (IoT) means an immense step forward for MAI in terms of accessing and processing behavioral as well as environmental data of an individual in real-time. The so-called medical IoT or healthcare IoT encompasses connected medical devices that are mostly used for a wide range of monitoring tasks (Manickam et al., 2022). Devices like smart wearables, smart home monitoring and sensor technologies, and clinical point-of-care (POC) devices (ultrasound, thermometers, glucometers, and ECG readers) enable real-time health monitoring at home or in the clinical setting (Manickam et al., 2022). Some of these monitoring
technologies are automated, without the need for human intervention. The devices share data with the recipient, e.g. doctors, via the cloud. The crucial benefit of IoT technologies is the possibility to improve integrated healthcare services, especially caring for older adults, management of chronic diseases, and rehabilitation (Pise et al., 2023; Sixsmith, 2013). Given the required technical infrastructure, IoT applications may enable and improve intersectoral collaboration between GPs, specialist doctors, professional nurses, social services, and informal caregivers. In radiology, for example, data obtained through IoT can be integrated with data from the EHR to improve patient tracking, symptom detection, remote screening, intelligent diagnosis, and remote intensive care (Sakly et al., 2023). IoT-powered ecosystems of care (Camarinha-Matos et al., 2015) might thus reduce costs and mitigate the effects of the demographic shift as well as the staff shortage in some branches of the healthcare sector.

As with other MAI technologies, IoT has a broad spectrum of possible applications. I will discuss only a few examples here to highlight the various contexts, uses, and target groups of IoT. When IoT is applied in a smart home setting, one speaks of ambient assisted living (AAL) (Cicirelli et al., 2021; Queirós & da Rocha, 2018; Sapci & Sapci, 2019). AAL technologies are mostly used for providing healthcare services for older adults or for rehabilitation purposes, thus meeting healthcare needs and improving the overall quality of life (Sixsmith, 2013; Sulis et al., 2022). In addition, AAL can enable older adults to remain longer in their home environment and live an active, mostly independent life (Kuziemsky et al., 2019; Sixsmith, 2013). Combining remote health surveillance technologies, activity monitoring, and ubiquitous computing enables the permanent collection of real-time behavioral and environmental data on an individual in their home environment or in a nursing home (Mortenson et al., 2015; Pise et al., 2023). Machine learning technologies may develop activity profiles and detect aberrations from algorithmically established standards. These activity profiles enable intervention in acute emergencies as well as preventive measures based on predictive data analysis. For example, an AAL system might contain floor sensors that detect anomalies in the gait pattern of an individual, which may be a sign of an impending fall. The system may then trigger an alarm and inform a caregiver so that the fall can be prevented; a minimal sketch of such anomaly detection follows below.

IoT also holds promise in mental health. Various applications can be used in this context, from smart wearables to video surveillance and monitoring systems at home to smart textile technologies (Berrouiguet et al., 2018; Briffault et al., 2018; Gutierrez et al., 2021; de la Torre Diez et al., 2018). Combining these sensor, surveillance, and monitoring technologies with computer vision and data mining enables the integration of an individual's behavioral and environmental data (Berrouiguet et al., 2018; Briffault et al., 2018). One crucial aspect in this regard is digitally enhanced ecological momentary assessment (EMA), i.e. measuring behavior and experiences in an individual's daily environment (Berrouiguet et al., 2018; Smith & Juarascio, 2019). The main benefits are real-time and longitudinal data sampling as well as continuous monitoring and situational intervention (de la Torre Diez et al., 2018).
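To make the idea of algorithmically established activity baselines concrete, the following Python sketch flags a gait reading that deviates strongly from an individual's profile. The sensor values, baseline size, and threshold are all hypothetical; real AAL systems would use far richer models.

```python
from statistics import mean, stdev

def detect_anomaly(history: list[float], current: float,
                   threshold: float = 3.0) -> bool:
    """Flag a reading that deviates strongly from the learned baseline.

    `history` is the individual's established activity profile, e.g.
    gait-cycle durations from floor sensors; `current` is the latest
    reading. Returns True if the deviation exceeds the threshold.
    """
    if len(history) < 10:
        return False  # not enough data for a stable baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(current - mu) / sigma > threshold

# Gait-cycle durations in seconds from a smart-floor sensor (toy data).
baseline = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03, 1.00, 0.97, 1.04, 1.02]
reading = 1.65  # markedly slower gait, possible fall precursor
if detect_anomaly(baseline, reading):
    print("gait anomaly detected -> notify caregiver")
```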
Whereas experience sampling methods rely on self-reports of individuals and are therefore prone to errors or recall bias,
IoT-powered EMA enables the collection of objective data on mood, activity, or behavior (van Genugten et al., 2020). Environmental and behavioral data from an individual's daily life and home environment can then be combined with data from the EHR in order to enhance decision-making and provide a more personalized treatment (Berrouiguet et al., 2018). This approach can be used for different types of mental disorders, from affective disorders to stress-related disorders and schizophrenia (Gutierrez et al., 2021).

As we have already seen, mHealth technologies are crucial in the context of IoT. Whether one considers mHealth as part of IoT or the other way around is open to debate. The World Health Organisation (WHO) defines mHealth as the support of clinical or public health practices through mobile devices like smartphones, monitoring devices, or personal digital assistants (PDAs) (WHO, 2022). Smart wearables, body sensors, or apps for smartphones and tablets offer a wide range of options for collecting and processing data from the daily activities and living environment of individuals. This data is of exceptional value, since it provides insights into behavioral and environmental factors that, in a conventional clinical setting, are either not accessible or limited to patients' reports and self-assessments, which can be vague or unreliable.

One may distinguish between mHealth applications for health condition management and for wellness management (Robbins et al., 2017). Health condition management signifies the use of mHealth technologies for a clinical purpose, meaning that there is a medical indication. In this context, mHealth technologies might be used for diagnostic and therapeutic purposes or for communication between healthcare professionals and patients. One area of application is telemedicine and telecare, where mHealth technologies enable monitoring, surveillance, data transmission, and communication over a distance (Sim, 2019; Weinstein et al., 2014, 2018). Data transfer in telemedicine and telecare may be telemetric, i.e. automated and without the active participation of patients (a minimal sketch of such a transfer follows below). When patients are actively included in data collection and transmission, one speaks of self-monitoring, whereas self-management describes the active participation of patients in the treatment process (Chib & Lin, 2018; Brew-Sam & Chib, 2020).

According to several authors, health condition management offers great opportunities for improving healthcare services, especially when combined with MAI and a big data approach (Sim, 2019; Steinhubl & Topol, 2018; Topol, 2015). The main advantage is the possibility of generating and obtaining data without the limitations of the traditional clinical setting, which requires the patient to be present at a specific location at a given time (Topol, 2015). Individuals may use smart devices like wearable sensors, apps on smartphones or tablets, and smart watches or wristbands for tracking and monitoring their health data. Devices can track almost all daily activities of a person, from dietary habits to movement and sleep patterns, from reactions to social situations to shopping preferences. This not only makes access to healthcare easier and more comfortable; it may also enable the inclusion of hitherto underserved groups and thus allow a more personalized treatment based on a patient's individual characteristics (Weissglass, 2022).
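The telemetric mode of data transfer mentioned above can be sketched in a few lines of Python. The endpoint, payload fields, and patient identifier are hypothetical; a production system would add authentication, encryption, and standardized exchange formats.

```python
import time
import requests  # third-party HTTP library: pip install requests

ENDPOINT = "https://clinic.example.org/api/telemetry"  # hypothetical

def transmit_vitals(patient_id: str, heart_rate: int, spo2: float) -> bool:
    """Telemetric transfer: the device pushes readings to the care
    provider without any action by the patient."""
    payload = {
        "patient_id": patient_id,
        "timestamp": time.time(),
        "heart_rate": heart_rate,
        "spo2": spo2,
    }
    try:
        response = requests.post(ENDPOINT, json=payload, timeout=5)
        return response.ok
    except requests.RequestException:
        return False  # e.g. queue the reading locally and retry later

transmit_vitals("patient-0042", heart_rate=78, spo2=0.97)
```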
Digital biomarkers are of utmost relevance in the context of MAI-powered mHealth, especially with regard to personalization (Guthrie et al., 2019; Sim, 2019). Digital biomarkers are biomedical or behavior-related parameters used for defining health-related outcomes for prognosis as well as interventions. One example is the step count recorded by an activity monitoring app for patients with a cardiovascular disease (Guthrie et al., 2019); a minimal sketch of such a biomarker follows at the end of this section. Based on this biomarker, doctors can tailor preventive or therapeutic interventions to a patient's individual health situation and needs. Besides continuously obtaining real-time and real-world data, mHealth applications may also enable health promotion and behavior change. In addition to self-monitoring, self-management is therefore a crucial aspect. The active participation of patients is especially relevant in diseases where behavioral and lifestyle aspects are crucial.

Whereas health condition management is based on a medical indication and the interaction of patients and health professionals, wellness management implies the use of mHealth applications without an existing health need. Wellness management aims to improve or maintain health and well-being, which is why it consists mainly of lifestyle applications like fitness trackers or smart watches. Around these wellness management applications, a community of users has evolved who track their health and fitness, document their lifestyle, and optimize their way of life. This quantified self movement also uses the internet, especially social media platforms, for sharing data and exchanging experiences (Lupton, 2016). The separation between health condition management and wellness management is not always clear, since some uses of the latter might be relevant in a medical sense. Individuals who engage in wellness management might be better informed about their own health and have a more health-conscious lifestyle, which might have a preventive effect. In this sense, wellness management might contribute to disease prevention (Jo et al., 2019). However, this correlation has not been sufficiently proven, since the existing evidence suggests only a weak preventive benefit of wellness management applications (Iribarren et al., 2021; Kitsiou et al., 2021).

Besides personalized treatment based on prediction and prevention, mHealth technologies offer new possibilities of participation in data collection and treatment. As we have seen, mHealth users can participate actively by checking vital signs, entering data into apps, and sending it to doctors. Participation may enable better access to as well as control over one's own health data, allowing patients to assess treatment options and make better-informed decisions (Mishra, 2022). Hence, some commentators view mHealth as an enabler of patient empowerment (Topol, 2015).

Although mHealth might contribute to improved disease management and prevention as well as user empowerment, several issues arise. A major challenge is the integration and presentation of data obtained through mHealth devices in the workflow of clinicians (Sim, 2019). This is problematic since countless health apps exist that may require different platforms or systems to run. Furthermore, integrating mHealth device data into the EHR would be crucial for utilizing it, but this is also difficult due to interoperability issues.
Diversity and lack of standardization are also problematic in the context of the assessment and evaluation of mHealth applications (Bradway et al., 2017). This may lead to
misinformation concerning the purpose and status of mHealth applications, for example their classification as a medical app or merely a lifestyle app. It is also crucial to assess the utility and usability of apps in order to evaluate their value for patients as well as healthcare providers. One problem in this regard is the variation in data accuracy and user adherence (Yu et al., 2018).
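To illustrate the digital-biomarker idea from above, the following sketch derives a simple step-count biomarker and relates it to an individualized activity target. The target, the flagging threshold, and the data are hypothetical placeholders; in practice these would be set by the care team.

```python
from statistics import mean

def weekly_biomarker(daily_steps: list[int], target: int = 6000) -> dict:
    """Derive a simple digital biomarker (mean daily step count)
    and relate it to an individualized activity target."""
    avg = mean(daily_steps)
    return {
        "mean_daily_steps": round(avg),
        "target": target,
        "adherence": round(avg / target, 2),
        # Flag patients falling well below target for clinical review.
        "flag": avg < 0.8 * target,
    }

steps_last_week = [5200, 6100, 4800, 7300, 3900, 6500, 5800]
print(weekly_biomarker(steps_last_week))
# {'mean_daily_steps': 5657, 'target': 6000, 'adherence': 0.94,
#  'flag': False}
```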
3.3.6 Robotics
Finally, the advancements in robotics mean a major step forward for MAI. Whereas applications like NLP are virtual, robotics can be considered the physical field of application of AI (Hamet & Tremblay, 2017). The term machine intelligence refers to the integration of robotics and AI, whereby the aim is to create intelligent agents that are aware of their embodiment as well as their environment and may thus act autonomously (Haddadin & Knobbe, 2020).

A major step in the development of AI-powered robots was the development of reinforcement learning (Habuza et al., 2021). As discussed above, reinforcement learning means that a system learns the accuracy of its predictive model by trial and error. The robotic agent interacts with its environment to gather information, build models, and derive predictions from them. It then decides upon an action, to which it gets an immediate reaction in the form of positive or negative reinforcement. When performing a task in the right, i.e. intended, way, the system is rewarded; when it makes a mistake, a punishment ensues. Learning to play games like Go might serve as an example here: a wrong move immediately leads to a negative outcome, so the agent learns which moves to perform and which to avoid (a minimal sketch of this learning loop follows at the end of this section). The same principle can be applied in the medical context, e.g. when a robotic agent learns to dispense medication or perform a surgical technique.

In surgical training, AI-powered surgical robots can be applied in order to improve the skills of medical students or residents (Habuza et al., 2021). Besides training, surgical robots may also act as assistant surgeons or solo performers (Hamet & Tremblay, 2017). A prominent example is the surgical system Da Vinci, which has already been implemented in many countries. Da Vinci can perform a wide range of precision surgery procedures, e.g. prostatectomies, gynecologic surgical procedures, or cardiac valve repair. The crucial benefit ascribed to robotic surgery is the possibility to support practitioners in optimizing their skills, improve the efficiency of procedures, and increase access to surgery (Fuerst et al., 2021). The use of surgical robots could standardize decision-making processes in surgery by pooling data on practitioners' experiences (Ozmen et al., 2021). Other effects could be the optimization of surgeons' workflows as well as the reduction of surgery time, which would benefit patients directly (Habuza et al., 2021). Another promising field of application is cybersurgery, sometimes also called telesurgery, which signifies performing surgical interventions over a distance by using surgical robots via a telecommunications channel (Alafaleq, 2023). Surgeons can perform surgery on a remote patient, thus possibly reducing costs and overcoming the shortage of skilled personnel as well as
geographical barriers. Furthermore, autonomous systems in robotic surgery have shown promising results beyond supporting humans. In performing specific tasks like suturing and knot-tying, autonomous robotic systems acting as solo performers have already outperformed human surgeons (Yu et al., 2018). Machine intelligence in the form of robots can also be used for ultrasound imaging, either as teleoperated systems or in autonomous image acquisition for 3D reconstruction from 2D images (von Haxthausen et al., 2021). When applied to nanotechnology, machine intelligence may power nanorobots for drug delivery to target organs, tissues, or tumors (Egorov et al., 2021; Hamet & Tremblay, 2017). Another crucial factor in this regard is the integration of IoT applications like monitoring systems to enable automated processes in healthcare (Dixit et al., 2021).

The care domain offers a great variety of tasks for AI-driven robots, from fall prevention to delivering medication and providing emotional support (Archibald & Barnard, 2018). Care robots can be applied in hospitals, nursing homes, or the home. One can classify robots in nursing into two groups: assistive robots and socially assistive robots (SAR) (Maalouf et al., 2018). Assistive robots perform services such as mobility aid for persons with impairments, serving and feeding assistance, and monitoring practices. Service robots may also support caregivers as carrier robots that lift and carry patients from and to beds or wheelchairs. A form of wearable robotics are exoskeletons, mechanical suits that can store energy from human movement or convert it into mechanical force (O'Connor, 2021). Exoskeletons can support caregivers in tasks such as lifting or holding heavy objects or persons, thus reducing work-related musculoskeletal disorders that affect the back, neck, or shoulders. SAR can integrate complex physical tasks with social interactions and are mostly used in long-term care (Abdi et al., 2018). As service robots, SAR may support individuals in their daily activities. As companion robots, they may provide entertainment or other forms of interaction, thus supporting or improving psychosocial well-being. One of the most prominent examples is Paro©, a SAR in the shape of a baby seal with white fur (Wada et al., 2004). Paro is able to interact with humans by using tactile, audio, visual, and posture sensors as well as movement actuators in the rear fin, neck, eyes, and eyelids. It is capable of proactive, reactive, and physiological behaviors, as well as of assigning value to different types of stimuli (e.g. stroking). Besides just being a cute companion, Paro has been used for therapeutic purposes such as affective therapy (Abdi et al., 2018) and behavior change in Alzheimer's patients (David et al., 2022).

This wide range of possible applications of MAI-powered robots offers great benefits, but also poses several challenges. There remain considerable safety concerns, e.g. in telesurgery. The risk of deviations in motion tracking and force reflection might reduce the situational awareness of surgeons and the accuracy of surgery (Feizi et al., 2021). Similar safety concerns arise with nursing robots. Furthermore, since SAR may be utilized for psychosocial and emotional support, the human-machine relationship becomes an issue. Using robots for hitherto human activities may lead to an objectification of care receivers and a dehumanization of the care relationship (Sharkey & Sharkey, 2012).
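The reward-and-punishment loop described at the beginning of this section can be illustrated with a minimal tabular Q-learning sketch. The toy task (three situations, two possible actions, one of which is correct) is hypothetical and vastly simpler than any dispensing or surgical task, but it shows how reinforcement shapes a policy through trial and error.

```python
import random

# Toy task: in each of three situations (states) exactly one of two
# actions is "correct"; reward +1 for the right action, -1 otherwise.
CORRECT = {0: 1, 1: 0, 2: 1}
ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate
q = {(s, a): 0.0 for s in CORRECT for a in (0, 1)}

for _ in range(2000):
    state = random.choice(list(CORRECT))
    # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: q[(state, a)])
    reward = 1.0 if action == CORRECT[state] else -1.0
    # Single-step task, so no discounted future-value term is needed.
    q[(state, action)] += ALPHA * (reward - q[(state, action)])

policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in CORRECT}
print(policy)  # converges to {0: 1, 1: 0, 2: 1}
```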
3.4 The Shape of Things to Come
The most significant development of the 2020s is the immersion of MAI into everyday practice (Kaul et al., 2020). Due to the availability and affordability of computing power and cloud storage, MAI is entering all domains of medical expertise, from biomedical research to translational research and clinical practice (Yu et al., 2018). MAI offers advantages and potential across a wide spectrum of healthcare practices, such as enhancing clinical decision-making, improving disease diagnosis, detecting previously unknown imaging or genomic patterns associated with patient phenotypes, assisting in surgical interventions, and making healthcare services more accessible, especially in underserved regions, through mHealth and telehealth (Yu et al., 2018).

Despite the astonishing possibilities of MAI and the promising results it has already shown in the clinical context, there exists a translation gap challenging its implementation (Gama et al., 2022). Implementing MAI in clinical practice as an integral part of the workflow of healthcare practitioners will have to overcome several obstacles. First, there are technical challenges that may hamper a successful implementation (Yu et al., 2018). MAI requires the availability of large amounts of data, especially for training purposes. Such data is scarcely available or difficult to access. Another requirement is the quality of data, which is again an important factor in training algorithms. Low-quality data may contain bias and noise, which causes faulty algorithmic generalization. A more structural or institutional than technical challenge is building environments for the storage, collection, and exchange of large amounts of data. Not every healthcare facility has the same technical standard or is able to provide the required infrastructure. This might become problematic, since the big data approach requires the exchange of standardized data. Hence, data integration across healthcare applications and institutions as well as interoperability are crucial issues.

As some commentators claim, building a digital healthcare model is the main challenge in the transition process we are witnessing right now (Marino et al., 2023). Available evidence from other fields shows that an active approach to the implementation of digital technology is beneficial to the diffusion and dissemination of technologies, whereas a passive approach may hamper this process (Gama et al., 2022). Since active approaches are largely missing in medicine, bottom-up solutions have been established at all stages of healthcare delivery, i.e. prevention, treatment, and posthospitalization, instead of a structured system (Marino et al., 2023). This is in part a consequence of a narrow research focus that concentrates mainly on technology design and the single-user perspective (Gama et al., 2022). As a result, fragmented systems and applications exist that are often unable to communicate with each other. This lack of interoperability also causes regional disparities and in general hampers the unlocking of the full potential of MAI technologies. In most countries, a coherent digital strategy is lacking that would allow policymakers to define objectives, measures, and risks, revise existing laws and regulations, disseminate expertise nationwide, and monitor as well as assess the implementation of MAI
(Marino et al., 2023). Digital maturity is key here, signifying the ability to integrate potentially disruptive digital technologies within a healthcare institution in a way that allows the improvement of healthcare services and the enhancement of the patient experience (Duncan et al., 2022; Marino et al., 2023). However, being able to fulfill the requirement of digital maturity is only in part the responsibility of a particular healthcare institution, such as a hospital. Without a coherent implementation strategy on the political level, the result could be isolated solutions and a fragmentation of services.

Another major challenge is the utility of MAI for clinical practice. Since most of the evidence for the benefits of MAI so far has been achieved by retrospective data analysis, it is questionable whether MAI will show the same results in a real-world setting. Hence, more studies are needed for evaluating real-world utility. One focus of such research should be the integration of MAI into the clinical workflow, especially regarding the handling of these systems and the ability to interpret results from data analysis. The latter in particular might be difficult for medical professionals without a data science background, since MAI applications are often black boxes when it comes to data or model interpretation. Connected to this are the scope and limits of MAI-powered data analysis. Integrating data across multiple modalities is, as we have seen, one of the major benefits of MAI, but it has its limits. Although specific tasks, e.g. detecting abnormalities on radiological images, can be done effectively, broader diagnostic tasks that require integrating context (e.g. patient values, background, or history) are difficult. This might become problematic especially in those cases where there is no expert consensus on diagnostic tasks.

Finally, social, economic, and legal challenges have to be overcome for a successful integration of MAI into clinical practice. Given the fact that big data approaches are the key to unlocking MAI's full potential, privacy challenges arise. It is to be expected that data will mostly be stored in clouds, due to its sheer size, but also because this makes it more accessible and easier to exchange. This raises the questions of how data can be protected against theft and misuse and of who has access for what purposes. This is in part a technical issue, but first and foremost a matter of regulation and policy.

Among the possible benefits of MAI is its potential for improving clinical practice and work efficiency. Significant changes in clinical practice are expected to occur due to the expert-level accuracy of MAI and its potential to outperform human experts (Yu et al., 2018). MAI might contribute to reducing human error, thus improving quality of care. Whether MAI will reduce the workload of clinicians, as some suggest (Topol, 2019), is hotly debated. Some speculate that MAI can reduce the workload regarding specific tasks, such as data analysis, but that does not necessarily mean a reduction of workload in general, since closer examination of high-risk patients might be needed. It is expected that by delegating time-consuming and mechanical tasks to MAI, clinicians may have more time for interactions with patients, which would imply a strengthening of the doctor-patient relationship as well as an improvement of quality of care. However, there is also the fear that MAI might replace healthcare professionals, especially those engaged in the abovementioned routine tasks.
MAI could also lead to an additional workload; learning to handle the technology, supervising it, and double-checking its results are examples here. Other issues might arise from an unreliable MAI. One outcome could be alert fatigue, where frequent false alarms triggered by MAI cause a reduced awareness and response by humans. Still other issues might arise from the opposite effect, an overconfidence in the technology, namely automation bias, which means that, when in doubt, the decisions of a computer system are considered more reliable and accurate than a human's. MAI might also disrupt interprofessional communication as well as patient communication, since many forms of exchanging data and expertise will be automated or delivered online. This could become a serious issue especially for the doctor-patient relationship. This relationship might also be affected by issues of liability and responsibility. What happens when a MAI system makes a mistake and informs a practitioner falsely or gives the wrong advice? Who is responsible and liable in this case? Several legal implications might arise here, e.g. matters of negligence and malpractice as well as insurance issues (Yu et al., 2018).

In the light of what we have learned so far about the impact of MAI, it has become clearer why an ethical analysis should focus on how this technology affects the practice of the agents involved, their relationships with other agents, and the environments that shape their practices. Based on these three impact areas, I will explore the ethical implications, benefits, and challenges that arise in connection with MAI. In order to do so, I first have to outline the basic ethics approach I will take. This is the topic of the following chapter.
References

Abd-Alrazaq, A. A., Alajlani, M., Alalwan, A. A., Bewick, B. M., Gardner, P., & Househ, M. (2019). An overview of the features of chatbots in mental health: A scoping review. International Journal of Medical Informatics, 132, 103978. https://doi.org/10.1016/j.ijmedinf.2019.103978 Abdi, J., Al-Hindawi, A., Ng, T., & Vizcaychipi, M. P. (2018). Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open, 8, e018815. https://doi.org/10.1136/bmjopen-2017-018815 Adibuzzaman, M., Delaurentis, P., Hill, J., & Benneyworth, B. D. (2017). Big data in healthcare – The promises, challenges and opportunities from a research perspective: A case study with a model database. American Medical Informatics Association Annual Symposium Proceedings, 2017, 384–392. Agrawal, R., & Prabakaran, S. (2020). Big data in digital healthcare: Lessons learnt and recommendations for general practice. Heredity, 124, 525–534. https://doi.org/10.1038/s41437-020-0303-2 Alafaleq, M. (2023). Robotics and cybersurgery in ophthalmology: A current perspective. Journal of Robotic Surgery, 17(4), 1159–1170. https://doi.org/10.1007/s11701-023-01532-y Alonso, S. G., de la Torre Díez, I., & Zapiraín, B. G. (2019). Predictive, personalized, preventive and participatory (4P) medicine applied to telemedicine and eHealth in the literature. Journal of Medical Systems, 43, 140. https://doi.org/10.1007/s10916-019-1279-4
Alsuliman, T., Humaidan, D., & Sliman, L. (2020). Machine learning and artificial intelligence in the service of medicine: Necessity or potentiality? Current Research in Translational Medicine, 68, 245–251. https://doi.org/10.1016/j.retram.2020.01.002 Archibald, M. M., & Barnard, A. (2018). Futurism in nursing: Technology, robotics and the fundamentals of care. Journal of Clinical Nursing, 27, 2473–2480. https://doi.org/10.1111/ jocn.14081 Ashley, E. A. (2016). Towards precision medicine. Nature Reviews Genetics, 17, 507–522. https:// doi.org/10.1038/nrg.2016.86 Austin, C., & Kusumoto, F. (2016). The application of big data in medicine: Current implications and future directions. Journal of Interventional Cardiac Electrophysiology, 47, 51–59. https:// doi.org/10.1007/s10840-016-0104-y Batko, K., & Ślęzak, A. (2022). The use of big data analytics in healthcare. Journal of Big Data, 9, 3. https://doi.org/10.1186/s40537-021-00553-4 Berner, E. S. (Ed.). (2007). Clinical decision support systems. Theory and practice. Springer. Berrouiguet, S., Perez-Rodriguez, M. M., Larsen, M., Baca-García, E., Courtet, P., & Oquendo, M. (2018). From eHealth to iHealth: Transition to participatory and personalized medicine in mental health. Journal of Medical Internet Research, 20, e2. https://doi.org/10.2196/jmir.7412 Björnsson, B., Borrebaeck, C., Elander, N., Gasslander, T., Gawel, D. R., Gustafsson, M., Jörnsten, R., Lee, E. J., Li, X., Lilja, S., Martínez-Enguita, D., Matussek, A., Sandström, P., Schäfer, S., Stenmarker, M., Sun, X. F., Sysoev, O., Zhang, H., & Benson, M. (2019). Digital twins to personalize medicine. Genome Medicine, 12, 4. https://doi.org/10.1186/s13073-019-0701-3 Bradway, M., Carrion, C., Vallespin, B., Saadatfard, O., Puigdomènech, E., Espallargues, M., & Kotzeva, A. (2017). mHealth assessment: Conceptualization of a global framework. JMIR mHealth and uHealth, 5, e60. https://doi.org/10.2196/mhealth.7291 Brew-Sam, N., & Chib, A. (2020). Theoretical advances in mobile health communication research: An empowerment approach to self-management. In: Kim, J. & Song, H. (eds.), Technology and health. Academic Press, 151–177. Briffault, X., Morgieve, M., & Courtet, P. (2018). From e-Health to i-Health: Prospective reflexions on the use of intelligent systems in mental health care. Brain Sciences, 8, 98. https://doi.org/10. 3390/brainsci8060098 Bycroft, C., Freeman, C., Petkova, D., Band, G., Elliott, L. T., Sharp, K., Motyer, A., Vukcevic, D., Delaneau, O., O’Connell, J., Cortes, A., Welsh, S., Young, A., Effingham, M., Mcvean, G., Leslie, S., Allen, N., Donnelly, P., & Marchini, J. (2018). The UK Biobank resource with deep phenotyping and genomic data. Nature, 562, 203–209. https://doi.org/10.1038/s41586-0180579-z Camacho, D. M., Collins, K. M., Powers, R. K., Costello, J. C., & Collins, J. J. (2018). Nextgeneration machine learning for biological networks. Cell, 173, 1581–1592. https://doi.org/10. 1016/j.cell.2018.05.015 Camarinha-Matos, L. M., Rosas, J., Oliveira, A. I., & Ferrada, F. (2015). Care services ecosystem for ambient assisted living. Enterprise Information Systems, 9, 607–633. Chen, Y., Yang, L., Hu, H., Chen, J., & Shen, B. (2017). How to become a smart patient in the era of precision medicine? Advances in Experimental Medicine and Biology, 1028, 1–16. https://doi. org/10.1007/978-981-10-6041-0_1 Chib, A., & Lin, S. H. (2018). Theoretical advancements in mHealth: A systematic review of mobile apps. Journal of Health Communication, 23, 909–955. 
https://doi.org/10.1080/ 10810730.2018.1544676 Cicirelli, G., Marani, R., Petitti, A., Milella, A., & D’Orazio, T. (2021). Ambient assisted living: A review of technologies, methodologies and future perspectives for healthy aging of population. Sensors [Online], 21, 3549. https://doi.org/10.3390/s21103549 Comendador, B. E. V., Francisco, B. M. B., Medenilla, J. S., Sharleenmae, T. N., & Serac, T. B. E. (2014). Pharmabot: A pediatric generic medicine consultant Chatbot. Journal of Automation and Control Engineering, 3(2), 137–140. https://doi.org/10.12720/joace.3.2.137140
Coorey, G., Figtree, G. A., Fletcher, D. F., & Redfern, J. (2021). The health digital twin: Advancing precision cardiovascular medicine. Nature Reviews. Cardiology, 18, 803–804. Cox, M., & Ellsworth, D. (1997). Application-controlled demand paging for out-of-core visualization. Proceedings. visualization ‘97 (Cat. No. 97CB36155), pp. 235–244. Craddock, M., Crockett, C., Mcwilliam, A., Price, G., Sperrin, M., Van Der Veer, S. N., & FaivreFinn, C. (2022). Evaluation of prognostic and predictive models in the oncology clinic. Clinical Oncology, 34, 102–113. https://doi.org/10.1016/j.clon.2021.11.022 David, L., Popa, S. L., Barsan, M., Muresan, L., Ismaiel, A., Popa, L. C., Perju-Dumbrava, L., Stanculete, M. F., & Dumitrascu, D. L. (2022). Nursing procedures for advanced dementia: Traditional techniques versus autonomous robotic applications (Review). Experimental and Therapeutic Medicine, 23, 124. de la Torre Diez, I., Alonso, S. G., Hamrioui, S., Cruz, E. M., Nozaleda, L. M., & Franco, M. A. (2018). IoT-based services and applications for mental health in the literature. Journal of Medical Systems, 43, 11. Dixit, P., Payal, M., Goyal, N., et al. (2021). Robotics, AI and IoT in medical and healthcare applications. In A. K. Dubey, A. Kumar, S. R. Kumar, et al. (Eds.), AI and IoT-based intelligent automation in robotics. https://doi.org/10.1002/9781119711230.ch4 Du-Harpur, X., Watt, F. M., Luscombe, N. M., & Lynch, M. D. (2020). What is AI? Applications of artificial intelligence to dermatology. British Journal of Dermatology, 183, 423–430. https://doi. org/10.1111/bjd.18880 Duncan, R., Eden, R., Woods, L., Wong, I., & Sullivan, C. (2022). Synthesizing dimensions of digital maturity in hospitals: Systematic review. Journal of Medical Internet Research, 24, e32994. Egorov, E., Pieters, C., Korach-Rechtman, H., Shklover, J., & Schroeder, A. (2021). Robotics, microfluidics, nanotechnology and AI in the synthesis and evaluation of liposomes and polymeric drug delivery systems. Drug Delivery and Translational Research, 11, 345–352. Emani, S., Rui, A., Rocha, H. A. L., Rizvi, R. F., Juaçaba, S. F., Jackson, G. P., & Bates, D. W. (2022). Physicians’ perceptions of and satisfaction with artificial intelligence in cancer treatment: A clinical decision support system experience and implications for low-middle-income countries. JMIR Cancer, 8, e31461. Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., Depristo, M., Chou, K., Cui, C., Corrado, G., Thrun, S., & Dean, J. (2019). A guide to deep learning in healthcare. Nature Medicine, 25, 24–29. https://doi.org/10.1038/s41591-018-0316-z Habl, C., Renner, A.-T., Bobek, J., & Laschkolnig, A. (2016). Study on Big Data in Public Health, Telemedicine and Healthcare: Executive summary. European Commission. Directorate-General for Health and Food Safety. Available at: https://op.europa.eu/en/publication-detail/-/ publication/5db46b33-c67f-11e6-a6db-01aa75ed71a1. Accessed 26 Feb 2024. Feizi, N., Tavakoli, M., Patel, R. V., & Atashzar, S. F. (2021). Robotics and AI for teleoperation, tele-assessment, and tele-training for surgery in the era of COVID-19: Existing challenges, and future vision. Frontiers in Robotics and AI, 8, 610677. https://doi.org/10.3389/frobt.2021. 610677 Fessele, K. L. (2018). The rise of big data in oncology. Seminars in Oncology Nursing, 34, 168–176. https://doi.org/10.1016/j.soncn.2018.03.008 Flores, M., Glusman, G., Brogaard, K., Price, N. D., & Hood, L. (2013). 
P4 medicine: How systems medicine will transform the healthcare sector and society. Personalized Medicine, 10, 565–576. Fuerst, B., Fer, D. M., Hermann, D., et al. (2021). The vision of digital surgery. In S. Atallah (Ed.), Digital surgery. Springer. Fuller, A., Fan, Z., Day, C., & Barlow, C. (2020). Digital twin: Enabling technologies, challenges and open research. IEEE Access, 8, 108952–108971. Gama, F., Tyskbo, D., Nygren, J., Barlow, J., Reed, J., & Svedberg, P. (2022). Implementation frameworks for artificial intelligence translation into health care practice: Scoping review. Journal of Medical Internet Research, 24, e32215.
Garnelo, M., & Shanahan, M. (2019). Reconciling deep learning with symbolic artificial intelligence: Representing objects and relations. Current Opinion in Behavioral Sciences, 29, 17–23. https://doi.org/10.1016/j.cobeha.2018.12.010 Gkouskou, K., Vlastos, I., Karkalousos, P., Chaniotis, D., Sanoudou, D., & Eliopoulos, A. G. (2020). The “virtual digital twins” concept in precision nutrition. Advances in Nutrition, 11, 1405–1413. Guthrie, N. L., Carpenter, J., Edwards, K. L., Appelbaum, K. J., Dey, S., Eisenberg, D. M., Katz, D. L., & Berman, M. A. (2019). Emergence of digital biomarkers to predict and modify treatment efficacy: Machine learning study. BMJ Open, 9, e030710. Gutierrez, L. J., Rabbani, K., Ajayi, O. J., Gebresilassie, S. K., Rafferty, J., Castro, L. A., & Banos, O. (2021). Internet of things for mental health: Open issues in data acquisition, selforganization, service level agreement, and identity management. International Journal of Environmental Research and Public Health, 18, 1327. Habuza, T., Navaz, A. N., Hashim, F., Alnajjar, F., Zaki, N., Serhani, M. A., & Statsenko, Y. (2021). AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine. Informatics in Medicine Unlocked, 24, 100596. Haddadin, S., & Knobbe, D. (2020). Robotics and artificial intelligence: The present and future visions. In: Ebers, M. & Navas, S. (eds.), Algorithms and law. Cambridge University Press, 1–36. https://doi.org/10.1017/9781108347846.002 Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism, 69s, 36–40. https://doi.org/10.1016/j.metabol.2017.01.011 Hood, L., Heath, J. R., Phelps, M. E., & Lin, B. (2004). Systems biology and new technologies enable predictive and preventative medicine. Science, 306, 640–643. Hulsen, T., Jamuar, S. S., Moody, A. R., Karnes, J. H., Varga, O., Hedensted, S., Spreafico, R., Hafler, D. A., & McKinney, E. F. (2019). From big data to precision medicine. Frontiers in Medicine, 6, 34. https://doi.org/10.3389/fmed.2019.00034 Iribarren, S. J., Akande, T. O., Kamp, K. J., Barry, D., Kader, Y. G., & Suelzer, E. (2021). Effectiveness of mobile apps to promote health and manage disease: Systematic review and meta-analysis of randomized controlled trials. JMIR mHealth and uHealth, 9, e21563. https:// doi.org/10.2196/21563 Jo, A., Coronel, B. D., Coakes, C. E., & Mainous, A. G., 3rd. (2019). Is there a benefit to patients using wearable devices such as fitbit or health apps on mobiles? A systematic review. The American Journal of Medicine, 132, 1394–1400.e1. Johnson, K. B., Wei, W. Q., Weeraratne, D., Frisse, M. E., Misulis, K., Rhee, K., Zhao, J., & Snowdon, J. L. (2021). Precision medicine, AI, and the future of personalized health care. Clinical and Translational Science, 14, 86–93. https://doi.org/10.1111/cts.12884 Kamel Boulos, M. N., & Zhang, P. (2021). Digital twins: From personalised medicine to precision public health. Journal of Personalized Medicine, 11, 745. Kaul, V., Enslin, S., & Gross, S. A. (2020). History of artificial intelligence in medicine. Gastrointestinal Endoscopy, 92, 807–812. Kautz, H. A. (2022). The third AI summer: AAAI Robert S. Engelmore Memorial Lecture. AI Magazine, 43, 105–125. Kavasidis, I., Peoietto Salanitri, F., Palazzo, S., et al. (2023). History of AI in clinical medicine. In: Bagci, U., Ahmad, O., Xu, Z. et al. (eds.). AI in clinical medicine. A practical guide for healthcare professionals. Wiley, 39–48. 
https://doi.org/10.1002/9781119790686.ch4 Killock, D. (2020). AI outperforms radiologists in mammographic screening. Nature Reviews Clinical Oncology, 17, 134–134. Kitsiou, S., Vatani, H., Paré, G., Gerber, B. S., Buchholz, S. W., Kansal, M. M., Leigh, J., & Masterson Creber, R. M. (2021). Effectiveness of mobile health technology interventions for patients with heart failure: Systematic review and meta-analysis. The Canadian Journal of Cardiology, 37, 1248–1259.
Klang, E., Levin, M. A., Soffer, S., Zebrowski, A., Glicksberg, B. S., Carr, B. G., McGreevy, J., Reich, D. L., & Freeman, R. (2021). A simple free-text-like method for extracting semistructured data from electronic health records: Exemplified in prediction of in-hospital mortality. Big Data and Cognitive Computing, 5, 40. Krumholz, H. M. (2014). Big data and new knowledge in medicine: The thinking, training, and tools needed for a learning health system. Health Affairs (Millwood), 33, 1163–1170. https://doi. org/10.1377/hlthaff.2014.0053 Kuziemsky, C., Maeder, A. J., John, O., Gogia, S. B., Basu, A., Meher, S., & Ito, M. (2019). Role of artificial intelligence within the telehealth domain. Yearbook of Medical Informatics, 28, 35–40. Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444. https://doi.org/ 10.1038/nature14539 Lim, J. I., Regillo, C. D., Sadda, S. R., Ipp, E., Bhaskaranand, M., Ramachandra, C., & Solanki, K. (2023). Artificial intelligence detection of diabetic retinopathy: Subgroup comparison of the EyeArt system with ophthalmologists’ dilated examinations. Ophthalmology Science, 3, 100228. Liu, C., Liu, X., Wu, F., Xie, M., Feng, Y., & Hu, C. (2018). Using artificial intelligence (Watson for oncology) for treatment recommendations amongst Chinese patients with lung cancer: Feasibility study. Journal of Medical Internet Research, 20, e11087. Liu, M., Fang, S., Dong, H., & Xu, C. (2021). Review of digital twin about concepts, technologies, and industrial applications. Journal of Manufacturing Systems, 58, 346–361. Lupton, D. (2016). The quantified self. A sociology of self-tracking. Polity Press. Maalouf, N., Sidaoui, A., Elhajj, I. H., et al. (2018). Robotics in nursing: A scoping review. Journal of Nursing Scholarship, 50(6), 590–600. https://doi.org/10.1111/jnu.12424 Manickam, P., Mariappan, S. A., Murugesan, S. M., Hansda, S., Kaushik, A., Shinde, R., & Thipperudraswamy, S. P. (2022). Artificial intelligence (AI) and internet of medical things (IoMT) assisted biomedical Systems for Intelligent Healthcare. Biosensors (Basel), 12, 562. https://doi.org/10.3390/bios12080562 Marcilly, R., Colliaux, J., Robert, L., Pelayo, S., Beuscart, J.-B., Rousselière, C., & Décaudin, B. (2023). Improving the usability and usefulness of computerized decision support systems for medication review by clinical pharmacists: A convergent, parallel evaluation. Research in Social and Administrative Pharmacy, 19, 144–154. Marino, D., Carlizzi, D. N., & Falcomatà, V. (2023). Artificial intelligence as a disruption technology to build the harmonic health industry. Procedia Computer Science, 217, 1354–1359. Mayer-Schönberger, V., & Ingelsson, E. (2018). Big data and medicine: A big deal? Journal of Internal Medicine, 283, 418–429. https://doi.org/10.1111/joim.12721 Middleton, B., Sittig, D. F., & Wright, A. (2016). Clinical decision support: A 25 year retrospective and a 25 year vision. Yearbook of Medical Information, Suppl 1, 103–116. https://doi.org/10. 15265/IYS-2016-s034 Mishra, S. (2022). Artificial intelligence: A review of progress and prospects in medicine and healthcare. Journal of Electronics, Electromedical Engineering, and Medical Informatics, 4, 1–23. Mortenson, W. B., Sixsmith, A., & Woolrych, R. (2015). The power(s) of observation: Theoretical perspectives on surveillance technologies and older people. Ageing & Society, 35, 512–530. Ni, L., Lu, C., Liu, N., & Liu, J. (2017). MANDY: Towards a smart primary care chatbot application. 
In: Chen, J., Theeramunkong, T., Supnithi, T., & Tang, X. (eds.). Knowledge and systems sciences, Springer, 38–52. Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press. Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the Future - Big Data, Machine Learning, and Clinical Medicine. The New England Journal of Medicine, 375(13), 1216–1219. https://doi.org/ 10.1056/NEJMp1606181 O’Connor, S. (2021). Exoskeletons in nursing and healthcare: A bionic future. Clinical Nursing Research, 30(8), 1123–1126. https://doi.org/10.1177/10547738211038365 Osheroff, J. A., Teich, J. M., Middleton, B., Steen, E. B., Wright, A., & Detmer, D. E. (2007). A roadmap for national action on clinical decision support. Journal of the American Medical Informatics Association, 14, 141–145.
Ouellette, S., & Rao, B. K. (2022). Usefulness of smartphones in dermatology: A US-based review. International Journal of Environmental Research and Public Health, 19, 3553. Ozmen, M.M., Ozmen, A., & Koç, Ç.K. (2021). Artificial intelligence for next-generation medical robotics. In: Atallah, S. (ed.). Digital surgery. Springer. https://doi.org/10.1007/978-3-03049100-0_3 Pise, A., Yoon, B., & Singh, S. (2023). Enabling ambient intelligence of things (AIoT) healthcare system architectures. Computer Communications, 198, 186–194. Queirós, A., & da Rocha, N. P. (2018). Ambient assisted living: Systematic review. In: Queirós, A. & Rocha, N.P.D. (eds.). Usability, accessibility and ambient assisted living. Springer, 13–47. https://doi.org/10.1007/978-3-319-91226-4_2 Quinn, T. P., Jacobs, S., Senadeera, M., Le, V., & Coghlan, S. (2022). The three ghosts of medical AI: Can the black-box present deliver? Artificial intelligence in medicine, 124, 102158. https://doi.org/10.1016/j.artmed.2021.102158 Riba, M., Sala, C., Toniolo, D., & Tonon, G. (2019). Big Data in Medicine, the Present and Hopefully the Future. Frontiers in medicine, 6, 263. https://doi.org/10.3389/fmed.2019.00263 Ristevski, B., & Chen, M. (2018). Big Data Analytics in Medicine and Healthcare. Journal of integrative bioinformatics, 15(3), 20170030. https://doi.org/10.1515/jib-2017-0030 Robbins, R., Krebs, P., Jagannathan, R., Jean-Louis, G., & Duncan, D. T. (2017). Health app use among US mobile phone users: Analysis of trends by chronic disease status. JMIR mHealth and uHealth, 5, e197. Sakly, H., Ayres, A. S., Ferraciolli, S. F., et al. (2023). Radiology, AI and big data: Challenges and opportunities for medical imaging. In: Sakly, H., Yeom, K., Halabi, S. et al. (eds.). Trends of artificial intelligence and big data for E-health. Springer, 33–55. https://doi.org/10.1007/978-3031-11199-0_3 Sapci, A. H., & Sapci, H. A. (2019). Innovative assisted living tools, remote monitoring technologies, artificial intelligence-driven solutions, and robotic systems for aging societies: Systematic review. JMIR Aging, 2, e15429. Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14, 27–40. https://doi.org/10.1007/s10676-0109234-6 Sim, I. (2019). Mobile devices and health. The New England Journal of Medicine, 381, 956–968. Sim, I., Gorman, P., Greenes, R. A., Haynes, R. B., Kaplan, B., Lehmann, H., & Tang, P. C. (2001). Clinical decision support systems for the practice of evidence-based medicine. Journal of the American Medical Informatics Association, 8, 527–534. Sixsmith, A. (2013). Technology and the challenge of aging. In: Sixsmith, A. & Gutman, G. (eds.). Technologies for active aging. International Perspectives on Aging, vol 9. Springer, 7–25. https://doi.org/10.1007/978-1-4419-8348-0_2Springer Smith, K. E., & Juarascio, A. (2019). From ecological momentary assessment (EMA) to ecological momentary intervention (EMI): Past and future directions for ambulatory assessment and interventions in eating disorders. Current Psychiatry Reports, 21, 53. Somashekhar, S. P. S., Kumar, R., Kumar, A., Patil, P., & Rauthan, A. (2016). 551PD validation study to assess performance of IBM cognitive computing system Watson for oncology with Manipal multidisciplinary tumour board for 1000 consecutive cases: An Indian experience. Annals of Oncology, 27, ix179. https://doi.org/10.1093/annonc/mdw601.002 Steinhubl, S. R., & Topol, E. J. (2018). 
Digital medicine, on its way to being just plain medicine. npj Digital Medicine, 1, 20175. https://doi.org/10.1038/s41746-017-0005-1 Sulis, E., Amantea, I. A., Aldinucci, M., Boella, G., Marinello, R., Grosso, M., Platter, P., & Ambrosini, S. (2022). An ambient assisted living architecture for hospital at home coupled with a process-oriented perspective. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-022-04388-6 Sutton, R. T., Pincock, D., Baumgart, D. C., Sadowski, D. C., Fedorak, R. N., & Kroeker, K. I. (2020). An overview of clinical decision support systems: Benefits, risks, and strategies for success. npj Digital Medicine, 3, 17. https://doi.org/10.1038/s41746-020-0221-y
References
53
Topol, E. (2015). The patient will see you now: The future of medicine is in your hands. Basic Books. Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books. Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Canadian Journal of Psychiatry, 64, 456–464. Van Genugten, C. R., Schuurmans, J., Lamers, F., Riese, H., Penninx, B. W., Schoevers, R. A., Riper, H. M., & Smit, J. H. (2020). Experienced burden of and adherence to smartphone-based ecological momentary assessment in persons with affective disorders. Journal of Clinical Medicine, 9, 322. https://doi.org/10.3390/jcm9020322 Von Haxthausen, F., Böttger, S., Wulff, D., Hagenah, J., García-Vázquez, V., & Ipsen, S. (2021). Medical robotics for ultrasound imaging: Current systems and future trends. Current Robotics Reports, 2, 55–71. https://doi.org/10.1007/s43154-020-00037-y Wada, K., Shibata, T., Saito, T., & Tanie, K. (2004). Effects of robot-assisted activity for elderly people and nurses at a day service center. Proceedings of the IEEE, 92, 1780–1788. Wang, J., Deng, H., Liu, B., Hu, A., Liang, J., Fan, L., Zheng, X., Wang, T., & Lei, J. (2020). Systematic evaluation of research Progress on natural language processing in medicine over the past 20 years: Bibliometric study on PubMed. Journal of Medical Internet Research, 22, e16816. Wang, L., Chen, X., Zhang, L., Li, L., Huang, Y., Sun, Y., & Yuan, X. (2023). Artificial intelligence in clinical decision support systems for oncology. International Journal of Medical Sciences, 20, 79–86. Weinstein, R. S., Lopez, A. M., Joseph, B. A., Erps, K. A., Holcomb, M., Barker, G. P., & Krupinski, E. A. (2014). Telemedicine, telehealth, and mobile health applications that work: Opportunities and barriers. The American Journal of Medicine, 127, 183–187. https://doi.org/ 10.1016/j.amjmed.2013.09.032 Weinstein, R. S., Krupinski, E. A., & Doarn, C. R. (2018). Clinical examination component of telemedicine, telehealth, mHealth, and connected health medical practices. Medical Clinics of North America, 102, 533–544. Weissglass, D. E. (2022). Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics, 36, 201–209. https://doi.org/10. 1111/bioe.12927 Weizenbaum, J. (1966). ELIZA – A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9, 36–45. https://doi.org/10. 1145/365153.365168 Weston, A. D., & Hood, L. (2004). Systems biology, proteomics, and the future of health care: Toward predictive, preventative, and personalized medicine. Journal of Proteome Research, 3, 179–196. World Health Organization (WHO). (2022). mHealth: New horizons for health through mobile technologies (Global Observatory for eHealth series) (Vol. 3). Available at: https://iris.who.int/ bitstream/handle/10665/44607/9789241564250_eng.pdf?sequence=1. Accessed 26 Feb 2024. Wright, L., & Davidson, S. (2020). How to tell the difference between a model and a digital twin. Advanced Modeling and Simulation in Engineering Sciences, 7. https://doi.org/10.1186/ s40323-020-00147-4 Yu, K. H., Beam, A. L., & Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2, 719–731. https://doi.org/10.1038/s41551-018-0305-z Yue, L., & Yang, L. (2017). 
Clinical experience with IBM Watson for oncology (WFO) for multiple types of cancer patients in China. Annals of Oncology, 28, x162. https://doi.org/10.1093/ annonc/mdx676.024
Chapter 4
Ethical Foundations: Medical Ethics and Data Ethics
Abstract  This chapter outlines basic concepts in medical ethics and data ethics. It aims to provide orientation regarding the complex issues, approaches, and concepts within these fields. I introduce the epistemic lenses of my conceptual approach to the ethics of MAI. This approach mainly rests on critical theory and critical data studies. I also introduce a framework for my further ethical analysis in Part II. This framework may also serve as a compass for the reader to navigate the following chapters.

Keywords  Autonomy · Confidentiality · Digital positivism · Critical data studies · Empathy · Equity · Privacy · Trust

In this chapter, I aim to outline some fundamentals of medical ethics and data ethics. Given the scope of this book, this can only be a very limited sketch; I cannot explore both fields in detail. As I explained in Sect. 1.4, the philosophical dimension of both medical ethics and data ethics, i.e. ontology, epistemology, philosophy of medicine, and philosophy of technology, would require an in-depth analysis, which I am unable to conduct here. I will refer to concepts from these areas wherever necessary, but without going into too much detail. There are excellent scholarly works on these topics to which I will refer the reader occasionally.

I also introduce my conceptual approach to the ethics of MAI in this chapter. As the reader will notice, I do not advocate a particular ethical theory or school of thought, since my goal is to give a broad account of ethical aspects connected to MAI. I do not want to limit the perspective of the analysis to one single theory. However, as outlined in Sect. 1.4, I will use epistemic lenses from critical theory, especially critical data studies, for my analysis. In Sect. 4.4 below, I introduce a framework for my analysis based on these epistemic lenses. Rather than a unified theoretical approach, this framework is the crucial instrument for the subsequent ethical analysis. At the same time, it serves as a compass for the reader to navigate the many facets of the ethical analysis in Chaps. 5, 6 and 7.
4.1 Medical Ethics: Definitions
Ethics can be defined as the systematic theoretical reflection on moral beliefs, rules, and judgements (Frankena, 1973). That means that ethics is about justifying decisions as well as actions based on rationality. What separates ethics from morals is that the latter consists of rules and concepts that have developed in a certain historical and cultural context and are accepted as conventions and traditional values. Ethics, on the other hand, requires a critical reflection of normative assumptions, judgements, and values. It may refer to established moral concepts such as values or principles or use common moral assumptions as a starting point, but applies analytic methods to reconstruct, justify, or reject them. Metaethics analyzes the concepts, arguments, and terms that are used in ethical theories. Normative ethics deals with principles of and judgments about what is morally right or wrong (Frankena, 1973).

Although ethics is historically a philosophical and theological discipline, it has been applied to several other fields that deal with human practices, such as law (legal ethics), politics (political ethics), or medicine and the life sciences (bioethics). Hence, one speaks of applied ethics as that branch of normative ethics that deals with specific fields of action. Applied ethics defines action guidance for dealing with a moral issue in a given field, which implies reasonable normative justification for a particular decision (Chadwick & Schüklenk, 2020). By trying to establish guiding principles for decision-making and acting in real-world settings, applied ethics goes beyond purely academic reasoning.

Bioethics is a branch of applied ethics that applies frameworks and tools of ethical analysis to issues in biomedicine and the life sciences (Chadwick & Schüklenk, 2020). Bioethics can be divided into medical ethics, dealing with ethical issues in medicine and healthcare; environmental ethics, focusing on the effects of human action on nature; and animal ethics, which centers on the relation between human beings and animals (Potter, 1988). Some also use bioethics simply as short for biomedical ethics, which encompasses both clinical practice and biomedical research (Reich, 1994).

My approach in this book is that of medical ethics, since I focus on the ethical implications of the MAI-triggered transformation of clinical practice and public health action. I aim to analyze the ethical impact of MAI on the way agents in the field act (practices), how they behave, interact, and form relations (relationships), and the basic structure within which they interact (environments). Hence, biomedical research and research ethics are not topics I address in particular. However, since MAI technologies and big data analytics of course include research aspects, I will address issues linked to biomedical research sporadically.
4.2 Medical Ethics: Topics
Typically, an introduction to medical ethics starts with a list of ethical theories or schools of thought that historically have been of importance, mostly virtue ethics, utilitarian ethics, deontology, feminist ethics, and principlism (Ashcroft et al., 2007;
Chadwick & Schüklenk, 2020; deGrazia et al., 2011; Glannon, 2005; Jones & DeMarco, 2016; Vaughn, 2022). Some authors focus more on the methods for analyzing and/or justifying clinical decision-making and advocate the approach of casuistry (Brody, 1988; Jonsen & Toulmin, 1988). Still others expand the spectrum of methods from philosophical ethics to gender ethics, communitarianism, and discourse ethics, and include qualitative research approaches from sociology and ethnography, as well as experimental methods and approaches from economics and decision science (Sugarman & Sulmasy, 2010).

My approach is to focus on the crucial issues usually addressed in medical ethics, without limiting my analysis to a specific school of thought. I basically agree with Veatch (1997) that medical ethics is primarily about choices and decisions by the various actors involved, such as medical and healthcare professionals as well as patients, relatives, and policy makers. The task of medical ethics is to critically analyze reasons or formulate evaluative judgements for making decisions (Dunn & Hope, 2018; Veatch, 1997). However, I will not focus exclusively on the doctor-patient dyad, as is the case with most approaches that center around choices and decision-making (Beauchamp & Childress, 2019; Brody, 1988; Jonsen et al., 2022; Vaughn, 2022; Veatch, 1997). I call the ethical analysis of decision-making in concrete clinical situations clinical ethics, which in my understanding is a part of medical ethics (Siegler et al., 1990). My medical ethics approach goes beyond clinical ethics in that it also reflects on the structures and relationships that shape decision-making.

This can be made clear by looking at what I call the standard model of medico-ethical analysis. Childress (1997) defines four standpoints for assessing human action: agents, acts, ends, and consequences. Ethical theories like virtue ethics, deontology, etc. usually focus on one of these aspects for analyzing whether a decision or action is morally right or wrong. In my view, this approach is insufficient to understand the full implications of decisions or actions, since it solely focusses on the doctor-patient dyad and the concrete situation in which a decision is made. It ignores those factors that shape the patient-doctor encounter by pre-forming and defining the set of available options for decision-making and resources for action.

Therefore, I will follow a contextualist approach that situates moral decision-making and moral action in institutional, social, and cultural backgrounds (Hoffmaster, 1994). MacIntyre (2007) claims that in order to understand ethical concepts, one has to look at their social embodiment. Although I do not agree with MacIntyre's approach as such, I agree with the basic assumption that concepts like autonomy, trust, or justice have to be contextualized with the concrete social practices, relationships, and environments that shape them (MacIntyre, 2007). To understand what autonomy, for example, means in a MAI setting, one has to analyze how the technology transforms all three areas.

I understand the advent of MAI as a transformation process in medicine and healthcare. MAI will transform the way we do things, the conditions under which we do them, and how we interact with each other, as well as our roles and self-images. Hence, I identify three areas of impact of MAI: practices, relationships, and environments. My aim is to analyze the ethical implications connected to the impact
of MAI in each of these areas. This requires first establishing several ethical concepts for orientation. The purpose is to give the reader a very short overview of those ethical issues, principles, and conflicts that can be seen as constants in medicine and healthcare, regardless of the ethical theory or school of thought one subscribes to.
4.2.1 Autonomy
Autonomy has become more than one ethical principle among others; it can rather be seen as a paradigm in modern medicine. The focus on autonomy is the result of an attitude shift in medicine in the twentieth century. Usually, this shift is interpreted as the transformation from paternalistic medicine, which centers around the responsibility of physicians for their patients' health and well-being, to a medicine where the will of the patient is paramount. Although this is too simplistic an account and paternalism still exists in various forms, it is important to note that the will of the patient is the crucial factor when it comes to the permissibility and legitimacy of medical actions. The principle of patient autonomy basically implies that no medical action may be taken against the explicit will of a competent person.

Informed consent safeguards the will of the patient against any violation of this principle. It implies that before taking any action, e.g. performing a diagnostic test or surgical intervention, physicians have to inform the patient about the nature, meaning, and consequences of said action, its benefits and risks, and also about possible alternatives. Informed consent thus requires thoroughly informing the patient, ideally in the form of a dialogue where the patient has the opportunity to ask questions and the physician can make sure that the patient understands the provided information. This information has to be evidence-based and communicated in a manner that allows laypersons without any medical background knowledge to comprehend it. Based on the information thus provided, the patient should be able to make a sound decision about consenting to or refusing the medical action suggested by the physician. In cases where the patient is not competent, meaning unable to give consent due to a temporary or permanent loss of cognitive abilities, surrogate decision-making comes into play. Such a situation might arise when the patient is unconscious or has a cognitive impairment. Surrogate decision-making implies measures such as the living will, where patients record their wishes and preferences for medical treatment while competent and designate persons who are responsible for helping physicians realize their will.

Although informed consent is the basic prerequisite of patient autonomy, it should be understood as a formal principle that allows autonomy to be operationalized in clinical practice. This formal principle is insufficient to understand the material aspect, the meaning and conceptual underpinnings of autonomy. Investigating the material aspect first requires distinguishing between autonomy as competence and autonomy as right. Beauchamp and Childress (2019) make a similar distinction when they define liberty and agency as preconditions for
autonomy. Liberty signifies the independence from external control and coercion, whereas agency refers to the capability of intentional action. Autonomy as competence refers to the cognitive ability to make rational choices and decisions. Similar to Beauchamp and Childress' concept of autonomy as agency, it entails the capability of reasoning based on information, weighing arguments, and making a logically sound decision. In the medical context, being able to make rational, self-determined decisions basically constitutes patient autonomy.

This view has been contested, since it rests on several philosophical premises that are open to question. It is therefore no surprise that there is an ongoing debate about whether the term autonomy is fitting here. In the philosophical context, autonomy means self-rule, the capacity to follow rules one has accepted after critically examining them. This is at least the notion of autonomy most prominent in ethics, introduced by Immanuel Kant (Kant, 2015, 2017). Although Kant's concept of autonomy is not about individual choices and decisions, but the morality of actions in accordance with a universal law, it is a crucial point of reference in bioethics (Secker, 1999). The important aspect is the capability of self-determined decision-making and action by means of rational deliberation.

Many commentators have argued that this understanding of autonomy in medicine is too limited due to its focus on processing information for decision-making (Arrieta Valero, 2019). Such a mentalistic approach ignores the affects, values, and desires that also have to be considered as resources of decision-making (Meyers, 2005). Furthermore, the understanding of autonomy as rational decision-making in a concrete situation is mostly suitable for acute care, whereas other medical contexts such as chronic disease management or primary care require a different approach (Anderson, 2014; Arrieta Valero, 2019). Here, autonomy is not limited to a single decision, but entails a coherent set of capabilities, relationships, and identities. It is also important to note that in these contexts, autonomy might not be a stable condition, but a dynamic process, something that can diminish or disappear as a result of illness (Arrieta Valero, 2019).

Another criticism has been raised against the individualistic view this notion of autonomy implies. Following this critical view, reasoning and decision-making never happen in a vacuum, but are situated in and shaped by social relations (Agich, 2007). That means that the broader social context in which a person acts and by which she is shaped also defines the scope and limits of her autonomy (Mackenzie, 2021). Thus, social relations may enable or hamper the ability of individuals to make decisions (Entwistle et al., 2010). Several commentators have referred to this view as relational autonomy (Mackenzie, 2021; Walter & Ross, 2014; Entwistle et al., 2010). Among the various relations patients may be embedded in, the doctor-patient relationship is of special importance. What I aim to show in the following chapters is that the approach of relational autonomy enables us to analyze all three impact areas of MAI better than the individualistic view.

Autonomy as right means that self-determined decisions and actions are protected by legal regulations and codified in law. The principle of autonomy constitutes both a negative and a positive right (Childress, 1990; Cohen, 2000).
In most countries, it is a basic right to agree to or refuse a medical action of one's own free will. It secures the
freedom and especially the bodily integrity of a person against external intervention. Autonomy as a negative right is mostly uncontested, although certain complications and details may vary from case to case. Autonomy as a positive right, however, is a more difficult concept (Childress, 1990). A positive right constitutes an entitlement, e.g. to choose a certain medical procedure or to use a specific health service. The scope and limits of such entitlements are often unclear and vary between countries and health systems. Autonomy as a positive right is therefore a matter of ongoing negotiations and policy-making.

Given these different dimensions of autonomy, it becomes clear that informed consent, although an indispensable principle, is not synonymous with patient autonomy. Informed consent is the minimal requirement when it comes to autonomy as competence and the formal codification of autonomy as a right. Furthermore, autonomy cannot be reduced to situational decision-making. It has to be understood as a complex web of rational choices, values, beliefs, and self-images that is shaped by social practices, structures, and relations. That said, it is important to note that for an individual to decide and act autonomously, more is needed than the cognitive ability and the absence of coercion. The overall situation of an individual has to be taken into account in order to assess the implications for their autonomy, including the social practices and relations that shape the information and decision-making processes as well as the social determinants that shape the individual themselves.

Accounting for the different social contexts and relations in which an individual can be embedded is also crucial because we are dealing with a twofold identity in the context of MAI: agents in this context may be patients and data subjects at the same time. It will be especially important to outline what each of these identities implies in a MAI-enhanced setting and whether conflicts may arise regarding the rights, goals, needs, and resources connected to each identity. Since the implementation of MAI means that agents other than healthcare professionals, such as tech corporations, might be involved in the treatment process, another social relation might be introduced. MAI will also reshape the environments individuals live and interact in, transforming the basic structure that shapes decision-making and social action, and will thus affect autonomy. Therefore, I will follow a context-sensitive and relational approach that considers the various practices, relationships, and structural conditions that shape the enactment of autonomy.
4.2.2 Therapeutic Relationship
Historically, the focal point of ethical thinking in medicine has been the relationship between doctor and patient, which I will refer to as the therapeutic relationship. This dyadic understanding focusses on the interpersonal relations of doctors and patients, each with their specific roles, rights, obligations, values, and desires. Several models of the therapeutic relationship have been discussed throughout the last decades, most prominently the four types introduced by Emanuel and Emanuel (1992). The crucial
feature of each model is its particular interpretation of patient autonomy, which suggests that the nature and level of patient autonomy may serve as an evaluative factor for assessing the ethical implications of the patient-doctor relationship.

In the paternalistic model, the health and well-being of patients are paramount. Doctors, who possess medical knowledge and skills, are obliged to act in the best interest of their patients. They are also the ones who decide what this best interest is, based on their expertise. The patient's role in this model is to assent to the doctor's decision, with little to no room for patient autonomy.

According to the informative model, doctors are obliged to provide patients with all relevant information for deciding on a medical procedure. The patient selects a procedure, which the doctor then executes. In this model, doctors are technical experts offering their services while patients may select what best suits them, which is why this is also referred to as the consumer model. Autonomy here means that the patient has control over decision-making.

The interpretive model goes beyond informing the patient based on medical facts. Rather, the physician aims to assist the patient in reflecting on their values and articulating them. This includes a narrative account of the patient's goals, aspirations, and overall life situation, without any passing of judgement on the physician's behalf. The physician then suggests the treatment option most suitable for achieving the patient's goals and realizing their values. In this model, the physician is not just a technical expert, but fulfills an advisory role, which the authors compare to that of a cabinet minister. The conception of autonomy in this model goes beyond mere control over decision-making and includes a kind of self-understanding, where the patient reflects on the effects of a specific treatment in terms of goals and values.

In the deliberative model, the physician supports the patient in choosing the best health-related values, i.e. those that are affected by the disease or the possible treatment. Here, the physician takes the role of a teacher, even a friend, who supports the patient's self-development by engaging in dialogue and moral deliberation. Autonomy in this context means an empowerment of the patient, enabling them to reflect on their health-related values and preferences and to choose the best treatment options accordingly.

The authors do not claim completeness and even argue that a fifth model could be defined, which they call the instrumental model. This model implies that physicians may strive to fulfill goals that are not in the patient's interest, such as scientific insights or societal objectives. Furthermore, Emanuel and Emanuel clarify that the suggested models have to be interpreted as Weberian ideal types, i.e. a typology that outlines the most common characteristics of a phenomenon without claiming an absolute or complete description of all possible cases.

As always with models and typologies that try to categorize highly complex phenomena, the four ideal types of the doctor-patient relationship introduced by Emanuel and Emanuel sparked an intense debate that is still ongoing (Agarwal & Murinson, 2012; Borza et al., 2015; Ben-Moshe, 2023). For the purposes of this book, the four types, and as we will see also the fifth, may serve as a useful compass for navigating the impact of MAI on doctor-patient interaction and relations.
The focus shift towards patient-centered medicine is one of the reasons that the deliberative model has become the dominant one (Kaba & Sooriakumaran, 2007; Mead & Bower, 2000). In patient-centered medicine, the patient's status is that of a person rather than an agglomeration of medical facts. The concept of health underlying this idea is the biopsychosocial model, according to which health is defined by the complex interaction of somatic, psychological, and social factors (Engel, 1977). That means that a person's health cannot be reduced to somatic or clinical aspects alone. The model rests on a view of the patient as a person who is defined by many non-medical determinants, such as personality or socio-economic status, which all may shape the patient's individual experience of illness. Therefore, the patient narrative is crucial in a patient-centered clinical encounter.

A further important development regarding the therapeutic relationship is the concept of shared decision-making (Charles et al., 1997; Frosch & Kaplan, 1999; van der Horst et al., 2023). This approach implies that health-related decisions should be the result of a deliberation process including patients and healthcare providers, based on the best available evidence, values, and patient perspectives. The decision about which path to take, e.g. which treatment option to choose, takes the form of a mutual agreement, in which patient and doctor also define their responsibilities.

In contemporary medicine, the therapeutic relationship is built around the deliberative model, based on the paradigm of patient autonomy, and guided by the ideal of shared decision-making. However, what all of these concepts actually mean is not categorically defined outside the concrete clinical situation. The therapeutic relationship, the modus of decision-making, and the realization of patient autonomy are enacted in each case and depend on the particular characteristics of the situation, such as the nature of the disease, the personal aspects of the patient, and the institutional context (Thomas et al., 2020). Hence the crucial relevance of the therapeutic relationship: autonomy is not a pre-defined, given entity, but a dynamic process that patients have to enact in each particular context. The therapeutic relationship is the enabler of patient autonomy. By listening to the patient's narrative, taking their individual illness experience seriously, respecting the patient's values and goals, and including them in the shared decision-making process, doctors enable patients to decide and act autonomously (Entwistle et al., 2010).
4.2.3 Trust and Empathy
Trust is a cornerstone of the therapeutic relationship (Tegegne et al., 2022; Hall et al., 2001, 2002; Mechanic, 1998; Iott et al., 2019; Beltran-Aroca et al., 2016). Trust can be defined as the belief that others will act in one's own interest (Mechanic, 1998). It can be divided into two forms: interpersonal trust, which addresses individuals, and social trust, whose object is social institutions (Cohen, 2000). In the medical context, interpersonal trust shapes the therapeutic relationship, while social trust affects the attitude of patients towards the healthcare system as a whole. Social
trust and interpersonal trust are usually mutually supportive, meaning that a patient who generally trusts the healthcare system will be likely to trust an individual healthcare professional and vice versa.

In medicine and healthcare, trust can be understood as both an intrinsic and an instrumental value (Hall et al., 2001). As an intrinsic value, it gives substance and meaning to the therapeutic relationship. As an instrumental value, trust is a crucial enabler of a successful treatment. Several aspects of patient behavior are affected by the level of trust in the physician or the medical institution in which the treatment takes place, such as adherence to the treatment regimen and disclosure of relevant information. Trust may also reduce fears and concerns on the part of patients as well as help them to overcome uncertainty and deal with risks (Wolfensberger & Wrigley, 2019).

Hence, trust can be interpreted as a mechanism for reducing complexity (Starke et al., 2022), which is especially important in the medical context, since decision-making by patients often implies dealing with complex information and situations. Patients usually do not possess the knowledge and skills to interpret the medical facts for making a decision. Trust reduces the complexity of the information in that patients rely on the doctor's suggestions, e.g. a treatment option, without fully comprehending all clinical details. In a sense, trust is therefore also an enabler of patient autonomy, since it allows patients to commit to something they cannot be sure of, for example a specific surgical procedure or a treatment plan (O'Neill, 2002). In this understanding, trust is something that patients have to invest when entering the medical domain.

Trusting healthcare professionals and medical institutions thus seems to require a leap of faith on the part of patients. But what is this faith grounded in? In other words, how and why can trust be earned? This is the basic question of trustworthiness (Goold, 2002; O'Neill, 2002; McCullough et al., 2020). It arises from the particular asymmetry of the clinical encounter, where an individual in need of help but without any expertise is faced with an expert who can provide help (O'Neill, 2002). The individual seeking help but lacking expertise, i.e. the patient, is not able to judge the expertise or the intentions of the expert, i.e. the doctor. Hence, the patient needs to place trust in the medical institutions that vouch for the doctor's expertise. The patient, of course, also needs to place trust in the doctor and believe that they will act in their best interest.

Doctors can take active steps to earn trust and improve their trustworthiness, first and foremost through communication in the form of an open dialogue (Hendren & Kumagai, 2019). This implies disclosing all relevant information on the patient's health situation and the treatment process. Major enablers of trust are therefore the personal integrity of doctors, their expertise and competence, and the quality of the information provided to patients. These are elements of cognitive trust, where patients have confidence in doctors based on rational aspects (Gilson, 2003). Affective trust focusses on the interpersonal relationship: patients can trust doctors when they are able to form an emotional bond that relies on mutual recognition and respect (Gilson, 2003). Hence, empathy plays an important role when it comes to affective trust.
One can distinguish between cognitive empathy and affective empathy (Hojat et al., 2002). Whereas the former is the ability to rationally understand the emotional motivations of a person, the latter includes the ability to share these feelings, in other words, compassion. The medical mainstream long regarded cognitive empathy as a core competency of doctors, giving them the opportunity to better understand patients while remaining objective. Based on cognitive empathy, the principle of detached concern states that doctors should aim to understand a patient's emotions and motivations without getting emotionally involved (Halpern, 2003). This should allow doctors to keep the necessary distance for making objective and rational decisions. However, many commentators have criticized this approach, stating that understanding the emotional motivations and the perspective of patients alone is insufficient for an adequate treatment (Guidi & Traversa, 2021). Doctors should also be able to respond to the emotional states of patients accordingly, meaning on an emotional level, which cannot be achieved by detached concern (Hojat et al., 2023).

Clinical empathy was introduced as an alternative that comprises both understanding the emotional state of patients and the ability to respond to it. Clinical empathy is a complex concept that tries to combine cognitive empathy with a limited amount of affective empathy. This way, doctors should be able to understand the emotional motivations of patients, integrate them into the decision-making process, and communicate this understanding while at the same time remaining objective and rational. Clinical empathy thus involves strategies for regulating and limiting emotional involvement in terms of affective empathy without excluding it altogether. This is not only a matter of objectivity, but also of self-preservation. There is an abundance of empirical evidence for the negative consequences of too much affective empathy on the part of physicians, such as burnout and stress (Lampert et al., 2019).

To sum up, one could say that trust is the crucial enabler of a successful therapeutic relationship as well as of patient autonomy. Personal trust, the trust in the expertise and integrity of doctors, is of utmost importance and relies on cognitive as well as affective trust. Cognitive trust requires expertise on the part of doctors as well as the disclosure of relevant information. This also includes communicative skills, first and foremost the ability to explain medical facts in a way that is comprehensible to medical laypersons. Affective trust relies on an emotional bond between doctors and patients that requires mutual recognition and respect. Hence, empathy is a major factor in this relationship. Empathy is a highly complex skill that requires a balance between detachment and involvement, between cognitive and affective empathy. It requires doctors to develop skills that allow them to regulate their emotions in order to remain objective and protect themselves, while at the same time being able to understand patients and respond to them on an emotional level. In the context of MAI, the transparency of information and decision-making as well as the impact of artificial agents on the therapeutic relationship in terms of empathy will be major topics in the ethical analysis.
4.2.4 Confidentiality and Privacy
Many authors describe confidentiality as a major element of the clinical encounter (Beltran-Aroca et al., 2016; Kottow, 1986; O'Brien & Chantler, 2003; Thompson, 1979). Protecting the sensitive information of patients is both a moral duty and a legal obligation of doctors (O'Brien & Chantler, 2003). This lies in the very nature of health-related information as sensitive data. On the one hand, this data may reveal the health situation of an individual and thus make illnesses, impairments, or other features that the individual does not want known visible to third parties. On the other hand, it may include additional information on the lifestyle, behavior, and overall life situation of the individual, which, when it becomes known, might harm the individual's reputation (Beltran-Aroca et al., 2016). Disclosing this information might affect their career chances, social status, and social relations. One of the crucial aspects in this regard is the stigma attached to illness. Some types of illness are particularly prone to stigma, first and foremost mental illness. Confidentiality is thus a safeguard that protects patients from stigma and other unwanted consequences of disclosing their health status.

Although confidentiality is widely accepted as a main ethical principle in medicine, a debate on its status and relevance has been going on for decades. Even before the digital turn, critics claimed that the conventional understanding of confidentiality is an ancient concept, dating back to Hippocratic times, which does not reflect the contemporary situation in medicine and is therefore obsolete (Siegler, 1982). The reason is that in modern medicine, the doctor-patient dyad is not the sole factor when it comes to dealing with patient data. Clinical practice nowadays is mostly team work, involving dozens of different agents (Siegler estimates up to a hundred) in the treatment of a patient (Siegler, 1982). Thus, disclosing information to others outside the doctor-patient dyad is inevitable. It lies in the patient's best interest to disclose their health-related information to a wider range of healthcare professionals and other actors in order to receive the best possible treatment.

With the advance of digital technologies in medical data collection and analysis, this fundamental problem has been aggravated. The reason is the nature of digitized data, which Moor aptly described as "greased to slide easily and quickly to many ports of call" (Moor, 1997, p. 27). That means that easy access to data and the speed of data exchange, although convenient, also pose the risk of unwanted exposure of information. In addition to easy retrieval, digitized data can also be easily stored and reused. Finally, digitized data can be retrieved, exchanged, and manipulated without the data subject's knowledge.

Hence, we are faced with three issues of data privacy (Tucker, 2019). First, data persistence signifies the possibility that data, once generated and collected, may persist for a potentially indefinite amount of time. A contributing factor in this regard is low storage costs. Second, data repurposing implies that, once created, data can be reused for various purposes that may not be foreseen at the time of data creation. Data generated for one specific purpose might become relevant for a totally different purpose in the future. This purpose might not be in the best interest of the data
subject. Third, data spillovers might occur, which means that additional information in the data that was recorded accidentally might later become valuable. Data spillovers also occur when information that is not directly disclosed in the data is identifiable by proxy (a short illustrative sketch follows below).

An interesting take on the matter of trust in the context of data privacy has been introduced by Nickel (2019). He argues that what is essential to data subjects' trust, and especially to its absence, are the uncertainties data subjects are confronted with. The process of data collection and processing often creates unknowns regarding the scope and limits of data collection, the process of data analysis, and the persistence of data. In most situations, data subjects possess very little knowledge about these aspects. They do not know to what extent their data is collected, who has access for what purposes, what the mechanisms of analyzing the data are, and how long their data will be available. Based on Nickel's approach, one could argue that the uncertainty about confidentiality in a digital setting is an important cause of mistrust on the part of data subjects.

We are dealing with a paradox here: on the one hand, confidentiality is a safeguard that allows patients to disclose information. The fact that patients know that the principle of confidentiality protects them in the clinical encounter allows them to provide health-related information. Confidentiality thus also enables trust (Tegegne et al., 2022; Hall et al., 2001, 2002; Mechanic, 1998; Iott et al., 2019; Beltran-Aroca et al., 2016). On the other hand, the modern clinical workflow requires access to patient information on a large scale. Hence, there is a crucial trade-off between the patients' wish for best practice and their wish for confidentiality (Anesi, 2012). Although confidentiality does not become obsolete in the digital era, it surely requires a reconceptualization.

To understand the role and relevance of confidentiality in the digital era, one has to frame it as a matter of privacy. Privacy is a difficult concept to handle, since it is vague, ambivalent, and multidimensional. Several typologies and categorizations of privacy have been suggested (Koops et al., 2017; Margulis, 2011), which I will not discuss in detail here. I focus on a few aspects of the phenomenon that are relevant in the context of MAI. Philosophical theories of privacy have highlighted two interpretations of privacy as a value (Moor, 1997). Privacy can be an instrumental value that safeguards or enables other values. One could argue, for example, that privacy is essential for forming and maintaining social relationships (Rachels, 1975). The basic idea here is that the ability to control who has access to information about us allows us to interact with others in our different roles, e.g. as family members, friends, or colleagues at work. In each of these roles, we disclose different kinds of information to different kinds of people and to a different extent. As an intrinsic value, privacy can be understood as an expression of security (Moor, 1997). In a society where data is easily available, being protected against any mischief can be considered a value in itself. The need for protection against unwanted access and exposure of information is especially important when it comes to health-related data. As we have seen, exposure of this sensitive data can have severe consequences for an individual in terms of social relationships or economic position.
Hence, individuals appreciate the value of privacy as such.
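To make the notion of a data spillover more concrete, the following sketch illustrates identification by proxy. It is purely hypothetical: the data, field names, and matching rule are invented for illustration, but the underlying mechanism, linking quasi-identifiers such as postal code, birth year, and sex across datasets, is a well-documented re-identification risk.

```python
# A purely illustrative sketch (hypothetical data and field names) of a
# data spillover by proxy: a dataset released without names can still
# reveal health information once quasi-identifiers are linked to an
# external, identified dataset.

# "Anonymized" clinical records: names removed, quasi-identifiers kept.
clinical_records = [
    {"zip": "1090", "birth_year": 1958, "sex": "F", "diagnosis": "depression"},
    {"zip": "3500", "birth_year": 1974, "sex": "M", "diagnosis": "diabetes"},
]

# A public, identified dataset, e.g. a voter or customer list.
public_list = [
    {"name": "A. Example", "zip": "1090", "birth_year": 1958, "sex": "F"},
]

def link_by_proxy(records, identified):
    """Match health records to named persons via shared quasi-identifiers."""
    key = lambda d: (d["zip"], d["birth_year"], d["sex"])
    named = {key(p): p["name"] for p in identified}
    return [(named[key(r)], r["diagnosis"]) for r in records if key(r) in named]

print(link_by_proxy(clinical_records, public_list))
# [('A. Example', 'depression')]: the diagnosis "spills over" to a named person.
```

The point of the sketch is that no single field in the clinical records is a direct identifier; the sensitive information becomes attributable to a person only through the combination of seemingly harmless attributes.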
Other approaches interpret privacy in the light of autonomy (Johnson, 1994; Roessler, 2004). Since autonomy is of special relevance in the medical context, it makes sense to use an autonomy-focused concept of privacy for analyzing the impact of MAI. A widely adopted concept of this kind was introduced by Roessler (2004, 2018). She distinguishes local, informational, and decisional privacy. Local privacy refers to a personal space that is protected against the view and access of others. The individual inhabiting this space has control over what is disclosed to outsiders, who can have access, and how this information is presented. Informational privacy signifies the control over what others may know about us. To Roessler, this is a constitutive factor of autonomy: if we cannot control who has access to our data, our expectations about the knowledge others may have about us are impeded, which disrupts our social interactions. Hence the impact on self-determination. Decisional privacy enables an individual to make decisions about their own life. This may also include decisions about whom to interact with or whom to include in the decision-making process.

We can therefore say that notions of privacy mostly center around control, be it control over the access to one's own data or control over who is allowed to enter the personal space (Whitley, 2009). This control is in part a matter of data privacy, requiring a clarification of the rights, privileges, and policies that regulate the governance of individual data. In addition, control requires data security, i.e. the technological means for protecting data against unauthorized access and disclosure, manipulation, and theft. In the context of MAI, both data privacy and data security play an important role in regard to control over one's own data. As we will see, MAI technologies may impact local, informational, and decisional privacy, which makes defining access and control, and developing the means to enable both, a major challenge (a minimal sketch of consent-based access control follows below).
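How such control might be operationalized on the level of data security can be indicated with a minimal sketch. The interface below is hypothetical (the PatientRecord class, the access function, and the purpose labels are invented for illustration); real systems rely on far more elaborate consent-management and security infrastructures.

```python
# A minimal sketch (hypothetical interface, not an existing system) of
# informational privacy as control: a record is disclosed only for
# purposes the patient has consented to, and every attempt is logged.

from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    patient_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)

def access(record: PatientRecord, requester: str, purpose: str):
    """Return the record's data only if the purpose was consented to."""
    allowed = purpose in record.consented_purposes
    print(f"{requester} requested {record.patient_id} for '{purpose}': "
          f"{'granted' if allowed else 'denied'}")
    return record.data if allowed else None

record = PatientRecord("p-001", {"diagnosis": "asthma"}, {"treatment"})
access(record, "attending physician", "treatment")      # granted
access(record, "analytics platform", "model training")  # denied: no consent given
```

The design choice worth noting is that consent is attached to purposes rather than to requesters, which mirrors the idea of informational and decisional privacy as control over what is disclosed, to whom, and for what end.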
4.2.5 Justice and Equity
According to Norman Daniels (2001), concepts of justice in healthcare should address three questions: First, what makes health a special social good that requires an equal distribution in society? Second, what are the criteria for an unjust distribution of this good? Third, how can this good be distributed under the condition of limited resources in order to address different needs?

Concerning the first question, one could argue that health is a transcendental good, meaning that it is the prerequisite for the well-being of an individual as well as for fulfilling one's ambitions and life plan (Daniels, 2001, 2007; Ruger, 2004; Sen, 2002). Health is also the prerequisite for participating in society and in the workforce (Braveman et al., 2011). In this sense, health or the absence thereof affects the opportunities of an individual (Daniels, 2007). In a society that acknowledges fair equality of opportunity, health must therefore be considered an essential social good. This legitimizes a system of public health and medical services that meets the needs of individuals independently of their socio-economic status and financial means. During the Covid-19 pandemic, we witnessed an intensified
debate on the status of health as a social good and value (Ruger, 2020; Sharma & Arunima, 2021). This debate mainly focusses on the issue of whether we should prioritize health when it conflicts with other social goods like personal freedom. This is especially the case in a public health setting, where the health of a population is at stake.

The second question, addressing the criteria for injustice in regard to health, aims to define instances in which health differences between socio-economic groups are unjust. These criteria are necessary to determine which health inequalities are acceptable and which are to be compensated. Not all health inequalities are unjust, and not all social disadvantages will result in unjust health inequalities (Braveman et al., 2011). What matters is the extent, depth or severity, and duration of disadvantage. To answer this question and find the criteria of unjust distribution of health, one has to take into account the social determinants that shape health and illness (Daniels, 2007). Social determinants are those material, social, political, and cultural factors that shape people's lives, identities, and behaviors (Marmot & Allen, 2014). One could argue that health inequalities are unjust when they are avoidable, unnecessary, and unfair (Daniels et al., 1999; Braveman et al., 2011). There is a difference between health differences between individuals or groups, which may be random, and health disparities, which are the result of systemic inequalities (Braveman et al., 2011). Following this approach, nondiscrimination and equality must be ensured. Individuals must not be denied access because of their ethnicity, age, gender, sexual orientation, socio-economic status, physical or mental disability, or illness (Braveman et al., 2011). It is especially important to note that social determinants do not only affect acute access to health services, but also have a cumulative effect, shaping the health of an individual over their lifetime (Daniels, 2001). Hence, issues of justice and injustice in regard to health are always connected to matters of social justice (Ruger, 2004).

The third question aims to find out how we can meet individual health needs fairly under conditions that imply that we cannot meet them all (Daniels, 2007). It aims to find principles for a fair distribution of limited resources. When imposing limits, e.g. on access to a specific service or the provision of financial means, decision-makers must follow general principles of justice. This is also important in regard to the legitimacy of setting limits, since general principles make it possible to hold decision-makers accountable for their decisions. One general principle that has been established for this purpose is accountability for reasonableness (Daniels, 2000; Nunes & Rego, 2014). This principle deals with practices of rationing and priority setting, but also with decisions on the implementation of new technologies in healthcare. It states that decisions regarding rationing and priority setting must be based on reasonable explanations of how the health needs of a specific population can be met with limited resources. The principle also implies that these decisions must be publicly available and open to revision should new evidence for better ways of using resources emerge.

Distribution of healthcare services in terms of access is not the only relevant issue when it comes to matters of justice.
One must also consider that the opportunities for achieving good health as such are also unequally distributed. Social determinants
and inadequate social arrangements have an impact on the capability to achieve good health, meaning that healthcare and access to it, although important, are not the only relevant factors here (Sen, 2002). Health equity thus goes beyond inequalities in healthcare provision and issues of allocation insofar as it links health inequalities with the broader picture of social justice and the distribution of goods, resources, and opportunities in society as a whole. Health and social justice are therefore interrelated, and their interaction goes both ways: health is a prerequisite for realizing one's opportunities in society, whereas social determinants and social arrangements largely influence the capabilities for achieving good health, including but not limited to the distribution of and access to healthcare services. Achieving health equity is therefore not something that can be done within the limits of the healthcare domain. It requires removing the social and economic obstacles to health in terms of discrimination, socio-economic marginalization, and lack of political power (Braveman et al., 2019).

All of these questions can be addressed in a national context. In recent years, however, a broader perspective on matters of justice in medicine and healthcare has evolved. This perspective views matters of justice in the context of global health (Jecker et al., 2022; Ruger & Horton, 2020). The global perspective implies a focus shift from issues of allocation and health disparities within mostly high-income countries towards health disparities between countries and the global distribution of healthcare resources. The situation in low-income and middle-income countries (LMICs) often differs vastly from that in Western countries and the Global North. The Covid-19 pandemic served as a kind of magnifying glass, showing these global health disparities with regard to access to healthcare services and the availability of vaccines. This made it even clearer that multilateral models of global health governance are needed in order to deal with health disparities and health justice on a global scale (Jecker et al., 2022).

To summarize, one can say that justice in medicine and healthcare deals with health differences and disparities as well as the distribution of health-related goods, resources, services, and opportunities. Justice issues may arise on the level of the doctor-patient dyad, e.g. regarding decisions on how to allocate the resource of time among different patients. They affect the meso level when it comes to limiting access to healthcare services and rationing resources in the healthcare domain. Finally, on the macro level, health equity depends on social determinants of health and is thus a matter of social justice, dependent on arrangements for fair distribution in society as a whole. In the context of MAI, it will therefore be of utmost relevance to take social determinants into account and ask how this technology affects health equity.
4.2.6 Avoiding Harm: Patient Safety
Within the last two decades, patient safety has become a major topic in medical ethics, starting with the landmark report To Err Is Human: Building a Safer Health
System by the National Academies in 2000 (Institute of Medicine et al., 2000). The report identified medical errors, such as injuries, medication-related errors, and other adverse events, as frequent causes of death and as a major concern for healthcare systems (Azyabi et al., 2022; Berwick et al., 2015). Medical errors and adverse events also cause direct as well as opportunity costs and may lead to a loss of trust in the medical system (Institute of Medicine et al., 2000).

In a way, the focus on avoiding harm is nothing new. Primum non nocere (first, do no harm) has been a maxim of the medical ethos from Greco-Roman antiquity on, albeit in various formulations. Beauchamp and Childress (2019) identified non-maleficence as one of the four main principles of bioethics. What changed during the past two decades was not so much the general perception of this idea, but the concepts and strategies for realizing it in clinical practice. Developing comprehensive approaches that acknowledge patient safety and define concrete measures to avoid errors and adverse events has become a major goal within healthcare institutions. This entails implementing a mandatory reporting system to identify errors and learn from them, implementing safety standards defined by professional groups, and defining safe practices at the delivery level (Institute of Medicine et al., 2000). The result of these measures should be a culture of safety within single health institutions and the healthcare sector as a whole (Azyabi et al., 2022; Berwick et al., 2015; Sammer et al., 2010). The concept of a safety culture defines several levels of action and responsibility across healthcare institutions. The individual responsibilities of senior leaders, as well as those of hospitals to align policies, resource allocation, and standardization of practices with the goals of patient safety, are of crucial relevance here (Sammer et al., 2010).

Patient safety has long been recognized as a global health issue and a vital factor for achieving universal health coverage (Flott et al., 2018). Furthermore, the relevance of patient safety in the context of digital technologies is a prominent research topic (Flott et al., 2021). Since MAI heavily impacts practices and environments, patient safety will also be part of the following ethical analysis. It is especially relevant to discuss issues of responsibility and liability in this context.
4.3
Ethics of AI and Big Data
Exploring the impact of MAI requires going beyond medical ethics and considering ethical approaches that focus on the technologies involved. AI can be approached from different ethical perspectives, depending on which aspect is in focus. Accordingly, there are various domain ethics that have to be integrated here, namely computer ethics, digital ethics, data ethics, and machine ethics. Although useful as categories, these domain ethics are not as clear-cut as some authors make them seem. There are different interpretations, unclear definitions, and rather blurry demarcation lines. However, I will try to give a short overview of the ethical sub-fields that are involved here. My interest is not to structure or systematize these ethics domains. Rather, I aim to illustrate ethical phenomena, conflicts, and strategies that
each of these domains provides. The goal is to create some kind of compass or frame of reference for analyzing the specific ethical issues that arise in the context of MAI. Therefore, the following overview is meant solely for the purpose of this book, and I do not claim to deliver uncontested, definitive, or complete definitions or categorizations.
Zuber et al. (2022) distinguish a micro-, meso-, and macro-level for analyzing ethical issues of information technologies. In their view, information ethics can be considered as the macro-level that comprises the different aspects, issues, and approaches. On the meso-level, they distinguish between a techno-generic perspective and a structural perspective. The techno-generic perspective focusses on the digital artifact itself, meaning mostly its design process. This includes decision-making in the context of computer technology with a focus on software engineering. Examples would be the FAT principles in data science, i.e. Fairness, Accountability, and Transparency (Martens, 2022). The micro-level domain usually linked to this is computer ethics, sometimes also referred to as data science ethics (Martens, 2022). In the structural perspective, the social practices in which an artifact is embedded constitute the relevant factor for the analysis. The focus lies on societal transformation through IT, including social practices. The meso-level domain for this kind of analysis is digital ethics. The specific fields of application which form the micro-level here are AI ethics, big data ethics, machine ethics, and professional ethics.
Following this distinction, the focus of this book will mainly be on the structural perspective and deal with AI ethics and big data ethics. Since MAI mainly affects patients and healthcare professionals, the user perspective is the relevant one. However, the designer perspective cannot be omitted, especially since, as we will see, both spheres are not separated from each other, but necessarily interact.
4.3.1
AI Ethics
“AI is all over the place” (Coeckelbergh, 2020, p. 78) might seem a trivial statement, and yet it refers to one of the crucial reasons why this particular technology requires an ethical sub-domain dedicated to it. AI applications are already ubiquitous, present in different contexts of daily life, and will continue to heavily impact society, economics, the environment, social relations, and human interactions (Coeckelbergh, 2020; Stahl, 2021). From a philosophical perspective, the novelty of these technologies lies in the fact that humans are no longer the sole actors of the epistemological enterprise (Humphreys, 2009). AI often implies not only the enhancement, but also the replacement of human perception, judgement, decision-making, and agency (Boddington, 2017). This means that there is a connection between epistemological aspects and ethical aspects. The shift in our epistemological practices caused by AI has ethical implications, since AI technologies fundamentally shape our understanding of phenomena and support our decision-making. In other words, big data-fueled AI “enables an entirely new epistemological approach for making sense of the world” (Kitchin, 2014a, p. 2).
The nexus between epistemology and ethics has become a major topic in the debates on AI (Mittelstadt et al., 2016; Morley et al., 2020; Russo et al., 2023). Of particular interest is the fact that the operational logic of AI systems differs from the way in which humans process information. This is one of the main advantages of AI, since it is in some ways able to surpass human capabilities and provide new insights, especially when dealing with large amounts of data (see Sect. 3.2). However, the specific and often incomprehensible operational logic of AI is also problematic. Humphreys (2009) distinguishes a hybrid scenario from an automated scenario in this context: In a hybrid scenario, humans use computers as tools for analyzing data and gaining knowledge, e.g. in science. The way data is analyzed, represented, and modeled is human-centered in the sense that it is relative to human cognitive abilities. In an automated scenario, computational processes occur that go beyond human comprehension and are, at least partly, detached from the functioning of human cognition. This causes a philosophical problem, since conventional epistemological concepts are anthropocentric, i.e. they focus on humans and their specific cognitive setup. The fact that AI is not bound to the same epistemological principles leads to what Humphreys (2009) calls the anthropocentric predicament: How can humans comprehend and evaluate the functioning and results of computational processes that surpass our cognitive abilities? The anthropocentric predicament is especially relevant in ethics, namely because we are faced with a new type of non-human agent whose functioning and actions we do not entirely understand. One can therefore identify the impact of AI on epistemological practices and the aspect of agency as major ethical topics.
Concerning epistemological practices, we have to consider that AI systems follow a specific operational logic based on machine learning techniques (see Sect. 3.2). Whereas human reasoning mainly focusses on causality, machine learning is about identifying correlations. The crucial benefit of AI technologies is that they surpass the human ability for detecting patterns within large amounts of data. This distinctive quality of AI also has a downside connected to the anthropocentric predicament: the issue of explicability (Coeckelbergh, 2020). Especially with AI based on machine learning and deep learning, we often find that neither a system itself nor its designers can explain why it came up with a certain result, even though this result might be correct or desirable. One can call this epistemic opacity, meaning that we sometimes do not fully comprehend the computational process that leads from an underlying model to the output (Humphreys, 2009). This leads to the so-called black box phenomenon, which is especially relevant since transparency and explainability are connected to trust (Durán & Formanek, 2018). The main question is: How can humans trust the decisions of AI systems when they are unable to fully understand the underlying computational processes? Some authors argue that for humans to be able to trust the decisions and actions of AI systems, transparent and comprehensible processes are crucial (Samek & Müller, 2019). This is all the more important in medicine, given that informed consent is the prerequisite for any medical action. Doctors are obliged to provide the best available information so that patients can consent to or refuse a medical action.
This also entails disclosing the reasons why doctors think a certain action, e.g. a therapeutic procedure, is the best option.
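To make the black box problem more tangible, consider a minimal sketch. This is my own illustration, not an example from the literature cited here: it assumes the scikit-learn library and an entirely hypothetical dataset with two synthetic "biomarker" features. A single decision tree can print its complete decision rules, whereas an ensemble of several hundred trees offers no comparably surveyable account of how it reaches its verdict; deep learning models are more opaque still.

```python
# A minimal sketch of relative epistemic opacity (hypothetical data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # two synthetic "biomarkers"
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # hypothetical diagnostic label

# Interpretable model: its full decision logic can be printed and read.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["marker_a", "marker_b"]))

# More opaque model: 500 trees vote; the aggregate rule may be accurate,
# but no human reader can survey it in any practical sense.
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print("forest accuracy:", forest.score(X, y))
print("trees a reader would have to inspect:", len(forest.estimators_))
```

The ethical point is not the code itself but the asymmetry it exhibits: both models may output the same correct result, yet only one of them can be accompanied by a humanly readable reason.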
However, explicability itself raises further questions (Beisbart & Räz, 2022): What exactly do we need to understand? What type of explanation is needed? Explicability and transparency will therefore be relevant when we discuss the impact of MAI on trust and autonomy in Chap. 5.
Other risks connected to the operational logic of AI are reductionism, standardization, and bias. Reductionism is one of the crucial problems debated in the philosophy of science (Nagel, 1961; Sarkar, 1992; van Riel, 2014). Several different interpretations and models of reductionism have been proposed, which I cannot discuss here in detail. One can roughly distinguish between ontological, methodological, and epistemic reductionism (Roache, 2018). Ontological reductionism claims that entities that seem to differ from each other can be reduced to one simple entity or matter. In medicine, this would mean that everything that concerns the human body can be reduced to biological processes. Methodological reductionism means that one method, e.g. the scientific method, is seen as the exclusive method for analyzing a phenomenon. That would mean that in medicine, molecular biology, cell pathology, biochemistry, etc. could be privileged over all other methodological approaches. Epistemic reductionism suggests that complex phenomena can best be analyzed by reducing them to their fundamental elements. In medicine, this means, for example, reducing the understanding of disease to cell pathology and molecular processes. A common idea in these different types of reductionism is that a complex phenomenon like the health situation of an individual can be reduced to its fundamental and simpler elements or data points.
To a certain degree, reductionism is a necessity in algorithm-based data analytics. Large data sets contain an abundance of noise, i.e. irrelevant information, which makes the selection of target variables and classification labels necessary. This in itself is a reduction process, since it requires a decision on which variables are relevant or irrelevant. Hence, the specific design of an AI system shapes the way it captures and analyzes data. AI systems do not operate in an epistemic vacuum, but are theory-laden, in the sense that their operational logic relies on a certain scientific, primarily reductionist approach (Kitchin, 2014b). Furthermore, the interests of AI creators or users might favor certain outcomes over others, thus making the selection of target variables and classification labels also value-laden (Mittelstadt et al., 2016). From an ethical point of view, these specific epistemic practices surrounding AI might have severe implications. An overly reductionist approach that selects the target variables and classification labels too narrowly might obscure important aspects not represented in the data or overemphasize others, as the following sketch illustrates.
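A minimal sketch may make this reduction process concrete. The record and variable names below are entirely hypothetical, invented for illustration; the point is simply that the designer's choice of target variables decides which aspects of a patient's situation exist for the model at all.

```python
# A minimal sketch of reduction through variable selection (hypothetical record).
patient_record = {
    "age": 67,
    "hba1c": 8.1,                     # quantifiable, selected below
    "systolic_bp": 150,               # quantifiable, selected below
    "lives_alone": True,              # social context, discarded
    "can_afford_medication": False,   # socioeconomic context, discarded
    "preferences": "wants to avoid insulin if possible",  # qualitative, discarded
}

# The selection of target variables is itself the reduction:
# only these features will be visible to any downstream model.
TARGET_VARIABLES = ["age", "hba1c", "systolic_bp"]

feature_vector = [patient_record[name] for name in TARGET_VARIABLES]
discarded = {k: v for k, v in patient_record.items() if k not in TARGET_VARIABLES}

print("what the model sees:", feature_vector)
print("what the model never sees:", discarded)
```

Whatever falls outside the selected variables, here the social and preferential context, is epistemically invisible to the system, however sophisticated the subsequent analysis.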
One consequence of reductionism is standardization, where individuals are sorted into groups by identifying one feature as essential. This may oversimplify the overall, often highly complex, situation of an individual and thus undermine the crucial goal of personalization. If the selected feature is discriminatory and implies unfair decisions toward particular individuals or groups, one speaks of bias (Challen et al., 2019). Machine learning in particular poses risks of perpetuating or increasing already existing bias. Bias may occur on different levels throughout the process of data analysis and modeling. One speaks of data bias when the training data for an algorithm or the data that is to be analyzed already contains biased information (Mitchell et al., 2021). Algorithmic
bias occurs when the outcomes of an algorithmic process discriminate against individuals or groups in an unfair way (Kordzadeh & Ghasemaghaei, 2022). Outcome bias means that decisions based on algorithmic outcomes lead to unfair discrimination (Grote & Keeling, 2022). Different approaches can be taken to prevent bias, such as policy making focussed on directives, regulations, and standards for diversifying AI applications, codes of ethics, education, and technical measures (Coeckelbergh, 2020). An intensely debated approach is ethics by design, referring to the translation of values and ethical principles into the technology already during the design and development process (Coeckelbergh, 2020). Furthermore, participatory practices, whereby the perspectives of relevant stakeholders are included in technology design, may prevent bias. The bias problem is of utmost importance in the medical context, since MAI applications that cause bias might perpetuate or exacerbate existing health disparities. I will therefore discuss reductionism, standardization, and bias as main risks of MAI-powered epistemic practices in detail.
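To see how these levels of bias connect, consider a deliberately simplified simulation of my own, using entirely synthetic data and illustrative numbers: a condition is historically under-recorded in a minority group (data bias), a model trained on these records reproduces the gap (algorithmic bias), and decisions based on its predictions would then under-detect truly ill patients in that group (outcome bias).

```python
# A minimal simulation of data bias propagating into outcome bias
# (entirely synthetic data; all numbers are illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, size=n)      # 0 = majority, 1 = minority group
biomarker = rng.normal(size=n)
truly_ill = biomarker + rng.normal(scale=0.5, size=n) > 0.5

# Data bias: the condition was historically under-recorded in the
# minority group, so half of its true cases are missing from the labels.
recorded = truly_ill.copy()
missed = (group == 1) & truly_ill & (rng.random(n) < 0.5)
recorded[missed] = False

model = LogisticRegression().fit(np.column_stack([biomarker, group]), recorded)
predicted = model.predict(np.column_stack([biomarker, group]))

# Outcome bias: decisions based on these predictions would detect
# fewer of the truly ill patients in the minority group.
for g in (0, 1):
    ill = (group == g) & truly_ill
    print(f"group {g}: share of truly ill patients detected = "
          f"{predicted[ill].mean():.2f}")
```

None of the individual steps is malicious; the unfairness enters with the historical labels and is then faithfully reproduced, which is why technical and participatory countermeasures have to address the whole pipeline rather than the algorithm alone.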
When it comes to agency, one has to distinguish between the agency of AI systems on the one hand and their impact on human agency on the other. Concerning the latter, it is important to note that automatization is one of the main goals of AI. Automated processes, executed by smart systems, are designed to replace human action and labor, especially when it comes to repetitive and dangerous tasks (Coeckelbergh, 2020). Of course, cost-efficiency is the crucial benefit here, since human labor is usually the biggest cost factor in any business activity. This is why AI is first and foremost seen as a tool that enables better, faster, and more efficient work processes (Stahl, 2021). One focus of the ethical analysis will therefore be whether and to what extent processes should be automated and human labor replaced by MAI applications. Another task will be to analyze how MAI technologies will transform established clinical practices. An important aspect in this regard is technological vulnerabilities, i.e. safety and security issues, especially where MAI replaces human labor and agency (Coeckelbergh, 2020). We also see that different stakeholders are affected differently by the effects of AI, which in turn means that the social benefits and risks of AI have to be addressed (Stahl, 2021). Thus, aspects like the accumulation of capital in powerful AI companies and the increase in unequal power relations are of relevance, since they may lead to social injustice and inequality (Boddington, 2017; Coeckelbergh, 2020).
Analyzing the agency of AI systems, its nature, scope, and limits, is crucial for understanding the interactions of these technologies with the social reality they are embedded in (Boddington, 2017; Coeckelbergh, 2020; Stahl, 2021). Whether machines have agency is one of the basic questions in AI ethics. It is obvious that machines can be autonomous in the sense that they operate on their own without a human operator. What is important here is the degree and context of this autonomous functioning (Bartneck et al., 2021). One could distinguish between intentional agency and apparent agency (Liu, 2021). Intentional agency is the ability to act upon motivations and decisions for achieving a certain goal. Apparent agency means that acts appear as though there were intentions behind them. AI differs from other computer systems or machines in
that it is able to perform tasks that have not been pre-programmed, without direct commands by human operators. Think of a CDSS that comes up with a diagnosis and treatment option or a SAR that reacts to the emotional response of an individual. These processes have apparent agency, since it seems to us that these machines perform them without direct command, as if they were acting deliberately. But can AI systems have intentional agency, or is intentionality a feature limited to biological organisms?
The question of intentionality is especially important for determining whether AI can be considered a moral agent. Should AI be designed for moral reasoning, judgement, and making ethical decisions? What moral capacities should AI have (Coeckelbergh, 2020)? There are essentially three positions in this regard (Gunkel, 2012). The first position grants that computer systems can be considered as moral agents. The counter-position denies this, stating that computer systems do not have mental states such as consciousness and can therefore not act morally. The third position goes beyond this dichotomy by pointing out that it is based on false assumptions. Some authors who argue for the first view suggest that, for answering these questions, we have to get rid of anthropocentric concepts of agency (Floridi & Sanders, 2004; Floridi, 2013). For example, Floridi and Sanders (2004) suggest a “mindless morality”, according to which an agent that performs morally relevant actions can be considered a moral agent. According to this view, neither freedom nor intentionality is a necessary condition for moral agency. As Floridi and Sanders argue, whether AI systems can be seen as moral agents depends on the level of abstraction. That means that the moral agency of a computer system depends on its degree of autonomy, understood as the capability for independent action. Since on some level of abstraction one can define AI systems as autonomous, and autonomy is crucial for morally relevant actions, they also qualify as moral agents. Other authors argue in a similar vein, claiming that although AI systems might not be moral agents in the full sense, they fulfill certain criteria of moral agency (Tigard, 2021). One such criterion is moral responsibility, meaning the ability to fulfill or transgress normative demands and expectations that exist within society (Gogoshin, 2021). In other words, the fact that AI technologies may cause morally relevant outcomes suffices for considering them as moral agents. Intentionality and moral agency are part of a wider discussion on whether machines can have mental states (Gunkel, 2012; Hildt, 2019; Searle, 1980). Some authors claim that concepts of agency based on human features like intentionality are outdated anyhow, since AI technologies enable a group agency where one part enables the agency of the other (Pickering et al., 2017).
It is obvious that I cannot discuss these topics sufficiently here. I will advocate a position that claims that even if we grant some kind of intentional agency to machines, we cannot separate them from human intentions, purposes, and goals. It is important to note that I am speaking of ANI here; AGI or ASI might be another matter. In the context of MAI, however, we are dealing with technologies that are designed to perform specific tasks. Hence, their intentionality, if it exists, is limited from the beginning. Given my position, several issues arise when regarding AI as a moral agent.
First, the fact that AI systems may be considered as autonomous on some level of
abstraction does not mean that this pertains to those systems in general (Johnson & Miller, 2008). What constitutes moral agency, the morality of an action, and with it responsibility depends on the notion of morality and the moral practices this action is embedded in. A purely functional interpretation of moral agency misses the point of morality altogether. One could therefore argue that although on some level of abstraction AI systems act independently, e.g. by providing a diagnosis or performing a surgical procedure, the meaning of this action depends on the social practices that shape the moral framework through which it is observed. In other words, AI technologies are never fully independent of humans and social practices. As Johnson and Miller (2008) put it, computational systems, like all technical artifacts, are always tethered to humans. AI systems are designed and used for a specific purpose. Some computational processes might be unforeseeable, but that does not mean that the outcomes of the system are totally random. These systems serve a specific purpose, e.g. decision support for doctors, and are designed to perform in a specific way, e.g. by focusing on specific target variables and classifications.
One could go one step further and add that the institutional (meso-) level and the level of the health system as a whole (macro-level) also have to be considered here. That means that MAI systems are implemented within the health system for specific purposes. These might be precision medicine and personalization, or saving costs and reducing personnel. In any case, a moral evaluation of the outcomes of MAI systems cannot abstract from this context and merely focus on the algorithmic decision as if it stood alone. The social and moral context, i.e. the social practices and moral framework, shapes the meaning of the algorithmic decision. Hence, one cannot define AI, at least not ANI, as a moral agent and ascribe responsibility to it, since its agency, although autonomous on some level, is never fully independent. It depends on decisions made by the designers and policy makers that implemented the systems (e.g. in the healthcare sector), the institutions that provide them (e.g. healthcare providers), and the professionals who use them (e.g. doctors). ANI technologies, like all technical artifacts, are therefore a-responsible, neither responsible nor irresponsible, since responsibility can only be attributed to humans who can act consciously, out of freedom, and with intention.
In addition, one could argue that there is also a moral reason against ascribing responsibility to AI systems. Such a practice would allow humans to hide behind the computer, meaning to blame AI systems for mistakes (Gunkel, 2020). This ethical sidelining of humans would allow developers as well as doctors to deflect responsibility for their decisions and actions. We should therefore address the agency of AI systems in terms of a control problem and ask whether and to what extent we are willing to grant autonomy of decisions, judgement, and actions to these agents (Boddington, 2017). The fact that AI technologies are tethered to humans and are fundamentally shaped by human decisions and purposes means that the responsibility for the outcomes of AI use lies with humans. Hence, we should focus on the impact AI technologies have on the moral agency of humans.
4.3.2
Big Data Ethics
Richards and King (2014) identify four principles of big data ethics. Privacy refers to the ability to govern the flow of one's own personal information. It can be understood as self-management, the right and ability to make decisions about providing one's data, in our context, health data. The danger here is not only coercion to share health data, but also a rigid, paternalistic limitation of data sharing and use. Confidentiality, as we have seen, is an established principle in medicine and especially relevant in the big data context. Transparency regarding data collection and use enables informed decision-making on behalf of data subjects. It may also prevent the abuse of power. Identity is a principle that is specific to big data practices, since it refers to regulating interferences in decision-making when it comes to classifying, identifying, and modulating individuals. It deals with the fact that many big data practices focus on sorting individuals into groups (e.g. regarding gender, age, ethnicity), for example to define risk profiles (Zwitter, 2014). Ensuring that individual health needs are still respected and the individual is not reduced to belonging to a group is crucial in this regard. The approach by Richards and King has the advantage of referring to principles that are already well-established in medical ethics.
The next step is to define the actors involved and their roles in order to apply these principles. The first role is the data subject, which signifies a person who provides data (Martens, 2022). In our case, this may refer to patients or individuals who provide their health data outside of treatment. Big data collectors are those agents with power over data collection, i.e. natural persons, corporations, or institutions who decide what data is collected and for how long it is stored (Zwitter, 2014). In our case, this may be doctors, public health authorities, administrative personnel, data scientists, or economic agents like tech companies. Big data utilizers decide over the purpose of data use. This role may overlap with that of big data collectors, but may also include other actors like policy makers (Zwitter, 2014).
The distinction between different roles in big data practices reveals the inherent power relations and asymmetries. This big data divide implies a fundamental gap between those who generate data and those who collect and use it (Mittelstadt & Floridi, 2016), or in other words: those who produce data and those who own and control the means of production. As a result, data subjects mostly do not possess the knowledge, means, or access necessary to utilize and profit from their own data. Data subjects are often unable to access, modify, or delete their data and, in some cases, do not have a right to opt out of data collection. This is the fundamental asymmetry between data subjects and big data collectors. Another asymmetry exists between data subjects and big data utilizers, since the latter may use data to generate profit, for example by selling it to third parties, without giving data subjects a share or even informing them about it. This fundamental divide in terms of power asymmetries is crucial for any ethical analysis of big data practices. In the following analysis of MAI, I will therefore contextualize big data practices with the big data divide. In addition to defining the
roles of the actors involved, we have to take into account the relations and structures shaping these actions. Hence, any data ethics has to identify and analyze the specific power relations attached to data production, sharing, and use. Hasselbach (2021) distinguishes two meanings of the term data ethics. Data ethics by design entails concerns about the morally good or bad decisions and actions performed by data-driven technologies. The main task here is to instill values into the technology in order to avoid morally bad outcomes. What Hasselbach calls data ethics of power aims at elucidating the power relations that are embedded in sociotechnical infrastructures formed by big data applications. Data ethics of power explores the cultural and social power dynamics that shape data practices and the meaning that is ascribed to them. Following this approach, big data and AI infrastructures are tools for ordering and sorting, wielded only by a small number of actors, thus potentially exacerbating the existing power divide in society as well as social inequities, or creating new ones. Again, both spheres cannot be fully separated, since the institutional aspects and sociocultural power relations that constitute data cultures also shape the practices of software designers.
In my ethical analysis of MAI, I will follow the approach of data ethics of power and thus use concepts from critical data studies. I cannot give a detailed account of critical data studies here. Instead of discussing its different theories and approaches, I will briefly outline those basic concepts that are relevant for the ethical analysis of MAI. In my view, big data in medicine and healthcare constitutes a field of interwoven paradigms, beliefs, structures, and practices, which shape the epistemologies and actions connected to MAI. The dynamics resulting from this field and its elements are thus crucial for the ethical analysis of big data practices in the context of MAI.
The foundation of critical data studies, as of any kind of critical theory, is the assumption that power asymmetries shape social practices (Dalton et al., 2016; Iliadis & Russo, 2016; Kitchin, 2014b; Richterich, 2018). These power asymmetries also shape the epistemological aspect of big data practices. The leading epistemological paradigm here is called digital positivism (Fuchs & Chandler, 2019; Mosco, 2014; Richterich, 2018). This paradigm claims that data speak for themselves and have to be understood as objective representations of real-world entities. The main benefit of big data is connected to this very belief, namely that large amounts of data have inherent value, inherent meaning, and epistemological superiority (Richterich, 2018). I call digital positivism a paradigm because it fulfills the same purpose as paradigms in science according to the approach of Thomas Kuhn (2012): A paradigm is basically a set of concepts, theories, methods, and mind-sets that constitute, shape, and legitimize practices of scientific knowledge production. Paradigms define the field for these practices, which Kuhn refers to as normal science, as long as they do not conflict with the leading paradigm. Accordingly, one could use the term normal data practices for those practices that are in accordance with digital positivism. The fact that we accept the epistemic superiority of big data practices,
especially their alleged objectivity, shapes our understanding, but also our actions, in regard to data-driven technologies like MAI.
Besides its leading paradigm, big data practices also rest on a certain structural prerequisite: datafication. This term signifies the process of transforming all social actions and behavior, as well as the physiological and health-related processes of an individual, into digital data (Mayer-Schönberger & Cukier, 2013; van Dijck, 2014). This data is mostly available online in order to perform real-time tracking and predictive analysis (van Dijck, 2014). Datafication has become part of our daily lives, where every aspect of our online activities as well as our movements in public spaces are tracked, monitored, and surveilled. The goal of datafication is to standardize all this information on our behavior, being, and actions and transform it into commodities. I call this a structural prerequisite because it refers to both an ontology and an ensemble of tools, pathways, and practices. Thus, datafication lays the foundation for practices under the paradigm of digital positivism insofar as it determines the ontological status of primarily qualitative phenomena as essentially quantifiable and at the same time provides a set of tools and practices for extracting this quantifiable essence.
Dataism is an ideological belief (van Dijck, 2014; Richterich, 2018) that holds together the paradigm of digital positivism and datafication as its structural prerequisite. It rests on a specific technological rationalism that emphasizes the limits and shortcomings of our human capacities for knowledge when compared with big data technologies (Hong, 2020). By claiming that so-called raw data contain objective facts, it provides the ideological underpinning of digital positivism and at the same time legitimizes datafication as a feasible and desirable process. Dataism constitutes an ideological belief due to the fact that it ignores the contextuality and openness of data. This aura of objectivity (Hong, 2020) obscures the fact that there is no such thing as raw data, since data do not speak for themselves, but need interpretation and analysis (van Dijck, 2014). Furthermore, data do not simply exist, do not derive from a “groundless ground” (Hong, 2020, p. 23), but are shaped by social practices and the specific context of their generation, i.e. the parameters for what counts as relevant. Hence, raw data, a term that we so often encounter in the medical context, is an oxymoron (Gitelman & Jackson, 2013).
Dataveillance refers to the crucial practice within the field constituted by digital positivism, datafication, and dataism. It signifies the continuous surveillance of individuals through metadata (Raley, 2013; van Dijck, 2014). Dataveillance is an automated, continuous, and opaque process in which all data produced by the online activities of an individual is tracked (Büchi et al., 2022). This does not only occur when an individual is interacting with websites or apps, but also and increasingly through interactions with the IoT (Ahn, 2021). Dataveillance is a constitutive mechanism of making data available for commodification, thus powering an entire business ecosystem (Degli Esposti, 2014). One crucial aspect of this practice is that it not only serves the purpose of providing material for predictive analysis of an individual's choices and behavior, but also regulates and governs them (Degli Esposti, 2014).
The approach of critical data studies has the advantage of considering the aforementioned social practices of knowledge production, the techno-social
encounters, and the institutional as well as social backgrounds that influence actions and decisions in a MAI setting. My approach to analyzing MAI therefore combines the medical ethics discourse and the critical data ethics discourse in order to outline the ethical aspects, issues, and strategies in the context of MAI. Using contextual medical ethics and critical data studies for analyzing MAI, I am able to investigate the related practices of knowledge production, the social determinants of decision-making and action, and the structural frameworks of technology. This allows for an ethical analysis that goes beyond decision-making in particular situations or cases. It thus lays the ground for designing strategies for dealing with the ethical issues connected to MAI.
4.4
Conclusion
The medical ethics approach I follow in this book includes an analysis of the institutional context in which decision-making takes place, e.g. the institution of the hospital and the model of the health system in which it is embedded. It analyzes the underlying asymmetry in the patient-doctor relationship. It considers social determinants of health and illness as crucial factors of healthcare provision. It reflects on the technologically facilitated encounter between patients and doctors, and it includes a reflection on the type of knowledge that is used for making micro-, meso-, and macro-level decisions in medicine and healthcare. Hence, my approach does not only focus on decisions and their normative justification, but also analyzes the knowledge base, social practices, and institutional factors that shape the decision-making process.
In my analysis, I do not follow a predefined set of moral principles. I take a bottom-up approach insofar as I analyze ethical issues as they arise from the impact of MAI. Instead of looking at MAI through the lens of principles, I identify those effects of the transformation of medicine and healthcare that are relevant in an ethical sense. It is the fundamental hypothesis of this book that any ethical analysis of MAI needs a critical framework that goes beyond the patient-doctor dyad and includes the relations and structures that shape the practices around the technology. This includes the power relations and asymmetries in big data, the specific economics AI design and distribution are based upon, as well as the decision-making processes that enable the implementation of MAI in clinical practice. These aspects are not a mere add-on, but shape and pre-structure clinical practice, which is why the interactions between healthcare professionals and patients cannot be fully understood without them. Therefore, in order to capture this broader spectrum of relevant ethical areas beyond the patient-doctor dyad, I will integrate elements from data ethics, especially critical data studies, into my discussion of the topic.
The framework for the ethical analysis will rest on the following assumptions: (1) MAI significantly impacts epistemic practices and agency. The epistemic shift through MAI entails datafication and dataveillance as necessary tools of the big data approach. This poses the risks of dataism and digital positivism, reductionist
tendencies which may result in standardization and bias. Hence, the shift in epistemic practices has severe ethical implications. (2) Social determinants fundamentally shape an individual's health and illness as well as their access to healthcare services. The effect of social determinants can only in part be translated into quantifiable data. If the qualitative aspects of an individual's overall life situation, including their preferences and values, are omitted, crucial information required for personalization is missing. (3) The therapeutic relationship is the principal enabler of autonomy, shared decision-making, and trust. An ethical analysis must therefore focus on the effects MAI applications have on the therapeutic relationship and ask how these technologies can best be integrated into the therapeutic relationship in order to enhance it.
Based on these assumptions and the epistemic lenses I take from critical approaches, especially critical data studies, I conduct the ethical analysis in Part II.
References
Agarwal, A. K., & Murinson, B. B. (2012). New dimensions in patient-physician interaction: Values, autonomy, and medical information in the patient-centered clinical encounter. Rambam Maimonides Medical Journal, 3, e0017. https://doi.org/10.5041/RMMJ.10085
Agich, G. J. (2007). Autonomy as a problem for clinical ethics. In Nys, T., Denier, Y. & Vandevelde, T. (eds.), Autonomy & paternalism: Reflections on the theory and practice of health care. Peeters, 5–71.
Ahn, S. (2021). Stream your brain! Speculative economy of the IoT and its pan-kinetic dataveillance. Big Data & Society, 8. https://doi.org/10.1177/20539517211051
Anderson, J. (2014). Regimes of autonomy. Ethical Theory and Moral Practice, 17, 355–368.
Anesi, G. L. (2012). The “decrepit concept” of confidentiality, 30 years later. Virtual Mentor, 14, 708–711.
Arrieta Valero, I. (2019). Autonomies in interaction: Dimensions of patient autonomy and non-adherence to treatment. Frontiers in Psychology, 10.
Ashcroft, R. E., Dawson, A., Draper, H., & McMillan, J. R. (Eds.). (2007). Principles of health care ethics. Wiley.
Azyabi, A., Karwowski, W., Hancock, P., Wan, T. T. H., & Elshennawy, A. (2022). Assessing patient safety culture in United States hospitals. International Journal of Environmental Research and Public Health, 19.
Bartneck, C., Lütge, C., Wagner, A., & Welsh, S. (2021). What is AI? In Bartneck, C., Lütge, C., Wagner, A. & Welsh, S. (eds.), An introduction to ethics in robotics and AI. Springer, 5–16. https://doi.org/10.1007/978-3-030-51110-4_2
Beauchamp, T. L., & Childress, J. F. (2019). Principles of biomedical ethics (8th ed.). Oxford University Press.
Beisbart, C., & Räz, T. (2022). Philosophy of science at sea: Clarifying the interpretability of machine learning. Philosophy Compass, 17, e12830.
Beltran-Aroca, C. M., Girela-Lopez, E., Collazo-Chao, E., Montero-Pérez-Barquero, M., & Muñoz-Villanueva, M. C. (2016). Confidentiality breaches in clinical practice: What happens in hospitals? BMC Medical Ethics, 17, 52. https://doi.org/10.1186/s12910-016-0136-y
Ben-Moshe, N. (2023). The physician as friend to the patient. In Jeske, D. (ed.), The Routledge handbook of philosophy of friendship. Routledge, 93–104.
Berwick, D. M., Shojania, K. G., & Atchinson, B. K. (2015). Free from harm: Accelerating patient safety improvement fifteen years after To Err Is Human. National Patient Safety Foundation. Available online at https://www.ihi.org/resources/Pages/Publications/Free-from-Harm-Accelerating-Patient-Safety-Improvement.aspx. Accessed 13 Aug 2023.
Boddington, P. (2017). Towards a code of ethics for artificial intelligence (Artificial Intelligence: Foundations, Theory, and Algorithms). Springer.
Borza, L. R., Gavrilovici, C., & Stockman, R. (2015). Ethical models of physician-patient relationship revisited with regard to patient autonomy, values and patient education. Revista Medico-Chirurgicală a Societăţii de Medici şi Naturalişti din Iaşi, 119, 496–501.
Braveman, P. A., Kumanyika, S., Fielding, J., LaVeist, T., Borrell, L. N., Manderscheid, R., & Troutman, A. (2011). Health disparities and health equity: The issue is justice. American Journal of Public Health, 101(Suppl 1), 149–155. https://doi.org/10.2105/AJPH.2010.300062
Braveman, P., Arkin, E. B., Orleans, T., Proctor, D. C., Acker, J., & Plough, A. L. (2019). What is health equity? Behavioral Science & Policy, 4, 1–14.
Brody, B. (1988). Moral theory and moral judgments in medical ethics. Kluwer Academic Publishers.
Büchi, M., Festic, N., & Latzer, M. (2022). The chilling effects of digital dataveillance: A theoretical model and an empirical research agenda. Big Data & Society, 9(1). https://doi.org/10.1177/20539517211065
Chadwick, R. F., & Schüklenk, U. (2020). This is bioethics: An introduction. Wiley Blackwell.
Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality and Safety, 28(3), 231–237. https://doi.org/10.1136/bmjqs-2018-008370
Charles, C., Gafni, A., & Whelan, T. (1997). Shared decision-making in the medical encounter: What does it mean? (or it takes at least two to tango). Social Science & Medicine, 44, 681–692.
Childress, J. F. (1990). The place of autonomy in bioethics. The Hastings Center Report, 20, 12–17.
Childress, J. F. (1997). The normative principles of medical ethics. In R. M. Veatch (Ed.), Medical ethics (2nd ed., pp. 29–55). Jones and Bartlett.
Coeckelbergh, M. (2020). AI ethics. The MIT Press.
Cohen, J. (2000). Patient autonomy and social fairness. Cambridge Quarterly of Healthcare Ethics, 9, 391–399. https://doi.org/10.1017/s0963180100903116
Dalton, C. M., Taylor, L., & Thatcher, J. (2016). Critical data studies: A dialog on data and space. Big Data & Society, 3, 2053951716648346.
Daniels, N. (2000). Accountability for reasonableness. BMJ (Clinical Research Ed.), 321, 1300–1301. https://doi.org/10.1136/bmj.321.7272.1300
Daniels, N. (2001). Justice, health, and healthcare. The American Journal of Bioethics, 1, 2–16. https://doi.org/10.1162/152651601300168834
Daniels, N. (2007). Just health: Meeting health needs fairly. Cambridge University Press.
Daniels, N., Kennedy, B. P., & Kawachi, I. (1999). Why justice is good for our health: The social determinants of health inequalities. Daedalus, 128, 215–251.
Degli Esposti, S. (2014). When Big Data meets dataveillance: The hidden side of analytics. Surveillance and Society, 12(2), 209–225. https://doi.org/10.24908/ss.v12i2.5113
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance and Society, 12, 197–208. https://doi.org/10.24908/ss.v12i2.4776
Dunn, M., & Hope, T. (2018). Medical ethics: A very short introduction (2nd ed.). Oxford University Press.
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28, 645–666.
Emanuel, E. J., & Emanuel, L. L. (1992). Four models of the physician-patient relationship. JAMA, 267, 2221–2226.
Engel, G. L. (1977). The need for a new medical model: A challenge for biomedicine. Science, 196, 129–136. https://doi.org/10.1126/science.847460
Entwistle, V. A., Carter, S. M., Cribb, A., & McCaffery, K. (2010). Supporting patient autonomy: The importance of clinician-patient relationships. Journal of General Internal Medicine, 25, 741–745.
Floridi, L. (2013). Distributed morality in an information society. Science and Engineering Ethics, 19, 727–743.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Flott, K., Durkin, M., & Darzi, A. (2018). The Tokyo declaration on patient safety. BMJ, 362, k3424.
Flott, K., Maguire, J., & Phillips, N. (2021). Digital safety: The next frontier for patient safety. Future Healthcare Journal, 8, e598–e601. https://doi.org/10.7861/fhj.2021-0152
Frankena, W. (1973). Ethics. Prentice-Hall.
Frosch, D. L., & Kaplan, R. M. (1999). Shared decision making in clinical medicine: Past research and future directions. American Journal of Preventive Medicine, 17, 285–294.
Fuchs, C., & Chandler, D. (2019). Introduction. In C. Fuchs & D. Chandler (Eds.), Digital objects, digital subjects: Interdisciplinary perspectives on capitalism, labour and politics in the age of Big Data (pp. 1–20). University of Westminster Press.
Gilson, L. (2003). Trust and the development of health care as a social institution. Social Science & Medicine, 56(7), 1453–1468. https://doi.org/10.1016/s0277-9536(02)00142-9
Gitelman, L., & Jackson, V. (2013). Introduction. In L. Gitelman (Ed.), ‘Raw Data’ is an Oxymoron (pp. 1–14). MIT Press.
Glannon, W. (2005). Biomedical ethics. Oxford University Press.
Gogoshin, D. L. (2021). Robot responsibility and moral community. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.768092
Goold, S. D. (2002). Trust, distrust and trustworthiness. Journal of General Internal Medicine, 17, 79–81.
Grote, T., & Keeling, G. (2022). Enabling fairness in healthcare through machine learning. Ethics and Information Technology, 24, 39. https://doi.org/10.1007/s10676-022-09658-7
Guidi, C., & Traversa, C. (2021). Empathy in patient care: From ‘Clinical Empathy’ to ‘Empathic Concern’. Medicine, Health Care and Philosophy, 24(4), 573–585. https://doi.org/10.1007/s11019-021-10033-4
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots and ethics. MIT Press.
Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22, 307–320.
Hall, M. A., Dugan, E., Zheng, B., & Mishra, A. K. (2001). Trust in physicians and medical institutions: What is it, can it be measured, and does it matter? The Milbank Quarterly, 79, 613–639. https://doi.org/10.1111/1468-0009.00223
Hall, M. A., Camacho, F., Dugan, E., & Balkrishnan, R. (2002). Trust in the medical profession: Conceptual and measurement issues. Health Services Research, 37, 1419–1439.
Halpern, J. (2003). What is clinical empathy? Journal of General Internal Medicine, 18(8), 670–674. https://doi.org/10.1046/j.1525-1497.2003.21017.x
Hasselbach, G. (2021). Data ethics of power: A human approach in the Big Data and AI era. Edward Elgar Publishing.
Hendren, E. M., & Kumagai, A. K. (2019). A matter of trust. Academic Medicine, 94.
Hildt, E. (2019). Artificial intelligence: Does consciousness matter? Frontiers in Psychology, 10, 1535. https://doi.org/10.3389/fpsyg.2019.01535
Hoffmaster, B. (1994). The forms and limits of medical ethics. Social Science & Medicine, 39, 1155–1164. https://doi.org/10.1016/0277-9536(94)90348-4
Hojat, M., Gonnella, J. S., Nasca, T. J., Mangione, S., Vergare, M., & Magee, M. (2002). Physician empathy: Definition, components, measurement, and relationship to gender and specialty. American Journal of Psychiatry, 159(9), 1563–1569. https://doi.org/10.1176/appi.ajp.159.9.1563
Hojat, M., Maio, V., Pohl, C. A., & Gonnella, J. S. (2023). Clinical empathy: Definition, measurement, correlates, group differences, erosion, enhancement, and healthcare outcomes. Discovery Healthcare Systems, 2, 8. https://doi.org/10.1007/s44250-023-00020-2
Hong, S. H. (2020). Technologies of speculation: The limits of knowledge in a data-driven society. NYU Press.
Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169, 615–626.
Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3, 2053951716674238. https://doi.org/10.1177/2053951716674238
Institute of Medicine (US) Committee on Quality of Health Care in America, Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (Eds.). (2000). To Err Is Human: Building a safer health system. National Academies Press (US).
Iott, B. E., Campos-Castillo, C., & Anthony, D. L. (2019). Trust and privacy: How patient trust in providers is related to privacy behaviors and attitudes. American Medical Informatics Association Annual Symposium Proceedings, 2019, 487–493.
Jecker, N. S., Atuire, C. A., & Bull, S. J. (2022). Towards a new model of global health justice: The case of COVID-19 vaccines. Journal of Medical Ethics, medethics-2022-108165.
Johnson, D. G. (1994). Computer ethics. Prentice-Hall.
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10, 123–133. https://doi.org/10.1007/s10676-008-9174-6
Jones, G. E., & DeMarco, J. P. (2016). Bioethics in context: Moral, legal, and social perspectives. Broadview Press.
Jonsen, A. R., & Toulmin, S. (1988). The abuse of casuistry: A history of moral reasoning. University of California Press.
Jonsen, A. R., Siegler, M., & Winslade, W. J. (Eds.). (2022). Clinical ethics: A practical approach to ethical decisions in clinical medicine (9th ed.). McGraw Hill.
Kaba, R., & Sooriakumaran, P. (2007). The evolution of the doctor-patient relationship. International Journal of Surgery, 5, 57–65.
Kant, I. (2015). Critique of practical reason (Introduction by Reath, A. Trans.: Gregor, M.) (2nd ed.). Cambridge University Press.
Kant, I. (2017). The metaphysics of morals (Denis, L. (Ed.). Trans.: Gregor, M.) (2nd ed.). Cambridge University Press.
Kitchin, R. (2014a). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1. https://doi.org/10.1177/2053951714528481
Kitchin, R. (2014b). The data revolution: Big data, open data, data infrastructures and their consequences. SAGE.
Koops, B. J., Clayton Newell, B., Timan, T., Škorvánek, I., Chokrevski, T., & Galiča, M. (2017). A typology of privacy. University of Pennsylvania Journal of International Law, 38(2), 483–575.
Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31, 388–409.
Kottow, M. H. (1986). Medical confidentiality: An intransigent and absolute obligation. Journal of Medical Ethics, 12, 117–122.
Kuhn, T. (2012). The structure of scientific revolutions. University of Chicago Press.
Lampert, B., Unterrainer, C., & Seubert, C. (2019). Exhausted through client interaction—Detached concern profiles as an emotional resource over time? PLoS One, 14(5), e0216031. https://doi.org/10.1371/journal.pone.0216031
Liu, B. (2021). In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. Journal of Computer-Mediated Communication, 26, 384–402.
MacIntyre, A. (2007). After virtue: A study in moral theory (3rd ed.). University of Notre Dame Press.
Mackenzie, C. (2021). Relational autonomy. In Hall, K. Q. & Ásta (eds.), The Oxford handbook of feminist philosophy. Oxford University Press, 374–384. https://doi.org/10.1093/oxfordhb/9780190628925.013.29
Margulis, S. T. (2011). Three theories of privacy: An overview. In Trepte, S. & Reinecke, L. (eds.), Privacy online: Perspectives on privacy and self-disclosure in the social web. Springer, 9–17. https://doi.org/10.1007/978-3-642-21521-6_2
Marmot, M., & Allen, J. J. (2014). Social determinants of health equity. American Journal of Public Health, 104, S517–S519.
Martens, D. (2022). Data science ethics: Concepts, techniques, and cautionary tales. Oxford University Press.
Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt.
McCullough, L. B., Coverdale, J. H., & Chervenak, F. A. (2020). Trustworthiness and professionalism in academic medicine. Academic Medicine, 95. https://doi.org/10.1097/ACM.0000000000003248
Mead, N., & Bower, P. (2000). Patient-centredness: A conceptual framework and review of the empirical literature. Social Science & Medicine, 51, 1087–1110. https://doi.org/10.1016/s0277-9536(00)00098-8
Mechanic, D. (1998). The functions and limitations of trust in the provision of medical care. Journal of Health Politics, Policy and Law, 23, 661–668.
Meyers, D. T. (2005). Decentralizing autonomy: Five faces of selfhood. In Anderson, J. & Christman, J. (eds.), Autonomy and the challenges to liberalism: New essays. Cambridge University Press, 27–55. https://doi.org/10.1017/CBO9780511610325.004
Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141–163.
Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics, 22, 303–341.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3. https://doi.org/10.1177/2053951716679679
Moor, J. H. (1997). Towards a theory of privacy in the information age. ACM SIGCAS Computers and Society, 27.
Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2020). The ethics of AI in health care: A mapping review. Social Science & Medicine, 260, 113172. https://doi.org/10.1016/j.socscimed.2020.113172
Mosco, V. (2014). To the cloud: Big Data in a turbulent world. Routledge.
Nagel, E. (1961). The structure of science: Problems in the logic of scientific explanation. Harcourt, Brace & World.
Nickel, P. J. (2019). The ethics of uncertainty for data subjects. In Krutzinna, J. & Floridi, L. (eds.), The ethics of medical data donation. Springer, 55–74. https://doi.org/10.1007/978-3-030-04363-6_4
Nunes, R., & Rego, G. (2014). Priority setting in health care: A complementary approach. Health Care Analysis, 22, 292–303. https://doi.org/10.1007/s10728-013-0243-6
O'Brien, J., & Chantler, C. (2003). Confidentiality and the duties of care. Journal of Medical Ethics, 29, 36–40.
O'Neill, O. (2002). Autonomy and trust in bioethics. Cambridge University Press.
Pickering, J. B., Engen, V., & Walland, P. (2017). The interplay between human and machine agency. In Kurosu, M. (ed.), Human-computer interaction: User interface design, development and multimodality. Springer, 47–59.
Potter, V. R. (1988). Global bioethics. Michigan State University Press.
Rachels, J. (1975). Why privacy is important. Philosophy and Public Affairs, 4, 323–333.
Raley, R. (2013). Dataveillance and countervailance. In L. Gitelman (Ed.), ‘Raw Data’ is an Oxymoron (pp. 121–145). MIT Press.
Reich, W. T. (1994). The word “Bioethics”: Its birth and the legacies of those who shaped it. Kennedy Institute of Ethics Journal, 4(4), 319–335. https://doi.org/10.1353/ken.0.0126
Richards, N. M., & King, J. (2014). Big data ethics. Wake Forest Law Review. Available at SSRN: https://ssrn.com/abstract=2384174. Accessed 1 Mar 2023.
Richterich, A. (2018). The Big Data agenda: Data ethics and critical data studies. University of Westminster Press.
Roache, R. (2018). Psychiatry's problem with reductionism. Philosophy, Psychiatry, and Psychology, 26, 219–229.
Roessler, B. (2004). The value of privacy. Polity.
Roessler, B. (2018). Three dimensions of privacy. In Van Der Sloot, B. & De Groot, A. (eds.), The handbook of privacy studies: An interdisciplinary introduction. Amsterdam University Press, 137–142.
Ruger, J. P. (2004). Health and social justice. Lancet, 364, 1075–1080. https://doi.org/10.1016/S0140-6736(04)17064-5
Ruger, J. P. (2020). Social justice as a foundation for democracy and health. BMJ, 371, m4049.
Ruger, J. P., & Horton, R. (2020). Justice and health: The Lancet–Health Equity and Policy Lab commission. Lancet, 395, 1680–1681.
Russo, F., Schliesser, E., & Wagemans, J. (2023). Connecting ethics and epistemology of AI. AI & Society. https://doi.org/10.1007/s00146-022-01617-6
Samek, W., & Müller, K.-R. (2019). Towards explainable artificial intelligence. In Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K. & Müller, K.-R. (eds.), Explainable AI: Interpreting, explaining and visualizing deep learning. Springer, 5–22. https://doi.org/10.1007/978-3-030-28954-6_1
Sammer, C. E., Lykens, K., Singh, K. P., Mains, D. A., & Lackan, N. A. (2010). What is patient safety culture? A review of the literature. Journal of Nursing Scholarship, 42, 156–165.
Sarkar, S. (1992). Models of reduction and categories of reductionism. Synthese, 91, 167–194. https://doi.org/10.1007/BF00413566
Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756
Secker, B. (1999). The appearance of Kant's deontology in contemporary Kantianism: Concepts of patient autonomy in bioethics. Journal of Medicine and Philosophy, 24, 43–66.
Sen, A. (2002). Why health equity? Health Economics, 11, 659–666. https://doi.org/10.1002/hec.762
Sharma, T., & Arunima. (2021). Management of civil liberties during pandemic. Indian Journal of Public Administration, 67, 440–451.
Siegler, M. (1982). Sounding boards. Confidentiality in medicine – A decrepit concept. The New England Journal of Medicine, 307, 1518–1521.
Siegler, M., Pellegrino, E. D., & Singer, P. A. (1990). Clinical medical ethics. Journal of Clinical Ethics, 1(1), 5–9.
Stahl, B. C. (2021). Artificial intelligence for a better future: An ecosystem perspective on the ethics of AI and emerging digital technologies (SpringerBriefs in Research and Innovation Governance). Springer. https://doi.org/10.1007/978-3-030-69978-9
Starke, G., Van Den Brule, R., Elger, B. S., & Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics, 36, 154–161.
Sugarman, J., & Sulmasy, D. P. (Eds.). (2010). Methods in medical ethics. Georgetown University Press.
Tegegne, M. D., Melaku, M. S., Shimie, A. W., et al. (2022). Health professionals' knowledge and attitude towards patient confidentiality and associated factors in a resource-limited setting: A cross-sectional study. BMC Medical Ethics, 23, 26. https://doi.org/10.1186/s12910-022-00765-0
Thomas, A., Kuper, A., Chin-Yee, B., & Park, M. (2020). What is “shared” in shared decision-making? Philosophical perspectives, epistemic justice, and implications for health professions education. Journal of Evaluation in Clinical Practice, 26, 409–418.
Thompson, I. E. (1979). The nature of confidentiality. Journal of Medical Ethics, 5, 57–64.
Tigard, D. W. (2021). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, 30, 435–447. https://doi.org/10.1017/S0963180120000985
Tucker, C. (2019). Privacy, algorithms, and artificial intelligence. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The economics of artificial intelligence: An agenda (pp. 423–438). University of Chicago Press. https://doi.org/10.7208/9780226613475-019
Van Der Horst, D. E. M., Garvelink, M. M., Bos, W. J. W., Stiggelbout, A. M., & Pieterse, A. H. (2023). For which decisions is shared decision making considered appropriate? – A systematic review. Patient Education and Counseling, 106, 3–16. https://doi.org/10.1016/j.pec.2022.09.015
Van Riel, R. (2014). Conceptions of reduction in the philosophy of science. In Van Riel, R. (Ed.), The concept of reduction. Springer, 153–183.
Vaughn, L. (2022). Bioethics: Principles, issues, and cases (5th ed.). Oxford University Press.
Veatch, R. M. (1997). Medical ethics (2nd ed.). Jones and Bartlett.
Walter, J. K., & Ross, L. F. (2014). Relational autonomy: Moving beyond the limits of isolated individualism. Pediatrics, 133(Suppl 1), 16–23. https://doi.org/10.1542/peds.2013-3608D
Whitley, E. A. (2009). Informational privacy, consent and the "control" of personal data. Information Security Technical Report, 14, 154–159.
Wolfensberger, M., & Wrigley, A. (2019). Instrumental utility of trust. In Wrigley, A., & Wolfensberger, M. (Eds.), Trust in medicine: Its nature, justification, significance, and decline. Cambridge University Press, 145–161.
Zuber, N., Kacianka, S., & Gogoll, J. (2022). Big data ethics, machine ethics or information ethics? Navigating the maze of applied ethics in IT. arXiv, abs/2203.13494.
Zwitter, A. (2014). Big data ethics. Big Data & Society, 1, 2053951714559253. https://doi.org/10.1177/2053951714559253
Part II
Ethical Analysis
Chapter 5
Practices
Abstract  In this chapter, I explore the concept of smart data practices, i.e. collecting and operationalizing data by using MAI. The underlying assumption is that epistemic practices of health professionals shape their patient-directed actions and interactions with the patient. Since MAI transforms epistemic practices into smart data practices, this also impacts patient-centered actions. Both aspects, data collection and operationalization, come with specific ethical implications. I analyze crucial issues like autonomy, bias, explainability, informed consent, and privacy protection through a critical lens and discuss strategies for overcoming challenges.

Keywords  Autonomy · Bias · Blockchain · Confidentiality · Explainability · Data ownership · Digital positivism · Informed consent

This chapter focusses on the impact of MAI on various practices in medicine. It frames these practices as smart data practices, i.e. the combination of big data approaches and MAI in the medical context. Smart data practices encompass collecting and operationalizing data supported by a broad spectrum of MAI technologies. The structure of the chapter follows these different practices and explores the technological means and their respective ethical implications. My analysis is based on the assumption that MAI leads to a shift in epistemic practices, which in turn has severe ethical implications. In order to analyze this shift and its ethical implications, I make use of several concepts from critical data studies.
5.1 Collecting Data
Smart data practices rely on the availability of individual health data. The EU's General Data Protection Regulation (GDPR) defines personal health information as "personal data related to the physical or mental health of a natural person" (GDPR, Art. 4(15)). This data is defined by quantity and quality: there has to be a sufficient number of data sets and the data has to meet certain quality standards. Furthermore, data has to be made accessible in terms of measuring, monitoring, and
surveillance and it has to be shared and stored. As we have seen in Chap. 3, technological innovations in sensors and monitoring, cloud computing, and mobile devices have significantly boosted the availability and accessibility of individual health data. It has become easier for healthcare professionals to access and obtain this data from a wide array of sources, such as the EHR and lab results. The real breakthrough in recent years, however, has been the means to obtain data from the everyday life of individuals, i.e. environmental and behavioral data. mHealth applications like smart wearables and IoT technologies like monitoring devices enable health professionals to datafy the bodily functions and daily activities of individuals in their private environment. Obtaining health data is thus no longer bound to the medical domain, meaning the hospital or GP's office. Individuals generate data outside of lab conditions in their daily life, which is of enormous value for modeling an individual's health situation over time and building predictive models. The technical means have thus extended the reach of medicine beyond the medical domain into the private realm and everyday life of individuals. Hence, patients increasingly become data subjects.

Of course, providing information on one's health and bodily functions has always been important in medicine. Patients have always been obliged to provide this information or subject themselves to tests and diagnostic measures. In a sense, patients have always been data subjects. However, the sheer magnitude of data involved and the fact that data collection is not bound to fixed dates and locations, but has become ubiquitous, changes the game. This has a severe impact on traditional practices of collecting data in medicine.

One could argue that what makes the clinical encounter special and distinguishes it from other social encounters is the fact that we willingly provide particularly sensitive information to (mostly) strangers. The reason why this works lies in the specific nature of the medical domain, which is constituted by several ethical and legal safeguards. Crucial principles in this regard are confidentiality, privacy protection, and trust (see Chap. 3). The introduction of smart data practices based on the big data approach and MAI technologies will have a transformative impact on these principles, which is why the following ethical analysis will focus on them.
5.1.1 Confidentiality and Informational Privacy
At the outset, the relation between the data hunger of MAI technologies and privacy seems to be a conflicting one (Ienca, 2023). The big data approach is all about gathering large amounts of data, whereas the idea of privacy suggests that there should be a safe space guarded against any external interventions. This is why from the beginning the digitization of medical information was identified as a potential conflict between the improvement of healthcare services and confidentiality (Barrows Jr. & Clayton, 1996). The crucial difference to conventional data management is the pooling of data from various sources and the subsequent easy accessibility. Hence, unauthorized access could reveal the total health information of an
individual, which makes all kinds of fraudulent actions possible. Fears of confidentiality breaches through data loss and data theft are among the main factors for the lack of acceptability and slow uptake of digital healthcare technologies (Campos-Castillo & Anthony, 2015; Iott et al., 2019). Following the distinction by Roessler (2004), smart data practices in collecting data first affect informational privacy.

At first glance, protecting patient data in the age of MAI might not seem to be such a difficult task. With confidentiality as a long-established principle and numerous rules, laws, and regulations sanctioning it, the healthcare domain seems better prepared for the impact of data-hungry technologies than any other field. Would it not be enough to simply revise existing standards of confidentiality and adapt them to the requirements of MAI technologies? This line of argumentation overlooks an important characteristic of MAI. Developing and disseminating these technologies often involves commercial entities like tech companies and a process of commercialization (Murdoch, 2021). After the implementation of MAI technologies into clinical practice, for-profit corporations might still be involved in maintenance and oversight. This means that a non-medical agent has access to patient data, which is an unprecedented situation, at least on this scale. To a certain extent, commercial players have hitherto been involved in collecting health data, for example pharma companies in drug development (Vayena & Blasimme, 2018). However, the role tech companies play in the development, dissemination, and maintenance of MAI technologies differs qualitatively from previous involvements of commercial agents. This has mainly to do with the proprietary knowledge these agents hold. The result is a twofold power imbalance: First, tech companies have the necessary knowledge to provide and maintain MAI-based services for healthcare providers, which gives them a position of superiority over those providers. Second, the big data divide implies an asymmetry between data subjects who provide data and the proprietors of MAI who own the means of data collection and processing (Vayena & Blasimme, 2018).

The effects of this twofold power imbalance have already become clear. One striking example is the public-private partnership between an NHS Foundation Trust in the UK and Alphabet (Google's holding company) in 2016 (Murdoch, 2021). The program included an app based on Alphabet's DeepMind AI for supporting the self-management of kidney injury by patients. As some claim, the company had a poor privacy policy, informed patients insufficiently about data use, and unlawfully accessed patient data via the app. Especially the fact that Alphabet was able to transfer highly sensitive data to the USA, a different jurisdiction, meant that patients lost all control over their own data. This example shows that although long-established principles and rules for protecting privacy and ensuring confidentiality exist in the healthcare domain, new data-intensive technologies might create loopholes and grey zones. Especially the fact that smart data practices mostly rely on commercial services and interinstitutional data exchange, often on an international scale, exacerbates this problem. The first requirement for protecting the informational privacy of patients is therefore an adaptation of existing laws and regulations to the genuine structural changes that MAI and the connected smart data practices imply.
One step in that direction could be to severely limit the transfer of data to jurisdictions other than the one in which it was obtained (Murdoch, 2021). An example of this approach is Art. 45 of the GDPR, which allows data transfers on the basis of an adequacy decision (GDPR). That means that the European Commission can decide whether a third country or international organization is able to ensure an adequate level of data protection and thus allow the data transfer. However, given the need for international cooperation in biomedical research, large data sets of high quality have to be available and accessible across borders. This is especially important since research data as well as training data for machine learning should be diversified in order to improve precision and avoid bias. Since data protection regulations between two countries or political bodies might not be fully compatible, e.g. between the EU and the USA, data transfer might become difficult. This might hamper research efforts and training opportunities for MAI applications and thus, in the end, also negatively affect patient outcomes. In short, protecting the informational privacy of patients might undermine the health benefits linked to MAI technologies.

How can we overcome this conundrum? One approach is to change existing legal regulations in accordance with the specific requirements of data exchange needed for biomedical research and MAI development. Sector-specific regulations could define a genuine set of rules within existing data protection regulations that specifically address health data exchange (Bradford et al., 2020). Such an approach acknowledges the benefit calculus of making data available and accessible for medical purposes when compared with other fields where big data approaches play a role, like retail. From this perspective, it makes a difference whether researchers, doctors, or MAI developers access medical data in order to improve health services or whether a retail company uses personal data for personalizing ads. In ethical terms, we are weighing the good of health, which is a universal good and therefore of universal interest, against the good of economic benefits, which is of partial interest. In the first case, all may benefit from easier data access, whereas in the second case, only certain stakeholders profit.

Although this approach might provide a solution for improving the legal framework of smart data practices, it does not solve the issue of protecting individuals against possible misuse of their health data. Apart from legal grey zones, fraudulent access to and use of patient data remain major threats to informational privacy and autonomy.
5.1.2 Informational Privacy and Autonomy
Several data harms threaten informational privacy and with it the autonomy of patients (Ballantyne, 2020): Privacy breaches may occur when unauthorized parties access patient data. This might lead to discrimination and stigma, since the individuals affected by the privacy breach might be characterized in a harmful way or associated with a certain social group, which in turn might imply social
disadvantages like marginalization. Individuals might also experience disempowerment when losing control over their own data. A lack of transparency with regard to the use of their data might lead to a feeling of disenfranchisement, giving data subjects the impression that their right to decide upon their own data has been taken away from them. Furthermore, data might be used for creating profit by big data utilizers without any benefit for the data subject, which implies exploitation. This shows how informational privacy and autonomy are linked: Without control over their own health data, data subjects are unable to ascertain who will use their data for what purposes. Hence, the potential data harms negatively affect the agency of data subjects as well as their capacity for decision-making.

How can informational privacy and autonomy of data subjects be protected? As we have seen, informed consent is a major tool for protecting patient autonomy. As such, it is a cornerstone of clinical medicine as well as biomedical research. Smart data practices connected to MAI technologies may affect the ability to give informed consent as well as its scope. This is due to the fact that in the era of MAI and big data, individual health data can be easily stored and exchanged, re-used and combined with other data, and processed for different purposes.

The first issue that arises in the context of informed consent is the so-called secondary use of individual health data. Secondary use of data occurs when data that has been obtained for a specific purpose is used for producing additional insights. We experience this every day when we access any website or use an app on our phones. When we first access a website, it stores cookies to identify us. This may be practical for us, since the next time we access the website our preferences (e.g. content we want to see or not to see) and any information we provided (like login details) have been stored and we do not have to enter them again. The secondary use of our data might not be so easy to spot. For example, the website may process the information stored in the cookies to show us ads tailored to the preferences we defined. A similar process is customary in medicine, where data obtained throughout a treatment process is stored to be accessed later for research purposes. The health data thus made available for secondary use might be available as treatment protocols or excerpts from the EHR in a database. It may also be available as tissue, liquids, or genetic material stored in a biobank. Usually, patients are informed at the beginning of treatment that their pseudonymized or anonymized health-related data (in whatever form) is stored for later research. This is part of the informed consent process, where doctors provide all relevant information concerning storage, data protection and security measures as well as the time limit.

The bigger picture here is the idea of a learning health care system (LHS) in which generating knowledge for continuously improving health care services is an integral part of medical practice (Faden et al., 2013; Institute of Medicine et al., 2007). This "new normal" in biomedical research marks a shift from small-scale single-site studies to large multi-site research based on the exchange of large data sets (Dove et al., 2014). The multi-centric, mostly international perspective in health research may conflict with national or local privacy regulations and policies as well as single-site ethical reviews (McLennan et al., 2019).
Given this possible conflict, an intense debate
has evolved around the question of how informed consent can be obtained properly and informational privacy protected with regard to secondary uses (Mikkelsen et al., 2019). This practice is not new and not necessarily tied to smart data practices; the initial context in which it was established and discussed was biobanks as research tools within the last three decades (Mikkelsen et al., 2019). At the center of these debates is a weighing of principles, manifesting in the question of whether the public good of gaining knowledge from research based on individual health data outweighs the right to privacy of individuals (Ploug, 2020; Porsdam Mann et al., 2016). Whereas some authors state that using individual health data, for example from the patient's EHR, for research purposes does not require explicit informed consent (Porsdam Mann et al., 2016), or that informed consent has to be limited when conflicting with other interests (Kluge, 2004), others argue that it is indispensable for protecting the interests of patients (Ploug, 2020; Helgesson, 2012). This question has gained new momentum with the increasing relevance of MAI, since the development of machine learning applications requires large amounts of training data, preferably from EHRs (Müller, 2022). It could therefore be argued that the benefits of these technologies for biomedical research, personalized medicine, and public health outweigh the privacy concerns of individuals.

Three questions arise here: First, how should we weigh the informational privacy of individuals against the public benefit of sharing health data? Second, how does the need for making individual health data available, triggered by the big data approach, affect informed consent? Third, what other ways of protecting autonomy and informational privacy are there besides informed consent? In the following, I will discuss the status of informational privacy as a good, the possible adjustments of informed consent in the light of MAI, and the technical as well as regulatory alternatives to informed consent, namely encryption and blockchain technologies, federated learning, data ownership models, and policy regulations.

Informational Privacy as a Good

As we have seen, protecting the informational privacy of patients is a major concern in medical ethics. Confidentiality as one of the major principles in the therapeutic relationship enables patient autonomy and trust. If we interfere with or limit the informational privacy of patients, we need good reasons for doing so. We have to weigh the benefits of informational privacy protection against the benefits of sharing health-related data of patients. One could claim that protecting the informational privacy of patients is indispensable for patient autonomy. Since respect for autonomy is an essential principle of medical practice, there are good reasons to prioritize the informational privacy of patients over all other goods. However, one could also state that although patient autonomy is crucial, it does not equal informational privacy. Put differently, protecting the autonomy of a patient does not necessarily exclude making use of their health data under certain conditions.

A rather strong argument in this regard is based on solidarity (Prainsack & Buyx, 2013). One definition of solidarity is accepting costs for the benefit of others. Costs,
in the context of providing individual health data for research purposes, translate mostly into the risks of data harm described earlier: unauthorized access, harmful use of data in a discriminatory way, disempowerment, and exploitation. As some authors argue, these risks of potential data harm are very small, even smaller than the health risks associated with conventional clinical trials (Prainsack & Buyx, 2013). Hence, taking these relatively small risks in the light of the expected benefits of big data health research would not imply a significant curtailing of patient autonomy. In fact, an approach based on solidarity recognizes autonomy as a crucial value and merely shifts the focus towards the information process. Instead of protecting patients against any supposed threat to their autonomy and thus putting them in a passive role, they should be able to make the decision whether they want to share individual health data themselves. In this solidarity-based model, the role of data subjects thus changes from that of passive data providers whose autonomy has to be protected by governance efforts and policies to that of active participants who can decide whether they are willing to take certain risks. This also implies a shift from a focus on risks towards a focus on harm mitigation. Instead of investing resources in preventing risks that are very small to begin with, measures should be implemented for mitigating data harms should they actually occur (Prainsack & Buyx, 2013).

The solidarity approach has the advantage of empowering autonomy by enabling data subjects to make their own choices regarding the potential risks of data sharing. The success of such an approach rests on properly informing data subjects, since only a solid information base enables a sound decision. This is the idea behind informed consent as the formal enabler of patient autonomy. Furthermore, a proper information process, and especially its formalization in procedures of informed consent, also enables data subjects to trust health care institutions and researchers (Carter et al., 2015).
5.1.3 New Perspectives on Informed Consent
Specific consent, i.e. the classical model of informed consent, focusses on providing specific information on the research the patient is participating in as well as on the nature of their involvement, the goals, and the expected output (Sheehan, 2011). This model of informed consent is obviously insufficient for biobank research as well as smart data practices, since it only covers one specific study (Mikkelsen et al., 2019). The crucial issue here is the uncertainty of future uses of health data. When this data is obtained, it is not possible to foresee all research purposes it might be used for. This makes it difficult for healthcare professionals to provide information appropriately. Another issue is that health data might potentially be stored indefinitely, especially given the availability and low cost of cloud storage. This is a problem for informed consent since there is no time limit to data use, which makes it even more difficult to foresee all potential contexts and purposes the data might be used for. Several models have been suggested that enrich informed consent and adapt it to the specific requirements of data-intense research.
In blanket consent, data subjects agree that their data can be used freely without the need for reconsent when new research efforts are undertaken (Nielsen & Kongsholm, 2022). Blanket consent can be considered a "carte blanche" for researchers, allowing them to use data for any research purpose without having to specify it or recontact the data subject (Thompson & Mcnamee, 2017). The idea is that providing information on the use of data for research in a transparent form is sufficient to protect the autonomy of data subjects. As long as data subjects are informed that their data will be used for research that is not foreseeable at the time the data is obtained, and consent to this, there is no need for further measures. While this approach would enable researchers to use data freely, it has been widely criticized. Some claim that blanket consent sacrifices the core principle of autonomy for the sake of a less bureaucratic research process or an alleged higher good in the form of public benefits from research (Caulfield, 2007). Others argue that blanket consent could have a societal impact apart from the often-supposed public health benefit of biomedical research. According to this view, research data could be used for purely commercial interests or even for supporting political agendas like profiling in the criminal context (Hansson et al., 2006). The notion of transparency as an enabler of well-informed decision-making by data subjects has also been criticized, since merely telling data subjects that the possible uses of their data are unforeseeable is insufficient to protect them from possible harm (Mongoven & Solomon, 2012). This is why some commentators claim blanket consent is only permissible within very narrow confines, i.e. a minimal risk of privacy breach and guaranteed oversight by ethics review boards (Thompson & Mcnamee, 2017).

Some commentators have suggested broad consent as a model specifically adapted to the requirements of biobank research (Cargill, 2016; Hansson et al., 2006; Helgesson, 2012; Maloy & Bass, 2020; Mikkelsen et al., 2019). This approach implies that biobanks obtain the consent of data subjects, e.g. blood or tissue donors, by providing general information on the overall scope of research the data will be used for, the general goals of the biobank, and its governance. That allows the biobank to re-use tissue samples, for example, in various research projects without having to obtain consent from the donors each time. Data subjects receive information on the general objectives and policies of the biobank as the basis for their decision whether to provide their data, without exactly knowing the specific nature of each research project their data might be used in. Data sets can be anonymized or pseudonymized (deidentified) so that they cannot be linked to a specific individual. Furthermore, data subjects are granted the right to withdraw their consent at any time, which gives them control over their own data. Should a research study go beyond the scope of what has been the object of broad consent, an ethics review board can decide whether using the data is permissible. Critics have pointed out that this model, although pragmatic from the point of view of researchers, is insufficient to protect the informational privacy of data subjects and does not fulfill the criteria for being called informed consent (Hofmann, 2009; Karlsen et al., 2011).
One argument in this regard is that broad consent insufficiently protects the autonomy of data subjects because it reduces privacy concerns to matters of technical manageability. Following this criticism, data safety
and privacy protection by technological means or by establishing rules does not solve all privacy issues.

Tiered consent offers a middle ground between specific consent and broad consent (Wiertz & Boldt, 2022). It allows data subjects to select and specify the uses of their data within a broad consent framework, such as the category of diseases, the area of research, and private or public research institutions (Mikkelsen et al., 2019). Data subjects may also specify their preferences, for example regarding being re-contacted or the handling of incidental findings (Wiertz & Boldt, 2022). This implies a structured consent procedure with multiple options for data subjects to choose from. Tiered consent gives data subjects more control over the uses of their data than broad consent, thus better enabling them to exercise their autonomy (Tiffin, 2018). At the same time, this model allows a broader use of data than specific consent. However, this also means that tiered consent requires a high level of competence on the part of data subjects to assess the available options in order to make an informed decision. It also implies a very elaborate and cumbersome consent procedure that requires a thorough categorization of research projects, goals, and data uses, which might be either impractical or inaccurate (Mikkelsen et al., 2019).

Dynamic consent is of a more recent date than the other models and has been specifically developed in the context of digital technologies. It is less a model that provides a theoretical or conceptual basis for obtaining and giving consent; rather, it aims to solve the issue of ongoing consent by technological means (Wiertz & Boldt, 2022). In dynamic consent, an ICT architecture enables the free flow of research data by using various encryption methods and integrating the consent preferences of data subjects (Williams et al., 2015). It is based on a communication interface that enables data subjects to actively engage in research activities (Kaye et al., 2015). This makes it easy to contact data subjects to obtain consent for each new research project (Steinsbekk et al., 2013). It replaces conventional paper-based consent forms, which are static and cannot be easily updated. This solution offers easy access and direct participation by data subjects. Researchers can inform data subjects about ongoing research projects and easily obtain consent when the scope or goals of research have changed. Data subjects can easily update their personal information as well as their preferences. In some approaches to dynamic consent, data subjects have the opportunity to directly contact researchers, ask questions about ongoing research, and partake in online surveys (Wiertz & Boldt, 2022). Hence, dynamic consent is a more participant-centered approach that focusses on data subjects as active partners within the research process (Kaye et al., 2015). Although this model has several advantages and is especially well-suited to the MAI context, some challenges arise. In order for dynamic consent to work, several requirements have to be fulfilled. On an institutional level, the necessary IT infrastructure has to be provided. This may be problematic for smaller healthcare institutions with limited financial or personnel resources. Furthermore, it requires a certain level of e-literacy on the part of data subjects. Handling the technology for interacting with other actors or supervising the use of one's own health data might be challenging for some data subjects.
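To make the mechanics of such an interface more tangible, the following minimal Python sketch models a data subject's consent status as an updatable, auditable record. It is a hypothetical illustration only: the class, field, and method names are invented for this example and do not correspond to any existing dynamic consent platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record of one data subject's dynamic consent preferences."""
    subject_id: str
    allowed_purposes: set[str] = field(default_factory=set)
    contactable: bool = True  # may researchers re-contact the subject?
    history: list[tuple[datetime, str]] = field(default_factory=list)

    def _log(self, event: str) -> None:
        # Every change is time-stamped, so the consent status stays auditable.
        self.history.append((datetime.now(timezone.utc), event))

    def grant(self, purpose: str) -> None:
        self.allowed_purposes.add(purpose)
        self._log(f"granted: {purpose}")

    def withdraw(self, purpose: str) -> None:
        self.allowed_purposes.discard(purpose)
        self._log(f"withdrawn: {purpose}")

    def permits(self, purpose: str) -> bool:
        # Researchers query the current preferences before each new use.
        return purpose in self.allowed_purposes

record = ConsentRecord(subject_id="subject-0042")   # fabricated identifier
record.grant("diabetes research")
record.withdraw("diabetes research")                # consent is revocable at any time
print(record.permits("diabetes research"))          # False
```

The point of the sketch is simply that consent becomes a queryable, revisable state with a time-stamped history, rather than a one-off signature on a static form.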
Meta consent combines broad consent and dynamic consent and also includes options for blanket consent (Ploug & Holm, 2015; Wiertz & Boldt, 2022). Meta consent is the most personalized consent model, since it not only allows data subjects to choose between different research purposes or studies, but between consent models. Data subjects can decide which type of consent they prefer for each study. This requires researchers to provide thorough information on research projects, specifying the goals, methods, and type of data use. It also requires a system that can handle the preferences of data subjects and allows them to access information. Some authors even suggest that meta consent should be made mandatory when entering adulthood, meaning that every individual would have to set their consent preferences when they come of age (Ploug & Holm, 2015). It is as yet unclear how this could be implemented given the administrative effort required.

The authorization model is a policy approach that challenges the legal and ethical framework for informed consent altogether (Caulfield et al., 2003; Greely, 1999). Following this model, data subjects give consent to research projects that cannot be clearly defined or foreseen. This form of consent also includes the option of re-contacting data subjects in unclear cases. Data subjects can withdraw their consent and set a time limit for the use of their data. They have control over third-party access and commercial uses. The advantage of the authorization model is that data subjects can set their preferences by specifying the uses, the group of agents with access, as well as their further involvement, e.g. whether and when they want to be contacted. In a way, data subjects can choose between elements of broad consent and blanket consent in this model, which some consider an empowerment of the autonomy of data subjects (Chow-White et al., 2015). A prerequisite for this model is thorough information of data subjects regarding possible risks to informational privacy. In order to prevent these risks, this model also requires a reformulation of trust, responsibility, and accountability in the form of a governance framework (Caulfield et al., 2003). Review boards or ombudspersons could provide additional protection and oversight.

In the era of big data medicine and MAI, some of the arguments from the debate around biobanks are still relevant, whereas others cannot be easily applied in the new setting. Since many of these arguments were put forward at the beginning of the century, one apparent difficulty lies in the technical aspects that have changed during the last two decades. The means of storing, exchanging, and analyzing data have fundamentally improved. Accessing data, especially on a global scale, is much easier today than it was twenty years ago. Furthermore, data is increasingly obtained in non-clinical settings via mHealth applications and IoT technologies. Practices of producing and obtaining data shift from relatively controlled environments like hospitals and biobanks to the "wild", meaning the everyday life and living environment of individuals. Hence, data safety and privacy protection have become significantly more difficult. One could therefore argue that the argument of low risk that was of relevance in the context of biobank research has become obsolete in the era of networked communication of health data (Chow-White et al., 2015). In addition, the very nature of research has changed through the introduction of MAI technologies.
Data mining and other applications of machine learning may
detect unexpected patterns within the data, thus opening new perspectives within a research project that could not have been foreseen at the outset. This is one of the most valuable features of MAI in particular and AI in general. When it comes to consent, however, this potential of MAI technologies to find the unexpected is problematic. It requires a form of consent that is even broader or more dynamic than the consent models suggested in biobank research. This exacerbates the already existing issues linked to these models.

Another new development is the increasing involvement of commercial agents. The aforementioned involvement of Alphabet with an NHS Foundation Trust and the result of this endeavor is a striking example of possible negative outcomes. The Alphabet case is no accident, but follows the inherent business logic of big data tech companies. These commercial agents have a vital interest in patient data itself and do more than provide software solutions for handling it in the medical context. Conventional ideas of consent, be it classical informed consent or the alternatives discussed above, do not account for this constellation, since they are tailored to the traditional setting of biomedical research. Of course, commercial agents have also been involved in this setting, for example for-profit biobanks or pharma corporations. Nowadays, however, corporations that are interested not only in providing a public service but also in turning the data involved into profit provide and control most of the infrastructure of data collection, storage, exchange, and analysis.

Finally, the objectives of data use have also changed throughout the last two decades. Nowadays, it is not only about obtaining health-related data for research projects, but about translating big data-based research to the bedside (Chow-White et al., 2015). The aim is to use research insights for personalized treatment. Cutting-edge MAI technologies are applied for the explicit purpose of tailoring medical treatment to the specific needs and requirements of an individual. In order to achieve this goal, the involvement of multiple agents is necessary. Personalized medicine works best when a broad array of healthcare professionals and caregivers have access to health data.

All these developments suggest that new models of informed consent adapted to MAI might just be one aspect of resolving the issues of informational privacy. We also have to consider other means of privacy protection that include technical aspects as well as matters of data ownership and policies. These measures are not an alternative to informed consent, but complement and in some respects enable it. I will discuss two types: technical solutions and data ownership models.
5.1.4 Technical Solutions: Blockchain and Federated Learning
In the era of interconnected databases, smart EHRs, and mHealth technologies, sophisticated technical solutions for protecting the informational privacy of data subjects are essential. Policies and consent models can only work as long as the
confidentiality of health data can be guaranteed by healthcare providers. A big data approach in combination with MAI technologies thus requires technical data security and data protection measures. First and foremost, technical solutions should authenticate data subjects' identities, allow secure access to health data, and protect the data flow between different agents (Altameem et al., 2022). This requires encryption of all sensitive data. Encryption transforms readable data into an unreadable form by means of a key; only by using the corresponding key can the data set be decrypted (Das & Namasudra, 2022). One of the most common and most important forms is end-to-end encryption, whereby encryption and decryption occur directly on the device that collects the data (e.g. a smartphone or tablet) before it enters any network or cloud (Moosavi et al., 2018). Various methods of encryption have been developed, some of which use sophisticated algorithms for data encryption (Das & Namasudra, 2022). However, encrypting data has its disadvantages. It makes data more difficult to process. Furthermore, encryption systems are often high maintenance and require domain knowledge. Hence, technical alternatives or complementary measures are needed.

The most recent development in this regard is blockchain-based security solutions (Mahajan et al., 2023). A blockchain is a decentralized ledger that is shared among a peer-to-peer network (Lu, 2019). Information is not stored in one single, centralized database, but across many servers of a network, each containing parts of the information. Data is grouped together in closed blocks containing a certain amount of information. These blocks are linked, with each block containing a hash of the information in the previous block. The information within the block is encrypted, but the block itself carries a time stamp and other transaction information that makes any transaction retraceable. Unique user identifiers and hashes ensure data provenance and ownership. The data blocks themselves cannot be altered, which ensures data integrity and allows for easy authentication. The blockchain thus forms a kind of distributed ledger that consists of blocks of immutable and yet retraceable information. This enables transparency, since every participant within the network can access the transaction information and thus retrace who has accessed and transferred the data and how.

Blockchain technology offers a solution to the aforementioned trade-off between privacy protection and the availability of health data for research purposes as well as for improving health services. Every blockchain user possesses a personal encryption key that is linked to the public key of all other users; encryption and decryption only work when both keys are combined. This makes it possible to create different degrees of access to a data set. For example, the combined keys may allow a patient to access the complete data set, whereas a researcher may only access selected data in anonymized form. The same mechanism works in terms of the identification of users: each user is identifiable via their individual encryption key. Blockchain technology thus makes it easy to define different authorization levels when it comes to data access, and at the same time enables selective data sharing and the validation of user identities.
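The hash-linking just described can be illustrated with a minimal sketch. The following Python fragment is a toy illustration only, not a distributed peer-to-peer ledger with real key management or consensus: it chains blocks by storing each block's hash in its successor, so that any retroactive manipulation becomes detectable. The example payloads are fabricated stand-ins for the kind of access-transaction metadata discussed above.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash a canonical JSON serialization of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], payload: dict) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),   # transaction metadata stays readable
        "payload": payload,         # in a real system this part would be encrypted
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)

def chain_is_valid(chain: list[dict]) -> bool:
    # Recompute every link: one altered block invalidates all later links.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list[dict] = []
append_block(ledger, {"access": "hospital A reads lab values"})
append_block(ledger, {"access": "researcher B reads anonymized subset"})
print(chain_is_valid(ledger))   # True
ledger[0]["payload"]["access"] = "tampered"
print(chain_is_valid(ledger))   # False: the manipulation is detectable
```

Real blockchains add distributed replication, public- and private-key identities, and consensus mechanisms on top of this basic data structure.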
A key feature of blockchains is so-called smart contracts, which give data subjects the opportunity to manage access to their data (Gaynor et al., 2020).
Smart contracts consist of three decision pathways: type of contract, health data tracking, and process and storage. Users can decide which type of contract they prefer, meaning what information they want to share with other participants within the network. Health data tracking enables users to track enrollment for health plans or research studies on a public health level. It is thus an easy way to track and manage the utilization of health data for different purposes. The process and storage capability makes it possible to store information on contracts the data subject has agreed to. This way, stakeholders, e.g. researchers, can access information on the level and form of consent the data subject has provided. All three aspects taken together enable data subjects to take control over their own health data and actively manage access.

One advantage of blockchain technology is the immutable audit trail (Kuo et al., 2017). Since users can only access and read but not alter information shared via blockchain, it excludes attempts at nefarious data manipulation. Another advantage lies in the decentralized distribution of health data (Kuo et al., 2017). There is no central instance, e.g. a data manager, who has control over the data. Instead, the peer-to-peer architecture of blockchain implies that the different nodes, i.e. stakeholders like patients, hospitals, care providers, payers, etc., can exchange data directly. Also, no single person, institution, or company owns the blockchain (Elangovan et al., 2022). This eliminates the aforementioned risks of reusing data for commercial purposes by for-profit agents and thus mitigates the effects of the big data divide. Finally, blockchain technology might also facilitate MAI development (Ng et al., 2021). It could help to provide the health data needed for training machine learning algorithms and at the same time protect the informational privacy of data subjects.

Taken together, the immutability of data, the decentralized architecture, the identifiable data provenance, the option of including smart contracts, and the overall privacy protection possibilities make blockchain an ideal solution for managing individual health data. Different applications in health care are possible (Elangovan et al., 2022; Ng et al., 2021): Blockchain can enable a trustworthy and efficient use of data stored in EHRs and make them available for different stakeholders, including researchers. It also allows secure data collection and sharing in mHealth applications. It can furthermore support the security of pharmaceutical supply chains by better identifying drugs, thus excluding the delivery of counterfeit medication. Similarly, the immutable and identifiable nature of blockchain data can enhance health insurance management, e.g. by making insurance claims easier to verify.

Blockchain technology addresses the protection of informational privacy by focusing on data exchange. Another perspective would be to consider the models that are built from this data. This is the approach of federated learning, sometimes also referred to as collaborative learning (Brisimi et al., 2018; Rieke et al., 2020; Xu et al., 2021). As the term "learning" suggests, this approach specifically aims at making data available for training algorithms and building better models based on machine learning techniques. As we have seen, machine learning and especially deep learning work best with large data sets.
That means that the accuracy of statistical models depends on the quality and volume of data that these models are built upon. In medicine and healthcare, accessing the required data is difficult due to
the fragmented nature of health data, which is distributed among different institutions, e.g. hospitals, and stakeholders like patients, GPs, and insurance companies. In addition, privacy concerns and the sensitive nature of health data make access and sharing for machine learning purposes difficult. Furthermore, collecting and curating health data is time-consuming, expensive, and requires a considerable amount of manpower (Rieke et al., 2020).

Federated learning is a collaborative machine learning paradigm that does not require sharing sensitive data between different stakeholders or institutions. Stakeholders can train machine learning algorithms locally, i.e. within an institution, and then share the resulting models. The stakeholders involved can thus share various model characteristics such as parameters and gradients and use them to refine their own models. Instead of centralized data collection and training of machine learning algorithms, federated learning thus allows distributed learning while at the same time protecting sensitive data within the institution where it has been collected (Xu et al., 2021). Regarding informational privacy, federated learning thus allows each institution or stakeholder to adhere to its own privacy policies and control access while at the same time sharing valuable insights from its data (Rieke et al., 2020). The federated learning paradigm might thus help to realize large-scale precision medicine with robust models, especially for prediction, which could also be better suited for gaining insights on rare diseases. One of the main features of this approach is that the models are scalable. When healthcare providers share large amounts of data, smaller institutions might not have the necessary capacities for data storage and curation. Sharing the model instead of the data allows these institutions to profit from the accuracy of a model trained on large data sets without the need to obtain, store, and curate large amounts of data themselves (Rieke et al., 2020). By circumventing the threats to informational privacy, federated learning makes it possible to unlock the full potential of machine learning techniques in medicine.

However, this paradigm does not solve all problems concerning informational privacy. Although federated learning implies sharing only the models, while the data does not leave the institution it originates from, it may nonetheless be possible to identify individuals (Topaloglu et al., 2021). In some models, unintentional memorization might occur, meaning that some information from the initial data the model is based upon still remains within the model (Carlini et al., 2018). This may make it possible to trace the data back to an individual. Another privacy risk occurs when one institution sends model updates to another, since these updates may also carry sensitive and retraceable information (Topaloglu et al., 2021). One solution could be to define data security standards that are binding for all agents participating in a federated learning architecture. Furthermore, model encryption techniques may provide a way to secure the information within the models from malicious access before they are shared (Topaloglu et al., 2021).

All in all, federated learning is a promising approach for protecting informational privacy while also enabling precision medicine based on machine learning techniques. The feasibility of this approach has been shown in the clinical setting (Brisimi et al., 2018), and it might be a fitting solution for many areas of MAI.
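Before turning to non-technical alternatives, the core mechanism of federated learning can be made concrete with a minimal sketch. The following Python fragment mimics a single round of federated averaging over locally fitted linear models; the three "hospitals" and their data are fabricated for illustration, and real deployments iterate this exchange, typically involve deep models, and add safeguards such as encrypted or noised updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Each site fits a linear model on its own patients via least squares;
    # the raw data never leaves the institution.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Fabricated local data sets at three hospitals (features -> risk score).
true_w = np.array([0.5, -1.2, 2.0])
sites = []
for n_patients in (120, 80, 200):
    X = rng.normal(size=(n_patients, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_patients)
    sites.append((X, y))

# One round of federated averaging: only the weight vectors are shared,
# combined in proportion to each site's sample size.
local_weights = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(y) for _, y in sites], dtype=float)
global_w = np.average(local_weights, axis=0, weights=sizes)

print(np.round(global_w, 2))  # close to the underlying [0.5, -1.2, 2.0]
```

Only the weight vectors cross institutional boundaries; as noted above, even these can leak information, which is why real systems often encrypt or perturb the shared updates.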
5.1.5 Non-technical Alternatives: Data Ownership Models
Another strategy for protecting informational privacy and autonomy is data ownership. The basic idea is that the question of who owns health data is crucial for determining the right to exclude others from using said data (Piasecki & Cheah, 2022). In the healthcare context, deciding on data ownership may thus affect research as well as clinical practice. Already at the dawn of the big data era, authors pointed out that the legal uncertainty resulting from the unclear ownership status of health data hampers efforts to make the best use of individual health data for the patient's benefit (Hall & Schulman, 2009). It is therefore important to determine whether health data is the property of data subjects, healthcare institutions, or researchers, and what role commercial agents (software developers, providers of cloud services) play in this (Mirchev et al., 2020). Several models of data ownership by data subjects have been proposed, whereby two groups can be discerned: private ownership models and public ownership models (Piasecki & Cheah, 2022). Some authors also consider a third option, namely that ownership is not the appropriate approach when it comes to individual health data and that specific regulations should be implemented rather than focusing on property law (Mirchev et al., 2020).

Private data ownership models emphasize the need for control over one's own health data as well as the protection of privacy (Piasecki & Cheah, 2022). Some argue that the fragmentation of health data across multiple institutions as well as the lack of incentives for sharing health data is a major barrier to efficient data use in healthcare (Kish & Topol, 2015). According to this view, the failure to integrate individual health data into clinical practice severely impacts the quality of care and the well-being of patients, since lack of access to health data is an important cause of preventable medical errors and even deaths. Another line of argumentation links the issue of health data ownership to democratization. In many jurisdictions, e.g. most states in the USA, data belongs to health care institutions, which to some is a remnant of paternalism (Kish & Topol, 2015). Granting patients property rights to their own health data would therefore mean an empowerment of patient autonomy. It would also enable patients as data subjects to exert control over whom to grant access to their health data. Furthermore, as some argue, since individuals pay for medical services, they should own their health data (Kish & Topol, 2015; Topol, 2019). The goal here is to build a health data economy driven by data ownership in order to enable patient-centered health care.

This raises the question of proprietization, i.e. whether data subjects should have the right to monetize their health data. The advantages of proprietization are manifold (Liddell et al., 2021): Owning one's own health data would enable informational self-determination, thus empowering the autonomy of data subjects (Hummel et al., 2021). Since health data has become a very lucrative financial asset, ownership would provide data subjects with an equitable share of the benefits. As some authors argue, if property rights to health data are not clearly defined, the more powerful actors will simply appropriate them. This is another effect of the big data divide. Hence, it would be an illusion to think health data can be considered as
no one's property (Purtova, 2015). Ownership by data subjects could also make it easier to determine who can use health data and for which purposes. Ownership could also make financial investments in health data technologies more lucrative, thus driving the innovation process (Liddell et al., 2021).

However, some authors question the principle of data ownership. According to this critical view, health data in particular has several properties that exempt it from being owned by data subjects (Liddell et al., 2021). First, the principle of confidentiality may set limits to property rights. The fact that confidentiality is the basis of the doctor-patient relationship creates a legal duty for doctors that cannot be circumvented by ownership. On the other hand, doctors may be obliged to disclose personal health information, e.g. for public health reasons. In these cases, ownership and property rights would not take effect. Hence, there would be no advantage when compared with existing legal regulations for privacy protection.

Second, it is hard to see how ownership can be exercised over anonymized health data. How can researchers, for example, know whom to contact for consent? How can data subjects as owners prove their ownership? Since property rights are always limited in scope, which in the context of health data proves to be very narrow, there is no reason to think that an ownership model would enable autonomy or protect privacy better than existing regulations.

Third, when it comes to health data ownership as a driving force for investments, it could be argued that there is no need for such an incentive. Health data is already produced in huge quantities, and the financial benefits of using it are incentive enough. The real need is investment in infrastructure, e.g. the interoperability of systems, for which ownership of data by data subjects would not be an incentive.

Fourth, the transferability of property, which also pertains to health data ownership, is an issue. Data subjects who own their health data may sell it, thus transferring their property rights to other agents. Given the power imbalance that exists between data subjects and big data collectors as well as big data utilizers (healthcare institutions or tech companies), a fair deal is unlikely. This is especially the case since there is also an imbalance when it comes to knowledge. Data subjects might not possess the knowledge to assess and evaluate the various possible uses of their health data. As a result, the fact that healthcare institutions and especially commercial agents could buy and sell health data could lead to a disempowerment of data subjects instead of empowering their autonomy. In addition, there is insufficient evidence that data subjects even want to own their health data. The existing evidence rather suggests that data subjects in the healthcare context are uncomfortable with the idea of the commercialization of health data as such (Ballantyne, 2020).

Fifth, personal ownership might hamper the big data approach. Individuals might simply refuse to share any of their health data, thus undermining attempts to establish precision medicine. This would ignore the enormous public health value of individual health data (Majumder et al., 2019).

Finally, the claim that health data is the property of data subjects is flawed given the way this data is produced (Liddell et al., 2021).
The process of generating health data is a co-creation that involves several stakeholders apart from the data subject, like health care professionals, researchers, commercial agents, and healthcare
institutions. One could therefore argue that data subjects provide the raw material for creating health data, but extracting, processing, and interpreting the data is a collaborative process that involves other agents (Ballantyne, 2020). Hence, the value of health data results from this process, not from the raw material itself, which challenges property claims by data subjects. Although private ownership of health data by data subjects might have certain advantages, it does not solve the big questions of privacy protection, autonomy, and consent. Protecting informational privacy requires rules and regulations on a legal and policy level, sound data governance, and technical solutions for data security (Ballantyne, 2020).

Public Ownership of Health Data

Public ownership models are often connected to the idea of open research (Piasecki & Cheah, 2022). Following this approach, health data possesses an immense value for public health purposes and should therefore be considered a common good. Anonymized or otherwise deidentified individual health data should therefore be stored in public repositories with open access for researchers. This is the basis for the concept of a medical information commons (Bollinger et al., 2019). Medical information commons are environments in which individual health data are available for research purposes and clinical applications. It is basically a non-commercial open access approach to data sharing and storage that exists in the public domain (Majumder et al., 2019). The concept was introduced by the National Academy of Sciences (NAS) in their report Toward Precision Medicine in 2011 (Majumder et al., 2019; National Research Council, 2011). According to the NAS, making health data openly available is a necessary requirement for realizing precision medicine. Ideally, a medical information commons exists as an ecosystem that links different data sharing initiatives or platforms for the purpose of wide accessibility. It provides a governance structure for a networked environment that makes it possible to integrate and connect various health data and make them available for different stakeholders (McGuire et al., 2019).

Although medical information commons were suggested over a decade ago, there is still an ongoing debate on how this concept should be realized. One approach is to organize this ecosystem in the form of a consumer-driven data commons (Evans, 2016). Using a citizen science approach, individuals (patients, research participants, users of smart health technologies) share their health data in a self-governing community. Instead of personal consent models where individuals decide upon access to their personal data, the consumer-driven data commons implies collective decisions on matters of access and use of aggregated data sets. In doing so, the participants follow a self-governance paradigm where they define binding rules for data sharing and access. This approach could be a way to empower data subjects and at the same time enable data use for the public good.

The advantage of public ownership is that it resolves some of the issues of private property approaches. As we have seen, it is problematic to consider individual health data as the property of a person since it derives from a collaborative process. Public ownership models acknowledge this collaborative aspect and also define health data
as a common good worthy of protection. Hence, public ownership could enable the actors involved to make the best use of health data and at the same time define obligations for stewardship and protection (Montgomery, 2017).

Although the idea of a commons approach to the ownership problem might resolve some issues associated with private ownership models, some open questions remain. Even if the data might be publicly owned, the infrastructure for data sharing and storage needs maintenance, curation, and funding (Piasecki & Cheah, 2022). It is unclear which stakeholder or institution would be responsible here. In a sense, the question of ownership shifts from the data itself to the data infrastructure. If commercial agents like corporations own the infrastructure, the same issues of power imbalances and access arise as with private ownership models. This power imbalance between data subjects and commercial actors is one of the crucial issues with commons in general and might affect medical information commons in particular (Purtova, 2017). Especially in commons models that do not define ownership and regard health data as no one's property, sooner or later corporations will step in and appropriate the data (Purtova, 2015). Since these commercial agents mostly own the infrastructure for data sharing and storage, e.g. cloud solutions, this outcome is very likely. Such a development might result in a situation where a few actors from the information industry control access to health data (Purtova, 2017). This would undermine the whole idea of a commons, namely to grant broad access to a wide variety of stakeholders and promote the common good. A state-owned medical information commons seems to be the only solution here, although it is unclear whether a state would be willing or able to fund such an enterprise (Piasecki & Cheah, 2022).

Another issue in medical information commons is the protection of agreements between data subjects and doctors or researchers. It is unclear how individually negotiated consent agreements can be protected in a medical information commons that pools deidentified health data (Piasecki & Cheah, 2022). This is especially problematic with approaches like the consumer-driven data commons, where collective decisions might overrule individual agreements. As a result, data subjects may not be able to build the necessary trust in the commons and may refuse to share some of their data. This would also undermine the whole purpose of the medical information commons. Additionally, public ownership models like the medical information commons seldom address data supply and the question of which forms of inclusion and exclusion of data are just (Prainsack, 2019). Participants of a medical information commons could decide upon inclusion and exclusion criteria for entering the commons, using data, benefitting from data, or participating in the governance of the data. These mechanisms of exclusion could negatively affect marginalized groups (Piasecki & Cheah, 2022).
5.1.6
Nontechnical Alternatives: Regulatory Models
As we have seen, both private and public ownership models offer several advantages but also have their downsides. Some authors argue that ownership and proprietary
rights of health data might not be the best way to protect the informational privacy of data subjects. The two main arguments are that ownership approaches are redundant given existing legal regulations for privacy protection and that they extend property rights in an unreasonable manner (Liddell et al., 2021). As an alternative to ownership approaches, regulatory models could be better suited for protecting the informational privacy of data subjects. This means that instead of regulating data access and use by defining property rights, these issues should be resolved on a policy level. A regulatory model has to address privacy protection, data security and accuracy, as well as transparency in order to balance the different interests involved (Liddell et al., 2021). Examples are the GDPR in the EU and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the US. These regulatory approaches are tools for bridging the gap between protecting individual interests and leveraging public benefits. They might also include or be based upon principles of confidentiality and models of consent.
5.2
Operationalizing Data
Operationalizing health data is the main function of medical AI. In Chap. 3 we have seen the various applications of machine learning techniques such as data mining, NLP, and computer vision for processing and analyzing data. Operationalizing data means that health care professionals make decisions and take actions based on the results of MAI-aided data analysis. In this chapter, I want to investigate how smart data practices based on MAI inform clinical decision-making and action. The main focus here is on how the mechanisms of interpreting and analyzing data shape the epistemology and practices of doctors, and what ethical issues arise from this.

The great promise of MAI lies in its sheer computational power and its ability to make sense of large amounts of data. We ascribe epistemological supremacy to big data approaches (dataism), but the only way to unlock this potential is to apply machine learning techniques as a form of super cognition. Some of the results in the medical and healthcare context are stunning. We have already seen how MAI systems are able to outperform human practitioners in various fields, from detecting melanoma to deciding on the best-fitting cancer treatment. As CDSS, MAI technologies could support clinicians in making decisions and choosing the best treatment option. In the form of robotic systems, they may also assist humans in difficult tasks such as precision surgery. As automated systems, MAI applications could even replace human decision-making or action altogether, thus reducing human error and personnel costs, enhancing patient safety, and improving quality of care.

Smart data practices are crucial here, since analyzing large data sets and deriving conclusions from them is the main function of MAI-supported actions. MAI technologies are supposed to be faster, more precise, and more efficient than humans in analyzing and interpreting data as well as in implementing the fitting actions. One reason for this superiority lies in the immense computing power of current MAI. Some CDSS can process more information per second than any human could
probably do in a lifetime. Theoretically, these systems can access all the knowledge there is on, say, cancer pathophysiology, diagnostics, and treatment, and update this knowledge any second. Not even the best oncologist could read every paper or textbook on cancer and process this knowledge within seconds to come up with an accurate diagnosis or the best-fitting treatment option. In a word, MAI could help us to overcome our cognitive limitations as humans. By making the most efficient use of existing medical knowledge and individual health data, MAI technologies could enable precision medicine and personalized treatments, allowing health care professionals to cater to the individual needs of patients. Hence, the epistemological superiority of smart data practices directly contributes to improving the well-being of patients.

The prerequisite for this is the datafication of patients, meaning the translation of all relevant aspects of an individual into quantifiable, digital data packages. In a sense, doctors have always conducted some form of datafication. Defining what information is relevant and how to deal with it is a core task of medical practice. Using scientific methods for reading the patient, i.e. collecting and processing relevant information about them, has been the hallmark of modern medicine since the nineteenth century. As we have seen earlier, the leading paradigm in contemporary medicine is EBM, which radicalizes this idea and states that medical treatment should rest on the best available empirical evidence. For doctors, this means integrating knowledge from textbooks, RCTs, and meta-reviews with the data in the given case and their own clinical experience. MAI and big data approaches change the game insofar as they introduce new methods of obtaining and analyzing individual health data on a hitherto unimaginable scale. This has the already mentioned advantages of improved precision and personalization. But especially these two aspects can only be fully understood in the light of digital positivism, i.e. the paradigm according to which data speak for themselves and have to be understood as objective representations of real-world entities.

There is no doubt that these technologies provide quantitatively more information on a single person than has been possible before. But it does not follow that smart data practices also provide doctors with better information in a qualitative sense. This is not only a matter of data quality as such, in terms of validity or reliability. The question is whether datafication as the main tool of digital positivism can capture all aspects of an individual relevant to medical treatment. The crucial aspect here is the way in which smart data practices impact the clinical encounter, more precisely the specific techniques doctors use to read their patients. One could call this reading of the patient a hermeneutic practice: Doctors have to deal with different kinds of information on the patient, from patient interviews to health records to lab results, and integrate this with their acquired knowledge as well as the best available empirical evidence. They have to make sense of this information and derive conclusions for making decisions regarding diagnosis and treatment.

One challenge in this regard is the epistemological gap between MAI-built models and the real patient. Any model only depicts certain aspects of the real world. This also pertains to algorithmic models, however large their database or
sophisticated their machine learning techniques might be. Modeling always implies focusing on certain features or traits of a phenomenon, a selection process which necessarily excludes others. Whether one uses MAI for pattern recognition or prediction, the models have intrinsic epistemic limitations. They rely not only on the quantity and quality of data, but also on the accuracy and appropriateness of the parameters, such as target variables. The focus on quantifiable data might thus lead to a reductionism that ignores crucial features of an individual's health situation. Overemphasizing some features and ignoring others might not only lead to negative outcomes in an epistemological sense. Ignoring determinants that fundamentally shape an individual's life, or overemphasizing certain features that tie an individual to a specific group, also has ethical implications. The result may be to define an individual by features that oversimplify their individuality and social situation, or in other words, bias.

Another challenge for clinicians is to explain and justify their MAI-supported decisions. Clinical decision-making requires more than the best available evidence. The principle of informed consent also demands that clinicians explain decisions, possible risks, and alternatives. That means that doctors have to make sense of MAI-processed data not only for themselves, but also for the patient. They have to make transparent how they made a decision and why the option they suggest is the best path to take. This may be difficult due to the complex nature of smart data practices and the technologies involved. Just as MAI technologies transform the epistemic practices of doctors, they also have an impact on explaining and justifying clinical decisions. Hence, explainability and transparency of decisions and their justification are major aspects. In the following, I analyze both challenges: the epistemic limitations of MAI in the form of reductionism and bias, and the implications for explaining and justifying decisions.
5.2.1
Reductionism
Reductionism basically means understanding patients as an aggregation of quantifiable data. The reductionist view is not genuinely connected to smart data practices or the digital era, but has been an issue since the dawn of modern medicine in the mid-nineteenth century. The scientific turn in medicine meant the introduction of scientific methods as the gold standard for medical research as well as diagnostic and therapeutic practices. Medical reasoning now largely depended on quantifiable empirical data as its evidence base. As a result, the epistemological practices of clinicians focussed on dividing the human body into smaller and thus more traceable units in order to enable a more precise investigation of the causes of disease (Ahn et al., 2006). The body as an aggregation of components became an important paradigm. The reductionist view focussed on isolating single components and identifying singular factors for treatment, such as pathogens. Doctors viewed contextual information as less important than identifying the single cause of disease.
This also led to additive treatments, where each disease is treated individually, thus ignoring the complex interplay between different factors within the human body as a whole (Ahn et al., 2006). I have already mentioned the centuries-old discussion about medicine as art or science. With the scientific turn, this debate was far from over, since the question was how to integrate scientific methods and scientific reasoning into the clinical practice of doctors, in which traditions, personal experience, and intuition still played an important role.

As we have seen in Sect. 4.3.1, reductionism can take many forms. Maybe the most prominent approach to understanding reductionism in medicine is Foucault's concept of the clinical gaze (Foucault, 1973, 1978). According to Foucault, the clinical gaze expresses the specific epistemological practice of clinicians trained in scientific medicine. Clinicians learn to focus on the pathological, i.e. the abnormal, when faced with a clinical case. That means that clinicians construct normality from the perspective of the abnormal. In order to do that, it is necessary to clearly discern between the normal and the pathological, which implies the use of scientific methods and quantifiable empirical data. Clinicians focus on those aspects of a patient that they can quantify, which are clinical data related to bodily functions. As a consequence, the clinical gaze objectifies the individual by reducing them to a body with certain functions or symptoms. This also affects therapeutic practices, since clinicians primarily treat diseases instead of patients. That means that the individuality and personhood of patients get lost. The patient becomes a bundle of biomedical facts based on which clinicians construct normality. Any deviation from this standard is considered pathological and requiring treatment. By opening a new field of knowledge of the body, the clinical gaze also provides the possibility of manipulating the body. This gives physicians the power to decide and to intervene, based on the specific type of knowledge the gaze creates. But this power is impersonal. The individual physician is merely an agent of the larger institutional context that he or she is embedded in. Thus, the clinical gaze has to be seen in the wider context of biopolitics, i.e. measures to regulate and control populations by controlling body and health (Foucault, 1978). In Foucault's view, reductionism is not merely a necessity of scientific medicine, but has a normative dimension. It serves as a mechanism for exerting power.

What is the connection between MAI technologies and medical reductionism? How do these technologies affect the clinical gaze? A closer look reveals that the role of MAI is ambivalent. When it comes to reductionism, MAI can be seen as a double-edged sword that may serve as a weapon against it but may also perpetuate it. As we have seen, overcoming the narrow focus of clinical medicine in favor of a more holistic view is the main goal of precision medicine. The P4 concept aims to go beyond the reductive approach to medicine by integrating data from various sources and settings. From the early 2000s on, data-driven personalization has been the new paradigm for stratification and the tailoring of medical services to individual needs (Prainsack, 2022). Based on the ideas of systems biology and systems medicine, data-driven technologies were supposed to enable a more holistic view of health and disease and with it a more precise and personalized treatment. Systems medicine in
particular meant a new paradigm, a shift from the classical reductionist view towards a holistic approach that considers the complex interplay of genomics, physiology, behavior, and environment (Federoff & Gostin, 2009). The promise of MAI technologies with a big data focus is to realize exactly that, enabling the integration of omics data with contextual factors such as the individual's behavior and environmental data (Vandamme et al., 2013).

However, MAI technologies also have the potential for perpetuating, if not exacerbating, reductionism. The main factor for the reductionist approach in digital medicine is the paradigm of digital positivism. According to this paradigm, quantified digital data is objective and has inherent value. Thus, digital data models are objective representations of reality. This epistemological approach shapes the way we perceive the world by implying that all phenomena can be reduced to digital data and analyzed by smart machines. It rests on dataism, the conviction that this reduction process yields immense benefits and is epistemologically superior to other forms of gaining knowledge about the world, in our case health. Datafication as a structural prerequisite signifies the practices that are involved in this reduction process. Regarding medicine and healthcare, the paradigm of digital positivism therefore implies that smart data practices are ideal tools for creating an objective and accurate evidence base for medical practice. In this view, digital data, which means quantifiable data in a specific format, is epistemologically superior to other forms of information. Hence, the highest level of accuracy and thus the best empirical evidence can be reached through datafying all aspects of a patient. In a way, this could be seen as a radicalization of Foucault's clinical gaze: From vital functions to behavioral aspects, from the patient's movement in their own living environment to their posts on social media platforms, potentially every bit of information on an individual is available and potentially relevant from a medical perspective. The clinical gaze in Foucault's understanding was bound to medical institutions like clinics. The digitally enhanced clinical gaze is potentially limitless, ubiquitous, and even depersonalized, since it may be fully automated.

One of the ethical risks of the reductionism connected to digital positivism is what I call the standardization of patients. This is a result of the need for quantifiable data that fit certain technical requirements, like specific data formats. Health data has to be standardized in order to be processed and analyzed by MAI technologies. In turn, the mechanisms of data processing and analytics themselves are standardized, since they rely on certain machine learning techniques. Standardization as such seems to be unproblematic, since it is a requirement of medicine also in non-digital settings. There are standards for blood pressure or the number of white blood cells that are indispensable for identifying hypertension or leukemia. But if we look at standardization in the context of reductionism connected to the digitally enhanced clinical gaze, certain risks arise. The reliance on quantifiable and readily available data marginalizes or even excludes all other forms of information on the patient (Prainsack, 2022). Although especially mHealth and IoT-applications can continuously collect behavioral and environmental data, this data is still limited to a specific format and defined by
specific parameters. In other words, what counts as relevant data has been predefined by standards that are in themselves reductionistic. The paradigm of MAI-aided precision medicine may thus be holistic in the sense that it integrates contextual information with the biomedical data from an individual patient. However, it is also reductionistic, since it reduces the contextual information to quantifiable data packages. Following this view, MAI technologies may enhance the reductionist clinical gaze. The technoscientific holism these technologies imply is holistic only in the sense that they enable the monitoring and control of an individual's whole life process (Vogt et al., 2016). Standardizing this information implies decontextualizing it, meaning ignoring factors like the patient's preferences and values as well as the social determinants that shape their life situation. The result is a uniqueness neglect that ignores the individuality of the patient by reducing them to standardized data and procedures (Longoni et al., 2019).

One striking example of how a reductionist MAI-enhanced clinical gaze creates biopower are technologies for emotion recognition and regulation (Hartmann et al., 2024). The purpose of these technologies is to detect the emotions of patients by using sensor technology as well as video surveillance. Emotion recognition and regulation is often part of AAL-systems, i.e. smart home technologies that allow patient care through automated systems or telehealth interventions. In a typical setting for emotion recognition and regulation, video cameras combined with facial recognition software would observe the patient in their home (Castillo et al., 2014). As soon as the system detects a negative emotion, for example continued frowning, an intervention is triggered. This may mean contacting healthcare professionals or other caregivers. It may also mean that automated interventions are launched. In some applications, the system would respond to negative emotions by adapting the light scheme in the living environment or playing relaxing music. The aim is to create positive emotions, since negative emotions are associated with health issues and in some cases the deterioration of health.
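The operational logic of such a system can be made explicit in a short schematic sketch. The following Python fragment is purely illustrative: the function names, emotion classes, confidence thresholds, and interventions are hypothetical stand-ins for whatever a concrete system implements, but the rigid stimulus-response structure is precisely the point.

# Schematic sketch of the surveillance-intervention loop described above.
# All names, classes, and thresholds are hypothetical illustrations.

NEGATIVE = {"sadness", "anger", "fear"}   # hard-coded "negative" emotion classes

def respond_to_frame(frame, classifier, environment, caregiver):
    # Classify one camera frame, then trigger a standardized response.
    # Why the emotion occurred plays no role, only its detected class.
    emotion, confidence = classifier.predict(frame)
    if emotion in NEGATIVE and confidence > 0.8:
        environment.set_lights("warm_dim")           # automated intervention
        environment.play_audio("relaxing_playlist")  # automated intervention
        if confidence > 0.95:
            caregiver.notify(f"negative emotion detected: {emotion}")

Everything that matters ethically sits in the two hard-coded elements: the fixed list of emotion classes deemed negative and the fixed repertoire of responses.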
This is a perfect example of the connection between reductionist standardization, normalization, and digital positivism. Emotions are framed as quantifiable data that can be detected by surveilling facial expressions. This is in itself a reductionist interpretation, since the very complex and multifaceted phenomenon of emotions is reduced to a simple scheme of stimulus and response. Also, the system simply classifies certain emotions as "negative" and "positive", regardless of the individual's point of view. As has been shown, most emotion recognition and response systems are based on a single study of the facial expression of emotions from the 1960s (Hartmann et al., 2024). The detection of emotions by facial expression is automated, which means that a machine learning algorithm is applied to classify facial expressions. When the system detects a negative emotion, it responds in some settings with an automated intervention, like the aforementioned light and music schemes. Sometimes, even a social robot is involved that interacts with the patients to distract them from their negative emotions (Castillo et al., 2014). This whole process involves standardization and normalization: It rests on standardized notions of what emotions are and how they manifest, as if emotions and their manifestations were equal for all individuals. It furthermore implements measures to restore a "normal" emotional state, meaning that it operates on some definition of which emotions are normal and which are pathological. These measures are themselves standardized, in that they rely on concepts of light or music that trigger positive emotions. In this whole process, the individual is a passive object of surveillance and intervention. Even more concerning, the system simply responds to what it identifies as a negative emotion; why this emotion occurred and what caused it are of no concern here. Even if we grant that positive and negative emotions can simply be identified by facial expression, and even if we agree that the automated responses are helpful, we must still admit that this system only treats symptoms. Of course, one could argue that this is meant as an emergency response system and that additional therapeutic interventions for the causes of the negative emotions can accompany it. However, there is a certain risk that this standardized and cheap way of dealing with mental health issues becomes the status quo.

This might be considered an extreme example, but it serves to demonstrate the interconnected mechanisms inherent in digital positivism. Far from being an isolated issue connected to one technology, reductionism may afflict all areas of medicine. Overemphasizing quantifiable data, especially their objectivity and impartiality, may lead to a standardized concept of health. Just as with emotions in the abovementioned example, other aspects of health and well-being could be reduced from an individual experience to a set of standardized data. Models constructed from this data but devoid of meaningful information could be taken as representations of real-world entities. Think of digital twin technology as discussed in Chap. 3. Here, models of an organ, a physiological system, or the whole body could be built by integrating multimodal data. Doctors could interact with the model and use it for risk prediction or drug testing. The digital twin could thus become a representation of the patient. Since this representation is based on digital data, doctors may ignore the fact that they interact with a virtual model that only represents a certain aspect of reality (Rubeis, 2022a).

Reducing the complexity of health to quantifiable data thus obscures its psychosocial elements, i.e. the individual health experiences, preferences, and values of an individual and the social determinants that shape their life situation (Lupton, 2014; Samerski, 2018). It also obscures the politics of measurement, the fact that quantified data is never fully objective, meaning free of normative aspects (Sharon, 2017). Several decisions made before measurement begins shape the data: A selection determining which data is relevant precedes every measurement. Parameters, which in the context of MAI means variables and classification labels, have been defined in advance. The purpose of data analysis has been decided upon. All of these advance decisions shape the meaning of data and potentially give them normative aspects, most notably by deciding about inclusion and exclusion. What is not deemed relevant will not show up in the resulting model. What is not represented in the model simply does not exist (Mittelstadt & Floridi, 2016). Apart from the operational logic of MAI technologies, i.e. the functioning of algorithms and techniques of machine learning, broader social factors may shape data.
We should not forget that digital data is the fuel of a new economic paradigm
called surveillance capitalism (Zuboff, 2019). We trade personal data, e.g. metadata, for supposedly free services like online mapping or comparison shopping engines. This data is used for surveilling, predicting, and shaping our behavior, e.g. in the form of personalized ads or nudging. The same technology could be used for designing MAI applications that target health for purposes that go beyond medical treatment. These technologies could be tools for realizing a governmental health agenda that aims to save costs by promoting a specific lifestyle. Commercial actors like corporations could also use these tools simply to make a profit by selling health services or wellness products. In both cases, the purpose of operationalizing health data will shape their nature. One could see this as a radicalization of the clinical gaze as introduced by Foucault. The MAI-enhanced clinical gaze could be a tool for exerting biopower on an unprecedented scale. The inherent risk of reductionism is therefore the depersonalization of healthcare services and medical treatment. Reductionism thus undermines personalization as the very goal of MAI.

Strategies: Thick data

Reductionism is a direct result of digital positivism, which privileges big data approaches above all other epistemic practices. In order to mitigate reductionism, it makes sense to consider more inclusive or holistic epistemic practices that recognize the qualitative aspects of information in healthcare. Such an approach is thick data, which aims to overcome reductionism and standardization by challenging the big data paradigm (Boellstorff, 2013; Richterich, 2018; Wang, 2013). Thick data emphasizes the irreducible contextuality of data, meaning that data never speaks for itself but is shaped and given meaning by the context of data generation as well as the type of measurement (Boellstorff, 2013). Hence, revealing the social context that connects data points instead of treating them as isolated entities is crucial here (Wang, 2013). Thick data does not only acknowledge this fact, but demands that meaning be actively created by including the perspective of data subjects. The basic assumption here is that narratives and storytelling are essential human practices for making sense of the world (Berger & Luckmann, 1991). In medicine, this implies patient narratives that inform about individual illness experiences as well as the emotional value and meanings of practices and technologies for people (Prainsack, 2015). The basic idea is that personalization needs meaningful data that represents not only the biomedical facts, but also the individual aspects of a patient. In a way, thick data means enriching quantified data with meaningful information, and machine learning with human learning (Wang, 2013). This is exactly what doctors do in their everyday practice already. Their heuristic practices aim to contextualize biomedical facts with the social and personal reality of an individual. Thick data approaches could be a tool that supports doctors in this task.

One way to operationalize thick data is to use ethnographic methods for integrating the patient perspective into the design process (Wang, 2013). Stakeholder workshops or interviews could serve as methods for engaging patients in the co-design of MAI technologies. These methods from social science already exist and have proven to be effective in generating thick data in the form of meaningful
information from real-life contexts. Thick data approaches for MAI co-design have already been successfully established. Ostrowski and colleagues (Ostrowski et al., 2021) demonstrated how to use ethnographic methods for the co-design of SAR. In a year-long process, they collected patient narratives on individual experiences with technologies and their visions for the future. The stories contained valuable insights for a patient-centered design of SAR and could be of immense value for robot designers.

MAI technologies rely in part on reductionism and standardized processes due to their operational logic. A certain degree of reducing the complexity of real-world phenomena to quantifiable data points is therefore inevitable (Rubeis, 2022b). However, there is no inherent trade-off between the benefits of datafication and personalization, as long as we acknowledge the epistemic limits of algorithmic models. A two-fold awareness is thus necessary: First, we have to acknowledge that algorithmic models are not mirror images of real-world entities, but interpretations based on specific rules. These rules, i.e. the statistical principles behind machine learning techniques, have their limitations. Hence, an algorithmic model only depicts certain aspects of reality. Although it does that extremely well, we must be aware that there is literally more to the story than what can be derived from quantifiable data. This is where thick data comes in as a necessary supplement that is able to depict the qualitative aspects of real-world phenomena, though also within certain limits, of course. Combining big data and thick data, machine learning and human learning, is therefore the best strategy to come as close to the social and individual reality of patients as possible. Second, even if it will not be possible to integrate all individual preferences into the technology, the insights from patient narratives may at least raise an awareness that individual health and illness experiences as well as interactions with technology are multifaceted and heterogeneous. Technology design should be aware of this heterogeneity and acknowledge the limits of reductionism and standardization. It is also important to raise awareness of the overall goal of technology design, which in the case of MAI should always be personalization. Whenever a design choice conflicts with this overall goal, e.g. by standardizing users instead of catering to their individual needs, preferences, and resources, designers should take an alternative path. The simple formula should be that technology has to adapt to users, not users to the technology.
5.2.2
Bias
In order to understand the bias problem connected to smart data practices, we first have to explore the epistemic practices of clinicians as such. By epistemic practice I mean the aforementioned reading of the patient, i.e. making sense of data. At the core of epistemic practices of clinicians is clinical heuristics, i.e. strategies for dealing with large amounts of data when making clinical decisions (Marewski & Gigerenzer, 2012). Clinicians are usually faced with large quantities of various data and have to make decisions under uncertainty. Since the available information is too
vast to be analyzed as a whole and since it cannot be known in advance which information is relevant, clinicians need to select the data that are most likely to be helpful in the given case. In order to do so, clinicians apply different strategies for including and excluding data. They might focus on just a few predictor variables, which they rank by importance. Data collection stops when the chosen predictors prove to be successful. Only if one reaches an impasse, e.g. in determining the cause of a symptom or choosing the right treatment option, is more data obtained. These strategies mostly rely on individual experience for knowing what counts as relevant data from similar cases, as well as on focussing on patterns rather than details. Clinical heuristics thus provides shortcuts for processing information under uncertainty and in situations where time is of the essence. It is a fast and frugal approach to decision-making and one of the core competencies doctors learn in their clinical practice (Gigerenzer & Gaissmaier, 2010).
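What such a fast and frugal strategy looks like can be illustrated with a minimal sketch. The following Python fragment is a hypothetical take-the-best heuristic; the cues, their ranking, and the decisions attached to them are invented for demonstration and carry no clinical authority.

# Minimal sketch of a fast-and-frugal ("take the best") decision strategy.
# Cues, ranking, and decisions are hypothetical illustrations.

def take_the_best(patient, ranked_cues):
    # Check cues one by one in order of assumed importance and stop at the
    # first one that fires, instead of weighing all available evidence.
    for cue, decision in ranked_cues:
        if patient.get(cue):        # None or False: cue absent or not measured
            return decision         # stop here; no further data is collected
    return "obtain more data"       # impasse: only now is the search widened

ranked_cues = [
    ("chest_pain", "suspect acute coronary syndrome"),
    ("elevated_troponin", "suspect myocardial injury"),
]
patient = {"chest_pain": True, "elevated_troponin": None}
print(take_the_best(patient, ranked_cues))  # -> suspect acute coronary syndrome

The sketch shows both the appeal and the vulnerability of the approach: it is fast and frugal precisely because most of the available information is never examined.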
Clinical heuristics has mostly been discussed in the context of dual process theory, which defines two types of reasoning (Kahneman, 2011): Type I is an intuitive approach that relies on intuition and context. It provides a way to deal with information in a given situation by focusing on the broad strokes, meaning patterns and tendencies rather than detailed information. Type II refers to an analytical and deliberate approach that implies a detailed analysis and assessment of the information involved in a given situation. Clinical heuristics is an example of Type I, an intuitive rather than an analytic way of thinking and decision-making. Its main advantage is the economical way it deals with available information. However, some commentators have pointed out that the reliance on the individual experience and competencies of doctors opens the gate to all sorts of cognitive fallacies and biases (Hughes et al., 2020; Whelehan et al., 2020). Hammond and colleagues (2021) give an extensive list of the different types of bias that might occur in clinical heuristics, of which I will discuss only a few in order to illustrate the shortcomings of this type of reasoning. Confirmation bias occurs when clinicians focus on data that corroborate or endorse their previous beliefs. This kind of bias is especially likely when clinicians draw on their individual professional experience for selecting relevant data. Framing bias refers to instances where the assessment and evaluation of data depend on the negative or positive context in which this data is presented. An important aspect here is the language used for describing phenomena, which might be inherently value-laden rather than objective. The status quo bias occurs when clinicians favor certain interpretations because they fit well with the current paradigm, i.e. a certain diagnosis or treatment option. Hence, interpretations that vary from the current standard are rejected or not even considered.

The bias problem is not just an epistemological issue, but has ethical implications. The different kinds of bias might result in discriminatory outcomes that negatively affect specific individuals or groups. One can distinguish two types of discrimination (Cossette-Lefebvre & Maclure, 2022): Direct discrimination refers to the practice of defining an individual or group by a single trait and making a decision based on this trait although it is not relevant for the outcome of the decision. An example would be to discard applicants for a job based on their gender although gender is irrelevant for
performing the expected tasks. Indirect discrimination means that although everyone is treated the same, some groups are disadvantaged. That means that although an inherently neutral rule or principle is applied to all individuals equally, some are in a better position to fulfill the criteria the rule implies. For example, a job as an accountant is advertised as open to everyone with the required qualifications, but since the company building only has stairs and neither ramps nor elevators, people with mobility impairments could not work there. The different kinds of cognitive bias may lead to both types of discrimination in the medical context. As a result, some patient groups are underserved while others receive adequate healthcare services. Hence, cognitive bias and discrimination directly impact clinical outcomes and patient well-being.

As some authors suggest, MAI technologies might reduce clinicians' cognitive biases and thus improve clinical outcomes (Hammond et al., 2021; Miller & Brown, 2018; Topol, 2019). Following this view, MAI technologies can be seen as the ultimate realization of Type II reasoning. Machine learning algorithms may process large amounts of data with an analytic scrutiny and at a speed that would be impossible for human practitioners. Hence, clinicians would not have to rely on shortcuts and intuition, which would strengthen the evidence base for clinical reasoning and decision-making. One could argue that MAI possesses two features that might diminish if not eliminate human bias: optimization and standardization (Cossette-Lefebvre & Maclure, 2022). By speeding up the decision-making process and standardizing procedures, the human factor essential to biased outcomes might be eliminated. Since machine learning focusses on empirical, objective data and detects correlations and patterns, and since MAI systems do not have any particular interests of their own, bias, so the argument goes, is virtually impossible. In other words, taking humans out of the equation could eliminate bias and with it unfair discrimination.

Although the sheer computational power of MAI systems might easily outperform any human when it comes to analytical reasoning (Type II), bias may still occur. In fact, evidence for the potential bias of computer systems has been available from early on. Friedman and Nissenbaum demonstrated this back in the 1990s, which makes the overconfidence in the epistemological superiority of computerized reasoning all the more surprising (Friedman & Nissenbaum, 1996). Friedman and Nissenbaum identify three categories of bias: preexisting, technical, and emergent. Preexisting bias refers to biased social practices, attitudes, and institutions. Technical bias is caused by the mechanics and operational logic of computerized data processing, while emergent bias occurs in the contexts of use. Following this approach, one could say that the risk for bias stems in part from the data material that MAI technologies process (preexisting bias), in part from their operational functioning (technical bias), and in part from the purposes for which these technologies are used (emergent bias), such as decision support.

Hence, several issues arise from the view of eliminating bias by outsourcing data analysis and decision-making to MAI systems: First, the very idea of objective data or raw data is in itself highly problematic. Contrary to the dogma of digital positivism, data is not inherently neutral or objective, but a product of social practices. Data does not simply exist,
it is produced. Data is not simply observed or collected, but shaped by the mechanisms of data collection. Second, social practices, interests, and the social context (surveillance capitalism) shape the technologies for analyzing and processing data. That means that machine learning processes and algorithmic reasoning depend on factors that might not be inherently neutral, objective, or fair. Third, digital positivism does not only affect the way we perceive a phenomenon, but also shapes our practices. When AI-generated data models are the evidence base for decision-making and action, biased data and biased mechanisms of data analytics will also translate into biased decisions and outcomes. In other words, if the data is discriminatory and the methods of making sense of this data are also discriminatory, the outcomes cannot be anything else but discriminatory. The mechanisms that lead to various forms of bias in MAI have therefore been adequately described as a bias cascade (AlHasan, 2021). This cascade starts with a biased data input that meets biased data processing, which in turn leads to biased outcomes. Hence, similar to Friedman and Nissenbaum's classification from the 1990s, one can distinguish three categories of bias in MAI: data bias, algorithmic bias, and outcome bias.

Data Bias

Data bias signifies that the data used for analysis or training algorithms already contains bias. In a way, one could see this as an example of one of the core principles in data processing, which is often described as "garbage in, garbage out". That means that the results machine learning techniques yield are only as good as the data input. In other words, the quality of data an AI-system is fed with directly affects the quality of its output. Hence, a data input containing bias will lead to a biased output. There are various forms of data bias, depending on the different mechanisms of data creation (Xu et al., 2022). One can distinguish between statistical bias and societal bias (Mitchell et al., 2021). Statistical bias refers to measurement errors that result in nonrepresentative sampling. It is also known as sampling bias or selection bias. When it comes to machine learning, it means a discrepancy between the data sample that is used for training an algorithm and the phenomenon this sample is supposed to represent. This type of bias occurs when the selection of the data sample focuses on a limited number of variables that is insufficient to adequately reproduce the complexity of a phenomenon and environment. Societal bias occurs when data represent unfair discrimination of individuals or groups. Even if the data sample is representative and the chosen variables adequately describe the phenomenon, this very phenomenon may express some form of discrimination. In medicine, sample bias occurs when sampling either ignores specific characteristics of a given population or when whole populations are underrepresented in the data sample. Common types of sample bias are gender bias, ethnic bias, and sociodemographic bias. None of these types of bias is specific to MAI; they have long been an issue in evidence-based medicine.
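The basic mechanism of sampling bias can be made concrete with a short simulation. The following Python sketch uses invented synthetic data and is not meant to model any real clinical phenomenon; it merely shows how a model trained on a sample dominated by one group concentrates its errors in the underrepresented group.

# Synthetic illustration of sampling bias: the training sample is dominated
# by group A, so the model's error concentrates in group B. All numbers
# and group definitions are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    X = rng.normal(size=(n, 2))
    # The label depends on feature 0 in group A but on feature 1 in group B,
    # mimicking, e.g., symptom patterns that differ between populations.
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

Xa, ya = make_group(1000, 0)    # group A: well represented in training
Xb, yb = make_group(50, 1)      # group B: underrepresented in training
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

Xa_test, ya_test = make_group(1000, 0)   # representative test sets
Xb_test, yb_test = make_group(1000, 1)
print("accuracy, group A:", model.score(Xa_test, ya_test))  # high
print("accuracy, group B:", model.score(Xb_test, yb_test))  # near chance

Nothing in this sketch is unfair by intent; the skewed sample alone produces the disparity, which is exactly the point of the "garbage in, garbage out" principle.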
Important in this context is the connection between sample bias and diagnostic bias, which results in health disparities for different social groups (Straw, 2020). One crucial example in this regard are sex and gender differences, which have long been an issue in clinical research. Sex differences manifest themselves in various clinical aspects such as disease prevalence, age of onset, as well as disease progression and severity (Buslón et al., 2022). There is evidence for clinically relevant sex- and gender-related differences in various contexts, such as chronic diseases, mental health disorders, cancer, autoimmunity, and neurological aging (Cirillo et al., 2020). One example is stroke, where women are affected at a different age than men and have a greater stroke severity (Carcel & Reeves, 2021). Despite this fact, biomedical research, anatomy, and pathophysiology have historically focused on the male body. Although there is a growing awareness of this issue and various initiatives have been launched to enable more inclusive research, sex and gender bias in clinical trials still persists (Steinberg et al., 2021). As a result, prevention, prognosis, diagnostic interpretation, and therapeutic decisions might be negatively affected by omitting female participants (Buslón et al., 2022). A crucial aspect in this regard is that sex and gender as well as ethnicity are complex phenomena that are constituted by biological as well as social and structural factors. In combination, these factors may affect health across intersecting identities, meaning that, for example, being male, female, or diverse does not have the same health implications in different ethnic groups or different socioeconomic contexts (McCradden et al., 2020). A reductionist MAI approach that focusses on one of these factors may not be able to adequately capture intersectional effects.

Another, more technical issue is missing data, signifying the phenomenon that MAI technologies are unable to perform properly because they lack data, e.g. on a specific patient group (Getzen et al., 2023). The issue of missing data is mostly linked to some problematic aspects of the EHR, first and foremost the lack of standardized formats across institutions and health care providers, which in turn may cause errors and selection bias. Another factor is that some institutions or healthcare providers might not have the financial resources to provide an EHR infrastructure. As a result, certain groups or regions might be cut off from access to this technology, which also means that their data will not be accessible to others. Hence, MAI technologies cannot be trained properly or build adequate predictive models for these groups. It is easy to see how missing data affects the performance of MAI technologies: When an algorithm cannot observe variables, they do not show up in the model, which in turn means that no outcome can be assigned to them (Gianfrancesco et al., 2018). As a result, biased smart data practices might further exclude underserved groups from the opportunities of MAI such as disease detection and risk prediction as the implementation of the technology progresses (Getzen et al., 2023). This effect further exacerbates the issues related to so-called data absenteeism, i.e. the lack of data on underprivileged groups in data-rich environments, which causes health disparities (Lee & Viswanath, 2020).
Algorithmic Bias

Algorithmic bias occurs when the output of an algorithm affects different individuals or groups differently in an unjust way (Kordzadeh & Ghasemaghaei, 2022). In most cases this applies to situations where decision-making processes rely on algorithms. The fact that algorithmic processing of data is seen as a hallmark of accuracy and objectivity could therefore obscure the risk of algorithmic bias. The basic problem here is again the paradigm of digital positivism, suggesting the epistemological superiority of smart data practices. Here, digital positivism does not only imply that data are objective and speak for themselves, but also that the mechanisms of data analysis are neutral, since they do not rely on human judgment. One aspect of this paradigm is the belief that a software or computer system cannot be discriminatory or biased, since these are purely human behavioral characteristics. This view focusses on the technical process of establishing statistical relationships between variables. As we have seen, it has been known since the 1990s that this process is in itself open to bias.

Consider data mining, for example, where the goal is to find statistical relations between data points. To achieve this, it is necessary to define target variables, i.e. those features or traits that the algorithm has to look for, and class labels that divide the values of the target variables into mutually exclusive groups (Barocas & Selbst, 2016). Furthermore, problem specification is necessary, meaning the underlying question one wants to answer or the goal one wants to achieve. Problem specification requires data scientists to translate a problem into the formalistic language of computers and formulate a question. Let us imagine an algorithm that is tasked with identifying heart attack symptoms in EHRs. The target variables would be the symptoms as described in the medical literature. The class labels would be those patients who reported these symptoms and those who did not. The problem specification, i.e. the goal of the data analysis, could be to create risk profiles for patients and predict the probability of future heart attacks.

This is an oversimplified example, but it suffices to demonstrate the inherent susceptibility of machine learning technologies to bias. A machine learning application is only as good, i.e. accurate, as its target variables, class labels, and problem specification. This is similar to the "garbage in, garbage out" principle linked to the underlying data that we have encountered earlier. In our example, the validity of the target variables, the significance of the class labels, and the accuracy of the problem specification determine the predictive power of the model. If, for example, we defined the common heart attack symptoms listed in most medical textbooks as target variables, we would expect an accurate predictive model. However, some heart attack symptoms are gender-specific, and women show symptoms of heart attacks that have mostly been overlooked (Dey et al., 2009). In many cases, the standard symptoms discussed in the literature are still those typical for male patients. The algorithm in our example will therefore most likely overlook many female risk patients. Although the algorithm itself might work properly and the data is diversified, the outcome might still be discriminatory.
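This susceptibility can be distilled into a deliberately crude sketch. The symptom lists and record structure below are hypothetical illustrations, not clinical knowledge; the point is that the discriminatory outcome sits entirely in the choice of target variables, before any learning takes place.

# Hypothetical sketch of the problem specification described above.
# Symptom lists are illustrative only, not clinical guidance.

TARGET_SYMPTOMS = {"chest_pain", "left_arm_pain"}   # "textbook" (male-typical) cues

def flag_risk(ehr_record):
    # Class label: 1 if the record mentions any target symptom, else 0.
    return int(bool(TARGET_SYMPTOMS & ehr_record["symptoms"]))

records = [
    {"sex": "m", "symptoms": {"chest_pain", "sweating"}},
    # Presentations more common in women (e.g. nausea, jaw or back pain)
    # are not among the target variables, so this record is never flagged:
    {"sex": "f", "symptoms": {"nausea", "jaw_pain", "fatigue"}},
]
print([flag_risk(r) for r in records])   # -> [1, 0]: the second case is missed

A model trained on labels produced this way can be perfectly accurate with respect to its own specification and still systematically miss female risk patients.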
An important aspect here is that machine learning focusses on correlation, not causation. Ideally, this probabilistic approach helps to identify patterns that indicate relevant medical information, for example the shape and color that are associated with melanoma in order to detect skin cancer on photographic images. However, the purely probabilistic focus on the correlation of variables might also have two negative side-effects: identifying meaningless patterns or finding patterns where none actually exist (Akter et al., 2021). In Sect. 3.2, we have already encountered two issues in algorithmic data processing: overfitting and underfitting (Walsh et al., 2020). Underfitting occurs when a model fails to identify relations between input and output variables. Overfitting means that a model performs well on training data but fails when fed with new data. In data science and informatics, these shortcomings are usually referred to as bias. This is not the understanding of bias that is prevalent in the social context, implying an unfair discrimination of specific individuals or groups due to certain features. However, underfitting and overfitting could be seen as the technological component of those types of bias that do not arise from the data themselves, but from processing them into models.

Another contributing factor is the social or economic context in which MAI technologies are developed. The contextual bias in MAI stems from the ways and location of its production. MAI technologies are mostly produced in high-resource settings such as medical research facilities or well-equipped hospitals (Price, 2019). The resulting algorithms and models are then specific to the context in which they were developed (Weissglass, 2022). As a result, it may be difficult to apply these algorithms and models to other settings, since they overgeneralize from the settings they were trained in. This is especially problematic since most MAI development takes place in the Global North, i.e. in high-income countries. MAI trained and developed in these settings may not be able to perform when applied in low- and middle-income countries (LMICs). The MAI system would always search for the patterns it is used to and dismiss patterns that are present, even prevalent, in the LMIC setting as noise. This is especially problematic since it undermines the very potential of MAI to mitigate health disparities on a global scale.

The issue here is the false perspective digital positivism suggests. By framing algorithmic decision-making as an objective computational process, it overlooks that this process is guided by human decisions and actions (Favaretto et al., 2019): Human actors such as data scientists define target variables, class labels, and outcomes of interest, thus dividing the outcomes into binary groups and specifying the problem or question that is to be addressed. This means that humans make the decisions what to look for, how to sort it, and what to do with it. Even if there were a way to eliminate or at least mitigate the technological causes of bias (underfitting and overfitting), there is still a human factor to consider.
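The technological causes just mentioned, underfitting and overfitting, can be illustrated with a standard toy example. The data, noise level, and polynomial degrees below are arbitrary choices for demonstration purposes only.

# Toy illustration of underfitting and overfitting with polynomial regression.
# Data, noise level, and degrees are arbitrary demonstration choices.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy signal
x_new = np.linspace(0, 1, 200)              # unseen inputs from the same range
y_true = np.sin(2 * np.pi * x_new)

for degree in (1, 3, 10):
    coeffs = np.polyfit(x, y, degree)
    train_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
    new_data_error = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
    print(f"degree {degree:2d}: train {train_error:.3f}, new data {new_data_error:.3f}")

# Typical pattern: degree 1 underfits (high error everywhere), degree 10
# overfits (low training error, larger error on new data), degree 3 generalizes.

The analogue in MAI is a model that reproduces the idiosyncrasies of its training setting perfectly and then fails quietly elsewhere.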
Outcome Bias

An outcome bias occurs when the outcome of data processing by a MAI application results in an unfair discrimination of individuals or groups. This is especially the case when the data processing serves as the evidence base for decision-making. The system may either support decisions by humans that cause bias or, in an automated setting, make those decisions itself.

Defining outcome bias as a separate category from algorithmic bias rests on the crucial distinction between fair algorithmic decisions and fair final decisions (Grote & Keeling, 2022). In the first case, an algorithm produces a classification or prediction as a result of data processing. In the latter case, a human, e.g. a doctor, decides on a diagnosis or treatment option. This distinction is important because, in a real-world clinical setting, algorithmic decisions mostly serve as guidance for decision-making by human doctors. That means that a fair or unfair algorithmic decision does not necessarily imply a fair or unfair final decision. In order to understand the ethical implications of algorithm-guided decision-making, it is therefore important to investigate the collaboration between MAI applications and humans.

One example is the use of decision support systems during the COVID-19 pandemic (Tsai et al., 2022). In the US, the Centers for Disease Control and Prevention (CDC) COVID-19 Forecast Hub collected data and models from dozens of institutions, researchers, and economic actors across the country. The aim was to incorporate the data for better forecasting of the pandemic, which in turn should support decision-making (Tsai et al., 2022). That means that MAI technologies were used as decision-making tools for policy makers to enable them to better allocate resources within the health care sector. Some commentators have raised concerns that the use of MAI-based decision support systems might further exacerbate existing health disparities within the US health system. The main issue here is again the definition of target variables and class labels. The crucial clinical endpoints (mortality, hospitalizations, and ICU admissions) were also the most important target variables in many models. The problem here is that access to testing and other structural factors may prevent certain groups, in the US especially members of the Black and Hispanic communities, from accessing testing and hospital treatment. As a result, these groups are underrepresented in the data, which in turn makes the models less valid. Hence, policy makers might make decisions on resource allocation that disadvantage these groups.

One type of outcome bias that occurs in settings where MAI technologies support human decision-making or action is automation bias. It signifies the tendency to overrate the accuracy and overall performance of automated systems (Goddard et al., 2012). Automation bias thus expresses an over-reliance on the epistemic powers of AI-technologies when it comes to processing data. This concept is closely linked to automation complacency, which signifies the shift of attention towards automated outcomes in a multi-tasking setting (Parasuraman & Manzey, 2010). Human actors seldom question the veracity of the outcome of an automated process in situations where several other tasks also have to be performed simultaneously. Among human agents that interact with automated systems, automation bias as well as automation complacency often result in errors of omission, i.e. failing to perform a task because the system did not advise it or remind the human agent to perform it. On the other hand, errors of commission occur when human agents follow incorrect advice prompted by the automated system. In medicine and healthcare, automation bias occurs mostly in connection with CDSS. One of the
main causes for automation bias and its related errors is lack of clinical experience (Goddard et al., 2012). Other contributing factors are confidence and trust, in a twofold way: First, confidence and trust in the performance and accuracy of CDSS may cause or amplify automation bias. Second, a lack of confidence in one's own clinical abilities can cause an over-confidence in the performance of a CDSS. Task complexity, workload, and time pressure may be seen as further causes (Goddard et al., 2012). A high verification complexity, which occurs when it is difficult or too elaborate to verify outcomes of an automated system, is also of relevance here (Lyell & Coiera, 2017). There is evidence for how automation bias may affect clinical outcomes. One example is the use of CDSS for cancer detection in radiology (Lyell & Coiera, 2017). Laboratory studies have shown that an over-reliance on the accuracy of computer-aided detection (CAD) may affect the performance of clinicians. In cases where the system does not correctly identify cancer, clinicians using CAD are more likely to miss it as well, whereas clinicians who do not use CAD perform better. The interesting aspect here is that automation bias often occurs when CDSS have a high accuracy. This confirms the assumption that a higher level of trust in the performance of MAI decisions increases the risk of automation bias. There is also the risk that MAI may amplify confirmation bias (Challen et al., 2019), which can be seen as a form of outcome bias. Confirmation bias occurs when individuals focus on those data that confirm their assumptions. When doctors ascribe epistemological supremacy to MAI technologies, they might prefer a model or outcome prediction that fits with previous assumptions over other explanations, which exacerbates confirmation bias. Data bias, algorithmic bias, and outcome bias may exist as separate problems, but they are also very likely to occur in the aforementioned fashion of a cascade. Probably the best example for such a cascading bias problem was introduced by Obermeyer and colleagues in their widely received paper in Science (Obermeyer et al., 2019). In this paper, the authors demonstrate how automated decision-making may lead to bias and thus further exacerbate existing health disparities. They examined an algorithm that is commonly used in the US health system. The task of this algorithm is to assign risk levels to patients. The risk level determines whether a particular patient is given preferred access to healthcare services: the higher the risk level, the easier the access. The algorithm deals with patient data and uses health costs as a proxy. That means that the crucial parameter for classifying patients into risk groups is the health costs that have been invested in them so far. The more health costs have been invested, the higher the risk level, and the easier the access. This seems to be a very straightforward and comprehensible approach. However, as Obermeyer and colleagues could show, the bias cascade is in full effect here. It starts with data bias: The patient data this process relies upon is treated as objective data perfectly mirroring the reality of patients. However, the whole process ignores social determinants that shape patient experience, access to healthcare services, and thus their data. In the US health system, there is a structural discrimination of African Americans. This group shows a higher prevalence of illness at a given risk score but has less access to healthcare services when compared with Caucasian patients. The data used by the algorithm in question does not account for this fact. Instead, the algorithm treats all data equally. This is the first step in the bias cascade and would suffice to produce a biased outcome. However, the second step, algorithmic bias, exacerbates the bias even further. By using health costs as a proxy, a biased parameter is introduced for sorting patients into risk groups. Due to the structural discrimination within the US healthcare system, African Americans do not have the same access to health insurance and healthcare services as Caucasian patients. As a result, lower health costs are invested in African American patients. Since health costs are used as a proxy, it appears as if the health needs of African Americans were less severe, since less has been spent on them. The combined data bias and algorithmic bias result in an outcome bias at the final stage of the cascade. As a consequence of the algorithm sorting African Americans into a lower risk group more often, this patient group is less likely to have easy access to healthcare services. This of course negatively affects their clinical outcomes and overall health. Hence, the unfair outcome of the algorithm further increases existing health disparities within the US healthcare system.
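The mechanism of this cascade can be illustrated with a stylized simulation. The following sketch uses entirely synthetic data, not Obermeyer and colleagues' material; it merely shows how a cost proxy converts unequal access into unequal risk scores even when the underlying health needs of two groups are identical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Two groups with identical distributions of latent health need.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
need = rng.gamma(2.0, 1.0, n)        # latent health need, same for both groups

# Structural barrier: group B converts need into billed health costs
# at a lower rate (less access to insurance and services).
access = np.where(group == 1, 0.6, 1.0)
cost = need * access

# The algorithm's "risk score" is the cost proxy; the top 10% are
# given preferred access to a care program.
enrolled = cost >= np.quantile(cost, 0.9)

for g, name in ((0, "A"), (1, "B")):
    m = group == g
    print(f"group {name}: mean need {need[m].mean():.2f}, "
          f"enrolled {enrolled[m].mean():.1%}")
```

Although both groups have the same mean need, group B ends up markedly underrepresented among those enrolled, because the proxy measures spending rather than need.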
Ethical Implications
The bias problem of MAI affects various medico-ethical principles and values. First of all, the different types of bias are a fundamental threat to health equity and justice. Biased MAI technologies may exacerbate existing health disparities by excluding specific individuals or groups. Missing data on these individuals or groups, the definition of parameters and variables, and ignoring the contextuality of data might lead to an ontic occlusion (Mittelstadt & Floridi, 2016). This signifies the result of using a specific interpretative framework for analyzing data that overemphasizes certain aspects and ignores others (Knobel, 2010). The occluded aspects are not part of the representation or data model and are therefore absent from any decision or action based on the model. Ontic occlusion thus obscures alternative ways of representing or interpreting a phenomenon. Take the aforementioned example of the algorithm discussed by Obermeyer and colleagues. Focusing on health costs as the crucial parameter excluded other ways of categorizing and assessing the health needs of patients. Not including social determinants of health, such as ethnicity, ignored highly relevant factors for health and the access to healthcare. Ontic occlusion thus leads to social exclusion, in our case, to exclusion from healthcare services. It is easy to see that ontic occlusion affects access and treatment outcomes in an unfair manner. Some authors speak of epistemic injustice in this respect (Carel & Kidd, 2014; Fricker, 2007; Del Pozo & Rich, 2021). Epistemic injustice occurs when existing prejudices deny certain individuals credibility regarding the information they provide (testimonial injustice) or when the interpretative framework for analyzing social experiences excludes certain individuals (hermeneutic injustice). Especially hermeneutic injustice is an issue in cases like the algorithm discussed by Obermeyer and colleagues, since focusing solely on quantifiable data ignores the social experiences of the affected group, i.e. discrimination and exclusion from healthcare services. Excluding the social determinants that shape the data from data analysis creates epistemic injustice, which in turn leads to a discriminatory and unfair outcome. Hence, epistemic injustice may lead to social injustice and violate equity, thus undermining the great potential of MAI for enabling a personalized medicine that overcomes health disparities. Undermining health equity also puts into question the vision of democratizing healthcare through MAI, which is a prominent topic within the debate. Ignoring the individual needs of specific individuals or groups on a national as well as on a global scale contradicts the idea of a more inclusive healthcare provision. It is important to note that equity does not mean treating all individuals or groups equally. In fact, equity can imply the very opposite, meaning that catering to the needs of certain individuals or groups might imply treating them differently (McCradden et al., 2020). This is exactly what MAI could achieve: Collecting and processing large amounts of individual health data makes it possible to tailor specific treatment options to the individual's health needs. Furthermore, physicians could perform a more precise prognosis and risk assessment that considers all aspects affecting an individual's health, including social determinants and contextual factors. MAI could thus contribute to reducing health disparities by personalizing healthcare services according to an individual's needs, resources, and social context. However, in order for MAI to achieve this, social determinants have to play a major role on the various levels of data collection and processing as well as regarding training data for algorithms. Mitigating health inequities requires being aware of possible bias and critically assessing data as well as the mechanisms for processing them. The various forms of bias may also undermine the autonomy of patients. This may occur at different stages of the treatment process. It may start with informed consent. If the diagnosis or assessment of an individual's health status results from biased smart data practices, an informed decision for or against treatment options is not possible. Furthermore, decisions based on biased algorithmic outcomes might negatively affect specific individuals or groups. By denying them access to certain healthcare services or treatment options, they will find it difficult to make well-informed decisions. In addition, the fact that healthcare professionals and the healthcare system as a whole increasingly rely on automated decisions may impair the ability of those affected by these decisions to object or demand alternatives. This may undermine the agency of patients in terms of self-determined decision-making and action. MAI-related bias might also violate the principle of avoiding harm. When a MAI application produces or further increases health disparities, it also endangers patient safety (McCradden et al., 2020). By excluding certain groups or individuals from healthcare services due to bias, MAI may therefore cause harm. One example in this regard is melanoma in black patients. With a five-year survival rate of 70% versus 94% in white patients, mortality is significantly higher in black melanoma patients (Norori et al., 2021). In several instances, computer vision applications for detecting melanoma have been able to outperform human clinicians. However, these systems have mostly been trained with images of melanoma on white skin and thus often fail when applied to black patients (Norori et al., 2021). Relying on the
current state of the art in MAI-aided melanoma detection would thus imply misdiagnosis and lack of treatment for this patient group.
Strategies
Overcoming bias in MAI requires a bundle of measures due to its manifold causes. On the first level of the bias cascade, we need to address data bias in the form of biased training data. It is necessary to select a data sample that adequately represents the patient population the algorithm is meant for. We have already seen that the absence of data on specific groups is a problem in clinical studies. When training data is taken from clinical studies, it is therefore important to consider the study population. This requires developers of MAI technologies to engage with the lack of diversity in the data material. If they directly collect data for training purposes, technology developers may have more control over the diversity and hence validity of the data. But diversifying training data is not an easy task. The reasons why specific populations are underrepresented in existing health data are mostly structural. They result from socio-historical developments or a general discrimination within a given society. Therefore, technology developers will have to actively address these structural challenges in order to obtain more diverse training data. The key concept in this regard is human-centered AI (HCAI) (Chen et al., 2023; Shneiderman, 2020). HCAI follows the idea that the best way to achieve fair and equitable MAI technologies is to include a wide range of relevant stakeholders throughout their life cycle (Chen et al., 2023). The aim of HCAI is threefold (Shneiderman, 2020): It aims to increase human performance by balancing high levels of human control and high levels of computer automation in the design of machine learning algorithms. In addition, HCAI seeks to identify those scenarios where full human control outperforms full computer control and vice versa. A further goal is to prevent the risks of both excessive human and excessive computer control. The fundamental change in perspective HCAI implies is the shift from people as human factors to people as individuals (Auernhammer, 2020). That means that HCAI centers around integrating the prior experiences, needs, motivations, desires, ambitions, interests, and lifestyles of individuals into the design process. One strategy in HCAI is to engage with underrepresented communities and include them in defining standards for adequate, inclusive representation in data production (Wawira Gichoya et al., 2021). A method for such an inclusive algorithm development is Community-Based System Dynamics (CBSD), where different stakeholders participate in co-creating a technical solution (Prabhakaran & Martin Jr., 2020). The idea behind CBSD is that when different stakeholders face a complex problem, they bring different explanations and ideas. Integrating these different perspectives may help to build more complex and adaptive technologies. CBSD has been successfully applied to the development of MAI with a special focus on diversifying data. One example is the initiative FAITH! (Fostering African-American Improvement in Total Health!), which attempts to integrate members of the African-American community into the design process of mHealth applications (Harmon et al., 2022).
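On the technical side, a first, admittedly crude step toward such diversification is simply to measure it. The following sketch (with made-up, purely illustrative numbers) compares subgroup shares in a training cohort against a reference population and flags underrepresentation; it is no substitute for the community engagement described above, but it makes gaps visible early in development.

```python
# Hypothetical subgroup shares in a training cohort vs. the target population.
training_cohort = {"white": 0.81, "black": 0.06, "hispanic": 0.08, "asian": 0.05}
target_population = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06}

for g, pop_share in target_population.items():
    cohort_share = training_cohort.get(g, 0.0)
    ratio = cohort_share / pop_share
    flag = "UNDERREPRESENTED" if ratio < 0.8 else "ok"
    print(f"{g:>8}: cohort {cohort_share:.0%} vs. population {pop_share:.0%} "
          f"(ratio {ratio:.2f}) {flag}")
```

The 0.8 threshold is arbitrary and would have to be justified for each application; the point is that representativeness can and should be audited before a single model is trained.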
Furthermore, making the nature of training data transparent is an important aspect (Walsh et al., 2020). Healthcare professionals should be aware of the characteristics of the training cohort in order to be able to assess the scope and limits of a MAI system. This also includes considering the socio-demographic context in which the data has been collected. Addressing missing data and data absenteeism requires considering the social determinants of health that affect access to healthcare services (Lee & Viswanath, 2020). This may be a factor on an institutional level, meaning that a healthcare institution, for example in a structurally weak area, does not have the financial resources to provide a proper big data infrastructure. However, since their access to and expertise regarding underprivileged groups might also be valuable for other institutions or healthcare providers, strategic alliances with other providers could benefit all parties involved. On an individual level, the purchase and maintenance of digital health devices as well as other latent costs connected to them might not be affordable. In a setting where data is collected via mHealth technologies, those who cannot afford the costs would be excluded. This might also affect eHealth solutions such as interactive websites, since internet connectivity might also be unaffordable for some individuals. Accounting for these costs by providing an extra budget has already been shown to improve participation in settings of digital data collection (Lee & Viswanath, 2020). The next step in the bias cascade, algorithmic bias, depends on the operational logic of machine learning techniques. On a technological level, steps can be taken in the pre-processing, in-processing, and post-processing phases of the data analytics process (Kordzadeh & Ghasemaghaei, 2022): Pre-processing measures include selecting appropriate data sets for training the algorithms and making sure they do not contain bias. In the in-processing phase, developers can implement regularizers, i.e. parameters that prevent specific outcomes. In post-processing, developers may revise algorithmic outcomes and potentially recalibrate algorithms should bias occur. Furthermore, developers could continuously test algorithms for potential bias throughout data processing (Gianfrancesco et al., 2018). This includes highlighting social determinants such as ethnicity or gender in the input data and choosing variables and parameters that are non-discriminatory. In order to mitigate algorithmic bias, it is therefore necessary to scrutinize the applied machine learning techniques and develop strategies for adjusting them throughout the different stages of the data analysis process. Algorithmic fairness is the crucial concept in this context, referring to technical or statistical approaches that revise possibly biased algorithms by using fairness metrics (Xu et al., 2022). The idea behind this approach is that bias can be quantified in mathematical terms. Hence, when it comes to algorithms, the bias problem is mainly a mathematical problem that developers can resolve by mathematical means. Several fairness metrics exist, each depending on a specific definition of fairness and a specific technical fix to adjust statistical methods. One can classify the different fairness metrics into two groups, awareness-based fairness and rationality-based fairness (Wang et al., 2022). Awareness-based fairness metrics center around the question of how to deal with a sensitive variable, i.e. an attribute that clearly distinguishes an individual or group
from the majority of the population or indicates some form of vulnerability. The most common examples are ethnicity and gender, which in some settings may define marginalized groups or individuals within a given population. One strategy in this regard is fairness through unawareness, which basically means that an algorithm should ignore the sensitive variable during the machine learning process (Wang et al., 2022). For example, a hiring algorithm tasked with choosing the best candidate for a job could be trained to ignore gender or ethnicity-specific names, both of which are features that human employers often use as unfair selection criteria (Kleinberg et al., 2018). In the medical context, the same strategy could enable a fair allocation of resources or services to different groups. In reality, however, it would be difficult to implement fairness through unawareness, since the protected variables are mostly connected to other attributes like zip codes or gender-specific diagnoses. Thus, the correlation between the sensitive attribute and the remaining attributes might still be detected by the algorithm (Xu et al., 2022). Furthermore, in some contexts it might be crucial to account for the sensitive attribute and define it as a target variable in order to ensure fairness. These contexts require fairness through awareness, where the aim is to classify two individuals who are similar regarding a specific task similarly (Dwork et al., 2012). This kind of similarity only works when one is aware of specific attributes that might impair the chances of one group when compared with the other. In a fairness-through-awareness setting, the aforementioned hiring algorithm could be trained to focus on variables like gender in order to prioritize female candidates and thus guarantee equal opportunities. Although this is a rather intuitive fairness metric, it is difficult to realize in real-world settings. Most settings, especially in medicine, are much more complicated than the hiring example. This makes it difficult to measure and weigh the sensitive attributes across groups (Wang et al., 2022). Rationality-based fairness metrics can be subdivided into statistical-based fairness and causality-based fairness. Statistical-based fairness aims to ensure that mathematical methods treat different groups equally. In a common setting for this fairness metric, we have a vulnerable group with a protected attribute, e.g. ethnicity, and a non-vulnerable group. The fairness of the algorithm depends on its performance across both groups (Wang et al., 2022). There are various mathematical paradigms and methods developers can use to achieve statistical-based fairness. Demographic parity aims to build predictive models that are equally applicable to all groups of a given population. That means that predictions of outcomes for individuals in a protected group should be the same as those for the overall population (Xu et al., 2022). Demographic parity is achieved when the same output prediction occurs with the same probability for both protected and unprotected groups (Wang et al., 2022). Although demographic parity might be useful in some contexts, it fails to capture relevant clinical characteristics related to protected variables, e.g. disease risks that are more prevalent in one group than in the other (Xu et al., 2022). An alternative statistical fairness metric is equalized odds, where the focus is on predicting outcomes that depend on the protected variable (Wang et al., 2022).
The crucial indicators here are the false negative rate and the false positive rate for both groups. Consider the example of applying a diagnostic tool to a patient group that consists of a majority of white patients and a minority of black patients. If the false positive rate and the false negative rate are equal for both groups, the algorithm can be considered fair. Equalized odds thus achieves a stronger group-specific fairness than demographic parity (Wang et al., 2022). Its main advantage is that it takes associations between the protected attribute and the outcome into consideration, which is especially relevant in a clinical context (Xu et al., 2022).
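Both metrics are straightforward to compute once predictions and group membership are available. The following minimal sketch (synthetic data, with a deliberately simplified "model" that is less sensitive for one group) shows demographic parity as a gap in positive-prediction rates and equalized odds as gaps in the group-specific true and false positive rates.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gaps(y_true, y_pred, group):
    """Gaps in true positive rate and false positive rate between groups
    (a TPR gap is equivalent to a false negative rate gap)."""
    tpr, fpr = [], []
    for g in (0, 1):
        m = group == g
        tpr.append(y_pred[m & (y_true == 1)].mean())
        fpr.append(y_pred[m & (y_true == 0)].mean())
    return abs(tpr[0] - tpr[1]), abs(fpr[0] - fpr[1])

# Synthetic diagnoses and predictions; the "model" detects the condition
# with probability 0.9 in group 0 but only 0.75 in group 1.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 10_000)
y_true = rng.integers(0, 2, 10_000)
hit_rate = np.where(group == 0, 0.90, 0.75)
y_pred = np.where(y_true == 1,
                  rng.random(10_000) < hit_rate,       # sensitivity differs
                  rng.random(10_000) < 0.10).astype(int)  # equal false alarms

print("demographic parity gap:", round(demographic_parity_gap(y_pred, group), 3))
print("equalized odds gaps (TPR, FPR):",
      tuple(round(g, 3) for g in equalized_odds_gaps(y_true, y_pred, group)))
```

In this toy case the false positive rates are (by construction) nearly equal, but the roughly 15-percentage-point gap in true positive rates means the model systematically misses more cases in group 1, which is exactly the kind of disparity equalized odds is designed to expose.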
Causality-based fairness goes beyond the observational data and purely mathematical definitions crucial to statistical-based fairness metrics. It includes notions of fairness and discrimination that are derived from the real-world relationships and behavior of stakeholders as well as existing social institutions (Wang et al., 2022). Whereas statistical-based fairness metrics focus on mathematically quantifiable correlations between variables, causality-based fairness metrics consider the causal relationship between protected variables and outcomes based on additional knowledge from the real world. This makes it possible to define the causal relation between the protected variable and a specific outcome as unfair discrimination and provides mathematical models for correcting this bias. The relevance of these strategies for medicine is obvious. Developers could train algorithms to mitigate the data bias in clinical studies and design them to focus on variables that are relevant for the proper diagnosis or treatment of specific groups. This way, machine learning techniques might solve bias issues that medicine has been struggling with for decades. But the introduction of fairness metrics into algorithm design might not be a viable option for mitigating all forms of bias, for another reason. As some authors claim, there is an inherent trade-off between the fairness of algorithms and their accuracy (Corbett-Davies et al., 2017; Valdivia et al., 2021). Both might even be mutually exclusive, meaning that improving the fairness of a model limits its accuracy. In addition, some authors state that mere technical solutions are insufficient to deal with the bias problem. In this view, fairness is not a technical fix but requires a normative approach. One issue with fairness as a technical fix is that it requires some definition of what fairness is supposed to be (Wong, 2019). As we have seen, one may apply different fairness metrics to the machine learning process. Each of these metrics implies a different definition or concept of fairness. That means that data scientists or software developers have to decide on one definition they follow in designing algorithms. Given the large number of different concepts of fairness, it is difficult to settle for one definition. Which concept of fairness one prefers might itself be a biased choice. The crucial question, therefore, is not how to create algorithms that enable fairness, but rather what concept of fairness is appropriate and why. As a consequence, some authors suggest that in order to create fair and equitable algorithms, the design process should not rely on the preferences of software designers and data scientists when it comes to choosing concepts of fairness. Rather, fairness and equity can best be achieved by including those who are potentially affected by algorithmic bias in the design process. This participatory approach is supposed to be a democratic way of dealing with bias and fairness issues. Including the perspectives of relevant stakeholders in the design of algorithms aims to ensure that the needs, interests, and resources of those affected by algorithmic decisions are equally considered (Chen et al., 2023). This is again the perspective of HCAI that we have discussed earlier in the context of participatory, community-oriented approaches to diversifying data. HCAI is also a viable perspective for this second phase in the life cycle of MAI: model design, testing, and evaluation (Chen et al., 2023). Participatory design can be seen as both a principled and a context-sensitive approach. It is principled in the sense that it treats equity and fairness as leading principles that cannot be sacrificed for greater accuracy. It is context-sensitive since it does not consider MAI design to occur in a vacuum, but as embedded in a concrete social structure and social relationships. Hence, issues of democratic control and power imbalances are not mere side-effects, but essential factors for a design process that is supposed to deliver fair and equitable outcomes (Auernhammer, 2020). An HCAI approach based on participatory design could integrate the perspectives of various stakeholders (Chen et al., 2023). Software engineers and data scientists, ethicists, lawyers, healthcare professionals, patients, and communities could each provide their respective expertise. Especially the needs, interests, and resources of those directly affected by algorithmic decision-making should be an integral part of the design process. For example, the characteristics of a specific user group might be essential for defining fair and equitable target variables or class labels. Furthermore, including the user group in the evaluation and assessment process is a crucial measure. The feedback from those who are directly affected by the outcomes of algorithmic models helps software engineers to adapt the algorithms to the users' requirements. This also ensures that the model is relevant and representative of the target population (Chen et al., 2023). Another advantage of participatory approaches is their context-sensitivity. Engaging communities helps to situate MAI technologies within specific healthcare infrastructures (Rubeis et al., 2022). This may also improve the usability and acceptability of the technology for both caregivers and care receivers (Fohner et al., 2019). The downside of participatory design is that it is demanding to sustain: engaging stakeholders from different fields as well as communities is often challenging, time-consuming, and cost-intensive, and requires careful moderation in order to balance the varying interests involved (Merkel & Kucharski, 2019). The final stage in the bias cascade, outcome bias, may be tackled from several angles. One strategy is regular monitoring of performance metrics when the MAI system is in its real-world evaluation stage (Morley et al., 2021). An important aspect in this regard is to make sure that the performance metrics align with the values and priorities of clinical practice, such as efficiency and safety. The degree of automation bias could be one of the aspects under monitoring. To put it simply, monitoring automation bias should be part of making sure that the system works the right way before broadly implementing it in clinical practice. A method for doing this might be case study analysis, including expert interviews, to investigate the risk and scope of automation bias in a real-world setting. This is also important in a public health setting.
As we have seen in the COVID-19 forecasting example, MAI-aided decision support for policy makers bears the risk
of overlooking existing health disparities. For predictive models in this context to be valid, evaluating accuracy alone is insufficient. In order to design valid models that avoid exacerbating health disparities, the validation phase should include an equality analysis (Tsai et al., 2022). The aim of the equality analysis is to make sure that the results of the model are also accurate for subgroups, and thus to avoid bias. In order to achieve this, several steps can be taken. First, as with every model, developers should check target variables and class labels for potential bias, scrutinizing whether they represent or overlook structural inequities. Second, developers have to check the training model and adapt it if needed, applying technical fixes as described above. Third, developers should make fairness issues and bias risks transparent. This helps policy makers to better evaluate the quality of the evidence provided by the model and also to assess its usefulness in terms of decision support.
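What such an equality analysis might look like in its simplest form can be sketched in a few lines. The function below (illustrative only; the metric choices would have to be adapted to the clinical question) disaggregates a model's accuracy and false negative rate by subgroup, making visible disparities that an aggregate score conceals.

```python
import numpy as np

def subgroup_report(y_true, y_pred, groups):
    """Report accuracy and false negative rate per subgroup.

    A subgroup whose metrics deviate strongly from the aggregate is a
    red flag that overall accuracy hides a disparity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    groups = np.asarray(groups)
    print(f"aggregate accuracy: {(y_true == y_pred).mean():.2f}")
    for g in np.unique(groups):
        m = groups == g
        acc = (y_true[m] == y_pred[m]).mean()
        pos = m & (y_true == 1)
        fnr = (y_pred[pos] == 0).mean() if pos.any() else float("nan")
        print(f"  group {g}: accuracy {acc:.2f}, false negative rate {fnr:.2f}")

# Illustrative call with made-up labels and predictions:
subgroup_report([1, 0, 1, 1, 0, 1, 0, 1],
                [1, 0, 0, 1, 0, 0, 0, 1],
                ["a", "a", "b", "a", "b", "b", "a", "b"])
```

In this toy example the aggregate accuracy of 0.75 masks the fact that all errors fall on group b, whose positive cases the model misses two times out of three.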
When it comes to automation bias as a subtype of outcome bias, some authors suggest that a right to a second opinion should be in place (Ploug & Holm, 2023). In cases where doctors use a CDSS, patients should have the right to an additional assessment provided by a doctor in order to verify the outcome. This would protect patients from potential harm and also empower them. However, it is an open question who should pay for this option. Obtaining a second opinion may be a time-consuming and expensive process, thus undermining the effective use of resources as one of the main benefits of MAI-based decision-making. The bias problem related to MAI technologies is multifaceted, complex, and caused by several factors, from the quality of the training data to the operational logic of machine learning algorithms and the decisions of human actors. It is important to account for this complexity by developing strategies that address the different causes of bias as well as their outcomes. There is no panacea, no single measure to prevent all the various forms of bias. It is especially important to understand that although technical fixes are needed to a certain extent and in some contexts, fairness and equity largely depend on the social fabric in which algorithms are developed and deployed. Fine-tuning algorithms to the specific requirements of a certain field of application may require a lot of time and effort. However, since biased algorithms or algorithm-based decisions may undermine the main benefit of MAI in general, i.e. personalization, these efforts are necessary to unlock the full potential of the technology. Otherwise, there is a risk that MAI technologies may further exacerbate existing disparities and discrimination within the healthcare sector.
5.2.3 Justifying Decisions
Making a clinical decision, for example choosing a certain treatment option, requires a sound evidence base. That means that mere intuition or reliance on professional experience is insufficient to justify clinical decisions in the era of EBM. In a MAI setting, a doctor must be able to assess and evaluate the mechanisms behind a certain algorithmic outcome (Kundu, 2021). The crucial point here is justifying the reason
for a certain decision, like a specific drug or form of therapy. Durán and Jongsma speak of an "epistemic warrant" (Durán & Jongsma, 2021), which means that physicians are obliged to justify their decisions by reliable knowledge. The main argument here is that physicians have to be able to assess whether a certain treatment, for example, will benefit the patient. The nature of algorithms, their complexity and opacity, might make it impossible for physicians to assess predictions or the underlying evidence. Thus, it becomes questionable why they should trust the outcome of algorithmic decision-making. One can say that being unable to explain algorithmic decisions conflicts with the moral responsibilities of clinicians (London, 2019). The underlying assumption is that all experts have to justify their decisions or actions by causal explanations, a notion that is especially relevant in EBM (Pierce et al., 2022). If such an explanation is not possible, physicians would simply trust an outcome they themselves are not able to evaluate. That also means that physicians might not be able to identify or prevent errors in algorithmic decision-making. It is important to recall that machine learning algorithms detect patterns and make predictions based on correlations between variables. This is a statistical process that does not account for causality, meaning that different kinds of errors may occur. A lack of interpretability may therefore pose the risk that these errors are overlooked, since clinicians cannot assess whether a causal inference is plausible or whether they are dealing with a confounding error or bias (Theunissen & Browning, 2022). Issues like overfitting, where the algorithm, although yielding accurate results in testing mode, misinterprets data in the wild, might result in misdiagnosis (Watson et al., 2019). These serious risks have led some commentators to demand that opaque algorithms that are incomprehensible to users should not be used in sensitive areas like medicine at all (Rudin, 2019). Instead, only those algorithms that are fully explainable should be applied. However, some commentators claim that even in the era of EBM, the idea of being able to explain every clinical decision by scientific means or causal inference ignores actual clinical practice. Decisions in medicine are often atheoretical, associationist, and opaque, meaning that they rest on factors other than strict scientific evidence and causal explanations (London, 2019). Doctors are not always able to explain how they made a decision and still depend in part on intuition and personal experience (Watson et al., 2019). In clinical practice it is sometimes more important to produce results and verify them empirically (London, 2019). Given the potential of MAI technologies, it would be irresponsible to sacrifice accuracy for explainability. This argument is especially relevant since opacity is mainly a feature of deep learning algorithms, which perform especially well in certain areas such as computer vision for tumor recognition. If we were to ban these algorithms due to their lack of explainability, this would mean ignoring their potential benefit for patients. The basic problem with many machine learning algorithms and especially CDSS is their epistemic opacity. The so-called black box problem implies that sometimes even those who designed an algorithm cannot fully explain why it works (see Sects. 3.2 and 4.3.1). This is especially the case in deep learning. Software designers may
understand the architecture of a system as well as the process of model generation, but not the models themselves. That means that although designers have chosen the machine learning techniques and can retrace how the system built a model, they cannot account for the relationships between features and the output classification (London, 2019). In a clinical setting, healthcare professionals who do not understand why an algorithm came up with a certain output will be unable to explain it to patients (Watson et al., 2019). This is highly problematic for several reasons. First, decision-making in medicine has to be evidence-based. There must be sufficient empirical evidence to support and justify a decision made by doctors. Part of this, as some argue, is being able to explain how a decision was made. If an algorithm supports decision-making, this also pertains to the workings of the algorithmic process. Second, being unable to explain a decision affects autonomy and informed consent, trust, and accountability. Third, explainability might be interpreted as a legally required feature of MAI systems. As some authors claim, patients have a right to explainability when it comes to algorithm-supported decision-making (Samek et al., 2019). Before analyzing the implications of the lack of explainability in MAI, we first have to unbox the black box and understand what opacity means in this regard. Burrell (2016) introduced a distinction between different types of opacity: Opacity as intentional corporate or state secrecy refers to the concealment of mechanisms for the protection of interests. Designers will not reveal the code of their algorithms due to proprietary concerns and in order to maintain a competitive advantage. The opacity of the algorithm is intentional, protecting it as a trade secret. Although this might be understandable with regard to commercial applications such as algorithms in search engines or online shops, it becomes problematic in the healthcare context. As we have seen before, making models and code accessible is crucial for many MAI technologies. The exchange not only of data but also of algorithmic models and machine learning strategies is an essential requirement for unlocking the full potential of MAI. Intentional opacity could therefore be a barrier to achieving a learning healthcare system. In addition, protecting the code of algorithms as a trade secret might serve as a pretext for attempts to bypass regulations, manipulate users, or discriminate. Opacity as technical illiteracy refers to the fact that designing and interpreting, "writing" and "reading", an algorithm are specialist skills. Even if the code of an algorithm were available, most healthcare professionals would not be able to interpret it. Since it is unrealistic to expect healthcare professionals to master this skill, the inner workings of algorithms remain inaccessible even without intentional opacity. Opacity may also refer to the workings of algorithms in complex AI systems. Some of these systems comprise multiple components that are built by different teams. As a consequence, a single programmer working on an algorithm might not know about the functioning of other components. Hence, algorithmic opacity implies that certain components of an AI system are opaque even to those involved in software development.
Opacity connected to the characteristics of machine learning algorithms and their application may arise despite a basic expertise in the field. This has to do in part with the operational logic of algorithms, and in part with the purpose of their application. Machine learning techniques are specifically designed to deal with large amounts of data. When algorithms meet an abundance of data, the result may be a level of complexity that is inscrutable even for those who designed the algorithm. In a way, the purpose of machine learning algorithms is to surpass human cognition when it comes to processing data. Hence, it is no surprise that some outcomes are difficult to explain. Deep learning especially is often applied to problems where linear logic is insufficient. The steps and sub-tasks such a system performs to produce an output are therefore simply not accessible to the everyday requirements of reasoning and semantic interpretation. This raises the question of explainability, its scope and limits.
Explainability
Explainability is an important task in developing machine learning systems, quite apart from primarily ethical considerations. Explaining the outcome of machine learning algorithms is important for four reasons (Adadi & Berrada, 2018): First, to justify the results. This is especially relevant in cases where an AI system delivers novel or unexpected results. Second, understanding system behavior helps to prevent and identify errors and is an important requirement for debugging. Third, being able to explain how the system works is essential for improving it. System improvement is an iterative process that requires a continuous input of information on the mechanisms involved. Fourth, explainability is crucial to making new discoveries. When AI systems are used in science, it is insufficient that they just yield results. An explanation of how these results were reached is essential for gaining scientific knowledge. However, as Beisbart and Räz (2022) argue, explainability itself is a complicated issue. On the one hand, we have to define the scope of explanations, meaning what exactly we need to explain. On the other hand, we cannot assume a straightforward relation between understanding and explanation. In fact, we have to distinguish between explanatory understanding and objectual understanding. Explanatory understanding means understanding why something is the case, whereas objectual understanding implies knowledge about a domain based on theories or models. That means that objectual understanding is possible without explaining the "why". As a consequence, explainability in MAI requires a deeper investigation of what explaining actually means and whether it is a requirement of understanding. One element of explainability is interpretability, i.e. describing the operations of a system in simple terms that users can understand (Gilpin et al., 2018). Completeness is another requirement of explainability, which implies describing the functions of a system in the most accurate way. A complete explanation would describe all mathematical operations as well as the parameters involved. There is a potential conflict between interpretability and completeness, since a very detailed
mathematical description that fulfills the criterion of completeness might not be comprehensible to users, hence lacking interpretability. Not all commentators agree on the high relevance of explainability; some emphasize the accuracy of algorithmic decision-making instead. While some find the trade-off between accuracy and explainability problematic (Rueda et al., 2022), others argue that accuracy is the more important aspect and should thus be privileged over explainability (London, 2019). Some even argue for abandoning explainability altogether and replacing it with a different form of assessment. As Durán and Formanek propose, assessing the reliability of a system is more important than understanding and explaining every detail (Durán & Formanek, 2018). An algorithm can be trusted if it is reliable, meaning that it delivers robust and verified results. This position has also been referred to as computational reliabilism (Durán & Jongsma, 2021). In the medical context, computational reliabilism means that whether physicians or patients should trust the outcomes of an algorithm should not depend on explainability, but on the validity and quality of its results. Trust in MAI decisions is therefore possible because of the performance of the system. Ferrario et al. (2020) describe this as a process where belief in the trustworthiness of the system is generated and updated with each use. Limiting the use of machine learning systems to tasks that have been empirically proven and validated would also enable autonomy and accountability (London, 2019). However, this view is problematic in the medical context, since it may affect patient autonomy and the principle of informed consent. Explanations usually help the patient to understand that the physician's decision is not just arbitrary or based on authority (London, 2019). When a physician presents the patient with a decision, e.g. a treatment option, without giving an explanation, patients may view this as paternalistic. Furthermore, autonomy is inextricably linked to informed consent, the basis of which is the disclosure of all relevant information. Patients must be able to choose from different options and make a decision based on sound information. When physicians are unable to explain how a decision was made, this could undermine patient information (Amann et al., 2020). Apart from the requirements of EBM and the ethical implications, explainability may also be a legal requirement. Compliance with legislation might imply some form of explainability in terms of liability and individual rights (Samek et al., 2019).
Strategies
As is mostly the case with MAI technologies, strategies for overcoming the ethical issues of opacity consist of a mix of technical fixes, social practices surrounding development and application, and regulatory practices. A common strategy is to regard the explainability of machine learning algorithms as a requirement of MAI and therefore a necessary design feature. The goal here is to design explainable AI (XAI) technologies (Adadi & Berrada, 2018; Murdoch et al., 2019). In its simplest form, XAI focuses on three questions: Why did the algorithm do that? Can I trust the results? How can I correct an error? (Holzinger et al., 2019). These questions can be addressed by various technical or mathematical fixes. One can distinguish between ante-hoc and post-hoc measures (Holzinger et al., 2019).
Ante-hoc measures focus on interpretability by design: the development of the algorithm contains steps to make certain processes transparent. The goal is to create machine learning algorithms as a glass box instead of a black box, meaning that transparency is a design feature from the beginning (Holzinger et al., 2017). Developers can achieve this through methods of interactive machine learning (iML), where a human-in-the-loop approach includes the interaction between humans and a machine learning algorithm during the training process. The algorithm gets supplementary information from humans in areas where their cognitive abilities surpass the computational abilities of the algorithm (Holzinger et al., 2017). That means that humans are not only involved in the pre-processing phase of the development process, where their task is mainly to define target variables and group classifiers. Humans also interact with algorithms in the learning phase, which offers the possibility of directly intervening when errors or bias occur. In addition, the operational rationale of the algorithms becomes more transparent. iML approaches for designing glass box algorithms have already been used in MAI development. This suggests that the supposed inherent trade-off between explainability and accuracy might rest on a false dichotomy (Rudin & Radin, 2019). Post-hoc approaches try to provide explanations for single decisions instead of explaining the system and its functioning as a whole. In a medical setting, doctors may be interested in the reasoning and evidence base behind a certain outcome, for example a specific diagnosis. A global explanation that tries to describe the workings of the entire MAI system would be neither comprehensible nor useful here. In contrast, so-called local explanations aim to describe the mechanisms behind a specific decision and make it reproducible (Holzinger et al., 2019). This helps doctors to assess and evaluate the validity and accuracy of the outcome. One approach is the visualization of complex machine learning processes (Samek et al., 2019). This is particularly helpful for healthcare professionals, who cannot be expected to possess sophisticated programming knowledge. So-called heat maps are a common form of visualization in medical image recognition (Ghassemi et al., 2021). For example, a MAI system is tasked with detecting pneumonia on chest X-rays. The system, most likely a deep learning application, analyzes the image in different layers (see Sect. 3.2) and provides the probability of pneumonia. A heat map can partly retrace the steps of the analysis by highlighting those areas in the X-ray that the system focused on. Brighter colors indicate areas of greater interest, whereas darker colors show those areas that were less relevant for the analysis. Doctors can thus comprehend which areas the system assigned the most relevance to. However, this visualization does not indicate what exactly the system focused on. Whether it was the shape of the left pulmonary artery or some pixel value remains unclear. Hence, doctors cannot rule out that a confounding factor, e.g. the texture of the X-ray, was responsible for the outcome (Samek et al., 2019).
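One simple way to produce such a heat map, sketched below under strong simplifications, is occlusion analysis: mask one image region at a time, re-run the model, and record how much its output score drops. The score_fn below is a made-up stand-in for a trained classifier's probability output; real systems typically use more sophisticated attribution methods, but the logic, and the limitation of showing where rather than what, is the same.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8):
    """Occlusion-based saliency: blank out one patch at a time and record
    the drop in the model's score. Large drops mark regions the model
    relied on, analogous to the bright areas of a heat map."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy stand-in "model": scores an image by the mean intensity of its
# upper-left quadrant, so only patches there should light up.
toy_score = lambda img: img[:16, :16].mean()
image = np.random.default_rng(0).random((32, 32))
print(np.round(occlusion_map(image, toy_score), 2))
```

Running this prints a 4 x 4 grid in which only the four upper-left cells carry substantial values, mirroring how a clinical heat map highlights the image regions that drove a prediction without revealing which features within them did so.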
Some commentators suggest audits of black box algorithms as a strategy for enabling transparency and fairness (Liu et al., 2022; Panigutti et al., 2021). Whereas a human-in-the-loop approach is an ante-hoc measure that is part of the design process, algorithmic audits are performed after the algorithm has been designed. The aim is to scrutinize algorithms before they are implemented in clinical practice. This implies opening the black box and checking the algorithm for possible bias. Software developers or other stakeholders like healthcare professionals can conduct an audit (Liu et al., 2022). The auditing process may result in developing strategies for mitigating bias and adapting the algorithm accordingly (Panigutti et al., 2021). In order to do so, a concept of fairness has to be developed, which is then operationalized when auditing the algorithm. One example is the FairLens methodology for auditing black box CDSS (Panigutti et al., 2021). FairLens focuses specifically on the reasons for differential treatment, i.e. why a CDSS recommends different treatments for different patient groups. Differential treatment might be medically indicated or caused by a biased algorithmic decision. In order to ensure the fairness of the decision, FairLens first stratifies the patient data sets according to protected attributes. The next step is scoring, where the data is connected to the clinical history of patients in order to assess whether and how a patient group deviates from a target standard. The aim here is to evaluate which health conditions are more often misclassified in which groups. In the ranking step, auditors rank patient groups according to how well the CDSS performs on them. Those groups with a low ranking are then subject to further investigation to highlight which diagnostic codes are over- or underrepresented. In the explanation phase, mathematical techniques from XAI are applied to determine the reasons for mislabeling patient data. In the reporting phase, the results are translated into natural language for the user. Although several auditing methods have been proposed, auditing requires a large amount of manpower and working hours (Burrell, 2016). Scrutinizing algorithms as in the example above would therefore be a very elaborate and cost-intensive process.
Contextualizing Explainability
Usually, when talking about explainable algorithmic models, most commentators refer to some kind of causal explanation. An explanation should provide information on how a MAI system inferred a conclusion by tracing the steps of reasoning back in the form of if-then relations. As some authors claim, this kind of interpretability is a false expectation, since machine learning only provides correlations, which means that causal explanations cannot be expected (London, 2019). This is why alternative models of explanation focus not so much on the causality of algorithmic decision-making and the explainability of algorithms as such, but more on the purpose of explanations. One can distinguish two kinds of explanations (Watson et al., 2019): Model-centric explanations imply understanding all patterns the algorithm has learned, including knowing the variables and their interactions. This may be very difficult given the technical illiteracy of healthcare professionals and patients. Subject-centric explanations focus on patterns that are relevant to the patient. They mainly elaborate how one particular input led to one particular output. This type of explanation rests on the assumption that patients are not interested in the details of how machine learning algorithms work, but are mainly concerned with understanding why they came up with a certain decision. This is why some authors suggest that methods for generating model-agnostic local explanations are key when it comes to explainability.
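The core idea of such methods can be conveyed in a short sketch. A well-known family of approaches (LIME and its relatives) perturbs a single patient's feature vector, queries the black box model, and fits a simple proximity-weighted linear surrogate around that one case; the surrogate's coefficients then approximate the local influence of each feature. The code below is a simplified rendition of this idea, not any particular published algorithm, and the "black box" risk model is a made-up stand-in.

```python
import numpy as np

def local_explanation(model_fn, x, feature_names, n=500, sigma=0.3):
    """Model-agnostic local explanation: sample around one input, query the
    black box, and fit a proximity-weighted linear surrogate model."""
    rng = np.random.default_rng(0)
    X = x + rng.normal(0.0, sigma, (n, x.size))     # perturbed neighbors
    y = model_fn(X)                                 # black-box predictions
    w = np.exp(-((X - x) ** 2).sum(axis=1) / (2 * sigma ** 2))  # proximity
    A = np.hstack([X, np.ones((n, 1))])             # features plus intercept
    Aw = A * w[:, None]
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ y)      # weighted least squares
    return dict(zip(feature_names, beta[:-1]))      # local feature weights

# Made-up "black box" risk model over standardized (age, bp, cholesterol);
# in reality this would be an opaque deep learning system.
black_box = lambda X: 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.1 * X[:, 1] - 0.5)))
patient = np.array([1.2, 0.4, 0.9])
print(local_explanation(black_box, patient, ["age", "bp", "cholesterol"]))
```

For this particular patient the surrogate attributes most of the risk score to age, a little to blood pressure, and virtually nothing to cholesterol, which is exactly the kind of case-specific, subject-centric answer a clinician could relay to a patient without explaining the model as a whole.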
The target audience of explanations plays a crucial role in this regard (Arbelaez Ossa et al., 2022). Hence, the main issue is not how algorithmic models work, but how physicians and patients can understand their clinical implications. The clinical context and the individual characteristics of the patient thus shape the requirements for explainability. For doctors, the design context as well as the purpose of the model, the clinical implications of its output for a specific patient, and uncertainty measures in case of model failure are relevant. Patients primarily want to know how their data is used, what the performance of the algorithm is, what its risks are in terms of bias, and to what extent the decision-making process included algorithmic models (Arbelaez Ossa et al., 2022). Local explanation is a convincing approach since it focuses on the actual clinical context in which an algorithmic model operates. It also fits clinical practice, because its requirements resemble those of informed consent. Typically, doctors do not inform their patients about the exact biochemical mechanisms of a drug or the complex physiological details involved in a certain therapeutic measure. Most patients are medical laypersons and would not benefit from such a detailed explanation. Instead, doctors inform their patients about the meaning, consequences, risks, and benefits of a measure and justify why they chose it based on empirical evidence. One could argue that this is exactly the potential trade-off between interpretability and completeness mentioned earlier. Since completeness in patient information is neither achievable nor useful, doctors focus on interpretability. Therefore, context-specific and patient-centered explanations seem an appropriate path for MAI systems in the medical domain. What we are dealing with here is a phenomenon that has long been debated in medicine: I know that it works, but I cannot explain exactly how. One could argue that medicine has dealt with this problem from its beginning. It is therefore no surprise that medicine has a long history of developing standards and practices for the assessment and evaluation of technologies as well as forms of reasoning. In a sense, medicine is better prepared than other fields for dealing with uncertainty and opacity. A prudent path would therefore be to combine this genuine medical knowledge with technical solutions for making machine learning more transparent wherever possible. Medical professionals should increasingly participate in iML projects that follow a human-in-the-loop approach. This could contribute to the ex-ante prevention of errors and ethical issues. In cases where glass box algorithms perform as well as black box algorithms, the former should be preferred. In cases where black box algorithms perform with greater accuracy but lack explainability, strategies like subject-centric local explanations should be implemented. Additionally, methods of assessing the reliability and validity of models should be applied to ensure patient safety and prevent bias. Implementing these strategies requires a close collaboration between software designers, data scientists, and healthcare professionals. Another requirement is to create regulations that allow for the flexibility needed for a successful application of MAI technologies.
References
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
Ahn, A. C., Tewari, M., Poon, C. S., & Phillips, R. S. (2006). The limits of reductionism in medicine: Could systems biology offer an alternative? PLoS Medicine, 3, e208. https://doi.org/10.1371/journal.pmed.0030208
Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D'Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387. https://doi.org/10.1016/j.ijinfomgt.2021.102387
Alhasan, A. (2021). Bias in medical artificial intelligence. The Bulletin of the Royal College of Surgeons of England, 103, 302–305.
Altameem, A., Kovtun, V., Al-Ma'aitah, M., Altameem, T. H. F., & Youssef, A. E. (2022). Patient's data privacy protection in medical healthcare transmission services using back propagation learning. Computers and Electrical Engineering, 102, 108087. https://doi.org/10.1016/j.compeleceng.2022.108087
Amann, J., Blasimme, A., Vayena, E., Frey, D., & Madai, V. I. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20, 310. https://doi.org/10.1186/s12911-020-01332-6
Arbelaez Ossa, L., Starke, G., Lorenzini, G., Vogt, J. E., Shaw, D. M., & Elger, B. S. (2022). Re-focusing explainability in medicine. Digital Health, 8, 20552076221074488.
Auernhammer, J. (2020). Human-centered AI: The role of human-centered design research in the development of AI. DRS2020: Synergy. https://doi.org/10.21606/drs.2020.282
Ballantyne, A. (2020). How should we think about clinical data ownership? Journal of Medical Ethics, 46, 289–294. https://doi.org/10.1136/medethics-2018-105340
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671–732.
Barrows, R. C., Jr., & Clayton, P. D. (1996). Privacy, confidentiality, and electronic medical records. Journal of the American Medical Informatics Association, 3, 139–148.
Beisbart, C., & Räz, T. (2022). Philosophy of science at sea: Clarifying the interpretability of machine learning. Philosophy Compass, 17, e12830. https://doi.org/10.1111/phc3.12830
Berger, P. L., & Luckmann, T. (1991). The social construction of reality: A treatise in the sociology of knowledge. Penguin.
Boellstorff, T. (2013). Making Big Data, in theory. First Monday, 18(10). Available at: http://journals.uic.edu/ojs/index.php/fm/article/view/4869. Accessed 8 Aug 2023.
Bollinger, J. M., Zuk, P. D., Majumder, M. A., Versalovic, E., Villanueva, A. G., Hsu, R. L., McGuire, A. L., & Cook-Deegan, R. (2019). What is a medical information commons? The Journal of Law, Medicine & Ethics, 47, 41–50.
Bradford, L., Aboy, M., & Liddell, K. (2020). International transfers of health data between the EU and USA: A sector-specific approach for the USA to ensure an 'adequate' level of protection. Journal of Law and the Biosciences, 7, lsaa055.
Brisimi, T. S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I. C., & Shi, W. (2018). Federated learning of predictive models from federated electronic health records. International Journal of Medical Informatics, 112, 59–67.
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Buslón, N., Racionero-Plaza, S., & Cortés, A. (2022). Sex and gender inequality in precision medicine: Socioeconomic determinants of health. In D. Cirillo, S. Catuara-Solarz, & E. Guney (Eds.), Sex and gender bias in technology and artificial intelligence (pp. 35–54). Academic Press. https://doi.org/10.1016/b978-0-12-821392-6.00005-4
Campos-Castillo, C., & Anthony, D. L. (2015). The double-edged sword of electronic health records: Implications for patient disclosure. Journal of the American Medical Informatics Association, 22, e130–e140. https://doi.org/10.1136/amiajnl-2014-002804
Carcel, C., & Reeves, M. (2021). Under-enrollment of women in stroke clinical trials. Stroke, 52, 452–457.
Carel, H., & Kidd, I. J. (2014). Epistemic injustice in healthcare: A philosophical analysis. Medicine, Health Care and Philosophy, 17, 529–540.
Cargill, S. S. (2016). Biobanking and the abandonment of informed consent: An ethical imperative. Public Health Ethics, 9, 255–263.
Carlini, N., Liu, C., Erlingsson, Ú., Kos, J., & Song, D. X. (2018). The secret sharer: Evaluating and testing unintended memorization in neural networks. USENIX Security Symposium.
Carter, P., Laurie, G. T., & Dixon-Woods, M. (2015). The social licence for research: Why care.data ran into trouble. Journal of Medical Ethics, 41, 404–409. https://doi.org/10.1136/medethics-2014-102374
Castillo, J. C., Fernández-Caballero, A., Castro-González, Á., Salichs, M. A., & López, M. T. (2014). A framework for recognizing and regulating emotions in the elderly. Ambient Assisted Living and Daily Activities.
Caulfield, T. (2007). Biobanks and blanket consent: The proper place of the public good and public perception rationales. King's Law Journal, 18, 209–226.
Caulfield, T., Upshur, R. E. G., & Daar, A. (2003). DNA databanks and consent: A suggested policy option involving an authorization model. BMC Medical Ethics, 4, 1. https://doi.org/10.1186/1472-6939-4-1
Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality and Safety, 28, 231–237.
Chen, Y., Clayton, E. W., Novak, L. L., Anders, S., & Malin, B. (2023). Human-centered design to address biases in artificial intelligence. Journal of Medical Internet Research, 25, e43251.
Chow-White, P. A., Macaulay, M., Charters, A., & Chow, P. (2015). From the bench to the bedside in the big data age: Ethics and practices of consent and privacy for clinical genomics and personalized medicine. Ethics and Information Technology, 17, 189–200.
Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino, S., Gigante, A., Valencia, A., Rementeria, M. J., Chadha, A. S., & Mavridis, N. (2020). Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digital Medicine, 3, 81.
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. Halifax, NS, Canada: Association for Computing Machinery.
Cossette-Lefebvre, H., & Maclure, J. (2022). AI's fairness problem: Understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3, 1255–1269. https://doi.org/10.1007/s43681-022-00233-w
National Research Council (US) Committee on A Framework for Developing a New Taxonomy of Disease. (2011). Toward precision medicine: Building a knowledge network for biomedical research and a new taxonomy of disease. National Academies Press (US). Available from: https://www.ncbi.nlm.nih.gov/books/NBK91503/. https://doi.org/10.17226/13284
Das, S., & Namasudra, S. (2022). A novel hybrid encryption method to secure healthcare data in IoT-enabled healthcare infrastructure. Computers and Electrical Engineering, 101, 107991.
Del Pozo, B., & Rich, J. D. (2021). Addressing racism in medicine requires tackling the broader problem of epistemic injustice. The American Journal of Bioethics, 21, 90–93. https://doi.org/10.1080/15265161.2020.1861367
Dey, S., Flather, M. D., Devlin, G., Brieger, D., Gurfinkel, E. P., Steg, P. G., Fitzgerald, G., Jackson, E. A., Eagle, K. A., & the GRACE Investigators. (2009). Sex-related differences in the presentation, treatment and outcomes among patients with acute coronary syndromes: The global registry of acute coronary events. Heart, 95, 20.
Dove, E. S., Knoppers, B. M., & Zawati, M. N. H. (2014). Towards an ethics safe harbor for global biomedical research. Journal of Law and the Biosciences, 1, 3–51.
Durán, J. M., & Formanek, N. (2018). Grounds for trust: Essential epistemic opacity and computational reliabilism. Minds and Machines, 28, 645–666. https://doi.org/10.1007/s11023-018-9481-6
Durán, J. M., & Jongsma, K. R. (2021). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, medethics-2020-106820. https://doi.org/10.1136/medethics-2020-106820
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference. Cambridge, MA: Association for Computing Machinery.
Elangovan, D., Long, C. S., Bakrin, F. S., Tan, C. S., Goh, K. W., Yeoh, S. F., Loy, M. J., Hussain, Z., Lee, K. S., Idris, A. C., & Ming, L. C. (2022). The use of blockchain technology in the health care sector: Systematic review. JMIR Medical Informatics, 10, e17278. https://doi.org/10.2196/17278
Evans, B. J. (2016). Barbarians at the gate: Consumer-driven health data commons and the transformation of citizen science. American Journal of Law & Medicine, 42, 651–685. https://doi.org/10.1177/0098858817700245
Faden, R. R., Kass, N. E., Goodman, S. N., Pronovost, P., Tunis, S., & Beauchamp, T. L. (2013). An ethics framework for a learning health care system: A departure from traditional research ethics and clinical ethics. Hastings Center Report, 43, S16–S27.
Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6, 12.
Federoff, H. J., & Gostin, L. O. (2009). Evolving from reductionism to holism: Is there a future for systems medicine? JAMA, 302, 994–996.
Ferrario, A., Loi, M., & Viganò, E. (2020). In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. Philosophy & Technology, 33, 523–539.
Fohner, A. E., Volk, K. G., & Woodahl, E. L. (2019). Democratizing precision medicine through community engagement. Clinical Pharmacology & Therapeutics, 106, 488–490.
Foucault, M. (1973). The birth of the clinic. Pantheon Books.
Foucault, M. (1978). The history of sexuality volume 1: An introduction. Pantheon Books.
Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press.
Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14, 330–347.
Gaynor, M., Tuttle-Newhall, J., Parker, J., Patel, A., & Tang, C. (2020). Adoption of blockchain in health care. Journal of Medical Internet Research, 22, e17423.
Getzen, E., Ungar, L., Mowery, D., Jiang, X., & Long, Q. (2023). Mining for equitable health: Assessing the impact of missing data in electronic health records. Journal of Biomedical Informatics, 139, 104269.
Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digital Health, 3, e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9
Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018). Potential biases in machine learning algorithms using electronic health record data. JAMA Internal Medicine, 178, 1544–1547.
Gigerenzer, G., & Gaissmaier, W. (2010). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M. A., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th international conference on data science and advanced analytics (DSAA), 80–89.
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19, 121–127. https://doi.org/10.1136/amiajnl-2011-000089
Greely, H. T. (1999). Breaking the stalemate: A prospective regulatory framework for unforeseen research uses of human tissue samples and health information. Wake Forest Law Review, 34, 737–766.
Grote, T., & Keeling, G. (2022). Enabling fairness in healthcare through machine learning. Ethics and Information Technology, 24, 39. https://doi.org/10.1007/s10676-022-09658-7
Hall, M. A., & Schulman, K. A. (2009). Ownership of medical information. JAMA, 301, 1282–1284.
Hammond, M. E. H., Stehlik, J., Drakos, S. G., & Kfoury, A. G. (2021). Bias in medicine. JACC: Basic to Translational Science, 6, 78–85.
Hansson, M. G., Dillner, J., Bartram, C. R., Carlson, J. A., & Helgesson, G. (2006). Should donors be allowed to give broad consent to future biobank research? The Lancet Oncology, 7, 266–269.
Harmon, D. M., Adedinsewo, D., van't Hof, J. R., Johnson, M., Hayes, S. N., Lopez-Jimenez, F., Jones, C., Attia, Z. I., Friedman, P. A., Patten, C. A., Cooper, L. A., & Brewer, L. C. (2022). Community-based participatory research application of an artificial intelligence-enhanced electrocardiogram for cardiovascular disease screening: A FAITH! Trial ancillary study. American Journal of Preventive Cardiology, 12, 100431. https://doi.org/10.1016/j.ajpc.2022.100431
Hartmann, K. V., Rubeis, G., & Primc, N. (2024). Healthy and happy? An ethical investigation of emotion recognition and regulation technologies (ERR) within ambient assisted living (AAL). Science and Engineering Ethics, 30(1), 2. https://doi.org/10.1007/s11948-024-00470-8
Helgesson, G. (2012). In defense of broad consent. Cambridge Quarterly of Healthcare Ethics, 21, 40–50.
Hofmann, B. (2009). Broadening consent: And diluting ethics? Journal of Medical Ethics, 35, 125–129.
Holzinger, A., Plass, M., Holzinger, K., Crişan, G. C., Pintea, C.-M., & Palade, V. (2017). A glass-box interactive machine learning approach for solving NP-hard problems with the human-in-the-loop. ArXiv, abs/1708.01104.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. WIREs Data Mining and Knowledge Discovery, 9, e1312. https://doi.org/10.1002/widm.1312
Hughes, T. M., Dossett, L. A., Hawley, S. T., & Telem, D. A. (2020). Recognizing heuristics and bias in clinical decision-making. Annals of Surgery, 271, 813–814.
Hummel, P., Braun, M., & Dabrock, P. (2021). Own data? Ethical reflections on data ownership. Philosophy & Technology, 34, 545–572.
Ienca, M. (2023). Medical data sharing and privacy: A false dichotomy? Swiss Medical Weekly, 153, 40019. https://doi.org/10.57187/smw.2023.40019
Institute of Medicine (US) Roundtable on Evidence-Based Medicine, Olsen, L., Aisner, D., & McGinnis, J. M. (Eds.). (2007). The learning healthcare system: Workshop summary. National Academies Press (US). https://doi.org/10.17226/11903
Iott, B. E., Campos-Castillo, C., & Anthony, D. L. (2019). Trust and privacy: How patient trust in providers is related to privacy behaviors and attitudes. American Medical Informatics Association Annual Symposium Proceedings, 2019, 487–493.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Karlsen, J. R., Solbakk, J. H., & Holm, S. (2011). Ethical endgames: Broad consent for narrow interests; open consent for closed minds. Cambridge Quarterly of Healthcare Ethics, 20, 572–583.
Kaye, J., Whitley, E. A., Lund, D., Morrison, M., Teare, H., & Melham, K. (2015). Dynamic consent: A patient interface for twenty-first century research networks. European Journal of Human Genetics, 23, 141–146.
Kish, L. J., & Topol, E. J. (2015). Unpatients—Why patients should own their medical data. Nature Biotechnology, 33, 921–924. https://doi.org/10.1038/nbt.3340
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 113–174.
Kluge, E. H. (2004). Informed consent to the secondary use of EHRs: Informatic rights and their limitations. Studies in Health Technology and Informatics, 107, 635–638.
Knobel, C. P. (2010). Ontic occlusion and exposure in sociotechnical systems. University of Michigan.
Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31, 388–409.
Kundu, S. (2021). AI in medicine must be explainable. Nature Medicine, 27, 1328. https://doi.org/10.1038/s41591-021-01461-z
Kuo, T.-T., Kim, H.-E., & Ohno-Machado, L. (2017). Blockchain distributed ledger technologies for biomedical and health care applications. Journal of the American Medical Informatics Association, 24, 1211–1220.
Lee, E. W. J., & Viswanath, K. (2020). Big data in context: Addressing the twin perils of data absenteeism and chauvinism in the context of health disparities research. Journal of Medical Internet Research, 22, e16377. https://doi.org/10.2196/16377
Liddell, K., Simon, D. A., & Lucassen, A. (2021). Patient data ownership: Who owns your health? Journal of Law and the Biosciences, 8(2), lsab023. https://doi.org/10.1093/jlb/lsab023
Liu, X., Glocker, B., McCradden, M. M., Ghassemi, M., Denniston, A. K., & Oakden-Rayner, L. (2022). The medical algorithmic audit. The Lancet Digital Health, 4, e384–e397. https://doi.org/10.1016/S2589-7500(22)00003-6
London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
Lu, Y. (2019). The blockchain: State-of-the-art and research challenges. Journal of Industrial Information Integration, 15, 80–90. https://doi.org/10.1016/j.jii.2019.04.002
Lupton, D. (2014). Critical perspectives on digital health technologies. Sociology Compass, 8, 1344–1359. https://doi.org/10.1111/soc4.12226
Lyell, D., & Coiera, E. (2017). Automation bias and verification complexity: A systematic review. Journal of the American Medical Informatics Association, 24, 423–431. https://doi.org/10.1093/jamia/ocw105
Mahajan, H. B., Rashid, A. S., Junnarkar, A. A., Uke, N., Deshpande, S. D., Futane, P. R., Alkhayyat, A., & Alhayani, B. (2023). Integration of healthcare 4.0 and blockchain into secure cloud-based electronic health records systems. Applied Nanoscience, 13, 2329–2342.
Majumder, M. A., Bollinger, J. M., Villanueva, A. G., Deverka, P. A., & Koenig, B. A. (2019). The role of participants in a medical information commons. The Journal of Law, Medicine & Ethics, 47, 51–61. https://doi.org/10.1177/1073110519840484
Maloy, J. W., & Bass, P. F., 3rd. (2020). Understanding broad consent. The Ochsner Journal, 20, 81–86.
Marewski, J. N., & Gigerenzer, G. (2012). Heuristic decision making in medicine. Dialogues in Clinical Neuroscience, 14, 77–89.
McCradden, M. D., Joshi, S., Mazwi, M., & Anderson, J. A. (2020). Ethical limitations of algorithmic fairness solutions in health care machine learning. The Lancet Digital Health, 2, e221–e223.
McGuire, A. L., Roberts, J., Aas, S., & Evans, B. J. (2019). Who owns the data in a medical information commons? The Journal of Law, Medicine & Ethics, 47, 62–69.
McLennan, S., Shaw, D., & Celi, L. A. (2019). The challenge of local consent requirements for global critical care databases. Intensive Care Medicine, 45, 246–248. https://doi.org/10.1007/s00134-018-5257-y
Merkel, S., & Kucharski, A. (2019). Participatory design in gerontechnology: A systematic literature review. Gerontologist, 59, e16–e25. https://doi.org/10.1093/geront/gny034
Mikkelsen, R. B., Gjerris, M., Waldemar, G., & Sandøe, P. (2019). Broad consent for biobanks is best—Provided it is also deep. BMC Medical Ethics, 20, 71.
Miller, D. D., & Brown, E. W. (2018). Artificial intelligence in medical practice: The question to the answer? The American Journal of Medicine, 131, 129–133.
Mirchev, M., Mircheva, I., & Kerekovska, A. (2020). The academic viewpoint on patient data ownership in the context of big data: Scoping review. Journal of Medical Internet Research, 22, e22214. https://doi.org/10.2196/22214
Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8, 141–163.
Mittelstadt, B. D., & Floridi, L. (2016). The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics, 22, 303–341. https://doi.org/10.1007/s11948-015-9652-2
Mongoven, A. M., & Solomon, S. (2012). Biobanking: Shifting the analogy from consent to surrogacy. Genetics in Medicine, 14, 183–188.
Montgomery, J. (2017). Data sharing and the idea of ownership. New Bioethics, 23, 81–86.
Moosavi, S. R., Nigussie, E., Levorato, M., Virtanen, S., & Isoaho, J. (2018). Performance analysis of end-to-end security schemes in healthcare IoT. Procedia Computer Science, 130, 432–439.
Morley, J., Morton, C. E., Karpathakis, K., Taddeo, M., & Floridi, L. (2021). Towards a framework for evaluating the safety, acceptability and efficacy of AI systems for health: An initial synthesis. ArXiv, abs/2104.06910.
Müller, S. (2022). Is there a civic duty to support medical AI development by sharing electronic health records? BMC Medical Ethics, 23, 134. https://doi.org/10.1186/s12910-022-00871-z
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22, 122.
Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences of the United States of America, 116, 22071–22080.
Ng, W. Y., Tan, T.-E., Movva, P. V. H., Fang, A. H. S., Yeo, K.-K., Ho, D., Foo, F. S. S., Xiao, Z., Sun, K., Wong, T. Y., Sia, A. T.-H., & Ting, D. S. W. (2021). Blockchain applications in health care for COVID-19 and beyond: A systematic review. The Lancet Digital Health, 3, e819–e829.
Nielsen, M. E. J., & Kongsholm, N. C. H. (2022). Blanket consent and trust in the biobanking context. Journal of Bioethical Inquiry, 19, 613–623.
Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2, 100347. https://doi.org/10.1016/j.patter.2021.100347
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366, 447–453. https://doi.org/10.1126/science.aax2342
Ostrowski, A., Harrington, C., Breazeal, C., & Park, H. (2021). Personal narratives in technology design: The value of sharing older adults' stories in the design of social robots. Frontiers in Robotics and AI, 8, 716581. https://doi.org/10.3389/frobt.2021.716581
Panigutti, C., Perotti, A., Panisson, A., Bajardi, P., & Pedreschi, D. (2021). FairLens: Auditing black-box clinical decision support systems. Information Processing & Management, 58, 102657.
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52, 381–410.
Piasecki, J., & Cheah, P. Y. (2022). Ownership of individual-level health data, data sharing, and data governance. BMC Medical Ethics, 23, 104. https://doi.org/10.1186/s12910-022-00848-y
Pierce, R. L., Van Biesen, W., Van Cauwenberge, D., Decruyenaere, J., & Sterckx, S. (2022). Explainability in medicine in an era of AI-based clinical decision support systems. Frontiers in Genetics, 13, 903600. https://doi.org/10.3389/fgene.2022.903600
Ploug, T. (2020). In defence of informed consent for health record research – Why arguments from 'easy rescue', 'no harm' and 'consent bias' fail. BMC Medical Ethics, 21, 75.
Ploug, T., & Holm, S. (2015). Meta consent: A flexible and autonomous way of obtaining informed consent for secondary research. BMJ, 350, h2146. https://doi.org/10.1136/bmj.h2146. Ploug, T., & Holm, S. (2023). The right to a second opinion on artificial intelligence diagnosis— Remedying the inadequacy of a risk-based regulation. Bioethics, 37, 303–311. https://doi.org/ 10.1111/bioe.13124 Porsdam Mann, S., Savulescu, J., & Sahakian, B. J. (2016). Facilitating the ethical use of health data for the benefit of society: Electronic health records, consent and the duty of easy rescue. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374. Prabhakaran, V., & Martin, D., Jr. (2020). Participatory machine learning using community-based system dynamics. Health and Human Rights, 22, 71–74. Prainsack, B. (2015). Through thick and big: Data-rich medicine in the era of personalisation. In J. Vollmann, V. Sandow, & H. Schildmann (Eds.), The ethics of personalised medicine. Critical perspectives (pp. 161–172). Ashgate. Prainsack, B. (2019). Logged out: Ownership, exclusion and public value in the digital data and information commons. Big Data and Society, 6(1), https://doi.org/10.1177/2053951719829773. Prainsack, B. (2022). The advent of automated medicine? The values and meanings of precision. Can precision medicine be personal; Can personalized medicine be precise? Oxford University Press. Prainsack, B., & Buyx, A. (2013). A solidarity-based approach to the governance of research biobanks. Medical Law Review, 21, 71–91. Price, W. N., II. (2019). Medical AI and contextual bias. Harvard Journal of Law and Technology, 33(1), 65–116. Purtova, N. (2015). The illusion of personal data as no one’s property. Law, Innovation and Technology, 7, 83–111. Purtova, N. (2017). Health data for common good: Defining the boundaries and social dilemmas of data commons. In: Adams, S., Purtova, N., Leenes, R. (eds.). Under observation: The interplay between eHealth and surveillance. Springer, 177–210. http://www.springer.com/us/book/ 9783319483405 Richterich, A. (2018). The Big Data agenda data ethics and critical data studies. University of Westminster Press. Rieke, N., Hancox, J., Li, W., Milletarì, F., Roth, H. R., Albarqouni, S., Bakas, S., Galtier, M. N., Landman, B. A., Maier-Hein, K., Ourselin, S., Sheller, M., Summers, R. M., Trask, A., Xu, D., Baust, M., & Cardoso, M. J. (2020). The future of digital health with federated learning. NPJ Digital Medicine, 3, 119. Roessler, B. (2004). The value of privacy. Polity. Rubeis, G. (2022a). Hyperreal patients. Digital twins as Simulacra and their impact on clinical heuristics. In J. Loh & T. Grote (eds.), MediTech—Medizin—Technik—Ethik. Techno: Phil – Aktuelle Herausforderungen der Technikphilosophie (pp. 7–17). Stuttgart. Rubeis, G. (2022b). Complexity management as an ethical challenge for AI-based age tech. In Proceedings of the 15th international conference on PErvasive technologies related to assistive environments Corfu, Greece 2022. Association for Computing Machinery. https://doi.org/10. 1145/3529190.3534752 Rubeis, G., Fang, M. L., & Sixsmith, A. (2022). Equity in AgeTech for ageing well in technologydriven places: The role of social determinants in designing AI-based assistive technologies. Science and Engineering Ethics, 28, 49. https://doi.org/10.1007/s11948-022-00397-y Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. 
Nature Machine Intelligence, 1, 206–215. https://doi.org/10. 1038/s42256-019-0048-x Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2). https://doi.org/ 10.1162/99608f92.5a8a3a3d
Rueda, J., Rodríguez, J., Jounou, I., Hortal-Carmona, J., Ausín, T., & Rodríguez-Arias, D. (2022). "Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI & Society, 1–12.
Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K.-R. (2019). Explainable AI: Interpreting, explaining and visualizing deep learning. ArXiv, abs/1708.08296v1.
Samerski, S. (2018). Individuals on alert: Digital epidemiology and the individualization of surveillance. Life Sciences, Society and Policy, 14, 13. https://doi.org/10.1186/s40504-018-0076-z
Sharon, T. (2017). Self-tracking for health and the quantified self: Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philosophy and Technology, 30, 93–121. https://doi.org/10.1007/s13347-016-0215-5
Sheehan, M. (2011). Can broad consent be informed consent? Public Health Ethics, 4, 226–235.
Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36, 495–504.
Steinberg, J. R., Turner, B. E., Weeks, B. T., Magnani, C. J., Wong, B. O., Rodriguez, F., Yee, L. M., & Cullen, M. R. (2021). Analysis of female enrollment and participant sex by burden of disease in US clinical trials between 2000 and 2020. JAMA Network Open, 4, e2113749.
Steinsbekk, K. S., Kåre Myskja, B., & Solberg, B. (2013). Broad consent versus dynamic consent in biobank research: Is passive participation an ethical problem? European Journal of Human Genetics, 21, 897–902.
Straw, I. (2020). The automation of bias in medical artificial intelligence (AI): Decoding the past to create a better future. Artificial Intelligence in Medicine, 110, 101965.
Theunissen, M., & Browning, J. (2022). Putting explainable AI in context: Institutional explanations for medical AI. Ethics and Information Technology, 24, 23. https://doi.org/10.1007/s10676-022-09649-8
Thompson, R., & McNamee, M. J. (2017). Consent, ethics and genetic biobanks: The case of the Athlome project. BMC Genomics, 18, 830. https://doi.org/10.1186/s12864-017-4189-1
Tiffin, N. (2018). Tiered informed consent: Respecting autonomy, agency and individuality in Africa. BMJ Global Health, 3, e001249. https://doi.org/10.1136/bmjgh-2018-001249
Topaloglu, M. Y., Morrell, E. M., Rajendran, S., & Topaloglu, U. (2021). In the pursuit of privacy: The promises and predicaments of federated learning in healthcare. Frontiers in Artificial Intelligence, 4, 746497.
Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books, Inc.
Tsai, T. C., Arik, S., Jacobson, B. H., Yoon, J., Yoder, N., Sava, D., Mitchell, M., Graham, G., & Pfister, T. (2022). Algorithmic fairness in pandemic forecasting: Lessons from COVID-19. NPJ Digital Medicine, 5, 59.
Valdivia, A., Sánchez-Monedero, J., & Casillas, J. (2021). How fair can we go in machine learning? Assessing the boundaries of accuracy and fairness. International Journal of Intelligent Systems, 36, 1619–1643. https://doi.org/10.1002/int.22354
Vandamme, D., Fitzmaurice, W., Kholodenko, B., & Kolch, W. (2013). Systems medicine: Helping us understand the complexity of disease. QJM: An International Journal of Medicine, 106, 891–895.
Vayena, E., & Blasimme, A. (2018). Health research with Big Data: Time for systemic oversight. The Journal of Law, Medicine and Ethics, 46, 119–129. https://doi.org/10.1177/1073110518766026
Vogt, H., Hofmann, B., & Getz, L. (2016). The new holism: P4 systems medicine and the medicalization of health and life itself. Medicine, Health Care and Philosophy, 19, 307–323.
Walsh, C. G., Chaudhry, B., Dua, P., Goodman, K. W., Kaplan, B., Kavuluru, R., Solomonides, A., & Subbian, V. (2020). Stigma, biomarkers, and algorithmic bias: Recommendations for precision behavioral health with artificial intelligence. JAMIA Open, 3, 9–15.
Wang, T. (2013, May). Big data needs thick data. Ethnography Matters Blog [online]. Available at: http://ethnographymatters.net/blog/2013/05/13/big-data-needs-thick-data/. Accessed 8 Aug 2023.
Wang, X., Zhang, Y., & Zhu, R. (2022). A brief review on algorithmic fairness. Management System Engineering, 1. https://doi.org/10.1007/s44176-022-00006-z
Watson, D. S., Krutzinna, J., Bruce, I. N., Griffiths, C. E., McInnes, I. B., Barnes, M. R., & Floridi, L. (2019). Clinical applications of machine learning algorithms: Beyond the black box. BMJ, 364, l886.
Wawira Gichoya, J., McCoy, L. G., Celi, L. A., & Ghassemi, M. (2021). Equity in essence: A call for operationalising fairness in machine learning for healthcare. BMJ Health & Care Informatics, 28.
Weissglass, D. E. (2022). Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics, 36, 201–209. https://doi.org/10.1111/bioe.12927
Whelehan, D. F., Conlon, K. C., & Ridgway, P. F. (2020). Medicine and heuristics: Cognitive biases and medical decision-making. Irish Journal of Medical Science, 189, 1477–1484.
Wiertz, S., & Boldt, J. (2022). Evaluating models of consent in changing health research environments. Medicine, Health Care and Philosophy, 25, 269–280. https://doi.org/10.1007/s11019-022-10074-3
Williams, H., Spencer, K., Sanders, C., Lund, D., Whitley, E. A., Kaye, J., & Dixon, W. G. (2015). Dynamic consent: A possible solution to improve patient confidence and trust in how electronic patient records are used in medical research. JMIR Medical Informatics, 3, e3. https://doi.org/10.2196/medinform.3525
Wong, P.-H. (2019). Democratizing algorithmic fairness. Philosophy & Technology, 33, 225–244.
Xu, J., Glicksberg, B. S., Su, C., Walker, P., Bian, J., & Wang, F. (2021). Federated learning for healthcare informatics. Journal of Healthcare Informatics Research, 5, 1–19.
Xu, J., Xiao, Y., Wang, W. H., Ning, Y., Shenkman, E. A., Bian, J., & Wang, F. (2022). Algorithmic fairness in computational medicine. eBioMedicine, 84, 104250. https://doi.org/10.1016/j.ebiom.2022.104250
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.
Chapter 6
Relationships
Abstract In this chapter, I analyze the impact of MAI on relationships as well as roles within healthcare. I identify three crucial relationships: the therapeutic relationship between doctors and patients, the nursing relationship, and the therapeutic alliance between therapists and patients in mental health. The crucial assumption is that MAI is an artificial agent that breaks up these hitherto dyadic relationships. I discuss several potential roles for doctors, nurses, therapists, patients, and artificial agents and address the question of whether healthcare professionals can and should be replaced by MAI. Smart data practices will transform the epistemology of healthcare professionals and the phenomenology of patients and change how healthcare professionals encounter, perceive, and view patients. Since this transformation process affects the clinical encounter, smart data practices will also transform relationships in healthcare. These relationships are heterogeneous and complex, since healthcare comprises different professions and contexts. Doctors, nurses, and therapists each form their own specific relationship with patients. Although there is a set of values all healthcare professionals share, like respect for autonomy or the duty to help, each profession has its specific values and principles. It will therefore be necessary to analyze the impact of MAI on the relationships in each field.

Keywords Artificial agent · Autonomy · Empathy · Mental health · Nursing · Responsibility · Shared decision-making · Therapeutic relationship
Since MAI will transform relationships in all professional contexts of healthcare, one can say that it will change healthcare as a whole. What this change will look like is still an open question. Some believe that MAI will usher in a new era of medicine characterized by the personalization paradigm and a fundamental empowerment of patients (Topol, 2019). According to this view, the new medicine will be a democratic one, where empowered users of MAI-based health technology and healthcare professionals encounter each other at eye level. Others assume that this new era will bring about a new kind of medical paternalism, enhanced by dataveillance and datafication (Sparrow & Hatherley, 2020; McDougall, 2019). On the one hand, MAI-enhanced control over individual health data may enable governments to implement health agendas. On the other hand, commercial agents that provide MAI technologies may increasingly treat healthcare as a profitable market where individual data is the currency. In this view, paternalism and commercialization are more probable outcomes than the democratization envisioned by some commentators. Hence, not only interpersonal relationships will change, but also the relationship between the individual and meso-level as well as macro-level agents within the healthcare sector.

Since MAI technologies will transform all practices in the health sector, from administration to diagnostic and therapeutic techniques to communication, they will also have an impact on the organization of work. This means that relationships within professions (intraprofessional) and between professions (interprofessional) will change. This will affect the organization of work, team dynamics, and the self-image of the different professionals involved. One crucial aspect in this regard is automatization. Automated processes might not only change the way healthcare professionals do things, but also their roles and tasks. The big question here is whether automated systems will replace healthcare professionals or whether the latter will face new tasks and responsibilities. This is especially relevant since some MAI technologies are not simply new and more sophisticated tools, but rather new agents in healthcare. Automated systems that are able to make decisions, like CDSS, or even perform physical actions independently, like robots, create a novel situation. For the first time, we are not only facing new technologies or complex machines, but non-human agents. Hence, it will be important to investigate how MAI technologies may change the roles and self-image of healthcare professionals.

In this chapter, I will discuss how MAI technologies will transform interpersonal relationships between healthcare professionals and patients and what this means from an ethical point of view. Since this transformation affects all medical professions, I do not exclusively focus on doctors, but also include nurses and therapists. I investigate the impact of MAI on the healthcare sector as a whole and address the question whether democratization or paternalism and commercialization are the more likely paradigms in the new era of medicine. This means looking at the new roles and self-images that are discussed in the context of a MAI-enhanced healthcare sector. The roles of patients as well as healthy individuals as users of MAI technologies and healthcare professionals are still undefined in this new setting. From an
ethical perspective, this means that the values and principles that shape their interactions and relationships are also unclear. Building on the framework introduced in Sect. 4.4, I understand the transformation of relationships and roles as a consequence of the shift in epistemic practices through MAI. My analysis rests on the assumption that the therapeutic relationship is the crucial enabler of autonomy, shared decision-making, and trust in the clinical encounter. Understanding how MAI-based practices transform this relationship and affect the roles in medicine also requires considering the social determinants and power asymmetries that shape these roles. It is of particular interest whether and how these smart data practices adequately reflect social determinants and whether they mitigate or perpetuate, maybe even increase, power asymmetries. Again, I follow a critical approach, making use of epistemic lenses from critical data studies for analyzing these aspects.
6.1
Therapeutic Relationship
Patient-centeredness is often considered the gold standard of the modern patient-doctor relationship (see Sect. 4.2.2). Engaging patients in shared decision-making is the key aspect here (Kerasidou, 2020). Whereas the paternalistic model rests on medical expertise, understanding the patient's perspective, their needs and values, is the basis of patient-centeredness. This means that the medical competency of doctors should also include moral competency, meaning the awareness of the patient perspective and the skills to integrate it into the decision-making process. Empathy is the key competency in this regard.
6.1.1
Empathy in a MAI Setting
In Sect. 4.2.3, I have discussed the role of empathy in the therapeutic relationship. The concept of clinical empathy is especially relevant here. It aims to balance cognitive and affective empathy and involves strategies for regulating and limiting the emotional involvement of doctors.

One issue regarding the empathy of doctors may arise from the focus on data in a MAI setting. In Sect. 5.2.1, we have seen how the epistemic practices of doctors may lead to a reductionist view of the patient that potentially undermines empathy. The standardization of patient data and the need to follow certain protocols and predefined documentation tasks linked to the EHR may contribute to an increasing formalization of the patient encounter (Boonstra et al., 2022). The result could be that doctors have to ask a set of standard questions required by the EHR, even though they do not relate to the individual patient and their specific situation. In such a scenario, doctors would have to follow the operational logic of the EHR, which leaves them little room for including patient preferences or values. Some commentators even speak of "the disappearing patient" (Hunt et al., 2017), which refers to the fact that a large part of
the documentation the EHR requires does not directly relate to the patient, but to institutional or financial aspects. Examples could be standardizing data for reimbursement calculations or providing data for research purposes (Boonstra et al., 2022). Doctors may end up spending more time on administrative tasks and following standardized protocols for the sake of efficiency. This undermines the crucial argument that MAI technologies may free doctors from tasks not directly related to the patient encounter and thus enable empathy. On the contrary, the result could be even more elaborate administrative tasks and less time for patients.

The formalization and standardization of the patient encounter could also undermine empathy due to its focus on patient data. Again, reductionism as an outcome of digital positivism is the main risk here. The MAI-enhanced clinical gaze could reduce patients to a set of quantifiable variables, thus ignoring their personhood. In Foucault's terms, this would mean regarding the patient as a body and ignoring the person as a subject. The sociologist Zygmunt Bauman introduced a fitting concept in this regard, although he did not focus on medicine. Bauman analyses digital surveillance technologies and the connection between the epistemological and social practices attached to them (Bauman, 2006; Bauman & Lyon, 2013). In his view, digital surveillance technologies disassemble persons into data packages according to predefined variables and parameters. These data packages are later reassembled into what Bauman calls the data double of a person. The social practices involved focus exclusively on the data double, which obscures the actual person behind it. The reductionist focus on the data double as an aggregation of functionally specific traits replaces a holistic view of the person. As a result, interacting with the data double may desensitize healthcare professionals to the fact that they are making decisions that affect a human being. Bauman calls this adiaphorization, i.e. the detachment of social practices from moral evaluation caused by techniques of digital surveillance (Bauman, 1991). The epistemic focus on variables and traits linked to digital positivism may lead to a situation where practices also focus solely on these variables and traits. As a consequence, the impact of decisions and actions on the person as a whole may be ignored.

Adiaphorization with regard to empathy may easily occur in a MAI setting. Reductionism could cause doctors to focus only on quantifiable digital data and to interact with the data double of the patient. Take the example of the digital twin technology discussed in Sect. 3.3.2: Doctors could interact with the model to make predictions without having to put the patient through strenuous and potentially risky testing, e.g. of drugs or treatment options. This also means that doctors increasingly interact with the digital twin as the digital double of the patient. Doctors may forget that they are not just adjusting the variables of a model, but treating a human being. The increasing mediatization through MAI, meaning that doctors view their patients more and more through the lens of data-intensive technologies, could thus undermine their empathy. The focus on quantifiable health data could result in ignoring the patient's values and motivations as crucial factors in the clinical process. Person-centered care requires doctors to include these factors in the decision-making process.
Tailoring treatments to the needs and requirements of an individual patient, the main goal of personalized medicine, cannot simply be reduced to biomedical
aspects. Understanding the patient's values and motivations, which is an integral element of clinical empathy, plays a crucial role here (Kerasidou, 2020). Doctors must be aware of the plurality of values that might motivate patient priorities as well as decisions and find ways to integrate them into the process of shared decision-making (McDougall, 2019). A reductionist view of the patient, which MAI technologies might foster, could be an obstacle for patient-centered care in this regard.

But not all commentators share these concerns. There is also the viewpoint that the use of MAI technology may actually enable a more empathetic clinical practice. The most prominent advocate of this view is Eric Topol, who even considers this potential of MAI its most important feature. According to Topol, deep empathy could be the result of a MAI-enhanced clinical practice (Topol, 2019). While MAI technologies could perform repetitive tasks that do not directly relate to the interpersonal interactions with the patient, doctors could focus on the therapeutic relationship. The crucial aspect here is time. As some commentators argue, MAI may free health professionals from certain tasks that are not directly patient-centered, such as administration or data processing (Aminololama-Shakeri & López, 2018; Holtz et al., 2022). Automatizing these tasks and processes could give healthcare professionals more time to spend with patients (Topol, 2019). Hence, as some argue, MAI technologies could be an important enabler of empathy.

Several studies have established the crucial importance of time for the patient-doctor relationship, showing that although consultation length varies between countries, there is a correlation between the time doctors take for their patients and the patients' perception of empathy (Deveugele et al., 2002; Greg et al., 2017). Various factors determine consultation length, among which the workload of doctors and the overall limited resources within the healthcare sector are especially prominent. Given that MAI technologies have the potential to perform various tasks in a fast and efficient way, they could reduce the workload of doctors. Freed of exhausting tasks, doctors could invest more of their time in consultation and the therapeutic relationship in general. The simple equation behind this approach is: more MAI means more time, which equals a more empathetic therapeutic relationship.

Some commentators have suggested that this account is overly optimistic and underestimates the socio-economic and institutional factors that surround the implementation of MAI in clinical practice (Sparrow & Hatherley, 2020). Rather than just creating more available time for health professionals, a MAI-enhanced setting would enable healthcare institutions to serve more patients in the same time. Given that resources in healthcare are always limited, healthcare providers would instantly use any new availability to increase financial effectiveness rather than to improve the patient experience. One could argue that which prediction of the MAI future one chooses is simply a matter of an optimistic or pessimistic worldview. However, it is difficult to see why the introduction of MAI technologies would change the entire economic logic of the healthcare system overnight. This is a perfect example of so-called solutionism, i.e. the approach of fixing primarily social, ethical, or political problems by technological means (Morozov, 2013).
Outsourcing time-consuming tasks is the technical fix for the problem of short consultations and the overall time shortage on the part of doctors.
The issue here is not so much the technical fix itself. Some MAI applications could actually be viable tools for reducing the workload of healthcare professionals. Solutionism is problematic because it takes the focus away from the underlying problem, the shortage of resources, in favor of fighting one symptom. By doing so, a solutionist strategy makes it seem as if technology could fix all problems, which obscures alternative ways of dealing with the issue. The socio-economic problem of scarce resources in healthcare needs an adequate, i.e. political, solution. Technical fixes might therefore not only be insufficient because they deal only with symptoms, but may also serve as a fig leaf covering the real issues and their rather inconvenient solutions.

That does not mean that MAI cannot contribute to fixing issues in healthcare. It simply means that implementing technology alone is insufficient for achieving such an improvement. As we have seen several times before, a technical fix is only useful when we embed it in a broader set of measures that target the social, economic, and institutional factors involved. MAI can only be an enabler of a more patient-centered medicine if it is accompanied by a paradigm change. That means that certain notions of patient-centered medicine have to be combined with the willingness to realize them, also in terms of allocating resources. Topol's vision of freeing doctors from time-consuming tasks and allowing them to invest more time in an empathetic therapeutic relationship depends not only on implementing technology, but also on defining patient-centeredness as a purpose of implementing it. Even Topol himself acknowledges this when he states that "[m]uch of what's wrong with healthcare won't be fixed by advanced technology, algorithms, or machines" (Topol, 2019, 14).

The use of the EHR serves as a good example here. In principle, this technology could enable doctors to spend more time on meaningful and empathetic interaction with patients, just as Topol envisions. In reality, however, some of these applications may have the exact opposite outcome and cause even more administrative tasks and less time for the patient. Truly implementing these technologies in a patient-centered manner would imply adapting resource allocation so that the time saved through MAI practices does not simply lead to more patients being fed into the system. Hoping that the use of MAI technologies will somehow automatically bring about patient-centeredness is unfounded. Besides the willingness to define patient-centeredness as an outcome of implementing MAI, strategies are needed to realize this goal.

More time for the patient does not necessarily mean a more empathetic and patient-centered medical practice. This time should be used for meaningful patient-doctor interaction. I suggest implementing narrative medicine as a core aspect of the MAI era (Rubeis, 2020b). Advocates of narrative medicine claim that modern medicine, with its focus on scientific evidence and technology, has depersonalized patients (Charon, 2001; Charon, 2016a, b). According to this view, individual patient experience plays only a very minor role in medicine, which tends to reduce patients to aggregates of biomedical information. This is one aspect of the reductionism we have already encountered. Especially EBM often fails to recognize the individuality of patients and the complex interactions and relationships that shape it. Most doctors lack listening skills as a result of this development.
Close listening to patient narratives beyond
symptoms could be a way for doctors to understand patients and view them as persons. It could also enhance empathy on the part of doctors and enable shared decision-making through a better understanding of the underlying beliefs, convictions, and motivations behind a patient's opinion. Given that high workload and time pressure are the main obstacles for narrative medicine, the potential of MAI to realize this approach becomes evident. If the equation MAI equals more time for doctors is made possible by political and institutional efforts, then narrative medicine could be one perspective for a person-centered medicine. Hence, the technical fix will only work when combined with narrative medicine as a paradigm for a truly patient-centered new medicine.
6.1.2
Empathetic MAI
Given the concept of clinical empathy, the crucial question arises whether MAI can be empathetic in this sense. Some argue that MAI can already master several features linked to empathy, such as detecting visual data that indicate emotional states and interpreting them correctly. On this view, taking the perspective of the other and understanding them as a moral agent is a task for a more sophisticated technological approach that will surely be developed in the not-too-distant future (Vallverdú & Casacuberta, 2015). In fact, some MAI applications have even outperformed human doctors when it comes to empathetic communication. One example is a recent study on chatbot responses to patient questions in a social media forum (Ayers et al., 2023). ChatGPT was used as a chatbot assistant and responded to randomly selected patient questions. The same questions were answered by human doctors. A team of health professionals evaluated the answers of both groups and found the chatbot's responses more empathetic than those of the human doctors. Although this is just a first result and more research on the topic is definitely necessary, it shows that some MAI applications have the potential to act in an empathetic fashion, at least in a limited way. In this view, machine empathy is simply a matter of data analytics.

Artificial empathy or empathetic AI has become a research field that includes various applications such as social robots or conversational chatbots (Srinivasan & San Miguel González, 2022). Systems for emotion recognition and regulation are also sometimes referred to as empathetic AI (Vallverdú & Casacuberta, 2015). We have already discussed one problematic example of this type of application (see Sect. 5.2.1). One crucial problem here is that MAI systems use proxies to detect emotions, such as facial expressions or voice modulation (Stark & Hoey, 2021). This proxy data may vary across social groups and may also be bound to cultural factors. Hence, the potential for generalizable models is questionable. Another issue is that the way the system interprets and deals with this information depends on the model of emotions it rests upon. Since there are various different models, the generalizability and validity of the output is again uncertain. Framing empathy as a technical issue, i.e. a matter of data input and computational techniques, is again a result of
reductionism and solutionism. What this approach ignores is that empathy, like all emotions, is a complex, multidimensional phenomenon that is fluid across societies and social groups. It therefore largely depends on the interpretative framework, consisting in this case of theories of emotion and their manifestations in human behavior.

In medicine and healthcare, it is important to specify that MAI needs to fulfill the requirements of clinical empathy, not empathy in general. As we have seen, clinical empathy is a very complex skill that combines understanding the emotions of others and responding to them with regulating one's own emotional involvement. This delicate balance is extremely difficult for human agents to achieve. It is therefore no surprise that the measurement and quantification of clinical empathy is a matter of intense debate. One could argue that if we are not able to exactly measure and quantify the features of clinical empathy, we will not be able to design MAI systems that master this skill. Several models for measuring and quantifying clinical empathy have been developed, mostly for medical education, such as the Jefferson Scale of Empathy (JSE) (Hojat et al., 2023).

But even if MAI applications were able to detect and correctly interpret emotions, a great deal of skepticism remains when it comes to the potential of MAI for empathy. When we look at the methods MAI systems use for interpreting emotions and acting upon them, we find that this ability is limited to cognitive empathy, whereas affective empathy is hardly attainable for any AI system (Montemayor et al., 2022). This is due to the simple fact that AI systems lack embodiment and the ability to feel what others feel. Understanding the emotions and feelings of others is not a mere cognitive matter. It entails resonating with the emotions of others in order to be able to imagine things from their perspective. This is a skill that has to be learned and evolves as the result of a reciprocal human experience that involves trust, social understanding, and validation (Brown & Halpern, 2021). It is simply a fallacy to assume that empathy is somehow the result of a reasoning process. Empathizing with others is an experience and not something that we rationally infer (Montemayor et al., 2022). Hence, AI systems are in principle incapable of empathy. It is not a matter of computing power or data input, but of the lack of affective resonance.

Even if we found ways of quantifying the parameters of clinical empathy and designed MAI applications accordingly, this would not fully resolve the issue at hand. A setting in which empathy is outsourced to machines may affect the skills of doctors (Kerasidou, 2020). In a way, this could imply that doctors are relieved of the burden of empathy, which would substantially change the profession. Empathy would become something operationalized and standardized, and the special bond between doctors and patients that was hitherto based on a human aspect would be lost.

Whether intelligent machines in general and MAI in particular can be empathetic in the full sense is still a matter of debate and requires both conceptual and empirical research. Given the complexity of clinical empathy and the need for interpersonal relationships in medicine, it seems unlikely that MAI technologies will fully replace human doctors in terms of empathy. However, it could be possible to apply MAI technologies for limited tasks that involve some level of empathy.
Empathetic AI could for example be used for chatbots in different settings or other kinds of
communication interfaces. In some contexts, this could be a helpful addition to existing services. However, empathy in the full sense cannot be expected from MAI technologies due to the aforementioned in-principle arguments. Empathy requires affective resonance, which in turn requires the ability to have experiences and not just the capacity for logical inference. Since affective resonance is also a crucial element of clinical empathy, truly empathetic MAI systems are unrealistic. In addition, clinical empathy requires a complex set of skills that includes regulating one's own emotional involvement. This delicate balance between distance and involvement goes beyond any quantifiable variables and has to be learned by social beings and adapted to each individual patient encounter. The vision of outsourcing empathy to MAI is therefore unrealistic and could also lead to a deskilling of doctors in terms of clinical empathy that would negatively affect the therapeutic relationship.
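To make the proxy problem discussed above concrete, the following minimal sketch shows the basic pattern by which an "empathetic" MAI component typically operates: a standard classifier maps proxy features to discrete emotion labels. All feature names, data, and labels here are hypothetical; the point is that such a system only ever sees proxies, never felt emotion, and any cultural variation in those proxies carries straight through to its output.

```python
# Illustrative sketch only: emotion recognition from proxy features.
# All feature names, data, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Proxy features per sample: [smile_intensity, brow_raise, voice_pitch_var]
X_train = rng.normal(size=(200, 3))
# The label scheme ("neutral" vs. "distressed") already commits the
# designers to one particular model of emotion.
y_train = (X_train[:, 2] + 0.5 * X_train[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)

# The system never observes an emotion, only proxies for it. Where
# expressive norms differ across social groups, the same felt state
# yields different proxy values -- and thus a different prediction.
patient_features = np.array([[0.1, 1.2, 0.8]])
print(clf.predict(patient_features))        # predicted label (0 or 1)
print(clf.predict_proba(patient_features))  # model confidence, not empathy
```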
6.1.3 Shared Decision-Making and Autonomy
Many MAI systems, first and foremost CDSS, aim to support clinicians in making decisions by building models based on individual health data. However, a clinician's decision is only part of the decision-making process. Given the principle of autonomy and its operationalization as informed consent, patients have to agree with what clinicians recommend as a prerequisite of any medical action. In Sect. 4.2.2, we have encountered the paradigm of shared decision-making as the foundation of a deliberative therapeutic relationship. Shared decision-making refers to a deliberative process based on medical expertise, empirical evidence, and patient preferences. Doctors and patients form a mutual agreement on which actions should be taken, e.g. which treatment option to apply, and define their responsibilities within the treatment process. The crucial elements here are information and communication: There has to be a solid evidence base, and doctors have to be able to communicate the benefits, risks, and potential of different treatments as well as alternatives.

Some commentators regard CDSS as a potential enabler of shared decision-making due to their ability to integrate a wide range of data and knowledge (Abbasgholizadeh Rahimi et al., 2022). CDSS can process large amounts of individual health data and also recommend personalized treatment options. This offers insights that surpass conventional uses of data for decision-making. Doctors can use predictive models to communicate risks and thus enhance the patient's ability to assess and evaluate treatment options. CDSS might thus promote patient engagement and enable better communication between patients and their doctors. The systems can also help to present risk estimates in an interactive and individualized way and contribute to a better information process (Hassan et al., 2021). The use of CDSS for patient information and decision aids has also proven more efficient than conventional education methods. The main benefits are an increase in decision quality and patient satisfaction as well as better functional outcomes (Jayakumar et al., 2021). CDSS might thus be viable tools for strengthening shared decision-making by providing a better evidence base, integrating a variety of
treatment options, and allowing an easier and more personalized communication of risks and benefits.

Despite these potential benefits, there are also concerns that MAI technologies may negatively affect the decision-making process. As some authors suggest, the increased application of MAI systems might bring about a third-wheel effect that slows down or disrupts shared decision-making (Triberti et al., 2020): First, MAI systems could misinterpret patient data and produce false results, which in turn affects the basis for decision-making. Patient autonomy largely depends on doctors providing information, which allows patients to make valid decisions on treatment options. Hence, the quality and validity of this information is of the utmost importance. Second, opaque MAI technologies might give recommendations that are difficult to understand or to explain. Explainability, as already discussed, is the crucial issue here (see Sect. 5.2.3). In a concrete clinical situation, an incomprehensible or inexplicable recommendation by a CDSS may slow down the decision-making process (Triberti et al., 2020). Transparency also implies that doctors have to disclose any MAI involvement in the decision-making process. Third, including MAI technologies in shared decision-making may cause confusion regarding roles and responsibilities. Given the fact that CDSS not only provide information or analyze data, but also make explicit recommendations regarding treatment options, the use of these systems may undermine the professional autonomy of doctors (Lorenzini et al., 2023). This may cause uncertainty on the part of patients regarding the role of doctors as well as the role of MAI as a non-human agent within the decision-making process. Shared decision-making usually relies on the doctor-patient dyad. Both have clearly defined roles within this relationship, each with their responsibilities and rights. The doctor's medical expertise, together with respect for the values and preferences that motivate a patient's decision, enables patient autonomy. Introducing MAI as a third party may disrupt this relationship, since the status of technologies like CDSS is unclear. Patients may not be sure whether this is just another tool doctors use for gaining information or whether algorithmic decisions replace the decisions by doctors. This role ambiguity may lead to confusion on the part of patients, which could diminish their trust in the decision-making process (Triberti et al., 2020).

One strategy for dealing with these issues is to include patient preferences in the algorithmic decision-making process. Methods for including patient preferences in CDSS-aided shared decision-making have already been developed and clinically evaluated (Sacchi et al., 2015). Two aspects are important here: First, to take a thick data approach that combines qualitative data with quantified data and human learning with machine learning (see Sect. 5.2.1). Doctors must be aware of the epistemic limits of CDSS and avoid digital positivism by integrating qualitative aspects like patient preferences, values, and narratives into the decision-making process. Second, such a thick data approach requires relational autonomy (see Sect. 4.2.1). Following this approach, a strong therapeutic relationship where doctors form an emotional bond with patients and regard them as persons is key.
By understanding how the individual life situation and social determinants shape the uniqueness of the patient, doctors get a full picture of the patient's
motives and values. Based on this information, doctors can enable patients' decision-making by contextualizing medical facts with their specific situation and thus empower their autonomy. This way, doctors can be enablers of shared decision-making and avoid the pitfalls of digital positivism.
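One way to picture what including patient preferences in CDSS output could look like is a simple multi-attribute ranking, sketched below. This is a toy illustration, not the actual method of Sacchi et al. (2015): the options, attribute estimates, and weights are all invented, and the elicitation of weights is exactly the qualitative, "thick" step that has to happen in conversation between doctor and patient.

```python
# Toy sketch of preference-sensitive decision support. Options,
# attribute estimates, and weights are invented for illustration.

# Hypothetical attribute estimates per option (e.g., from predictive
# models), scaled to [0, 1], where higher is better.
options = {
    "surgery":          {"efficacy": 0.85, "low_side_effects": 0.40, "short_recovery": 0.30},
    "medication":       {"efficacy": 0.60, "low_side_effects": 0.70, "short_recovery": 0.90},
    "watchful_waiting": {"efficacy": 0.35, "low_side_effects": 0.95, "short_recovery": 1.00},
}

# Weights elicited from the patient in conversation with the doctor --
# the qualitative input that data analytics alone cannot supply.
patient_weights = {"efficacy": 0.3, "low_side_effects": 0.5, "short_recovery": 0.2}

def score(attrs, weights):
    """Weighted additive utility over the shared attribute set."""
    return sum(weights[a] * attrs[a] for a in weights)

ranking = sorted(options, key=lambda o: score(options[o], patient_weights), reverse=True)
print(ranking)  # ['watchful_waiting', 'medication', 'surgery'] for these weights
```

A patient who weighted efficacy highest would see a different ranking from the same clinical estimates, which is precisely the sense in which the preferences, not the data alone, drive the recommendation.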
6.1.4 Responsibility and Liability
The use of MAI technologies, particularly CDSS, raises questions of both moral and legal responsibility concerning the outcomes of MAI-aided decision-making. Compared to other technologies, AI poses a specific problem that has been called the responsibility gap (Matthias, 2004): Whereas developers or producers of a technical artefact or tool can be held responsible for malfunctions or unintended outcomes, the same does not hold for AI, since many of its outcomes are in principle unforeseeable. This is due to the fact that AI systems are designed to be autonomous, at least to some extent. Hence, they differ substantially from previously used technologies. This also renders the instrumentalist view obsolete, according to which technologies are mere tools designed by humans for specific purposes (Gunkel, 2020). Since AI technologies imply processes beyond direct human control, the traditional view that holds humans responsible for how they use their tools does not apply anymore. The instrumentalist position simply fails to account for the complexity and dynamics that AI technologies bring.

Two conditions must be fulfilled for taking responsibility: knowledge and control (Coeckelbergh, 2020). In order to be responsible, an actor must be aware of the nature and consequences of an action and possess the freedom to control it. In the case of automated AI, the system may make decisions or perform actions without the intervention of a human operator. The lack of control over automated decisions and actions thus causes the responsibility gap, since it is unclear who is to be held responsible when unintended consequences result.

The main question is whether a MAI technology can be held responsible for an action it performs (Verdicchio & Perin, 2022). If we look at knowledge and control as conditions for responsibility, one could argue that MAI systems may fulfill both. When fed with the right data and equipped with the right parameters, a MAI system is capable of foreseeing the outcomes of its actions. Think of a robot that is able to evade humans crossing its path. The system possesses the knowledge that it should not bump into a human (and probably any other obstacle) as well as the data for performing an evasive maneuver. The robot is able to control and direct its movements without the intervention of a human operator. Hence, it possesses both knowledge and control. One could therefore argue that AI technologies that possess relative autonomy in making decisions and performing actions may appear as responsible agents (Coeckelbergh, 2020). However, some commentators state that we can only ascribe responsibility to human agents. Even if we agree that the instrumentalist position of technology as a mere tool does not apply to AI, we still have to acknowledge that these technologies
are designed for specific tasks and purposes. Hence, an AI system cannot be held responsible, for the simple reason that it just does what it is meant to do: It classifies variables, builds models, and makes predictions based on the data it is fed (Bartneck et al., 2021). AI systems act only with relative autonomy and cannot decide on the purposes of their actions or choose an option for action beyond their operational logic. What we are facing here is the question of moral agency (see Sect. 4.3.1). I have already established that AI systems, at least domain-specific ANI like MAI, cannot be considered moral agents, since they are inherently tethered to human designs, decisions, and purposes. The fact that they may be considered agents does not mean that they are moral agents, since their agency is defined by and bound to human decisions. When ascribing responsibility, it is therefore insufficient to focus on algorithmic decisions, since those are always part of a broader context of social practices. Since MAI systems are inextricably linked to humans and shaped by their social practices, responsibility for their actions always lies with humans.

This becomes even more evident if we look at the liability aspect, i.e. legal responsibility and accountability. As several studies have shown, unclear liability is one of the major concerns of clinicians regarding the implementation of MAI in clinical practice (Scott et al., 2021). Hence, it is crucial to address this issue in order to provide legal security for doctors and implement MAI into the clinical workflow. A computer system, software, or robot is not a legal subject and hence cannot bear rights or be sued. Since a computational entity cannot be punished for its actions, it cannot be made liable for them (van Wynsberghe, 2014). Ascribing responsibility to a MAI system would therefore be a purely academic exercise, since liability is crucial for medical practice. In cases of misdiagnosis or mistreatment where patients suffer harm from a wrong decision, it is important to hold someone responsible in a legal sense. A concept of moral responsibility without the legal aspect of liability is useless in medicine.

Now that we have established that MAI cannot be held responsible, we have to identify those actors that can. As some argue, responsibility for negative outcomes of a system has to be ascribed to those who oversee or operate the system (Bartneck et al., 2021), which in the context of medicine would mean healthcare professionals. When we consider knowledge and control as prerequisites for responsibility, several issues arise here. Concerning knowledge, we have to ask whether a human actor can be held responsible for an action that they do not fully understand. As we have seen, this is often the case with AI decisions. The black-box nature of many algorithms often makes it impossible for doctors to understand why the system made a certain decision. It would be unfair to ascribe responsibility to doctors for something they do not even understand. This is another strong argument for the transparency and explainability of algorithms. Holding a doctor responsible requires that they have control over the process they are responsible for. This can become complicated when we deal with automated MAI processes. If a doctor cannot control, for example, a CDSS that automatically adjusts the dose of a drug, how can we attribute responsibility to them? One answer is to prevent such a scenario altogether by ensuring that there always is a
human in the loop. In medicine, this might be the consensus, given the specific role of doctors. However, there may be settings where automatization is a goal. Think about the algorithm discussed by Obermeyer and colleagues. This algorithm sorted patients into different risk groups, thus defining their access to medical services. In this case, automatization of decision-making is highly desirable from a payer's perspective, since it is cost-effective. But what about responsibility here, when no human is involved in the decision-making process? To sum up, in order for doctors to be able to take responsibility for MAI technologies and their outcomes, knowledge in terms of explainability and transparency as well as control over the actions of MAI technologies are preconditions.

What further complicates this matter are the complex interactions of different actors in MAI-supported decision-making. The issue of responsibility in the context of MAI can thus be considered "a problem of many hands" (Chen et al., 2022): The fact that various actors are involved in an MAI-aided process makes it difficult to ascribe responsibility. Besides healthcare professionals, MAI manufacturers and medical institutions could be named here. Each actor contributes to the outcomes of MAI-aided decision-making in different ways and on different levels or stages of the process.

MAI manufacturers basically design a product. Hence, it could be argued that existing regulations for product liability can help us to resolve the problem of many hands. One of these principles is strict liability, which applies to any kind of harm a product causes to customers, even unintended consequences (Bartneck et al., 2021). In the medical field, strict liability applies to negative outcomes of vaccines. In cases where vaccines cause unintended consequences, the company that produces the vaccine can be held responsible for compensating the damage. One could therefore apply this principle to MAI technologies in a similar way. Accordingly, the EU has proposed an AI Liability Directive that includes regulations based on strict liability for highly sensitive AI applications such as MAI (Buiten et al., 2023). However, the issues of knowledge and control also arise here. Concerning knowledge, how can MAI manufacturers take responsibility for algorithms they sometimes do not even fully understand themselves? This is the problem of algorithmic opacity that we have encountered in Sect. 4.2.3. Concerning control, it is difficult to see how MAI manufacturers can exert control over an MAI system once it is out in the wild. Again, this is especially an issue with automated processes. As a result, the issues of knowledge and control also make it difficult to ascribe full responsibility to MAI manufacturers.

Some commentators argue that the employer, meaning the medical institution that provides the MAI system, has a responsibility to oversee the bidding process, algorithm audits, and risk control (Chen et al., 2022). Hence, medical institutions are obliged to conduct some kind of quality control when implementing MAI technologies. If this fails, the medical institution is liable. This is especially the case when a MAI system is supposed to act autonomously. One could therefore argue that since medical institutions decide upon and oversee the deployment of MAI technologies, they are also responsible for their outcomes (Palmer & Schwan, 2023).
Although medical institutions of course have a responsibility for the deployment of technologies, the same issues of knowledge and control arise here. Even if measures of
quality control like audits are in place, they cannot be expected to foresee or prevent all negative outcomes caused by MAI systems. The basic problem here is that each actor involved can only act within their domain of knowledge and control. There is no single actor that could possess overall knowledge or control throughout the whole MAI decision-making process. Some actors may have more responsibility than others, according to their contribution to the process. This is also a temporal issue, since different actors perform different actions at different stages. The problem of many things aggravates the problem of many hands: A MAI-aided decision-making process may include different kinds of software and material artefacts, e.g. medical devices, sensors, etc. That means it is often unclear whether the AI itself, meaning the machine learning algorithm, or some other technical component, such as a sensor, caused a misdiagnosis or a false treatment recommendation.

Strategies

When looking at the different positions, it becomes clear that it is impossible to categorically ascribe responsibility and liability to one actor. I will therefore suggest an approach of distributed responsibility, which requires specifying the nature of the MAI-aided practice and the function each actor exerts in it in order to determine who is to be held responsible and/or liable and to what extent (Chen et al., 2022). The responsibility and liability of a doctor depends on their role within the process according to their degree of knowledge and control. In a typical setting involving a CDSS, for example, it would be the doctor's responsibility to review the system's output. Since an algorithmic decision is not the final decision, doctors bear responsibility. This could also mean that doctors are responsible for choosing the appropriate MAI technology. In cases where doctors choose to use an opaque MAI system that they do not fully comprehend, they have to take responsibility and cannot simply refer to the black-box nature of the system (Smith, 2021). In a fully automated scenario where MAI systems act as doctors themselves, no human doctor could be held responsible. This is a purely hypothetical scenario at the moment, but with the advancement of MAI such a full automatization could become possible. In such a case, the medical institution could be held responsible, since it provides the service. It is therefore the duty of the institution to ensure that only products that adhere to quality standards are used. Depending on the nature of the negative consequence or harm caused by a MAI-aided practice, the manufacturer of the technology could also be the responsible and liable party. If the algorithm does not work the way it is supposed to, or in case of a privacy breach, the responsibility should be assigned to the manufacturer.

Such an approach of distributed responsibility considers the complexity of applying MAI technologies and differentiates between the actors and actions involved as well as between different types of negative outcomes. The advantage of this approach is that it clarifies the roles of each actor within the process and does not rely on deciding the question of whether MAI possesses certain features like consciousness, intentionality, or free will. It also fits with existing and established notions of responsibility. Doctors are responsible for their decisions and actions.
Medical institutions are responsible for ensuring the safety of the technologies they use and the services they provide. Manufacturers are responsible for product safety and data security. This is a fair and transparent attribution of responsibility that also makes it possible to determine legal liability. Instead of a categorical stipulation that defines whether humans or MAI systems should be held responsible, this approach is flexible and can be applied to different scenarios. There might be cases where an attribution of responsibility is difficult, for example when it cannot be clearly determined where the error lies or because more than one negative outcome occurred, e.g. a privacy breach and a harmful decision by doctors. In such cases, an interdisciplinary investigation could be launched that determines the causes and attributes responsibility to the actors (Whitby, 2015).
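As a rough formalization, one can think of this distributed-responsibility approach as a mapping from failure types to primarily responsible actors, with an explicit fallback to investigation where no clean mapping exists. The sketch below is a toy model; the failure categories and actor names are invented for illustration and are not a legal taxonomy.

```python
# Toy formalization of the distributed-responsibility approach sketched
# above. Categories and actors are illustrative, not a legal taxonomy.

RESPONSIBILITY_MAP = {
    "algorithm_malfunction": "manufacturer",  # product does not work as specified
    "privacy_breach":        "manufacturer",  # data security is a product property
    "unvetted_deployment":   "institution",   # audit/quality control failed
    "autonomous_operation":  "institution",   # institution chose full automation
    "unreviewed_output":     "doctor",        # doctor skipped reviewing the CDSS
    "opaque_tool_choice":    "doctor",        # doctor chose a system they cannot explain
}

def attribute_responsibility(failure_type: str) -> str:
    """Map a failure type to the primarily responsible actor, if known."""
    actor = RESPONSIBILITY_MAP.get(failure_type)
    if actor is None:
        # No clean mapping: fall back to the interdisciplinary
        # investigation suggested in the text, rather than forcing
        # an attribution.
        return "investigation_required"
    return actor

print(attribute_responsibility("unreviewed_output"))  # doctor
print(attribute_responsibility("sensor_drift"))       # investigation_required
```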
6.1.5 The Role of Doctors
Before analyzing the transformation of the role of doctors by MAI technologies, one has to define their current role. What makes a doctor? What makes a good doctor? There is a certain view of doctorhood, meaning that being a doctor is not simply a profession, but a specific role linked to a set of skills and attributes. To some, doctorhood is built around empathic engagement and detachment and tied to duties within the therapeutic relationship (Kazzazi, 2021). Others state that being a good doctor implies combining medical skills and expertise with interpersonal skills such as empathy and honesty (O'Donnabhain & Friedman, 2018). It is also important to note that different stakeholders in the medical context hold different views regarding being a good doctor. Whereas doctors themselves emphasize medical skills and expertise, patients regard interpersonal and communication skills as highly important (Steiner-Hofbauer et al., 2018). In order to fulfill both requirements, a good doctor has to be an applied scientist and a medical humanist (Hurwitz & Vass, 2002). One could therefore argue that the role of doctors is complex and multifaceted: A good doctor possesses clinical skills and medical expertise, is an applied scientist and practical thinker, does not shy away from making decisions and takes responsibility for them, is empathetic where needed and distanced where necessary, treats patients with respect and regards them as persons instead of diseases or cases, and is a trustworthy and honest communication partner.

MAI has the potential to significantly transform the role of doctors in different ways. Some argue that doctors will have to adapt to the new MAI-enhanced medicine (Liu et al., 2018). In a setting where MAI systems outperform humans in several tasks like diagnosing diseases, predicting outcomes, and performing surgery, doctors will have to learn to handle these technologies or be replaced by them (Ahuja, 2019). In several medical fields, most prominently radiology, this process has already begun. Radiology was one of the first disciplines to implement MAI technologies and has been the focus of debates on the role of the physician. As some argue, MAI technologies will not necessarily replace human radiologists as such, but rather those who are unable to use them (King, 2018). This leads some authors to expect an intermediary role
of doctors in a MAI setting, where doctors will still perform some traditional tasks that involve a high level of patient contact, like physical exams (Ahuja, 2019). In this view, doctors will be the experts for everything that requires interpersonal skills, whereas MAI will take over most tasks that involve data processing. But what does that mean for the role of doctors as outlined above? Will MAI replace the applied scientist, and will doctors therefore focus on being medical humanists?

The first possible role is what I call an enhanced practitioner. Doctors may use MAI technologies for obtaining the best available information and thus strengthening the evidence base for their clinical decisions. MAI technologies may enable a more precise diagnosis or more accurate predictive models for risk assessment. Doctors could thus use MAI as the ideal tool for putting their medical expertise into practice. This would enhance their skills and make them more efficient. As some argue, this would not be the first instance of such a development in medicine. In fact, there are numerous examples where new technologies were at first seen as a potential devaluation of doctors' skills, e.g. CT scans, but turned out to be valuable enablers of clinical practice (Hindocha & Badea, 2022). In this regard, the use of MAI technologies could boost the role of doctors. An enhancement of their medical expertise could also potentially increase patients' trust.

But as we have already discussed, MAI technologies are not simply another, more sophisticated tool. They differ greatly from past technological innovations in that they fundamentally shape the epistemologies, decisions, and actions of doctors. Furthermore, MAI technologies may act as autonomous agents, which is also a unique feature that separates them from past innovations. In addition, the use of MAI technologies requires a new set of skills as well as enabling technological and institutional structures. We are faced with the problem of integrating a potentially disruptive technology into existing practices and structures. This means realizing the transformative potential of MAI while ensuring that it fits well with the workflow and the clinical reality of doctors.

Hence, being an enhanced practitioner requires several factors. First, medical education has to focus on MAI technologies on all levels, from the teaching of medical students to the advanced training of doctors. This will require an enormous effort by different stakeholders, such as educational facilities (universities, medical schools) and medical associations. Second, we have to consider institutional and infrastructural factors. Medical institutions will have to provide the best available technical infrastructure in terms of software and hardware as well as technical support. But this is not just a matter to be addressed by single institutions. It requires a concerted effort to ensure system compatibility between institutions. Interoperability and data exchange are of the utmost importance to enable doctors to become enhanced practitioners. Isolated solutions may be an obstacle to doctors acting as enhanced practitioners and thus also negatively affect patient outcomes. Third, the role of enhanced practitioner also requires doctors to adapt to this new era in medicine.
Since constant change in terms of technical advances, scientific progress, and structural transformations has been a defining feature of modern medicine, adapting to new circumstances is not a new phenomenon for doctors. It can be expected that doctors will once again be willing and able to adapt. However, enabling this
adaptation requires the aforementioned measures in education as well as institutional and infrastructural measures. Doctors cannot be expected to act as enhanced practitioners without the opportunity to obtain the required knowledge or without the necessary structures.

Another possible outcome is that the role of doctors may change from the medical expert with the sole authority to make autonomous decisions to a link between MAI systems and patients (Triberti et al., 2020). I call this the doctor as mediator. This could be an ambiguous role. On the one hand, it could mean that the tasks and responsibilities of doctors simply shift. Outsourcing some tasks to MAI applications, such as data collection and processing, might result in adding new ones, for example explaining algorithmic outcomes and investing more time in building the therapeutic relationship. This would imply the aforementioned shift from applied scientist to medical humanist, which is what some authors envision as the ideal role (Topol, 2019). In this setting, doctors would be mainly responsible for the human touch, meaning the therapeutic relationship and interpersonal communication (Ahuja, 2019). This role has some severe downsides. First, it implies that doctors should serve as "guardians of humanity" who protect humaneness in a heavily technicized setting (Rubeis, 2021b). This concept shifts moral responsibilities from those who design technology and those who decide how to implement it to those who apply it. MAI manufacturers and policy makers or medical institutions could design and implement MAI for reasons of efficiency and profitability without paying too much attention to ethical implications, since this is what doctors are for. In other words, the job of doctors would then be to compensate for the ethical collateral damage of MAI. This is contrary to the aforementioned approach of ethics by design, where ethical implications are already addressed during the technological development process. Another issue with the role of doctors as mediators is that patients could lose trust. To a large extent, trust depends on the medical expertise and authority of doctors (see Sect. 4.2.3). If patients view doctors simply as translators and communicators of algorithmic decisions, this may undermine their belief in the skills of doctors as well as their authority in making decisions.

I consider the third role the most unlikely and undesirable one: the doctor as supervisor. This role would become a reality if MAI replaced doctors on a large scale. In such a setting, MAI applications would conduct most medical tasks, from exams to data processing, from patient communication to surgery. Patients would interact directly with machines without the need for a mediator. The task of doctors would then be to oversee MAI in clinical processes and to make sure that it works properly and safely. This is a future that almost no one envisions, although some see it as a possible outcome. The doctor as supervisor could particularly become a reality if the focus of implementing MAI technologies is primarily on outcome and performance (Aquino et al., 2023). If accuracy, speed, cost reduction, and efficiency are the main motives for using MAI in clinical practice, then it will be difficult to argue against a full automatization and replacement of human doctors. The result could be a form of machine paternalism that undermines both the autonomy of doctors and patients (Diaz Milian & Bhattacharyya, 2023).
6.1.6 The Role of Patients
When it comes to the role of patients, we find similarly high hopes. Some commentators believe that MAI technologies will completely transform the status of patients within healthcare. One reason for this optimism is the paradigm of personalized medicine as the focus of developing and implementing MAI. As discussed before, the P4 approach could be a way to realize a patient-centered medicine that integrates individual needs and preferences into the treatment process (see Sect. 3.1). When compared with the cohort medicine that has dominated the last century, personalized medicine bears the potential to regard patients not as cases or statistical variables, but as individuals. This would in itself imply a boost to the patient role. The key concept here is patient-centered care. As discussed before, true patient-centeredness is only possible when patient preferences, values, and motivations are considered in addition to quantifiable health data.

Furthermore, the participatory aspect implies an empowerment of patients, according to some commentators. The fact that patients increasingly use smart health technologies and directly interact with MAI applications allows them to take health matters into their own hands (Topol, 2015, 2019). mHealth and IoT devices provide the opportunity to control one's own health data and take an active part in the treatment process. Hence, MAI technologies could enable patients to make better-informed decisions by providing a better evidence base. In some cases, doctors may use MAI for communicating health information in a more comprehensible and effective way, e.g. through the visualization of predictive models. An important enabler for improving patient education could be online communities, where patients share and discuss their health data and narratives (Vainauskienė & Vaitkienė, 2022). Besides information, some MAI technologies can be used for self-management and advising the patient with regard to performing diagnostic or even therapeutic tasks, such as routine tests or behavioral change. This is especially the case with chronic diseases that are linked to behavioral aspects. The prime example in this regard is cardiovascular disease. Some authors speak of a paradigm shift with regard to the patient role in this context (Barrett et al., 2019). Direct patient engagement could go beyond educating patients about symptom detection and behavior change by supporting patients through monitoring and sensor technology. This could make it easier and faster to detect changes in the clinical status and to respond adequately. Combining remote patient management and patient self-management could thus enable personalized treatment, reduce health costs, and empower the patient.

Hence, the role of an empowered patient is one possible outcome. In this scenario, MAI technologies could directly or indirectly boost patient autonomy by enabling better-informed decision-making and giving patients control over their own health. This role is the focus of the expectations surrounding MAI. Although the empowerment of patient autonomy and the focus on self-directed action are positive developments, there are also certain risks. One could argue that greater autonomy also means greater responsibility. Self-management and active engagement in the treatment process through MAI technologies could become
a responsibility of patients, not just an option. This responsibilization may be a manifestation of a health agenda rather than an attempt to empower patient autonomy (Lupton, 2013). It may transform the role of the patient in terms of rights and responsibilities into a digital health citizenship, where each individual is obliged to take care of and manage their own health by using MAI applications (Petrakaki et al., 2021). The aspect of citizenship results from the fact that taking care of one's own health is linked to taking care of others and the community. For example, using MAI applications to track one's weight and fitness level and prevent behavior-related disease may become a responsibility individuals owe to the community of health insurance payers.

A second role that could possibly emerge in a MAI setting is the data double. Creating data doubles is a basic requirement of many MAI technologies. This implies disassembling the patient into data packages, defined by relevant variables and class labels. Datafication of the patient is the prerequisite for personalized medicine, allowing treatment to be tailored to the individual characteristics of the patient. As we have already discussed, the main risk here is a reductionist approach that focuses only on the data itself, without contextualizing it with the social determinants that shape the health of the individual. A personalized medicine in the true sense can only be achieved when the epistemological limits of the data focus are considered. Avoiding digital positivism by interpreting data in the light of the overall life situation of the patient, their personal, socio-demographic, and socio-economic factors, is key here. In the simplest form, one could state that patients are more than their data. This might sound trivial, but given the dangers of reductionism and digital positivism, it is a truism that should guide clinical practice. It is therefore a contradiction to promote deep empathy and patient empowerment while at the same time claiming that MAI should be used for "digitizing the medical essence of a human being" (Topol, 2019, 25). This cannot and should not be the aim of MAI, since the essence of a human being, whatever that is supposed to be, does not lie in their data. This is due to the simple reason that data is a social construct. What counts as data, and especially as relevant data, is a result of human decisions and deliberate selection. This is the fundamental epistemic limitation of big data approaches and AI technologies. Hence, data does not speak for itself, but has to be contextualized and interpreted in the light of the bigger picture, which in medicine means the non-quantifiable factors that shape the situation of the individual patient.

But MAI technologies may also push the boundaries of the medical domain and with them blur the lines between patients and healthy individuals. Another perspective is therefore that individuals may emancipate themselves from the status of patients altogether. A new role may emerge that transcends the traditional role of the vulnerable individual seeking help from medical experts. The preventive focus of MAI technologies might contribute to this: Healthy individuals could increasingly use MAI technologies to measure their vital functions and other relevant health data in order to maintain a healthy lifestyle. This is a development that we have witnessed over the last decade and a half.
The quantified-self movement emerged with the widespread dissemination of smart mobile technologies such as smartphones and smart wearables (Lupton, 2016, 2017). The motivations for using mobile MAI
applications for self-tracking are manifold and vary from self-entertainment and self-association to a serious interest in controlling one's own health status (Feng et al., 2021). This role has been called the emancipated consumer (Topol, 2015, 2019). It implies a new relationship in healthcare that no longer centers around the dependency of a vulnerable individual on the expertise of medical professionals. Rather, the emancipated consumer is an informed individual who uses MAI technologies to assess the best choices and uses healthcare services accordingly. Doctors cater to the demands of emancipated consumers and act as health coaches. This implies a new type of autonomy that is not tied to the patient status, but to the individual's role as customer. The focus here is on the self-directed choices of individuals who shape their lives in an entrepreneurial manner (Schüll, 2016).

However, this vision of consumerist individuality might not amount to a thorough empowerment of autonomy. The big data approach may channel the attention and decision-making of the emancipated consumer in specific directions, thus creating a digital hypernudge (Yeung, 2017). Classical nudging implies manipulation in terms of presenting choices in a way that makes consumers likely to prefer one over the other (Thaler & Sunstein, 2008). Hypernudging reshapes the whole way individuals perceive and process information by means of big data analytics. These technologies imply much more sophisticated techniques for designing choice architectures that predict and shape behavior. Hence, the design and use of MAI technologies might create an image of empowerment while manipulating users into making the "right" choice. The purposes are manifold, from the financial interests of commercial agents to state-initiated health agendas such as cost savings in the healthcare sector.
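The difference between a classical nudge and a hypernudge can be made tangible with a toy sketch. In the classical case, one fixed choice architecture is presented to everyone; in the hypernudged case, the architecture itself is recomputed per user from tracked behavioral data. Everything below, the options, the user profile, and the uptake model, is invented for illustration.

```python
# Toy contrast between a classical nudge and a data-driven 'hypernudge'
# (Yeung, 2017). Options, profiles, and the uptake model are invented.

PLANS = ["basic_checkup", "premium_screening", "wellness_subscription"]

def classical_nudge(plans):
    # One fixed choice architecture for everyone: the promoted option first.
    return ["premium_screening"] + [p for p in plans if p != "premium_screening"]

def hypernudge(plans, user_profile):
    # The choice architecture is recomputed per user from tracked behavior,
    # so each individual sees the ordering they are most likely to accept.
    def predicted_uptake(plan):
        base = {"basic_checkup": 0.5, "premium_screening": 0.3,
                "wellness_subscription": 0.2}[plan]
        if plan == "premium_screening":
            # Invented adjustment: anxious users are predicted to accept
            # screening offers more readily, and the ordering exploits that.
            base += 0.4 * user_profile["health_anxiety"]
        return base
    return sorted(plans, key=predicted_uptake, reverse=True)

print(classical_nudge(PLANS))
print(hypernudge(PLANS, {"health_anxiety": 0.9}))  # screening pushed first
print(hypernudge(PLANS, {"health_anxiety": 0.1}))  # another user, another 'default'
```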
6.1.7 The Role of MAI
I have already established the status of some MAI systems as agents. According to this view, possessing a mental state is not a necessary criterion that defines a moral agent. Computer systems possess a certain kind of efficacy that makes them relevant elements of moral situations. But the agency of computer systems has to be linked to the intentionality and agency of the humans who design and use them. Although computer systems might not be moral agents, they can be considered moral entities (Johnson, 2006). I follow this view since it is best suited for the medical context. I consider some MAI technologies as artificial agents, i.e. systems with a certain degree of autonomous decision-making and/or action. Autonomous in this respect means that the system is able to perform these tasks without direct programming or direct intervention by humans. This is a very broad definition and will therefore apply to a broad range of MAI applications, including robots as embodied artificial agents (Gunkel, 2018). However, the ethical evaluation of each application depends on the degree of autonomy as well as the context and purpose of use. An EHR that includes certain processes of data evaluation and processing is not an artificial agent. An algorithm
that classifies patients according to risk profiles and assigns them access to certain healthcare services or decides on reimbursements has to be considered an artificial agent. Within the therapeutic relationship, a CDSS can be an artificial agent, depending on its level of autonomy and the purpose of its use. The important aspect here is that artificial agents cannot be detached from their interactions with humans. They are embedded in social practices and relationships that shape their field of application as well as their functioning.

This is an important definition for several reasons. First, it allows us to attribute responsibility and liability in the above-mentioned sense. If we recognize that MAI is an artificial agent, we accept that an instrumentalist view is insufficient for attributing responsibility. Such a view acknowledges that responsibility has to be ascribed according to the involvement of human doctors as those who choose the technology and make the final decision as well as to the nature of the negative outcomes. Acknowledging that MAI systems have agency within the network of individual and institutional responsibilities as well as legal liabilities helps us to attribute responsibilities more accurately than by simply blaming doctors or manufacturers categorically. Second, defining MAI as an artificial agent also allows us to decide what role the technologies should play in the medical context. It helps to assess whether we want it to be a support for doctors or an entity that makes decisions without human control. We can only discuss this question adequately if we acknowledge that we are dealing neither with a simple tool nor with an entity that possesses the same moral agency as humans. Third, by defining MAI technologies as artificial agents we can assess their role within the therapeutic relationship. It helps us to better understand the specific dynamic they bring to this relationship as well as to decide what role we want them to play.

In accordance with the roles of doctors within the therapeutic relationship, we can also identify different roles of artificial agents. The first role is that of an enabler. By providing the best available evidence for decision-making, by supporting dangerous or difficult activities such as surgical practices, and by performing repetitive, mechanistic, and time-consuming activities especially in administration, artificial agents may enable a more person-centered medical practice. This could fulfill the promise of deep empathy, which claims that MAI technologies can free doctors from tasks that are not directly patient-related and thus provide them with the time required for building a meaningful therapeutic relationship. This is surely the most desirable role from the point of view of patient-centered care. It not only has the potential to optimize clinical outcomes and reduce costs, but also to enable an encounter that centers around the individual needs, resources, beliefs, and values of the patient. Realizing the role of MAI as enabler requires defining this as an explicit goal of technology design, implementation, and use. The benefits just described will not arise automatically from simply implementing MAI technologies. Enabling deep empathy can only succeed if it is defined as the crucial desirable outcome. This might imply that other goals, such as reducing human contact and saving personnel costs, should be abandoned.

A second possible role is the mediator. This role is more ambiguous, since it could imply considerable risks.
As a mediator, an artificial agent may restructure the
therapeutic relationship through technical means of data collection, representation, or transfer as well as communication. This could have tremendous advantages in terms of a more inclusive healthcare that provides better and easier access, e.g. through telehealth and the use of mHealth and IoT. At the same time, the role of mediator has the inherent potential to shape the encounter between doctors and patients through the focus on quantifiable data (Mittelstadt, 2021). By providing a new epistemology of the patient that bears the risks of digital positivism, it might cause the unique health situation of the patient and their individuality to get lost. This may for example be the case when doctors function as mediators, interpreting decisions by CDSS. Reductionism and bias, the main risks of digital positivism, and the resulting uniqueness neglect may undermine the very goals of personalization and deep empathy.

A third role would be artificial agents as substitutes for human doctors. This could be desirable in the light of process optimization and cost-effectiveness. Human labor could be replaced in diagnosis, e.g. interpreting X-rays, in decision-making, e.g. selecting the best-suited therapy option, or in performing interventions, such as surgical practices. However, the interaction with artificial agents as substitutes poses multiple ethical risks, from safety concerns to liability issues. Regarding the therapeutic relationship, it would severely affect trust and the autonomy of patients. The fact that only unidirectional relationships are possible with non-human agents implies that the crucial enabling factor for trusting in clinical decisions and making self-determined decisions would be missing. That does not mean that artificial agents cannot serve as substitutes for any form of human labor in the clinical context. It means that there are strong reasons against designing, implementing, and using artificial agents for reducing human contact and reducing the therapeutic relationship to a mere human-machine interaction. Although technically possible, this is simply not the right purpose of artificial agents. Hence, the role of artificial agents as substitutes should not be defined as a goal.
6.1.8 Models of a MAI-Enhanced Therapeutic Relationship
We have seen the various ways in which MAI may affect the therapeutic relationship. One question remains unanswered: What will be the model for this relationship in a MAI setting? In Sect. 4.2.2, I have discussed the models for the therapeutic relationship introduced by Emanuel and Emanuel: the paternalistic, informative, interpretative, and deliberative models (Emanuel & Emanuel, 1992). MAI could either enhance or undermine all of these models.

The widespread use of MAI technologies could lead to a new form of paternalism, mainly due to the possibility of ubiquitous and permanent surveillance and monitoring. Every aspect of an individual's life could become the object of the clinical gaze. The medical domain could expand further and further into the privacy of patients. This could be a tool either for governments to pursue health agendas or for commercial agents to pursue financial interests. Either way, a form of neo-paternalism
could emerge that undermines patient autonomy. Whether this scenario will occur depends on several factors.

First, we have to decide upon the role of MAI systems within the clinical setting. That means that we have to set the level of automatization we want to allow. In a fully automated setting, it is difficult to see how patient autonomy and shared decision-making could be ensured. Therefore, in order to avoid neo-paternalism, we should follow the principle: automatize processes, not decisions (a minimal sketch of this principle follows at the end of this section). In all medical contexts where decisions directly affect patients, a human should be in the loop. This fits with existing ethics codes in medicine. Doctors or medical institutions have to take responsibility for their actions. They can only do that if they possess knowledge about how a decision was made and can control the decision-making process. Hence, automatization should only go so far as to not conflict with this requirement. That also means that algorithmic decisions need some level of transparency and explicability, as discussed in Sect. 5.2.3. Algorithmic decisions should support doctors' decision-making, but not be the final word.

Second, the purposes and aims of implementing and using MAI technologies have to be made clear and transparent. This is to prevent the technologies from being used to realize hidden government health agendas or the financial interests of commercial agents. That does not mean that either government health strategies or financial benefits for corporations are illegitimate in principle. It only means that such aims should be openly communicated and their legitimacy should be a matter of public debate. Transparency and public participation could thus safeguard patient autonomy.

MAI could also lead to a more participatory medicine and thus a more deliberative therapeutic relationship. Patients could actively engage in collecting their health data through mHealth technologies and IoT. Better access to health-related information and new ways of decision-making, e.g. deciding on which data to share for what purpose, may empower patient autonomy. MAI technologies may also enable doctors to better communicate with their patients about health-related outcomes, but also about values, needs, and motivations. This may be a result of telehealth applications for data collection and processing or of a more efficient data management and administration, which allows doctors to spend more time with patients. In order to enable these positive outcomes, accompanying measures are required. MAI technologies have to be designed in a way that really empowers patient autonomy instead of hoping for empowerment as a side effect. The prerequisites here are data security and privacy protection, user-friendliness, and the various measures for preventing bias discussed in Sect. 5.2.2. It is the responsibility of doctors to assess whether a certain application fits with the individual needs and resources of the patient. In some cases, patients might not be able to perform self-management due to a lack of digital health literacy or other factors. In such cases, doctors must find alternative treatment options that do not rely on a high level of patient engagement. The principle of relational autonomy is key here, meaning that doctors, not technologies, are the enablers of patient autonomy. Medical institutions must define deep empathy as a crucial goal. Should MAI technologies really free
doctors from certain time-consuming duties, medical institutions must enable doctors to spend the newly found time with their patients. This can only happen if medical institutions do not use this time to assign more duties to doctors or push more patients through the system.
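As announced above, here is a minimal sketch of what "automatize processes, not decisions" could mean at the level of system design: the system may compute, document, and propose, but any action that directly affects the patient requires explicit clinician approval. All names, fields, and example values are hypothetical.

```python
# Minimal sketch of the principle "automatize processes, not decisions".
# All identifiers and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DoseProposal:
    patient_id: str
    drug: str
    current_dose_mg: float
    proposed_dose_mg: float
    rationale: str  # explanation surfaced to the clinician (explicability)

def apply_dose_change(proposal: DoseProposal, clinician_approved: bool) -> float:
    """Automated process: compute and document. Human decision: approve."""
    if not clinician_approved:
        # No silent automation of the decision itself: the proposal is
        # merely recorded and the current dose remains in effect.
        return proposal.current_dose_mg
    return proposal.proposed_dose_mg

proposal = DoseProposal(
    "pt-001", "drug_x", 50.0, 65.0,
    rationale="renal function improved; model suggests uptitration",
)
new_dose = apply_dose_change(proposal, clinician_approved=False)
print(new_dose)  # 50.0 -- nothing changes without the human in the loop
```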
6.1.9 What About Democratization?
Democratization of medicine through MAI technologies is a major topic within the ethical debate (Rubeis et al., 2022). Some authors envision a future in which smart technologies enable patients to participate actively in the treatment process, take control over their own data, and make self-determined decisions (Steinhubl & Topol, 2018; Topol, 2019). mHealth applications play a crucial role in this scenario, since mobile devices enable patients to generate, collect, and transmit individual health data on their own (Mulvenna et al., 2021). Doctors no longer possess a monopoly on knowledge and are no longer those who control every aspect of data acquisition. Hence, as some commentators claim, MAI technologies may help to overcome paternalism and level the power asymmetries within the therapeutic relationship. A further aspect of democratization is equity. MAI technologies, especially mHealth applications, can reduce access barriers, thus contributing to the inclusion of underserved and marginalized groups in healthcare (Weissglass, 2022).

Several issues arise with this approach. It is again an example of a solutionist mindset, according to which the implementation of technology alone brings with it a transformation for the better. However, there is no such automatism. As we have seen, whether the therapeutic relationship changes for better or for worse depends on several factors. There are technology-related factors such as transparency and explainability of MAI technologies, safety, and quality control. There are also institutional factors such as providing the appropriate data architecture and infrastructure as well as enabling healthcare professionals to engage with MAI technologies. And finally, there are regulatory requirements for ensuring privacy and data security as well as responsibility and liability. Hence, a democratization of the therapeutic relationship may only occur if the use of MAI technologies is embedded in the appropriate practices, structures, and regulations. Relying simply on the technology ignores the risk of neo-paternalism as discussed above.

Another issue with MAI as a perspective for democratizing medicine is the unclear meaning of the term. What is a democratic therapeutic relationship supposed to look like? If we look closer at the scenarios proposed by the advocates of MAI-as-democratization, we find that they mostly focus on some form of consumerist individualism (Rubeis et al., 2022). They envision future medicine as a kind of health market where emancipated consumers can choose the health services they prefer. This might be a neoliberal definition of democracy, but it is highly deficient, since it does not include any form of rights for data subjects that would allow them to control big data collectors and big data utilizers.
Democratizing medicine might be a valuable goal, but it cannot be achieved by simply implementing MAI technologies. On the contrary, making meaningful use of MAI in terms of personalization and patient empowerment requires establishing safeguards against paternalistic uses. Instead of regarding MAI as the key to a more democratic medicine, one therefore has to ask how we can create frameworks for practices that allow a democratic and democratizing use of MAI. The answer is multifaceted and requires different measures on different levels.

First, strengthening the therapeutic relationship. This requires defining the enhanced practitioner and the empowered patient as the ideal roles for doctors and patients in a MAI-enhanced medicine. Regulations surrounding MAI should be designed in a way that enables and empowers these roles.

Second, a clear attribution of responsibility and liability. MAI systems that include a certain level of autonomous functioning should be considered artificial agents whose outcomes are morally relevant, but linked to individual or institutional responsibility. That means that the use of MAI implies moral duties and legal considerations that go beyond those cases where technical artifacts are mere tools. At the same time, responsibility and liability have to be ascribed to human agents, according to the context of use and the specific actions that are being taken.

Third, a clear and transparent definition of the purpose of implementing and using MAI technologies. It must be transparent to all actors involved what purpose a MAI system is supposed to serve. Only when the purpose is clear can one assess the ethical implications and outcomes of using the technology. If, for example, a CDSS is supposed to improve clinical outcomes by personalizing diagnostic processes, then regulations have to be put in place that prevent uses that are not in the patient's interest, such as sorting them into risk categories for health insurance providers. This is primarily meant to prevent MAI technologies from becoming a mere tool for commercial agents to pursue financial interests or for governments to push health agendas.
6.2 The Nurse-Patient Relationship
There is an almost universal agreement that MAI technologies will have a huge impact on nursing practices, the organization of care work, and the roles of nursing professionals. Various MAI technologies are already in use in different nursing fields. Predictive analytics and CDSS support hospital nurses in detecting and predicting health status changes (Buchanan et al., 2020). Smart sensor technologies perform patient observation in acute psychiatric inpatient care (Barrera et al., 2020). MAI-supported telenursing applications assist nurses in monitoring patients in hospitals as well as in outpatient care (Kuziemsky et al., 2019). AAL systems integrate smart home technologies with sensor technologies and enable better risk prediction and intervention in cases of health emergencies (Sapci & Sapci, 2019). MAI-powered robots assist nurses either in physical activities such as lifting or washing the patient or in psychosocial interactions (Abdi et al., 2018; Archibald & Barnard, 2018).
Virtual assistants like chatbots support psychiatric healthcare professionals in delivering outpatient nursing services (Vaidyam et al., 2019). Three types of MAI-powered data analytics will be especially important for nursing (McGrow, 2019). Clinical analytics for personalized treatment will transform diagnosis, risk assessment, and disease management. Operational analytics will improve the efficiency and effectiveness of the care process, including documentation and reimbursement claims. Behavioral analytics, including data on patient engagement or readmission, will contribute to both clinical outcomes and efficiency. Overall, the use of MAI could improve care quality, optimize outcomes, and reduce the cost of healthcare (Sensmeier, 2015). The relationship between nurses and patients differs from the therapeutic relationship in many respects, which makes a separate discussion necessary. Traditionally, nursing practices have been identified with humanistic goals (Coghlan, 2022). Some authors claim that whereas clinical practices by doctors often focus on objective data, care practices by nurses are more holistic and patient-centered. Person-centered care is of high relevance in nursing, focussing on engaging with the patient, responding to needs, and forming relationships (Buchanan et al., 2020). Some authors expect that MAI technologies could leverage holistic care through improved prediction and prevention and better care management (Watson et al., 2020). MAI technologies could further contribute to putting the person at the center of care practices by better integrating various patient data, thus making nursing practices safer and more effective and improving overall outcomes (Delaney & Simpson, 2017). The aim is to use a big data approach for "telling the patient's story" and building better evidence out of nursing practice (Sensmeier, 2015). Most of these arguments are quite similar to the usual debate on personalized medicine and the therapeutic relationship. However, given the emphasis on humanistic goals and holistic approaches in nursing, there are also ethical aspects that affect nursing differently than the practice of doctors. These are mainly the specific smart data practices in nursing, which I call the MAI-enhanced nursing gaze, and the human-robot interaction.
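Before turning to these ethical aspects, a minimal sketch may help make the predictive side of such systems concrete. The following toy example is purely illustrative: the vital-sign features, training data, and alert threshold are invented for demonstration and do not correspond to any system cited above.

```python
# Illustrative sketch of an early-warning classifier of the kind predictive
# nursing analytics might use. All values are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [heart_rate, respiratory_rate, systolic_bp, spo2]
X = np.array([
    [72, 14, 120, 98],   # stable
    [80, 16, 115, 97],   # stable
    [110, 24, 95, 91],   # deteriorating
    [118, 28, 88, 89],   # deteriorating
])
y = np.array([0, 0, 1, 1])  # 1 = health status change within the next 24h

model = LogisticRegression().fit(X, y)

# A new set of vitals, e.g. streamed from bedside monitoring
new_obs = np.array([[105, 22, 100, 92]])
risk = model.predict_proba(new_obs)[0, 1]
if risk > 0.5:  # a real alert threshold would be set clinically, not arbitrarily
    print(f"Flag for nurse review (risk={risk:.2f})")
```

Even this toy version shows where the ethical questions enter: which features count, who sets the threshold, and what the nurse is expected to do with the flag.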
6.2.1 The MAI-Enhanced Nursing Gaze
I have already described the clinical gaze as introduced by Foucault and its ethical implications (Sect. 5.2.1). Whereas the clinical gaze is a concept that serves to demonstrate the negative aspects of medical reductionism and the pathologization of individuals, the nursing gaze offers a kind of counterweight. As some authors claim, the main characteristic of the nursing gaze is its stereoscopic vision (Liaschenko, 1994). The observational practices of nurses integrate the clinical and individual aspects of a patient. Using Foucault's terms, one could say that nurses are trained to view the patient as body and subject. This is due to the nature of nursing practices, which require a more intense interpersonal interaction and a focus on forming meaningful relationships (Armstrong, 1983).
Hence, nursing knowledge consists of foreground knowledge, which pertains to clinical facts, and background knowledge, which focusses on the individuality of the patient apart from biomedical aspects (May, 1992). The specific way of observing patients and this genuine nursing knowledge shape nursing practices. One could therefore say that the nursing gaze integrates an ontology of the patient with an ontology of the practice (Ellefsen et al., 2007). The former implies the specific way in which nurses create normality based on their holistic observations of biomedical facts, behaviors, and the individuality of the patient, meaning their individual needs, responses, and experiences. The latter refers to clinical expectations and the actions required to promote specific outcomes. An important factor in this twofold ontology is concerned observation, a mode of encountering the patient that focusses on their vulnerability (Nortvedt, 1998). Based on concerned observation, the stereoscopic vision of nurses thus provides a holistic view that allows them to integrate the individuality of the patient into their practices. Therefore, the nursing gaze can have a humanizing effect that goes beyond the clinical gaze and may even compensate for the reductionist effects of the latter (Ellefsen et al., 2007). The use of MAI technologies in nursing might facilitate smart data practices that reshape the nursing gaze and with it the patient encounter. As outlined above, some commentators expect an enhancement of person-centered care through a better use of individual health data. A MAI-enhanced nursing gaze could thus facilitate personalization and improve clinical outcomes. As with other smart data practices, however, the MAI-enhanced nursing gaze might also be fraught with the risks of digital positivism. As a result, an increasing focus on quantifiable, standardized data might undermine the stereoscopic vision of nurses (Rubeis, 2023). MAI technologies firstly affect the ontology of the patient by disassembling the individual into data packages and reassembling them into models. As we have seen several times, this process of datafication implies two major risks, reductionism and bias. Both might hinder a holistic perspective that integrates the individuality of the patient. Reductionism may over-emphasize the clinical, quantifiable aspects and thus obscure individual needs, preferences, and values. Bias may lead to defining the patient as a member of a specific group and potentially marginalize them. Digital positivism might shape the ontology of practice in a way that the outcomes it promotes do not fit the patient's individuality. A digital positivist use of MAI technologies in nursing could therefore hinder concerned observation and undermine the building of meaningful relationships that enable a holistic and humanistic nursing practice. Besides interactions with patients, smart data practices may also have a standardizing effect on other nursing practices. An example here is the EHR, which has a profound impact on nursing. Some commentators have stated that the use of the EHR in nursing may enhance various practices, for instance by improving information quality in terms of completeness, consistency, and accuracy of patient data (Häyrinen et al., 2008). Another crucial aspect is decision effectiveness, meaning that the improved information quality may contribute to making faster and better-informed decisions. Furthermore, the EHR may have organizational impacts such as improved communication and collaboration between nurses and doctors or between different care sectors.
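To make the idea of disassembling and reassembling concrete, consider a minimal sketch of datafication; the schema, field names, and values below are hypothetical and chosen only to illustrate what a fixed data format can and cannot capture.

```python
# Hypothetical illustration of datafication: a nursing observation is reduced
# to the fields a model schema defines; whatever does not fit the schema
# (tone, context, the patient's own account) drops out of the model's view.
from dataclasses import dataclass

@dataclass
class EncounterRecord:
    pain_score: int      # 0-10 numeric rating scale
    mobility_level: int  # coded category, e.g. 0 = bedbound .. 3 = independent
    fall_risk: bool

observation = {
    "pain_score": 6,
    "mobility_level": 1,
    "fall_risk": True,
    # Not representable in the schema, hence lost to downstream analytics:
    "narrative": "Anxious about discharge; daughter visiting less often.",
}

record = EncounterRecord(observation["pain_score"],
                         observation["mobility_level"],
                         observation["fall_risk"])
features = [record.pain_score, record.mobility_level, int(record.fall_risk)]
print(features)  # [6, 1, 1] -- the "reassembled" patient as a model sees it
```

The narrative line is exactly the kind of background knowledge the stereoscopic vision depends on, and it is the first thing a fixed schema discards.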
As with all MAI-related technologies, the key is the standardization of patient data and its translation into digital data formats. In the context of nursing, it is also important to standardize the terminology nurses use to describe patient features or nursing practices. Medical terminology used by doctors is already highly standardized, since classification systems like the ICD have long been in place. The standardization of nursing terminology is more recent, which means that more effort is required to incorporate it into the EHR (Westra et al., 2008). As some commentators argue, standardization in nursing might also have negative effects, since it implies a reductionist approach (Slemon, 2018). Reducing the complexity of nursing practices as well as patient experiences might therefore diminish nurses' critical thinking. One might add that standardization could also undermine the stereoscopic vision linked to the nursing gaze by focusing on quantifiable features or trying to quantify genuinely qualitative phenomena like patient experience. Another effect might be the disciplining of nurses through the EHR (Dillard-Wright, 2019). An advanced EHR is not just a documenting system, but includes features based on machine learning techniques that support decision-making and action. The EHR might for example give prompts and commands to indicate that certain actions need to be performed, e.g. feeding or washing the patient. Another feature of an advanced EHR is monitoring and registering the practices of nurses. This data can then be used for analyzing and assessing nurses' performance in a quantifiable way. One could therefore argue that the EHR might be a tool for monitoring, controlling, and manipulating the behavior of nurses (Dillard-Wright, 2019). The goal here might be a better management of nursing practices in terms of efficacy and cost efficiency. The focus of nursing practice would shift from acknowledging complex relations and health narratives to automated data entry and standardized tasks dictated by the EHR. By internalizing standardized terminology and standardized practices, nurses may lose their ability to respond to the patient's individuality. Their behavior might contribute to disciplining patients as well. Patients would have to adapt to the standards dictated by the EHR for the sake of efficacy and cost efficiency. Hence, disciplining nurses through standardized EHR practices would also lead to disciplining patients, thus undermining patient-centered care and the goal of personalization. In conclusion, we can say that smart data practices in nursing carry the same risks as the clinical practices of doctors. The difference is that nursing practices are believed to be more focused on humanistic goals and a holistic understanding of patient care. Since patient-centeredness plays a crucial role in nursing, the possible negative outcomes of MAI technologies are all the more significant. The main issue is a reductionist approach as a consequence of digital positivism. As with the smart data practices of doctors, similar strategies could be applied to prevent these negative effects. First, nursing knowledge has to be recognized as a genuine form of organizing information within the patient encounter. MAI technologies have to be designed in a way that allows the integration of this genuine nursing knowledge. Especially the stereoscopic vision of nurses is of relevance here.
One strategy could therefore be to make use of nursing knowledge by including nursing experts in the design, implementation, and optimization of MAI technologies (Sensmeier, 2015). This might imply the participation of nurse researchers or practitioners who have expertise in data science in big data initiatives and MAI development (Brennan & Bakken, 2015). Besides standardizing knowledge on nursing practices, nurses could also advocate for the needs and resources of patients. In that way, nurses could contribute to enriching the big data approach in nursing by including their more holistic and patient-centered perspective, thus reducing the risk of reductionism.
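A minimal sketch may illustrate both the terminology standardization discussed above and the qualitative remainder that such nurse input would need to flag. The mapping and code values are invented placeholders, not real codes from the ICD, SNOMED CT, or any nursing classification.

```python
# Hypothetical sketch of nursing terminology standardization for EHR use:
# free-text observations are mapped to standardized codes, and anything
# that resists coding is marked rather than silently dropped.
FREE_TEXT_TO_CODE = {
    "patient restless at night": "NURS-0101",       # placeholder: sleep disturbance
    "needs help walking to bathroom": "NURS-0207",  # placeholder: impaired mobility
    "skin red over sacrum": "NURS-0312",            # placeholder: pressure injury risk
}

def encode_observation(note: str) -> str:
    """Map a free-text nursing observation to a standardized code,
    or mark it as unmappable -- the part standardization tends to lose."""
    return FREE_TEXT_TO_CODE.get(note.strip().lower(), "UNMAPPED")

print(encode_observation("Skin red over sacrum"))          # NURS-0312
print(encode_observation("Says she feels like a burden"))  # UNMAPPED
```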
6.2.2 Nursing Robots and the Human-Machine Interaction
Robots can be seen as the latest technological addition to an already highly technicized nursing practice (Grobbel et al., 2019). Especially aged care and long-term care are promising fields of application for nursing robotics. This is due to the fact that two social tendencies have a tremendous impact in these contexts (Vandemeulebroucke et al., 2018): first, the demographic shift, which implies that the percentage of older adults in the population is growing; second, the shortage of qualified personnel in the nursing sector as well as of other care givers. These two tendencies combined create a situation where the care needs of individuals may be underserved. According to a widespread consensus, the development and implementation of nursing robots could be a response to this situation. A prominent example is SAR, which provide or support emotional and cognitive services. In Sect. 3.3.1, I have already discussed some applications of SAR in nursing. As companion robots, SAR like the robot seal Paro are able to interact with humans by using sophisticated sensor technologies and machine learning algorithms to recognize visual, audio, or tactile stimuli and respond accordingly. This may improve mood or reduce feelings of isolation and loneliness (Abdi et al., 2018). SAR may also be tools for the cognitive training of patients with dementia (David et al., 2022). As social facilitators, SAR could be used to enable interaction between persons, e.g. residents of long-term care facilities (Abdi et al., 2018). Despite these potential benefits, one has to acknowledge that the evidence base for most outcomes is either thin or ambiguous. Although some studies suggest that SAR may mitigate loneliness in older adults, others find that this is not the case, and that SAR rather cause feelings of discomfort and deception (Berridge et al., 2023). In the following, I will take SAR as the use case for my ethical analysis. Following my approach as outlined above, I consider MAI-powered robots as artificial agents, i.e. technical artifacts that have agency and are not merely passive tools. They are moral entities in the sense that they possess some level of autonomy regarding decision-making and action. Their decisions and actions may have ethically relevant implications, which makes them an object of ethical evaluation within the nursing relationship. However, they are not moral subjects in the same way humans are. Therefore, we have to assess robots as non-human agents within the context of social practices and normative frameworks.
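It may help to see the mechanical structure of the stimulus-response behavior just described in a minimal sketch; the sensor fields, thresholds, and responses below are invented for illustration and correspond to no real product.

```python
# Hypothetical companion-robot control loop: classify an incoming stimulus
# and select a scripted response. Real systems use trained models over
# sensor streams; this sketch only shows the structure the ethical
# discussion below refers to.
RESPONSES = {
    "stroking_detected": "purr_and_turn_head",
    "voice_detected": "blink_and_chirp",
    "silence": "idle_motion",
}

def classify_stimulus(sensor_event: dict) -> str:
    # Stand-in for ML-based recognition of visual, audio, or tactile input
    if sensor_event.get("touch_pressure", 0) > 0.3:
        return "stroking_detected"
    if sensor_event.get("audio_level", 0) > 0.5:
        return "voice_detected"
    return "silence"

def respond(sensor_event: dict) -> str:
    return RESPONSES[classify_stimulus(sensor_event)]

print(respond({"touch_pressure": 0.7}))  # purr_and_turn_head
```

The point on which much of the following discussion turns is visible in the code itself: the response is selected, not felt.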
As moral entities, they are tethered to the decisions made by humans as well as to the purposes and goals defined by humans. Hence, analyzing care robots like other artificial agents has to consider the context of the specific interactions, goals, and norms that shape nursing practice. One important ethical aspect is the purpose for which care robots are promoted, i.e. their potential to reduce the care burden in society caused by the demographic shift and the shortage of care givers. As some argue, this viewpoint implies framing people in need of care as problems to be solved efficiently by technical means (Sharkey & Sharkey, 2012). This is again an example of a solutionist approach that aims to solve genuinely socio-political issues with technical fixes. As with solutionism in general, the same problems arise here. By defining humans and their specific needs as mere technical problems to be solved by technical means, a solutionist approach objectifies individuals. The main issue in the context of care robots is a dehumanization of care (Rubeis, 2020a). One could distinguish different forms of dehumanization here, namely substituting human contact, infantilization, and deception. We have already seen that the interpersonal relationship is crucial for nursing practices, which are often interpreted as genuinely humanistic and patient-centered. Substituting human contact with robot interaction therefore implies a loss of humanity, since it replaces interpersonal interactions. Being carried, washed, and fed by robots might cause patients to feel objectified, treated like mere things that are stored and organized in an effective manner (Sharkey & Sharkey, 2012). One crucial aspect in this regard is the inherent difference between interpersonal human relationships and human-machine interactions. Human relationships, and with them care relationships, depend on reciprocity and responsiveness (van Wynsberghe, 2022). Nursing relationships are bidirectional, meaning that patients are not passive recipients of care. Rather, good care implies responsiveness on behalf of the care giver to the individuality of the patient. This enables a reciprocal relationship in which patients can build affective bonds with care givers. As discussed above, machines lack the mental states that allow for reciprocity. A robot may respond in a mechanical fashion to some utterance or behavioral stimulus by the patient, but it does not have the capability of an affective response. Although the robot might be able to fake emotions, e.g. by modulating its audio output in the fashion of an agitated human voice, there are no genuine emotions behind it. The lack of reciprocity makes relations between humans and robots unidirectional (Scheutz, 2012): humans may project mental states, an inner life, onto the machine, but since it does not have any mental states, it cannot reciprocate on an affective level. Another crucial aspect that makes relations between humans and machines unidirectional is the lack of mutual vulnerability (Sparrow, 2016). As embodied beings, humans are fundamentally vulnerable and at risk of being harmed. Since this vulnerability affects all humans (although clearly not in the same manner), it constitutes a form of mutuality in the human encounter. The same cannot be said for human-machine interactions, since, although existing in an embodied form, machines cannot be harmed in the same way humans can.
Hence, substituting human contact through robot interactions can never have the same quality as human relations due to the lack of reciprocal, bidirectional relationships.
SAR often come in the shape of a cartoonish human likeness, such as Pepper, or of plush animals, such as Paro. Although these forms might make it easier to interact with robots, some commentators also see a risk of infantilizing patients (Vandemeulebroucke et al., 2018). Especially when the robot is toy-like in form and shape, feelings of being treated like a child might occur (Hung et al., 2019). The result might be embarrassment and an unwillingness to interact with the SAR. Especially in the context of dementia care, toy-like SAR could be regarded as an attempt to frame dementia as a second childhood (Sharkey & Sharkey, 2012). This implies a deficit-oriented approach to care and might lead to disempowerment and stigmatization of people with dementia. Some SAR are designed to mimic human-like or animal-like appearances as well as behaviors. We have already discussed Paro, the AI-powered robot seal. Another example is Pepper, a humanoid robot that is capable of performing some kinds of social interactions (Pandey & Gelin, 2018). Pepper has a vaguely humanoid face and a touchscreen that allows persons to interact with it. The robot is capable of analyzing the facial expressions and gestures of humans as manifestations of emotions and of responding accordingly. Pepper can engage in basic conversation, play music, and dance, at least in a way. It has been called a companion robot due to its capability of engaging in social interactions with humans and responding to their emotional states and behaviors. SAR like Paro and Pepper have been the objects of criticism that interprets these machines as a form of deception because they pretend to be a human being or an animal. There is disagreement on what exactly qualifies as deception in this context (Sharkey & Sharkey, 2021). Some authors argue that SAR are deceptive because they are capable of detecting human social gestures and responding with human-like behavior (Wallach & Allen, 2009). This strong interpretation of deception suggests that SAR deceive persons when they appear to have mental states or emotions (Matthias, 2015). Other commentators hold that deception requires the intention to make persons believe they are interacting with a human or an animal (Sorell & Draper, 2017). Humans may tend to anthropomorphize machines they interact with when these have a human-like appearance or show human-like behaviors, but this is not a case of deception. These authors argue that children interact with plush animals or dolls that trigger an emotional response in a similar way, although they are aware that these are not real animals or humans. Hence, deception requires an intention to produce false beliefs. However, there are strong reasons to set a lower threshold for deception here. The behavior of a SAR could easily trick a person into believing that it cares for them. The person would ascribe an emotional state to the machine, which it does not have, and thus succumb to a deception. This is all the more the case with vulnerable individuals, for example people with dementia or people who suffer from loneliness or mental health issues. Paro is a good example in this regard. Sorell and Draper (2017) point out that the developers of Paro did not intend to trick people into thinking they are interacting with a real seal. However, Paro gives the illusion of being sentient and affective by responding to stimuli in a way that feigns emotions such as sympathy.
Hence, persons are deceived not because they take Paro for a real seal, but because Paro appears like a sentient being that shows real affection for them (Sharkey & Sharkey, 2021). Even if we agree that SAR that appear human-like or animal-like and imitate human or animal behavior are deceptive, we have to ask whether this is morally relevant (Sharkey & Sharkey, 2021). One could argue that deception is not necessarily morally bad and that certain care scenarios might even make some form of deception acceptable (Coeckelbergh, 2016). For example, treating older adults with cognitive impairments as autonomous persons although they lack the ability for autonomous decision-making, as is often the case in care settings, could be seen as an accepted form of deception. Another example is that care givers do not always tell the whole truth in order to preserve the happiness of patients. Accordingly, deceptive robot behavior or appearance might be acceptable as long as it enables patient wellbeing. Although these examples of non-technical deception are taken from reality and such things do occur in care settings, it is highly questionable to use them as indicators of accepted or acceptable behavior. Whether this kind of deception is acceptable at all is at least open to debate. A response to this position could be to highlight the effects of deception on the autonomy and trust of patients. Deception could become an instrument for achieving certain goals without respecting the autonomy of patients as ends in themselves (Sparrow & Sparrow, 2006). Reciprocity is key here. Robots are not capable of reciprocity in the full sense, meaning that although they may respond to stimuli like human behavior, they lack the affective capabilities as well as the vulnerability to form reciprocal relationships. This becomes an issue in terms of deception when robots are designed to appear as if they would behave in a reciprocal manner (van Wynsberghe, 2022). In this case, feigned reciprocity may be a tool to make robots more acceptable. It could also be used for manipulating the behavior of patients towards outcomes that are in the interests of others, such as economic agents (Scheutz, 2012). One could imagine a scenario where stakeholders design or use SAR to fake reciprocity in order to trick patients into certain desired behaviors that make care more effective or cost-efficient without directly benefitting the patient, or even while decreasing their quality of life. Being deceived may also affect the trust of patients. In social interactions, we usually assume that others are in principle honest until proven otherwise. This attitude has been called truth default theory, which implies the presumption that "most communication is honest most of the time" (Levine, 2014, p. 2). This presumption makes humans vulnerable to deceit and thus to breaches of trust. Trust in human interactions and relationships depends in part on a specific kind of authenticity of behavior. When we trust another person, we expect that their behavior is an authentic expression of their attitude. This specific kind of authenticity is undermined when robots pretend to have mental states or emotions they are not capable of (Sweeney, 2023). The capability for deceit in this sense is especially high in human-like robots. One could call this dishonest anthropomorphism, referring to human-like robots designed to manipulate humans by pretending to possess human features (Leong & Selinger, 2019).
Dishonest anthropomorphism may include physical presence (body language, appearance), communication and behavior characteristics like voice, and features of emotional and intellectual manipulation, e.g. agency, "personality", or sensory manipulation. The latter point is especially important, since robots could not only pretend to possess features they do not have (like mental states or emotions), but also hide non-human features and skills they do possess, such as sensory perception or camera vision that is not visible to humans. An example would be a robot that lowers its gaze to signal that it does not watch the human it interacts with, but films them with a hidden camera or records other sensory data (Kaminski et al., 2017). One could therefore argue that the more anthropomorphic a robot is, the more human-like it appears, the bigger the gap between behavior and attitude, or between the outer manifestations and the inner workings of the machine. Hence, the potential breach of trust increases with the level of anthropomorphism. To some authors, it makes a difference for what purpose deception is used. Following this view, if deceiving patients by mimicking human-like behavior or faking emotional responses benefits them, it is not a breach of trust (Grodzinsky et al., 2015). This implies a distinction between benign and unethical intentions of deception as discussed above and is highly problematic. Even if one argues that some form of deception might be beneficial for patients and that nurses practice it already anyway, this view ignores the possible long-term effects on trust and cooperation (Sætra, 2021). If humans interacting with robots realize that they are being deceived, they could lose trust and be unwilling to interact with the robot any longer. Over time, this may evolve into distrust not only of the robot, but of the human care givers involved or the care institution. As a consequence, interpersonal as well as social trust could break down. One solution would be to reduce the anthropomorphism of SAR. As some commentators argue, the less human-like a robot is, the more potential it has to enrich social interactions and practices (Sandry, 2015). It should be transparent to humans at all times that they are interacting with a machine and nothing more. This position, however, stands in tension with the possible positive effects of human-like SAR.
6.2.2.1 Strategies
Nursing robots, like all non-human agents, cannot be separated from human agency. As outlined above, artificial agents are always tethered to human decisions, design, practices, and purposes. A robot by itself can never deceive anyone, but it can be a tool for deception. Its agency is bound to the decisions humans made in its design as well as regarding its implementation and use. If one accepts this premise, it becomes clear that whether a robot acts deceptively or trustworthily, and whether its actions are beneficial or harmful, depends on the choices humans make. This may be harder to grasp when dealing with embodied artificial agents, especially when they are anthropomorphic. Despite the human likeness and sophistication of these machines, one should not forget that the responsibility for their outcomes must be attributed to humans or, as described earlier, to institutions.
This is the case even when these MAI-powered machines may act unpredictably, since risking this uncertainty has also been a choice made by humans at some point (Sætra, 2021). A debate that focusses on the question of whether robots deceive or not is therefore misleading. The question should rather be: should we accept deception as a design choice or a possible use? And if so, under which conditions is deception by SAR acceptable? As with smart practices, we also have to analyze the deployment of care robots in the light of the goals and paradigms that shape it. The demographic shift and the shortage of nursing personnel are crucial in this regard. There is hardly a research paper on care robots that does not start from this twofold challenge in nursing. Although the use of robots might contribute to mitigating the effects of these developments, we are again facing the issue of solutionism here, applying technical fixes to genuinely social problems. In the context of care robots, solutionism might have two negative effects. First, a solutionist approach might overestimate the potential of technology, i.e. care robots, for solving the social problem at hand, namely the increasing care burden. It is highly doubtful that robots can be implemented at the required scale, due to financial and institutional issues. Furthermore, although robots might substitute some human activities, they are not capable of substituting human care work altogether, at least not in the foreseeable future. Second, solutionism might obscure non-technical alternatives to the broad implementation of care robots for tackling the increased care burden. It also shifts the focus away from the question of why the shortage of nursing personnel exists. Instead of relying on technical fixes, one could argue that better education and working conditions, better payment, and an empowerment of nursing professionals within the healthcare sector might be appropriate measures here. The more technology is framed as a panacea, the less these alternatives are discussed. Solutionism might also have long-term consequences that affect or even disrupt relationships between humans. Using robots to replace human interactions and interpersonal relations might have positive outcomes in terms of patient benefits, efficacy of nursing practices, and cost-effectiveness. However, it may also be detrimental to meaningful relationships and cooperation. Defining interpersonal relationships and human interactions as something better replaced implies a devaluation of these forms of social bonds and practices. Even if we grant that machines can perform nursing practices, we should still ask whether this is the best way. We should define which tasks we want to delegate to machines and which should remain core human practices, even if a technical solution were available. Hence, we should not simply assess care robots in terms of efficacy and cost effectiveness, but also include what is best from the perspective of human care givers and patients. Surely, a categorical rejection of all robot technology in nursing would be absurd. Care robots clearly have the potential to improve services, the work conditions of nurses, and patient well-being. Exoskeletons and service robots may support nurses and, if implemented the right way, even allow them to spend more time on social interactions with patients. SAR use may have benefits for patients in some contexts and under some conditions.
However, when planning the implementation of a SAR, one should be sure of the purpose of its use and make this purpose transparent.
Replacing human interaction or human personnel with SAR is only acceptable if it really surpasses all alternatives in terms of direct benefit for the patient. In any case, it should be considered whether alternatives are available that would achieve the same outcomes without the use of SAR. Mere cost-effectiveness should not be a sufficient reason for replacing the human interactions and relationships so crucial for a humanistic nursing practice. If we agree that we should not implement care robots for the wrong reasons (solutionism), what would be the right reasons? One perspective is the design and use of care robots for supporting human care givers instead of substituting them. This is obvious in the case of service robots or exoskeletons, which are designed to aid human care givers or enhance their physical capabilities. SAR, too, could be designed in a way that enhances the interactions and interpersonal relations between humans, i.e. care givers and patients (van Wynsberghe, 2022). A cross-cultural study by von Humboldt and colleagues (von Humboldt et al., 2020) found that MAI technologies could support meaningful relationships during the COVID-19 pandemic. Another study focused on the use of Paro in long-term care facilities and found that the SAR could improve mood and enable better communication with relatives (Moyle et al., 2019). User-centered design is the key aspect in this regard. There is a gap between the design preferences of users and those of developers, for example when it comes to the appearance of the robot or its capability of adapting to individual preferences (Hannah Louise et al., 2019). One strategy to overcome this gap could be participatory design approaches that integrate the perspective of stakeholders (patients, care givers, relatives) into the design process. Studies have shown that such an approach makes it possible to better identify the therapeutic values of SAR deployment and use as well as day-to-day challenges that may arise (Šabanović et al., 2015). Besides design, a participatory approach could also extend to the implementation process. It has been shown that methods of qualitative empirical research, such as interviews with stakeholders, could be a way to identify meaningful modes of use and define appropriate goals for using SAR (Joshi & Šabanović, 2019). Again, thick data could leverage MAI technologies in this context as well. Another important goal should be the design of trustworthy SAR. Several features can constitute trustworthiness here, such as the ability to recognize and adapt to user preferences (robot personalization), reliability, and safety (Langer et al., 2019). A human-in-the-loop approach could ensure that trustworthiness is included in the design of SAR by supervising the machine learning process.
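What such supervision might look like in practice can be sketched minimally as follows; the workflow, class, and function names are hypothetical, not drawn from any cited system.

```python
# Hypothetical human-in-the-loop gate for a SAR's learned behavior updates:
# a proposed adaptation to a resident's preferences is queued for review by
# a nurse before the robot is allowed to act on it.
from dataclasses import dataclass

@dataclass
class ProposedAdaptation:
    resident_id: str
    change: str              # human-readable description of the learned change
    model_confidence: float

REVIEW_QUEUE: list[ProposedAdaptation] = []

def propose(adaptation: ProposedAdaptation) -> None:
    # Nothing is deployed automatically; every change awaits human sign-off.
    REVIEW_QUEUE.append(adaptation)

def review(approve_fn) -> list[ProposedAdaptation]:
    """A nurse (represented by approve_fn) accepts or rejects each item."""
    approved = [a for a in REVIEW_QUEUE if approve_fn(a)]
    REVIEW_QUEUE.clear()
    return approved

propose(ProposedAdaptation("res-42", "Initiate conversation at 7:00 instead of 8:00", 0.81))
deployed = review(lambda a: a.model_confidence > 0.8)  # stand-in for nurse judgment
print([a.change for a in deployed])
```

The design choice worth noting is that the learning system only proposes; the authority to change the robot's behavior stays with a human, which is one way of operationalizing the attribution of responsibility discussed above.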
6.2.3 The Role of Nurses
In order to understand the transformation of nurses' roles through MAI, one has to clarify the existing roles. Peplau defines six nursing roles that are essential for the nurse-patient relationship (Peplau, 1988): The stranger role defines the initial encounter between nurse and patient. Nurses should treat patients with respect and without prejudice, just as with any stranger one meets.
It is especially important not to make any assumptions about the abilities of the patient or the lack thereof: the default should be to treat the patient as a fully capable person. The resource role refers to the task of nurses to provide information, e.g. on the treatment plan. In this understanding, nurses are the patient's resource for health information and should take responsibility for this task. The teaching role builds on the resource role, deriving lessons from the provided information and supporting the patient in making sense of it. The counseling role implies helping the patient to better understand their overall situation and providing guidance and encouragement. The surrogate role means that nurses help the patient to form a relationship modeled after relationships familiar to them. The leadership role refers to the duty of nurses to actively include patients in achieving treatment goals. MAI technologies may have an impact on all of these roles. By providing detailed and accurate information on the patient, MAI technologies could strengthen the nurses' ability to cope with the stranger role, since they can get a better picture of the patient's situation. However, digital positivism could negatively affect this very same role through reductionist or biased information. It is therefore especially important to sensitize nurses to the potential risks of digital positivism. Nursing education should provide and enhance the skill of critically reflecting on this issue. MAI technologies could enable nurses as information integrators, health coaches, and deliverers of human caring (Robert, 2019). The ability to make better use of individual health data and to use the stereoscopic vision of the nursing gaze might thus strengthen the resource, teaching, and counseling roles. As case and information navigators, nurses could enable the integration of nursing knowledge into technology design as well as into smart data practices at the point of care (Buchanan et al., 2020). This would expand these roles to the design and implementation level. Assistive technologies, communication devices, and robots could empower the surrogate role by enabling meaningful communication and relationships. The use of these technologies also has the potential to severely disrupt nursing relationships through dehumanization and loss of trust. The crucial aspect here is participatory approaches for a user-centered design and implementation. Better data use and especially mHealth technologies could support nurses in the leadership role, providing them with tools to leverage patient engagement in the treatment process. The use of these technologies has to be tailored to the individual preferences and resources of the patient in order to avoid overstraining their capabilities. It is also possible for nurses to use SAR for activating patients or improving communication with them. Taken together, MAI technologies will impact the different roles of nurses and bring about a fundamental shift in the position of nurses within healthcare. As some state, nurses will become overseers of care whose tasks will mainly be coordinating the care process, which also implies delegating tasks to MAI technologies, and ensuring patient-centered care by catering to patients' needs (Pepito & Locsin, 2019). The increasing automation and technization of nursing tasks will require nurses to coordinate these processes and to make sure that the holistic, humanist quality of nursing remains.
The unique double nature of nursing knowledge, which combines foreground and background knowledge, is essential here.
Nurses could contribute to avoiding the negative effects of digital positivism by contextualizing data with the real-life situation of patients, thus enabling a meaningful, comprehensive interpretation (Brennan & Bakken, 2015). Although this might seem a suitable role for nurses at first glance, a closer look reveals an inherent danger. Defining nurses as overseers of care responsible for ensuring a holistic, humane, and person-centered practice in a highly technicized setting may overstrain their abilities. Nurses would have to assume the aforementioned role of guardians of humanity who defend patients against the possible dehumanizing effects of MAI technologies (Rubeis, 2021b). The fundamental flaw here is to ascribe to nurses the duty of protecting patients' humanity without providing the appropriate means and setting the right goals. In a setting where policy makers or healthcare institutions implement MAI technologies to reduce human contact, standardize patients and their behavior, and save costs, it is hard to see how nurses could fulfill their role as guardians of humanity. Nurses would be in a passive role where they can only compensate for the collateral damage of MAI technologies. If a holistic nursing practice that focusses on the patient as an individual is the desired outcome, this has to be defined as a goal of MAI design, implementation, and use. Hence, a more active role of nurses in all of these processes is needed to enable them to deliver patient-centered, holistic care. Nurses should take the role of influencers, validators, and strategic advisors by building strategies for dealing with change, supporting and evaluating the adaptation of MAI technologies, and supporting the translation of evidence into practice (Fuller & Hansen, 2019). This is only possible when nurses play an active role in the co-design of MAI technologies as well as in decision-making processes regarding technology implementation and use.
6.3 The Therapeutic Relationship in Mental Healthcare
By mental healthcare, I refer to healthcare services in psychiatry and psychotherapy, both inpatient and outpatient care, including nursing. Healthcare professionals in this field can be doctors, psychotherapists, or nurses. Those receiving mental healthcare services can be patients or clients. I use the terms care givers and care receivers to refer to these two groups, respectively. Wherever necessary, I will explicitly refer to the specific roles (doctors, clients, etc.). One significant parallel between mental healthcare and somatic medicine as well as nursing is the twofold problem of the mental health burden. On the one hand, we are witnessing an increase in the prevalence of mental disorders. Depression, for example, has become one of the world's leading contributors to disability (WHO, 2017), with an estimated 280 million people suffering from depression worldwide (Khan et al., 2023). Hence, the increased mental health burden is a global problem. As with all health issues, social and global inequities shape the mental health burden, meaning that it affects different social groups differently, on a domestic as well as a global scale.
On the other hand, there is a shortage of doctors, therapists, and mental health nurses (WHO, 2022). As a result, 71% of people with a psychosis worldwide do not receive treatment (ICN, 2022). This also affects mental health professionals, since high workloads contribute to stress and negative mental health outcomes across this group (O'Connor et al., 2018). As with the similar issue in nursing, experts propose the implementation of MAI technologies in mental healthcare as a strategy for mitigating the effects of this twofold problem of increased mental health needs and personnel shortage. MAI technologies have a broad range of possible applications in mental health. As in somatic medicine, the big data approach in combination with machine learning techniques enables the integration and modelling of multimodal data from different sources (Tai et al., 2019). This includes data mining technologies for disease classification, predictive modelling, prognosis, diagnosis, and treatment (Alonso et al., 2018). Computer vision techniques are also an important feature, e.g. in detecting mental illness risks in MRI scans (Zhang et al., 2023). NLP techniques help to detect indicators of individual risk factors in interviews, clinical notes, and social media posts (Zhang et al., 2022). Self-monitoring, mostly in the form of apps, is a practice for obtaining data as well as part of interventions such as behavior change (Alqahtani et al., 2019). Mental health apps can be used as tools within face-to-face therapy (blended therapy) or as stand-alone applications, either with supervision by care providers (guided therapy) or without care giver support (unguided therapy) (Rubeis, 2021a). CDSS have shown positive results in supporting clinicians and therapists in benchmarking and testing, but due to their early phase of maturity, clear evidence for the improvement of practice and patient outcomes is still lacking (Higgins et al., 2023). MAI in mental healthcare can also take an embodied form, either as virtual agents such as chatbots and avatars, or as physical agents, i.e. robotic interfaces (Fiske et al., 2019). This type of conversational AI can simulate conversations and even provide mental health support (Sedlakova & Trachsel, 2023). Some authors have pointed to the high potential of combining different MAI techniques for improving diagnostic and therapeutic practices. One such approach is iHealth, a concept that integrates data mining techniques with self-monitoring and sensor and monitoring technologies (Berrouiguet et al., 2018). iHealth aims to combine data from the EHR and clinical notes with data from mHealth applications collected by the patient and data from smart sensors in the living environment of the patient. It uses the method of ecological momentary assessment, which focusses on obtaining more contextual data from the patient's personal environment. This is especially important in the mental health context, since signs, symptoms, and behaviors are crucial for diagnosis and decision-making due to the limited availability of biomarkers and other omics data. An iHealth approach could support care providers in making decisions based on real-time data and algorithmic modelling. This could lead to better-informed decisions regarding diagnosis, treatment options, admission and discharge, and risk stratification. Berrouiguet and colleagues (2018) discuss the example of a bipolar patient who shows irregular sleep patterns and increased activity.
Sensor and monitoring technologies can record this data and analyze it through machine learning techniques, thus informing a psychiatrist that a manic episode is imminent. The psychiatrist can then contact the patient to assess their situation and decide on the best intervention.
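A minimal sketch can make this pipeline concrete; the thresholds, field names, and data are invented for illustration and are not taken from Berrouiguet and colleagues.

```python
# Hypothetical sketch of iHealth-style alerting logic: combine sensor-derived
# sleep and activity features and flag a possible prodromal pattern for
# psychiatrist review. Thresholds are illustrative only; the alert triggers
# human assessment, not an automated intervention.
from statistics import mean

def flag_possible_episode(nightly_sleep_hours: list[float],
                          daily_activity_index: list[float]) -> bool:
    """Return True if recent sleep is short and irregular while activity rises."""
    recent_sleep = nightly_sleep_hours[-7:]
    sleep_drop = mean(recent_sleep) < 5.0
    irregular = max(recent_sleep) - min(recent_sleep) > 3.0
    activity_rise = mean(daily_activity_index[-7:]) > 1.5 * mean(daily_activity_index[:-7])
    return sleep_drop and irregular and activity_rise

sleep = [7.5, 7.0, 7.2, 7.4, 7.1, 7.3, 7.0, 6.0, 4.5, 5.5, 3.5, 4.0, 6.5, 3.0]
activity = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 1.0, 1.4, 1.6, 1.8, 1.7, 1.9, 1.8, 2.0]

if flag_possible_episode(sleep, activity):
    print("Notify psychiatrist: pattern consistent with an imminent manic episode")
```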
This spectrum of available technologies suggests that MAI in mental healthcare may achieve the same goals as in somatic medicine and nursing: improving overall patient outcomes, accuracy, efficacy, and cost efficiency (Alonso et al., 2018; Fiske et al., 2019) as well as mitigating the treatment gap (Higgins et al., 2023). It is therefore no surprise that the impact of MAI in mental healthcare raises similar ethical concerns as in healthcare in general. Although the insights from somatic medicine and nursing are valuable for analyzing the ethical implications of MAI in mental healthcare, this field requires an additional, specific analysis. This is due to the distinct nature of the therapeutic relationship, the characteristics of mental health disorders and patients, and the features of the therapeutic project, meaning the goals and purposes of the therapeutic enterprise (Radden, 2002).
6.3.1 Nature of the Therapeutic Relationship
The therapeutic relationship in mental healthcare is in itself a treatment tool (Radden, 2002). It has been widely recognized as the crucial common factor and is associated with positive therapeutic outcomes (Bolsinger et al., 2020). Some authors speak of a therapeutic alliance instead of a relationship, which emphasizes its crucial relevance (Ardito & Rabellino, 2011; Bordin, 1979). The mutual agreement of care giver and care receiver on treatment goals, an agreement on the tasks the treatment requires, and the development of a personal bond based on positive feelings and reciprocity constitute the therapeutic alliance (Ardito & Rabellino, 2011). A successful therapeutic alliance requires care givers to gain the trust of care receivers and to recognize and align with their beliefs (McParlin et al., 2022). Hence, in order to be patient-centered, the therapeutic alliance requires including the individual care receiver's perspective and actively engaging the care receiver in setting goals and participating in decisions on therapeutic steps. Compared to the therapeutic relationship in somatic medicine, the therapeutic alliance may in some therapeutic settings require an emotional bond between care giver and care receiver in order to achieve therapy goals. Trust and compliance are important enablers of a successful therapy in somatic medicine, but in mental healthcare, the active engagement of care receivers in therapy and their willingness to form a relationship with care givers is essential for therapy to work at all.
6.3.2 Mental Health Disorders and Patients
Somatic medicine characterizes diseases largely by their etiology, meaning the factors that contribute to their development.
These factors may be genetic, related to behavior such as eating habits or substance abuse, environmental factors like toxins, or caused by germs such as viruses, bacteria, fungi, or protozoa. Furthermore, psychological issues and social factors, such as structural discrimination, poor housing or working conditions, or a lack of health education, can cause disease, exacerbate symptoms, or affect access to healthcare. As we have discussed in Sect. 4.2.2, the biopsychosocial model recognizes the variety of contributing factors and integrates them into one coherent model of health and disease. Mental disorders show an even higher complexity. This is in part due to the uncertain nature of mental states and experiences, particularly regarding the relationship of somatic and mental factors (Patil & Giordano, 2010). This makes it difficult to form etiologies and clear definitions of mental disorders the same way somatic medicine does for diseases of the body. Even the very concepts of disease and mental disorder as such are contested. From the mid-twentieth century on, the so-called medical model in psychiatry and psychotherapy has been an object of criticism. The medical model suggests that mental disorders have somatic, mainly neurological causes, and that treatment should focus on eliminating them (Hogan, 2019). This form of reductionism implies that mental health and illness rest on biomedical factors, especially biochemical mechanisms in the brain. Commentators have raised various types of critique against this point of view. The anti-psychiatric critique rejects the concept of mental illness or mental disorder altogether (Thompson, 2018). Critics like Szasz (1960) argue that mental processes can be dysfunctional, but a mind cannot be ill in the same way a body can, which means mental illness is at best a metaphor. This led some authors to develop an exclusionist approach that demands removing mental health and behavior from medical oversight altogether (Hogan, 2019). A crucial issue in this regard is that concepts such as disease and illness not only lack clear-cut definitions, but are also laden with value judgements (Huda, 2021). They may thus not be scientific or objective, but rather manifest social perceptions and normative judgments regarding certain behaviors. Alternative models such as the social model recognize this aspect, claiming that mental illness is the product of societal factors and oppression rather than a medical issue (Hogan, 2019). Similarly, the recovery model aims to de-pathologize mental conditions and behaviors. This model implies that the goal of psychotherapy should be to enable individuals to build a meaningful and satisfying life according to their beliefs and values. By focussing on health, strengths, and well-being, the aim is to support individuals in gaining active control over their own lives. This enhancement of agency implies an encouragement of self-management and requires a strong therapeutic alliance as a helping relationship. Since each individual has their personal ideas about what mental well-being means, care givers should support and enable individuals to realize these ideas instead of enforcing a therapy based on predefined sets of clinical measures. Social inclusion, building meaningful relationships, and rediscovering one's own personality apart from mental illness are key here (Thornton & Lucas, 2011). Moving away from the medical model has not only affected therapeutic approaches but also diagnostic and etiological concepts.
Some commentators state that these concepts are often heterogeneous categories that go beyond clear-cut definitions (Huda, 2021). Hence, diagnostics and etiology in mental healthcare are not binary, but rather form a spectrum (Patil & Giordano, 2010). This approach is the basis of the rather recent concept of neurodiversity, which has been developed in the context of autism and attention deficit hyperactivity disorder (ADHD) (Manalili et al., 2023). Neurodiversity implies that there is a variety in mental functions and cognitive abilities and that differences between individuals are not necessarily pathological. Hence, neurotypical functioning or behavior cannot be taken as a representation of "normality" in a normative sense that defines every deviation as pathological rather than as a natural variation. The existence of alternative models does not mean that biomedical approaches to mental health and mental disorders are obsolete. On the contrary, research efforts to identify biomarkers for mental disorders have gained momentum in recent years (García-Gutiérrez et al., 2020). This is a direct response to the great heterogeneity of patients in their clinical presentation. Biomarker research focusses on identifying measurable indicators of normal or pathological processes as well as of responses to intervention. The goal of this research is to use biomarkers for stratifying patient groups in order to enable better-suited treatment and improved clinical outcomes. This implies an improvement of diagnostic, prognostic, and predictive practices in mental health. Biomarker research is especially relevant here since existing pharmacological treatment options show limited efficacy. The identification of biomarkers in mental healthcare mainly focusses on omics research, which implies that big data approaches and MAI technologies are of special relevance. To sum up, the unique nature of mental health and mental illness is complex not only due to heterogeneous influencing factors and highly disputed concepts; we also witness different, in part conflicting, concepts and research interests regarding diagnostic and therapeutic practices. This complex situation makes a nuanced ethical analysis of MAI technologies in mental healthcare necessary. The unique nature of mental disorders also implies a special situation of care receivers that differs significantly from that in somatic medicine. One important aspect in this regard is the different level of vulnerability. Care receivers in mental healthcare are especially vulnerable due to several factors. Mental disorders often affect not just one aspect of an individual's life, but their overall ability to live an autonomous life, including self-regulation and self-management. Autonomy is a crucial aspect in this regard (Bergamin et al., 2022): Some mental disorders, such as compulsive disorders or addictions, imply a loss of control and impair self-determined decision-making. Depressive disorders may cause a lack of motivation, which hinders patients in achieving short-term or long-term goals. Rigid dysfunctional beliefs may constrain patients in pursuing personal goals or career paths. This may severely affect an individual's relationships with others, e.g. due to an impaired capability of forming relationships, violent behavior, or other behavioral features. In all of these examples, mental disorders undermine the patient's autonomy as the capability to make self-determined decisions and act of one's own free will. Different mental disorders affect autonomy differently, and symptom severity is also a factor here, which further complicates the matter.
Furthermore, autonomy may not be consistent over time, meaning that there can be phases in which the patient is capable of self-directed decisions and actions and others in which this capability is severely impaired. But vulnerability does not only arise from the mental disorders themselves. The social perception of mental illness is also a contributing factor. Attitudes towards mental disorders, therapy, and mental health patients are often based on stereotypical or negative views. Hence, mental disorders are often objects of moral judgement. This kind of social perception may stigmatize individuals with a mental disorder, i.e. define them solely in the light of negative attitudes (Radden, 2002). Stigmatization may result in social exclusion and economic disadvantages. Experiencing stigma may exacerbate symptoms and further deteriorate the mental health situation of individuals.
6.3.3 The Therapeutic Project
Therapy can have different goals and functions, depending sometimes on its context, sometimes on the decision of patients. Therapy can serve two main purposes (Radden, 2002): First, therapy may aim to restore functions that have been impaired by mental disorders or have been lost altogether, such as autonomy and self-control. Second, the goal of therapy may be to reform the character of a person, meaning their dispositions, capabilities, and social and relational attributes. In the light of what we have discussed above, it becomes clear that both purposes may have ethical implications due to the nature of mental disorders and the specific vulnerability of patients. We have seen that concepts of mental disorders as well as etiologies may include normative judgements. Normativity may also affect both possible therapeutic purposes.

Restoring functions implies the definition of a norm, meaning normal abilities or behaviors. Social perceptions and biased notions might shape this norm and therefore lead to stigmatization and discrimination of individuals. Furthermore, concepts of normality and the pathological may also serve as instruments for exerting power and enforcing social control. There is a long tradition of criticism along these lines, mainly based on the work of Foucault and his concept of biopower, which we have already discussed in the context of the clinical gaze. According to this approach, medical knowledge can be seen as a form of power that can be used to discipline the individual (disciplinary power) or to regulate the life processes of populations (biopolitics). A major aspect of biopower is the distinction between the normal and the pathological. This holds for somatic medicine as well as for psychiatry and psychotherapy. Especially the antipsychiatry movement mentioned above saw the framing of certain behaviors as mental disorders, and the corresponding therapeutic interventions, as mechanisms of biopower.

Reforming the character of an individual is even more problematic, since it explicitly requires some normative notion of normal function or behavior. As some authors claim, enhancing the morality of individuals is a legitimate goal of psychiatry and psychotherapy due to the nature of many mental disorders (Pearce & Pickard, 2009).
Especially in personality disorders, symptoms are often described in primarily moral categories, such as lack of empathy, an inclination towards impulsive or violent behavior, and lying. Hence, effecting moral changes in individuals can be a legitimate goal of therapy. This goal can be achieved in various ways (Pearce & Pickard, 2009). Psychological interventions can enable individuals to develop moral motives and intentions. They can also be an instrument for supporting individuals in acquiring or developing cognitive skills that enable moral action, e.g. empathy. Another perspective is to help individuals apply these skills in social interactions. These elements of moral development and moral growth are especially relevant in forensic psychiatry, which treats individuals in prisons or secure hospitals. As some argue, forensic psychiatry necessarily aims at the moral improvement of patients as a therapy goal, which in itself is not problematic as long as this goal is made explicit (Specker et al., 2020). However, critics hold that forensic psychiatry, especially when it uses methods of behavior modification, can be considered a form of biopower in the Foucauldian sense (Holmes & Murray, 2011). That means that instead of benefitting the patient, this kind of therapy is primarily a tool for disciplining individuals and adapting their behavior to the norms of society. From the perspective of medical ethics, one could say that such therapy does not focus on the well-being of the patient, but primarily pursues the goals of others. This undermines the autonomy of patients by instrumentalizing them for political goals and may also cause harm when the patients’ health needs remain unmet.

In summary, there are several ethically relevant factors that set mental healthcare apart from somatic medicine. The therapeutic alliance is the decisive factor for the success of a therapy. It requires an emotional bond between therapists and patients built on trust and the acknowledgement of the patient’s beliefs and values. Integrating the patient into therapy in terms of active engagement is the precondition for therapy to work. However, the autonomy of patients, their capability for self-directed decision-making and action, may be undermined by mental disorders. In fact, reestablishing autonomy may in itself be a therapy goal. Hence, a therapeutic alliance that enables relational autonomy is especially important. Complex interactions of biophysical and social factors shape mental disorders. Cognitive abilities and functions as well as behavior are to be understood as a spectrum of natural variation, within which the distinction between health and disease has to be situated. This also means that the care receiver’s individual definition of health and well-being, their beliefs and values, plays a crucial role in defining what a mental disorder is and means, how to deal with symptoms, and what goals therapy should achieve. Enabling the patient to actively participate in therapy therefore means encouraging self-management, respecting personal beliefs and values, facilitating shared decision-making, and acknowledging vulnerabilities. The purpose of therapy defines its forms as well as its goals. This implies several ethical risks. The fact that some forms of therapy imply a moral improvement of care receivers raises the question of what norms and values should guide this process. Care givers have to be aware of the risk that social perception, bias, and stereotypes might have an impact on therapy goals.
Focusing on the well-being and best interest of the care receiver is therefore crucial to avoid
instrumentalization. At the same time, care givers have to enable care receivers to live a fulfilling life, which also implies the ability to recognize norms and rules of moral conduct and to act in accordance with them. Hence, care givers walk a thin line between empowerment and disciplining, which requires empathy and the willingness to recognize the care receiver’s beliefs and values. This is another reason why the therapeutic alliance in mental healthcare is unique and differs from somatic medicine. Furthermore, vulnerability is crucial in this regard. Patients are vulnerable not only in terms of the nature of mental disorders and the risk of stigmatization, but also with regard to instrumentalization. It is therefore important to be transparent about the purpose of therapy in order to allow care receivers to make well-informed decisions within the limits of their abilities and resources. Now that I have established the specific characteristics of ethical issues in mental healthcare, I can analyze the impact of MAI on the therapeutic alliance as the significant relationship.
6.3.4 Impact of MAI on the Therapeutic Relationship
Epistemology
MAI technologies form the instruments of smart data practices in mental healthcare, which first and foremost means that they will impact the epistemology of care givers (Sedlakova & Trachsel, 2023). As with somatic medicine and nursing, the same risk of digital positivism might occur, leading to reductionism and bias. Reductionism is especially an issue here due to the heterogeneity of mental disorders. Since clear-cut labels based on biomedical biomarkers are difficult to define, machine learning algorithms may lack accuracy in terms of sensitivity and specificity (Lee et al., 2021). This could in turn lead to a situation where building consistent models with manageable variables requires reducing the complexity of how mental disorders manifest. This reduction of complexity for pragmatic reasons might undermine the personalization of treatment as a crucial goal of MAI.
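To make these two accuracy terms concrete, the following minimal sketch (my illustration, with invented numbers; it is not part of the cited study) shows how sensitivity and specificity are computed for a binary screening model. The point relevant to the argument above is that both metrics presuppose trustworthy “true” labels, which contested diagnostic categories cannot straightforwardly provide.

```python
# Minimal sketch: sensitivity and specificity for a binary screening model.
# All data are invented for illustration; 1 = disorder present, 0 = absent.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = true positive rate; specificity = true negative rate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Note that the "ground truth" below is itself a product of contested
# diagnostic definitions, which is exactly the epistemic problem at stake.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]

sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# -> sensitivity = 0.50, specificity = 0.83
```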
In addition, the increased focus on data might intensify a process that some authors have described as psychiatrization, i.e. the expansion of diagnoses and the framing of diverse functions and behaviors as pathological (Beeker et al., 2021). Psychiatrization can be seen as a form of biomedicalization, the process that defines the normality of human life and behavior by means of biomedical science and technology and turns health into a commodity (Clarke et al., 2003). An important aspect here is that health, be it somatic or mental, becomes the responsibility of individuals, who have to permanently maintain or actively produce it (Beeker et al., 2021). The type of reductionism connected to digital positivism could be a contributing factor to psychiatrization. The increased focus on datafying all human behavior might lead to a standardization that defines normality based on quantifiable variables and class labels. As a consequence, behavior that has hitherto been considered natural variation might now be defined as pathological. This could harm individuals by stigmatizing them and undermining their autonomy. A data-driven psychiatrization might therefore become an instrument for realizing health agendas. Another outcome could be that commercial agents or healthcare providers define certain diverging behaviors as pathological in order to sell mental healthcare services such as apps for self-management. Psychiatrization might also be the result of a bottom-up process, for example when individuals use biomedical categories to seek recognition of subjective suffering or difference (Beeker et al., 2021). Another factor might be the so-called worried well, i.e. individuals with mild or unspecific symptoms and borderline cases who seek professional help without a clear indication. This over-utilization of mental healthcare services could have a severe impact on the understaffed and underfinanced mental healthcare sector and exacerbate existing health disparities (Beeker et al., 2021).

The bias problem affects mental healthcare in a similar way as somatic medicine. Discriminatory practices based on biased diagnostic data have a long history in mental healthcare. As an example, care givers often diagnose the same disorder differently in men and women, leading to inequitable health outcomes (Jane et al., 2007). Furthermore, ethnic minorities face access barriers to mental healthcare services and are less likely to receive psychotropic medication compared to the majority population (Snowden, 2003). It is therefore likely that bias may occur on the level of training data when datasets are not diversified and representative. Aguirre and colleagues (Aguirre et al., 2021) conducted a study on gender and racial fairness in social media-based research on depression. The authors found that black people and women were underrepresented in the datasets, which led to poorer algorithmic outcomes for these two groups. In a similar study on the detection of anorexia nervosa, Solans Noguero and colleagues (Solans Noguero et al., 2023) found that the rate of false negatives was especially high for women, which implies a severe health risk for this group. Bias may especially affect older adults and sociodemographically disadvantaged individuals, meaning those whose mental health needs are already underserved (Wilson et al., 2023). Again, we find that bias may be most harmful to those who already suffer from existing health disparities. In mental healthcare as well as in somatic medicine, bias undermines the crucial advantages of MAI technologies: personalization and the possibility to reach underserved groups.

Apart from biased training data, algorithmic bias is also an issue in mental healthcare. Here, too, variables and class labels might be based on biased concepts. One example is sentiment analysis, which uses NLP to interpret linguistic clues for emotions and respond accordingly (Straw & Callison-Burch, 2020). The goal is to build models for inferring human emotions from text, e.g. clues for suicidal tendencies in social media posts. The technology can be used on an individual level for improving risk assessment and prediction as well as on a population level to map the prevalence of mental illness. The language model is based on a specific terminology that provides variables and class labels. Interpreting text like social media posts may be tricky, since men and women often use different language for expressing emotions. Furthermore, there are cultural differences in describing emotional states. Bias may occur when the language model does not account for this variety.
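The disparities reported in these studies can be made visible with a simple subgroup error audit. The sketch below is my illustration, not the method of the cited studies: the posts, labels, groups, and the deliberately naive keyword “model” are all invented, but the audit logic, comparing false negative rates across groups, is a standard fairness check.

```python
# Subgroup error audit (invented data, toy keyword "model").
# A false negative is a post signalling distress that the model misses,
# the failure mode reported as especially frequent for women above.

DISTRESS_KEYWORDS = {"hopeless", "worthless", "empty"}

def toy_screen(post: str) -> int:
    """Naive stand-in for an NLP classifier: flag a post (1) if it
    contains any keyword from a fixed, non-diverse vocabulary."""
    return int(any(kw in post.lower() for kw in DISTRESS_KEYWORDS))

# (text, true_label, group) triples; every value here is invented.
samples = [
    ("I feel hopeless and tired", 1, "men"),
    ("everything is fine today", 0, "men"),
    ("I am so done with all of it", 1, "women"),  # distress phrased outside the vocabulary
    ("feeling empty again", 1, "women"),
    ("lovely walk in the park", 0, "women"),
]

def false_negative_rate(samples, group):
    positives = [(text, y) for text, y, g in samples if g == group and y == 1]
    misses = sum(1 for text, y in positives if toy_screen(text) == 0)
    return misses / len(positives) if positives else float("nan")

for group in ("men", "women"):
    print(group, "FNR =", false_negative_rate(samples, group))
# -> men FNR = 0.0, women FNR = 0.5 (the vocabulary encodes one group's idiom)
```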
Given the risks of bias in training data and algorithmic bias, it is easy to see how outcome bias occurs and affects the decisions of care providers. One example would be the use of CDSS in mental healthcare (Maslej et al., 2023). Care givers may feed a CDSS with clinical data or notes from the EHR and integrate this input with existing research literature. The purpose is to match patient data with the best available evidence on treatment options, thus supporting care providers in selecting the fitting treatment. It is obvious that biased training data or algorithms can lead to biased outcomes, and thus possibly to biased decisions by care providers. In summary, one could say that due to the multiple levels on which bias can occur, it may affect all stages of the therapeutic process in mental healthcare, from diagnosis and clinical classification to monitoring, prediction, and intervention (Timmons et al., 2022).

Reductionism and bias as major shifts in epistemology may affect the therapeutic alliance in various ways. Care givers may encounter care receivers through the lens of datafication, which could undermine the ability to recognize the uniqueness of individuals. Uniqueness neglect (see Sect. 5.2.1) is a major risk of MAI in general, but it has especially severe consequences in mental healthcare due to the variability of symptoms, manifestations of mental disorders, and individual coping mechanisms. Standardization as a consequence of digital positivism could undermine personalization, a similar effect as in somatic medicine. Avoiding digital positivism will therefore require the same strategies as in somatic medicine, mainly a combination of diversifying training data, participatory design approaches for enabling algorithmic fairness, a clear assessment of the needs and resources of care seekers, a clear and transparent definition of the purposes of using MAI, and a human-in-the-loop approach for supervising algorithmic learning and outcomes. These measures have to be framed by policy adjustments and better education and training of care givers.

Empathy
The unique nature of the therapeutic alliance requires an empathetic understanding on behalf of care providers (Luxton et al., 2016). This means that a safe environment must enable care seekers to express emotions. The use of MAI technologies might disrupt this safe environment by introducing artificial agents. Artificial agents may interact with care seekers in the form of chatbots, i.e. software based on NLP that can interpret and react to text input. This interaction simulates a conversation, e.g. for diagnostic purposes. Embodied artificial agents can also take humanlike form as avatars, which enhances the user experience.
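To illustrate at the most basic level what “interpreting and reacting to text input” can mean, consider the following ELIZA-style sketch (a toy of my own, not a clinical system and not how modern chatbots are built). Its shallow pattern matching is precisely why the appearance of conversation should not be mistaken for understanding, a point that becomes important below.

```python
# Toy ELIZA-style chatbot (illustration only): canned reflections create the
# appearance of an empathetic conversation without any grasp of the user's state.
import re

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi can't (.+)", re.I), "What makes you think you can't {0}?"),
    (re.compile(r"\b(sad|anxious|hopeless)\b", re.I), "That sounds hard. Tell me more."),
]

def reply(user_text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."  # default that keeps the exchange going

print(reply("I feel empty lately"))    # -> Why do you feel empty lately?
print(reply("I can't sleep anymore"))  # -> What makes you think you can't sleep anymore?
```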
One example is SimSensei Kiosk, an avatar that conducts interviews with care seekers and uses various forms of sensor technology and computer vision for detecting verbal and non-verbal expressions of emotion (Devault et al., 2014). The aim is to encourage care seekers with depression, anxiety, or post-traumatic stress disorder (PTSD) to share health information for diagnosis and further decision-making. The avatar takes the form of a “virtual human” called Ellie that is able to simulate a conversation by making small talk as well as asking and responding to mental health-related questions. During the conversation, the system tracks eye movement, body language, and voice modulation and interprets them to detect possible clues for disorder-related behavior or emotions. The system’s algorithms have been trained on thousands of therapeutic conversations and use NLP techniques. The idea behind this concept is that care seekers interact with a human-like virtual therapist that does not simply provide answers to questions, but reacts to the emotional state of care seekers through comforting or consoling responses. Ellie is supposed to create an atmosphere that makes it more comfortable for care seekers to talk about their mental condition. In a way, Ellie pretends to be empathic by giving comforting comments when the data indicates that the care seeker might be experiencing negative emotions.

One benefit of avatars like Ellie could be to reach individuals who have hitherto refused therapy due to fears of stigmatization (Fiske et al., 2019). Some care seekers might be reluctant to disclose sensitive information to a human care giver and might feel more comfortable sharing it with an artificial agent. Hence, artificial agents could be a supplement to the therapeutic process (Sauerbrei et al., 2023). This may be a fitting option for some care seekers. However, considering the current status of MAI, it is questionable whether the interaction with a virtual therapist, for example, can replace the empathetic relationship with a human care giver. As discussed above, their lack of mental states and reciprocity means that bonds with artificial agents can only be unidirectional. Although the appearance of empathic behavior might be possible, artificial agents are incapable of empathy in the full sense. As we have already seen, navigating the extremely difficult task of regulating emotions, a crucial aspect of any therapeutic relationship, is something that artificial agents are not capable of for the same reasons. This ability, however, is even more important in mental healthcare, since the emotional bond between care provider and care seeker is crucial for a therapeutic alliance that enables positive outcomes. One possible risk of interacting with artificial agents is connected to what is called transference in psychotherapy, which occurs when a care seeker transfers feelings towards a person in their life onto the care giver (Luxton, 2014). Transference can be a means of supporting therapeutic outcomes, but it requires the skill to regulate one’s own emotions and to deal with the emotions of the care seeker appropriately. Since artificial agents are not able to perform this adequately, there is a risk that care seekers become overly attached to an agent that is incapable of responding to their emotions in the required manner. This may cause confusion or frustration and thus potentially lead to negative effects or exacerbate symptoms.

Autonomy
As some commentators argue, MAI technologies, especially mHealth applications, could empower the autonomy of care seekers in mental healthcare (Erbe et al., 2017; Lipschitz et al., 2019). The argument is the same as in somatic medicine: active participation, self-management and self-monitoring, and control over one’s own health data may strengthen the care seeker’s ability to make well-informed, self-determined decisions. Given the fact that restoring autonomy as the ability for
self-direction is often a therapy goal, empowering care seekers may be a potential therapeutic benefit of MAI applications. However, one must consider the specific nature of patient autonomy in mental healthcare, which differs from somatic medicine. This is due to the nature of mental illness, which may directly affect autonomy as the capability to make rational, self-directed decisions and perform actions of self-advocacy. Mental illness in general may negatively affect autonomy, but the effects differ according to the type of mental illness and the individual situation of the care seeker, which means that context is key here (Sauerbrei et al., 2023). The capability for autonomy may vary from care seeker to care seeker and may also be unstable in one individual. Care seekers may experience phases where they are temporarily unable to decide or act autonomously and others where they possess this capability. There may also be limited autonomy concerning some decisions and actions when compared with others. This makes autonomy in mental healthcare more complex and requires careful consideration and assessment of the individual situation as well as the context of the care seeker by the care provider.

Hence, autonomy is ambiguous in MAI-based technologies for mental healthcare like the aforementioned mHealth applications (Rubeis, 2021a). On the one hand, these applications could be used for restoring and empowering autonomy. On the other hand, the ability of self-directed decision-making and action is a prerequisite for using mHealth applications. A certain level of autonomy has to exist in order for care seekers to use mHealth applications effectively. When restoring autonomy is a therapy goal, care seekers might not be able to use the applications. In these cases, the focus on self-management could overstrain the care seeker’s abilities, which in turn can lead to frustration and symptom increase. When mental health apps serve as stand-alone applications and are not embedded into an existing face-to-face therapy, this might aggravate the problem. Accordingly, empirical evidence shows that mental health apps work best for care seekers with mild-to-moderate symptoms (Kerst et al., 2020) and as elements of blended therapy (Pihlaja et al., 2018). The specific nature of autonomy in mental healthcare implies that a procedural approach to autonomy is insufficient here. Rather, relational autonomy is key to making use of MAI technologies in this context. A functioning therapeutic alliance is the enabler of autonomy on behalf of care seekers. Care givers must know the care seeker well in order to decide whether a technological solution based on self-management is the right choice. Replacing human care givers should therefore not be the goal of implementing this type of MAI technology in mental healthcare. On the contrary, if we want to unleash the potential of these applications, we need a strong therapeutic alliance based on human contact.

Trust
MAI technologies could affect trust as another crucial element of the therapeutic alliance (Luxton et al., 2016). Care seekers disclose highly sensitive information and express their emotions in their encounters with care givers. This is only possible if care seekers trust care givers to treat them with respect and protect their privacy. Hence, MAI technologies in mental healthcare must be designed to enable the
fiduciary obligations of care givers (Martinez-Martin, 2021). Individual health information is always to be regarded as sensitive, and its leakage can have serious implications, such as disadvantages in the workplace or other forms of discrimination. Since mental disorders are associated with an even higher risk of stigmatization, data security and privacy protection are of the utmost importance. Further factors that enable trust are the reliability, accuracy, and consistency of the MAI application (Shan et al., 2022). This aligns with what we have already seen in the context of somatic medicine: there has to be empirical evidence for the performance, quality, and safety of MAI systems. Another requirement we have already encountered in the context of somatic medicine is transparency, which means that models of explainability are required in mental healthcare as well. One especially relevant aspect is the issue of deception. Many of the same arguments from somatic medicine and nursing also apply here. Artificial agents such as chatbots or avatars might raise false expectations by mimicking an empathetic human being (Sedlakova & Trachsel, 2023). This may become especially problematic since artificial agents are not moral agents and therefore cannot have duties or be held responsible. As we have seen, this is one of the reasons why a relationship with an artificial agent is always unidirectional. Disappointing the care seekers’ expectations might cause feelings of frustration and exacerbate existing delusional or psychotic symptoms (Luxton, 2014).

The Roles in Mental Healthcare
The roles of the three actors involved in the therapeutic alliance, i.e. care givers, care seekers, and artificial agents, are similar to those of doctors, patients, and artificial agents in somatic medicine. Yet they differ in nuances, given the specific characteristics of mental healthcare: the uniqueness of the therapeutic alliance, the nature of mental disorders and the vulnerability of care seekers, and the purposes of therapy. As enhanced practitioners, care givers could use MAI technologies for two main purposes: personalizing treatment and facilitating better access to mental healthcare services. The big data approach could enable the integration of omics data with patient data from other sources, thus facilitating more precise diagnosis, prognosis, and risk prediction. Such an approach could also allow care givers to choose the fitting therapeutic option for each care seeker. CDSS could be the right tool for this task. MAI technologies also offer the unique opportunity to obtain valuable data from the care seeker’s daily life. mHealth and IoT applications, especially sensor technologies for EMA and apps for self-reporting, could obtain this behavioral and environmental data. MAI could also enhance therapeutic interventions by providing self-management apps as well as embodied or virtual artificial agents. This could allow care seekers to participate in therapy from home, thus overcoming the access barriers of stigmatization and the lack of face-to-face therapeutic services.
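As a rough illustration of what such EMA (ecological momentary assessment) data might look like, here is a minimal sketch; the field names, values, and the simple flagging rule are my assumptions for the sake of illustration, not a description of any existing product.

```python
# Minimal sketch of an EMA data point combining self-report and passive
# sensor data (all field names and the threshold rule are illustrative).
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EmaRecord:
    timestamp: datetime
    mood_rating: int      # self-reported, e.g. 1 (very low) to 7 (very good)
    sleep_hours: float    # from a wearable
    steps: int            # from the phone's pedometer
    note: str             # free-text self-report

def needs_follow_up(records: list[EmaRecord]) -> bool:
    """Toy rule: flag for the care giver if mood stays low for three
    consecutive reports. A real rule would need clinical validation."""
    recent = records[-3:]
    return len(recent) == 3 and all(r.mood_rating <= 2 for r in recent)

week = [
    EmaRecord(datetime(2024, 5, 1, 9), 2, 5.5, 1200, "slept badly"),
    EmaRecord(datetime(2024, 5, 2, 9), 2, 6.0, 900, "stayed home"),
    EmaRecord(datetime(2024, 5, 3, 9), 1, 4.5, 300, "no energy"),
]
print(needs_follow_up(week))  # -> True: surface to the care giver, do not auto-act
```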
From the perspective of care givers, all these possibilities could enhance their practices by providing a better evidence base for decision-making and more options to support and interact with care seekers. At the same time, care givers would have to adapt to these new possibilities by acquiring digital literacy and skills in handling the technologies. Another challenge would be to build trust in a highly technologized setting, especially with regard to care seekers’ concerns about privacy and data security. In order for this role to become a reality, several accompanying measures need to be taken. The role of enhanced practitioner should be defined as the most desirable outcome of implementing MAI technologies in mental healthcare. This requires a focus on this role in training and education, technology design, and policy-making. The ideal outcome would be to combine the skills of care givers for building a meaningful, patient-centered therapeutic alliance that centers on the uniqueness of the care seeker with MAI’s potential for accuracy, efficacy, and personalization.

The role of mediator has in part already become a reality in certain mental healthcare settings. When care seekers use mental health apps in a guided stand-alone setting, meaning without an accompanying face-to-face therapy, this often reduces the role of care givers to providing information or trouble-shooting. This is not problematic per se, since there may be scenarios where reduced contact with care givers and a focus on self-management is the best solution for the care seeker. However, whether this is the case has to be thoroughly evaluated by assessing the care seeker’s needs and resources. This model is certainly not suitable as a one-size-fits-all approach. Another possible scenario where care givers could become mediators is the use of CDSS. Given the epistemic risks of digital positivism, i.e. reductionism and bias, CDSS should be handled with critical awareness. This risk grows with the level of automation, so algorithmic decisions should not be the final word in therapy; a simple safeguard of this kind is sketched below. Furthermore, reducing the role of care givers to mediators could lead to a loss of trust on behalf of care seekers. In general, it could make building a patient-centered therapeutic alliance on meaningful emotional bonds more difficult. Hence, the role of mediator should only be an outcome in a limited number of care scenarios and always be evaluated in the light of a care seeker’s needs and resources.
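One minimal way to encode the principle that algorithmic output should not be the final word is to make clinician confirmation a structural requirement of the CDSS workflow. The following sketch is an illustrative design assumption of mine, not a description of any actual system: the model may rank options, but only a named care giver can turn a suggestion into a decision.

```python
# Human-in-the-loop sketch: a CDSS suggestion becomes a treatment decision
# only after a named clinician reviews it (illustrative design, not a real CDSS).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    treatment: str
    model_confidence: float   # the algorithm's own score, not clinical truth

@dataclass
class Decision:
    treatment: str
    approved_by: str          # audit trail: every decision is attributable
    rationale: str

def confirm(suggestion: Suggestion, clinician: str, rationale: str,
            accept: bool) -> Optional[Decision]:
    """The clinician may accept or override; the model never decides alone."""
    if not accept:
        return None  # overridden: the care giver selects another option
    return Decision(suggestion.treatment, clinician, rationale)

s = Suggestion("CBT with guided app support", model_confidence=0.78)
d = confirm(s, clinician="Dr. A. Example", rationale="fits patient preference",
            accept=True)
print(d)
```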
For care givers, the role of supervisor is even less desirable in mental healthcare than in somatic medicine. The therapeutic alliance is the crucial common factor in mental healthcare. It requires empathy, reciprocity, and the recognition of care seekers’ personal values, preferences, and goals. Patient-centered care requires a strong focus on relational autonomy, meaning that care givers need to enable care seekers to restore their abilities for self-direction. The broad spectrum and individual variety of mental health conditions require careful consideration of the specific situation of the care seeker. Relying on standardized, automated processes of data collection and analysis involves the risk of undermining this very task. In addition, the purposes of MAI technologies need to be clearly defined and communicated in a transparent fashion. Reducing human contact and implementing standardized, automated processes may imply that other purposes like efficiency and cost reduction replace the goals of restoring mental health or morally improving the care seeker. When MAI technologies are used in the home environment and everyday life of care seekers, they could become tools for disciplining. Health agendas or financial interests may instrumentalize care seekers for their own benefit. In a scenario that reduces the role of care providers to that of supervisors, their options for protecting care seekers against instrumentalization and ensuring patient-centered mental healthcare are limited. Hence, the role of supervisor should not be the goal of MAI use in mental healthcare.
The best outcome for care seekers would be the role of the empowered patient. Better personalization and access as well as a focus on self-management and enhancing autonomy would be the main benefits in this scenario. This would of course require defining this role as the desired outcome and aligning technology development and mental healthcare services with it. Implementing MAI technologies alone will not automatically bring these benefits. Both technology design and mental health services have to be built around the idea of patient-centeredness. In such a setting, the use of MAI could facilitate the tailoring of therapeutic measures to the care seeker’s unique characteristics. MAI technologies could enhance data collection and communication, thus allowing care seekers to better engage in the therapeutic alliance. In some cases, this could enhance the autonomy of care seekers and enable them to take their mental health into their own hands. In other cases, this could allow care givers to adequately address the vulnerability of care seekers by providing them with the most fitting treatment option. Overall, it would mean an empowerment of care seekers in terms of a more person-centered mental healthcare.

The role of data double is the other side of the coin here. Datafication is of course necessary for MAI technologies to work in mental healthcare as well. The epistemological risks this process implies require a careful consideration of individual factors as well as social determinants of mental health. The variety of mental health conditions sets a limit to efforts at datafying care seekers. The qualitative aspect, i.e. the individual situation as well as the preferences, values, and goals of care seekers, requires thick data in terms of qualitative approaches and contextualization. Hence, the data double should be handled with all necessary restraint and awareness of the epistemic limitations of this role. Again, “digitizing the essence of a human being” cannot and should not be the aim here. Care givers should integrate the data double into existing mental healthcare practices that acknowledge the uniqueness of care seekers. Making proper use of data doubles requires a functioning therapeutic alliance, which is another argument against automation.

The care seeker as engaged consumer has already become a reality. This is due to the abundance of digital mental health services on the free market. Thousands of mental health apps are available, whereby the line between health condition management and wellness management often becomes blurry (see Sect. 3.3.5). Mental health and well-being have become a lucrative market, often targeting individuals who want to change their lifestyle as well as the worried well. The issues of making mental health a market cannot be discussed in detail here. Whether the engaged consumer is a fitting model for care seekers, i.e. individuals with a mental disorder, is highly doubtful. This model implies a specific understanding of autonomy, where care seekers make self-determined decisions based on MAI technologies. This enables them to become more independent from the expertise of care givers and thus less vulnerable to the power asymmetry within the therapeutic alliance. MAI technologies, especially mHealth applications, allow care seekers to
shape their own mental health, whereas care givers take on the role of mental health coaches who cater to their demands. This role, although an empowering one in theory, does not work in mental healthcare due to the vulnerability of care seekers and the need for a strong therapeutic alliance. The entrepreneurial approach would not only overstrain the abilities of most care seekers, but also ignore their basic needs. Replacing care with consumerism undermines the fundamental goals of therapy.

In an ideal scenario, artificial agents would take the role of enablers of mental healthcare. Their purpose would be to support care givers and provide personalized services, enabling care seekers to overcome access barriers and facilitating a more inclusive mental healthcare. MAI use could, for example, help to identify the health needs of underserved groups as well as the individual factors, social determinants, and institutional factors that constitute access barriers. On the level of individual treatment as well as public health, MAI technologies could enable more personalized services and enhance the quality of care. Furthermore, by performing administrative tasks, MAI technologies could free care givers from time-consuming activities that are not directly patient-related, hence allowing them to spend more time on the therapeutic alliance. This could also result in cost savings and workflow optimization. Again, all these benefits do not come with the technology itself; stakeholders have to define them as goals of technology design, implementation, and use.

Artificial agents have already taken the role of mediators in mental healthcare. As embodied artificial agents, they conduct care seeker interviews, thus supporting care givers in obtaining information. In the form of mental health apps, artificial agents form a link between care givers and care seekers by mediating the information flow and pre-structuring the care giver’s view of the care seeker through algorithmic models. The epistemic risks connected to digital positivism are obvious in this context. Furthermore, this kind of mediation could severely impact the dynamic between care givers and care seekers and with it the therapeutic alliance. The main issues here are difficulties in building trust and the reduction of human interaction, which is a crucial element of forming emotional bonds. If artificial agents are used in the mediator role, this has to be accompanied by measures to ensure that the therapeutic alliance still functions. The use of artificial agents as mediators should be limited to those purposes of data collection and processing that support care practices. Substituting these practices should not be the goal here. The level of mediation should be tailored to the individual needs and resources of the care seeker.

Looking at the unique nature of the therapeutic alliance shows that the role of artificial agents as substitutes is not desirable. There may be care contexts where, for example, unguided stand-alone applications might be the fitting tool for catering to a care seeker’s needs and resources. However, it is yet unclear to what extent this form of treatment is effective at all, since the evidence base is ambiguous. It may be that unguided stand-alone approaches could be an option to support care seekers while they wait for therapy or in other very limited contexts. However, there is no clear evidence that the role of artificial agents as substitutes will have positive clinical outcomes or outperform human care givers.
The requirements of the therapeutic alliance, which are empathy, reciprocity, and a strong emotional bond, do not favor the role of substitutes. On the contrary, there are strong reasons to define this role as a
scenario that should be avoided. If personalization and patient-centered care are the main goals for using MAI technologies in mental healthcare, this excludes substituting human interactions and relationships as a goal.
References

Abbasgholizadeh Rahimi, S., Cwintal, M., Huang, Y., Ghadiri, P., Grad, R., Poenaru, D., Gore, G., Zomahoun, H. T. V., Légaré, F., & Pluye, P. (2022). Application of artificial intelligence in shared decision making: Scoping review. JMIR Medical Informatics, 10(8), e36199. https://doi.org/10.2196/36199
Abdi, J., Al-Hindawi, A., Ng, T., & Vizcaychipi, M. P. (2018). Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open, 8(2), e018815. https://doi.org/10.1136/bmjopen-2017-018815
Aguirre, C. A., Harrigian, K., & Dredze, M. (2021). Gender and racial fairness in depression research using social media. In EACL – 16th conference of the European Chapter of the Association for Computational Linguistics, 2021. Proceedings of the conference. https://doi.org/10.48550/arXiv.2103.10550
Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7, e7702. https://doi.org/10.7717/peerj.7702
Alonso, S. G., De La Torre-Díez, I., Hamrioui, S., López-Coronado, M., Barreno, D. C., Nozaleda, L. M., & Franco, M. (2018). Data mining algorithms and techniques in mental health: A systematic review. Journal of Medical Systems, 42, 161. https://doi.org/10.1007/s10916-018-1018-2
Alqahtani, F., Al Khalifah, G., Oyebode, O., & Orji, R. (2019). Apps for mental health: An evaluation of behavior change strategies and recommendations for future development. Frontiers in Artificial Intelligence, 2, 30. https://doi.org/10.3389/frai.2019.00030
Aminololama-Shakeri, S., & López, J. E. (2018). The doctor-patient relationship with artificial intelligence. AJR. American Journal of Roentgenology, 212, 308–310.
Aquino, Y. S. J., Rogers, W. A., Braunack-Mayer, A., Frazer, H., Win, K. T., Houssami, N., Degeling, C., Semsarian, C., & Carter, S. M. (2023). Utopia versus dystopia: Professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills. International Journal of Medical Informatics, 169, 104903. https://doi.org/10.1016/j.ijmedinf.2022.104903
Archibald, M. M., & Barnard, A. (2018). Futurism in nursing: Technology, robotics and the fundamentals of care. Journal of Clinical Nursing, 27, 2473–2480.
Ardito, R. B., & Rabellino, D. (2011). Therapeutic alliance and outcome of psychotherapy: Historical excursus, measurements, and prospects for research. Frontiers in Psychology, 2, 270. https://doi.org/10.3389/fpsyg.2011.00270
Armstrong, D. (1983). The fabrication of nurse-patient relationships. Social Science & Medicine, 17, 457–460.
Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589–596. https://doi.org/10.1001/jamainternmed.2023.1838
Barrera, A., Gee, C., Wood, A., Gibson, O., Bayley, D., & Geddes, J. (2020). Introducing artificial intelligence in acute psychiatric inpatient care: Qualitative study of its use to conduct nursing observations. Evidence-Based Mental Health, 23, 34–38.
Barrett, M., Boyne, J., Brandts, J., et al. (2019). Artificial intelligence supported patient self-care in chronic heart failure: A paradigm shift from reactive to predictive, preventive and personalised care. EPMA Journal, 10, 445–464. https://doi.org/10.1007/s13167-019-00188-9
Bartneck, C., Lütge, C., Wagner, A., & Welsh, S. (2021). Responsibility and liability in the case of AI systems. In: Bartneck, C., Lütge, C., Wagner, A., & Welsh, S. (eds.). An introduction to ethics in robotics and AI. Springer, 39–44. https://doi.org/10.1007/978-3-030-51110-4_5
Bauman, Z. (1991). The social manipulation of morality: Moralizing actors, adiaphorizing action. Theory, Culture and Society, 8, 137–151. https://doi.org/10.1177/026327691008001007
Bauman, Z. (2006). Liquid fear. Polity Press.
Bauman, Z., & Lyon, D. (2013). Liquid surveillance: A conversation. Polity Press.
Beeker, T., Mills, C., Bhugra, D., Te Meerman, S., Thoma, S., Heinze, M., & Von Peter, S. (2021). Psychiatrization of society: A conceptual framework and call for transdisciplinary research. Frontiers in Psychiatry, 12.
Bergamin, J., Luigjes, J., Kiverstein, J., Bockting, C. L., & Denys, D. (2022). Defining autonomy in psychiatry. Frontiers in Psychiatry, 12, 645556. https://doi.org/10.3389/fpsyt.2021.645556
Berridge, C., Zhou, Y., Robillard, J. M., & Kaye, J. (2023). Companion robots to mitigate loneliness among older adults: Perceptions of benefit and possible deception. Frontiers in Psychology, 14, 1106633. https://doi.org/10.3389/fpsyg.2023.1106633
Berrouiguet, S., Perez-Rodriguez, M. M., Larsen, M., Baca-García, E., Courtet, P., & Oquendo, M. (2018). From eHealth to iHealth: Transition to participatory and personalized medicine in mental health. Journal of Medical Internet Research, 20(1), e2. https://doi.org/10.2196/jmir.7412
Bolsinger, J., Jaeger, M., Hoff, P., & Theodoridou, A. (2020). Challenges and opportunities in building and maintaining a good therapeutic relationship in acute psychiatric settings: A narrative review. Frontiers in Psychiatry, 10, 965. https://doi.org/10.3389/fpsyt.2019.00965
Boonstra, A., Vos, J., & Rosenberg, L. (2022). The effect of electronic health records on the medical professional identity of physicians: A systematic literature review. Procedia Computer Science, 196, 272–279.
Bordin, E. S. (1979). The generalizability of the psychoanalytic concept of the working alliance. Psychotherapy, 16, 252–260.
Brennan, P. F., & Bakken, S. (2015). Nursing needs big data and big data needs nursing. Journal of Nursing Scholarship, 47, 477–484. https://doi.org/10.1111/jnu.12159
Brown, J. E. H., & Halpern, J. (2021). AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare. SSM – Mental Health, 1, 100017. https://doi.org/10.1016/j.ssmmh.2021.100017
Buchanan, C., Howitt, M. L., Wilson, R., Booth, R. G., Risling, T., & Bamford, M. (2020). Predicted influences of artificial intelligence on the domains of nursing: Scoping review. JMIR Nursing, 3(1), e23939. https://doi.org/10.2196/23939
Buiten, M., De Streel, A., & Peitz, M. (2023). The law and economics of AI liability. Computer Law and Security Review, 48, 105794. https://doi.org/10.1016/j.clsr.2023.105794
Charon, R. (2001). Narrative medicine: Form, function, and ethics. Annals of Internal Medicine, 134, 83–87.
Charon, R. (2016a). Clinical contributions of narrative medicine. In: Charon, R., Dasgupta, S., Hermann, N., et al. (eds.). The principles and practice of narrative medicine. Oxford University Press, 292–310. https://doi.org/10.1093/med/9780199360192.003.0014
Charon, R. (2016b). Close reading: The signature method of narrative medicine. In: Charon, R., Dasgupta, S., Hermann, N., et al. (eds.). The principles and practice of narrative medicine. Oxford University Press, 157–179. https://doi.org/10.1093/med/9780199360192.003.0008
Chen, A., Wang, C., & Zhang, X. (2022). Reflection on the equitable attribution of responsibility for artificial intelligence-assisted diagnosis and treatment decisions. Intelligent Medicine, 3(2), 139–143. https://doi.org/10.1016/j.imed.2022.04.002
Clarke, A. E., Shim, J. K., Mamo, L., Fosket, J. R., & Fishman, J. R. (2003). Biomedicalization: Technoscientific transformations of health, illness, and U.S. biomedicine. American Sociological Review, 68, 161–194.
Coeckelbergh, M. (2016). Care robots and the future of ICT-mediated elderly care: A response to doom scenarios. AI & Society, 31, 455–462.
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, 26, 2051–2068.
Coghlan, S. (2022). Robots and the possibility of humanistic care. International Journal of Social Robotics, 14, 2095–2108. https://doi.org/10.1007/s12369-021-00804-7
David, L., Popa, S. L., Barsan, M., Muresan, L., Ismaiel, A., Popa, L. C., Perju-Dumbrava, L., Stanculete, M. F., & Dumitrascu, D. L. (2022). Nursing procedures for advanced dementia: Traditional techniques versus autonomous robotic applications (review). Experimental and Therapeutic Medicine, 23(2), 124. https://doi.org/10.3892/etm.2021.11047
Delaney, C. W., & Simpson, R. L. (2017). Why big data? Why nursing? In: Delaney, C., Weaver, C., Warren, J., Clancy, T., & Simpson, R. (eds.). Big data-enabled nursing (Health informatics). Springer, 3–10. https://doi.org/10.1007/978-3-319-53300-1_1
Devault, D., Artstein, R., Benn, G., et al. (2014). SimSensei Kiosk: A virtual human interviewer for healthcare decision support. In Proceedings of the 2014 international conference on autonomous agents and multi-agent systems (AAMAS ‘14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, pp. 1061–1068. Available at: https://dl.acm.org/doi/10.5555/2615731.2617415. Accessed 15 Aug 2023.
Deveugele, M., Derese, A., Van Den Brink-Muinen, A., Bensing, J., & De Maeseneer, J. (2002). Consultation length in general practice: Cross sectional study in six European countries. BMJ, 325(7362), 472. https://doi.org/10.1136/bmj.325.7362.472
Diaz Milian, R., & Bhattacharyya, A. (2023). Artificial intelligence paternalism. Journal of Medical Ethics, 49, 183–184.
Dillard-Wright, J. (2019). Electronic health record as a panopticon: A disciplinary apparatus in nursing practice. Nursing Philosophy, 20(2), e12239. https://doi.org/10.1111/nup.12239
Ellefsen, B., Kim, H. S., & Ja Han, K. (2007). Nursing gaze as framework for nursing practice: A study from acute care settings in Korea, Norway and the USA. Scandinavian Journal of Caring Sciences, 21, 98–105.
Emanuel, E. J., & Emanuel, L. L. (1992). Four models of the physician-patient relationship. JAMA, 267, 2221–2226.
Erbe, D., Eichert, H. C., Riper, H., & Ebert, D. D. (2017). Blending face-to-face and internet-based interventions for the treatment of mental disorders in adults: Systematic review. Journal of Medical Internet Research, 19(9), e306. https://doi.org/10.2196/jmir.6588
Feng, S., Mäntymäki, M., Dhir, A., & Salmela, H. (2021). How self-tracking and the quantified self promote health and well-being: Systematic review. Journal of Medical Internet Research, 23(9), e25171. https://doi.org/10.2196/25171
Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5), e13216. https://doi.org/10.2196/13216
Fuller, R., & Hansen, A. (2019). Disruption ahead: Navigating and leading the future of nursing. Nursing Administration Quarterly, 43(3), 212–221. https://doi.org/10.1097/NAQ.0000000000000354
García-Gutiérrez, M. S., Navarrete, F., Sala, F., Gasparyan, A., Austrich-Olivares, A., & Manzanares, J. (2020). Biomarkers in psychiatry: Concept, definition, types and relevance to the clinical reality. Frontiers in Psychiatry, 11, 432. https://doi.org/10.3389/fpsyt.2020.00432
Greg, I., Ana Luisa, N., Hajira, D.-M., Ai, O., Hiroko, T., Anistasiya, V., & John, H. (2017). International variations in primary care physician consultation time: A systematic review of 67 countries. BMJ Open, 7(10), e017902. https://doi.org/10.1136/bmjopen-2017-017902
Grobbel, C., Van Wynsberghe, A., Davis, R., & Poly-Droulard, L. (2019). Designing nursing care practices complemented by robots: Ethical implications and application of caring frameworks. International Journal of Human Caring, 23, 132–140.
Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2015). Developing automated deceptions and the impact on trust. Philosophy and Technology, 28, 91–105.
Gunkel, D. J. (2018). Robot rights. MIT Press.
Gunkel, D. J. (2020). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology, 22, 307–320.
Hannah Louise, B., Katie Jane, E., Rhona, W., Serge, T., & Ray, B. J. (2019). Companion robots for older people: Importance of user-centred design demonstrated through observations and focus groups comparing preferences of older people and roboticists in South West England. BMJ Open, 9(9), e032468. https://doi.org/10.1136/bmjopen-2019-032468
Hassan, N., Slight, R. D., Bimpong, K., Weiand, D., Vellinga, A., Morgan, G., & Slight, S. P. (2021). Clinicians’ and patients’ perceptions of the use of artificial intelligence decision aids to inform shared decision making: A systematic review. Lancet, 398, S80. https://doi.org/10.1016/S0140-6736(21)02623-4
Häyrinen, K., Saranto, K., & Nykänen, P. (2008). Definition, structure, content, use and impacts of electronic health records: A review of the research literature. International Journal of Medical Informatics, 77, 291–304.
Higgins, O., Short, B. L., Chalup, S. K., & Wilson, R. L. (2023). Artificial intelligence (AI) and machine learning (ML) based decision support systems in mental health: An integrative review. International Journal of Mental Health Nursing, 32(4), 966–978. https://doi.org/10.1111/inm.13114
Hindocha, S., & Badea, C. (2022). Moral exemplars for the virtuous machine: The clinician’s role in ethical artificial intelligence for healthcare. AI and Ethics, 2, 167–175.
Hogan, A. J. (2019). Social and medical models of disability and mental health: Evolution and renewal. CMAJ, 191(1), E16–E18. https://doi.org/10.1503/cmaj.181008
Hojat, M., Maio, V., Pohl, C. A., & Gonnella, J. S. (2023). Clinical empathy: Definition, measurement, correlates, group differences, erosion, enhancement, and healthcare outcomes. Discover Health Systems, 2, 8. https://doi.org/10.1007/s44250-023-00020-2
Holmes, D., & Murray, S. J. (2011). Civilizing the ‘Barbarian’: A critical analysis of behaviour modification programmes in forensic psychiatry settings. Journal of Nursing Management, 19, 293–301.
Holtz, B., Nelson, V., & Poropatich, R. K. (2022). Artificial intelligence in health: Enhancing a return to patient-centered communication. Telemedicine Journal and E-Health, 29(6), 795–797. https://doi.org/10.1089/tmj.2022.0413
Huda, A. S. (2021). The medical model and its application in mental health. International Review of Psychiatry, 33, 463–470.
Hung, L., Liu, C., Woldum, E., Au-Yeung, A., Berndt, A., Wallsworth, C., Horne, N., Gregorio, M., Mann, J., & Chaudhury, H. (2019). The benefits of and barriers to using a social robot PARO in care settings: A scoping review. BMC Geriatrics, 19, 232. https://doi.org/10.1186/s12877-019-1244-6
Hunt, L. M., Bell, H. S., Baker, A. M., & Howard, H. A. (2017). Electronic health records and the disappearing patient. Medical Anthropology Quarterly, 31, 403–421.
Hurwitz, B., & Vass, A. (2002). What’s a good doctor, and how can you make one? BMJ, 325(7366), 667–668. https://doi.org/10.1136/bmj.325.7366.667
International Council of Nurses (ICN). (2022). The global mental health nursing workforce: Time to prioritize and invest in mental health and wellbeing. Available at: https://www.icn.ch/sites/default/files/inline-files/ICN_Mental_Health_Workforce_report_EN_web.pdf. Accessed 14 Aug 2023.
Jane, J. S., Oltmanns, T. F., South, S. C., & Turkheimer, E. (2007). Gender bias in diagnostic criteria for personality disorders: An item response theory analysis. Journal of Abnormal Psychology, 116, 166–175.
Jayakumar, P., Moore, M. G., Furlough, K. A., Uhler, L. M., Andrawis, J. P., Koenig, K. M., Aksan, N., Rathouz, P. J., & Bozic, K. J. (2021). Comparison of an artificial intelligence–enabled patient decision aid vs educational material on decision quality, shared decision-making, patient experience, and functional outcomes in adults with knee osteoarthritis: A randomized clinical trial. JAMA Network Open, 4(2), e2037107. https://doi.org/10.1001/jamanetworkopen.2020.37107
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8, 195–204.
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10, 123–133.
Joshi, S., & Šabanović, S. (2019). Robots for inter-generational interactions: Implications for nonfamilial community settings. In 14th ACM/IEEE international conference on human-robot interaction (HRI), Daegu, Korea (South) (pp. 478–486). https://doi.org/10.1109/HRI.2019.8673167
Kaminski, M. E., Rueben, M., Smart, W. D., & Grimm, C. (2017). Averting robot eyes. Md L Rev, 76, 983. U of Colorado law legal studies research paper no. 17–23. Available at SSRN: https://ssrn.com/abstract=3002576. Accessed 14 Aug 2023.
Kazzazi, F. (2021). The automation of doctors and machines: A classification for AI in medicine (ADAM framework). Future Healthcare Journal, 8(2), e257–e262. https://doi.org/10.7861/fhj.2020-0189
Kerasidou, A. (2020). Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bulletin of the World Health Organization, 98, 245–250.
Kerst, A., Zielasek, J., & Gaebel, W. (2020). Smartphone applications for depression: A systematic literature review and a survey of health care professionals’ attitudes towards their use in clinical practice. European Archives of Psychiatry and Clinical Neuroscience, 270, 139–152.
Khan, A. I., Abuzainah, B., Gutlapalli, S. D., Chaudhuri, D., Khan, K. I., Al Shouli, R., Allakky, A., Ferguson, A. A., & Hamid, P. (2023). Effect of major depressive disorder on stroke risk and mortality: A systematic review. Cureus, 15(6), e40475. https://doi.org/10.7759/cureus
King, B. F., Jr. (2018). Artificial intelligence and radiology: What will the future hold? Journal of the American College of Radiology, 15, 501–503.
Kuziemsky, C., Maeder, A. J., John, O., Gogia, S. B., Basu, A., Meher, S., & Ito, M. (2019). Role of artificial intelligence within the telehealth domain. Yearbook of Medical Informatics, 28, 35–40.
Langer, A., Feingold-Polak, R., Mueller, O., Kellmeyer, P., & Levy-Tzedek, S. (2019). Trust in socially assistive robots: Considerations for use in rehabilitation. Neuroscience and Biobehavioral Reviews, 104, 231–239.
Lee, E. E., Torous, J., De Choudhury, M., Depp, C. A., Graham, S. A., Kim, H.-C., Paulus, M. P., Krystal, J. H., & Jeste, D. V. (2021). Artificial intelligence for mental health care: Clinical applications, barriers, facilitators, and artificial wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 6, 856–864.
Leong, B., & Selinger, E. (2019). Robot eyes wide shut: Understanding dishonest anthropomorphism. In Proceedings of the association for computing machinery’s conference on fairness, accountability, and transparency (pp. 299–308). https://doi.org/10.2139/ssrn.3762223
Levine, T. R. (2014). Truth-default theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33, 378–392.
Liaschenko, J. (1994). The moral geography of home care. ANS. Advances in Nursing Science, 17, 16–26.
Lipschitz, J., Miller, C. J., Hogan, T. P., Burdick, K. E., Lippin-Foster, R., Simon, S. R., & Burgess, J. (2019). Adoption of mobile apps for depression and anxiety: Cross-sectional survey study on patient interest and barriers to engagement. JMIR Mental Health, 6(1), e11334. https://doi.org/10.2196/11334
Liu, X., Keane, P. A., & Denniston, A. K. (2018). Time to regenerate: The doctor in the age of artificial intelligence. Journal of the Royal Society of Medicine, 111, 113–116.
Lorenzini, G., Arbelaez Ossa, L., Shaw, D. M., & Elger, B. S. (2023). Artificial intelligence and the doctor–patient relationship: Expanding the paradigm of shared decision making. Bioethics, 37, 424–429.
Lupton, D. (2013). The digitally engaged patient: Self-monitoring and self-care in the digital health era. Social Theory and Health, 11, 256–270.
Lupton, D. (2016). The quantified self. A sociology of self-tracking. Polity Press.
Lupton, D. (2017). Self-tracking, health and medicine. Health Sociology Review, 26, 1–5. https://doi.org/10.1080/14461242.2016.1228149
Luxton, D. D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial Intelligence in Medicine, 62, 1–10. https://doi.org/10.1016/j.artmed.2014.06.004
Luxton, D. D., Anderson, S. L., & Anderson, M. (2016). Ethical issues and artificial intelligence technologies in behavioral and mental health care. In D. D. Luxton (Ed.), Artificial intelligence in behavioral and mental health care. Academic, 255–276. https://doi.org/10.1016/B978-0-12-420248-1.00011-8
Manalili, M. A. R., Pearson, A., Sulik, J., Creechan, L., Elsherif, M., Murkumbi, I., Azevedo, F., Bonnen, K. L., Kim, J. S., Kording, K., Lee, J. J., Obscura, M., Kapp, S. K., Röer, J. P., & Morstead, T. (2023). From puzzle to progress: How engaging with neurodiversity can improve cognitive science. Cognitive Science, 47(2), e13255. https://doi.org/10.1111/cogs.13255
Martinez-Martin, N. (2021). Minding the AI: Ethical challenges and practice for AI mental health care tools. In F. Jotterand & M. Ienca (Eds.), Artificial intelligence in brain and mental health: Philosophical, ethical & policy issues. Springer, 111–125. https://doi.org/10.1007/978-3-030-74188-4_8
Maslej, M. M., Kloiber, S., Ghassemi, M., Yu, J., & Hill, S. L. (2023). Out with AI, in with the psychiatrist: A preference for human-derived clinical decision support in depression care. Translational Psychiatry, 13(1), 210. https://doi.org/10.1038/s41398-023-02509-z
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.
Matthias, A. (2015). Robot lies in health care: When is deception morally permissible? Kennedy Institute of Ethics Journal, 25, 169–192. https://doi.org/10.1353/ken.2015.0007
May, C. (1992). Nursing work, nurses’ knowledge, and the subjectification of the patient. Sociology of Health & Illness, 14, 472–487.
Mcdougall, R. J. (2019). Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics, 45, 156–160. https://doi.org/10.1136/medethics-2018-10511
Mcgrow, K. (2019). Artificial intelligence: Essentials for nursing. Nursing, 49, 46–49.
Mcparlin, Z., Cerritelli, F., Friston, K. J., & Esteves, J. E. (2022). Therapeutic alliance as active inference: The role of therapeutic touch and synchrony. Frontiers in Psychology, 13, 783694. https://doi.org/10.3389/fpsyg.2022.783694
Mittelstadt, B. (2021). The impact of artificial intelligence on the doctor-patient relationship. Commissioned by the Steering Committee for Human Rights in the Fields of Biomedicine and Health (CDBIO), Council of Europe 2021. Available at https://www.coe.int/en/web/bioethics/report-impact-of-ai-on-the-doctor-patient-relationship. Accessed 14 Aug 2023.
Montemayor, C., Halpern, J., & Fairweather, A. (2022). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare. AI & Society, 37, 1353–1359.
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. Public Affairs.
Moyle, W., Bramble, M., Jones, C. J., & Murfield, J. E. (2019). “She had a smile on her face as wide as the great Australian bite”: A qualitative examination of family perceptions of a therapeutic robot and a plush toy. Gerontologist, 59, 177–185.
Mulvenna, M. D., Bond, R., Delaney, J., Dawoodbhoy, F. M., Boger, J., Potts, C., & Turkington, R. (2021). Ethical issues in democratizing digital phenotypes and machine learning in the next generation of digital health technologies. Philosophy and Technology, 34, 1945–1960. https://doi.org/10.1007/s13347-021-00445-8
Nortvedt, P. (1998). Sensitive judgement: An inquiry into the foundations of nursing ethics. Nursing Ethics, 5, 385–392.
O’Connor, K., Muller Neff, D., & Pitman, S. (2018). Burnout in mental health professionals: A systematic review and meta-analysis of prevalence and determinants. European Psychiatry, 53, 74–99. https://doi.org/10.1111/inm.12606
O’Donnabhain, R., & Friedman, N. D. (2018). What makes a good doctor? Internal Medicine Journal, 48, 879–882.
Palmer, A., & Schwan, D. (2023). More process, less principles: The ethics of deploying AI and robotics in medicine. Cambridge Quarterly of Healthcare Ethics, 24, 1–14. https://doi.org/10.1017/S0963180123000087
Pandey, A. K., & Gelin, R. (2018). A mass-produced sociable humanoid robot: Pepper: The first machine of its kind. IEEE Robotics and Automation Magazine, 25(3), 40–48. https://doi.org/10.1109/Mra.2018.2833157
Patil, T., & Giordano, J. (2010). On the ontological assumptions of the medical model of psychiatry: Philosophical considerations and pragmatic tasks. Philosophy, Ethics, and Humanities in Medicine, 5, 3. https://doi.org/10.1186/1747-5341-5-3
Pearce, S., & Pickard, H. (2009). The moral content of psychiatric treatment. BJPsych, 195, 281–282.
Pepito, J. A., & Locsin, R. (2019). Can nurses remain relevant in a technologically advanced future? International Journal of Nursing Science, 6, 106–110.
Peplau, H. E. (1988). Roles in nursing. In H. E. Peplau (Ed.), Interpersonal relations in nursing: A conceptual frame of reference for psychodynamic nursing. Macmillan Education UK, 43–70. https://doi.org/10.1007/978-1-349-10109-2_3
Petrakaki, D., Hilberg, E., & Waring, J. (2021). The cultivation of digital health citizenship. Social Science & Medicine, 270, 113675. https://doi.org/10.1016/j.socscimed.2021.113675
Pihlaja, S., Stenberg, J. H., Joutsenniemi, K., Mehik, H., Ritola, V., & Joffe, G. (2018). Therapeutic alliance in guided internet therapy programs for depression and anxiety disorders – A systematic review. Internet Interventions, 11, 1–10.
Radden, J. (2002). Notes towards a professional ethics for psychiatry. The Australian and New Zealand Journal of Psychiatry, 36, 52–59. https://doi.org/10.1046/j.1440-1614.2002.00989.x
Robert, N. (2019). How artificial intelligence is changing nursing. Nursing Management, 50, 30–39.
Rubeis, G. (2020a). The disruptive power of artificial intelligence. Ethical aspects of gerontechnology in elderly care. Archives of Gerontology and Geriatrics, 91, 104186. https://doi.org/10.1016/j.archger.2020.104186
Rubeis, G. (2020b). Strange bedfellows. The unlikely alliance between artificial intelligence and narrative medicine. Dilemata, 32, 49–58.
Rubeis, G. (2021a). E-mental health applications for depression: An evidence-based ethical analysis. European Archives of Psychiatry and Clinical Neuroscience, 271, 549–555.
Rubeis, G. (2021b). Guardians of humanity? The challenges of nursing practice in the digital age. Nursing Philosophy, 22(2), e12331. https://doi.org/10.1111/nup.12331
Rubeis, G. (2023). Adiaphorisation and the digital nursing gaze: Liquid surveillance in long-term care. Nursing Philosophy, 24(1), e12388. https://doi.org/10.1111/nup.12388
Rubeis, G., Dubbala, K., & Metzler, I. (2022). “Democratizing” artificial intelligence in medicine and healthcare: Mapping the uses of an elusive term. Frontiers in Genetics, 13, 902542. https://doi.org/10.3389/fgene.2022.902542
Šabanović, S., Chang, W.-L., Bennett, C. C., Piatt, J. A., & Hakken, D. (2015). A robot of my own: Participatory design of socially assistive robots for independently living older adults diagnosed with depression. In J. Zhou & G. Salvendy (Eds.), Human aspects of IT for the aged population (Design for aging) (Vol. 9193). Springer, 104–114. https://doi.org/10.1007/978-3-319-20892-3_11
Sacchi, L., Rubrichi, S., Rognoni, C., Panzarasa, S., Parimbelli, E., Mazzanti, A., Napolitano, C., Priori, S. G., & Quaglini, S. (2015). From decision to shared-decision: Introducing patients’ preferences into clinical decision analysis. Artificial Intelligence in Medicine, 65, 19–28.
Sætra, H. S. (2021). Social robot deception and the culture of trust. Paladyn, 12(1), 276–286. https://doi.org/10.1515/pjbr-2021-0021
Sandry, E. (2015). Re-evaluating the form and communication of social robots. International Journal of Social Robotics, 7, 335–346.
Sapci, A. H., & Sapci, H. A. (2019). Innovative assisted living tools, remote monitoring technologies, artificial intelligence-driven solutions, and robotic systems for aging societies: Systematic review. JMIR Aging, 2(2), e15429. https://doi.org/10.2196/15429
Sauerbrei, A., Kerasidou, A., Lucivero, F., & Hallowell, N. (2023). The impact of artificial intelligence on the person-centred, doctor-patient relationship: Some problems and solutions. BMC Medical Informatics and Decision Making, 23, 73. https://doi.org/10.1186/s12911-023-02162-y
Scheutz, M. (2012). The inherent dangers of unidirectional emotional bonds between humans and social robots. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics. MIT Press, 205–221.
Schüll, N. D. (2016). Data for life: Wearable technology and the design of self-care. BioSocieties, 11, 317–333.
Scott, I. A., Carter, S. M., & Coiera, E. (2021). Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Information, 28, e100450. https://doi.org/10.1136/bmjhci-2021-100450
Sedlakova, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: A new therapeutic tool or agent? AJOB, 23, 4–13.
Sensmeier, J. (2015). Big data and the future of nursing knowledge. Nursing Management, 46(4), 22–27. https://doi.org/10.1097/01.NUMA.0000462365.53035.7d
Shan, Y., Ji, M., Xie, W., Lam, K.-Y., & Chow, C.-Y. (2022). Public trust in artificial intelligence applications in mental health care: Topic modeling analysis. JMIR Human Factors, 9(4), e38799. https://doi.org/10.2196/38799
Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14, 27–40.
Sharkey, A., & Sharkey, N. (2021). We need to talk about deception in social robotics! Ethics and Information Technology, 23, 309–316. https://doi.org/10.1007/s10676-020-09573-9
Slemon, A. (2018). Embracing the wild profusion: A Foucauldian analysis of the impact of healthcare standardization on nursing knowledge and practice. Nursing Philosophy, 19(4), e12215. https://doi.org/10.1111/nup.12215
Smith, H. (2021). Clinical AI: Opacity, accountability, responsibility and liability. AI & Society, 36, 535–545.
Snowden, L. R. (2003). Bias in mental health assessment and intervention: Theory and evidence. American Journal of Public Health, 93, 239–243.
Solans Noguero, D., Ramírez-Cifuentes, D., Ríssola, E. A., & Freire, A. (2023). Gender bias when using artificial intelligence to assess anorexia nervosa on social media: Data-driven study. Journal of Medical Internet Research, 25, e45184. https://doi.org/10.2196/45184
Sorell, T., & Draper, H. (2017). Second thoughts about privacy, safety and deception. Connection Science, 29, 217–222.
Sparrow, R. (2016). Robots in aged care: A dystopian future? AI & Society, 31, 445–454.
Sparrow, R., & Hatherley, J. (2020). High hopes for “deep medicine”? AI, economics, and the future of care. The Hastings Center Report, 50, 14–17. https://doi.org/10.1002/hast.1079
Sparrow, R., & Sparrow, L. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16, 141–161.
Specker, J., Focquaert, F., Sterckx, S., & Schermer, M. H. N. (2020). Forensic practitioners’ views on stimulating moral development and moral growth in forensic psychiatric care. Neuroethics, 13, 73–85.
Srinivasan, R., & San Miguel González, B. (2022). The role of empathy for artificial intelligence accountability. Journal of Responsible Technology, 9, 100021. https://doi.org/10.1016/j.jrt.2021.100021
Stark, L., & Hoey, J. (2021). The ethics of emotion in artificial intelligence systems. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 782–793). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445939
Steiner-Hofbauer, V., Schrank, B., & Holzinger, A. (2018). What is a good doctor? Wiener Medizinische Wochenschrift (1946), 168, 398–405.
Steinhubl, S. R., & Topol, E. J. (2018). Digital medicine, on its way to being just plain medicine. NPJ Digit Med, 1, 20175. https://doi.org/10.1038/s41746-017-0005-1
Straw, I., & Callison-Burch, C. (2020). Artificial intelligence in mental health and the biases of language based models. PLoS One, 15(12), e0240376. https://doi.org/10.1371/journal.pone.0240376
Sweeney, P. (2023). Trusting social robots. AI Ethics, 3, 419–426.
Szasz, T. S. (1960). The myth of mental illness. American Psychologist, 15, 113–118.
Tai, A. M. Y., Albuquerque, A., Carmona, N. E., Subramanieapillai, M., Cha, D. S., Sheko, M., Lee, Y., Mansur, R., & Mcintyre, R. S. (2019). Machine learning and big data: Implications for disease modeling and therapeutic discovery in psychiatry. Artificial Intelligence in Medicine, 99, 101704. https://doi.org/10.1016/j.artmed.2019.101704
Thaler, R., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
Thompson, N. (2018). Mental health and well-being. Alternatives to the medical model. Routledge.
Thornton, T., & Lucas, P. (2011). On the very idea of a recovery model for mental health. Journal of Medical Ethics, 37, 24–28.
Timmons, A. C., Duong, J. B., Simo Fiallo, N., Lee, T., Vo, H. P. Q., Ahle, M. W., Comer, J. S., Brewer, L. C., Frazier, S. L., & Chaspari, T. (2022). A call to action on assessing and mitigating bias in artificial intelligence applications for mental health. Perspectives on Psychological Science, 18, 1062–1096. https://doi.org/10.1177/17456916221134490
Topol, E. (2015). The patient will see you now: The future of medicine is in your hands. Basic Books.
Topol, E. (2019). Deep medicine: How artificial intelligence can make healthcare human again. Basic Books.
Triberti, S., Durosini, I., & Pravettoni, G. (2020). A “third wheel” effect in health decision making involving artificial entities: A psychological perspective. Frontiers in Public Health, 8, 117. https://doi.org/10.3389/fpubh.2020.00117
Vaidyam, A. N., Wisniewski, H., Halamka, J. D., Kashavan, M. S., & Torous, J. B. (2019). Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Canadian Journal of Psychiatry, 64(7), 456–464. https://doi.org/10.1177/0706743719828977
Vainauskienė, V., & Vaitkienė, R. (2022). Foresight study on online health community: The perspective of knowledge empowerment for patients with chronic diseases. The International Journal of Health Planning and Management, 37(4), 2354–2375. https://doi.org/10.1002/hpm.3477
Vallverdú, J., & Casacuberta, D. (2015). Ethical and technical aspects of emotions to create empathy in medical machines. In S. van Rysewyk & M. Pontier (Eds.), Machine medical ethics. Intelligent systems, control and automation: Science and engineering (Vol. 74). Springer, 341–362. https://doi.org/10.1007/978-3-319-08108-3_20
Van Wynsberghe, A. (2014). To delegate or not to delegate: Care robots, moral agency and moral responsibility. Paper presented at the 50th anniversary AISB convention 2014, London, United Kingdom. Available at: http://doc.gold.ac.uk/aisb50/AISB50-S17/AISB50-S17vanWynsberghe-Paper.pdf. Accessed 9 Aug 2023.
Van Wynsberghe, A. (2022). Social robots and the risks to reciprocity. AI & Society, 37, 479–485.
Vandemeulebroucke, T., Dierckx De Casterlé, B., & Gastmans, C. (2018). The use of care robots in aged care: A systematic review of argument-based ethics literature. Archives of Gerontology and Geriatrics, 74, 15–25.
Verdicchio, M., & Perin, A. (2022). When doctors and AI interact: On human responsibility for artificial risks. Philosophy and Technology, 35(1), 11. https://doi.org/10.1007/s13347-022-00506-6
Von Humboldt, S., Mendoza-Ruvalcaba, N. M., Arias-Merino, E. D., Costa, A., Cabras, E., Low, G., & Leal, I. (2020). Smart technology and the meaning in life of older adults during the Covid-19 public health emergency period: A cross-cultural qualitative study. International Review of Psychiatry, 32, 713–722. https://doi.org/10.1080/09540261.2020.1810643
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.
Watson, D., Womack, J., & Papadakos, S. (2020). Rise of the robots: Is artificial intelligence a friend or foe to nursing practice? Critical Care Nursing Quarterly, 43(3), 303–311. https://doi.org/10.1097/CNQ.0000000000000315
Weissglass, D. E. (2022). Contextual bias, the democratization of healthcare, and medical artificial intelligence in low- and middle-income countries. Bioethics, 36, 201–209.
Westra, B. L., Delaney, C. W., Konicek, D., & Keenan, G. (2008). Nursing standards to support the electronic health record. Nursing Outlook, 56(5), 258–266.e1. https://doi.org/10.1016/j.outlook.2008.06.005
Whitby, B. (2015). Automating medicine the ethical way. In S. van Rysewyk & M. Pontier (Eds.), Machine medical ethics. Intelligent systems, control and automation: Science and engineering (Vol. 74). Springer, 223–232. https://doi.org/10.1007/978-3-319-08108-3_14
Wilson, R. L., Higgins, O., Atem, J., Donaldson, A. E., Gildberg, F. A., Hooper, M., Hopwood, M., Rosado, S., Solomon, B., Ward, K., & Welsh, B. (2023). Artificial intelligence: An eye cast towards the mental health nursing horizon. International Journal of Mental Health Nursing (IJMHN), 32, 938–944.
World Health Organization (WHO). (2017). Depression and other common mental disorders: Global health estimates. World Health Organization. Available at: https://apps.who.int/iris/handle/10665/254610. Accessed 14 Aug 2023.
World Health Organization (WHO). (2022). World mental health report: Transforming mental health for all. Available at: https://www.who.int/publications/i/item/9789240049338. Accessed 14 Aug 2023.
Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20, 118–136.
Zhang, T., Schoene, A. M., Ji, S., & Ananiadou, S. (2022). Natural language processing applied to mental illness detection: A narrative review. npj Digital Medicine, 5(1), 46. https://doi.org/10.1038/s41746-022-00589-7
Zhang, W., Yang, C., Cao, Z., Li, Z., Zhuo, L., Tan, Y., He, Y., Yao, L., Zhou, Q., Gong, Q., Sweeney, J. A., Shi, F., & Lui, S. (2023). Detecting individuals with severe mental illness using artificial intelligence applied to magnetic resonance imaging. eBioMedicine, 90, 104541. https://doi.org/10.1016/j.ebiom.2023.104541
Chapter 7
Environments
Abstract In this chapter, I analyze the impact of MAI on environments, i.e. the spaces we work, live, and act in. My basic assumption is that MAI will transform both the material aspects (technical infrastructure and hardware) and the immaterial aspects (networks of practices and relationships, behaviors and rules) of these spaces. The background for my analysis is the concept of the learning healthcare system (LHS), which makes meaningful use of health data and links the individual to the public health sphere. The LHS approach, with its ubiquitous and permanent data collection, exchange, and processing, will substantially transform three types of environments: work environments, personal environments, and urban environments. In many ways, this chapter has the character of an outlook, since some concepts I discuss, such as smart cities, have not been realized yet.

Keywords Ecosystems of care · Digital literacy · Internet of Things (IoT) · Learning healthcare system (LHS) · Mobile health (mHealth) · Privacy · Smart city · Workflow optimization

MAI technologies are not limited to isolated tools wielded by experts in medical facilities. As we have seen in multiple contexts, the great potential of MAI is that it expands the possibilities of data collection and analysis beyond institutional confines and enters the everyday life of individuals. This is an opportunity for more personalized healthcare and may also enable healthcare professionals and policy makers to tackle issues of population health more effectively and efficiently. The great vision of MAI technologies and the big data approach is to transform healthcare into a more personalized, more efficient enterprise, to improve its overall outcomes, and to increase its quality by making the best possible use of data on a large scale. In an ideal scenario, health data use links the individual and the public health sphere. All of these goals require a specific technical infrastructure that enables the collection, transfer, and processing of large amounts of data. Since MAI technologies require big data to realize their full potential, we have to rebuild the healthcare domain so it can provide the infrastructure for generating, exchanging, and analyzing large amounts of health data. Hence, MAI will not only impact the practices of individual healthcare professionals, the patient journey, and the
various relationships involved, but healthcare as a system. The borders between personal and public health must become permeable in order to harness the full potential of MAI.

The paradigm for this transformation process is the learning healthcare system (LHS), which was introduced one and a half decades ago (Friedman et al., 2010; Slutsky, 2007). The basic idea behind the LHS is to make meaningful use of the increasing amount of digital medical information. Meaningful use means disseminating new scientific insights from biomedical research in order to enable personalized healthcare as well as improve public health services (Friedman et al., 2015). This requires connecting community medical practices, state public health agencies, academic health centers, health information technology organizations, research institutes and industry, and federal agencies for biomedical research (Friedman et al., 2010). The aim is to link biomedical and translational research with clinical as well as public health uses in order to assess and improve the quality of healthcare and monitor public health in real time.

An LHS requires three crucial elements (Friedman et al., 2010). The first is developing technologies and integrating them into the clinical workflow: first and foremost the EHR, but also technologies for datafying individual health (e.g. sensors), as well as data quality metrics and standardization for ensuring the validity and accuracy of data (Friedman et al., 2015). Another step is to implement standards for securing the mobility of health data and to establish quality measures. Standardization ensures that data can be transferred, exchanged, and used at every institution and enables the flow from where data was created to where it can be used. A further requirement is to elaborate policies that define which data uses can function automatically and which require approval. This also implies rules and protocols for patient consent as well as privacy and security frameworks, which are especially important for building public trust in the LHS and require adapting the organizational structure of healthcare institutions. Policies also have to define incentives for making investments and engaging with the LHS. Such incentives for public and private investments could increase the value of healthcare spending and promote health information technology (Slutsky, 2007).

The LHS brings multiple stakeholders together in order to drive innovation across the healthcare ecosystem, generate value, and improve healthcare services on a personal as well as a public level. The challenges with this approach are not merely of a technical or organizational nature, for example ensuring the validity and accuracy of data or establishing communication and data networks across institutions. Since the LHS can be understood as a socio-technical ecosystem, there are also socio-technical challenges (Friedman et al., 2015): Protecting individual and institutional privacy and the integrity of data and knowledge requires adequate measures. A further requirement is methods to measure confidence, trust, and the trustworthiness of technologies. Improving the LHS over time and ensuring that it is economically sustainable and governable implies identifying metrics for measuring health outcomes and cost-efficiency, as well as social and behavioral impacts. A crucial aspect is to map the patient experience in order to identify service delivery bottlenecks and pain points as well as opportunities for improvement, e.g. a more effective deployment of resources (Joseph et al., 2023).
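Ensuring the validity and accuracy of data is a recurring requirement in these accounts. As a minimal illustration, the sketch below runs automated completeness and plausibility checks on an incoming record. It is an illustration only: the field names, plausibility ranges, and record format are assumptions made for the example, not an established standard.

```python
# A minimal sketch of automated data-quality checks for an LHS pipeline.
# Field names, ranges, and record format are illustrative assumptions.

REQUIRED_FIELDS = ["patient_id", "timestamp", "heart_rate", "systolic_bp"]

# Illustrative plausibility ranges used for validity checks.
PLAUSIBLE_RANGES = {
    "heart_rate": (20, 250),   # beats per minute
    "systolic_bp": (50, 260),  # mmHg
}

def completeness(record: dict) -> float:
    """Share of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS if record.get(f) not in (None, ""))
    return present / len(REQUIRED_FIELDS)

def validity_issues(record: dict) -> list:
    """List fields whose values fall outside the plausible ranges."""
    issues = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            issues.append(f"{field}={value} outside [{low}, {high}]")
    return issues

record = {"patient_id": "p-001", "timestamp": "2024-01-01T10:00",
          "heart_rate": 300, "systolic_bp": 120}
print(completeness(record))     # 1.0: all required fields are present
print(validity_issues(record))  # flags the implausible heart rate
```

In a real LHS, such checks would run automatically wherever data crosses institutional borders, so that only valid, standard-conformant records enter shared use.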
Despite the enthusiasm about the potential of the LHS, this paradigm is still a mere concept. The COVID-19 pandemic highlighted the existing gaps in integrating scientific evidence into clinical practice, translating insights from bench to bedside in a fast and coherent manner, and linking biomedical science, public health, and personal health (Reid & Greene, 2023). We are still in the building phase when it comes to the LHS.

Building this socio-technical ecosystem will substantially transform different kinds of environments connected to healthcare and its stakeholders. By environment, I refer to the material and immaterial aspects of the spaces in which MAI applications are used. Material aspects refer to the technical infrastructure and hardware that make up the fabric of a space, e.g. a hospital. Immaterial aspects are the networks of practices and relationships, behaviors and rules that shape how we act in a space. Both are inextricably linked and mutually shape each other to form an environment. An LHS approach will substantially transform three types of environments: work environments, such as hospitals, GPs’ offices, or long-term care facilities; personal environments, such as the home; and urban environments in terms of the smart city.

Healthcare institutions as work environments will have to adapt to the technical requirements in terms of infrastructure, for example by providing the necessary equipment. This will also reshape the workflow of healthcare professionals, who will have to adapt to the MAI infrastructure as well. Most MAI technologies are not merely passive tools, but shape and structure the practices and relationships of healthcare professionals. Hence, a transformation of practices linked to the operational logic and technical requirements of MAI will occur, whereby material and immaterial aspects jointly shape the work environment. The fact that MAI technologies increasingly enter the private realm will transform the personal environments of individuals. mHealth technologies, with their omnipresence, may restructure individuals’ daily routines. IoT technologies may reshape the home environment by introducing sensor and monitoring technologies or robotic systems. In both regards, the interaction of material and immaterial factors reshapes the personal environment. As an integral element of smart cities, the combination of mHealth and IoT in public places may also transform urban environments. The result could be an improvement in coordinating healthcare delivery across the individual health and public health sectors. In the urban context, too, the material part of MAI, especially surveillance technologies, will affect social practices and relations as immaterial aspects, and vice versa.

In this chapter, I analyze the impact of MAI under the paradigm of the LHS on these three environments. This chapter has more the character of an outlook on possible developments, since the environments in question are only on the verge of being transformed. It is therefore significantly shorter than Chaps. 5 and 6. However, it is important to include these possible scenarios in an ethical analysis, since efforts to implement the LHS are increasing. Hence, a preemptive reflection on possible positive as well as negative outcomes, and on how to enable or avoid them, is necessary.
This chapter can also be seen as a culmination of what has been discussed before. It aims to show how epistemic practices and relationships in a MAI setting may transform the various types of environments we live in, act in, and encounter each other in. The critical perspective, which contextualizes this transformation within the broader social context and considers social determinants as well as power asymmetries, again serves as the epistemic lens of the analysis. This perspective allows us to better understand how the transformation of work environments, personal environments, and urban environments affects professional practices, autonomy, privacy, and social relationships.
7.1 Work Environments
Reducing the workload of healthcare professionals is one of the main purposes of MAI technologies. More efficient data management could free healthcare professionals from time-consuming administrative tasks. The administrative work of healthcare professionals can be divided into patient administration and office work (Apaydin, 2020). Patient administration includes clinical as well as administrative tasks such as charting, scheduling appointments, and handling insurance issues. Office work pertains to purely administrative, non-patient care tasks like business and staff management, reordering supplies, and communication with providers. Studies have shown that these tasks take up a significant amount of a health professional’s time. For example, Woolhandler and Himmelstein (2014) have calculated that doctors in the U.S. spend one sixth of their work time on administrative tasks. The administrative burden has been recognized as one of the major factors behind burn-out and quitting among healthcare professionals (Linda et al., 2020). It is obvious that implementing MAI technologies to reduce the administrative burden of healthcare professionals could significantly improve their job satisfaction, their overall performance, and clinical outcomes.

Beyond the administrative burden, some commentators expect that MAI technologies will optimize clinical, patient-related processes and tasks. One aspect here is the shortage of healthcare professionals. This situation contributes to a higher workload for healthcare professionals as well as to work dissatisfaction and burn-out, and consequently decreases the quality of healthcare (Hazarika, 2020). Implementing MAI technologies that perform some clinical tasks may thus reduce the overall workload of healthcare professionals and in turn improve their job satisfaction and the overall quality of healthcare. Extracting data from the EHR and combining it with omics data, interpreting X-rays, or finding patterns in patient monitoring data are just a few examples of the possibilities for reducing the clinical workload. From an ethical point of view, these are also desirable outcomes, since they would improve the quality of life of healthcare professionals and could also enhance the therapeutic relationship. Overall, such a transformation could be a positive side-effect of implementing an LHS.
However, as discussed multiple times, the implementation of MAI technologies alone is insufficient for achieving these goals. A work environment that is built around MAI technologies in order to harness the potential of an LHS requires addressing several challenges and implementation barriers. First, healthcare professionals have to possess the necessary knowledge and skills for handling MAI technologies, i.e. digital literacy. Second, we have to find ways of implementing MAI technologies into existing clinical practice, meaning the workflows, routines, and pathways for delivering care. Third, since the agency of MAI technologies makes them more than tools, we have to reconsider their status in the work process of healthcare professionals and analyze what it means to work with an artificial agent as a coworker. Fourth, implementing MAI into the work environment could replace human labor and expertise. Hence, we have to reflect on the replacement of healthcare professionals by MAI and its ethical implications.
7.1.1 Digital Literacy
A meaningful use of MAI in clinical practice requires knowledge and skills in wielding these technologies on the part of healthcare professionals, which are referred to as digital literacy. The lack of these skills has been recognized as a barrier to the successful implementation of MAI (Petersson et al., 2022; Singh et al., 2020). Medical education must therefore aim at upskilling health professionals and implement a new culture of learning that allows them to continuously adapt to fast-evolving technologies (Abdulhussein et al., 2021). This means a shift in professional identity, since apart from medical expertise, skills based on engineering as well as data and information sciences have to be acquired (Rampton et al., 2020).

But mastering the technical side of MAI is insufficient. Given the epistemological as well as ethical implications of smart data practices, healthcare professionals need a more holistic view of the impact of MAI technologies. They do not only need to know how to use MAI as a tool for improving health outcomes or increasing efficiency, but must also be aware of these broader implications. This includes the ability to critically reflect on the right use of MAI technologies according to the individual needs of patients as well as to recognize the risk of exacerbating existing health inequities (Abdulhussein et al., 2021). Therefore, critical data studies and digital ethics should be an integral element of strengthening the digital literacy of healthcare professionals.

It is of the utmost importance to raise awareness already in medical education that MAI is not a mere tool, but a transformative force in healthcare. First, medical education must prepare future or already practicing healthcare professionals for the epistemological impact of MAI, i.e. how it transforms the reading of patients. Digital positivism, with its twofold risk of reductionism and bias, should be a major topic here. Second, the various ethical challenges, from data security and privacy protection to autonomy, trust, and empathy, should be part of this education. It is especially
important to prepare healthcare professionals for the possible new roles that might arise in a MAI setting and for the possible transformation of the therapeutic relationship. Third, medical education and training should enable healthcare professionals to understand the socioeconomic framework behind AI technologies in general and MAI in particular. Healthcare professionals should be aware of the growing influence of non-medical, commercial agents as well as of the potential of MAI to serve as a tool for implementing health agendas. This holistic approach could enable healthcare professionals to better assess what a meaningful use of MAI technologies is, which goals and purposes can and should be achieved, and how to adapt the use of MAI to the needs and resources, values and beliefs, and social context of individual patients.
7.1.2 Integration of MAI into Clinical Practice
Despite the promises of the LHS, the implementation of MAI technologies in routine clinical practice is still lagging behind. One reason for this is the still open question of the real value of MAI for clinical pathways (Hashiguchi et al., 2022). Evidence is lacking for clinical utility, improvement of workflows, efficacy, and cost-efficiency. MAI systems show astonishing results in studies and under laboratory conditions, where they often outperform healthcare professionals and reduce processing time. However, out in the wild, things are quite different, since several unforeseen obstacles might arise. As a result, using a MAI technology in actual clinical practice may prove to be more difficult and less accurate or effective than developers have demonstrated. Hence, an offline evaluation, where developers simply evaluate the accuracy of models according to specifically defined goals, may ignore the often complex real-life process (Blezek et al., 2021).

One important aspect here is that MAI technologies must be integrated into already existing structures and practices. This might sound counter-productive, given their potential for transforming these very structures and practices. However, clinical routines and pathways are not mere products of custom. They result from experience and empirical evidence and consist of standards and guidelines that have been established to guarantee efficacy, efficiency, and safety. Disrupting existing structures, practices, routines, and pathways for disruption’s sake can therefore not be the goal. Implementing MAI without carefully considering how best to integrate it into existing workflows may have unintended consequences. Healthcare professionals often voice concerns that difficulties in implementing MAI into the workflow might imply additional workload, which contributes to adoption barriers in clinical practice (Lambert et al., 2023). This is a legitimate concern, since in the past, additional workload was often a result of implementing new information technologies in healthcare (Hashiguchi et al., 2022).

Let’s look at radiology as an example. Radiology was one of the first medical fields to adapt to digital technologies on a large scale. The IT infrastructure in radiology is complex, often consisting of imaging devices, the Radiology Information System (RIS) as an information manager, and the Picture Archiving and Communication System (PACS) for displaying, manipulating, and archiving images (Erickson et al., 2014). When these systems were implemented in the 1990s, they met with several challenges that made integration into the clinical workflow difficult. One issue was hampered interoperability due to the lack of standards for data transfer between systems. The introduction of Digital Imaging and Communications in Medicine (DICOM) as a standardized format allowed the systems to communicate, enabling their full adoption in clinical practice (Kotter & Ranschaert, 2021). The challenge for implementing MAI technologies is to make them compatible with existing systems and standards in radiology. It can therefore be expected that implementing MAI without considering existing workflows will lead to the same difficulties as the introduction of digital technologies in the 1990s. This is a significant risk, since reducing the workload of healthcare professionals is one of the most advertised benefits of MAI. The compatibility issues in MAI implementation mostly result from the fact that software designers are sometimes not aware of the complex systems architecture that already exists in clinical practice. Hence, successful implementation of MAI requires an exchange between medical domain experts and software designers (Blezek et al., 2021). It also requires defining standards for interoperability or adopting already existing ones. This is especially important to avoid implementing isolated solutions, which would severely hamper the creation of an LHS based on communication and data exchange between different stakeholders.
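The role of shared standards can be made concrete with a small example. The sketch below uses the open-source pydicom package to read standardized metadata from a DICOM file; the file path is a placeholder, and the snippet is meant only to show why standard-conformant data is what lets a MAI component plug into an existing RIS/PACS landscape without vendor-specific parsing.

```python
# Reading standardized metadata from a DICOM file with pydicom.
# The file path is a placeholder used purely for illustration.

from pydicom import dcmread

ds = dcmread("study.dcm")  # placeholder path to a DICOM export

# Standard DICOM attributes are addressable by well-defined names,
# which is what lets RIS, PACS, and MAI tools exchange data.
print(ds.Modality)                            # e.g. "CT"
print(ds.StudyInstanceUID)                    # globally unique study identifier
print(ds.get("BodyPartExamined", "unknown"))  # optional attribute with fallback

# Pixel data can be handed to a model in a vendor-neutral array form
# (requires numpy; the array shape depends on the modality).
pixels = ds.pixel_array
print(pixels.shape)
```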
Besides compatibility and interoperability, other factors may hamper the integration of MAI systems into clinical practice. One factor is deciding where the implementation of MAI makes sense and which tasks should be delegated to these systems. To take another example from radiology, the detection and follow-up of lung nodules in chest CTs is a task that most radiologists consider repetitive and time-consuming (Kotter & Ranschaert, 2021). Developing and implementing MAI solutions for this task could therefore optimize the workflow of radiologists. This kind of domain knowledge, derived from personal experience, cannot be expected from software developers. In order to create useful MAI tools for healthcare professionals, this user group must define the demand and identify use cases so that software designers can tailor the technologies to these specific needs and purposes. One tool for achieving this is clinical workflow analysis focusing on people, tasks, and resources (Ozkaynak et al., 2022). This method recognizes individual, organizational, and societal factors as drivers of workflows and identifies the interconnected roles and responsibilities within and across organizations. Organizational strategies and the planning of workflows need to consider the context of specific healthcare delivery settings. Clinical workflow analysis should be the tool for informing software designers about the real-life demands and resources for a valuable use of MAI. A fitting strategy could be to establish a multidisciplinary governance committee that integrates the expertise and experience of leadership personnel, clinical domain experts, and software designers (Bizzo et al., 2023).
The tasks of such a committee are overseeing the deployment of MAI technologies in their respective domain, defining policies and protocols, deciding on the allocation of resources, and monitoring as well as evaluating their use.
This could be an effective way to adapt MAI technologies to the specific demands of an institution or clinical domain.

Considering the roles and responsibilities, it is crucial to define who is to be included in the process of workflow analysis and the implementation of MAI. This is especially important since the expertise and experience of healthcare professionals is the most valuable resource here. All stakeholders who use a MAI technology should participate in a workflow analysis to inform software designers. This seems rather obvious, but it is not always the case. There is, for example, a significant gap between doctors and nurses when it comes to contributing domain knowledge to software design (Rubeis, 2021). Since most clinical activities are a team effort, it is crucial to include nurses not only in nursing-specific MAI design processes, but also when it comes to other medical applications that involve nursing practices. This is not only a matter of fairness, but also important because genuine nursing knowledge offers a valuable perspective on clinical processes.

To summarize, a successful implementation of MAI technologies into the workflow of healthcare professionals requires the following measures: First, to conduct a needs analysis based on demands and use cases formulated by healthcare professionals as the relevant user group. Second, to obtain a full picture of existing IT systems and the clinical workflow. Third, to define standards that ensure interoperability, or to adopt existing ones. Fourth, to monitor, evaluate, and validate MAI systems once they have been implemented (see the sketch below). Fifth, to regard the LHS as the leading paradigm and create solutions that fit within this framework. In all of these steps, it is paramount to follow a participatory approach and integrate the expertise and domain knowledge of healthcare professionals.
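As an illustration of the fourth measure, the following sketch tracks rolling agreement between a deployed model's outputs and clinicians' final judgments, and flags the system for re-validation when agreement drifts. It assumes, purely for the example, that both labels are logged together; the window size and threshold are illustrative values, not validated ones.

```python
# A sketch of post-deployment monitoring: rolling agreement between model
# outputs and clinicians' final judgments. Window size and threshold are
# illustrative assumptions.

from collections import deque

class AgreementMonitor:
    """Tracks rolling agreement between model output and clinician judgment."""

    def __init__(self, window: int = 200, alert_below: float = 0.85):
        self.window = deque(maxlen=window)
        self.alert_below = alert_below

    def record(self, model_label: str, clinician_label: str) -> None:
        self.window.append(model_label == clinician_label)

    def agreement(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_review(self) -> bool:
        """Flag for re-validation once the window is full and agreement
        has drifted below the threshold."""
        full = len(self.window) == self.window.maxlen
        return full and self.agreement() < self.alert_below

monitor = AgreementMonitor(window=3, alert_below=0.9)
for model, clinician in [("nodule", "nodule"), ("clear", "nodule"), ("clear", "clear")]:
    monitor.record(model, clinician)
print(round(monitor.agreement(), 2))  # 0.67: two of three cases agree
print(monitor.needs_review())         # True: below the illustrative threshold
```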
All these aspects are highly relevant from an ethical point of view. We can only unleash the full potential of MAI if we integrate the technologies into the clinical workflow. Unsuccessful integration means missed opportunities for improving the quality of healthcare, reducing the workload of healthcare professionals, and saving costs. It may even result in potentially harmful patient outcomes, an increased workload, and higher costs. In ethical terms, this would severely affect patient well-being and safety, the well-being of health professionals, and the efficient allocation of scarce resources. It is therefore of the utmost importance to recognize the clinical reality in order to develop, implement, and use MAI technologies in a useful and meaningful way.

7.1.3 Artificial Agents as Colleagues
One of the most significant aspects of MAI technologies is their agency. The fact that MAI is not just a tool may lead to the perception of artificial agents as colleagues. Healthcare professionals can delegate tasks, for example patient monitoring, to MAI technologies without the need for constant supervision. An artificial agent could monitor the vital functions or behavior of a patient and inform the doctor or nurse whenever an intervention is necessary.
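What such delegated monitoring amounts to can be sketched in a few lines. The rule-based agent below is deliberately simple: real systems would use learned models rather than fixed thresholds, and the signals and alert bands here are illustrative assumptions only.

```python
# A deliberately simple rule-based monitoring agent. The signals and
# alert bands are illustrative assumptions, not clinical values.

ALERT_RULES = {
    "heart_rate": (40, 130),  # bpm: alert outside this band
    "spo2": (90, 100),        # %: alert below 90
}

def check_vitals(vitals: dict) -> list:
    """Return human-readable alerts for out-of-band readings."""
    alerts = []
    for signal, (low, high) in ALERT_RULES.items():
        value = vitals.get(signal)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{signal} at {value} outside [{low}, {high}]")
    return alerts

def monitor_stream(stream, notify):
    """Watch a stream of readings; notify staff only when a rule fires."""
    for vitals in stream:
        for alert in check_vitals(vitals):
            notify(alert)

readings = [{"heart_rate": 72, "spo2": 97}, {"heart_rate": 145, "spo2": 88}]
monitor_stream(readings, notify=print)
# heart_rate at 145 outside [40, 130]
# spo2 at 88 outside [90, 100]
```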
Another example of the perception of MAI as a coworker is CDSS. By providing evidence and suggesting possible decisions, these systems may be perceived by healthcare professionals as colleagues they would usually consult. The perception of MAI as a colleague may be especially likely with embodied artificial agents, such as robotic interfaces or avatars. The more human-like a MAI technology appears, the more likely it is to be perceived as a colleague instead of a tool.

Several ethical issues are connected to the status of MAI as a colleague. One issue is the so-called perfect automation schema (Rieger et al., 2022). It occurs when humans ascribe superiority to machines and system failure disappoints this performance expectation. This may significantly affect trust and the perceived utility of the technology. In the context of MAI, we find high performance expectations due to the perceived epistemological superiority and the agency involved, which may exacerbate the perfect automation schema. Healthcare professionals might overestimate the performance of artificial agents and lose confidence and trust when MAI fails to deliver. The opposite outcome is also a risk, i.e. overconfidence in the decisions and outcomes of MAI. The result may be automation complacency or automation bias (see Sect. 5.2.2).

Delegating tasks to machines might reduce the overall workload and provide the opportunity to divert patients’ challenging behaviors (Persson et al., 2022). This may help to reduce stress in healthcare professionals. However, reducing human contact might also negatively affect the therapeutic relationship and have a dehumanizing effect. Working with an artificial agent may also have implications for care delivery that do not arise when working with human colleagues. One aspect here is the increased demand for safety. Robots or CDSS imply specific safety risks for patients that healthcare professionals must address and potentially supervise. This might imply a higher workload, since human colleagues usually do not require constant monitoring for safety reasons. A higher workload could also result from the need for constant tinkering to make the technology work. This might lead to frustration on the part of healthcare professionals and reduce trust on the part of patients.

As discussed above, introducing MAI technologies could be a way to optimize existing practices, processes, and workflows. Working with an artificial agent that performs tasks more efficiently and effectively might therefore affect the work identity of healthcare professionals (Persson et al., 2022). A positive outcome could be that collaborating with artificial agents helps healthcare professionals to critically reflect on existing practices and workflows. This may lead to improvements from which they and their patients could profit. A negative outcome could be deskilling, i.e. the loss of skills due to the fact that certain tasks have been outsourced to the artificial agent (Persson et al., 2022).

Finally, issues of responsibility and liability arise when healthcare professionals perceive artificial agents as colleagues (Rieger et al., 2022). Since artificial agents cannot be responsible for their decisions or actions in the full sense, uncertainties might arise for healthcare professionals when working with them. This may negatively influence their ability to trust the technology and hence to use it effectively.
The crucial issue here is again the unclear role of artificial agents. On the one hand, MAI is not a mere tool due to its agency. On the other hand, this agency is limited by several factors, including epistemology and responsibility. Hence, healthcare professionals need to be sensitized to the characteristics of MAI, especially when it comes to artificial agents that play an active role in clinical practice and the therapeutic relationship. Education and training must prepare health professionals for this, but also for the impact MAI has on their own work identity and roles. These roles need to be clearly defined, according to the options outlined in Chap. 6, in order to enable realistic expectations and trust on the part of healthcare professionals.
7.1.4 Replacement of Healthcare Professionals by MAI
The most significant impact MAI could have on the work environment is its alleged potential to replace human healthcare professionals. Several commentators have voiced or discussed concerns that MAI technologies will make humans obsolete as radiologists and pathologists (Obermeyer & Emanuel, 2016), surgeons (Kerr, 2020), operating room nurses (Ergin et al., 2023), nurses in long-term care (Pepito & Locsin, 2019), or psychotherapists (Swartz, 2023). The reasons for these concerns are manifold. Since some MAI technologies outperform healthcare professionals in terms of accuracy or processing time, replacing humans could be a matter of efficiency (Krittanawong, 2018). Cost-effectiveness is another aspect, since reducing high personnel costs by implementing MAI technologies could be a desired outcome. There is also the belief that patient-safety advocates could demand the replacement of humans by MAI technologies because the latter are less likely to make mistakes due to typical human factors such as tiredness or negligence (Obermeyer & Emanuel, 2016).

Commentators have also put forward arguments against the supposed threat of replacing human healthcare professionals. A crucial argument is that modern medicine has a long history of constantly adapting to technological innovations. Radiology is again a good example. As shown above, radiology pioneered the use of digital technology in medicine. Radiologists have adapted to technologies such as MRI scans that at first glance seemed to be disruptive (Pesapane et al., 2018). Modalities, protocols, and standards for IT technologies, such as PACS and DICOM, were invented for or tailored to the demands of radiologists. This was the result of an active engagement with technological innovation instead of an outright refusal.

Another argument is that despite the immense capacities of MAI, certain tasks remain for which we need humans. Since algorithms often underperform when confronted with novel, ambiguous, and challenging cases, interpreting such cases could remain a specifically human skill (Ahuja, 2019). Whereas machine learning focusses on correlations, humans are experts at identifying causal relations, which is mostly due to experience and the ability to contextualize isolated data with the bigger picture.
Ideally, MAI and humans should therefore combine their abilities, whereby the specific skills of one could outbalance the deficits of the other. This goal of synergy also implies that some forms of automatization, although technically possible, should not be pursued (Topol, 2019). As some authors suggest, instead of comparing human performance to that of MAI, we should rather compare the performance of MAI-augmented health professionals with that of health professionals who do not use MAI (Briganti & Le Moine, 2020).

Besides epistemological aspects, one could also argue that MAI is incapable of empathy, which is crucial to the therapeutic relationship. I have discussed the concept of clinical empathy above (see Sect. 4.2.3), which integrates the ability to understand the emotional motivations of a person (cognitive empathy) and the ability to reciprocate and share these emotions (emotional empathy). Clinical empathy is a complex skill, since it implies a fine-tuned regulation of one’s own emotions and emotional responses. Healthcare professionals have to find the right balance between cognitive and affective empathy, one that on the one hand allows them to build a meaningful therapeutic relationship and on the other hand enables them to emotionally detach themselves as a means of self-protection. We have already seen that it is at least questionable whether an artificial agent will ever be capable of mastering this highly complex skill. Therefore, one could argue that there are tasks we should not delegate to machines, especially those that affect the therapeutic relationship. Even in cases where MAI technologies could lead to more time-efficient or cost-efficient outcomes, we should recognize and prioritize the relevance of the therapeutic relationship as a human encounter. That means that although MAI technologies might perform specific single tasks better than humans, replacing human contact and interpersonal relationships might negatively affect the patient experience as a whole. As outlined above, if we accept this argument, the role of artificial agents as enablers instead of substitutes is the most desirable outcome (see Chap. 6).

This aspect again shows that we cannot separate technology from human agency. We should not talk about the possible effects of implementing MAI technologies on the workforce in terms of a force majeure. This is not an earthquake or a hurricane that goes beyond human control. We can do more than just deal with the fallout. We can and should actively identify certain possible outcomes as undesirable and work on solutions for preventing them. Technology will not cause the replacement of human healthcare professionals; human decisions to design, implement, and use technology for this very purpose will. Whether this replacement happens is not an automatism, but something that we can either effectuate or prevent. In deciding this, it is crucial not to rely only on seemingly objective parameters like accuracy, processing time, or cost-effectiveness. We also need to consider the specific relevance of the therapeutic alliance in this context. I have outlined the implications of the different roles of health professionals, patients, and artificial agents that may occur in a MAI setting (see Chap. 6). These roles might serve as a framework for deciding upon the scope of replacing human labor by MAI. Some of these roles are more desirable than others, and some should be avoided altogether.
The important point here is that we should not opt for or against the
automatization of specific tasks based on technical possibility or financial benefits alone, but based on the purposes, goals, and roles we think are desirable.
7.2 Personal Environments
One major advantage of MAI technologies is their potential to make health data accessible that healthcare professionals were hitherto unable to obtain. In a traditional setting, doctors acquire individual health data through the patient history as documented in the patient’s health records, through patient interviews, and through their own examinations. These include checking vital functions, looking for visual or acoustic clues, and checking the reflexes and other indicators of the functioning of the nervous system. Doctors might need additional lab work for further results, e.g. a full blood count. Nurses or mental healthcare providers proceed in a similar way. In all these cases, healthcare professionals rely on a combination of information from the past and data from the present. A blood count, for example, is a snapshot of the patient’s health situation at the very moment of the blood draw. The doctor needs to contextualize it with the patient history and the information from the patient interview in order to make sense of the data and to derive predictions from it. These predictions, for example prognosis or risk assessment, remain in part speculative due to the fact that doctors only have limited data to work with. The data is limited in a double sense: First, it is limited in terms of amount. There is only so much data in the patient’s medical records, and doctors can only acquire a limited amount of data in the present situation, given the limited resources in terms of time, staff, and technical means. Second, the data is limited in temporal terms and in terms of accuracy. The medical records may cover several years, but they include only those periods in which the patient was treated. They do not tell doctors anything about the time in between. The data also comes from only a limited number of sources, such as the diagnostic methods indicated by acute symptoms or routine exams. What is missing due to all these limitations is exact data on the day-to-day health of the patient over a continuous period of time.

This is exactly what MAI technologies can provide. A MAI-enhanced big data approach may combine multimodal data from various sources and enable healthcare professionals to obtain longitudinal data on the biomedical, environmental, and behavioral determinants of an individual’s health. This allows doctors to model the health of patients in a more accurate and dynamic way, thus facilitating more evidence-based predictions and risk assessments. It requires data acquisition beyond patient history and exams or interviews. mHealth and IoT technologies enable healthcare professionals to collect data from the daily life of patients. Sensor and monitoring technologies, either stationary in the home of the patient or mobile in the form of smart wearables, are ideal tools for obtaining data on environment and behavior. By integrating this longitudinal and dynamic data with existing health information, healthcare professionals may get a fuller and at the same time clearer picture of the patient’s health situation.
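The following sketch illustrates this integration step in miniature: a continuous wearable signal is aggregated into daily features and aligned with point-in-time visit data, so that the longitudinal signal contextualizes the isolated snapshot. The column names, schema, and daily aggregation are assumptions made for the example.

```python
# Aggregating a continuous wearable signal into daily features and aligning
# it with point-in-time visit data. Schema and aggregation are illustrative.

import pandas as pd

# Continuous sensor stream from a wearable (minute-level heart rate).
wearable = pd.DataFrame({
    "time": pd.to_datetime(["2024-03-01 09:00", "2024-03-01 09:01",
                            "2024-03-02 09:00", "2024-03-02 09:01"]),
    "heart_rate": [88, 91, 76, 74],
})

# Snapshot data of the kind a visit or blood draw produces.
visits = pd.DataFrame({
    "date": pd.to_datetime(["2024-03-01", "2024-03-02"]),
    "systolic_bp": [142, 128],
})

# Resample the stream to daily means, then align with the visit data:
# the longitudinal signal contextualizes the isolated snapshot.
daily = (wearable.set_index("time")["heart_rate"]
         .resample("D").mean()
         .reset_index()
         .rename(columns={"time": "date", "heart_rate": "mean_heart_rate"}))
combined = visits.merge(daily, on="date")
print(combined)
```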
In a way, MAI technologies enable healthcare professionals to shake off the bonds of space and time. The clinical gaze is no longer limited to the controlled environments of medical institutions like hospitals, GP's offices, or labs. It now penetrates the private realm of individuals, extends to their lives and into their homes. Given the LHS paradigm, integrating environmental and behavioral data from the wild is essential. This requires constant monitoring of individual health data, either automated, i.e. by the devices themselves, or by healthcare professionals. As a result, health becomes an object of permanent awareness.

The crucial aspect is the prospective nature of this large-scale data collection. MAI technologies not only provide the opportunity to monitor health processes as they unfold and to intervene if necessary. The even bigger advantage is the ability to predict future developments and health outcomes. This allows a better-informed prediction and risk assessment and shifts the focus of attention from the past and the present to the future. Prediction and prevention go hand in hand. A better-informed, more evidence-based prediction makes it possible to define and plan interventions that prevent negative health outcomes. This is one of the main goals of a LHS that may benefit individual patients as well as whole patient groups. Furthermore, prevention may also be a more cost-effective way to deal with health burdens, which means that there might be financial and public health incentives for monitoring the health of individuals.

Even before the age of broadly implemented digital monitoring technologies, there has been some analysis of the tendency towards monitoring and surveillance for risk prediction. Armstrong (1995) introduced the concept of surveillance medicine, which completely shifts the perspective on health and the human body. According to this view, medicine not only focusses on the ill, but increasingly targets the healthy population. Traditionally, medicine is a domain that is clearly defined by its scope, i.e. treating illness, and its spatial confines, i.e. medical institutions like hospitals. The new surveillance medicine that arose in the latter half of the twentieth century extended its domain beyond hospital walls by targeting the whole population. Besides combating so-called social diseases like tuberculosis, health promotion was an essential measure in this context. It became a medical task to inform the healthy population about diet, exercise, and general health risks. This also included the encouragement to monitor one's own health for disease prevention. As a result of this process, medicine was not only concerned with illness, but increasingly focussed on health and normality.

The predictive turn in medicine can be a positive development. One could argue that MAI may enable us to realize the paradigm of salutogenesis. This concept was introduced by Aaron Antonovsky as an alternative to the bipolar model of health and illness (Antonovsky, 1979). Instead of viewing health and illness as two opposite and mutually exclusive states, Antonovsky speaks of a continuum, where health is not static and disease is not simply the absence of health. The individual constantly moves within the continuum of health and disease, which implies that the task of medicine should not only be to fight disease, but also to maintain health. This implies a shift from pathocentric to health-focussed medicine.
In a sense, using algorithms for predictive modeling and risk assessment might still be seen as pathocentric, since the focus lies on accurately predicting the onset of disease or adverse events. But we might also apply MAI technologies for maintaining health, since they allow a continuous, longitudinal data collection in real time. Especially mHealth and IoT technologies may collect and process individual health data outside of healthcare institutions. By integrating environmental and behavioral data from the daily life of patients, doctors can get a more holistic picture of an individual's health situation. Many mHealth and IoT technologies also allow health interventions, enabling patients to perform evidence-based lifestyle changes and doctors to evaluate the outcomes. From the perspective of health prevention, these technologies could be key for maintaining health by permanently collecting individual health data and predicting the outcomes of a specific lifestyle. This has an obvious health benefit for patients, as it enables a more personalized and immediate treatment.

The downside of this risk-oriented and health-focused concept is what Martin (1994) described as the flexible body, meaning that health and the body become fluctuating entities that require constant surveillance and attention. This framing of individuals as risk profiles impacts the way we perceive health and illness (Samerski, 2018). In a sense, the boundaries between health and illness, or at least our traditional concepts of them, become liquified (Rubeis, 2023). The shift towards risks and prevention as crucial elements in the perception of health and illness does not only pertain to patients, but also to healthy individuals, thus also liquifying the traditional boundaries of the medical domain. The clinical gaze seeps through the walls of the home and penetrates the private realm.

As we have seen already, this development is one of the explicit goals of personalized medicine based on MAI technologies. Better health prevention and prediction through the participation of individuals, e.g. by using mHealth technologies, is key here. The preventive perspective implies that those who are healthy should use monitoring technologies for maintaining or even improving their health (Sharon, 2016). This may include smart wearables like body sensors or IoT applications in one's home environment. As a consequence, this home environment and the private realm as such undergo a transformation. The home becomes increasingly technicized and medicalized as it is primarily regarded as a source of potential data (Lupton, 2013).

From an LHS perspective, constant and ubiquitous surveillance and monitoring would be ideal for generating valuable longitudinal data on environmental and behavioral factors. Healthcare professionals could gather individual health data on a population scale, thus allowing for better predictive models in three ways: First, population-wide data could train algorithms and achieve a better accuracy of predictive models. This would also benefit the individual, since better-trained algorithms could enhance personalized treatment. Second, in a public health context, this data could make it possible to model health needs and health disparities or predict epidemics more accurately. Third, biomedical research could profit from the data. Working with longitudinal environmental and behavioral data in addition to omics data and information from the EHR could lead to new scientific insights.
To realize this, LHS in the fullest sense requires the modification of traditional environments in which health and disease are enacted, measured, and maintained or treated. The most significant transformations would be the dissolution of boundaries between individual health and public health and the extension of the medical domain into the private realm.
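As a purely illustrative complement to the predictive modeling described above, consider the following toy sketch of an algorithmic risk score. The features, synthetic data, and model choice are assumptions made for demonstration; a real clinical model would require validated data, rigorous evaluation, and regulatory approval.

```python
# Toy sketch of risk prediction from longitudinal features; synthetic data,
# hypothetical feature names, no clinical validity whatsoever.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Per-person features: mean daily steps, mean resting heart rate, age.
X = np.column_stack([
    rng.normal(7000, 2000, 500),   # mean daily steps
    rng.normal(65, 8, 500),        # mean resting heart rate
    rng.normal(55, 12, 500),       # age in years
])
# Synthetic outcome: an adverse event loosely tied to the features.
logits = -0.0003 * X[:, 0] + 0.05 * X[:, 1] + 0.03 * X[:, 2] - 2.5
y = rng.random(500) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# A "risk score" for a new individual: the predicted probability of the event.
new_person = [[4500, 78, 67]]
print(model.predict_proba(new_person)[0, 1])
```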
7.2.1 Ecosystems of Care
A main goal in this regard is to create ecosystems of care (Savage et al., 2022). This concept describes dynamic networks that connect the different stakeholders involved in the care process, i.e. care receivers, care providers (doctors, nurses, therapists), social workers, insurance companies, and technology developers. Ecosystems of care use MAI technologies to enable stakeholders to provide and exchange data and make the best possible use of it. The aim is to integrate all relevant actors that contribute to the care process in a joint effort to improve the quality of care and personalize healthcare services.

A successful implementation of an ecosystem of care has several requirements (Carlton, 2020): Healthcare institutions have to integrate MAI technologies into existing care practices and infrastructures so that they facilitate combining fragmented information from health, social, and financial systems with holistic and longitudinal patient data. The latter includes data from monitoring care processes at home as well as self-monitoring and activities of daily life. MAI systems use this data for advanced analytics and personalization. This requires models for the continuum of care interaction that enable best practice by care givers, including assistive technologies that support informal care givers as well as care receivers. Care givers and policy makers can use this data for the real-time refinement of care solutions and a better integration of these solutions into social and community structures.

Such an ecosystem of care could benefit several patient groups. It could enable a better integration of various health, social, and financial services for the chronically ill. It could also provide the ideal infrastructure for delivering care to older adults. Finally, it could be conducive to catering to groups with high needs, such as people with multimorbidity or people with disabilities. All of these patient groups have specific needs that require the integration of different services and stakeholders and make it necessary to enable care givers like family members to engage in care processes. Ecosystems of care could link individual healthcare and public health by coordinating services and enabling the exchange of data between different stakeholders. MAI technologies could contribute to better informing public health services about the health needs of specific patient groups and how to structure care delivery to meet them. Population-scale data could power algorithms and enable a more targeted, more personalized care delivery. Non-medical agents such as social workers or insurance providers could both contribute to and profit from this data exchange and coordination. Hence, ecosystems of care could be key to an effective LHS. In order for ecosystems of care to work, they must gather large amounts of data on daily activities and behaviors directly from the daily lives and home environments of individuals.
I will very briefly discuss three examples where MAI technologies could enable practices that facilitate the thriving of ecosystems of care: chronic disease management, assistive technologies for older adults, and the quantified self.

Chronic disease constitutes an immense health burden on a global scale. Conditions like cancer, cardiovascular and cerebrovascular diseases, hypertension, chronic respiratory diseases, diabetes, obesity, chronic kidney diseases, degenerative diseases of joints, and neurodegenerative diseases affect nearly 25% of adults (Xie et al., 2021). Coordinating healthcare activities for chronically ill patients poses several challenges to existing services, such as a short management radius, a high dependence on manual tasks, and limited data flow. Furthermore, a timely reaction to changes in the health situation of a patient is often difficult due to the lack of dynamic and longitudinal health data (Xie et al., 2021). One strategy for coping with these challenges is the use of MAI technologies in the home, such as remote monitoring and smart home technologies (Fritz et al., 2022) as well as wearable devices (Xie et al., 2021). Some authors speak of smart environments consisting of invisible sensors, actuators, displays, and computational elements that are embedded in the home (Moraitou et al., 2017). These technologies are meant to integrate and process data from various sources like smart wearables, portable medical devices such as ECG monitors, EHR, biobanks, and non-medical data (Alqahtani et al., 2019). The aim is to enable real-time information exchange between primary healthcare providers, care givers, and public health authorities in order to inform care delivery and support evidence-based policy making (Morita et al., 2023). IoT and mHealth applications gather and process real-time data from the personal environment of individuals, mostly without their active engagement. Technologies like blockchain can minimize privacy risks and standardize data transfer in order to enable process automation in data collection, exchange, and processing (Xie et al., 2021). This could ease the burden on healthcare systems by facilitating a better-coordinated healthcare delivery. The potential of such an approach was demonstrated during the Covid-19 pandemic, when smart wearables and telehealth monitoring applications facilitated disease modeling and forecasting as well as individual and population-level risk assessment (Mello & Wang, 2020). This shows that MAI technologies in the home environment may contribute to improving the quality of care for patients with a chronic disease.
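Since the reference to blockchain may remain abstract, a minimal sketch can indicate the underlying mechanism: chaining cryptographic hashes so that any later alteration of exchanged monitoring records becomes detectable. This shows only the core idea under simplified assumptions (a single local chain, invented record fields); real deployments add distributed consensus, encryption, and access control.

```python
# Minimal sketch of the tamper-evidence idea behind blockchain-based data
# exchange: each record stores the hash of its predecessor, so any later
# alteration breaks the chain. Record fields are invented for illustration.
import hashlib
import json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

chain = []
for i, reading in enumerate([{"glucose_mgdl": 105}, {"glucose_mgdl": 112}]):
    record = {
        "index": i,
        "payload": reading,                                # the monitoring data
        "prev_hash": chain[-1]["hash"] if chain else "0",  # link to predecessor
    }
    record["hash"] = record_hash(record)
    chain.append(record)

def verify(chain: list) -> bool:
    # Recompute each hash and check the links; False signals tampering.
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["hash"] != record_hash(body):
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))                        # True: chain is intact
chain[0]["payload"]["glucose_mgdl"] = 55    # tamper with a stored reading
print(verify(chain))                        # False: tampering is detected
```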
Assistive Technologies for Older Adults
In Sect. 3.3.5, we have encountered AAL systems as an example of assistive technologies for older adults. The combination of smart sensor and surveillance technologies, mHealth applications, ubiquitous computing, and robotics aims to facilitate better-coordinated home care services. The main goal is aging in place, meaning to enable older adults to remain in their homes and to avoid hospitalizations or transfers to long-term care facilities. A crucial aspect here is that AAL technologies are not just reactive in the sense that they respond to acute health needs and disabilities. Instead of such a deficit-oriented approach, most AAL technologies are proactive and focus on enabling a mostly independent life and maintaining or improving quality of life. The paradigm for this approach is active ageing, a concept that focusses on health, security, and social inclusion as integral factors of an active and independent lifestyle of older adults (WHO, 2002). Hence, AAL technologies go beyond disease management in facilitating routines and living arrangements that allow older adults to lead a fulfilling and self-determined life. This includes monitoring technologies that measure vital functions or behavior, such as smart floors for determining gait patterns or smart mattresses that detect abnormal sleep behavior (Sapci & Sapci, 2019). As discussed above, AAL technologies might also be used for mental health purposes, such as emotion recognition and regulation (see Sect. 5.2.1). Service robots or SAR might also be part of an AAL concept, supporting older adults in physical tasks or providing psychosocial support. Smart wearables in the form of medical devices, smartphones and tablets, or smart textiles, i.e. sensors within the clothing, may also be an element of AAL.

The goal of avoiding residential care and enabling the individual to remain in their own home requires the transformation of the home environment. The home becomes a smart home, laden with all kinds of smart health technologies. In order to make this as comfortable as possible for older adults and facilitate acceptance, the unobtrusiveness of these smart home technologies is a major task for developers (Hartmann et al., 2023). The aim is to integrate MAI technologies seamlessly into the physical structures of the home, but also into the daily lives of older adults. This implies that devices should be concealed if possible and work mostly automatically and without effort for older adults. That way, MAI can be considered non-invasive and an integral part of the home environment.
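To indicate how such monitoring might operationalize "abnormal" behavior, consider a deliberately simple baseline check: flagging a night of sleep that deviates strongly from the individual's own average. The numbers and threshold are invented for illustration; actual AAL systems rely on far richer sensor data and models.

```python
# Minimal sketch of baseline anomaly detection on sleep data: flag a night
# whose duration deviates from the personal mean by more than two standard
# deviations. Hypothetical numbers; real AAL systems use richer models.
import statistics

baseline_hours = [7.2, 6.9, 7.4, 7.1, 6.8, 7.3, 7.0, 7.2, 6.7, 7.1]
mean = statistics.mean(baseline_hours)
sd = statistics.stdev(baseline_hours)

def is_abnormal(hours_last_night: float, threshold: float = 2.0) -> bool:
    return abs(hours_last_night - mean) > threshold * sd

print(is_abnormal(7.0))  # False: within the personal baseline
print(is_abnormal(4.5))  # True: a marked deviation worth flagging
```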
Self-Management and Self-Monitoring
One of the major advantages of MAI technologies is the preventive perspective. Healthy individuals can use MAI technologies, especially mHealth applications, to monitor their health and support activities for maintaining or improving it. Self-management and self-monitoring have great potential for saving unnecessary health costs and improving the overall health of individuals. This is why many healthcare providers, such as the NHS in the UK, encourage the widespread use of digital technologies for self-management and self-monitoring in the home and daily life (Petrakaki et al., 2018). MAI technologies, especially mHealth applications and IoT, are the ideal tools for leveraging this approach. The crucial advantage is that patients with a diagnosed health condition and healthy individuals alike can use MAI technologies. Chronically ill patients, but also those at risk of developing a disease, may perform self-monitoring and self-management. For example, overweight individuals, especially at a certain age, have an elevated risk of developing cardiovascular disease. These individuals could use mHealth applications like smart wearables and health apps for regulating their diet, planning exercise sessions, and monitoring their weight.

Apart from patients and individuals with a certain health risk, there is still another user group that uses MAI technologies for tracking and optimizing their health. We have briefly discussed the use of MAI technologies by the quantified self movement in Sect. 3.3.5. This movement, founded in 2007, centers around the idea that hard data enables better self-knowledge for making well-informed health assessments and decisions (Sharon, 2016). This requires quantifying every health-related aspect of an individual and their daily life, including somatic health data, behavioral data, and environmental data. Some use this information as an evidence base for a healthy lifestyle to prevent disease, while others strive for self-optimization.

The manifold possible uses of MAI technologies blur the line between health, fitness, and well-being. What sets this technology use apart from uses in the medical context is that there is no medical indication or acute treatment. Individuals who engage in this MAI use are not patients in the sense that this technology is part of a treatment prescribed by a doctor to deal with an acute health issue. Of course, users might still be patients in the sense that they use health information gathered through MAI applications to inform their GP. Hence, the blurred line between health, well-being, and fitness also implies an unclear role of users in the quantified self spectrum. Conceptualizing this role is not just of academic interest, but concerns the responsibilities of healthcare professionals and technology developers vis-à-vis this user group. If individuals use MAI technologies without a medical indication, do the same principles apply that protect the autonomy and safety of patients? Since these users in a way privatize healthcare activities, is there a responsibility for doctors to supervise these activities? Do the same strict privacy policies and principles apply when healthy individuals voluntarily share their health data? These questions are important since the use of MAI for prevention or lifestyle improvement still impacts the home environment. A variety of actors may access the home through these technologies and interfere with the daily lives of individuals.
7.2.2 Medicalization and Healthism
The use of health technologies in the home and in the daily lives of individuals is essential for creating ecosystems of care. However, introducing medical devices, sensor technologies, and permanent monitoring into the private sphere of individuals might fundamentally transform the home environment. One effect of this transformation is medicalization, referring to the process by which phenomena that have hitherto been considered non-medical become the object of medicine (Conrad, 2005). Medicalization thus occurs when viable claims are made that a problem is a medical one rather than a matter of individual variation or moral failure. The agents of medicalization are not healthcare professionals alone, but also patients, healthy individuals, patient groups, the media, and economic agents. That means that medicalization is not some evil plan hatched by healthcare professionals to gain power. Rather, it is a process fueled by various interests and motivations, which transforms medical knowledge and institutions themselves. One major aspect is the growing influence of biotechnology. Commercial interests in this sector have become the major driving force of medicalization, expanding the range of health issues in order to make profits.
This development not only affects patients or healthy individuals whose behavior or bodily functions are suddenly a medical issue. It also affects healthcare professionals, since commercially-driven medicalization redefines their roles and the scope of their domains and dictates the motives for medical action. Although concepts of medicalization have traditionally focused on the financial motives of big pharma, one can expand this view to digital health technologies. Given the LHS paradigm and the perspectives of MAI technologies, medicalization is a major risk, since the daily life of individuals, its environmental aspects and behaviors, increasingly becomes the target of the clinical gaze. In a hospital or GP's office, the traditional medical environment, the clinical gaze primarily focusses on the pathological. Being a patient usually implies the existence of symptoms, illness, or some form of health problem. Within the medical environment, the clinical gaze is mostly concerned with pathological processes, whereas the extended clinical gaze that penetrates everyday life targets health and behavior as such. As mentioned above, this is a form of surveillance medicine that focusses on the normal, which means that medicine is increasingly concerned with behaviors and bodily functions that are non-pathological. Making these behaviors and bodily functions an object of the clinical gaze leads to a medicalization of everyday life, liquifies the boundaries between health and disease, and makes the body an object of constant care.

Linked to medicalization is healthism, which describes a tendency based on health consciousness and self-care (Crawford, 1980). Healthism claims that health is primarily a matter of personal responsibility, which implies a moral obligation to take care of one's own health. This moral obligation is based on the argument that health is a good in itself. Therefore, pursuing this good implies several behaviors that aim to preserve, maintain, or enhance health. Another argument is that health is a social good. From a public health perspective, one could argue that preserving one's own health contributes to preserving the health of others, e.g. by participating in vaccine campaigns or following hygiene rules. In a universal healthcare system, one could also argue that health consciousness and self-care are an imperative of solidarity, since a healthy lifestyle can prevent disease and thus reduce health costs. Hence, staying healthy would be something that we owe others. Commentators have criticized healthism for ignoring the structural factors and social determinants of health and illness (Rier, 2022). The responsibilization of health thus fails to acknowledge that individual behavior and decisions are only part of the complex phenomenon that is health. Healthism is therefore often considered a form of victim blaming, ascribing responsibilities to individuals who are structurally disadvantaged.

MAI-powered health technologies provide a huge boost to healthism. Personalization and prevention, prediction and participation are crucial aspects in this regard. Defining health promotion and prevention as a personal responsibility that can be fulfilled by using mHealth applications and participating in all sorts of big data activities is a defining element of P4 medicine. As we have seen, these practices are often framed as empowering the autonomy of individuals and enhancing their role in the healthcare system.
However, MAI-powered monitoring and the obligation to take care of one's own health by using technology could also lead to a situation where individuals are held responsible for health outcomes.
Some insurance companies already use positive rewards as incentives for health-conscious behavior. It is therefore possible that MAI technologies could also be used to track unhealthy behavior, thus allowing insurance companies to implement sanctions (Davies, 2021). As some argue, self-care, self-management, and self-quantification are thus indicators of a shift towards responsibilization, driven by a neoliberal agenda (Kent, 2023a).

How do medicalization and healthism impact the environments of individuals? What are the ethical implications of this? Medicalization may manifest itself in a quite concrete, material way by transforming the home environment of individuals and interfering with their daily lives. Medical devices, such as IoT applications or smart wearables, may be omnipresent in the home (Lupton, 2014). These devices might be a constant reminder of illness, thus making the individual's health situation the main concern. Furthermore, these devices can be erratic or get in the way, thus leading to frustration. In general, medicalization implies that the home, which had hitherto been mainly a private sphere, now becomes open to medical practices and the clinical gaze. The three main implications of medicalization are therefore its impact on privacy, its possible disciplining effect, and its effect on identities.

Privacy
Although MAI technologies offer quantitatively and qualitatively new perspectives for penetrating the private realm, medical intrusions into privacy are not entirely new. Authors have long discussed medicalization and the extension of the medical gaze into the home environment in the context of home care. The crucial insight from these debates is what has been called the moral geography of home care (Liaschenko, 1994). This concept refers to the fact that logics and practices, social relations, and networks shape the home, which is therefore not only a physical space (Angus et al., 2005). This includes external social relations and structures, meaning first and foremost relations of power that regulate inclusion and exclusion. These practices and relationships shape the home as well as the identities of those who are part of it. The actors involved constantly produce meaning as well as their own identities through their actions and relationships, thus relationally configuring and performing the home (Andrews et al., 2013). This makes the home fluid and malleable, which implies that understanding its meaning requires a relational approach: We have to analyze the material and social character of the home in the light of the internal and external social practices and relationships that shape it (Angus et al., 2005).

In a classical setting, individuals can draw a line between the medical domain and their home. When the clinical gaze is not limited to medical institutions, but penetrates the private realm, this border becomes permeable, thus transforming the home itself. The extension of the clinical gaze may change the landscape of the home and with it its moral geography, meaning the social practices and relationships that constitute it (Liaschenko, 1994). When the home becomes a site for home care, it is transformed by coordinated purposeful practices that benefit the care receiver and by the production and consumption of resources (Angus et al., 2005).
Care practices and the clinical gaze may thus reshape the home and give it another meaning. MAI technologies, as an extended and intensified clinical gaze, may thus severely impact the complex relational networks that shape this environment.

In Sect. 4.2.4, I have discussed different types of privacy, one of them being local privacy. This concept refers to a space an individual can retreat to without being seen by others. Local privacy implies that the individual has control over who has access to this private space. They can also choose in which way and to which extent they want to present themselves to others, what is to be seen and what is not. A demarcation line between the outside world and the private thus defines the home, which relies on control over who may enter and on the definition of allowed activities and behavior within this sphere (Angus et al., 2005). MAI technologies severely challenge local privacy as control in all of these regards.

Control over who may enter exists only gradually in a MAI setting. It is not always transparent who enters, i.e. which parties can access individual health data. Many data-intensive personalized health approaches involve several actors, from healthcare professionals to data analysts, administrative personnel, and insurance companies. This is especially the case with an ecosystem of care approach, where data exchange between multiple stakeholders is an explicit goal. From the LHS perspective, even more stakeholders should be involved, including public health professionals, policy makers, and researchers. All of these actors may enter the home without the individual even realizing it. However, the individual may know or suspect that many are entering although they cannot identify them, which in itself implies a loss of control. This may undermine autonomy and also expose individuals to data security risks.

Individuals might find it difficult to define allowed activities for similar reasons. As we have seen in the context of informed consent, it is often unclear what kind of data collection and analytics MAI technologies involve. Individuals might not be aware of the various activities or bodily functions that are being registered. They may also not know when monitoring takes place or what its exact scope is. In this regard, the unobtrusiveness of MAI technologies, for example in an AAL setting, might become problematic. Concealing the presence of a MAI device could be considered deception, rendering it difficult for individuals to assess whether they are monitored (Hartmann et al., 2023). Furthermore, it may be unclear for what purposes their health data is used and whether the data use includes a financial profit for some stakeholders. Besides transparency, there could also be a coercion risk. Imagine a case where an individual is told about the exact workings of a MAI technology and the stakeholders involved, but refuses some of the purposes of data collection and processing. Other stakeholders could exert pressure on the individual by claiming that this is an all-or-nothing service. Individuals might also outright refuse to be monitored or to use an application in their home, yet be forced to do so. The loss of control in this regard implies severe threats to autonomy.

Defining allowed behaviors might also pertain to the workings of MAI technologies themselves.
Automated MAI applications might follow their own routines and schedules and perform prefixed tasks. Hence, the individual would have no control over how the systems behave and would have to adapt to the routine dictated by the technology. Not only could this undermine their autonomy, but it could also cause feelings of estrangement. Their own home would seem dominated by strangers who follow their own rules. Again, concealment and unobtrusiveness of MAI devices might become problematic. When an artificial agent conceals its nature, individuals might not know that they are interacting with a smart machine. They may either be tricked into believing that they interact with a human or just with some technical device.

Disciplining
The use of MAI technologies at home may require a certain daily routine, thus altering the individual's behavior (Lupton, 2013). For example, individuals might be required to measure bodily functions at a precise time every day, communicate the results, or respond to messages by health professionals. This can be considered a kind of disciplining, where the behavior of individuals is forced to align with rules set by others. These rules might not be in the interest of the individuals, but serve other purposes, such as saving costs and optimizing treatment processes. The constant interaction with digital health devices might thus reconfigure the home environment of individuals by transforming their living arrangements and dictating a specific routine. The medicalization of the home environment might thus lead to a scenario where the medical domain enters and restructures the home not only through the mere presence of medical devices, but by implementing medical routines and enforcing health-conform behavior. This could be a way to enforce health agendas, e.g. cost savings. It is noteworthy that no direct intervention or coercion by medical or political authorities is necessary here. By making self-management and self-monitoring the norm, self-disciplining by individuals renders external interventions obsolete (Petrakaki et al., 2021). Self-disciplining is enabled by moralizing practices that define health as an individual moral obligation or a public duty (Kent, 2023b). Self-management and self-monitoring might thus literally open the door of one's home to the regulatory mechanisms of public health institutions and economic agents, which some argue is a case of neoliberal biopower (Sanders, 2016). Hence, medicalization, extending the clinical gaze into the home, may become holistic in the sense that it subjugates all health-related aspects of a human being to medical control (Vogt et al., 2016). The aim of medicine would then no longer be treatment or prevention, but optimization. In order to achieve this, holistic medicalization would need to transform the home into an environment that enables the constant flow of data and the routines necessary for a healthy lifestyle. As a consequence, the technical imperatives of MAI could make it necessary for individuals to conform to routines and protocols that enable standardized, effective, and cost-effective processes. Disciplining individuals through MAI technology may thus be a strategy for implementing health agendas and serving the financial interests of public health institutions or economic agents.
Identities
Social practices and the relationships between different actors constitute the home environment as a relational space. By transforming the very fabric of this relational space, MAI-powered medicalization may also affect the identities of said actors. Take the example of assistive technologies for older adults. Their main goal is to enable older adults to stay in their home and empower them to lead a mostly independent life. Reconfiguring the home through MAI technologies might come with the major downside of said technology, which is digital positivism and its two main risks, reductionism and bias.

Reducing individuals to data packages and structuring their daily lives according to algorithmic models based on this data implies the possibility of datafying inherently qualitative and rich phenomena like health or mood (Lupton, 2017). The data that needs to be processed stems from behavior and the complex interactions of individuals with their material environment and other stakeholders. This data may be messy, ambiguous, and complex. Hence, complexity management becomes difficult given the need for standardization and quantification of data that mostly contains qualitative information (Rubeis, 2022). This may lead to a complexity reduction for technical reasons, undermining the goal of personalization and ignoring the individuality of persons (Rubeis et al., 2022). The result could be a reduction of human experience, since a healthy lifestyle is reduced to quantifiable data (Lupton, 2017). Implementing assistive technologies for financial reasons, i.e. to save personnel costs, might also lead to isolation and feelings of dependence (Lupton, 2014). Bias may occur since stereotypical views, in the case of AAL views of old age, might be inscribed into the technology. The result then is ageism, i.e. framing older adults by ascribing certain qualities like frailty or dependency to them (Rubeis et al., 2022). This would undermine active ageing as the very goal of most assistive technologies. Instead of allowing older adults to live an active and mostly independent life according to their own preferences and goals, ageist technologies could perpetuate dependencies.

Apart from reductionism and bias, the transformation of the private environment might also bring about a new identity, that of the digital health self (Kent, 2023c). Promoting a healthy lifestyle and idealized bodies might motivate healthy individuals to self-optimize by using MAI applications. The self-optimizing digital health self could become the ideal of a responsible and productive individual that continually improves their body. This implies a view of individual health as an ongoing project to be worked on, which can be seen as a manifestation of the neoliberal self-disciplining mechanism (Kent, 2023b). The digital health self thus fits with the neoliberal vision of the entrepreneurial self, implying that individuals are responsible and accountable for their own health (Petrakaki et al., 2021). Individualism, flexibility, and reflexivity constitute this entrepreneurial identity, which may pertain to patients and healthy individuals alike.

However, empowerment could be another way in which medicalization of the private realm changes the identities of individuals.
Empowerment may occur since individuals can gain more information on their own body and health, and with it also more control (Lupton, 2013). Following this view, self-management and self-monitoring or the availability of health-related information may enable individuals to escape the clinical gaze by reducing the control exerted by medical authorities. Individuals would no longer be bound to the rules of medical institutions and could perform health-related activities in their chosen environment. This de-centering of medical expertise would of course imply that individuals gather knowledge only through the lens of algorithms, which poses the ethical risks linked to digital positivism (Petrakaki et al., 2021). However, as some argue, MAI technologies may enable individuals to take control over their health without subjugating themselves to a neoliberal agenda. Some technologies might have a certain disciplining effect, but this self-disciplining might be a choice by the users themselves. It may help them to achieve self-chosen goals by offering tools to cope with their own lack of discipline, which might be just the kind of empowerment that is needed (Schüll, 2016). In other words, MAI technologies might support individuals in nudging themselves into behaviors that help them to achieve their self-chosen goals (Pols et al., 2019). Hence, as long as the goals of MAI implementation and use correspond with those of users, they can have an empowering effect.

Other positive effects could come from the networking of MAI users. Online peer advice, crowd diagnosis, and the healing effects of sharing health experiences with others could improve the physical as well as mental well-being of individuals (Petrakaki et al., 2021). In this regard, participating in one's own health management may enable individuals to make sense of health data and communicate difficult topics that are prone to stigmatization (Sharon, 2016). It could also be an opportunity for an LHS to learn from individual experience.
7.2.3 A Question of Agency
LHS and their ecosystems of care have the potential to significantly transform the home environment by extending the clinical gaze and the medical domain itself. The use of MAI technologies in the hitherto private domain may restructure the living arrangements, daily routines, and identities of individuals. The borders between health, fitness, and well-being, as well as between patients and healthy individuals, might shift or become blurry as a consequence. Freeing medical knowledge and practices from the confines of medical institutions may also have positive outcomes. The free flow of valuable health data is essential to power LHS, which may significantly improve the quality of healthcare. Gaining knowledge about and control over one's own body and health may empower individuals. Sharing health data and individual experiences with others could help individuals to cope with their own situation and overcome stigma.

Whether the transformation of the home environment yields positive or negative results depends on the agency of individuals engaging with technologies.
As we have seen, technology use does not follow a fixed script, but has to be interpreted as a performative interaction, where technology shapes social actions as well as relationships and vice versa. This makes the reality of technology use much more ambiguous than some commentators would admit. Interacting with technologies implies navigating the landscape of power, autonomy, and control (Pols et al., 2019). Different identities may emerge from this. It is not only the technology that shapes these identities, but the practices and relationships it is embedded in. On the other hand, practices and relationships may also transform the technologies, generating new forms of use and allowing individuals to pursue goals that have not been foreseen by developers.

From an ethical perspective, it is therefore wrong to argue simply for or against MAI-enhanced smart homes, self-management, and self-monitoring. The main task is to create technologies, environments, and relationships that enable individuals to use MAI applications for achieving their own goals. Ecosystems of care do not only consist of technical elements, but first and foremost of people and relationships. Technology design must anticipate that MAI applications need to fit with the complex networks that constitute healthcare environments. This requires a participatory technology design approach that uses information from real-world healthcare environments and the expertise of care givers as well as care recipients. This also applies to healthy individuals who use MAI technologies for health prevention or promotion. Functioning ecosystems of care could not only directly benefit those who are part of them. Embedded in the bigger structure of a LHS, they could drive quality improvement, research efforts, and innovation. If these ecosystems are conducive to individual experience and qualitative information in terms of thick data, they could also contribute to preventing digital positivism.
7.3 Urban Environments
LHS and ecosystems of care cannot exist in a vacuum, but have to be integrated into existing structures and environments. Besides institutional environments and home environments, the urban environment is a major factor in this regard. The concept of the smart city might enable the integration of the technological infrastructure necessary for LHS and ecosystems of care into the urban environment. Smart city refers to the utilization of networked digital infrastructures to enhance urban development in economic, political, social, and cultural respects (Hollands, 2008). The aim is to regulate transportation, leverage the economy, enhance public safety, and optimize the use of energy and utilities (Solanas et al., 2014). This requires the coordination of data from different sources by using smart sensors and IoT, cloud solutions, and 5G technology (Rathi et al., 2021). The networked utilization of these technologies makes it possible to optimize processes for leveraging economic development, education, and social services (Solanas et al., 2014). In the light of increasing urbanization, using a networked big data approach for the effective use of resources and the regulation of processes such as transport and energy consumption could enable sustainable urban growth (Savastano et al., 2023).
Healthcare is also among the sectors that could benefit from the smart city concept. The vision is to embed MAI technologies into the smart city infrastructure, which would make it possible to gather information on the living environment of individuals in real time and to provide healthcare services with active context awareness (Solanas et al., 2014). MAI technologies could detect and automatically adapt to changes in an individual's environment by using the sensor, 5G, and cloud computing infrastructure of a smart city. For example, interactive information poles installed across the city could inform individuals about pollen pollution, warn them about areas in the city to avoid, and direct them to the nearest pharmacy (Solanas et al., 2014). Smart wearables could inform healthcare providers about an accident and help them to locate the patient and to decide which responder is closest to be dispatched in order to ensure the fastest possible aid (Rathi et al., 2021). If such a system were successfully integrated into the smart city infrastructure, it could also be possible to automatically assess the best traffic route, guide the ambulance, and adjust traffic lights to enable a speedy transport to the nearest hospital (Solanas et al., 2014). Monitoring devices and smart sensors, e.g. for detecting body temperature, could be applied to detect signals of infectious disease, which could enhance the prediction and prevention of pandemics (Rathi et al., 2021). Similar approaches could support the early detection of mental health issues and the corresponding responses (Chakraborty et al., 2023).

One example of the potential of smart cities to enable health promotion and prevention is the city of Kashiwanoha, Japan (Trencher & Karvonen, 2019). The city has launched several health initiatives based on MAI-powered monitoring and health education. Citizens were encouraged to participate in a monitoring program in which digital pedometers recorded steps walked and digital scales recorded weight, body fat percentage, and Body Mass Index (BMI). Users received feedback on their health status in an internet portal and could communicate with other users. There was also a gamification aspect, since users could compete with each other by comparing numbers on calories burned and steps walked. In education centers for health promotion, volunteers spread health information and encouraged citizens to change their lifestyles. There was also a program that offered financial incentives to citizens who engaged in exercise and a healthy lifestyle. Citizens welcomed the initiative and responded positively to the measures.
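The metrics used in programs of this kind are simple to state precisely. BMI, for instance, is body weight in kilograms divided by the square of height in meters. The following sketch of a feedback loop is illustrative only: the cut-offs follow the widely used WHO classification, while the step goal and feedback texts are invented and do not reproduce the actual Kashiwanoha portal.

```python
# Illustrative sketch of a monitoring feedback loop: compute BMI
# (weight / height^2) and return a simple status message. BMI cut-offs
# follow the common WHO classification; feedback texts are invented.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def feedback(weight_kg: float, height_m: float, steps_today: int) -> str:
    value = bmi(weight_kg, height_m)
    if value < 18.5:
        status = "underweight"
    elif value < 25:
        status = "normal weight"
    elif value < 30:
        status = "overweight"
    else:
        status = "obese"
    step_note = "daily step goal reached" if steps_today >= 8000 else "keep walking"
    return f"BMI {value:.1f} ({status}); {steps_today} steps ({step_note})"

print(feedback(72.0, 1.78, 9150))  # BMI 22.7 (normal weight); goal reached
```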
Apart from health promotion and prevention, smart cities would be the perfect opportunity for realizing LHS and unfolding their full potential. One could imagine a smart city as the ultimate ecosystem of care, coordinating healthcare and public services, assisting individuals with special needs, and enabling health promotion and prevention. Ubiquitous data collection and constant data exchange could enable the effective and efficient coordination of health, social, and communal services. Combined with automated processes, such an approach could allocate resources more efficiently, streamline healthcare delivery, save costs, further personalize health services, and improve the overall quality of care. However, all these benefits would come at immense ethical costs.

7.3.1 Ethical Implications
The most obvious ethical issue is the massive, all-encompassing surveillance such an approach would imply. This would not only mean a threat to privacy, but rather end privacy altogether. In some visions of healthcare in a smart city, public surveillance is constant and ubiquitous. Cameras and sensors in public places, smart wearables, or IoT applications in their homes would monitor individuals at all times. The public environment would become a permanent object of the clinical gaze. The private realm would cease to exist, since mHealth and IoT would dissolve the walls of the home. Data would flow constantly between a variety of actors, medical and non-medical alike.

It is difficult to see how autonomy in terms of informed consent could be maintained in such a scenario of total surveillance. As we have seen, it will be a great challenge to create models of informed consent that enable autonomy in MAI-powered individual treatment (see Sect. 5.1.2). This challenge is even greater in the context of healthcare in a smart city, since data is not bound to the medical sector, but is exchanged with social and public services. This gives a whole new range of actors access to individual health data as well as metadata, such as location and movement. It is doubtful whether it will be technically possible to maintain informed consent in such a scenario. There is a certain risk that individual health benefits and especially the public good could be used to justify the gradual weakening of consent-based autonomy.

We would face biometric surveillance on an unprecedented level, which would significantly increase the disciplining power of governments and economic agents (Sanders, 2016). Transforming urban environments into surveillance environments would result in a hyper-medicalization that exacerbates the issues of disciplining and the transformation of identities discussed above. As some argue, the good of health cannot justify the risk of the immense accumulation of power that mass surveillance for health reasons on this scale would imply (Davies, 2021). Governmental actors could use the huge amount of individual health data for purposes other than health. The metadata, for example the movement patterns of individuals from smart wearables or public surveillance, could be used to track the behaviors of individuals. Health insurance companies could use this data for stratifying individuals or groups into risk categories, thus possibly exacerbating existing health disparities. Commercial agents could access the data to learn about an individual's behavior and tailor ads accordingly or nudge individuals into buying products. Public institutions could also apply data-enhanced nudging in order to save costs or enforce desired behaviors.

Possible health risks and preventive strategies as well as the corresponding health technologies would become omnipresent in the city. This could be interpreted as a healthist agenda that defines the preservation of health as a civic duty. Healthism would thus be inscribed in the city's infrastructure, resulting in a normative technological environment that is conducive to disciplining or self-disciplining. The digital health self could become the norm in such a setting, idealizing certain interpretations of health and body types and standardizing behaviors.
This could transform the identities of individuals into bio-citizens, i.e. a moral identity that defines taking care of one's own health as a moral duty and an obligation towards others (Halse, 2009).

Concerning identities, we also face the risk of digital positivism. Reductionist approaches may focus on quantifiable data and neglect the contexts in which data is produced. This may ignore existing health disparities. Not all inhabitants of a city have the same resources, the same knowledge, or the same level of access to public services. The obligation to take care of one's own health or to follow directions for disease prevention might therefore not be fulfillable by all. The Covid-19 pandemic has shown that health disparities lead to different outcomes for different groups. The pandemic hit some social groups harder than others due to their lack of access to information, their housing conditions, or their financial resources. A reductionist approach that defines norms and standardizes individuals for the sake of operational logic overlooks these real-life disparities. Furthermore, bias might be inscribed into health technologies in a smart city. In a way, we would face the well-known bias problem of MAI technologies, but on a much larger scale. Again, this could exacerbate existing inequities and structural discrimination. Even in conventional cities, architecture, the organization of space, workplaces, and public transport often disadvantage marginalized social groups. This has been shown from a feminist perspective (Bondi & Rose, 2003) as well as with regard to racism (Fu, 2022) and people with disabilities (Gissen, 2023). Biased MAI in a smart city could make structural discrimination total. Take the example of older adults. When policy makers and healthcare professionals frame this group as frail, dependent, and in need of constant care, ubiquitous surveillance and monitoring could be seen as a necessary measure. As a consequence, keeping older adults safe could become an imperative that justifies total surveillance. This might also lead to a situation where older adults are confined to certain areas of the city in order to minimize health risks. This is another example of how a flawed MAI design, implementation, and use can undermine the main goal of personalization.

Strategies
The concept of smart healthcare in a smart city is promising, but bears the same risks as MAI technologies in individual medicine, only on a greater scale. Therefore, some of the same strategies apply. New ways of informed consent and autonomy protection have to be implemented. This includes the conceptual side, i.e. reframing what consent means in the context of MAI use, defining rights and obligations, and creating new strategies for informing individuals and enabling their autonomous decisions. The technical side implies methods for obtaining and managing consent, for example through blockchain technologies. The challenge will be to create models of informed consent that are transparent and applicable to the complex networks of service providers and institutions involved. Transparency is crucial here, since individuals have a right to know who can access which data for which purposes. In addition, data security and privacy protection will have to be upgraded to deal with the unprecedented scale of availability of personal health data and the broad spectrum of actors involved.
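What "managing consent" across such networks could mean technically can be hinted at with a small sketch: a per-individual registry recording which actor may use which data category for which purpose, checked before any access. All names are hypothetical; a production system would add authentication, audit trails, revocation workflows, and possibly a distributed ledger.

```python
# Minimal sketch of machine-readable consent: a registry of (actor, data
# category, purpose) permissions that is checked before any data access.
# All names are hypothetical; real systems add authentication and auditing.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Granted (actor, data_category, purpose) triples for one individual.
    grants: set = field(default_factory=set)

    def grant(self, actor: str, category: str, purpose: str) -> None:
        self.grants.add((actor, category, purpose))

    def revoke(self, actor: str, category: str, purpose: str) -> None:
        self.grants.discard((actor, category, purpose))

    def is_permitted(self, actor: str, category: str, purpose: str) -> bool:
        return (actor, category, purpose) in self.grants

registry = ConsentRegistry()
registry.grant("gp_practice", "activity_data", "treatment")

print(registry.is_permitted("gp_practice", "activity_data", "treatment"))  # True
print(registry.is_permitted("insurer", "activity_data", "risk_scoring"))   # False
registry.revoke("gp_practice", "activity_data", "treatment")
print(registry.is_permitted("gp_practice", "activity_data", "treatment"))  # False
```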
Mass surveillance, both through public and private monitoring, must be the object of public deliberation. A smart city that aims to implement programs of health promotion and prevention will have to communicate the methods, purposes, and goals of such an initiative. Medicalization and healthism should be discussed openly in order to inform citizens about the benefits and risks of broad-scale health monitoring. Civic participation should not be confined to a referendum where citizens may vote for or against such an initiative. Rather, citizens should be included in decision-making on purposes and goals, which in turn enables them to decide which methods of health monitoring are acceptable and to what extent. City authorities will have to make all the interests involved transparent, which also means disclosing the possible participation of commercial agents and their role within such a partnership.

A democratic approach is also necessary to ensure that marginalized groups have a voice in the process. Some of these groups may benefit most from smart healthcare in a smart city, since it could enable a more equitable healthcare delivery and also make the city a more inclusive place. However, these benefits do not follow automatically from simply implementing MAI technologies. Policy makers have to define equity and inclusion as explicit goals of technology design, implementation, and use. This requires information and expertise from those who will be affected by said technologies. Hence, participatory approaches should focus on including individuals from marginalized groups to enable inclusive and equitable healthcare solutions.

The smart city could be the perfect opportunity to organize healthcare delivery more effectively and efficiently, increase sustainability, leverage the LHS approach, and overcome existing health disparities and barriers that prevent marginalized groups from participating in social life and cultural activities. Unleashing this potential requires a joint effort of city authorities, public health actors, citizens, and technology developers based on democratic participation. Technical solutionism, i.e. hoping technology itself will somehow bring about all the desired benefits, is the wrong approach here. Instead, we will have to actively reshape the public environment of the city in order to transform it into a place for smart healthcare.
References

Abdulhussein, H., Turnbull, R., Dodkin, L., & Mitchell, P. (2021). Towards a national capability framework for Artificial Intelligence and Digital Medicine tools – A learning needs approach. Intelligence-Based Medicine, 5, 100047. https://doi.org/10.1016/j.ibmed.2021.100047

Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7, e7702. https://doi.org/10.7717/peerj.7702

Alqahtani, F., Al Khalifah, G., Oyebode, O., & Orji, R. (2019). Apps for mental health: An evaluation of behavior change strategies and recommendations for future development. Frontiers in Artificial Intelligence, 2, 30. https://doi.org/10.3389/frai.2019.00030

Andrews, G. J., Evans, J., & Wiles, J. L. (2013). Re-spacing and re-placing gerontology: Relationality and affect. Ageing and Society, 33, 1339–1373.
Angus, J., Kontos, P., Dyck, I., Mckeever, P., & Poland, B. (2005). The personal significance of home: Habitus and the experience of receiving long-term home care. Sociology of Health & Illness, 27, 161–187.
Antonovsky, A. (1979). Health, stress and coping: New perspectives on mental and physical wellbeing. Jossey-Bass.
Apaydin, E. (2020). Administrative work and job role beliefs in primary care physicians: An analysis of semi-structured interviews. SAGE Open, 10(1). https://doi.org/10.1177/2158244019899092
Armstrong, D. (1995). The rise of surveillance medicine. Sociology of Health & Illness, 17, 393–404.
Bizzo, B. C., Dasegowda, G., Bridge, C., Miller, B., Hillis, J. M., Kalra, M. K., Durniak, K., Stout, M., Schultz, T., Alkasab, T., & Dreyer, K. J. (2023). Addressing the challenges of implementing artificial intelligence tools in clinical practice: Principles from experience. Journal of the American College of Radiology (JACR), 20, 352–360.
Blezek, D. J., Olson-Williams, L., Missert, A., & Korfiatis, P. (2021). AI integration in the clinical workflow. Journal of Digital Imaging, 34, 1435–1446.
Bondi, L. I. Z., & Rose, D. (2003). Constructing gender, constructing the urban: A review of Anglo-American feminist urban geography. Gender, Place and Culture, 10, 229–245.
Briganti, G., & Le Moine, O. (2020). Artificial intelligence in medicine: Today and tomorrow. Frontiers in Medicine (Lausanne), 7, 27. https://doi.org/10.3389/fmed.2020.00027
Carlton, S. (2020). The era of exponential improvement in healthcare? THMT, 5. Available at: https://telehealthandmedicinetoday.com/index.php/journal/article/view/166. Accessed 14 Aug 2023.
Chakraborty, A., Banerjee, J. S., Bhadra, R., Dutta, A., Ganguly, S., Das, D., Kundu, S., Mahmud, M., & Saha, G. (2023). A framework of intelligent mental health monitoring in smart cities and societies. IETE Journal of Research, 1–14.
Conrad, P. (2005). The shifting engines of medicalization. Journal of Health and Social Behavior, 46, 3–14.
Crawford, R. (1980). Healthism and the medicalization of everyday life. International Journal of Health Services, 10(3), 365–388. https://doi.org/10.2190/3H2H-3XJN-3KAY-G9NY
Davies, B. (2021). 'Personal health surveillance': The use of mHealth in healthcare responsibilisation. Public Health Ethics, 14, 268–280.
Ergin, E., Karaarslan, D., Şahan, S., & Bingöl, Ü. (2023). Can artificial intelligence and robotic nurses replace operating room nurses? The quasi-experimental research. Journal of Robotic Surgery, 1–9.
Erickson, B. J., Langer, S. G., Blezek, D. J., Ryan, W. J., & French, T. L. (2014). DEWEY: The DICOM-enabled workflow engine system. Journal of Digital Imaging, 27, 309–313.
Friedman, C. P., Wong, A. K., & Blumenthal, D. (2010). Achieving a nationwide learning health system. Science Translational Medicine, 2(57), 57cm29. https://doi.org/10.1126/scitranslmed.3001456
Friedman, C., Rubin, J., Brown, J., Buntin, M., Corn, M., Etheredge, L., Gunter, C., Musen, M., Platt, R., Stead, W., Sullivan, K., & Van Houweling, D. (2015). Toward a science of learning systems: A research agenda for the high-functioning learning health system. Journal of the American Medical Informatics Association (JAMIA), 22, 43–50.
Fritz, R., Wuestney, K., Dermody, G., & Cook, D. J. (2022). Nurse-in-the-loop smart home detection of health events associated with diagnosed chronic conditions: A case-event series. International Journal of Nursing Studies Advanced, 4, 100081. https://doi.org/10.1016/j.ijnsa.2022.100081
Fu, A. S. (2022). Can buildings be racist? A critical sociology of architecture and the built environment. Sociological Inquiry, 92, 442–465.
Gissen, D. (2023). The architecture of disability: Buildings, cities, and landscapes beyond access. University of Minnesota Press.
Halse, C. (2009). Bio-citizenship: Virtue discourses and the birth of the bio-citizen. In: Wright, J. & Harwood, V. (eds.). Biopolitics and the 'obesity epidemic': Governing bodies. Routledge, 45–59.
Hartmann, K. V., Primc, N., & Rubeis, G. (2023). Lost in translation? Conceptions of privacy and independence in the technical development of AI-based AAL. Medicine, Health Care, and Philosophy, 26, 99–110.
Hashiguchi, T. C. O., Oderkirk, J., & Slawomirski, L. (2022). Fulfilling the promise of artificial intelligence in the health sector: Let's get real. Value in Health, 25, 368–373.
Hazarika, I. (2020). Artificial intelligence: Opportunities and implications for the health workforce. International Health, 12, 241–245.
Hollands, R. G. (2008). Will the real smart city please stand up? City, 12, 303–320.
Joseph, A. L., Monkman, H., Kushniruk, A., & Quintana, Y. (2023). Exploring patient journey mapping and the learning health system: Scoping review. JMIR Human Factors, 10, e43966. https://doi.org/10.2196/43966
Kent, R. (2023a). 1: Transformations of health in the digital society. In: Kent, R. (ed.). The digital health self. Bristol University Press, 1–22. https://doi.org/10.56687/9781529210163-004
Kent, R. (2023b). 4: Discipline and moralism of our health. In: Kent, R. (ed.). The digital health self. Bristol University Press, 77–100. https://doi.org/10.56687/9781529210163-007
Kent, R. (2023c). 6: Sharing 'healthiness'. In: Kent, R. (ed.). The digital health self. Bristol University Press, 131–149. https://doi.org/10.56687/9781529210163-009
Kerr, R. S. (2020). Surgery in the 2020s: Implications of advancing technology for patients and the workforce. Future Healthcare Journal, 7, 46–49.
Kotter, E., & Ranschaert, E. (2021). Challenges and solutions for introducing artificial intelligence (AI) in daily clinical workflow. European Radiology, 31, 5–7.
Krittanawong, C. (2018). The rise of artificial intelligence and the uncertain future for physicians. European Journal of Internal Medicine, 48, e13–e14. https://doi.org/10.1016/j.ejim.2017.06.017
Lambert, S. I., Madi, M., Sopka, S., Lenes, A., Stange, H., Buszello, C.-P., & Stephan, A. (2023). An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. npj Digital Medicine, 6(1), 111. https://doi.org/10.1038/s41746-023-00852-5
Liaschenko, J. (1994). The moral geography of home care. ANS. Advances in Nursing Science, 17(2), 16–26. https://doi.org/10.1097/00012272-199412000-00005
Linda, L., Darren, M., Sophie, R., Anna, S., Alex, A., Emily, F., Jo, W., Sarah Gerard, D., John, L. C., & Rob, A. (2020). Understanding why primary care doctors leave direct patient care: A systematic review of qualitative research. BMJ Open, 10(5), e029846. https://doi.org/10.1136/bmjopen-2019-029846
Lupton, D. (2013). The digitally engaged patient: Self-monitoring and self-care in the digital health era. Social Theory and Health, 11, 256–270. https://doi.org/10.1057/sth.2013.10
Lupton, D. (2014). Critical perspectives on digital health technologies. Sociology Compass, 8, 1344–1359. https://doi.org/10.1111/soc4.12226
Lupton, D. (2017). Self-tracking, health and medicine. Health Sociology Review, 26, 1–5. https://doi.org/10.1080/14461242.2016.1228149
Martin, E. (1994). Flexible bodies: Tracking immunity in American culture from the days of polio to the age of AIDS. Beacon Press.
Mello, M. M., & Wang, C. J. (2020). Ethics and governance for digital disease surveillance. Science, 368, 951–954.
Moraitou, M., Pateli, A., & Fotiou, S. (2017). Smart health caring home: A systematic review of smart home care for elders and chronic disease patients. Advances in Experimental Medicine and Biology, 989, 255–264. https://doi.org/10.1007/978-3-319-57348-9_22
Morita, P. P., Sahu, K. S., & Oetomo, A. (2023). Health monitoring using smart home technologies: Scoping review. JMIR mHealth and uHealth, 11, e37347. https://doi.org/10.2196/37347
Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future – Big data, machine learning, and clinical medicine. The New England Journal of Medicine, 375, 1216–1219. https://doi.org/10.1056/NEJMp1606181
Ozkaynak, M., Unertl, K., Johnson, S., Brixey, J., & Haque, S. N. (2022). Clinical workflow analysis, process redesign, and quality improvement. In: Finnell, J.T. & Dixon, B.E. (eds.). Clinical informatics study guide. Springer, 103–118. https://doi.org/10.1007/978-3-030-93765-2_8
Pepito, J. A., & Locsin, R. (2019). Can nurses remain relevant in a technologically advanced future? International Journal of Nursing Sciences, 6, 106–110.
Persson, M., Redmalm, D., & Iversen, C. (2022). Caregivers' use of robots and their effect on work environment – A scoping review. Journal of Technology in Human Services, 40, 251–277.
Pesapane, F., Codari, M., & Sardanelli, F. (2018). Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. European Radiology Experimental, 2(1), 35. https://doi.org/10.1186/s41747-018-0061-6
Petersson, L., Larsson, I., Nygren, J. M., Nilsen, P., Neher, M., Reed, J. E., Tyskbo, D., & Svedberg, P. (2022). Challenges to implementing artificial intelligence in healthcare: A qualitative interview study with healthcare leaders in Sweden. BMC Health Services Research, 22(1), 850. https://doi.org/10.1186/s12913-022-08215-8
Petrakaki, D., Hilberg, E., & Waring, J. (2018). Between empowerment and self-discipline: Governing patients' conduct through technological self-care. Social Science & Medicine, 213, 146–153.
Petrakaki, D., Hilberg, E., & Waring, J. (2021). The cultivation of digital health citizenship. Social Science & Medicine, 270, 113675. https://doi.org/10.1016/j.socscimed.2021.113675
Pols, J., Willems, D., & Aanestad, M. (2019). Making sense with numbers. Unravelling ethicopsychological subjects in practices of self-quantification. Sociology of Health & Illness, 41, 98–115.
Rampton, V., Mittelman, M., & Goldhahn, J. (2020). Implications of artificial intelligence for medical education. Lancet Digit Health, 2(3), e111–e112. https://doi.org/10.1016/S2589-7500(20)30023-6
Rathi, V. K., Rajput, N. K., Mishra, S., Grover, B. A., Tiwari, P., Jaiswal, A. K., & Hossain, M. S. (2021). An edge AI-enabled IoT healthcare monitoring system for smart cities. Computers and Electrical Engineering, 96, 107524. https://doi.org/10.1016/j.compeleceng.2021.107524
Reid, R. J., & Greene, S. M. (2023). Gathering speed and countering tensions in the rapid learning health system. Learning Health Systems, 7(3), e10358. https://doi.org/10.1002/lrh2.10358
Rieger, T., Roesler, E., & Manzey, D. (2022). Challenging presumed technological superiority when working with (artificial) colleagues. Scientific Reports, 12(1), 3768. https://doi.org/10.1038/s41598-022-07808-x
Rier, D. A. (2022). Responsibility in medical sociology: A second, reflexive look. The American Sociologist, 53, 663–684.
Rubeis, G. (2021). Guardians of humanity? The challenges of nursing practice in the digital age. Nursing Philosophy, 22(2), e12331. https://doi.org/10.1111/nup.12331
Rubeis, G. (2022). Complexity management as an ethical challenge for AI-based age tech. In Proceedings of the 15th international conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 2022. Association for Computing Machinery. https://doi.org/10.1145/3529190.3534752
Rubeis, G. (2023). Liquid health. Medicine in the age of surveillance capitalism. Social Science & Medicine, 322, 115810. https://doi.org/10.1016/j.socscimed.2023.115810
Rubeis, G., Fang, M. L., & Sixsmith, A. (2022). Equity in agetech for ageing well in technology-driven places: The role of social determinants in designing AI-based assistive technologies. Science and Engineering Ethics, 28(6), 67. https://doi.org/10.1007/s11948-022-00424-y
Samerski, S. (2018). Individuals on alert: Digital epidemiology and the individualization of surveillance. Life Sciences, Society and Policy, 14, 13. https://doi.org/10.1186/s40504-018-0076-z
Sanders, R. (2016). Self-tracking in the digital era: Biopower, patriarchy, and the new biometric body projects. Body & Society, 23, 36–63.
Sapci, A. H., & Sapci, H. A. (2019). Innovative assisted living tools, remote monitoring technologies, artificial intelligence-driven solutions, and robotic systems for aging societies: Systematic review. JMIR Aging, 2(2), e15429. https://doi.org/10.2196/15429
Savage, S., Flores-Saviaga, C., Rodney, R., Savage, L., Schull, J., & Mankoff, J. (2022). The global care ecosystems of 3D printed assistive devices. ACM Transactions on Accessible Computing, 15(31), 1–29. https://doi.org/10.1145/3537676
Savastano, M., Suciu, M.-C., Gorelova, I., & Stativă, G.-A. (2023). How smart is mobility in smart cities? An analysis of citizens' value perceptions through ICT applications. Cities, 132, 104071. https://doi.org/10.1016/j.cities.2022.104071
Schüll, N. D. (2016). Data for life: Wearable technology and the design of self-care. BioSocieties, 11, 317–333.
Sharon, T. (2016). Self-tracking for health and the quantified self: Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philosophy and Technology, 30, 93–121.
Singh, R. P., Hom, G. L., Abramoff, M. D., Campbell, J. P., & Chiang, M. F. (2020). Current challenges and barriers to real-world artificial intelligence adoption for the healthcare system, provider, and the patient. Translational Vision Science & Technology, 9(2), 45. https://doi.org/10.1167/tvst.9.2.45
Slutsky, J. R. (2007). Moving closer to a rapid-learning health care system. Health Affairs, 26(2), w122–w124. https://doi.org/10.1377/hlthaff.26.2.w122
Solanas, A., Patsakis, C., Conti, M., Vlachos, I. S., Ramos, V., Falcone, F., Postolache, O., Perez-Martinez, P. A., Pietro, R. D., Perrea, D. N., & Martinez-Balleste, A. (2014). Smart health: A context-aware health paradigm within smart cities. IEEE Communications Magazine, 52(8), 74–81. https://doi.org/10.1109/MCOM.2014.6871673
Swartz, H. A. (2023). Artificial intelligence (AI) psychotherapy: Coming soon to a consultation room near you? American Journal of Psychotherapy, 76, 55–56.
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25, 44–56. https://doi.org/10.1038/s41591-018-0300-7
Trencher, G., & Karvonen, A. (2019). Stretching "smart": Advancing health and well-being through the smart city agenda. Local Environment, 24, 610–627.
Vogt, H., Hofmann, B., & Getz, L. (2016). The new holism: P4 systems medicine and the medicalization of health and life itself. Medicine, Health Care, and Philosophy, 19, 307–323.
Woolhandler, S., & Himmelstein, D. U. (2014). Administrative work consumes one-sixth of U.S. physicians' working hours and lowers their career satisfaction. International Journal of Health Services, 44, 635–642. https://doi.org/10.2190/HS.44.4.a
World Health Organization (WHO). (2002). Active ageing: A policy framework. World Health Organization. Available at: https://apps.who.int/iris/handle/10665/67215. Accessed 14 Aug 2023.
Xie, Y., Lu, L., Gao, F., He, S.-J., Zhao, H.-J., Fang, Y., Yang, J.-M., An, Y., Ye, Z.-W., & Dong, Z. (2021). Integration of artificial intelligence, blockchain, and wearable technology for chronic disease management: A new paradigm in smart healthcare. Current Medical Science, 41, 1123–1133.
Chapter 8
Instead of a Conclusion: Seven Lessons for the Present and an Outlook
Abstract In this closing chapter, I summarize my main results, but not in the form of a narrative conclusion. Rather, I define seven lessons for the present that can be derived from the ethical analysis provided in the previous chapters. These lessons serve as a compass that may help to navigate the impact of MAI on practices, relationships, and environments in medicine. I also attempt a short outlook, discuss some limitations of this book, and outline some topics for future research.

Keywords Digital transformation · Disruption · Environments · Epistemology · Philosophy of medicine · Practices · Relationships · Surveillance capitalism

The final chapter of a book like this one is usually expected to offer some sort of conclusion that summarizes the main points and crucial messages. Since the architecture of this book, at least in Part II, is not a linear one in which each chapter builds on top of the previous one, but rather that of a prism in which the different aspects of certain phenomena are viewed from different perspectives, I do not think that a conclusion is appropriate. Rather, I would like to formulate seven lessons which I think can be learned from the ethical analysis. I call them lessons because I am convinced that mere insights are insufficient for dealing with MAI technologies from an ethics perspective. What we need is some kind of compass that helps us make sense of the multifaceted transformation MAI brings with it and at the same time helps us navigate it. Navigating here means, on the one hand, being prepared for and handling the various ethical issues that may arise from MAI; on the other hand, it means creating and designing MAI technologies in a way that prevents those issues or at least mitigates them. Since both the use and the development of MAI are happening right now, I speak of lessons for the present; in the future, it will be too late. As outlined in Sect. 1.4, this book has several limitations, the most important being that it is a mapping of the field. Therefore, I could not investigate all relevant aspects adequately. However, I think that this book could serve as a basis for future research that uses the mapping provided here for more in-depth investigations. I outline a few of the areas that I think need closer investigation at the end of this chapter.
8.1 Seven Lessons for the Present
Lesson 1: Harness the Transformative Power of MAI

MAI is not a mere tool for improving isolated practices or processes in medicine. It is a driving force of transformation and, in most contexts, an agent of change. We have to come to terms with the fact that MAI will substantially change medicine. It will change how we do things. It will change how we encounter and relate to each other. And it will change the basic structures in which we act and enact health and illness, autonomy and equity, privacy and trust. But transformation does not necessarily mean improvement. Whether this process will mean a change for the better or the worse is not inscribed into the technology; it lies in our hands. MAI is not a panacea that solves all problems in medicine and healthcare. Neither is it a destructive force that necessarily leads to negative outcomes. It is, after all, a technology crafted and wielded by humans, despite its quasi-autonomous qualities. It is not data or algorithms that decide the impacts of MAI on practices, relationships, and environments, but decisions by humans. Hence, if we want certain outcomes, if we want MAI to transform, reshape, and improve practices, relationships, and environments in a specific way, we have to define this as the explicit purpose of MAI design, deployment, and use. The technology will not bring about these results automatically. We therefore need a clear strategy for managing the transformation: making risks and benefits transparent, setting goals, defining purposes and means, assigning roles, and including all those affected by MAI in the decision-making on its use.

Lesson 2: Recognize the Impact on Practices

Smart data practices will become the gold standard in medicine within the next few years. This process has already begun. As a consequence, smart data practices will be the epistemological foundation of all clinical practices, just as EBM has been for the last 30 years. Collecting and operationalizing digitized health data are the crucial processes that constitute smart data practices. This will completely transform clinical practices in terms of efficiency and efficacy, precision and personalization. It also comes with several challenges in terms of privacy protection and data security, as well as protecting the autonomy of data subjects and ensuring that standardization, reductionism, and bias do not undermine or ignore their uniqueness. Hence, we have to acknowledge the epistemological limitations and potential challenges connected to MAI. Addressing these challenges implies a mix of conceptual, technological, and regulatory measures. This in turn requires close collaboration between healthcare professionals, data subjects, software designers, and policy makers.

Lesson 3: Recognize the Impact on Relationships

Relationships are the fabric of all medical treatment. Without a functioning relationship, clinical practice, nursing, and therapeutic practice are difficult, inefficient, and in some contexts even detrimental. MAI technologies may serve as enablers of a more patient-centered healthcare due to their immense potential for making practices more efficient and personalizing treatment. MAI may enhance healthcare professionals as practitioners and thus give them the opportunity to build stronger relationships with their patients.
But this is not an automatic outcome of MAI use. If we want to strengthen the relationships in healthcare, we have to define this as an explicit goal of MAI design, deployment, and use. This means deciding on the roles of healthcare professionals, patients, and MAI within these relationships. We should enable doctors, nurses, and caregivers in mental healthcare to take on the role of enhanced practitioners who use MAI technologies as ideal tools for putting their expertise into practice. This implies providing the necessary skills and literacy through education and training. It also requires close cooperation between healthcare professionals, software designers, and policy makers to create both the technology and its infrastructure in a way that is conducive to this role. By designing and applying MAI in terms of a patient-centered medicine that acknowledges patients' individuality and mitigates the risks of digital positivism, we can realize the role of empowered patients. Patients could thus profit from the benefits of MAI in terms of improved medical outcomes as well as control over their own health data, which in turn strengthens their autonomy within the treatment process. Again, this can only become a reality if we define this role as an explicit goal. We should assign MAI technologies the role of an enabler, in some cases also of a mediator, but never of a replacement for human healthcare professionals. By performing time-consuming and repetitive tasks as well as complicated data analysis, artificial agents may support healthcare professionals in building stronger patient relationships. Financial motives to cut personnel costs by implementing MAI must not be prioritized over the crucial goal of strengthening relationships. In short: automate processes, not decisions.

Lesson 4: Recognize the Impact on Environments

Technology already shapes all environments we live and act in, and medicine is no exception. But MAI affects our work environments, personal environments, and potentially urban environments in quite specific ways. It frees the medical gaze from the confines of medical institutions. It makes the walls of the private realm permeable due to the free-floating nature of data. It may also reshape our communities by making health technologies an integral element of the public space. Furthermore, it may transform workflows and professional identities due to its potential for standardization and optimization. All these factors will impact the material aspect of our environments, meaning the ubiquitous presence of health technology, as well as the immaterial aspects, i.e. the social practices and relationships that shape our environments. How far this goes and what results will come from it is again up to us to decide. We will have to weigh the benefits of MAI against the downsides, which are mainly the disruption of professional identities, the loss of privacy, and the agglomeration of power through constant surveillance. The crucial question behind this is the status of health as a good. Is health a good that justifies all means? Should we sacrifice other goods like autonomy, privacy, and personal freedom for the sake of health? Where is the line between optimizing individual as well as public health services and paternalism? Since the ideal scenario for unleashing the potential of MAI is the LHS, a medical super-environment that encompasses ecosystems of care, these questions become pressing.
They should be addressed in a democratic process that discloses the purposes and goals, risks and benefits of MAI and makes its impact on the various environments transparent.
Lesson 5: Understand the Nature of Health Data

We have to acknowledge that there is no such thing as raw data. Health data, like all other data, is a product of social practices and the politics of measurement. Hence, the epistemological supremacy of big data must not fool us into believing that data speak for themselves. Data needs interpretation and context. In order to facilitate personalization and tailor healthcare services to individual needs, resources, and preferences, big data needs thick data. We have to make sense of individual health data in the light of the specific life situation of an individual data subject. That means integrating social determinants and personal values, beliefs, experiences, and narratives into data processing and operationalization. This strategy may create a system of checks and balances against the risks of digital positivism, i.e. reductionism and bias.

Lesson 6: Define the Nature of MAI

MAI is not a passive tool; in most of its manifestations, it has to be considered an artificial agent. It may learn from experience, operate independently of human command and control, and make autonomous decisions. However, like all technology, it is inextricably linked to human decisions, purposes, and actions. It is wrong to frame MAI as a force majeure whose collateral damage we have to somehow deal with if we want to profit from its advantages. Yes, the inherent operational logic of MAI leads to certain outcomes that are morally relevant. But that does not make MAI a moral agent. Debating whether MAI has some sort of consciousness or intentionality obscures the real ethical issue at hand: the moral agency and responsibility of those who design MAI, those who decide about its deployment, and those who use it. It is humans who decide how to design a MAI system, which features to integrate, which parameters to use, and which problems to solve. It is humans who decide for which purposes to implement a MAI system. And it is humans who decide to use a MAI system for supporting them in their clinical or administrative tasks. Hence, the machine, although autonomous in some respects, is always tethered to humans and their decisions. Instead of focusing on building moral machines, we should therefore focus on making moral decisions about the design, purposes, and uses of MAI. These decisions, and not technology itself, fundamentally shape the moral outcomes of MAI. MAI can only be a driving force for improving medicine and healthcare if we decide to design, deploy, and use it for this exact purpose. To put it simply: adapt technology to humans, not vice versa.

Lesson 7: Consider the Economic Framework of Digital Health Technologies

With the advent of MAI, we are also facing commercial agents as new players in medicine and healthcare. Of course, commercial agents already play an important role; just think of the pharma industry or traditional health technologies. But digital technologies involving big data and MAI are a game changer in this regard. Commercial agents in a MAI setting do not only provide goods and services. They often own the infrastructures and control the markets. From the provision of storage capacities and cloud computing to data ownership and domain knowledge, the big data divide manifests itself throughout the healthcare sector. Surveillance capitalism, in which individual health data and metadata are the currency, reshapes power relations and affects practices, relationships, and environments.
We have to be aware that commercial agents pursue their own interests, which might not always align with the interests of healthcare professionals and patients. The same goes for governmental actors, who may find MAI technologies useful for exerting biopower. If it is true that data is the new oil, as scholars and media outlets would have us believe, then we have all the more reason to be concerned. Given the business practices that surround oil production and trade, and the long history of wars fought over this resource, we should be aware of the socioeconomic implications of MAI in healthcare.
8.2 Outlook
In Chap. 4, I tried to give a very brief overview of crucial issues in AI and big data ethics. This is a limited and somewhat eclectic account, since the prime objective of this book is a practice-oriented one, i.e. mapping the main ethical issues and suggesting strategies for dealing with them. Future research on the ethics of MAI needs to engage with a wider spectrum of concepts and theories from the philosophy of data, the philosophy of medicine, and the philosophy of technology. One crucial aspect in this regard is the nexus between epistemology and ethics. I tried my best in this book to emphasize the relevance of this connection. However, a deeper investigation is needed that explores the shift in epistemology brought about by big data and AI. It is particularly important to reconceptualize our understanding of information as well as the nature and status of data in medicine. Sabina Leonelli's research on data journeys could be an important starting point for this kind of research (Leonelli & Tempini, 2020). Furthermore, a better understanding of what we usually refer to as AI is needed for further analyzing ethical issues connected to smart data practices in medicine. The epistemic scope and limits of these technologies, as well as our concepts of artificiality and intelligence, need to be addressed. The Atlas of AI by Kate Crawford (2021) is an immensely important work in this regard. Further medico-ethical investigations should make use of the broadened perspective this work provides by contextualizing AI within politics and what Crawford calls its planetary costs. One discourse that should be tapped into in future research is the philosophy of medicine (see for example Alex Broadbent's influential book The Philosophy of Medicine, Broadbent, 2019). In Chap. 5, I tried to analyze the impact of MAI on the practices of healthcare professionals, patients, and other users of digital health technology. In Chap. 6, I wanted to show how smart data practices may transform the roles of healthcare professionals and patients and reshape the therapeutic relationship. In Chap. 7, I envisioned the transformation of the work environment and the potential expansion of the medical domain into the private realm of patients as well as healthy persons. All of these phenomena touch upon fundamental topics in the philosophy of medicine, such as the self-image of healthcare professionals, methodological questions like the role of reductionism, evidence, clinical judgement, and expertise, and the definition of health and disease. A chief task would be to investigate the impact of MAI on the concept of EBM.
The supposed shift from large cohort studies, RCTs, and meta-reviews to big data analytics as the highest-ranking type of evidence within the evidence hierarchy is of particular relevance in this regard. Works like Jeremy Howick's The Philosophy of Evidence-Based Medicine (Howick, 2011), Harald Schmidt's bold vision in The end of medicine as we know it – and why your health has a future (Schmidt, 2022), and of course the extensive debate on EBM from the 1990s onward should be explored further in the light of MAI. Another highly relevant task would be to explore the shift in our concepts of health and disease caused by the ubiquitous and permanent availability of data and the possibilities of constant (self-)monitoring and surveillance. I have introduced the idea of liquid health as a consequence of big data and MAI, stating that the fluidity of data and surveillance liquifies the definitions of health and disease, the borders between the medical and the private domain, and the roles and relationships in healthcare (Rubeis, 2023). But this is just one aspect of a much larger topic. There is a rich tradition of theories of health and disease (for an overview see Carel & Cooper, 2014; Smart, 2016) that future research needs to engage with. Questions of normativity, pathologization, and medicalization are especially relevant here. Some of these topics I briefly discussed in this book. But there are also open questions I could not discuss. How will MAI affect the relationships between different health professions? How will it change managerial and administrative aspects in healthcare? What are the economic implications of MAI, both for the health sector (insurance practices, reimbursement, investment strategies, financial benefits) and for the economy as a whole? Addressing these questions requires an entirely different set of concepts and theories from health economics, business administration, administrative science, and public management and governance. A whole horizon of future research opens up around the impact of MAI. I hope that the mapping of the field provided in this book can contribute to exploring it.
References

Broadbent, A. (2019). The philosophy of medicine. Oxford University Press.
Carel, H., & Cooper, R. (Eds.). (2014). Health, illness and disease: Philosophical essays. Routledge.
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Howick, J. (2011). The philosophy of evidence-based medicine. Wiley-Blackwell, BMJ Books.
Leonelli, S., & Tempini, N. (Eds.). (2020). Data journeys in the sciences. Springer Open. https://doi.org/10.1007/978-3-030-37177-7
Rubeis, G. (2023). Liquid health. Medicine in the age of surveillance capitalism. Social Science and Medicine, 322, 115810. https://doi.org/10.1016/j.socscimed.2023.115810
Schmidt, H. H. W. (2022). The end of medicine as we know it – And why your health has a future. Springer.
Smart, B. (2016). Concepts and causes in the philosophy of disease. Palgrave Macmillan.
Index
The following list contains terms relevant to the fields of AI and data science, medicine, or ethics. Very frequent words like data, medicine, health(care), MAI, doctor, patient etc. are not listed.

A Access barriers, 44, 105, 174, 195, 199, 202 Agency apparent, 74 intentional, 59, 74, 75 moral, 75, 76, 250 Algorithm, 5, 20, 25, 73, 102, 222, 248 Ambient assisted living (AAL), 40, 175, 228, 229, 233 App, 42, 43, 93, 95 Artificial agents, 8, 17, 19, 64, 170–172, 175, 179, 180, 183, 196, 197, 199, 202, 217, 220–223, 234, 249, 250 Attention deficit hyperactivity disorder (ADHD), 191 Automatization, automated, 7, 9, 35, 37, 40, 41, 44, 47, 72, 74, 79, 109, 113–115, 123–125, 127, 152, 161–164, 167, 173, 178, 186, 200, 201, 223, 225, 228, 233, 238 Autonomy relational, 59, 160, 173, 193, 198, 200 Avatars, 188, 196, 197, 199, 221
B Behavioral data, 30, 31, 41, 92, 225, 226, 230 Bias algorithmic, 45, 73, 120, 122–125, 127, 129, 134, 195, 196 automation, 124, 125, 132, 133, 221 cascade, 120, 125, 128, 129, 132
cognitive, 118, 119, 138 confirmation, 118, 125 data, 45, 73, 118–120, 122–129, 131, 133, 134, 139, 195, 196 framing, 118, 123 outcomes, 74, 118–120, 123–127, 129, 132–134, 173, 177, 195, 196, 221 societal, 120 statistical, 120, 129, 134 status quo, 118 Big data approaches, 7, 19, 29–36, 41, 45, 46, 80, 91, 92, 94, 96, 102, 106, 109, 110, 116, 169, 170, 176, 179, 188, 199, 213, 224, 237 Big data collectors, 77, 106, 174 Big data divide, 77, 93, 103, 105, 250 Big data utilizers, 77, 95, 106, 174 Biobanks, 32, 33, 38, 95–98, 100, 101, 228 Biomarker digital, 42 discovery, 32 research, 191 Bipolar disorder, 188 Black boxes, 8, 29, 46, 72, 134, 135, 138–140, 164 Blockchains, 96, 101–104, 228, 240
C Capitalism, 116, 120, 250 Casuistry, 57
Causality, causation, 32, 72, 122, 134, 139 Chatbots, 5, 8, 24, 38, 157, 158, 176, 188, 196, 199 Chronic ill(ness), 227–229 Class labels, 122–124, 132, 133, 169, 194, 195 Clinical decision support system (CDSS), 8, 20, 34–37, 75, 109, 124, 125, 133, 134, 139, 152, 159–162, 164, 171, 172, 175, 188, 196, 199, 200, 221 Clinical gaze, 112–114, 116, 154, 172, 176, 177, 192, 225, 226, 231–234, 236, 239 Clinical heuristics, 117, 118 Clinical reasoning, 3, 4, 7, 24, 119 Cloud computing, 5, 32, 45, 92, 238, 250 storage, 5, 32, 45, 97, 108 Computer tomography (CT), 30, 38, 166, 219 Computer vision, 9, 18, 20, 25, 26, 29, 36, 38–40, 109, 127, 134, 188, 196 Confidentiality, 65–67, 77, 92–94, 96, 101, 106, 109 Consent authorization model of, 100 blank, 98, 100 broad, 98–100, 240 dynamics, 99–101 informed, 58, 60, 72, 95–101, 111, 239, 240 meta, 100, 239 specific, 72, 97–99 tiered, 99 Conversational agents, 38 Correlations, 27, 31, 32, 42, 72, 119, 122, 123, 130, 131, 134, 139, 155, 222 Covid-19, 124, 132, 215 Critical data studies, 10, 11, 21, 55, 78–81, 91, 153, 217
D Data doubles, 154, 169, 201 Datafication, -ified, 8, 79, 80, 110, 113, 117, 152, 169, 177, 196, 201 Data harms, 94, 95, 97 Dataism, 79, 80, 109, 113 Data mining, 35, 36, 40, 100, 109, 122, 188 Data ownership, 96, 101, 105–108, 250 Data security, 7, 67, 102, 104, 107, 109, 165, 173, 174, 199, 200, 217, 233, 240, 248 Data subjects, 60, 65, 66, 77, 92, 93, 95, 97–103, 105–109, 116, 174, 248, 250 Deceptions, 179–184, 199, 233
Decision making clinical, 35, 62–64, 80, 109, 111, 119, 124, 126, 135, 140, 154, 160, 168, 173, 188 evidence-based, 135 shared, 62, 153, 155, 157, 159–161, 173 Deep neural networks, 15, 31 Dehumanization, 44, 180, 186 Democratization, 105, 152, 174–175 Deontology, 56, 57 Depression, 187, 195, 196 Dermatology, 39 Detached concern, 64 Diagnosis, 6, 24, 25, 30, 31, 34, 38, 40, 45, 75, 76, 110, 118, 124, 127, 130, 131, 138, 166, 172, 176, 188, 194, 196, 199, 236 Digital literacy, 199, 217 Digital positivism, 78–80, 110, 113–116, 119, 120, 122, 123, 154, 160, 161, 169, 172, 177, 178, 186, 187, 194, 196, 200, 202, 217, 235–237, 240, 249, 250 Digital twins, 35–37, 115, 154 Disciplining, 178, 193, 194, 232, 234, 236, 239 Discrimination, discriminatory, 69, 73, 74, 94, 97, 118–120, 122, 123, 125, 126, 128, 131, 133, 135, 190, 192, 195, 199, 240 Disruption, -ive, 6, 46, 166, 218, 222, 249 Drug screening, 34 testing, 36, 115
E Economy, 105, 237 Ecosystem of care, 227, 233, 238 Electronic health record (EHR), 8, 30, 31, 34–37, 40–42, 92, 95, 96, 121, 153, 154, 156, 170, 177, 178, 188, 196, 214, 216, 226, 228 Embodiment, embodied, 4, 19, 20, 38, 43, 57, 158, 170, 180, 183, 188, 196, 199, 202, 221 Empathy affective, 63, 64, 153, 158, 159, 223 artificial, 157 clinical, 153, 155, 158, 165, 169, 171, 223 cognitive, 63, 64, 153, 158, 223 Empowerment, 42, 61, 100, 105, 152, 168–170, 173, 175, 184, 194, 201, 235, 236 Enhanced practitioners, 166, 167, 175, 199, 200, 249 Environmental data, 39, 40, 113, 199, 230
Epistemic gap, 3 Epistemic injustice, 126, 127 Epistemological practices, 71, 72, 111, 112 Equality, 67–69, 133 Equity, 67–69, 126, 127, 131–133, 174, 241, 248 Ethics AI, 70–80, 251 animals, 56 applied, 11, 56 big data, 11, 56, 70–80, 251 bio, 56, 59, 70 biomedical, 56, 98 clinical, 56, 57 computers, 70–72 data (science), 71 digital, 70, 71, 217 discourse, 57 embedded, 78, 80 EU's General Data Protection Regulation (GDPR), 91, 94, 109 Evidence empirical, 3, 4, 64, 111, 113, 119, 135, 198, 199, 218 evidence-based medicine, 33, 120 scientific, 3, 4, 111, 134, 156, 215 Exoskeletons, 44, 184, 185
F Fairness algorithmic, 129, 131, 138, 195 gender, 195 metrics, 129–131 normative, 131 Federated learning, 96, 101–104 Feminist ethics, 56
H Health disparities, 68, 69, 74, 121, 123–127, 133, 195, 226, 239–241 Healthism, 230–236, 239, 241 Human-in-the-loop, 138, 140, 163, 185, 196 Human-machine interaction (HCI), 8, 172, 179–185
I Identities, 9, 59, 60, 68, 77, 102, 121, 217, 221, 222, 232, 235–237, 239, 240, 249 Image recognition, 18, 39, 138 Individual healthcare, 63, 227 Inequities, 78, 127, 133, 187, 217, 240
Intelligence artificial general intelligence (AGI), 19, 75 artificial narrow intelligence (ANI), 19, 75, 76, 162 classic AI, 25 good old AI (GOAI), 25 human, 5, 16, 17, 20 human-centered AI (HCAI), 128, 132 super-, 19 symbolic AI, 18–19, 25 Intensive care, 40 Intentionality, 75, 164, 170, 250 Internet of things (IoT), 9, 39–43 Interoperability, 34, 42, 45, 106, 166, 219, 220
J Justice, 57, 67–69, 126
L Learning deep, 26–29, 36, 38, 72, 103, 134, 136, 138 healthcare system (LHS), 95, 214, 215, 217–220, 225–227, 231, 233, 236–238, 241, 249 machines, 16–18, 20, 25–29, 32, 34–36, 40, 72, 73, 94, 96, 100, 103, 104, 109, 111, 113, 114, 116, 117, 119, 120, 122, 128–131, 133–140, 160, 164, 178, 179, 185, 188, 194, 222 reinforcement, 26, 27, 43 supervised, 26–28 unsupervised, 26, 27, 29 Liability products, 163 strict, 163 Long-term care, 44, 179, 185, 215, 222, 228
M Magnetic resonance imaging (MRI), 30, 38, 188, 222 Medicalization, 230–236, 241, 252 Mental disorders, 38, 41, 121, 187, 189–196, 199, 201 health(care), 92, 187–203, 224, 238, 248 illnesses, 65, 68, 188, 190–192, 195, 198 Mobile health (mHealth), 7, 9, 36, 41–43, 45, 92, 100, 101, 103, 113, 128, 129, 168, 172–174, 186, 188, 197–199, 201, 215, 224, 226, 228, 229, 231, 239
Model predictive, 26, 36, 43, 133, 225, 226 Monitoring, 7, 9, 20, 32, 33, 36–42, 44, 91, 92, 114, 132, 168, 172, 175, 178, 188, 196, 215, 216, 220, 221, 224–231, 233, 238, 240, 241, 252 Morals, morality, 56, 57, 59, 61, 65, 75, 76, 80, 134, 153, 154, 157, 161, 162, 167, 170, 171, 175, 179, 192–194, 199, 230–232, 234, 240, 250 Multimodal data, 115, 188, 224
N Narrative medicine, 156, 157 Natural language processing (NLP), 24, 25, 29, 37, 38, 43, 109, 188, 195–197 Nursing gaze, 176–179, 186
O Omics, 29–31, 188, 191, 199, 216, 226 Oncology, 35, 37 Ontic occlusion, 126 Opacity, 72, 134–137, 140, 163 Ophthalmology, 39 Outpatient care, 175 Overfitting, 26, 123, 134
P P4, 33, 34, 37, 112, 168, 231 Participation, participatory, 41, 42, 74, 99, 129, 131, 132, 168, 173, 179, 185, 186, 196, 197, 220, 226, 231, 237, 241 Paternalism, -istic, 58, 61, 77, 105, 137, 152, 153, 167, 172, 174, 175, 249 Pathology, 30, 39, 73 Pathophysiology, -ical, 29, 110, 121 Personal digital assistants (PDAs), 41 Personalization, 36, 42, 73, 76, 81, 110, 112, 116, 117, 133, 152, 172, 175, 177, 178, 185, 194–196, 200, 201, 203, 227, 231, 235, 240, 248, 250 Personalized medicines, 4, 5, 28, 96, 101, 112, 127, 154, 168, 169, 176, 226 treatments, 31, 110, 154 Pharma, 93, 101, 231, 250 Precision medicine, 32–36, 76, 104, 106, 107, 110, 112, 114
Predictions, 17, 18, 26, 27, 36–38, 42, 43, 104, 111, 115, 121, 125, 130, 134, 154, 155, 162, 175, 176, 195, 196, 199, 224–226, 231, 238 Predictive analytics, 8, 175 Prevention, 6, 31, 33, 42, 44, 45, 121, 124, 140, 176, 225, 226, 230, 231, 237, 238, 240, 241 Principlism, 56 Privacy decisional, 67 informational, 67, 93–98, 101, 103–105, 107, 109 local, 67, 95, 233 personal, 67, 77, 102, 106, 214, 228 protection, 66, 92, 94–96, 99–103, 105–107, 109, 173, 199, 217, 240, 248 Prognosis, 31, 34, 42, 121, 127, 188, 199, 224 Psychiatry, 187, 190, 192, 193 Psychotherapy, 187, 190, 192, 197 Public health(care), 7, 41, 56, 67, 68, 77, 96, 98, 103, 106, 107, 132, 202, 213–215, 225–228, 231, 233, 234, 241
Q Qualified, qualitative, 7, 23, 57, 79, 81, 110, 116, 117, 160, 178, 179, 185, 201, 235, 237 Quantification, -ifed, -itative, 4, 7, 42, 113, 115, 116, 129, 158, 160, 169, 228–230, 235 Quantified self movement, 169, 229
R Radiology, 9, 20, 38, 40, 125, 165, 218, 219, 222 Raw data, 27, 28, 79, 119, 250 Reductionism, 4, 73, 74, 111–117, 154, 156, 157, 169, 172, 176, 177, 179, 190, 194, 196, 200, 217, 235, 248, 250, 251 Responsibility distributed, 164 gap, 161 Responsibilization, 169, 231, 232 Risk health, 38, 77, 97, 98, 115, 121, 127, 132, 159, 169, 175, 188, 195, 199, 200, 217, 224–226, 228, 231, 239 individual, 188
Robot -assisted, 44 social assistive, 44 Robotics, 18–19, 27, 29, 43–44, 109, 179, 188, 215, 221, 228
S Safety data, 35, 100, 140, 230 patients, 7, 8, 35, 69, 70, 127, 140, 172, 221, 230 Salutogenesis, 225 Scalability, -able, 29, 39, 104 Schizophrenia, 41 Self-management, 6, 34, 41, 42, 77, 93, 168, 173, 190, 191, 193, 195, 197–201, 229, 232, 234, 236, 237 Self-monitoring, 6, 41, 42, 188, 197, 227, 229, 234, 236, 237 Sensors, 7, 8, 31, 36, 39–41, 44, 92, 114, 164, 168, 175, 179, 188, 196, 199, 214, 215, 224, 226, 228–230, 237–239 6Vs, 30, 34 Smart cities, 8, 215, 237–241 Smart data practices, 7, 91–93, 95–97, 109–111, 113, 117, 121, 122, 127, 151, 153, 176–178, 186, 194, 217, 248, 251 Smart wearables, 5, 7, 9, 20, 30, 36, 39–41, 92, 169, 224, 226, 228, 229, 232, 238, 239 Social determinants, 4, 60, 68, 69, 80, 81, 114, 115, 125–127, 129, 153, 160, 169, 201, 202, 216, 231, 250 Social relations, 4, 59, 60, 65, 71, 232 Solidarity, 96, 97, 231 Solutionism, 155, 156, 158, 180, 184, 185, 241 Stigma, 65, 94, 192, 236 Structured data, 30, 37 Surgeons, 43, 44, 222 Surgery, 27, 43, 44, 109, 165, 167 Surveillance capitalism, 116, 120, 250 medicines, 9, 92, 154, 225, 231, 250 public, 215, 226, 239, 240
T Telecare, 41 Telehealth, 45, 114, 172, 173, 228 Therapeutic alliance, 189, 190, 193, 194, 196–202, 223 Therapeutic relationship, 60–64, 81, 96, 153, 155, 156, 159, 160, 165, 167, 171–176, 187–203, 216, 218, 221–223, 251 Therapy, 6, 31, 37, 44, 134, 172, 188–190, 192–194, 197–200, 202 Thick data, 116–117, 160, 185, 201, 237, 250 Transparency, 8, 64, 71–73, 77, 95, 98, 102, 109, 111, 138, 160, 173, 174, 199, 233, 240 Treatments, 4, 5, 24, 31–37, 41, 42, 45, 58, 60–65, 75, 77, 95, 101, 109–112, 116, 118, 124, 126–128, 131, 133, 134, 137, 139, 154, 159, 160, 164, 168, 169, 173, 174, 176, 186, 188–191, 194, 196, 199, 201, 202, 226, 230, 234, 239, 248, 249 Trust personal, 64 -worthiness, 29, 63, 137, 185, 214 Turing test, 17, 19
U Ultrasound, 30, 39, 44 Underfitting, 26, 123 Uniqueness neglect, 114, 172, 196 Unstructured data, 30 Utilitarianism, 56
V Variables, 27, 73, 76, 111, 115, 118, 120–124, 126, 129–134, 138, 139, 154, 159, 162, 168, 169, 194, 195 Virtual, 19, 35, 36, 43, 115, 175, 188, 196, 197, 199 Virtue ethics, 56, 57 Vulnerabilities, 74, 130, 177, 180, 182, 191–194, 199, 201, 202
X X-ray, 5, 8, 38, 138