Ethics of Science and Technology Assessment 49
Pablo López-Silva Luca Valera Editors
Protecting the Mind Challenges in Law, Neuroprotection, and Neurorights
Ethics of Science and Technology Assessment Volume 49
Series Editors
Carl Friedrich Gethmann, Universität Siegen, Siegen, Nordrhein-Westfalen, Germany
Michael Quante, Philosophisches Seminar, Westfälische Wilhelms-Universität, Münster, Nordrhein-Westfalen, Germany
Bjoern Niehaves, Universität Siegen, Siegen, Nordrhein-Westfalen, Germany
Holger Schönherr, Department of Chemistry and Biology, Universität Siegen, Siegen, Germany
The series Ethics of Science and Technology Assessment focuses on the impact that scientific and technological advances have on individuals, their social lives, and on the natural environment. Its goal is to cover the field of Science and Technology Studies (STS), without being limited to it. The series welcomes scientific and philosophical reviews on questions, consequences, and challenges entailed by the nature and practices of science and technology, as well as original essays on the impact and role of scientific advances, technological research, and research ethics. Volumes published in the series include monographs and edited books based on the results of interdisciplinary research projects. Books that are devoted to supporting education at the graduate and post-graduate levels are especially welcome.
More information about this series at https://link.springer.com/bookseries/4094
Editors Pablo López-Silva Universidad de Valparaíso Valparaiso, Chile
Luca Valera Universidad de Valladolid Valladolid, Spain Pontificia Universidad Católica de Chile Santiago de Chile, Chile
ISSN 1860-4803 ISSN 1860-4811 (electronic)
Ethics of Science and Technology Assessment
ISBN 978-3-030-94031-7 ISBN 978-3-030-94032-4 (eBook)
https://doi.org/10.1007/978-3-030-94032-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

1 Towards an Ethical Discussion of Neurotechnological Progress . . . . 1
Pablo López-Silva and Luca Valera

Part I Human Nature, Neurotechnologies, and Philosophy: Main Concepts

2 The Concept of Mind in the Neuroprotection Debate . . . . 9
Pablo López-Silva

3 The Unitary Sense of Human Being. A Husserlian Approach Against Reductionism . . . . 19
José Manuel Chillón

4 Ethics and Neuroscience: Protecting Consciousness . . . . 31
Arran Gare

5 Free Will and Autonomy in the Age of Neurotechnologies . . . . 41
Andrea Lavazza

6 Responsibility: A Theory of Action Between Care for the World, Ethology, and Art . . . . 59
Gianluca Cuozzo

7 Neuroscience, Neurolaw, and Neurorights . . . . 71
Paolo Sommaggio

Part II Neurotechnologies and Ethics: Main Problems

8 A Conceptual Approach to the Right to Mental Integrity . . . . 87
Elisabeth Hildt

9 Mental Integrity, Vulnerability, and Brain Manipulations: A Bioethical Perspective . . . . 99
Luca Valera

10 Neurotechnology, Consent, Place, and the Ethics of Data Science Genomics in the Precision Medicine Clinic . . . . 113
Andrew Crowden and Matthew Gildersleeve

11 Neuro-Rights and Ethical Ecosystem: The Chilean Legislation Attempt . . . . 129
Enrique Siqueiros Fernández and Héctor Velázquez Fernández

Part III Neuroprotection and Human Rights: New Challenges

12 Mental Privacy and Neuroprotection: An Open Debate . . . . 141
Abel Wajnerman and Pablo López-Silva

13 Neuro Rights: A Human Rights Solution to Ethical Issues of Neurotechnologies . . . . 157
Clara Baselga-Garriga, Paloma Rodriguez, and Rafael Yuste

14 A Technocratic Oath . . . . 163
María Florencia Álamos, Leonie Kausel, Clara Baselga-Garriga, Paulina Ramos, Francisco Aboitiz, Xabier Uribe-Etxebarria, and Rafael Yuste

15 Neurotechnologies and the Human Image: Open Questions on Neuroprotection . . . . 175
Pablo López-Silva and Luca Valera
Editors and Contributors
About the Editors

Pablo López-Silva is Adjunct Professor at the School of Psychology and Research Professor at the Institute of Philosophy of the Universidad de Valparaíso, Chile. He is a Young Research Fellow at the Millennium Institute for Research in Depression and Personality (MIDAP-Chile). He holds an MRes and a Ph.D. in Philosophy from the University of Manchester, UK. His areas of research are Philosophy of Mind, Philosophy of Psychology, Psychopathology, and Neuroethics.

Luca Valera is Associate Professor at the Center for Bioethics (School of Medicine), Pontificia Universidad Católica de Chile. Moreover, he is Visiting Professor at the Department of Philosophy, Universidad de Valladolid, Spain. He holds a Ph.D. in Bioethics from the Università Campus Bio-Medico di Roma, Italy. His areas of research are Environmental Ethics, Philosophy of Technology, Bioethics, and Applied Ethics.
Contributors

Francisco Aboitiz Centro Interdisciplinario de Neurociencias and Departamento de Psiquiatría, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile
María Florencia Álamos Centro Interdisciplinario de Neurociencias and Departamento de Psiquiatría, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile
Clara Baselga-Garriga Neurorights Initiative, Columbia University, New York City, USA
José Manuel Chillón Department of Philosophy, Universidad de Valladolid, Valladolid, Spain
Andrew Crowden School of Historical and Philosophical Inquiry, University of Queensland, Brisbane, Australia
Gianluca Cuozzo Department of Philosophy and Educational Sciences, Università Di Torino, Turin, Italy
Enrique Siqueiros Fernández Facultad de Filosofía, Universidad Panamericana, Mexico City, México
Héctor Velázquez Fernández Centro Sociedad Tecnológica y Futuro Humano, Facultad de Humanidades, Universidad Mayor, Santiago de Chile, Chile
Arran Gare Department of Social Sciences, Swinburne University of Technology, Melbourne, Australia
Matthew Gildersleeve School of Historical and Philosophical Inquiry, University of Queensland, Brisbane, Australia
Elisabeth Hildt Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, USA
Leonie Kausel Centro de Investigación en Complejidad Social, Universidad del Desarrollo, Santiago de Chile, Chile
Andrea Lavazza Centro Universitario Internazionale, Arezzo, Italy; University of Pavia, Pavia, Italy
Pablo López-Silva School of Psychology, Universidad de Valparaíso, Valparaiso, Chile
Paulina Ramos Center for Bioethics, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile
Paloma Rodriguez Neurorights Initiative, Columbia University, New York City, USA
Paolo Sommaggio Department of Law, Università di Trento, Trento, Italy
Xabier Uribe-Etxebarria Bizkaia, Erandio, Spain
Luca Valera Centre for Bioethics, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile; Department of Philosophy, Universidad de Valladolid, Valladolid, Spain
Abel Wajnerman Faculty of Philosophy and Humanities, Universidad Alberto Hurtado, Santiago de Chile, Chile
Rafael Yuste Neurorights Initiative, Columbia University, New York City, USA; Donostia International Physics Center, San Sebastián, Spain
List of Tables
Table 14.1 Table 14.2
Hippocratic oath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Proposed technocratic oath . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
170 172
Chapter 1
Towards an Ethical Discussion of Neurotechnological Progress

Pablo López-Silva and Luca Valera
Abstract Over the last years, neurotechnological progress has motivated a number of theoretical and practical worries. For example, the potential misuses of neurodevices with direct access to our neural data in real time might pose a number of threats to our autonomy, free will, agency, privacy, and liberty. In light of this—not so distant—scenario, cooperative interdisciplinary reflection is needed in order to inform the conceptual, legal, and ethical challenges that arise from the way in which neurotechnological progress impacts our understanding of issues such as technology, society, the human mind, and finally, the very concept of the human person.

Keywords Neurotechnologies · Neuroprotection · Human · Mind

Motivated by a quest for medical neurotechnological applications, the U.S. BRAIN Initiative was created in 2013 to develop novel methods to record and manipulate the neural activity of human brains with unprecedented specificity (Alivisatos et al. 2013). In 2017, similar enterprises from China, Korea, the European Union, Japan, Canada, and Australia joined the so-called International Brain Initiative, aiming at cooperatively exploring new treatments for neurological and psychiatric disorders (Adams et al. 2020). Over the last years, neurotechnologies such as Deep Brain Stimulation (DBS), Transcranial Magnetic Stimulation (TMS), and Brain-Computer Interfaces (BCIs) have started to offer promising ways to deal with the burden of specific neurological conditions such as Parkinson's disease, brain strokes, and paralysis, among many others (Chaudhary et al. 2016; Espay et al. 2016; Garnaat et al. 2018; Roelfsema et al. 2018; Cagnan et al. 2019; Cinel et al. 2019; Chase et al. 2020; McFarland 2020). Worryingly, this neurotechnological revolution also seems to be leading to the creation of commercial and military applications (Fernández et al. 2015). This

P. López-Silva (B)
School of Psychology, Universidad de Valparaíso, Hontaneda 2653, Valparaiso, Chile

L. Valera
Centre for Bioethics, Pontificia Universidad Católica de Chile, Av. L. Bernardo O'Higgins 340, Santiago de Chile, Chile
Department of Philosophy, Universidad de Valladolid, Plaza Campus Universitario, Valladolid, Spain

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_1
issue becomes highly problematic in light of the current lack of explicit international regulatory laws for the potential production of such neurotechnological applications (Ienca and Haselager 2016; Ienca and Andorno 2017; López-Silva and Madrid 2021). This scenario becomes even more complicated if we consider that this type of neurotechnology might allow the development of the unprecedented ability to read minds by decoding, analyzing, and interpreting data about the neural activity patterns of human brains, exposing what was once thought to be private, namely, our thoughts, beliefs, desires, and (cognitive and behavioural) predispositions. More importantly, the very possibility of recording with such precision the neural activity that produces specific mental states might offer scientists and governments the possibility of not only reading, but also controlling, the production of mental states in the minds of regular citizens, a process that has been called "brain-hacking" (Yuste 2019, 2020a, b). In this context, a number of researchers all over the world have claimed that current ethical and legislative frameworks are not ready to deal with some of the potential situations imposed by these neurotechnological advances and that, therefore, the development of neuroprotective legal frameworks should become an urgent global priority (Yuste et al. 2021). Certainly, the contexts in which all these practical and philosophical concerns arise are complex and invite us to reconsider—from an interdisciplinary point of view—fundamental questions about the ethics of neurotechnologies, the concept of science and its relationship with the notion of the mental, and issues about the anthropological model that guides current neuroethical debates. More specifically, the potential misuses of neurotechnologies with access to neural data also invite a re-evaluation of several traditional ethical, political, and philosophical categories.
Here, there is a pressing need to develop clear and well-informed ideas from an interdisciplinary point of view. Without this approach, we will be unable to evaluate and inform the best actions to be taken for both current and future generations. Only by developing conceptually consistent tools will we be able to responsibly inform decision- and policy-making within these more practical contexts. Consequently, progress in the neuroethical field may also contribute to building a framework for a better and safer neurotechnological (and neuro-engineering) practice. Taking this need into consideration, Protecting the Mind: Challenges in Law, Neuroprotection, and Neurorights aims at exploring some of the most fundamental debates emerging from the analysis of the philosophical, social, ethical, and legal consequences of current advances in the neurosciences and neurotechnology. The aforementioned scenarios make it necessary to construct an academic forum aiming at defining and re-defining some of the most fundamental concepts in the field, so as to deal in an informed manner with the challenges of the growing influence of neurotechnologies, not only in medicine, but also in our everyday life. The compilation has been divided into three main parts, each dealing with a different set of problems concerning emerging neurotechnologies. Part I focuses on the question regarding human nature and the importance of developing a shared vocabulary on these issues. Part II deals with possible responses to the main ethical problems generated by current neurotechnologies and, finally, Part III focuses on
some proposals regarding neurorights and possible policies to protect our privacy, neural data, and intimacy, with reference to emerging neurodevices. Given the wide range of topics, this book is necessarily interdisciplinary: the different authors deal with problems and concerns generated by emerging neurotechnologies from philosophical, ethical, clinical, legal, anthropological, and bioethical perspectives. In this sense, the aim of the book is to offer different approaches and standpoints on a current hot topic. Furthermore, given the diverse geographic distribution of the authors, the different problems are addressed from very different cultural points of view, emerging from South America, Australia, Europe, and the United States. The different parts of the compilation have been structured hierarchically, from the most abstract—and fundamental—to the most practical and applied. Part I offers the philosophical and anthropological basis concerning emerging neurotechnologies. In this sense, Part I offers a conceptual clarification: in order to avoid fundamental problems, it is useful to better characterize the concept of "mind" (Chap. 2) and its theoretical implications for our target debate. This field, which in the past was reserved almost exclusively to philosophical investigation, has begun to be explored by the cognitive sciences and manipulated by neurotechnologies, with evident consequences for our "anthropological image." In this regard, the concept of mind is strongly connected to the idea of the human being we may have; for this reason, the following contributions claim that a non-reductionist approach to the human mind, consciousness, and experience is necessary for developing comprehensive frameworks within the debate. In Chaps. 3 and 4 this comprehensive approach is explored by facing the complexity of our experiences and the permanent emergence of human freedom and autonomy within the human condition.
Obviously, the most important anthropological concerns and risks raised by neurotechnologies refer to the impact they could have on our autonomy. Indeed, interventions in brain functioning might have major (good or bad) implications for our free will and our autonomy (Chap. 5). Such consequences for our autonomy might also imply new ways of interpreting our responsibility towards our nature and other humans and living beings. For this reason, a more "ecological" concept of responsibility is needed (Chap. 6) in order to interpret the different feedback effects that emerging technologies are generating on our nature. These new forms of responsibility and autonomy also require original conceptual tools to protect personal human rights against possible reductionisms. Here, the concept of cognitive liberty (Chap. 7) may represent an interesting ethical and legal paradigm for facing any possible threat to human intimacy and integrity when pondering the potential effects of neurotechnological advances on our lives. Part II deals precisely with the emerging ethical concerns regarding human integrity. This concept is particularly relevant in the current neurotechnological debate, which stresses a person's right to control their brain states and to protect the person against unauthorized brain interventions (Chap. 8). In this regard, both bioethical and ethical approaches are needed. On the one hand, in order to protect personal autonomy and free will, it is useful to consider the issue of human vulnerability as an anthropological and ethical starting point (Chap. 9). On the other, it is useful to find practical tools—like a new informed consent—that may prevent improper intervention in personal intimacy, allow for meaningful decision-making, and implement
practical person-centered outcomes (Chap. 10). In this regard, it would be useful to reframe our ethical ecosystem (Chap. 11) starting from a non-reductionist, person-centered approach, which allows for non-discriminatory access to neurotechnologies and the equal consideration of every human being. This last consideration is essential for taking the last step in the theoretical path addressed in Part III of the compilation, namely, the legal issues regarding neuroprotection and fundamental human rights. Recent progress in neurotechnologies may indeed not only help many patients with mental and neurological diseases, but also raise major concerns regarding mental privacy, identity, and agency. At the same time, it may change our ideas of the human mind and integrity (Chap. 12). In this regard, the issue of protecting mental (or neural) data is particularly problematic and challenging and needs further discussion. To address these concerns, the Morningside Group, an international group of neuroscientists, clinicians, and ethicists, first proposed five new human rights devised to protect individuals in the face of new neurotechnologies. These new human rights may be included in proposed legislation and soft laws in different countries, including Chile and Spain. Chapter 13 is a reframing of that proposal, which is constantly generating stimulating debates all over the world. Nevertheless, protecting our neural data, intimacy, mental integrity, and agency is not sufficient. To ethically orient the activity of neurotechnology developers and the industry, to ensure responsible innovation, and to protect the fundamental human rights of patients and consumers, new forms of public commitment are necessary. In this sense, Chap. 14 proposes a new "Technocratic Oath" (modelled on the Hippocratic Oath) based on seven ethical principles.
1.1 Final Remarks

In John Milton's Comus, the British poet writes "Thou canst not touch the freedom of my mind", depicting the human mind as the last bastion of privacy, freedom, and agency. Such an idea remained unchallenged for a very long time. However, over the last years, the development of neurotechnologies with direct access to neural data has invited us to critically reconsider the very notion of the mind as a secret and inaccessible place. This scenario has motivated not only discussions about the ways in which access to and control over our own neural data (mental privacy) could be protected, but also discussions about our very notions of the human mind and the most fundamental anthropological model of ourselves. In this context, Protecting the Mind: Challenges in Law, Neuroprotection, and Neurorights is an attempt to meditate on a number of issues of current and future concern. We deeply hope that this compilation motivates new developments in our target debate, as there is much to be discussed. Legal, ethical, and philosophical concerns about the way in which neurotechnological advances might affect everyday life and our understanding of the human condition certainly develop together and, for this reason, interdisciplinarity is not merely desirable but a fundamental need. Let's hope current and future researchers
embrace such a need in a cooperative way to develop well-informed frameworks for decision-making in legal and political contexts that might affect us all.
References

Adams A et al (2020) International brain initiative: an innovative framework for coordinated global brain research efforts. Neuron 105(2):212–216
Alivisatos AP, Chun M, Church GM, Deisseroth K, Greenspan RJ, Roukes ML, Sejnowski TS, Weiss P, Yuste R (2013) The brain activity map. Science 339:1284–1285
Cagnan H, Denison T, McIntyre C, Brown P (2019) Emerging technologies for improved deep brain stimulation. Nat Biotechnol 37(9):1024–1033
Chase HW, Boudewyn MA, Carter CS, Phillips ML (2020) Transcranial direct current stimulation: a roadmap for research, from mechanism of action to clinical implementation. Mol Psychiatry 25(2):397–407
Chaudhary U, Birbaumer N, Ramos-Murguialday A (2016) Brain-computer interfaces for communication and rehabilitation. Nat Rev Neurol 12(9):513–525
Cinel C, Valeriani D, Poli R (2019) Neurotechnologies for human cognitive augmentation: current state of the art and future prospects. Front Hum Neurosci 13:13
Espay AJ, Bonato P, Nahab F, Maetzler W, Dean JM, Klucken J, Eskofier BM et al (2016) Technology in Parkinson disease: challenges and opportunities. Mov Disord 31(9):1272–1282
Fernández A, Nikhil S, Gurevitz B, Olivier O (2015) Pervasive neurotechnology: a groundbreaking analysis of 10,000+ patent filings transforming medicine, health, entertainment and business. SharpBrains, San Francisco
Garnaat SL, Yuan S, Wang H, Philip NS, Carpenter LL (2018) Updates on transcranial magnetic stimulation therapy for major depressive disorder. Psychiatr Clin North Am 41(3):419–431
Ienca M, Haselager P (2016) Hacking the brain: brain–computer interfacing technology and the ethics of neurosecurity. Ethics Inf Technol 18(2):117–129
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(1):1–27
López-Silva P, Madrid R (2021) Sobre la conveniencia de incorporar los neuroderechos en la constitución o en la ley [On the convenience of incorporating neurorights in the constitution or in the law]. Revista Chil De Derecho y Tecnol 10(1):53–76
McFarland DJ (2020) Brain-computer interfaces for amyotrophic lateral sclerosis. Muscle Nerve 61(6):702–707
Roelfsema PR, Denys D, Klink PC (2018) Mind reading and writing: the future of neurotechnology. Trends Cogn Sci 22(7):598–610
Yuste R (2019) Everyone has the right to neuroprotection. Columbia Neurorights Initiative. https://nri.ntc.columbia.edu/news/rafael-yuste-and-brain-hacking-everyone-has-right-neuroprotection-originally-spanish
Yuste R (2020a) Si puedes leer y escribir la actividad neuronal, puedes leer y escribir las mentes de la gente [If you can read and write neural activity, you can read and write people's minds]. El País, December 4th, 2020. https://elpais.com/retina/2020/12/03/tendencias/1607024987_022417.html
Yuste R (2020b) Can you see a thought? Neuronal ensembles as emergent units of cortical function. IBM Distinguished Speaker Series. https://www.youtube.com/watch?v=QRr_2PuzTZU
Yuste R, Genser J, Herrmann S (2021) It's time for neuro-rights. Horiz: J Int Relat Sustain Dev 18:154–165
Part I
Human Nature, Neurotechnologies, and Philosophy: Main Concepts
Chapter 2
The Concept of Mind in the Neuroprotection Debate

Pablo López-Silva
Abstract The rapid development of neurotechnologies with unprecedented access to neural data has motivated the creation of legal frameworks aiming at protecting the general public from their possible misuses. In a pioneering action, the Senate of Chile has recently published the first ever Bill for the Creation of Neurorights. This bill promises to protect “the human mind.” Here, I argue that there are good reasons to demand a clarification of the way in which the notion of “mind” is conceptualized within the bill in order to avoid fundamental problems. After that, I explore an alternative way in which the concept of mind can be integrated into the legal debate including anthropological, biological and subjective (phenomenal) elements that might respect the different pre-theoretical intuitions and dimensions that motivate the neuroprotection crusade. Keywords Mind · Neuroprotection · Neurorights · Person
2.1 Protecting the Mind: The Context of Neurorights

Projects such as the Brain Activity Map (BAM) are large-scale initiatives aiming at providing ways to record and manipulate the activity of circuits, networks, and—eventually—whole brains with single-neuron specificity (Alivisatos et al. 2013; Andrews and Weiss 2012; Koch and Reid 2012). Going beyond current achievements in the neurosciences, the BAM in particular is not only clarifying the specific brain areas underlying the production of conscious mental states; it also promises to describe with unprecedented precision the specific neural paths that electric impulses in the brain take in order to produce those mental states, allowing us to understand "how brain processes produces perception, action, memories, thoughts, and consciousness" (Alivisatos et al. 2013, 1). This type of ambitious research program comes with a number of empirical, conceptual, and ethical worries, and for this reason, it has been compared with the impact that the Human Genome Project has had on science, ethics, and the general understanding of the human condition.

P. López-Silva (B)
School of Psychology, Universidad de Valparaíso, Hontaneda 2653, Valparaiso, Chile

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_2
In a very recent interview, Rafael Yuste (2020, 1)—a leading researcher of the BRAIN Initiative—warns that "if we can read and transcribe neural activity, we might be able to read and transcribe minds." This means that, worryingly, the very possibility of recording with such precision the neural activity that produces specific mental states might offer scientists the possibility of not only reading, but also controlling, the production of certain mental states in the minds of others, in what has been called "brain-hacking" (Yuste 2019). Over the last years, the impact that new neurotechnologies will have on important issues such as agency, liberty, control, and autonomy has been the focus of a number of debates in neuroethics. Such discussions have motivated the development of legal regulations aiming at protecting the general public from possible misuses and abuses of invasive and non-invasive neurotechnologies (López-Silva and Madrid 2021). Following the opening of the BAM, in 2013 then US President Barack Obama called attention to the potential impact of neurotechnological developments on human rights, emphasizing the need to address questions about "privacy, personal agency, and moral responsibility for one's actions," along with important issues regarding "stigmatization and discrimination based on neurological measures of intelligence or other traits; and questions about the appropriate use of neuroscience in the criminal-justice system" (Presidential Commission for the Study of Bioethical Issues 2014). Taking this into consideration, in a pioneering action, the Chilean Senate dispatched in October 2020 the first ever bill for the creation of a law on neuroprotection.
In its Article 4 (Bulletin N°13.828-19), the bill states that "The use of any system or device, be it neurotechnology, BCI, or other, the purpose of which is to access neuronal activity, invasively or non-invasively, with the potential to damage the psychological and psychic continuity of the person, that is, their individual identity, or with the potential to diminish or damage the autonomy of their free will or decision-making capacity, is prohibited." This bill will certainly set an important precedent for future discussions among governments around the world aiming at establishing specific policies and laws to regulate the use of intrusive and non-intrusive neurotechnologies in medical and non-medical contexts. However, with novelty come a number of questions. As I have recently suggested, one of the central weaknesses underlying the current discussion about the establishment of specific neuroprotection laws in Chile is the lack of definition of what is actually meant by these laws when referring to the concept of mind or the mental (López-Silva 2019). This becomes a fundamental issue when we realize that 'the mind' is exactly what the law is meant to protect (at least as it has been written). The Chilean bill on neurorights makes use of the concept of mind in a number of sentences without really specifying what is meant by it. The problem is that, without a definition of such a central notion, a number of interpretive, conceptual, and practical problems might arise when creating specific legal frameworks; more importantly, different problems might arise when dealing with offences against those very laws in trial contexts. The aim of the remainder of this chapter is very modest. I argue that the issue at hand is not a simple, irrelevant conceptual matter; rather, we have good reasons to demand an explicitly declared working concept of how neuroprotection
laws understand the concept of mind, in order to avoid fundamental problems. After that, I explore an alternative way in which the concept of mind can be integrated into the legal debate, one that includes anthropological, biological, and subjective (phenomenal) elements and that might respect the different pre-theoretical intuitions and dimensions motivating the neuroprotection crusade.
2.2 What Are Neurorights? Understanding the Threat

It is not controversial to claim that the field of neurotechnology is living its most flourishing period. Projects such as the BAM are starting to grant access to critical brain data, capturing important aspects of a person's private mental life that never appeared to be in danger before. The potential impact of neurotechnologies on everyday life is critical. If such methods were to become widespread in society, and neurodata were misused, we might not only find the familiar cookies trying to push our behavior towards a specific purchasing decision, but it might also become possible to imagine neurocookies influencing, or even reprogramming, our neural activity to guide purchasing decisions (much as already seems to be happening with Facebook algorithms). The problem is that, while the former might be an accepted form of marketing (albeit one with a number of ethical problems), the latter cannot be accepted under any circumstances, for it is a clear example of mental intromission, namely, a violation of human agency and free will through the reprogramming of a subject's neural activity without consent. Some might suggest that this type of threat is more likely to be found in a science-fiction movie; however, the recording of the specific neural activity that produces specific mental states might make this scenario very much possible in the short term. An example is Elon Musk's Neuralink, which has recently developed successful invasive and non-invasive brain-computer interfaces that allow monkeys to control video games with their minds. In the same vein, the so-called "Kernel Flow" uses infrared light to provide real-time brain data to consumers wearing the device (Johnson 2021).
Among many others, these exploratory neurotechnologies raise a number of ethical worries if misused. Importantly, although medical applications are the top priority for projects such as the BAM, it would be naïve to think that companies that have invested considerable amounts of money in these projects will not look for specific ways to secure a return on that investment by creating commercial uses of those technologies (Fernandez et al. 2015). In this context, the Morningside Group, a group of scientists associated with the BRAIN Initiative, has recently claimed (i) that "such advances could revolutionize the treatment of many conditions […]. But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people" (Yuste et al. 2017, 160), and (ii) that existing ethical guidelines are insufficient for this realm (Goering and Yuste
2016). Accordingly, the Morningside Group has proposed that any neurotechnological development derived from the BRAIN Initiative, or from similar projects, must be accompanied by the development of a legal scaffolding aimed at protecting "people's privacy, identity, agency, and equality" (Yuste et al. 2017, 159). This legal resource has been named neurorights. Yuste et al. (2017) propose the creation of fundamental neurorights based on four worries. (i) The right to privacy & consent refers to the right of a person to keep her neural data private, ensuring that the ability to opt out of sharing is the default mode of ownership of such data. (ii) The right to agency & identity is the right of a person to preserve her sense of agency and human identity in light of potential changes produced by the use of neurotechnologies. (iii) The third worry has not yet crystallized into a specific right. It refers to the possibility of unequal augmentation of cognitive and physical functions through the use of neurotechnologies, and to how this augmentation would reproduce old and new types of inequality. (iv) The last worry refers to the way in which certain biases could become embedded in neurotechnological devices, and to how this might be avoided. Trying to operationalize these worries, a number of authors have stressed the need for the creation of new rights such as the right to cognitive liberty, the right to psychological continuity, and the right to mental privacy, among many others (Ienca and Andorno 2017; Sommaggio et al. 2017; Ienca 2017). Such rights might constitute the legal support for the defense against potential misuses of neurotechnologies capable of accessing neurodata and controlling people's cognitive and motor activities.
However, while some authors suggest that current legal frameworks already secure these "new" rights, others claim that such frameworks cannot deal with the unique nature of the potential violations of privacy, and the like, posed by current neurotechnological developments (Ienca and Andorno 2017). In fact, when reviewing the scholarship on current legal frameworks, the need for the creation of specific neurorights is not undisputed (Shein 2013; López-Silva and Madrid 2021). The idea here is that new threats to an existing right do not necessarily require the creation of new rights, in the same way in which new ways of killing do not require a reformulation of the right to life. Perhaps already existing laws should broaden their scope by clearly integrating these potential violations; however, if legal frameworks turn out to be too unspecific, it might be advisable to create such new rights. Either way, it is not my intention to solve this problem here, but rather to make clear that the creation of new rights associated with possible misuses of neurodata-recording neurotechnologies is an open debate. Instead, in the following section I take the Chilean bill for the creation of neurorights as a case study to point out specific problems that emerge in the process of creating actual neuroprotection laws.
2.3 Why Do We Need to Clarify the Concept of Mind in the Neuroprotection Debate?

Working actively with members of the Morningside Group and researchers from different universities in Chile, in October 2020 the Chilean Senate presented the first-ever bill for the creation of a law on neuroprotection (Senate of Chile 2020, Bulletin N°13.828-19). After establishing its general aim, the protection of the privacy and indemnity of the mental, the bill establishes a number of specific rights and the technical specifications associated with them. Prima facie, the project brings genuine attention to the potential consequences of misuses of neurotechnologies; however, the initiative seems to lack a clear and unified conceptual framework about the mental, which is exactly what it promises to protect. In the first paragraph of the bill on neurorights, the concept of "privacy of the mental" is treated as a synonym of the term "privacy of neural data," as if they were one and the same thing. This claim is far from uncontroversial. In addition, the bill uses the term "psychic" (in its "psychological" sense) to refer to something apparently similar to the term "mental." On the one hand, if these terms mean exactly the same for the authors of the bill, it is not clear why they use different concepts. On the other hand, if the terms try to capture different dimensions of human life threatened by the misuses of neurotechnologies, it is not clear what the differences between them are, nor how the bill integrates them into a clear legal regulation. This lack of conceptual clarity might pose a number of interpretative problems in legal contexts. In fact, in its second part, the bill proposes to treat the privacy of neural data in the same way human organs are treated by current international law, but it specifies nothing about the epistemically private life of conscious human beings.
Some might suggest that this distinction is simply irrelevant; I disagree. Think about the term "physical pain." Even if physical pain is produced by C-type neurons firing, what is legally punishable is the pain inflicted on a person, and pain is not C-type neurons firing, but rather the conscious experience of a subject. Persons are not conscious of C-type neurons firing; they are conscious of their pain. Focusing only on the neuronal side neglects the rich personal structure of psychological meaning that accompanies the experience of pain (and that might lead to suffering). The problem, then, seems to be that the bill's current treatment of the terms associated with the mental dimension of human beings rests on an identity relation between worries about the mental and worries about the neural, a very debatable position for ethical and legal purposes. In its current form, the Chilean bill collapses very different uses of the term "mental" (all of them present in the expert philosophical literature) into an unclear and problematic unity. This issue becomes more pressing when we look at what the Chilean Civil Code establishes (Ministerio de Justicia 2021). In its 21st Article, the code indicates that "technical terms from sciences and art will be taken in the use that is given to them by those who profess such sciences and art; unless it is clearly stated that they will be taken in a broad sense" (author's translation). The problem here is that, if the reach of these terms is not well defined in the bill, the
diversity of interpretations about what the mind is might affect legal processes and policy-making. Imagine that your neurodata are available in a neurodata bank under your consent. Perhaps you became a member of a hypothetical neuro-social network called "NeuroBook" that allows you to connect with people with similar neural patterns; perhaps you were part of an experimental study that required you to give that consent. In any case, an evil-genius neuro-hacker somehow accesses the neurodata bank, becoming capable of reprogramming people's minds by using the electric activity of their smartphones. Using complex technological devices, the hacker recodifies such electric activity, controlling people's behavior at will. The hacker is found guilty of mental intromission. However, according to the hacker's defense, there is no question about the mind, since the mind is a bunch of observed behaviors and nothing else. Since no harm was reported by the victims, and no apparent harm can be deduced from their observed behavior (because the victims had no conscious experience of being controlled), the concept of mind vanishes from the discussion. The aim of this example is neither to defend a behaviorist view of the mind nor to rule out the relevance of subjective conscious experience in legal debates. Rather, it aims to show that the lack of conceptual clarity within our target debate might influence the way in which the harm done to people is evaluated. Certainly, if neuroprotection laws only aim at protecting neural data, this should be stated clearly. The problem with this alternative, however, is that it might leave important dimensions of everyday mental life outside the reach of legal protection. In addition, the alternative seems inconsistent with some of the motivations underlying the whole neuroprotection debate (see Yuste et al. 2017). Let me offer some philosophical background to clarify this point.
The relationship between what we call the mind and the brain has attracted the attention of neuroscientists and philosophers in light of a number of groundbreaking neuroscientific developments over the last decades. The so-called brain-mind problem (or mind-body problem) in analytic philosophy refers to the way in which we should formulate the relationship between our private subjective experience of the world and the brain, two of the most fundamental dimensions of human life at stake in the neuroprotection debate. In this context, the idea that the brain is a necessary condition for the emergence of the type of mind that humans enjoy is trivial. Indeed, it is difficult to think about the emergence of the type of mind humans have without establishing some type of relationship with the type of brain we have. However, it is not clear how the brain is causally connected to the existence of the subjective dimension of the human mind, and more specifically, to conscious mental states. In other words, it is not clear how conscious mental states emerge from purely physical, non-conscious matter (Nagel 2013). For this reason, caution is warranted when neuroscientists claim to be able to "decode," "transcribe," or "read" the mind. In those cases, we only know that current neurotechnologies might be able to capture the biological activity underlying the occurrence of certain conscious mental states such as thoughts, beliefs, motor actions, and so on. We simply cannot claim that mental states can be completely reduced to brain activity, or that neural activity really captures the richness of what we call the human mind.
Considering this, an influential view within the philosophy of mind claims that human mental states are characterized by a certain phenomenal character that is irreducible and belongs to the conscious experience of the world and ourselves (Nagel 1974). Conscious mental states, then, are those for which there is something it is like to be in them for a subject; a private way of living experiences that cannot be accessed through third-person methods. Along the same lines, Jackson (1986) claims that such a phenomenal character might not be reducible to the mere physical activity of the brain, establishing an epistemic gap between this observable and measurable activity and the conscious state that allegedly emerges from it. Furthermore, it is claimed that, once these properties emerge from the activity of the brain, they cannot be reduced to the latter, as they have new and specific features that belong to the experiencing subject and not to the brain (Chalmers 1996). What this type of philosophical position seems to capture is the idea that the mind is not the same as the brain, a fundamental issue within our target discussion. It is important to consider this issue in a legal context. The reduction of the mental to the neural might involve a common error within the cognitive sciences called the mereological fallacy: the tendency to ascribe to the brain properties that only make sense when ascribed to whole organisms (Bennett and Hacker 2003). Brains are not conscious, brains do not enjoy agency or free will, and certainly, brains do not experience pain or suffering. Such properties can only be ascribed to persons, and persons enjoy a rich mental life that includes biological and phenomenal dimensions. Importantly, a great number of the ethical worries regarding the misuses of neurotechnologies come from the phenomenal use of the concept of mind (Yuste et al. 2017).
For example, the specific worry about the alteration of the sense of agency through mental hacking locates the phenomenal use of the concept of the mental at the very heart of the neuroprotection discussion and, more importantly, at the heart of the Chilean bill. However, if the mental is the same as the neural, conscious features such as the sense of agency are simply not considered in any way within the specific regulation derived from the bill. It is relevant to clarify here that I do not think phenomenal considerations should be taken as the most fundamental indicator of autonomy violations. Multiple studies in clinical psychiatry and psychopathology have shown that the sense of agency is a very volatile and fragile feature of our conscious mental life (Haggard and Eitam 2015). The sense of agency might even be altered merely by making a subject believe that his brain has been hacked, without actually doing so. More important here is the fact that a technically well-performed brain-hacking trick could actually produce actions in others that include a sense of agency, provided that the recodification of the specific motor actions is well specified, as the BAM promises. Perhaps, in the same way in which physical pain can be an aggravating element in a trial for violent robbery, alterations in the conscious experience of brain-hacked people might be included as an aggravating element in cases of mental intromission. However, the way in which neuroprotection laws include different uses of the concept of the mental should be clearly stated in order to avoid problems in law-making and legal decisions.
2.4 Conclusion: From Minds of Brains to Minds of Persons

The use of the concept of mind in the current Chilean bill on neurorights seems to be underlain by very reductionist ways of understanding this key dimension of human life. Perhaps this issue has been inherited from an undeclared physicalist reductionism at the foundations of projects such as the BAM. In this chapter, I have tried to make clear how this lack of precision creates a number of problems that might affect legal decisions in the future. If neural activity is a necessary condition for the emergence of the mind, then it is not clear that the bill really protects the mind; rather, it would protect one of the conditions for its existence. If, on the other hand, neural activity is taken to be the same as the mind, we can either use the already existing laws on personal data or deny the existence of a phenomenal privacy that can be affected by neurotechnologies. The latter option, however, seems very implausible, both because the conscious suffering associated with these potential misuses of neurotechnology is one of the main motivating worries underlying the project (pp. 4–5), and because there are good reasons to claim that the mind is not the same as the brain. Here, it is important to consider that the human mind is more than these two apparently distinct dimensions. In fact, this way of setting up the discussion might itself be contentious. As Lowe (2003) rightly points out, there might not be such a thing as "the mind." Rather, there are minded beings: you and I as subjects of conscious experiences such as feeling, perceiving, and thinking. Properly understood, the mind-body problem referred to in the last section is the problem of how subjects of experience are related to their physical bodies.
At the same time, it must be made clear that we cannot expect the authors of the bill to solve the mind-body problem before continuing the process of constructing specific laws on neuroprotection. What we can demand is that, when considering the concept of mind, they take a broader anthropological point of view. A way of solving the apparent conundrum within the neurorights bill is to endorse the view that persons have minds, and that those minds are different from their bodies, while at the same time all of these dimensions exist in a unified psycho-physical whole. Minded persons are neither their bodies nor their neural activity. More importantly, the brain and the mind of a person are unlike one another in respect of the types of properties that each can possess. From this point of view, it is not the case that human beings have neural activity and that such activity is the mind of those persons. Rather, persons are subjects of experience and, as such, possess both mental and physical features. Persons are things that feel and think, but they also have shape, mass, and spatio-temporal location. As noted above, mental properties cannot be attributed to brains without facing mereological problems. Mental properties cannot even be attributed to parts of a person, but only to the person as a whole. As Lowe (2003, 16) suggests: "It is I who think and feel, not my brain or body, even if I need to have a brain and body in order to be able to think and feel." Furthermore, persons and brains have different persistence-conditions, that is, the conditions under which an object continues to survive as an object of its kind. While a brain will continue to survive as long
as its functional biological conditions are satisfied, it is not obvious that a person could survive the demise of her body and brain, and this is exactly what I mean by such different dimensions living in a unity (see Jonas 2001). It is fundamental for the neuroprotection debate to take into consideration the fact that the mental properties of persons are significantly different from the physical properties of the brain (Craig 2016), and that the type of being that enjoys mental properties is more complex than neurons firing. In order to avoid mereological and reductionist problems, and in order to make real progress in the task of protecting the human mind, legal frameworks should move from a concept of minds of brains to a concept of minds of persons.
References

Alivisatos AP, Chun M, Church GM, Deisseroth K, Greenspan RJ, McEuen P, Roukes ML, Sejnowski TJ, Weiss PS, Yuste R (2013) The brain activity map. Science 339:1284–1285
Andrews A, Weiss P (2012) Nano in the brain: nano-neuroscience. ACS Nano 6(10):8463–8464
Bennett MR, Hacker PMS (2003) Philosophical foundations of neuroscience. Blackwell, London
Chalmers D (1996) The conscious mind. Oxford University Press, Oxford
Craig JN (2016) Incarceration, direct brain intervention, and the right to mental integrity: a reply to Thomas Douglas. Neuroethics 9:107–118
Fernandez A, Sriraman N, Gurevitz B, Ouiller O (2015) Pervasive neurotechnology: a groundbreaking analysis of 10,000+ patent filings transforming medicine, health, entertainment and business. SharpBrains, San Francisco
Goering S, Yuste R (2016) On the necessity of ethical guidelines for novel neurotechnologies. Cell 167(4):882–885
Haggard P, Eitam B (2015) The sense of agency. Oxford University Press, Oxford
Ienca M (2017) Preserving the right to cognitive liberty. Sci Am 317(2):10
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(5). https://doi.org/10.1186/s40504-017-0050-1
Jackson F (1986) What Mary didn't know. J Philos 83(5):291–295
Johnson B (2021) Kernel Flow 50 sneak peek. https://www.bryanjohnson.co/articles/kernel-flow50-sneak-peek-ep-2
Jonas H (2001) The phenomenon of life: toward a philosophical biology. Northwestern University Press, Evanston
Koch C, Reid RC (2012) Neuroscience: observatories of the mind. Nature 483:397–398
López-Silva P (2019) Neuroethical concerns about neuroprotection law-making. In: Conference "¿Es hora de los neuroderechos?", Centro de Innovación, Pontificia Universidad Católica de Chile, Santiago de Chile
López-Silva P, Madrid R (2021) Sobre la conveniencia de incluir los neuroderechos en la Constitución o en la ley. Rev Chil Derecho Tecnol 10(1):49–72
Lowe EJ (2003) An introduction to the philosophy of mind. Cambridge University Press, Cambridge
Ministerio de Justicia de la República de Chile (2021) Código civil. Editorial Jurídica de Chile, Santiago de Chile
Nagel T (1974) What is it like to be a bat? Philos Rev 83:435–456
Nagel T (2013) Mind and cosmos. Oxford University Press, Oxford
Presidential Commission for the Study of Bioethical Issues (2014) Gray matters: integrative approaches for neuroscience, ethics and society, vol 1. Bioethics Commission, Washington, DC
Senate of Chile (2020) Bulletin N°13.828-19. https://www.diarioconstitucional.cl/wp-content/uploads/2020/12/boletin-13828-19-nuroderechos.pdf
Shein F (2013) Neuroscience, mental privacy, and the law. Harv J Law Public Policy 36(2):653–713
Sommaggio P, Mazzocca M, Gerola A, Ferro F (2017) Cognitive liberty: a first step towards a human neuro-right declaration. BioLaw J 3:27–45
Yuste R (2019) Everyone has the right to neuroprotection. Columbia NeuroRights Initiative. https://nri.ntc.columbia.edu/news/rafael-yuste-and-brain-hacking-everyone-has-right-neuroprotection-originally-spanish
Yuste R (2020) Si puedes leer y escribir la actividad neuronal, puedes leer y escribir las mentes de la gente [If you can read and write neural activity, you can read and write people's minds]. El País, 4 Dec 2020. https://elpais.com/retina/2020/12/03/tendencias/1607024987_022417.html
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551(7679):159–163
Chapter 3
The Unitary Sense of Human Being. A Husserlian Approach Against Reductionism

José Manuel Chillón
Abstract We are a whole: body and mind, nature and spirit; in short, we are persons. And this fact, so well supported by the most common experiences of our existence, cannot be undone by the insistent scientistic attempts to naturalize consciousness, which dismiss as philosophical illusionism any margin of exceptionality granted to what makes us human beings. The thesis sustained in this chapter is that, regardless of the novelty of neuroscientific tendencies that Husserl could not even have glimpsed, the research of the Moravian philosopher can still serve us, both in the analysis of their presuppositions and in that of their consequences. For this to be so, we must first show that the neurosciences fall within the positivist paradigm whose criticism motivated the emergence of phenomenology. It is a question, then, of analyzing how narrow and reduced the concept of experience handled by positivism is, in light of the analyses of genetic phenomenology and of passive synthesis which, in our opinion, continue to serve as an explanatory framework that makes possible a fruitful dialogue between neuroscience and philosophy.

Keywords Husserl · Neurophenomenology · Experience · Positivism · Crisis · Horizons of reason
J. M. Chillón, Department of Philosophy, Universidad de Valladolid, Valladolid, Spain. E-mail: [email protected]

3.1 Introduction. The Integral Experience of the Human Being

Human experience is an expression of the integrity of the human being. We have a body and we think. We have wounds and we suffer. We witness blissful events, and we rejoice. On the other hand, we feel anguish, and that state of mind affects important organic functions. We are excited and nervous, and we hardly feel like eating. A strong hormonal imbalance in the thyroid can cause intense periods of emotional lability. Certain neurotic processes of anxiety alter parameters that have nothing to do with the psychic order, as can be seen in a routine clinical analysis. These
are some of the examples that give us a clue that we are actually a systemic, organic and unitary whole. Regardless of the mode of connection, and however the pineal gland of the body-spirit interaction is resolved, we are a whole.1 The spiritualist and materialistic tendencies, in their various historical nomenclatures, seem to bend the common sense of our self with permanent reductive approaches. By taking the very act of thinking as the only indubitable bastion in the face of everything contingent—following the ingenuity of Descartes-, the former offer an unshakeable certainty at the cost of an irrecoverable truth in their solipsistic versions. Having as their objective to make explicit the traceability of theoretical, estimative or practical mental events, the latter end up naturalizing consciousness in a de-ontologization of itself, hence disregarding the specificity of the human person. Husserl calls into question this second tendency, since it threatens the unity of person. Why? Because in the twentieth century, the greatest danger to the possibility of philosophy was commencing: to consider that its scientific status was called into question from the very moment in which, what had to be known and how it had to be known, was already perfectly designed by the natural sciences and their overwhelming success. Thus, if consciousness can be explained as another fact of the realm of Tatsachen, with its same methods and rudiments, with the principle of causality as a model of scientific explanation, then there will be no place to justify the singularity and genuineness of the human (Husserl 1941).2 Subjects cannot dissolve into being nature—the founder of phenomenology had argued—since then what gives meaning to nature would be lacking (Husserl 1989). 
The omni-explanatory tendency of scientism is decidedly repellent to any attempt at a unitary understanding of human being, and therefore to any possibility of restoring the idea of philosophy as scientia omnium rerum. This is precisely because it refuses to understand the transcendental relevance of any way of overcoming the perspective of pure facts. The thesis that this chapter holds is that, regardless of the novelty of the neuroscientific tendencies that Husserl could not even glimpse, in the analysis of both their presuppositions and their consequences, the analysis of the Moravian philosopher can still help us. Are not still latent naturalistic assumptions behind certain neuroscientific approaches? To naturalize consciousness is to reduce its immense horizontic capacity, its constitutive mysterious dimension, and its determining constitutive faculty that gives meaning to the world. Behind any naturalistic tendency, there is a reductionist attempt that does not pay attention to the need of reflecting, for example, on the limits of technological interventions, on the unity of the person as the foundation and, in short, on the meaning of human existence and on fully human experience.3 Therefore, this 1
Human being is, then, a real unity of body and soul (Husserl 1989). Schaefer (2009, 13) has explained in this regard that Husserl is the last great representative of the idea of human exception, which consists in the fact that the human has the specificity of being able to transcend his own naturalness. This in turn seems to be a hindrance to the cartesian duality between “nature” and “spirit.” In short, it is an anti-naturalism that is held as a common thesis in his works from Logical Investigations, Philosophy as Strict Science or The Crisis of European Sciences. 3 Husserl’s famous conference of 1931, Phenomenologie und Antropologie (Husserl 1941) holds as a general thesis this same idea of rejecting any philosophical perspective that tends to naturalize 2
3 The Unitary Sense of Human Being …
21
is a decided attack on the headquarters of truth whose phenomenological relevance does not consist only in knowing what this thing is we call truth, but how it can be lived.4 In addition, naturalism, trying to resort to a type of original and founding experience, ends up not assuming that science (natural science as well, and neuroscientific disciplines in the broad sense too) emerge in a sphere constituted of meaning, in a horizon of sediments exceeding the unilaterally empirical type of approximation. Thus, it is not accurate to claim that every theory has a material foundation, an empirical basis which one must be referred to in order to find its justification, but rather to discover that both theories and their collections of verifiable data, as well as their protocols of investigative work, are in turn founded. They are always anchored in formations of meaning; in historically constituted genesis that, at the very least, warns that the original is not exactly what is considered as purely empirical. In fact, a variety of experimentation protocols are grounded on concepts taken from popular psychology, the natural attitude, or from other philosophical traditions, which causes that the experimental design cannot be as exact and empirically tested as the researchers themselves think (Gallagher 2012, 81). Consequently, the concept of experience is to be discussed, and that is the key. Due to this, in our view, a reductionism of the empirical takes place here in the two possible senses (objective and subjective) of the genitive. On one hand, there is a reductionism that goes hand in hand with the strict empirical point of view tending towards the accumulation of observations and to the collection of data that strengthen a certain notion of objectivity consisting of opening to the pure facts. 
On the other hand, a reduction of what the empirical means is implicitly assumed when it is claimed that the constitutive complexity of human experience becomes clear as soon as the neural mechanisms to which it is traced back are revealed. "Consciousness is a physical, biological phenomenon, such as metabolism, reproduction or self-repair, of an exquisite ingenuity in its operation, but not miraculous, not even mysterious" (Dennett 2006, 75). Nonetheless, phenomenology warns us that the capital philosophical experience is lived experience, that is, the experience in which consciousness finds itself giving meaning to the world in which it is already installed through its body. The human being is a body and thinks as the body that it is. It is the body that lives the world, and it is that through which all worldly experiences are constituted. It is this perspective of human experience as embodied experience that demands a much broader approach than any scientific reductionism. Besides being one more object of nature insofar as it is Körper, a content of intentional acts, the body as living body (Leib) has constituent functions: through it, we access the world, things, and others. The human experience relevant to phenomenology is, then, an embodied human experience. This program of radical naturalization of consciousness, as the brain's operations and specifically as the result of the combination of the computational skills of the higher regions of the brain that deal with the last stages of information processing, is also maintained by Dennett (Kandel 2007, 436). 4 The problem with naturalism, then, lies in its being shipwrecked in the skeptical deluge, allowing our own truth to disappear (Husserl 1970).
J. M. Chillón
profound experience of integrity between consciousness and the body, between me and others, between the subject and the world. Moreover, the question here is to see whether this global experience, this transcendental vitality, is assumed and respected by current anthropotechnical drifts, or whether some of their claims assume a partial ontology that reduces the complexity of the human being to a mere set of facts,5 with the consequent bias of any resulting theory, so that neither the concept of human experience nor the notion of the person is preserved. Searle and Nagel attributed the characteristics of unity and subjectivity to consciousness. Consciousness is determined by a discrete set of biological processes that can be accessed through scientific analysis. However, these philosophers came to hold that the totality of consciousness is more than the sum of its parts. Consciousness is therefore of a much higher complexity than the purported simplification of some neuroscientific perspectives allows (Kandel 2007, 439). Ultimately, it is a matter of accepting the progress of the sciences in tracing experience back to cognitive and mental events, while at the same time recognizing that human experience cannot be explained by this tracing alone, as any naturalism pretends. Should it not suffice, then, to affirm the irreducibility of subjective experience to the data of neuroscience, without this implying that the promising advances of these sciences are ignored?6
3.2 Naturalism and Reductionism

It is not an affront to philosophy that the neurosciences and other cognitive sciences take as their objective the description of processes of knowledge and the explanation of the functioning of the mind. To be able to make explicit the internal processes of thinking from empirical parameters linked to research on the functioning of neuronal synapses is an obvious manifestation of the progress of science. The same holds for the explanation of how certain brain areas are activated depending on the nature of concrete conscious operations or different experiences. Where, then, does the critical problem for phenomenology lie? In the very statute of scientificity of these sciences, and in how the methodological approach to consciousness is biased when it takes as its objective the dissolution of the great existential problems. For instance, the risk of freedom, the experience of emotion, the overflow of love, or anguished suffering
5
That statement from the Crisis (Husserl 1970) is memorable: "Merely fact-minded sciences make merely fact-minded people." 6 "Genetic and developmental processes determine the connections between neurons, that is, which neurons make synaptic connections with which others, and when they do so. But they do not determine the tenacity of those connections. Tenacity—the long-term effectiveness of synaptic connections—is regulated by experience. This conception implies that the potential for many of an organism's behaviours is something intrinsic to the brain. To that extent, it is subject to the control of genes and development. Nonetheless, a creature's environment and learning alter the effectiveness of the pre-existing pathways and thus enable the expression of new behavioural profiles" (Kandel 2007, 237).
could all have a describable neurological path. Thus, on certain neuroscientific assumptions, the great abysses of human experience, once approached with the rigor of the empirical method of the natural sciences, will be definitively clarified and stripped of their claims to mystery.7 Does describing, then, and, in the event of a condition, being able to intervene surgically or pharmacologically, amount to dissolving? Is there not a metaphysical error here about what the personal being of the subject means? Husserl, in fact, had already clarified that the essence of naturalism was the consideration of nature as a unity of spatio-temporal being governed by exact natural laws. Naturalism—the philosopher maintains—tends to treat everything as nature, falsifying the meaning of every domain that resists this type of approach (Husserl 2002). Accordingly, there are two reductive strategies: the first has to do with content, considering consciousness exactly explained once the complex neural networks of the brain are laid bare. The brain is thus considered the organ responsible for all cognitive experience. The second, on the other hand, refers to the methodological strategy of approaching the problem of consciousness as one of many facts to be explained through the scientific arsenal. If the first strategy reduces the field of the experience of consciousness to pure conscious acts, fundamentally of a gnoseological type, the second annuls the specificity of the human precisely because of its professed ontological monism. The problem of naturalism is thus an epistemological problem (about how things should be known and what knowledge is) and a metaphysical problem (concerning the claim that reality is composed only of the type of entities accessible to scientific methods). From this scientism, the integrity of human experience is lost from view.
This happens when a particular neuroimaging observation, or the result of a particular test, is extrapolated as conclusive data. Are we going to deny, for example, that the production of language has to do with the activation of the brain regions where Broca's and Wernicke's areas are located? In any case, it must be insisted that language determines and constitutes a whole linguistic experience whose truth is not entirely explained by this naturalizing tendency. In the experience of language itself, not only are the conscious activities of the individual, such as speaking, naming or expressing oneself, amalgamated and structurally interrelated, but so is a whole complex network of lived experiences, acquired habits and social conventions. These latter fall within what Husserl calls the passive syntheses determined by the genesis of consciousness. Aristotle, in some way, had already advanced similar thoughts: the experience of the production of meanings had to attend not only to a purely internal process of the subject, but also to a whole hermeneutical experience integrating individual faculties, conventional aspects, and an evident guarantee of intersubjectivity. This is so in order for all legein to be semainein. 7
These are some of the theses of Wheatley (2015), in which it is stated that thought and action are physically produced in neural activity. He further asserts, as an uncontroversial fact, that these neurons cannot individually or collectively choose whether or not to send electrical signals, with the obvious problems for free will, whose assertion—Wheatley maintains—does not seem to respect physical laws.
The idea that the faculty of using meaningful language to express and communicate is located and specified in only one area of our organism speaks only in favour of the integrity of the human being. Hence, language, which is a profoundly human experience, responsible for our openness to the world and to others, which also shapes our deepest convictions and values and, furthermore, intertwines our coexistence and social agreements, has its material and bodily basis in the brain region that originates the possibility of speech. This is its necessary condition or, in other words, its condition of organic possibility. Nevertheless, the lived experience of language is not actually explained by improved neuroimaging techniques or by real-time computational tracking of the speech process. Any analysis of human experience, such as this one of language, needs to be understood from an integral idea of the human being. The experience of language is just one example of how decisive the neurological and neuropsychological approach to the problem is. This is evidenced, for instance, by its essential role in more effective rehabilitation in cases of cerebrovascular accident that compromise the faculty of language. Yet it is worth noting that the experience of language is not the only thing that can be said about the being that each one is and her horizonal experience of the word. From the field of philosophy, it seems necessary to let neuroscientists know that what they are dealing with is more than an organ and its multiplicity of operations, or an instrument and its functions: it is the realm in which truth is lived, the depth of existence is experienced, and freedom is willed. This is something that no empirical methodology can hope to exhaust in a single formula, image or diagnosis.
Moreover, it is in this sense that the so-called "hard problem" of consciousness was raised, as Varela points out: the impossibility of explaining phenomenological performances from neural data alone. In sum, science has the first word, but not the last. Other examples can help to clarify these points. In the psychic sphere, drugs with active principles such as fluoxetine, paroxetine, escitalopram or sertraline, to name a few, have long been established to intervene in the reuptake processes of serotonin, norepinephrine and dopamine, assuming the psychiatric relevance of the monoaminergic hypothesis. Accordingly, maintaining adequate levels of these neurotransmitters by inhibiting their excessive reuptake seemed to bear an obvious cause-effect relationship to emotional stability. This shows that the chemistry behind these drugs works, but not on its own, which is why professionals urge psychological-behavioural therapies to complement pharmacological treatment. Such therapies should help to re-semanticize those vital areas that seem to have become meaningless. Additionally, new research on the intervention of these drugs in the neuroplasticity of the brain suggests that the environment is of decisive importance in increasing the effectiveness of such drugs on personal well-being, thus going beyond the purely chemical approach.8
We have referred to pharmacological use in psychiatry precisely because of the relevance it has in the context of this book on technological interventions in the brain. However, it is evident that all medicine measures the quality of its praxis precisely by the need to care for the person as a whole.
Ultimately, any example suffices to sustain the integrity of the person from the unity of human experience, since every conscious act is a way of "being aware of…". Any purportedly objective treatment from the scientistic perspective leaves out the significance, the intentional opening of consciousness to the world, and ultimately everything in human experience that pertains to personal fulfillment. Not only this, but it also leaves out both the surrounding world of things and the entire conglomeration of meanings constituted by life in common with others. "What we do not understand is the inescapable problem posed by consciousness: the mystery of how neural activity originates subjective experience" (Kandel 2007, 440). A few paragraphs above, we mentioned reductionism in terms of content, as if the mind were nothing more than a kind of bundle of essentially representational conscious processes, that is, processes fundamentally defined by their relationship with what is outside of consciousness. If that were all, our sole mission would be to perfect quantitative, measurable and objective introspection techniques. Nevertheless, even if this first reduction of the mind to its epistemic power were accepted, it would still be easy to realize the partial competence of the empirical approach. Let us think of acts of a perceptual type, the simplest. Perception not only confronts the subject with the world from the most basic sensible immediacy; it is also already incorporating time in some way and, in this sense, lived experience. One may say, for example, that I perceive the computer on which I write, and I can anticipate what the perception of this object will be if I close it, if I put it on its side, or if I myself turn around to look at it from another angle.
Perceptions occur within the subjective experience that gives them meaning, contextualizing each one of them in a long series of observations that make the world meaningful, not only through retained experience (past experience) but also through anticipated experience, expectation. Certainly, conscious acts do not occur in a pure now, i.e., in a totally new present. Each conscious now, each experience, welcomes the prehistory of other "nows" that have already happened and the prospect of the "nows" yet to come. In Damasio's (1996, 255) brief words, "the present is never here. We are always late to consciousness." It can be said that investigating the simple mental fact of perception through sophisticated techniques will offer much interesting data for the neurosciences and for the diagnosis of certain conditions. However, these techniques cannot touch the fact that the punctual perceptual act constitutes and, at the same time, is constituted by perceptual experience, understood as historical experience that also gives meaning to the world.9 This experience constituted in history is not only individual but also intersubjective, since the perceptual act is integrated into a formation of collective and cultural meaning.10 The key point that phenomenology makes in this 9
In addition, as Husserl explained in the Fourth Meditation, the self develops a constant style in the performance of her acts, with an uninterrupted unity of identity, a personal character. In this way, each punctual act can neither be studied nor understood in its purely individual and present occurrence. 10 Although we cannot enter into this field in detail, it is necessary to point out that the neuroscientific finding of mirror neurons posed a unique challenge to the phenomenological understanding of intersubjectivity. "Mirror neurons are cells in the premotor cortex of humans and monkeys,
regard consists in showing that all human experience, being subjective, is not subjectivist. In other words, all experience carries within itself its transcendental relevance, precisely because reason is not a mere factual faculty but "an essential and universal structure of transcendental subjectivity" (Husserl 1982, § 53). Our conscious experience, whatever it may be, contains much constituted experience, and not just constituting experience. Human life is an experience installed in a subsoil, in a horizon of sediments of meaning. It could be said that it unfolds in a Lebenswelt, in a wide pre-reflective field to which we are referred by what Husserl in Ideas II (Husserl 1989) calls operative intentionality (fungierende Intentionalität). It is no longer only a matter of the inherent tendency of our conscious acts to give themselves to the world, of that intentionality for which the empiricist assumptions of naturalism cannot account. It is an entire structural, unconscious, passive framework that operates within us as an infrastructure of meaning in our ways of being intentionally referred to the contents of our noetic acts. Hence, meaning does not occur only in the noema, as an intentional performance of noetic acts. It occurs even in the way in which noetic-noematic intentionality operates, being already constituted by presuppositions of meaning that exceed any reductionist attempt.11 Furthermore, all of this becomes much more complicated if we continue to analyse the representational capacity of consciousness, and above all if we manage to go deeper into the other, extra-epistemic missions of consciousness, such as the vast field of emotions, feelings or fantasy. This is what Varela wanted to warn of when he put into circulation the concept of enaction: it refers precisely to a non-representational cognitive mode, one that cannot be captured simply as a manner of information processing.
This notion was understood as the genuine way in which consciousness makes emerge both the identity of the subject and the configuration of its world in an evolutionarily forged structural coupling. Consequently, authors such as Varela, Vermersch, and Depraz, among others, have ended up speaking of neurophenomenology as a promising complementary line of research between the cognitive sciences and phenomenology. "Neurophenomenology is the field that tries to marry modern cognitive science and a disciplined approach to human experience, thus placing itself in the lineage of the continental tradition of phenomenology" (Varela 1996, 330).12 which discharge when one performs certain actions and are also active when one observes the same or similar actions being performed by others. Since their discovery in 1996, mirror neurons have attracted considerable philosophical and scientific attention, due largely to their possible implications for theories of intersubjectivity" (Ratcliffe 2006, 329). 11 For this reason, the description of an activity, of a concrete praxis, requires investigating conscious activity insofar as it perceives itself unfolding in an operative and immanent mode, both habitual and pre-reflective. This is the idea collected in Depraz et al. (2003, 42) about how cognitive reflection starts from pre-reflective consciousness: pre-discursive, pre-noetic, ante-predicative, tacit, pre-verbal, pre-logical, or non-conceptual. A highly suggestive initial approach to these questions can be found in Ordóñez (2015). 12 This is an example of an integrative and interdisciplinary trend that led this phenomenological perspective to turn to other traditions of reflection on human experience, such as the oriental one (cf. Tenzen 2006). Varela and Shear (1999) assumed the need to integrate the introspection issues of psychology
3.3 The Approach to the Experience of Consciousness as a Lived Experience

The fact that the complexity and extensiveness of human experience cannot be explained within the rigid scheme of the theory of knowledge, as if our entire experience of the world were a way of making explicit the unidirectional connection between what is immanent and what is transcendent, is what forces phenomenology to go back precisely to the moment prior to this epistemic dissection: the realm of lived experience. We should remember the philosophical fertility of this discovery in the work of Franz Brentano. The peculiarity of psychic acts was given by their apodicticity, thus making philosophy focus on access to the evidence of immanence and on the discovery of the bilaterality of all experience. This is what Brentano called the wonder of wonders: the intimate correlation between the psychic act and the physical phenomenon toward which it tends. We could assume, in a very initial and simple way, that the maturing of phenomenology begins as it advances beyond a first Platonism. This Platonism assumes the existence of domains of objectivity and absolute ideality, and considers that the important thing in philosophy is the analysis of the nature of the conscious acts to which these ideas are given, in other words, to which these essences come to open themselves. This finding is what allows the discovery of the constituent function of consciousness in the noematic direction of all experience which, in turn, as consciousness situated in the world, is always also constituted.
In any case, what is decisive in this phenomenological understanding of lived experience is how the apodicticity of immanence ends up moving from the psychic act (as empirical psychology would have it) to the content of the act itself, to the world correlatively given and constituted as sense in the determination of what subjective experience means, not as concrete experience, but as transcendental experience, as transcendental vitality.13 How is this realm of the immanent to be accessed? Husserl's phenomenology condensed into the notion of reduction the adequate method for achieving this access. It is a question of redirecting oneself to the sphere from which the sense of the world for consciousness emanates. To reduce oneself was nothing more than to remain only with the transcendental performance of the world, with what the world means for my life, with what it is for every subject to have experience of the world. And for this, phenomenology had to be willing to change course toward a new attitude that would allow precisely that access to an original state where it is discovered how with the contributions of phenomenology, mainly around reduction, and the Buddhist and Vedic traditions. "Phenomenological accounts of the structure of experience and their counterparts in cognitive science relate to each other through reciprocal constraints" (Varela 1996, 343). These efforts were about assuming the scientific relevance of the insubstantiality of the self, the end of the idea of the solid, centralized and unitary self, as directly assumed by Buddhist practices. 13 With regard to the question of personal identity and the non-impersonal meaning of existence, and particularly views that try to establish the difference between the notion of the person and the identity of the self, a second-person phenomenological approach is needed. For this kind of view, see Crowell (2021).
One can say that all Sein is reduced to a Bewusstsein, that is, all being ends up being a sense of being (San Martín 1994, 259).
the world is there for me "not only as a world of mere things, but also with the same immediacy as a world of objects with values, a world of goods, a practical world" (Husserl 1983, 53). What is first and original is no longer the obvious fact that consciousness is open to the world, but the prior awareness of conscious acts, accessed through an attitude alternative to positivism. Phenomenology advances by moving from an understanding of the self as a mere bundle of experiences, or even as a pure, empty monadological container, to a notion of the self as a complex of lived experiences. These lived experiences, whose ingredient immanence refers to a transcendent immanence, broaden the field of evidence from the cogito to the cogitatum, to the world that is for consciousness, independently of its verifiable transcendent existence. Naturalism precisely denied any transcendental perspective, with the evident consequence of the dogmatic reduction of consciousness to the description of conscious acts mechanically considered. In this way, the motivation Husserl already had in 1898 becomes clear: the intimate correlation between consciousness and the world. If we follow this same reflective line, we can perhaps see that phenomenology, from the beginning, demands a complete conception of the human, which an empirical approach cannot give. Accordingly, having an experience is, in a way, having the world, living the world, knowing that you exist in the world. Neurophenomenology, in its laboratory practice, assumes the fertility of the phenomenological method both in its epoché (to the extent that beliefs or theories about experience are suspended) and in its reduction (Gallagher 2012, 80). As Husserl (2002) explains, every way of living involves taking a position. To take a position is to know oneself linked to ideas that claim the status of rules of absolute validity.
This is so mainly in accordance with the demands of reason, which determine what a life lived in responsibility to truth means. Yet this can happen only if the value of ideas, of the deep convictions that have moved people to open up to them and to surrender to them, is assumed. Can the idea, then, be ontologically distorted into fact? Is not the understanding of reality as an amalgam of facts itself part of a meagre metaphysical position? Are not ideas a part of reality? Are not those convictions real which make human beings advance toward the possible, assuming the phenomenological primacy of possibility over actuality? All these questions point to what philosophy cannot deny: the transcendental work that offers the possibility of investigating what is valid for every human being and for every understanding of subjective experience. Every individual experience of a concrete person, situated in a specific moment and anchored in a concrete history, lies within the transcendental order, that is, in the order of what can hold for any other experience. From this point of view, the transcendental abandons, so to speak, the absolute and residual monadological status typical of the idealistic understanding of consciousness, and moves toward a conception of intersubjectivity, toward a "we" whose structures for the production of meaning (the entire scope of the spiritual) imply reciprocal and empathetic recognition. Therefore, one may ask: doesn't neuroscientific naturalism actually end
up undermining the very concept of experience? And, without the peculiarity of experience, where will the human being find the meaning of his life as a concrete being, and of humanity as a collective project?
3.4 Conclusion

The phenomenological perspective helps to secure the progress of the sciences without losing sight of the danger of scientism, which is undoubtedly the danger of a bankrupt rationalism that has gone astray (Husserl 1970). This is no longer a mere theory, since what can be evidenced through Husserl's writings during the interwar period in Europe, in the midst of Nazism's rise, is an epoch of existential desolation caused by a way of understanding the world and oneself that had renounced the infinite task lying ahead of reason. It is precisely this infinite task that is better suited to the horizontality of reason than to the hardening of rationalism in which, without a doubt, certain orientations of the neurosciences participate. This is because the positive sciences are founded on a relative, unilateral rationality that keeps its necessary reverse ever present: full irrationality (Husserl 1977). However, there is an undeniable truth in naturalism that should motivate a reciprocal relationship with phenomenology, a twofold complementary path worth exploring (Gallagher 2012). This relationship is, firstly, about knowing to what extent some aspects of subjective experience, as described by phenomenology, have an empirical and neuroscientific basis. Hence the undeniable value of scientific progress, with its impressive advances in the early detection of neurological diseases, the therapeutic monitoring of certain conditions, and pharmacological research. Undoubtedly, transcendental experience, insofar as it is founded on the real and concrete experience of consciousness, should count on the contribution of the neurosciences which, perhaps, in some cases can even serve as a criterion for correcting possible phenomenological analyses (Churchland 2007).
Secondly, it is about insisting that there is a whole phenomenological tradition which, in the words of Gallagher and Zahavi (2013; see also Zahavi 2010), has probably already offered very relevant contributions that neuroscientific research must consider. These can prove decisive for the necessary comprehensive approach to human experience. The field of consciousness, long reserved almost exclusively to philosophy, has begun to be explored by the cognitive sciences as a whole, with very decisive and fundamental contributions. These contributions, we insist, must recognize that they are not born in a vacuum, since there is a whole history of thought whose centuries-old reflections must be taken up. In short, phenomenology has reached consciousness through the exploration of experiences whose phenodynamic treatment must be complemented, on the neurodynamic side, with complex neurophysiological data (Ordóñez 2015). Shared and
interdisciplinary work (phenomenology, philosophy of mind, neurology, psychoanalysis, psychotherapy, and so on) is, now more than ever, a demand for intellectual honesty.
References

Crowell S (2021) On What Matters. Personal identity as a phenomenological problem. Phenomenol Cogn Sci 20:261–279
Churchland P (2007) Neurophilosophy at Work. Cambridge University Press, Cambridge
Damasio A (1996) El error de Descartes. Universidad Andrés Bello, Santiago
Dennett D (2006) Dulces sueños. Obstáculos filosóficos para una ciencia de la conciencia. Katz, Madrid
Depraz N et al (2003) On becoming aware. A pragmatics of experiencing. John Benjamins Publishing, Amsterdam
Gallagher S (2012) On the possibility of naturalizing phenomenology. In: Zahavi D (ed) Contemporary phenomenology. Oxford University Press, Oxford, pp 70–94
Gallagher S, Zahavi D (2013) La mente fenomenológica. Una introducción a la filosofía de la mente y a la ciencia cognitiva. Alianza, Madrid
Husserl E (1941) Phänomenologie und Anthropologie. Philos Phenomenol Res II(1):1–14
Husserl E (1970) The crisis of European sciences and transcendental phenomenology. Northwestern University Press, Evanston
Husserl E (1977) Formal and transcendental logic. Martinus Nijhoff, London
Husserl E (1982) Cartesian meditations. Martinus Nijhoff, London
Husserl E (1983) Ideas pertaining to a pure phenomenology and to a phenomenological philosophy. First Book. Martinus Nijhoff, London
Husserl E (1989) Ideas pertaining to a pure phenomenology and to a phenomenological philosophy. Second Book. Kluwer, Dordrecht
Husserl E (2002) Philosophy as rigorous science. In: Hopkins J, Drummond J (eds) The New Yearbook for phenomenology and phenomenological philosophy II. Routledge, London, pp 249–295
Kandel E (2007) En busca de la memoria. El nacimiento de una nueva ciencia de la mente. Katz, Madrid
Ordóñez S (2015) La experiencia subjetiva en la investigación de la neurociencia cognitiva. El caso de la neurofenomenología. Open Insight VI(10):135–167
Ratcliffe M (2006) Phenomenology, neuroscience, and intersubjectivity. In: Dreyfus HL, Wrathall MA (eds) A companion to phenomenology and existentialism.
Blackwell, London, pp 329–346 San Martín J (1994) La estructura del método fenomenológico. UNED, Madrid Schaeffer JM (2009) El fin de la excepción humana. Fondo de Cultura Económica, Buenos Aires Tenzen G (2006) The Universe in a single atom: the convergence of science and spirituality. Morgan Roads Books, New York Varela FJ (1996) Neurophenomenology: a methodological remedy for the hard problem. J Conscious Stud 3–4:330–349 Varela F, Shear J (1999) First person methodologies: what, why, how? J Conscious Stud 6(2–3):1–14 Wheatley T (2015) The moral brain: a multidisciplinary perspective. MIT Press, London Zahavi D (2010) Phenomenology and the problem of naturalization. In: Gallagher S, Schmicking D (eds) Handbook of phenomenology and cognitive science. Springer, Dordrecht, pp 2–19
Chapter 4
Ethics and Neuroscience: Protecting Consciousness
Arran Gare
Abstract One way of developing an ethics for neuroscience is to extend the Hippocratic Oath as an ethical code, taking the health of the patient as the first consideration, and maintaining utmost respect for human life. In this paper I will defend this approach but argue that respect for human life is far more problematic than it seems. Respect for human life would appear to entail respect for the autonomy of people as conscious agents, but mainstream reductionist science and those who fund it enframe humans as standing reserves to be exploited efficiently. Utilizing the perspective provided by philosophical anthropology against such science, I argue that the Hippocratic Oath should be extended to embrace the Kantian imperative to treat humanity always as an end in itself, never as a mere means, implying that neuroscience and its associated medical practices should take as their end the maintenance and augmentation of human autonomy.

Keywords Ethics · Neuroscience · Consciousness · Hippocratic oath · Philosophical anthropology · Kantian imperative · Autonomy
4.1 Introduction. Defining the Problem

The Hippocratic Oath is a code of ethics defining correct behaviour by physicians, to which they are required to commit themselves before being accepted into the profession. It was the first code of ethics for any profession. While originating in Ancient Greece, it subsequently evolved, but the current code still embodies many of the core injunctions of the original. The most widely accepted current form is the 2006 Declaration of Geneva of the World Medical Association, to be taken before being admitted as a member of the medical profession. The most important of its injunctions are: "The health of my patient will be my first consideration" and "I will maintain the utmost respect for human life." The first is a rewording of the injunction
A. Gare (B) Department of Social Sciences, Swinburne University of Technology, Melbourne, Australia
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_4
from Epidemics, Book I, of the Hippocratic school: "Practice two things in your dealings with disease: either help or do not harm the patient." This was later simplified to the most basic precept of the Hippocratic Oath: "First, do no harm." It is one of the principal precepts of bioethics, taught to all students in healthcare, that given an existing problem, it may be better not to do something, or even to do nothing, than to risk causing more harm than good.

The development of neurotechnology could be subsumed with little modification under the Geneva formulation of the Hippocratic Oath, extending this precept to a commitment not to damage people's psychological health. On this precept, it should be very clear that the old practice of lobotomizing supposedly mentally ill patients (severing connections in the brain's prefrontal cortex and leaving them emotionally shallow, lethargic and unable to concentrate or take initiative, which made it easier to manage chronically agitated, delusional, self-destructive, or violent patients) should have been ruled out while it was being practiced. Nowadays, there are far more interventions in the functioning of the brain available, and it is less clear what damage to psychological health would mean. Furthermore, with the development of neurotechnology, interventions in the future could go well beyond treatment of patients with neurological disorders. They could be used to "improve" ordinary people. This makes it all the more important to characterize psychological health.
4.2 The Challenge of Mainstream Science to Ethics

The biggest problem is that, as far as modern science is concerned, mind and consciousness are problematic concepts, while "common sense" views are often vague and contradictory. This makes the notion of psychological health problematic. Many sciences, with the support of a good many philosophers, are committed to explaining away the mind and consciousness, or only allowing that consciousness is an epiphenomenon. A whole tradition of philosophy, originating with Thomas Hobbes, has striven to understand humans as nothing but complex machines. This has come to be identified with the scientific view of humans and has had a major influence on psychology.

With the development of Neo-Darwinism, molecular biology and information science, humans, like other forms of life, have been characterized as machines for reproducing genes, where genes are understood as strings of DNA encoding information. The brain is then seen as an information processor, that is, essentially a computer. Humans can then be characterized as information-processing cyborgs, with the brain being nothing but a carbon-based computer. This is the conception of humans now being promoted by transhumanists, who argue that the extension of humans through technology should be welcomed as an extension of what we are as humans, and, in a more extreme form, by the posthumanists, who argue that the whole idea of the human was a temporary aberration and should be abandoned. From this perspective, if there is a place for health, it would amount to not hindering the efficiency with which human organisms are able to process information and act
efficiently on the basis of this information, and, if possible, augmenting this efficiency. If parts of the body, including the brain, are seen as defective, there should be no problem with replacing them with artificial parts. Just as it is possible to provide amputees with artificial limbs, or people with defective hearts with artificial hearts, if the brain is defective in some way and cannot be repaired, or was defective to begin with, it should be possible to replace part of it with prosthetic parts. Some proponents of this view of life argue that in future it will be possible to download minds onto computers. If this is the case, it might be possible to replace the whole of people's brains with prosthetic brains, not only repairing defects but greatly augmenting their power to retain and process information. Ordinary people will be able to far surpass the greatest chess masters of the present, and will be free of the emotions which at present interfere with their efficiency.

If these arguments are correct, then on this conception of humans there should be no problem with the traditional concerns about modifying the brain, such as concern with the effects of lobotomizing patients to address their mental disorders, of electroconvulsive and insulin shock therapy curing depression by destroying memories, of cutting the corpus callosum to cure epilepsy, or of modifying people's moods with chemicals so they will be content with their current life. With the conception of the brain promoted by information scientists, neurotechnologists are entirely justified in attempting to modify people's brains, possibly by removing bits and adding artificial components, in order to make them conform to social conventions and think and act more efficiently. In fact, such procedures could be defended on the grounds that they will make humans more competitive with the robots that will be manufactured incorporating new advances in artificial intelligence.
The lesson that should be learnt from this is that the code of ethics to be adopted in neurotechnology depends almost entirely on how the mind, the brain, and their relationship are understood. At present, it is reductionist science, culminating in the mechanization of the mind by cybernetics and information science, that is taken to be the cutting edge of science and is being embraced not only by scientists but also by philosophers.

However, this raises another issue. Is this triumph of cybernetics and information science due to their having proved themselves the most promising research program, or is it because science itself is being corrupted? Funding comes from governments and, increasingly, big business, which overwhelmingly fund the kind of science that will facilitate increased control over nature and people, to advance military technology and/or generate more profits for corporations. The implicit goal is to replace humans as much as possible to reduce war casualties and labour costs, and to control or eliminate people who no longer have a place in this brave new world. There are now a number of works showing this to be the case, with governments forcing academics to obtain their funding from business corporations to ensure that it is only this kind of science that is funded. If this is the case, what is required is not only a code of ethics for neurotechnology, but a code of ethics for science itself to prevent its corruption.

But then the problem could lie not just in the corruption of science, but in science as such. The commitment to explanation involves a commitment to reductionism, since explanations imply showing that appearances are nothing but the effects of
something else. Following this logic, the ultimate explanations will be in terms of the basic existents of the universe. These used to be thought of as elementary particles or force fields, but information has now been added to these. This trajectory and its consequences were foreseen by Martin Heidegger. As he wrote in The Question Concerning Technology (Heidegger 1977, 21): "Modern science's way of representing pursues and entraps nature as a calculable coherence of forces. Modern physics is not experimental because it applies apparatus to the questioning of nature. The reverse is true. Because physics, indeed already as pure theory, sets nature up to exhibit itself as a coherence of forces calculable in advance, it orders its experiments precisely for the purpose of asking whether and how nature reports itself when set up in this way."

While initially the subject was privileged as a non-objective being in control of science, it ends up being dissolved by objective science. As Heidegger (1977, 152f.) wrote in The Age of the World Picture: "In the planetary imperialism of technologically organized man, the subjectivism of man attains its acme, from which point it will descend to the level of organized uniformity and there firmly establish itself. This uniformity becomes the surest instrument of total, i.e., technological, rule over the earth. The modern freedom of subjectivity vanishes totally in the objectivity commensurate with it." The rise of cybernetics and the triumph of information science committed to total control of the world is the inevitable outcome. As Heidegger (1978, 375f.) observed in The End of Philosophy and the Task of Thinking: "No prophecy is necessary to recognize that the sciences now establishing themselves will soon be determined and steered by the new fundamental science which is called cybernetics. [...] For it is the theory of the steering of the possible planning and arrangement of human labour.
Cybernetics transforms language into an exchange of news. The arts become regulated-regulating instruments of information." So long as we accept this conception of science, the idea of a code of ethics for anything, let alone a code for neurotechnology, is problematic. If a code of ethics is to be defended for anything at all, it is necessary to re-open the questions: What are humans? And what is science?
4.3 Philosophical Anthropology, the Humanities and Post-Reductionist Science

These questions cannot be answered from within science by itself. They can only be answered with reference to the humanities. Since Plato, the question of what humans are has always been at the centre of philosophy and the basis of the humanities. It was central to Aristotle's philosophy, and it was central to Hobbes' philosophy in his effort to replace Aristotle's conception of humans. While Hobbes' philosophy was entrenched in culture through the scientism that he had defended, implying that only mechanistic science produces genuine knowledge, his work problematized the subject and subjective experience. While empiricists, granting a place to sense experience, attempted to uphold scientism, their efforts to do so were undermined by
their assumptions. In the last paragraph of his book An Inquiry Concerning Human Understanding, Hume (1953, 173) concluded: "When we run over libraries, persuaded by these principles, what havoc must we make? […] [L]et us ask, does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion." This injunction would require the reader to cast Hume's own book into the flames.

It was in response to such work that Kant was inspired to make philosophical anthropology the focus of his philosophy. In his Introduction to Logic (2005, 17), published in its final form in 1800, which guided his critical philosophy, Kant proclaimed that philosophy in its cosmic sense "is the only science which has a systematic connection, and gives systematic unity to all the other sciences." It can be reduced to four questions: "What can I know?", "What ought I to do?", "What may I hope?" and "What is man?", and Kant concluded that "all these might be reckoned under anthropology, since the first three questions refer to the last." It should be noted that while philosophical anthropology is here made central to philosophy, it is inseparable from other domains of philosophy. If the question "What can I know?" can only be answered with reference to philosophical anthropology, the claim of philosophical anthropology to supply knowledge presupposes an answer to the question "What can I know?" Similarly, to engage in efforts to achieve such knowledge in order to work out "What ought I to do?" already presupposes that we know what we ought to do: engage in such efforts. The focus on philosophical anthropology made these interconnections clear, and appreciation of this was central to all Kantian, neo-Kantian and post-Kantian philosophy, including hermeneutic philosophy and phenomenology.
What Kant showed was that the conception of humans put forward by Hobbes and the empiricists was too impoverished to account for the possibility of science. To account for science, we have to recognize the creative role of the subject in perception and in acquiring knowledge, requiring much more robust notions of imagination, reasoning and agency than the mechanistic view, and the empiricism it engendered, could countenance. It is also necessary to accord a place to the human capacity for autonomy, without which all apparent beliefs would have to be viewed as epiphenomena of physical processes, no better or worse than any other beliefs, except in so far as they provide an advantage in the struggle for survival by what are now characterized as "gene machines," machines for reproducing DNA.

Philosophical anthropology has been the thread running through what analytic philosophers deride as "continental philosophy," having been developed by Kant's students, Herder and Fichte, then by the Early Romantics and Idealists such as Hegel, then through to hermeneuticists such as Dilthey, neo-Kantians such as Ernst Cassirer, the pragmatists, and many of the phenomenologists. These neo-Kantian and post-Kantian philosophers emphasised the essential social nature of human consciousness: that humans only develop the capacity for freedom through viewing themselves from the perspective of others and through being formed by their cultures. Generally, they promoted an ethics based on the notion of mutual recognition of each other's freedom, self-realization as participants within communities, and recognition of the intrinsic value of life.
Friedrich Schelling was exemplary in this regard, arguing that humans, so conceived, have to be understood as having evolved within nature. If there is a clash between this conception of humans and Newtonian physics, then physics will have to be transformed. Accepting Kant's argument that we organize our experience through imagination and concepts, but rejecting Kant's claim that transcendental deductions can show that we have to accept the concepts of prevailing physics, he argued that we can criticise and replace defective concepts and thereby bring nature, and humanity as part of nature, to a higher state of consciousness of itself through us. To this end, he argued for a philosophical physics in which activity, later characterized as energy, is fundamental, and characterized matter in terms of forces, arguing that this new physics would make magnetism, electricity, light, and the relationships between them intelligible. He also argued for the development of new mathematics adequate to this more dynamic view of nature.

On the basis of these concepts he argued for an evolutionary cosmology granting a place to emergence through the limiting of activity. Emergent entities might appear as objects, for instance crystals or chemicals of various kinds, but these, Schelling argued, should be seen as products of the activity of opposing forces achieving a balance. They are emergent, and to some extent immanent causes of themselves, and this makes it impossible to explain them as merely the effects of their environments and constituents. In chemistry, these opposing forces are now referred to as valences, which generate molecules of varying complexity and stability. Schelling characterized the distinctive characteristic of living beings as processes that must actively maintain their form while interacting with their environments, so that these environments are defined in relation to them as their worlds.
With this characterization of life, it was then possible to characterize and explain the distinctive characteristics of humans as essentially social, self-conscious beings living in culturally constituted worlds, capable of understanding their own history within the context of the history of nature, and of reflecting on and transforming their cultures. In all cases, living beings, including humans, are inseparable from their environments, but are to some extent the immanent causes of themselves.

These are the ideas which triumphed with the development of thermodynamics and the field theories of electromagnetism of Faraday and Maxwell, with the development of chemistry, and then with relativity theory showing that matter is really a form of energy. While mainstream biology is reductionist, reductionism has been shown to be incoherent (matter can't evolve) and is strongly challenged by holistic ideas associated with systems theory, including the theory of complex adaptive systems and anticipatory systems theory, process metaphysics, hierarchy theory, biosemiotics, and efforts to account for consciousness using quantum field theory (Vitiello 2002; Ho 2004, 228ff.). All these are part of the anti-reductionist tradition of thought and the research program inspired by Schelling and those he influenced (Gare 2013). These are the forms of thinking being advanced in modern science that are consistent with work in philosophical biology and philosophical anthropology. Advances in science have produced what Ilya Prigogine and Isabelle Stengers (1984, xxixf.) called "the new alliance" between science and the humanities.
4.4 The Clash Between Reductionist and Anti-Reductionist Science

Most scientists do not accept this post-Newtonian research program, however. They still promote particle physics or string theory rather than accept the advances in quantum field theories, and claim that statistical mechanics as developed by Ludwig Boltzmann has displaced thermodynamics. And they fail to appreciate that modern chemistry and nuclear physics are triumphs of Schelling's post-reductionist thinking. On the basis of their acceptance of statistical mechanics, ignoring its limitations (for instance, in accounting for phase transitions, let alone the dissipative structures examined by Prigogine), they embrace Boltzmann's notion of entropy and equate negative entropy with the notion of information developed by Shannon to analyse the capacity of cables to transmit messages. As noted, combined with cybernetics, this notion of information provides the foundation of information science, and it is claimed to be able to account, along with molecular biology, for life and mind.

These views are not accepted by advanced theoretical biology, however. Jesper Hoffmeyer (1997), in Signs of Meaning in the Universe, essentially a manifesto for biosemiotics based on Peirce's philosophy, pointed out that "form" for the Romans was a mangled version of the Greek "morf" (or "morph"), and "information" meant being formed mentally. Atomistic thinking in the twentieth century led "information" to be understood as isolated chunks of knowledge, and this was taken over by the physicists, who then characterized it as something in the world, independent of anyone, and tried to impose this inverted, desiccated concept of information on all other disciplines. In his later book Biosemiotics, he wrote that "up-to-date biology must acknowledge that the biochemical concept of information is just too impoverished to be of any explanatory use" (Hoffmeyer 2008, 61).
As far as the computational notion of the mind is concerned, as Jerry Fodor (2000) pointed out, the mind does not work that way.

Is it just a matter of choice between rival research traditions? My contention is that it is not. The tradition inspired by Schelling is far more coherent and has proved far more fruitful than the rival reductionist tradition, even when this reductionist tradition utilizes concepts such as fields (in bowdlerized form) inspired by the Schellingian tradition and incorporates the notion of information. The post-Newtonian tradition can make intelligible whatever advances have been made through reductionist approaches in the sciences, while reductionist approaches cannot make sense of what is comprehensible from the anti-reductionist research tradition, including the existence of ourselves as conscious beings.

Reductionism is the dominant tradition because science has been corrupted. Firstly, by those who fund science, who are for the most part only interested in knowledge that facilitates control over nature and people. This is what reductionist science delivers. It is based on controlling situations and modifying components to enable predictions to be made; that is, as Heidegger observed, enframing the world to reveal it as standing reserve to be controlled and exploited. Secondly, it is far easier to develop such science. Following the "scientific method," ultra-specialists add small increments to the bucket of scientific knowledge.
This can be real knowledge, but trivial. Through specialization, these scientists lose contact with other branches of science and with an integral interpretation of the universe, and this is really the negation of science. It has been shown that the scientists who have done most to advance science have been characterized by a wide range of interests which they take very seriously, including interests in the arts and humanities (Root-Bernstein 2015). It is for this reason that they can go beyond established methods and ways of thinking, develop new concepts, and create new methods. Both politicians and ultra-specialists are hostile to such scientists, who are also prone to speak out on matters of public interest and who, by questioning the assumptions of mainstream science, challenge the work and career prospects of these ultra-specialists.

When politicians, business leaders, and managers of universities and research institutions ally themselves with such ultra-specialists and attempt to manage science, demanding quantifiable outputs, science stagnates. It has been strongly argued by a prominent medical researcher, Bruce Charlton (2012), that this is the case with current science. Furthermore, Joseph Ben-David (1971) showed that throughout history, from the Greeks onwards, whenever governments have tried to control science to channel it to serve their interests, even when they have increased support for science, they have destroyed its creativity. For real science to flourish, there has to be an "autonomization" of the "scientific field," as this was characterized by Pierre Bourdieu (2004), so that truth versus falsehood, rather than patronage, usefulness and the ability to get research grants, becomes the basis for competition and for having the conditions to advance knowledge.
This problem is particularly acute with neuroscience where, given the current state of science and its severance from natural philosophy, those most likely to gain research support are the hyperspecialists aligned with reductionist science, unwilling to consider the problematic relationship between their scientific research and the reality of conscious experience, and willing to serve whoever pays them. It is because the scientific field, and more generally the academic field, have been corrupted, and in this corrupt state are imposing a nihilistic world-view that totally devalues life, that developing a code of ethics for neurotechnology is both important and problematic. It is first necessary to have a code of ethics for science.

What is needed of such a code is for all participants in the scientific endeavour to uphold the autonomy of the scientific field as the condition for the flourishing of science. Above all, this involves upholding the quest for truth and the conditions for those who are engaged in this quest, and, to sustain this, reflexivity on the part of scientists about their own enterprise and the conditions for it. The quest for truth should be understood as the quest for a comprehensive understanding of the cosmos, of life and humanity, and of the place of humanity, including science and scientists, in the cosmos, and all specialist inquiry should be related to this quest. What is important is that those accredited as scientists have a deep commitment to advancing our understanding of the world, which requires knowledge of the history of both science and natural philosophy. This involves rejecting the idea that science can be treated as a mere instrument accumulating useful knowledge, and acknowledging that the health of science requires recognition that the scientific field has its own immanent dynamics that must be respected, giving autonomy to scientists who, by virtue of their drive
to comprehend the world, are unpredictable. The value of their work cannot be quantified and managed on the basis of quantifiable indicators. That is, the conditions for autonomous enquiry must be respected and cultivated.

What is central to creating a code of ethics for science is central to all ethics and all professions, and it pinpoints the central problem that has to be overcome. The corruption of science occurs through enframing the scientific community and its individual members as standing reserves to be efficiently exploited, reducing them to nothing but instruments for advancing useful technology. It is through this enframing that those managing science have kept in the ascendancy utterly debased notions of humans as stimulus-response mechanisms or information-processing cyborgs, in place of more philosophically and scientifically defensible notions of humanity. With such enframing, efficiency is the only evaluative criterion by which to judge science and scientists, and ultimately even this is undermined.

The development of neurotechnology brings this dilemma into sharp focus. Human brains are being treated as standing reserves to be efficiently controlled and exploited. However, the advance of science requires that the autonomy of the scientific field and of scientists be respected, so that the field can develop according to its own immanent dynamics. Since the telos inspiring these immanent dynamics is a coherent understanding of the world, including the place of science and scientists in this world, this will involve defending a conception of humans that acknowledges that they too have autonomous dynamics not completely explicable in terms of their environments and constituents. Autonomous science should itself provide, in the conception of humanity it develops and defends, the basis for valuing and defending the autonomy of scientists. What is required above all of a code of ethics is recognition that individuals are autonomous agents.
As Immanuel Kant put it (1959, 47): "Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only," where being an end was associated with having the capacity for autonomy. Accepting this principle, the only acceptable intervention in either the biological or social conditions of people is one that does not damage their autonomy but, if anything, fosters it. And this is especially the case with neurotechnology.

Going back to the Hippocratic Oath and how it should be extended to deal with neurotechnology, the injunction "I will maintain the utmost respect for human life" can be taken to imply respect for people's autonomy. "The health of my patient will be my first consideration" can be taken to imply "The capacity for autonomy of my patient will be my first consideration." Summarizing this to conform to the traditional formulation, "First, do no harm" can be reformulated as "First, do not undermine the capacity of people for autonomy." And what is right in a medical context is right everywhere. Neurotechnology should never be deployed to undermine people's autonomy, and should only be deployed to augment it.
References

Ben-David J (1971) The scientist's role in society: a comparative study. Prentice-Hall, Englewood Cliffs
Bourdieu P (2004) Science of science and reflexivity. University of Chicago Press, Chicago
Charlton BG (2012) Not even trying: the corruption of real science. University of Buckingham Press, Buckingham
Fodor J (2000) The mind doesn't work that way. MIT Press, Cambridge
Gare A (2013) Overcoming the Newtonian paradigm: the unfinished project of theoretical biology from a Schellingian perspective. Progr Biophys Mol Biol 113:5–24
Heidegger M (1977) The question concerning technology and other essays (trans: Lovitt W). Harper Torchbooks, New York
Heidegger M (1978) Letter on humanism (trans: Capuzzi FA). In: Farrell Krell D (ed) Martin Heidegger: basic writings. Routledge & Kegan Paul, London
Ho MW (2004) The rainbow and the worm: the physics of organisms, 3rd edn. World Scientific, Singapore
Hoffmeyer J (1997) Signs of meaning in the universe. Indiana University Press, Bloomington
Hoffmeyer J (2008) Biosemiotics. An examination into the signs of life and the life of signs. The University of Chicago Press, Chicago
Hume D (1953) An inquiry concerning human understanding. Bobbs-Merrill, Indianapolis
Kant I (1959) Foundations of the metaphysics of morals (trans: White Beck L). Bobbs-Merrill, Indianapolis
Kant I (2005) Introduction to logic (trans: Abbott TK). Barnes & Noble, New York
Prigogine I, Stengers I (1984) Order out of chaos: man's new dialogue with nature. Bantam Books, Toronto
Root-Bernstein R (2015) Arts and crafts as adjuncts to STEM education to foster creativity in gifted and talented students. Asia Pacific Educ Rev 16(2):203–212
Vitiello G (2002) My double unveiled: the dissipative quantum model. John Benjamins, Amsterdam
Chapter 5
Free Will and Autonomy in the Age of Neurotechnologies

Andrea Lavazza
Abstract  Neurotechnologies—neuroimaging, brain stimulation, neuroprosthetics, brain-computer interfaces, optogenetics—are fairly new, but they are likely to progress rapidly and become increasingly widespread. These technologies are being developed primarily for medical purposes; however, applications in other fields are already under way. Intervening in brain functioning can have major implications for our free will and our autonomy. However, the effects are not unequivocal: some techniques and some of their uses may increase our freedom, while other techniques and other uses may limit it. In this chapter, I first propose an operationalised definition of free will and autonomy to frame the discussion. Second, I briefly present some neurotechnologies and their current and future uses. Finally, I focus on the ethical, social and individual welfare evaluation of some neurotechnologies and their possible uses with regard to free will and autonomy.

Keywords  Control · Neurostimulation · Optogenetics · Memory · Politics
5.1 Introduction

In general, people seemingly want to be free and autonomous. The rights related to our freedom and autonomy are regarded as most valuable, and restrictions or violations of them are considered extremely serious. But we do not all, and not always, want to enjoy certain kinds of freedom, for example when it involves great responsibility and we do not know how to make the best decision. Many people also value being autonomous from external influence, but sometimes external influences can help us, such as when psychotherapy, or even simply a book, helps improve our character. In a nutshell, this is the picture of what has been and still is happening in human history. Floridi has suggested that the advent of digital technology and ICTs has changed our human condition to the point that we can speak of hyperhistory, after prehistory (characterised by oral communication) and history (characterised by writing and then printing) (Floridi 2012).

From the point of view of neuroscience and neurotechnology in relation to concepts such as agency, free will, autonomy, identity, privacy, and authenticity, the situation may be similar. We may indeed be on the verge of entering a new era. Not only are there more powerful tools at our disposal than in the past, but these tools can radically transform the concepts I have just mentioned and the way we experience them.

This chapter is structured as follows. In Sect. 5.2, I will introduce some definitions of free will and autonomy. I do not claim here to exhaust the enormous debate on what free will and autonomy are taken to be; the definitions I will propose will only serve to guide the following discussion. In Sect. 5.3, I will very briefly describe some of the different kinds of technologies that can be subsumed under the neuro-prefix and that are likely to have an impact on free will and autonomy. Finally, in Sect. 5.4, I will outline three scenarios. In the first, freedom and autonomy are increased through the use of neurotechnologies. In the second scenario, I will describe uses of neurotechnologies that may threaten free will and autonomy. In the third scenario, I will discuss situations in which our freedom and autonomy appear to be only limited. Not every increase or decrease in personal freedom and autonomy is necessarily positive or negative, and this new interweaving of different moral outcomes of potential uses of neurotechnologies complicates their evaluation and possible regulation. In the conclusions, I try to bring together what I have said and describe a scenario that might materialise in the not-too-distant future.

A. Lavazza, Centro Universitario Internazionale, Arezzo, Italy; University of Pavia, Pavia, Italy
5.2 Free Will and Autonomy: Now and in the Future

Free will can be roughly identified with three conditions (Walter 2001). The first is the “ability to do otherwise.” To be free, one needs to have at least two alternatives or possible courses of action available. If one has an involuntary and irresistible spasm of the mouth, for example, one is not in a position to choose whether to twist one’s mouth or not. In other cases, although there is only one possibility, for example there is only a sandwich for lunch, one can still decide either to eat it or not. The second condition is “control over one’s choices.” The individual who acts must be the same one who decides what to do. To be granted free will, one must be the author of one’s choices, without the interference of people and mechanisms outside of one’s reach. This is what is called agency, that is, being and feeling like the “owner” of one’s decisions and actions. The third condition is “responsiveness to reasons”: a decision cannot be free if it is the effect of a random choice; it must be rationally motivated. If I roll dice to decide whom to marry, my choice cannot be said to be free, even though I will freely choose to say, “I do.” On the contrary, if I choose to marry a specific person for their values and my deep love for them, then my decision will be free (Lavazza 2016, 2017). Thus defined, free will is a kind of freedom that we are willing to attribute to all human beings as a default condition. Obviously, there are exceptions: for example,
people suffering from mental illness and people under the influence of psychotropic substances (Levy 2013). Also, it is common knowledge that the conditions of “ability to do otherwise,” “control” and “responsiveness to reasons” are very rarely at work all at once. Nevertheless, the attribution of free will does not imply that all decisions are always taken in full freedom, as outlined by the three conditions illustrated above. We often act on impulse, against our interests, or without being fully aware of what we are doing. But this does not imply that we are not potentially able to act freely. Ethics and law have incorporated these notions, adopting the belief that people are usually free to act or not to act in a certain way and that, as a result, they are responsible for what they do, albeit with the exceptions just mentioned (Lavazza 2016). The notion of autonomy “is generally understood to refer to the capacity to be one’s own person, to live one’s life according to reasons and motives that are taken as one’s own and not as the product of manipulative or distorting external forces” (Christman 2015, 1). Autonomous people are able to make decisions without interference from others, and to act in the light of their own wishes and aims without external constraints. Autonomy also involves being capable of choosing what to believe in, as well as of evaluating the reasons for and against a given course of action. Another crucial aspect of autonomy is therefore rational reflection, by which the subject can assess common norms of behavior and beliefs and choose which ones to adopt and which to reject. In other words, autonomy implies critical thinking, as a result of which one chooses to be oneself, according to one’s beliefs, wishes, and character. Autonomy as the capacity to be one’s own person is linked to the concepts of “true self” and authenticity, which are equally complex and controversial.
While “true self” may seem more an imprecise construct of common sense than a precise concept, authenticity can be defined as the consistency between the choices made by the individual and their identity (at any given time), or at least those identity components relevant to the choice in question, along the lines of the second-order identification with one’s own desires proposed by Frankfurt (1971) (Lavazza and Reichlin 2018; Lavazza 2019a). This means that autonomy cannot be given as a static condition, but must be referred to the subject at a specific time and under specific conditions. In fact, the context always somewhat influences one’s personal development and the traits, wishes and aims that characterize a person at a given time. To operationalize the concepts of free will and autonomy, and thus make them usable when referring to neurotechnology, one can resort to a unified idea based on control and the cognitive repertoire. One could say that those who are “freer” are better able to control and direct their behavior. Those whose evaluative processes are disordered, by contrast, are not able to match a given stimulus with a congruent action and internal state. It is this capacity that allows “sane” people (or those who are “freer”) to block, for example, utilization behavior, and also to stop harmful behaviors by implementing behavior characterized by greater self-control, a feature of free will. This self-control is the exercise of willpower over behavior. Thus, self-control can be defined as the capacity to override individual impulses and automatic or habitual responses, or as the ability of higher-order psychological functions to modulate the activity of lower functions (Lavazza and Inglese 2015).
Control pertaining to autonomy is instead the control exercised over manipulative or distorting external forces, in all situations that can affect the ability to be oneself and to decide without anyone interfering with one’s choices. The operationalized concept of free will and autonomy thus combines the condition of control (internal for free will, external for autonomy) with the idea of mental openness, operationalized by creativity, and with the repertoire of personal knowledge and experience that influences the degree of free will and autonomy. This concept recalls the general idea of alternative possibilities that lies at the heart of the traditional conception of free will. In fact, those with little knowledge of a domain of the world and little ability to see new solutions also have less “freedom” and less capacity to resist manipulative and concealed influences.
5.3 Neurotechnologies that Affect Free Will and Autonomy

Keeping all the above in mind, one can consider the technologies and devices (already available or under study) that are seemingly able to affect freedom and autonomy. Since the aim of my discussion is to provide conceptual and normative tools, and not to make a complete review of neurotechnologies, I will focus only on some of these devices, using them as general examples. Firstly, we have to consider neuroimaging tools when applied to so-called thought apprehension, that is, what was initially called “mind reading.” We cannot have direct access to mental states, but we can infer them from the analysis of their neuronal correlates (I will not be able to go into the crucial issue of the relationship between mental states and brain states here). As is well known, some experiments have made it possible to train software to recognise from brain activations, for example, what objects a volunteer was looking at (Kay et al. 2008). More recent studies have gone even further. Mason and Just (2016), for example, used fMRI to assess neural representations of physical concepts (momentum, energy, etc.) in students majoring in physics or engineering. The goal was to identify the underlying neural dimensions of these representations. A different study focused on the alterations in the neural representations of concepts related to death and life in people who engage in suicidal ideation. The study by Just et al. (2017) used machine-learning algorithms to identify such individuals (17 suicidal ideators versus 17 controls) with high (91%) accuracy, based on their altered neural signatures of death-related and life-related concepts. The most discriminating concepts were “death,” “cruelty,” “trouble,” “carefree,” “good,” and “praise.” A similar classification accurately (94%) discriminated nine suicidal ideators who had made a suicide attempt from eight who had not.
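In schematic terms, the logic of such group classification can be illustrated with a leave-one-out nearest-centroid classifier over synthetic feature vectors. The sketch below is purely illustrative: the data are invented, and the actual studies relied on far richer fMRI features and different machine-learning algorithms.

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

def loo_nearest_centroid(samples, labels):
    """Leave-one-out cross-validation: classify each sample by the
    nearest class centroid computed from all *other* samples."""
    correct = 0
    for i, (s, lab) in enumerate(zip(samples, labels)):
        rest = [(x, l) for j, (x, l) in enumerate(zip(samples, labels)) if j != i]
        cents = {cls: centroid([x for x, l in rest if l == cls])
                 for cls in set(labels)}
        pred = min(cents, key=lambda c: euclidean(s, cents[c]))
        correct += (pred == lab)
    return correct / len(samples)

# Synthetic "activation patterns" for two groups of 17 participants each,
# drawn around slightly shifted means (a toy stand-in for fMRI features).
random.seed(0)
group_a = [[random.gauss(0.0, 1.0) for _ in range(6)] for _ in range(17)]
group_b = [[random.gauss(1.5, 1.0) for _ in range(6)] for _ in range(17)]
samples = group_a + group_b
labels = ["control"] * 17 + ["ideator"] * 17

accuracy = loo_nearest_centroid(samples, labels)
print(round(accuracy, 2))
```

With well-separated synthetic groups, the leave-one-out accuracy is high; the point is only to show how “neural signatures” can, in principle, sort individuals into groups.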
Moreover, a major facet of the concept alterations was the evoked emotion, whose neural signature served as an alternative basis for accurate (85%) group classification. Recently, it has been shown that it is possible to predict the freely chosen content of voluntary imagery from prior neural signals. Koenig-Robert and Pearson (2019) showed that the content and strength of future voluntary imagery can be decoded
from activity patterns in visual and frontal areas long before participants engage in voluntary imagery. Participants chose freely which of two images to imagine. Using fMRI and multi-voxel pattern analysis, the authors decoded imagery content in visual, frontal, and subcortical areas as many as 11 seconds before the voluntary decision was made. Activity patterns in the primary visual cortex (V1) prior to the decision also predicted future imagery vividness. All this indicates that our mind can become transparent to neurotechnologies, with outcomes that can be detrimental to our freedom but also beneficial in terms of preventing damage to the person, as in the case of suicidal tendencies. In the next section, we will see that these new abilities to act on the brain cannot be assessed in binary terms, as simply good or bad, useful or dysfunctional. Instead, specific uses and situations must be considered within a precise value framework. Another booming set of technologies is that of neural interfaces (NIs), defined by the Royal Society as devices placed in or on the brain or other parts of the nervous system (Royal Society 2019). They are often connected to and operated by a computer, so they can be described as brain-computer interfaces (BCIs). NIs are commonly held (UK Parliament 2020) to be able to (a) stimulate a region of the nervous system, for example by applying electric or magnetic pulses (neurostimulation) (Edwards et al. 2017). Deep brain stimulation is achieved by implanting electrodes deep in the brain to deliver electrical pulses, so as to alleviate the symptoms of movement disorders such as Parkinson’s disease or Tourette syndrome; recently, patients with severe opioid addiction have also been given brain implants to help reduce their cravings. NIs can also (b) record and interpret nervous system activity.
For example, auditory stimulus reconstruction is a technique that finds the best approximation of the acoustic stimulus from the population of evoked neural activity. Reconstructing speech from the human auditory cortex creates the possibility of a speech neuroprosthetic that establishes direct communication with the brain, and has been shown to be possible in both overt and covert conditions (Akbari et al. 2019). Finally, NIs can (c) both record and stimulate part of the nervous system. One stroke rehabilitation device under development, for example, aims to detect when a patient intends to move their arm and then stimulates the muscle to assist movement (Selfslagh et al. 2019). Brain-computer interfaces are also being used outside the medical field, in many cases with commercial applications aimed at consumers who can purchase and use the devices themselves. In the field of entertainment, non-invasive BCI headsets to play computer games are already on the market: users can control actions within the game via thought. In the field of training and so-called neuromanagement, some companies are beginning to use neurofeedback, a non-invasive technique that aims to intervene at the neurocognitive level, enabling the self-modulation of some functions of the central nervous system through information derived from an electroencephalogram and visually processed by a computer. With this technique, workers could increase their concentration and control of emotions (Zoefel et al. 2011). Hyperscanning, for its part, is a technique that allows one to estimate the strength of coupling/connectivity between brains, so as to synchronize the activations of people working together (Balconi and Vanutelli 2017).
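As a toy illustration of the neurofeedback principle just mentioned, the sketch below derives a single “feedback” value from the relative alpha-band power of a signal. Everything here (band limits, sampling rate, the signal itself) is invented; real systems use calibrated EEG hardware and far more sophisticated signal processing.

```python
import math

def band_power(signal, fs, lo, hi):
    """Naive DFT power of `signal` within the [lo, hi] Hz band (toy code)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def feedback_value(signal, fs=128):
    """Map relative alpha power (8-12 Hz) to a 0..1 'feedback bar' value
    that could be displayed back to the user."""
    alpha = band_power(signal, fs, 8, 12)
    total = band_power(signal, fs, 1, 40)
    return alpha / total if total > 0 else 0.0

# Toy "EEG": one second of a pure 10 Hz alpha oscillation sampled at 128 Hz.
fs = 128
eeg = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(round(feedback_value(eeg, fs), 2))  # prints 1.0: the toy signal is pure alpha
```

In an actual neurofeedback loop this value would be recomputed continuously and shown to the user, who learns, by trial and error, to push it in the desired direction.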
Devices for transcranial direct current stimulation (tDCS) are also already available for purchase. Transcranial direct current stimulation is a widely used tool in which a small constant direct current is applied through at least two electrodes positioned on the scalp, depending on the targeted brain regions. Stimulation changes the excitability of neurons in the brain by hyperpolarizing or depolarizing their membrane potential. tDCS causes cortical changes both during and after the stimulation. So far, it has been mainly used on healthy subjects (in the laboratory) to enhance mathematical cognition, reading, memory, mood, learning, perception, decision making, creativity, motivation, and moral reasoning. But it seems that tDCS also enhances motor skills and can be profitably used in the field of sports, especially on professional athletes (Lavazza 2019c). In the field of marketing, neuroimaging is used to collect data from the brain to gain insight into consumer decision-making. It is in fact possible to see people’s immediate automatic reaction to a product or an advertising campaign (Fortunato et al. 2014). Even those who organize electoral campaigns are beginning to use these tools, which allow them to assess the implicit popularity of candidates, that is, people’s responses as not mediated by cognitive appraisal. The data published so far seem to show that the effect of the first visual impression is very strong and is not easily changed by what the candidate later says and does. This use of neuroimaging could prove risky in that it could induce a party to choose candidates who elicit positive responses in voters even if they are not capable and competent. With regard to the defense sector, the armed forces of some countries are investigating how BCIs can enhance different cognitive abilities, including decision-making and sensory processing.
In the US, DARPA is funding an extensive programme of NI projects whose potential applications could include control of unmanned aerial vehicles or cyber-defense systems (Krishnan 2016). Brain-computer interfaces are also the object of heavy investment by major companies in the field of digital technologies and communication such as Facebook, which is supporting research into creating a headset that can transcribe words from thought to the device at a rate of 100 words per minute. Neuralink has recently applied to start human trials in the US, inserting electrodes into the brains of patients with paralysis. Elon Musk said that, apart from treating neural conditions such as Parkinson’s, he hopes that Neuralink could one day facilitate a “symbiosis” between humans and AI. He also announced in 2020 that the company had successfully got a monkey to “control a computer with its brain,” and that Neuralink hopes to start human testing very soon. In this vein, while the idea of a network of brains directly communicating via brain-to-brain interfaces (BBIs) sounds like science fiction to some, it actually is not. BBIs allow for technology-mediated direct communication between two brains without involving the peripheral nervous system. They consist of two components: a brain-computer interface (BCI) that detects neural signals from one brain and translates them into computer commands, and a computer-brain interface (CBI) that delivers computer commands to another brain (Hildt 2019). Jiang et al. (2019, 1) presented the first multi-person non-invasive direct BBI, in which three persons used an interface called BrainNet to collaboratively solve a task resembling a Tetris game.
The authors state: “Our results raise the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a ‘social network’ of connected brains.” Recent developments suggest that “a stable, secure, real-time system may allow for interfacing the cloud with the human brain. One promising strategy for enabling such a system, denoted as a ‘human brain/cloud interface’ (‘B/CI’), would be based on technologies referred to as ‘neuralnanorobotics.’” The authors wrote that “a specialized application might be the capacity to engage in fully immersive experiential/sensory experiences, including what is referred to here as ‘transparent shadowing’ (TS). Through TS, individuals might experience episodic segments of the lives of other willing participants (locally or remote)” (Martins et al. 2019, 1–2). Finally, a big revolution may soon come from optogenetics. Optogenetic systems involve the genetic modification of neurons through the insertion of opsin genes via viral infection, and the implantation of optical fibers with concomitant lesion of neural structures. The inserted opsins are then utilized as mediators to regulate the flow of electrically charged ions across membranes in response to light pulses. Placed in specific cell types and neuronal pathways, their expression enables optogenetics to activate or reversibly silence (inhibit) specific neural circuits. Tests conducted so far in animal models indicate that the applications of optogenetics are very promising and could enable fine modulation of brain activity, both for clinical purposes and to improve the brain’s capabilities in different domains. Through this kind of intervention, optogenetics has been used, for example, to reversibly deactivate specific memories (Nabavi et al. 2014), change their valence (Ramirez et al. 2013; Ryan et al. 2015), and more (Adamczyk and Zawadzki 2020; Josselyn and Tonegawa 2020).
And in the not-too-distant future, once human safety concerns about viral vectors and neuron engineering are fully overcome, it may also become possible to activate or deactivate circuits linked to specific behaviors.
5.4 Assessing and Regulating Neurotechnologies

Given the definitions I have provided of free will and autonomy, the main types of neurotechnologies currently available and their potential developments, and the uses that can be made of them by different subjects, philosophical and ethical reflection has to provide categories, concepts, and criteria for a framework in which one might assess these technologies and their implications. Any regulation of use is more properly the responsibility of the relevant governmental institutions. Here, however, on the basis of ethical evaluations, I will try to hypothesize some general lines for the possible regulation of neurotechnologies to the extent that they can affect the free will and autonomy of human beings.
5.4.1 Criteria for Distinguishing and Evaluating Neurotechnologies

Two criteria can be identified in order to draw a distinction between new technologies that may potentially be subject to regulation, due to their ability to deeply affect human beings, and technologies that prima facie seem to be more neutral (even if the alleged neutrality of technologies is a controversial thesis, which I cannot deal with here). The first criterion distinguishes neurotechnologies which bypass conscious control from those which do not. Examples of the former are deep stimulation techniques such as DBS or the optogenetic activation of some neuronal circuits. Examples of the latter are prostheses or interfaces that are consciously controlled by the subject, such as implants for people with spinal injuries. The second criterion concerns the invasiveness of neurotechnologies, where invasiveness means access without the subject’s control. In this sense, tools for “thought apprehension” read brain states and the corresponding mental states without limiting their detection to specific contents. Another kind of invasiveness—that which crosses a border, so to speak, and enters a protected territory—characterizes permanent neuroprostheses. Some neurotechnologies such as tDCS are instead called non-invasive insofar as they act from the outside on specific brain areas (Non-Invasive Brain Stimulation, NIBS). However, as emerges from a more careful analysis, both criteria come with exceptions or counterexamples. As for the conscious control criterion, it is known that other modes of non-conscious conditioning can also be implemented. These depend on the acknowledgement that our freedom and autonomy are not as robust as we tend to think. So-called situationism appeals to the simple structured observation of people’s ordinary behavior in real-life contexts, and endorses a frail control hypothesis (Doris 2002; Appiah 2008) about human behavior.
The claim is that human choices and decisions are conditioned by external and situational factors which elicit a response without us realizing that such factors are relevant or that they affect our behavior. Our actions—according to situationism—are “automatic” consequences of environmental factors: they are not the result of the voluntary control exercised by the agent over their intentions and decisions (Lavazza 2019b). Our habits, character, and goals, which we believe to be the reasons for our choices, are actually less important than the minor contingencies we come across every day. In other words, external factors play a bigger role than internal factors linked to the agent, thereby reversing the traditional view of freedom as an endowment of the subject. Of course, the influence of external factors is mediated by transient internal states. In this vein, if you help someone after winning the lottery, this is most likely due to your good mood rather than a conscious choice (Isen and Levin 1972; Baron 1997). Concerning the invasiveness criterion, it is not so clear how the invasiveness of treatments should be measured. This is a topic that even medicine deals with only with difficulty, especially when it comes to deciding which treatments in end-of-life situations can be continued or suspended. In fact, in medicine, the potential for invasiveness refers to the possibility that an examination, for example, ends up further compromising the health of the subject, instead of helping the doctor to improve it. As for the neurotechnologies that I am considering here, the problem of safety is taken for granted. Rather, the issue is that of the long-term effects on personality of clinical treatments such as DBS. This topic is the subject of a very complex debate that is far from being resolved (Erler 2019). Even the definition of some neurotechnologies as non-invasive can be contested, given that the long-term effects of stimulation with TMS and tDCS have not yet been tested and may be more widespread and pervasive than is currently believed. In the light of this difficulty in drawing precise dividing lines, it may be useful to consider specific neurotechnologies and specific uses of such techniques, grouped under more general categories in which new neurotechnologies can affect our free will and autonomy. I will also consider the well-being of the person and the ethical categories that may be specifically relevant.
5.4.2 Can We Be More Free and Autonomous with Enhanced Brains?

Substance addiction is often a dysfunctional and harmful behavior for the person, which lies outside the subject’s control and can be compared to a disease. In this sense, the use of deep brain stimulation to act on limbic areas and limit craving is a therapeutic strategy that can significantly increase the well-being of a patient by strengthening their free will. The reason is that deep stimulation weakens the individual’s craving for the substance and therefore gives them more internal control. External control, which characterizes autonomy, is also improved, because the patient’s social behavior is no longer so strongly conditioned towards satisfying the impulses induced by addiction. Neuroprostheses and brain-computer interfaces have the potential to increase the subject’s autonomy (Schaefer et al. 2014; Levy 2014). Firstly, they can do so by compensating for deficits, be they innate or arising from pathological or traumatic causes. Secondly, when they are aimed at a form of physical or cognitive enhancement, they can allow the subject to achieve their free wishes and goals more effectively. The extension of the mind to devices (Clark and Chalmers 1998), however, has a series of consequences concerning threats to privacy and identity, which can in turn affect the conditions for the full exercise of free will and autonomy. I cannot address these issues here, as they are the subject of other chapters in this book. In the following subsections, however, I will consider some risks related to stretching the boundaries of the mind/brain thanks to neurotechnology. Both enhancing our brain performance and connecting our brains to digital tools can increase our freedom and autonomy by giving us a wider repertoire of possibilities
to choose from. In this sense, not only would we have greater control over our behavior but we would also increase the number of alternatives available to us, both on a practical and a cognitive level. This can happen in the workplace as well as in the entertainment field. It is more difficult, instead, to assess the impact of neurotechnologies in the context of interpersonal relationships, where an increase in opportunities is not necessarily synonymous with greater well-being, satisfaction, and more functional bonds. This is because our brain, and therefore our behavior, is the result of a long evolutionary process of adaptation. Changes that are too rapid could thus interfere with a balance that has been achieved over a very long time. This does not mean that change cannot or should not happen; rather, it means that we should proceed with caution. A good example in this sense comes from the debate around potential love-enhancing drugs capable of “making people fall out of love” (Earp and Savulescu 2020) and from so-called affective BCIs (Steinert and Friedrich 2020).
5.4.3 Technologies that Might Threaten Freedom and Autonomy

In closed-loop brain stimulation, a biomarker or neuronal pattern signals the probable occurrence of a specific brain activation and automatically activates the stimulator or induces the immediate administration of a drug to prevent it. Such a technique can be deemed useful for increasing the well-being of the individual by improving their control over states that are generally considered pathological or dysfunctional (Bouthour et al. 2019). However, consider a case already mentioned in the literature (Klein et al. 2016). An individual suffering from depression has a device implanted for closed-loop DBS. An electrode placed in the limbic areas signals the onset of neurochemical imbalances (deficiency of a neurotransmitter or neuromodulator) that are a prelude to an episode of major depression. In such cases, a stimulation is automatically activated that counteracts the imbalance and restores a euphoric mood. But imagine that the individual in question is depressed over the sudden death of a close friend. He will go to his friend’s funeral and appear happy and not at all in pain, creating the impression that he is not sad about his friend’s death, which is obviously not true. This effect, due to a technically appropriate functioning of the neurotechnology, shows the potential threats to autonomy coming from this type of treatment, to which we would entrust the maintenance of physiological and psychological homeostasis beyond our control (Lavazza 2018). The same goes for techniques that can intervene in order to limit or guide behaviors that are not necessarily pathological. Think for example of optogenetic technologies that can reduce fits of anger or episodes of aggression. In general, the conscious modulation of our behavior in accordance with the demands of the situation in which we find ourselves is the sign of good personal autonomy. However, stress, fatigue,
5 Free Will and Autonomy in the Age of Neurotechnologies
or extreme environmental or social conditions can lead to choices and decisions that do not reflect our true intentions. It may therefore seem like a positive option to use an automatic backstop mechanism. But in some cases we do need aggression: for instance, to stop a crime or prevent an injustice. And the decision about when this should be done rests with the freedom and moral autonomy of the subject, which optogenetic neurodevices could seriously distort. In this case, neurotechnologies would not guarantee the flexibility necessary to preserve free will and autonomy. Neurodevices could also be used to regulate the propensity to take risks and the psychological profile of sensation-seeking, which is connected to potentially dysfunctional behaviors, especially in young people (Rosenbloom 2003). In this case, it should be emphasized that risk-seeking is a quality that often contributes to progress, thanks to individuals who are willing to embark on new paths that the majority of the population would not have the courage to take. Think of explorers such as Christopher Columbus, or of visionary inventors or entrepreneurs willing to endanger their lives, their reputation, or their wealth in endeavours that many people have benefited from. This leads me to the topic of the individual or socially imposed use of some neurotechnological tools. As for individual use, any treatment should be administered only with the informed consent of the person requesting it. In the case of new neurotechnologies, it is extremely important that informed consent cover all the possible consequences of using these technologies. It will not be enough to provide information on safety risks; it will be necessary to make an effort to provide a neutral account of some of the scenarios outlined above, as well as others that may emerge. These precautions will allow for a truly free and autonomous choice on the part of the individual.
As for the socially imposed use of neurotechnologies on the part of the state or other organizations, for the purpose of social protection or efficiency in work or school environments, there are many considerations to make. Let’s look at three possible scenarios.

(1) As far as social protection is concerned, for some time now there has been talk of neurointerventions (Ryberg 2019) as tools for the control and rehabilitation of offenders convicted of serious crimes. In those cases, it is assumed that wrongdoers are offered a choice between a prison sentence and alternative forms of mandatory treatment. These new treatments would, for example, limit sexual attraction to children or sadistic behaviors. In both situations there is a loss of freedom and autonomy: in prison one loses the freedom of movement and association, whereas with (future) neurointerventions one loses control over at least part of one’s brain/mental states. It can be argued that a person’s mental/cerebral states are what characterizes them the most and therefore deserve greater protection than other forms of freedom. On the other hand, the suppression of a few tendencies which are almost unanimously viewed as criminal could improve rather than decrease the autonomy of an individual (Douglas 2014). Two considerations seem crucial here. In the first place, we should be certain that these neurointerventions are really limited to criminal tendencies and do not end up modifying the overall psychological profile of the person treated. This outcome, in fact, should generally be considered morally unacceptable. Secondly, the offender should always have the possibility to choose between a prison sentence and neurointerventions, and be correctly informed of the consequences of the latter. The availability of neurotechnologies of the type described above could also lead governments to introduce forms of crime prevention: these would be possible through neurointerventions on individuals who are deemed dangerous but are not yet guilty of specific crimes (Shaw et al. 2019). In states that do not fully respect human rights, such neurointerventions could also be conducted on political opponents of the regime or on people considered in some way dangerous for the ruling government. In all these cases, forms of preventive regulation with respect to neurodevices could be studied and put into practice (for more on this, see Sect. 5.4.6).

(2) In the field of education, neurotechnological tools that can increase concentration or learning skills could be encouraged or even imposed, if considered non-invasive. This cognitive enhancement could be taken to be positive as regards freedom, as it would allow a greater number of young people to obtain a degree, which in turn would translate into greater autonomy. However, these tools would bypass the conscious control of young people, as they would directly affect their brain functioning without their participation in terms of effort and exercise. For this reason, such neurodevices should be subjected to a rigorous ethical evaluation. In fact, improved receptivity to specific contents can decrease rather than increase personal autonomy, as it can weaken critical thinking and the ability to judge the relative importance of the contents learned. These are still far-fetched scenarios, although the spread of do-it-yourself (DIY) tDCS could accelerate the need to evaluate the aggregate effects of the use of these tools (see Sect. 5.4.5).

(3) In the field of companies and organizations, new neurotechnologies seem to be candidates for widespread use due to the benefits they could bring in training and performance in almost all areas of work. This means that there may be strong implicit pressure for all workers to agree to use neurotechnology, even if the choice remains technically free and no one is forced to make it. In this case, the pressure for ever better performance should be made compatible with people’s decision not to use tools that bypass their conscious control or are deemed invasive, without this choice leading to forms of discrimination.
Society as a whole could also benefit from a massive use of neurotechnology in the workplace, with better services in many fields (Lavazza 2019d). This overall improvement in the quality of life of a community would only favour the freedom and autonomy of its members. However, empowering workers with neurotechnological tools could lead to an excessive focus on work at the expense of other forms of self-realization. In this sense, future devices may not be neutral means to improve one’s faculties at the service of any purpose but may rather be ways to direct the individual in a specific direction, namely that of work, thereby significantly reducing their freedom and autonomy.
5.4.4 Neurotechnologies that Might Limit Freedom and Autonomy

Although, as previously mentioned, there does not seem to be a univocal criterion for discriminating between positive and negative neurotechnologies in absolute terms, it is certainly possible to identify some that generally cause, albeit unintentionally, a reduction of freedom and autonomy. Some uses of neuromarketing fall into this group. As is known, this technique exploits the analysis of the brain activations aroused by a product or an advertising campaign presented to several subjects in order to infer their immediate, automatic, and profound reactions, unmediated by rational reflection. Data of this type, combined with further knowledge of the functioning of the brain—for example, the strength of the visual and olfactory sensory channels—are exploited to increase the attractiveness of specific items in the eyes of the consumer. As situationism has shown, we are constantly subjected to unconscious conditioning. Neuromarketing, in this sense, is a strategy aimed at a commercial purpose of which the consumer is unaware. The average citizen enjoys a certain degree of autonomy and has the tools to choose freely, even though neuromarketing can employ more powerful tools than in the past. However, the brains of younger people (up to 22–23 years of age) are still developing—particularly the prefrontal areas, which are responsible for impulse control (Mills et al. 2014). Faced with an advertisement that makes use of the most advanced neurotechnologies, children have less opportunity to exercise critical judgment and can therefore be strongly conditioned. The exposure of young people to the pervasive marketing of unhealthy products (e.g., junk food) is one of the main risk factors for childhood obesity, which has become a dangerous “epidemic” in today’s world, in addition to being a cause of disease and social inequalities.
Foods and drinks rich in sugar and calories are advertised to children more than any other type of product, and in a far greater proportion than in advertising aimed at adults. In this sense, these forms of neuromarketing reduce the freedom and autonomy of young people, not only in terms of individual purchase choices, but also in the long term, due to the damage caused by a dysfunctional diet (WHO 2016). Political marketing based on new neuroscientific knowledge also risks severely limiting the autonomy of voters. Numerous studies indicate that exposure to a politician’s face is enough for the average voter to make a positive or negative judgment on the candidate—a judgment that is not easily changed by what the candidate actually says or does (Todorov 2017). This finding can lead political parties to choose their candidates on the basis of people’s responses to their facial features rather than their competence and skill. In this way, citizens’ freedom and autonomy would be severely limited, as they would no longer be called upon to choose between different political programs but between “faces” that are presumed to inspire positive feelings. Finally, the network of brains directly communicating via brain-to-brain interfaces also appears to be a threat to the freedom of individuals. In this case, the “fusion” of brains seems to herald a new type of cognitive entity that exploits the combined
intelligence of the connected brains but seems destined to erase their individuality, that is, the unique identities that underpin the exercise of free will and autonomy. Of course, these tools have yet to be implemented on an operational level. The scenarios considered, however, tell us that even the most advanced BCIs could involve the prevalence of the “silicon component,” so to speak, over the living one, with an imbalance to the detriment of the subject’s freedom and autonomy. Lastly, one should consider the cases in which there is an explicit trade-off in the use of medical neurotechnologies, or in which there are unavoidable side effects. The use of deep brain stimulation for the treatment of Parkinson’s disease can significantly improve the patient’s physical condition by reducing their tremor, but it can also cause uninhibited sexual behavior and compulsive gambling (Smeding et al. 2007). In these situations, two different aspects of the patient’s autonomy are at stake. On the one hand, there is the liberation from heavy physical limitations; on the other, the loss of control over basic impulses that the individual was able to keep in check before the deep brain stimulation. The choice is not always easy, and in any case there is a loss of autonomy that is not compensated for by the gain on the other side of the trade-off.
5.4.5 Aggregate Effects

There are many areas in which individual decisions produce unforeseeable and negative aggregate effects, even if the individual decisions as such are considered lawful and neutral. Take, for example, the sex selection of children in China, which caused strong imbalances between the sexes as well as significant social and economic problems. In the field of neurotechnology, an undesirable aggregate effect could occur with the use of cognitive enhancement in training and professional environments. If all professionals in a certain field make use of enhancement, those who occupy the highest positions will effectively depend on it (pharmacological, tDCS, ...). If the drug or device in question were unavailable for some reason, the result would be a sudden drop in performance that could not be easily remedied. The massive use of neurotechnologies, even if licit and neutral from the individual point of view, may have undesirable aggregate effects as people progressively delegate a greater part of their decisions to the technology at hand, be it a BCI, a neuroprosthesis, or a collective entity. The increased reliability of machines in supporting individual choices may translate into society’s inability to make decisions about general trends in technological development or economic and distributional arrangements. The increasing role of neurotechnologies, and the stronger reliance on them, could lead to a loss of alternatives in shaping ways of living and relating between people.
5.4.6 Criteria and Norms

Invasive neurotechnologies currently involve procedures that can only be performed in medical facilities or under medical supervision; therefore, they require informed consent. However, the patient should be informed not only about the safety aspects, but also about the privacy issues and all the implications of using these devices. With regard to all other devices, one is free to use those that are available on the market according to the regulations that protect the health and safety of consumers. The most important factor in safeguarding the free will and autonomy of individuals is that there should be no coercion in place and that everyone should be able to escape potential social or organizational pressures, as in the case mentioned above of companies making greater use of neurotechnologies. To this end, it might be useful to elaborate on some principles I proposed elsewhere regarding neuroprosthetics, in terms of the protection of mental integrity. Mental integrity should be understood as the individual’s control over their mental states and brain data, so that, without their consent, no one can read, spread, or alter such states and data in order to condition the individual in any way. What I proposed is a functional limitation that should be incorporated into any device capable of interfering with mental integrity. Specifically, neurotechnologies should (a) incorporate systems that can find and signal the unauthorized detection, alteration, and diffusion of brain data (and functioning); and (b) be able to stop any unauthorized detection, alteration, and diffusion of brain data (and functioning). This should not only concern individual devices, but act as a general (technical) operating principle shared by all interconnected systems that deal with decoding brain activity (Lavazza 2018; Inglese and Lavazza 2021).
The idea is that the devices themselves should be required by law to incorporate protection systems relating to their specific functioning and use. The goal would be twofold: to make the user always aware of what is happening when interacting with neurotechnological instruments, and to prevent the latter from limiting the user’s freedom and autonomy without their knowledge. This principle calls for thorough reflection on neurotechnologies, their application and implications, so as to create awareness among the public. This will in turn enable policy-makers to translate this awareness into relevant rules and laws.
5.5 Conclusions

Neurotechnologies will soon become increasingly important in everyone’s lives, in addition to positively changing the condition of many sick or disabled people. Their effects are already proving helpful in many fields, but alongside the benefits we should also consider the potential risks, as illustrated in this chapter with regard to free will and personal autonomy (Ienca and Andorno 2017; Yuste et al. 2017). As shown, there do not seem to be any precise criteria for categorising the different types
of neurotechnologies and their uses in order to assess their benefits and dangers for individual freedom. However, I have tried to distinguish certain macro-areas and discuss their implications. Ultimately, in the age of neuroscience, the greatest threat to human freedom and autonomy in relation to neurotechnologies can be summed up in the implicit concept of the “normative brain.” This is the idea that directly recorded neuronal activations are the most authentic expression of an individual’s attitudes, judgements, and preferences. In this sense, it is the cerebral level (which is necessarily an individual level) that is best suited to be the object of interventions to remedy shortcomings in the various spheres of society. This could point in the direction of a symbiosis between increasingly advanced technology and human beings who rely more and more on their basic reactions inherited from natural evolution instead of intersubjective rational reflection, which modulates and overcomes spontaneous drives and tendencies. In some cases, it will be possible to block the most dysfunctional impulses with the new neurotechnologies, but the alternative seems to be the following: either the machine takes over, or the person will be guided by their more instinctive components, including the selfish and anti-social ones. I do not wish to go into the ancient and unresolved debate on the original “goodness” or badness of human beings. I only want to point out a trend that is in itself “cultural” and that for various reasons is becoming evident in the interaction with brain-related technologies. We should therefore ask ourselves whether we are ready for this future that is looming ahead, and what kind of cultural vigilance and social and political regulation is needed to ensure that the transformation, if we want it, is not too fast and does not cause highly negative effects.
References

Adamczyk AK, Zawadzki P (2020) The memory-modifying potential of optogenetics and the need for neuroethics. NanoEthics 14:207–225
Akbari H, Khalighinejad B, Herrero JL, Mehta AD, Mesgarani N (2019) Towards reconstructing intelligible speech from the human auditory cortex. Sci Rep 9:874
Appiah KA (2008) Experiments in ethics. Harvard University Press, Cambridge
Balconi M, Vanutelli ME (2017) Interbrains cooperation: hyperscanning and self-perception in joint actions. J Clin Exp Neuropsychol 39(6):607–620
Baron RA (1997) The sweet smell of… helping: effects of pleasant ambient fragrance on prosocial behavior in shopping malls. Pers Soc Psychol Bull 23:498–503
Bouthour W, Mégevand P, Donoghue J, Lüscher C, Birbaumer N, Krack P (2019) Biomarkers for closed-loop deep brain stimulation in Parkinson disease and beyond. Nat Rev Neurol 15(6):343–352
Christman J (2015) Autonomy in moral and political philosophy. In: Zalta E (ed) The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2015/
Clark A, Chalmers D (1998) The extended mind. Analysis 58(1):7–19
Doris JM (2002) Lack of character: personality and moral behavior. Cambridge University Press, Cambridge
Douglas T (2014) Criminal rehabilitation through medical intervention: moral liability and the right to bodily integrity. J Ethics 18(2):101–122
Earp BD, Savulescu J (2020) Love drugs: the chemical future of relationships. Stanford University Press, Stanford
Edwards CA, Kouzani A, Lee KH, Ross EK (2017) Neurostimulation devices for the treatment of neurologic disorders. Mayo Clin Proc 92(9):1427–1444
Erler A (2019) Discussions of DBS in neuroethics: can we deflate the bubble without deflating ethics? Neuroethics. https://doi.org/10.1007/s12152-019-09412-9
Fenno L, Yizhar O, Deisseroth K (2011) The development and application of optogenetics. Annu Rev Neurosci 34:389–412
Floridi L (2012) Hyperhistory and the philosophy of information policies. Philos Technol 25(2):129–131
Fortunato VCR, Giraldi JDME, de Oliveira JHC (2014) A review of studies on neuromarketing: practical results, techniques, contributions and limitations. J Manag Res 6(2):201–220
Frankfurt HG (1971) Freedom of the will and the concept of a person. J Philos 68(1):5–20
Hildt E (2019) Multi-person brain-to-brain interfaces: ethical issues. Front Neurosci 13:1177
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(1):1–27
Inglese S, Lavazza A (2021) What should we do with people who cannot or do not want to be protected from neurotechnological threats? Front Hum Neurosci 15:703092
Isen AM, Levin PF (1972) Effect of feeling good on helping: cookies and kindness. J Pers Soc Psychol 21:384–388
Jiang L, Stocco A, Losey DM, Abernethy JA, Prat CS, Rao RPN (2019) BrainNet: a multi-person brain-to-brain interface for direct collaboration between brains. Sci Rep 9:6115
Josselyn SA, Tonegawa S (2020) Memory engrams: recalling the past and imagining the future. Science 367(6473):4325
Just MA, Pan L, Cherkassky VL, McMakin DL, Cha C, Nock MK, Brent D (2017) Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nat Hum Behav 1(12):911–919
Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452(7185):352–355
Klein E, Goering S, Gagne J, Shea CV, Franklin R, Zorowitz S, Dougherty DD, Widge AS (2016) Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain Comput Interfaces 3:140–148
Koenig-Robert R, Pearson J (2019) Decoding the contents and strength of imagery before volitional engagement. Sci Rep 9(1):1–14
Krishnan A (2016) Military neuroscience and the coming age of neurowarfare. Taylor & Francis, New York
Lavazza A (2016) Free will and neuroscience: from explaining freedom away to new ways of operationalizing and measuring it. Front Hum Neurosci 10:262
Lavazza A (2017) A pragmatic and empirical approach to free will. Riv Int Filos Psicol 8:247–258
Lavazza A (2018) Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front Neurosci 12:82
Lavazza A (2019) Moral bioenhancement through memory-editing: a risk for identity and authenticity? Topoi 38(1):15–27
Lavazza A (2019) Why cognitive sciences do not prove that free will is an epiphenomenon. Front Psychol 10:326
Lavazza A (2019) Transcranial electrical stimulation for human enhancement and the risk of inequality: prohibition or compensation? Bioethics 33(1):122–131
Lavazza A (2019) The two-fold ethical challenge in the use of neural electrical modulation. Front Neurosci 13:678
Lavazza A, Inglese S (2015) Operationalizing and measuring (a kind of) free will (and responsibility): towards a new framework for psychology, ethics and law. Riv Int Filos Psicol 6:37–55
A. Lavazza
Lavazza A, Reichlin M (2018) Of meatballs, autonomy, and human dignity: neuroethics and the boundaries of decision making among persons with dementia. AJOB Neurosci 9(2):88–95
Levy N (2013) Addiction and self-control: perspectives from philosophy, psychology and neuroscience. Oxford University Press, Oxford
Levy N (2014) Forced to be free? Increasing patient autonomy by constraining it. J Med Ethics 40(5):293–300
Martins NRB, Angelica A, Chakravarthy K, Svidinenko Y, Boehm FJ, Opris I, Lebedev MA, Swan M, Garan SA, Rosenfeld JV, Hogg T, Freitas RA Jr (2019) Human brain/cloud interface. Front Neurosci 13:112
Mason RA, Just MA (2016) Neural representations of physics concepts. Psychol Sci 27(6):904–913
Mills KL, Goddings AL, Clasen S, Giedd JN, Blakemore SJ (2014) The developmental mismatch in structural brain maturation during adolescence. Dev Neurosci 36(3–4):147–160
Nabavi S, Fox R, Proulx CD, Lin JY, Tsien RY, Malinow R (2014) Engineering a memory with LTD and LTP. Nature 511(7509):348–352
Ramirez S, Liu X, Lin PA, Suh J, Pignatelli M, Redondo RL, Ryan TJ, Tonegawa S (2013) Creating a false memory in the hippocampus. Science 341(6144):387–391
Rosenbloom T (2003) Risk evaluation and risky behavior of high and low sensation seekers. Soc Behav Pers 31(4):375–386
Royal Society (2019) iHuman perspective: neural interfaces. Royal Society, London. https://royalsociety.org/topics-policy/projects/ihuman-perspective/
Ryan TJ, Roy DS, Pignatelli M, Arons A, Tonegawa S (2015) Engram cells retain memory under retrograde amnesia. Science 348(6238):1007–1013
Ryberg J (2019) Neurointerventions, crime, and punishment: ethical considerations. Oxford University Press, Oxford
Schaefer GO, Kahane G, Savulescu J (2014) Autonomy and enhancement. Neuroethics 7(2):123–136
Selfslagh A, Shokur S, Campos DS, Donati AR, Almeida S, Yamauti SY, Coelho DB, Bouri M, Nicolelis MA (2019) Non-invasive, brain-controlled functional electrical stimulation for locomotion rehabilitation in individuals with paraplegia. Sci Rep 9(1):1–17
Shaw E, Pereboom D, Caruso GD (2019) Free will skepticism in law and society. Cambridge University Press, Cambridge
Smeding HMM, Goudriaan AE, Foncke EMJ, Schuurman PR, Speelman JD, Schmand B (2007) Pathological gambling after bilateral subthalamic nucleus stimulation in Parkinson disease. J Neurol Neurosurg Psychiatry 78(5):517–519
Steinert S, Friedrich O (2020) Wired emotions: ethical issues of affective brain-computer interfaces. Sci Eng Ethics 26(1):351–367
Todorov A (2017) Face value: the irresistible influence of first impressions. Princeton University Press, Princeton
UK Parliament (2020) POSTnote 614. https://post.parliament.uk/research-briefings/post-pn-0614/
Walter H (2001) Neurophilosophy of free will: from libertarian illusion to a concept of natural autonomy. The MIT Press, Cambridge
World Health Organization (2016) Tackling food marketing to children in a digital world: trans-disciplinary perspectives. https://livrepository.liverpool.ac.uk/3004858/1/Food%20marketing.pdf
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551(7679):159–163
Zoefel B, Huster RJ, Herrmann CS (2011) Neurofeedback training of the upper alpha frequency band in EEG improves cognitive performance. Neuroimage 54(2):1427–1431
Chapter 6
Responsibility: A Theory of Action Between Care for the World, Ethology, and Art

Gianluca Cuozzo
Abstract This essay explores the concept of responsibility with respect to today’s ecological and pandemic crisis. It identifies a fundamental category in “strategic repentance,” that is, the ability to reconsider prior decisions based on the environmental feedback resulting from one’s activities. The model for this judicious and repentant behaviour can be traced back, on the one hand, to the animal kingdom and, on the other, to the artistic domain. Thus, both a bat and Michelangelo can be viewed as masters of a new action scheme, based on feedback loops and the in-progress revision of previously adopted strategies.

Keywords Anthropo-ecology · Ethology · Art theory · Environment · Technology
6.1 For an Anthropo-Ecology of Responsibility

Responsibility is the ethical concept that, on the social level, can widen our thinking—currently pervaded by a demoralising and resentful syndrome aimed at sacralizing the workings of the prevailing economic system (Bergoglio 2013)—to encompass a new association with nature. Indeed, nature alone is able to offer the essential “biosocial foundations” for a community life that does not intend to be nihilistic and, as such, aleatory (Mazzarella 2004, 142). Reproposing this category, thirty years after Jonas’s (1985) Das Prinzip Verantwortung, could be useful to give a new response to the nihilistic break between history, technology, life and ethics revealed by today’s ecological crisis. This hiatus, accompanied by the recent AI revolution, is creeping into the tension between the protection of labour as it is traditionally understood, prevailing economic interests, incentives for digital communication, the clash between old and new technologies, the push towards globalisation (with the risks associated with it) and the survival of democratic life practices (Foà 2020). The survival of anthropological identity, increasingly torn by the “challenging choice” between natural and artificial

G. Cuozzo (B) Department of Philosophy and Educational Sciences, Università Di Torino, Turin, Italy e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_6
intelligence, now depends on a huge historical decision, which must be able to reckon with the ethical-natural foundations of our world. These, indeed, are currently being thwarted by the delirium of technological omnipotence and the prospect of a hypertrophic artificialization of the human being: a technogenesis of oneself and one’s own world, which believes itself to be autogenous with respect to its own conditions of possibility, “without a founding awareness of the world we live in” (Mazzarella 2017, 10). Our sense of responsibility is therefore key to the survival of the “threatened plenitude of the living world” (Jonas 1985, 8), which in principle should never be the object of our elective faculty: as Jonas (1985, 37) writes, “never must the existence or the essence of man as a whole be made a stake in the hazards of action.” At the same time, this possibility of survival depends on the awareness that “every phenomenon, problem or solution interacts and is in a relation of mutual dependence with every other phenomenon, problem or solution” (Peccei 1976, 157). Artificial intelligence plays an evident role in this reflection: to give just two examples, recent studies aim, on the one hand, to apply artificial neural models and algorithms to the local ecosystemic effects of tree distribution in “a complex vegetation mosaic in Brazil” (Nunes and Bastos Görgens 2016), and, on the other, to use them “for the modeling and optimization of pollutants removal” from water (Fan et al. 2018). The ambitious goal, to be pursued also with the help of ICTs, is to build an eco-social community that is resilient and responsibly open to a judicious evaluation and critical review of its livelihood strategies.
This would be favoured by the combination of oiko-logy and eco-logy, in view of a desirable semantic overlap between οἶκος—the home or environment of organisms that relate to each other in a given external, organic and inorganic context (according to Ernst Haeckel’s classical definition: Haeckel 1868)—and Ἠχώ, the nymph Echo of Greek mythology. Echo was the daughter of Air and Earth, as well as the personification of the physical phenomenon of echo: “vox tantum atque ossa supersunt;/vox manet, ossa ferunt lapidis traxisse figuram” (Ovid, Met. III, 338–9). According to this blended meaning, ecology would not only deal with the community of living beings in their possible mutual relationships within their common living context, but also with their ability to react/correspond (individually and collectively) to the systemic consequences produced by their own transformation activities. A responsible community, in this sense, is a housing consortium of living organisms that enters into a virtuous, two-way correspondence with the variations produced by its practice in the world around it (Umwelt). Acting responsibly therefore means interacting with the context and with the consequences of one’s actions, knowing how to give adequate and innovative responses to the phenomena of disruption produced in it, while also questioning the intentions behind one’s actions: today, the response (responsum) par excellence should internalise the principle “act so that the effects of your action are compatible with the permanence of genuine human life” (Jonas 1979, 11). Otherwise, as in Ovid’s fable, instead of favouring the bio-social community of the living, the nymph Echo will resonate like the lugubrious lament of our late repentance, obsessively echoing—among the bare bones of our expectations of well-being—from the depths of that erosive “world-hole” from which nothing can return (Dick 2012, 231) (our
6 Responsibility: A Theory of Action Between Care for the World …
horribly disfigured Earth, transformed into the grave of our dreams of growth to the bitter end). As I said, the digital humanities can play a fundamental role in this construction of a responsible context: “The more ICTs advance, the more humanity appears responsible for how things go in the world (including in terms of forecasting and prevention of consequences and future events), and yet, the more difficult it becomes to identify specific sources of responsibility. Increasing levels of responsibility and co-responsibility are generating new challenges” (Floridi 2015, pos. 511)—challenges that are now taking the shape, to quote Otto Neurath (1959, 204), of “building the raft while swimming.” What’s at stake in this responsible choice, moreover, is another important philosophical category, which permeates the whole of humanistic-Renaissance thought: that of dignitas hominis. I would place this category at the centre of a “new humanism” based on the model traced, forty years ago, by Aurelio Peccei—a model characterised by a deep biosocial rooting of culture in the substantial foundation of nature. With the formula of a new humanism, 500 years after Leonardo da Vinci’s great ethical-scientific vision (but with the same real-utopian outlook aimed at establishing a new “science paradise”: a model of culturally advanced civilization inspired by the values of art, philosophy, science and technology: Leonardo da Vinci 1986–1990, fol. 8r), the Turin manager hoped for a rebirth of all knowledge—scientific, technical, and humanistic—adapted to the concrete needs of human beings and the context in which they habitually live and act. This rebirth should have led to an anthropological transformation that would have raised the capacity and quality of the human species to the level of its responsibilities and challenges in the global context: “This is the only way we can remain constantly in tune with our rapidly evolving universe” (Peccei 1976, 153). 
This philosophy of human life, underlying an adequate species conscience (based on eminently biocentric grounds), should, if necessary, have taken on revolutionary traits, overturning “principles and norms today considered untouchable, and favouring the emergence of new motivations and new values—spiritual, ethical, social, aesthetic, artistic—responding to the imperatives of the present era” (Peccei 1976, 155). Now, in the face of the potential disappearance of humanity, this revolution should ensure that the entire global system avoids the so-called “predicament of mankind,” by bringing social constructions as a whole (political, economic, educational, etc.), “to a higher level of understanding and organisation, based on a stable internal balance and a fruitful communion with Nature” (Peccei 1976, 155). “Either we succeed in elevating and developing our existential quality, in harmony with the cumulative changes that we ourselves produce and that occur in our world, or, estranged and defeated by the creations of our genius, we will go adrift, plagued by equally cumulative disasters” (Peccei 1976, 173–174). The large quantity of waste and residues produced by the production chain (Cuozzo 2019), that is, “the image of an era in which, unquestionably, capitalism is being increasingly submerged by the waste it produces,”1 is the most striking
1 Please see the introduction to the Italian edition of Botha (2006, 3).
G. Cuozzo
symptom that the path to change is still long; in other words, that we are still heading towards “cancerous undifferentiated growth” (Mesarovic and Pestel 1974, 9) (due to which we introduce into the ecosystem a large number of substances that can have serious and far-reaching biological consequences). We have yet to embark on the journey of an organic and context-friendly development, able to “transform waste from a production process or a consumer good at the end of its life into an input for new production and consumption cycles” (Viale 2011, 85).
6.2 Bats’ Wisdom

Being responsible, in this context, evidently means being able to respond adequately to the situation, considering the factual data—even those resulting from the side effects of our own life practices—in the project undertaken. This is what bats do with innate wisdom. These small nocturnal mammals of the order Chiroptera adopt a flight strategy based on the return effect of the ultrasounds emitted, with constancy and regularity, by their complex sensorial apparatus: a sort of biological sonar (or biosonar) that they share with marine mammals such as dolphins and other odontocetes. The disposition of possible obstacles is perceived through echolocation or echodetection, that is, through “soundscape orientation” (Slabbekoorn and Bouton 2008, e5). This involves a complex technique that reconstructs three-dimensional space by sending and receiving (feedback) acoustic frequencies that are undetectable by the human ear; through these sound waves bats are able to draw a highly differentiated, reflected map of the world around them. Starting from this example, which is not surprisingly borrowed from the repertoire of ethology (definable as one of the “sciences of new humility”: Mazzarella 2004, 11), the model of action that could be proposed revolves around the rational strategy of repentance (or “strategic repentance of action”). This could be part of a programme aimed at “the restitution to the other of the whole of its latitude by the operative historical-social self of technology” (Mazzarella 2004, 12), thereby managing to identify the biological debt resulting from our being-born (natum-esse) and our historical action. 
Retracing one’s steps, as the ability to redirect one’s action, is based on inserting a feedback loop into the paradigm of action, which “seemingly reversing the course of time, or even annulling it, goes from the (planned) decision to its determinants” (Dupuy 2002, 116), going back to the reasons that led to the action in view of a prudent revision of the initial intentions. Repentance (whose structure is as follows: A ↔ B, where A and B are mutually exclusive, while the arrows indicate the telos of the action), on balance, is the mark of an existence that is never completely resolute, but undecidedly swings between several options that are never truly equivalent alternatives: “Sacrifice must be shown as the inevitable price for different groups of people to get what they want or at least to be liberated from what has become intolerable” (Illich 1974b, 112). Every removal, dismissal, rejection (as a mournful countermelody of creation, promotion
A ↔ B

Scheme 6.1 Action oriented towards strategic repentance; its main characteristics are: preservation of experiences in the form of mnestic traces, reversibility, possibility of conversion to the (unfulfilled) past, cybernetic competence
and acceptance) comes with the feeling—dormant but still pulsating—of the counterfactual, of an alternative that was rejected and yet is ready to claim its rights: “This process of comparing wish with actuality, of sensing error and then correcting it by the precise application of an opposing force” (Lovelock 1979, 45) is precisely the sign of a judicious/repentant existence, capable of interacting with the world through the cybernetic process of trial and error. A typical characteristic of this attitude is “a circular logic which may be unfamiliar and alien to those of us who have been accustomed to think in terms of the traditional linear logic of cause and effect” (Lovelock 1979, 46). Next to the scheme of repentance, there is the unidirectional paradigm of action (what Max Weber called Gesinnungsethik), deaf to any course correction; it measures the success (exitus) of the practice in the mere geometrical adaptation of the instruments to the proposed goal, acting—starting from the initial choice between A and B—in the conviction of the inviolability of the objective (O). However, planning something, giving shape to a certain intention of meaning, results in an undesirable waste, even if only from the point of view of the possibilities of existence that have been rejected and judged improper. Choosing A between A and B so as to reach the result O means removing the second possibility (B), discarding it as not useful/desirable/sensible: as soon as the choice is made, B becomes a risky opacity, even a potential obstacle with respect to the decision taken (A), ending up identifying itself with “nothingness of useless [...] matter” (Scanlan 2005, 120)—at least as long as we understand the life of the subject as a necessary series of causes and effects, always with a view to a final, utterly irreversible end. 
According to Scheme 6.1, therefore, the heterogenesis of ends—i.e., the negative externalities of the decision-making process materialised in practice, according to which “the unintended consequences of men’s actions are more important, for the most part, than the consequences they intend” (Passmore 1974, 84)—is something that needs to be taken into account in a responsible elective process. As Amartya Sen notes, consequential reasoning is essential for ethical judgement: “To get an overall assessment of the ethical standing of an activity it is necessary not only to look at its own intrinsic value (if any), but also at its instrumental role and its consequences on other things, i.e. to examine the various intrinsically valuable or disvaluable consequences that this activity may have” (Sen 1987, 75).
6.3 Michelangelo: A Theoretician of Judicious and Responsive Action

From bats to Michelangelo—a step that could appear, at least at first, rather risky. In Michelangelo, however, one finds a peculiar reflection, at once poetic and philosophical, on the meaning of sculpture, filled with metaphysical, religious, biographical and existential references. This gives rise to a perfect marriage of knowledge and literary genres of extreme evocative power and incisiveness, while paying attention to the material foundations of artistic production—the quarry and stone being the first elements of any sculptural project. After all, this is the right ante litteram antidote to Heidegger’s (2002) ontology of art found in the essay Der Ursprung des Kunstwerkes. In it, the technical detail of creation disappears into the mists of the mythology of Ereignis and of corresponding, from the point of view of historical being, to a destiny that appeals to the mere forward-looking nature of Dasein (removing, as Karl Löwith would say, the element of human nature that is rooted “im ewigen Umkreis der Physis, die so ist, wie sie ist, und nicht anders sein kann”: Löwith 1981, 265). First of all, as a sculptor in a “demonic frenzy of creation” (Wittkower and Wittkower 1963, 72), Michelangelo thought of himself as the “spirit of a great martyr”, in whose imperiousness—as he writes in the Sonnets—“some sin presses, hidden to me” (Buonarroti 1863, 270): the sacrifice, the repentance, the destruction of the work are part of this process of retroversion of formativity, which—having noticed some shortcoming intrinsic to his own shaping gesture—the artist would like to do all over again, taking up the thread of artistic achievement and even overturning its result. 
To quote the Italian philosopher Pareyson (1991, 59), the artist, “in the very course of the operation, invents the modus operandi and defines the rule of the work as he makes it; he conceives by performing, and designs in the very act he performs.” It is useless, therefore, to judge a work of art by a criterion that is purely intrinsic to the act of making itself (e.g., the conformity of the result with the original intent). In fact, such a criterion would isolate the processuality of the creative gesture from the pragmatic context of resilience in which it operates. It is only in relation to the interaction between initiative and result (environmental feedback) that one can say that “the work succeeds only if it is made as if it made itself” (Pareyson 1991, 91), emerging almost spontaneously from what we might call the “play of forces” of artistic productivity: it is the act of mediating in itself (das Vermittelnde), the “manner in which the particular forces move as cause (Ursachen) and thus effect and so are made effective” (Heidegger 1994, 119). In this game of reciprocal co-implication, initiative and external limits, freedom, and necessity, as opposing forces, exchange the determinate features they initially presented, each one losing any right to exclusivity with regard to originality: what counts is the emergent settlement between praxis and environmental resilience. Something unpredictable comes out of this, which often disregards the initial intent (hence, therefore, the surprise effect of artistic success). It is in this complex dynamic that I would situate what in art is called “reactive improvisation”: “The ability to
respond to the unforeseen course of events” (Sparti 2007, 88).2 Considering the limits of the project in the project itself helps to redefine the action undertaken. In this way, the success of the work, taken in this interplay of choices, initiatives and context effects, takes on the character of an event: it is part of “an ephemeral, transient, non-reidentifiable activity that can only be perceived at the moment of its creation, i.e. in progress” (Bertinetto 2009, 148). On the one hand, the human being makes himself by knowing what he is; but, on the other hand, he comes to define himself precisely in his own making, becoming what he already is; so that, by “making ourselves what we are we knew ourselves, and by knowing ourselves we learned to be so” (Mazzarella 2017, 7), in a circularity in which the prius of the process is lost. The majestic Moses of San Pietro in Vincoli, one of Michelangelo’s most successful works, is the embodiment of the most exquisite artistic repentance imaginable. Indeed, the sculpture—conceived as a means to commission Michelangelo for the third time, so as to end the diatribe over the creation of the funeral monument for Pope Julius II—reveals a change in the figure’s posture that took place at an advanced stage of the work. This change in design conceals hylomorphic shortcomings and complex design solutions adopted in extremis, which highlight Michelangelo’s “technical gamble”—a redefinition of form that has left many traces in the sculpture. 
These shadows give an idea of the grandiose change that took place, we are told, in just two days: the figure of the Prophet, which in 1516 was standing frontally with his feet together (“according to what we can deduce from the strange shape of the cloak that today flanks the right foot just like the outline of another foot”) is now tense, almost disjointed: the head is turned, while the left leg, full of energy, is bent backwards, perhaps because “the marble previously carved did not allow him to find room for the foot except in a very backward position” (Mazzarella 2017, 310). But Michelangelo’s artistic repentance is mostly evident in Moses’ beard: the lack of marble, determined by the previous frontal working, imposed a truly prodigious ploy on the artist. Especially on the left side, the beard is barely noticeable—it is almost as if it were a high relief rather than a full shape like the right side (where the artist could evidently still count on the marble of the chin offered by Moses’ original posture). Because of this, Michelangelo was forced to move the beard all the way to the right thanks to the gesture of the forefinger, exploiting the twisting of the head; the hand gesture, while retaining the free flow of the beard, “could never have had the consequences that it originates in the sculpture” (Mazzarella 2017, 311). This solution, clearly antinaturalistic, is the showy trace of an artistic repentance which, however, originated one of the most dramatic and vital figures in the history of art. The form struggles with the matter until the very last moment, trying to remedy its shortcomings (due to subtraction consistent with an older project) in view of a completely new formative process. The result is that of a figure who seems to rebel against his original project, who appears to choose his own posture, contorting himself under the sculptor’s chisel.
2 I have drawn inspiration and references about the phenomenon of improvisation from Bertinetto (2009).
Without going as far as Freud, who saw Moses as the effigy of the father of the primitive horde (who sculpts the super-ego in the conscience of his rebellious sons: Freud 1955, 209–238), I would say that Michelangelo’s task, however immense on the artistic level, was to give shape to his own repentance, a change of project determined in relation to two circumstantial elements: the lack of light and the absence of material. The first was a consequence of a change of lighting in the context environment: some high windows in the church were closed, unexpectedly casting a shadow over Moses’s face, originally designed to adopt a frontal posture—hence the need to turn his head towards the entrance, to the left of the monument. The second is given instead by the limitations imposed by the stone, which, already modelled, resisted the new posture, determining the risky solutions that animate the sculpture as if it were a living imago, endowed with its own intentionality, responding as it can to external necessity. The material constraint, retroacting on the artistic intention as its corrective, determines the artist’s repentance and the redefinition of the initial project, so that the culpa—if the need is consciously and responsibly owned by the artist—becomes felix, that is, a completely new trouvaille such as to determine a revision of the prior design intention. From this point of view, the shape that we admire today stands on the ruins of abandoned projects, on the remains of discarded forms. In this sense, Moses could set an example of cybernetic practice, based on the principle of resilience of internal and external limitations: a judicious action, always ready to reformulate its intentions in relation to the feedback offered by context data. 
The shaping of one’s repentance is a singular ability of human (also artistic) action, which justifies the sense of necessity underlying the success of one’s work, even if it was achieved through the most extreme improvisation—what’s crucial is a prompt and unpremeditated reaction to the contingent data, absolutely unpredictable during the design phase. Freedom and necessity, norm and repentance, planning and responsible revision of the initial intention, as in the Hegelian play of forces, are welded together in a gesture that has both the traits of spontaneity and those of mere reactivity to the given situation. Or rather, the artist’s freedom is deepened by a responsible making that he is forced to rethink, in a completely new way, based on the failed projects that had been initially rejected, but which could now prove to be perfectly coherent with the new situation. The artistic action, therefore, can always be defined in terms of action (initial project) and counter-reaction (in the light of the repentance resulting from the obstacle perceived by the effector organs of artistic making). The counter-reaction tends to reabsorb—by composing them in a new project—the variations in the state of the system with a view to achieving the regularity of the foreshadowed form (a form that in any case will bear the trace of its own limitations). In short, there is a perennial osmosis between matter and artifact, which puts the project originally conceived to the test. There is an interplay of alternations between the “artistic phase” proper (the moment of intuitive thought, the conception of the idea) and the “natural phase” (the practical realisation of the shape foreshadowed in the stone, thanks to the cunning use of hands, which perceive the limitations of the context). The latter phase gives the formative process the aspect of a continuous inspiration-effort interplay aimed at obtaining—paradoxically, as the result of supreme artifice—the naturalness and
A (and non-B) → O
Scheme 6.2 Teleologically oriented action; its main characteristics are: irreversibility, orientation towards the future, inability to keep track of the consequences produced in the context of practice
spontaneity of the work created (Clements 1954, 302): as if the work were executed “in a difficult manner with the utmost ease” (Vasari 2010, 150). According to Michelangelo, the artistic process, in the face of contingent factors that could not be predicted at the time of the initial formal intuition, could be confronted with marble that was not sufficiently “beautiful and receptive to the things that had to be done” (Buonarroti 1875a, 660); in the course of the work, it could prove to contain “certain failings [...] that could not be imagined” (Buonarroti 1875b, 383). In this case the artistic challenge becomes daring, taking the sculptor’s skill and expertise to the extreme. In his sovereign effort—aimed at achieving the desired shape—he “stool[s] blood into his works.”3 After all, there is no work that vibrates with its own life, like a viva imago, that is not permeated by its creator’s possible repentance. This element of randomness of the designed shape grants the image, suspended in the indeterminateness of equivalent possibilities, an “ideal vagueness” (Forcellino 2019, 171) that surprises the observer.
6.4 Conclusion: “Prometheus at the Crossroads”

In this sculptural metaphor I see a model of responsive and responsible action: that of a resilient subject, able to assimilate the feedback coming from the context, always willing to review their projects. Marble, after all, is a metaphor of the world, which we—in our imperious civilization project—must learn to listen to again. The knowledge favoured by technology, if it stops evaluating its own success on the basis of mere linear effectiveness (Scheme 6.2), can be of great advantage in establishing a cybernetic, judicious and repentant behaviour, similar to that of a bat. A new principle of ethical action, at the centre of an anthropo-ecology of responsibility, could be the following: act in such a way that your every action is always, at every moment of its teleological development, potentially reversible; also consider every intermediate link between A and O (→→→), in accordance with Scheme 6.2, as an end in itself and not as a means to an end. Those who act well, in this case, are backward-looking actors, capable of questioning the assumptions of their practice, of dissolving the process of deliberation in an alternative way to that which they have undertaken. This means being endowed with a prodigious memory, with respect to which nothing must be lost. Oblivion, in this case, would correspond to the epochal validation of the principle of euthanasia applied, on a global scale, to the natural world.
3 This is what, speaking in the first person, Michelangelo apparently told his friend Bartolomeo Ammannati (Clements 1954, 301).
This intersection of qualitative alternatives, dissolved in the recent past in a way that is far from rational and responsible, has favoured questionable choices that are still reversible today (since, most probably, “if we don’t do something, for future generations it will be too late,” Passmore 1974, 109). On a historical level, a good example of this is the invention of the bicycle—a means of locomotion which, despite its apparent frugality, is a true concentration of sustainable technology. Nobody thinks about it anymore, but the bicycle was invented at the same time as the automobile, which today is causing an infinite number of problems—from road congestion to air pollution, from just wars (black gold being the constant mobilising factor for armies in the Middle East) to incontinent oil tankers plying the seas leaving behind oily and deadly trails, from terrible and useless deaths in road accidents (of which James G. Ballard was a morbid, brilliant storyteller, Ballard 1973) to mining problems (depletion of oil fields, fracking, etc.). The pedal-powered velocipede and the famous high-wheel, the forerunners of today’s bicycle, were born between 1855 and 1869, the year Eugene Meyer invented the spoked wheel; the first internal combustion engine was developed by the Swiss Isaac de Rivaz in 1802, and perfected only in 1876, eleven years before the first real cars were presented at the Universal Exhibition in London in the pavilion dedicated to railway equipment. In the same period, therefore, two different solutions were proposed to the problem of mobility, which embodied two opposite interpretations of the relationship between the amount of energy, speed of travel, fairness (in the hoarding of resources needed for travel) and the actual satisfaction of travelers. 
And the latter value should not only be considered in terms of the time needed to reach one’s destination: it is necessary to distinguish between the mere euphoria for a promise of speed that is actually unattainable in today’s traffic conditions and the true happiness of the traveler, which is motivated by a number of external realities such as safety, comfort, driving stress, environmental costs, etc. Indeed, as Illich (1974a, 12) wrote, “equity and energy can grow concurrently only to a point.” Society is swallowing ever-increasing amounts of energy that degrade, deplete and ultimately frustrate the majority of the population. People are suffocated by waste (smog, fine dust, CO2, exhausted oils, car carcasses growing like rusty sheet metal forests in the city outskirts), paralyzed by clogged transit routes and alienated by the obscene metamorphosis of urban space due to the new mobility requirements, which “transform geography into a pyramid of circuits sealed off from one another according to levels of acceleration” (Illich 1974a, 73). Illich (1974a, 56) claims that “technocracy must prevail as soon as the ratio of mechanical power to metabolic energy oversteps a definite, identifiable threshold.” Wouldn’t it be appropriate, then, to rethink how to untie an ancient knot, now that the majority of us “spend an ever-increasing slice of our existence on unwanted movements,” due to a “distortion of human space” that has purely economic aims? Indeed, when one looks at actual data, “man’s speed remained unchanged from the Age of Cyrus to the Age of Steam” (Illich 1974a, 31), whereas the air in our cities has become unbreathable. A similar example could be given with regard to photovoltaic and nuclear technology. Here, indeed, the chronological proximity of the two discoveries concerns the
consanguinity links between the inventors in question: “The photovoltaic effect, the creation of electric potential under the effect of sunlight, was discovered in 1839 by Alexandre-Edmond Becquerel, who was the father of that Henri Becquerel who later discovered radioactivity. Thus, already in the second half of the nineteenth century there were some who tried to manufacture solar cells with the goal of producing energy, even though no one could explain how they worked” (Bardi 2011, 177). And, also in this regard, the same observations could be made about the pollution rate, functionality and democratic contribution offered by the two alternatives proposed in the history of applied sciences. Once again, we have the comparison between a dirty solution (which takes advantage of the momentary non-renewable energy opulence) and a clean one—the liberation from the shortage determined by the illusory “shape of expectations” through the production of excesses, which, beyond their immediate benefits, deplete the world and endanger life.
References

Ballard JG (1973) Crash. Picador, London
Bardi U (2011) La terra svuotata. Il futuro dell’uomo dopo l’esaurimento dei minerali. Editori Riuniti, Roma
Bergoglio JM (2013) Apostolic exhortation Evangelii gaudium. http://www.vatican.va/content/francesco/en/apost_exhortations/documents/papa-francesco_esortazione-ap_20131124_evangelii-gaudium.html
Bertinetto A (2009) Improvvisazione e formatività. Annuario Filosofico 25:145–174
Botha T (2006) Mongo. Avventure nell’immondizia. ISBN Edizioni, Milano
Buonarroti M (1863) Le rime cavate dagli autografi. Guasti C (ed). Le Monnier, Firenze
Buonarroti M (1875a) Lettera a Leonardo di Cagione del 12 febbraio 1517. In: Milanesi G (ed) Le lettere di Michelangelo Buonarroti. Le Monnier, Firenze
Buonarroti M (1875b) Lettera a Domenico Buoninsegni del 2 maggio 1517. In: Milanesi G (ed) Le lettere di Michelangelo Buonarroti. Le Monnier, Firenze
Clements R (1954) Michelangelo on effort and rapidity in art. J Warburg Courtauld Inst 17(3–4):301–310
Cuozzo G (2019) New wastes. Nature is not an unlimited station. In: Valera L, Castilla JC (eds) Global changes. Ethics, politics and environment in the contemporary technological world. Springer, Cham, pp 57–65
Dick PK (2012) Martian time-slip. First Mariner Books, New York
Dupuy JP (2002) Pour un catastrophisme éclairé. Quand l’impossible est certain. Éditions du Seuil, Paris
Fan M, Hu J, Cao R, Ruan W, Wei X (2018) A review on experimental design for pollutants removal in water treatment with the aid of artificial intelligence. Chemosphere 200:330–343
Floridi L (2015) The onlife manifesto. Being human in a hyperconnected era. Springer, Oxford
Foà S (2020) Distanza, lontananza e verità nell’emergenza. Diritto e nostalgia. In: Dall’Igna A, Sferrazza Papa EC, Carrieri A (eds) Distanza. Quaderni speciali di “Filosofia.” Mimesis, Milano
Forcellino A (2019) Michelangelo. Una vita inquieta. Laterza, Roma-Bari
Freud S (1955) The Moses of Michelangelo. In: Strachey J (ed) The standard edition of the complete psychological works of Sigmund Freud, vol XIII: Totem and taboo and other works (1913–1914). The Hogarth Press and the Institute of Psycho-Analysis, London
Haeckel E (1868) Natürliche Schöpfungsgeschichte. Gemeinverständliche wissenschaftliche Vorträge über die Entwicklungslehre im Allgemeinen und diejenige von Darwin, Goethe und Lamarck im Besonderen, über die Anwendung derselben auf den Ursprung des Menschen und andere damit zusammenhängende Grundfragen der Naturwissenschaft. G Reimer, Berlin
Heidegger M (1994) Hegel’s phenomenology of spirit. Indiana University Press, Bloomington
Heidegger M (2002) The origin of the work of art. In: Young J, Haynes K (eds) Off the beaten track. Cambridge University Press, Cambridge
Illich I (1974a) Energy and equity. Harper & Row, London
Illich I (1974b) Tools for conviviality. Harper & Row, London
Jonas H (1985) The imperative of responsibility. In search of an ethics for the technological age. The University of Chicago Press, Chicago
Leonardo da Vinci (1986–1990) I manoscritti dell’Institut de France di Parigi (1484–1515 ca), Ms E. Ed A Marinoni, 12 vols. Giunti-Barbera, Firenze
Lovelock J (1979) Gaia. A new look at life on Earth. Oxford University Press, Oxford
Löwith K (1981) Natur und Humanität des Menschen. In: Stichweh K, de Launay MB (eds) Band I (“Mensch und Menschenwelt”). Metzler, Stuttgart
Mazzarella E (2004) Vie d’uscita. L’identità umana come programma stazionario metafisico. Il Melangolo, Genova
Mazzarella E (2017) L’uomo che deve rimanere. La smoralizzazione del mondo. Quodlibet, Macerata
Mesarovic M, Pestel E (1974) Mankind at a turning point. EP Dutton, New York
Neurath O (1959) Protocol sentences. In: Ayer AJ (ed) Logical positivism. The Free Press, New York
Nunes NH, Bastos Görgens E (2016) Artificial intelligence procedures for tree taper estimation within a complex vegetation mosaic in Brazil. PLoS ONE 11(5)
Pareyson L (1991) Estetica. Teoria della formatività. Bompiani, Milano
Passmore J (1974) Man’s responsibility for nature: ecological problems and Western traditions. Duckworth, London
Peccei A (1976) La qualità umana. Mondadori, Milano
Scanlan J (2005) On garbage. Reaktion Books, London
Sen A (1987) On ethics and economics. Blackwell, Oxford
Slabbekoorn H, Bouton N (2008) Soundscape orientation: a new field in need of sound investigation. Anim Behav 76(4):e5–e8
Sparti D (2007) Il potere di sorprendere. Sui presupposti dell’agire generativo nel jazz e nel surrealismo. In: Ferreccio G, Racca D (eds) L’improvvisazione in musica e letteratura. L’Harmattan Italia, Torino
Vasari G (2010) Le vite de’ più eccellenti architetti, pittori, et scultori italiani, da Cimabue insino a’ nostri tempi. Ed L Bellosi, A Rossi, vol II. Einaudi, Torino
Viale G (2011) La conversione ecologica. There is no alternative. NdA Press, Rimini
Wittkower M, Wittkower R (1963) Born under Saturn: the character and conduct of artists. A documented history from antiquity to the French Revolution. Random House, New York
Chapter 7
Neuroscience, Neurolaw, and Neurorights Paolo Sommaggio
Abstract Neurosciences study the relations between the human brain and human behaviour. Recent developments in these sciences are granting us an increasing ability to control, or influence, mental processes. In this chapter, I analyse how this possibility is becoming a concrete ability to control socially undesirable behaviour, which is why I choose to investigate the relationship between Neurosciences and the Law. With this in mind, I first show the new role of neuroscientists in Courts. Secondly, I analyse new neuro-paradigms in public debates about the structure of Society and the Law. I then study so-called reductive neurolaw, that is, the gradual replacement of traditional sources of law with new neuroscientific standards. Finally, I provide a definition of Cognitive Liberty (a new form of safeguard) suitable for inclusion in a “Declaration of Human Neuro-rights”. Indeed, Cognitive Liberty may be used as a new conceptual tool to protect personal human rights against reductive neuro-paradigms. Keywords Neurocivilization · Reductive neurolaw · Cognitive liberty · Human rights
1 “‘Neuroscience’ refers to the multiple disciplines that carry out scientific research on the nervous system to understand the biological basis for behaviour. […] The term ‘neuroscience’ was introduced in the mid-1960s, signaling the start of an era when these disciplines would work together cooperatively, sharing a common language, common concepts, and a common goal: to understand the structure and function of the normal and abnormal brain. Neuroscience today spans a wide range of research endeavors, from the molecular biology of nerve cells, which contain the genes that command production of the proteins needed for nervous system function, to the biological bases of normal and disordered behavior, emotion, and cognition, including the mental properties of individuals as they interact with each other and with their environments” (Committee on Opportunities in Neuroscience for Future Army Applications, Board on Army Science and Technology, Division on Engineering and Physical Sciences, National Research Council of the National Academies. Opportunities in Neuroscience for Future Army Applications, Washington, D.C.: The National Academies Press 2009). See Binder et al. (2009) and Nobel Prize laureate Kandel (1981).
P. Sommaggio (B) Department of Law, Università di Trento, Trento, Italy e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_7
7.1 Introduction Neurosciences are the disciplines that study the relation between the human brain (and the nervous system) and human behaviour.1 In this work, I will briefly analyse the risk that knowledge of neurological structures becomes a means of controlling social behaviour (Roskies 2002), thus taking the path towards an authentic neurocivilization (Sommaggio 2014). In doing this, I will present so-called reductive neurolaw, which is the gradual replacement of traditional sources of law with new neuroscientific standards, and I will introduce Cognitive Liberty: a conceptual tool able to defend people from direct brain interventions that raise critical issues for human autonomy and personal freedom (Sommaggio et al. 2017). Finally, I will show how this concept has a pivotal role in a new international human rights perspective.
7.2 The New Technologies The new techniques developed for investigating the human brain open a previously inconceivable array of opportunities as regards the capability of directly knowing, and controlling, the behaviour of persons. In this section we provide a brief overview of existing neuro-technologies and of the kinds of questions that their development and application pose. The first set of neuro-technologies we consider are brain imaging techniques. The main techniques employed for brain monitoring and imaging include electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). They provide structural and functional information about the brain and its neural activity, which is used for diagnostic and research purposes. Through fMRI, for example, neuroscientists are able to study the ways in which neurons fire and thus to correlate brain activity with mental activity, localizing the areas of the brain that respond to certain stimuli, like pain or language recognition. This information provides a clearer understanding of the way in which the brain works and how it supports our thoughts. The next set of technologies comprises those of neuro-stimulation, which offer treatments based on electrical and magnetic stimulation of the brain through medical devices fixed on the head or implanted into the brain. Transcranial Magnetic Stimulation (TMS) and Deep Brain Stimulation (DBS) currently have widespread applications for the mitigation of the symptoms of neurological and psychiatric disorders like Parkinson’s disease, epilepsy and depression (Jotterand and Giordano 2011). A third set of neuro-technologies comprises psychoactive drugs, which are known to cause changes of personality as well. The last issue that will be touched upon here is cognitive enhancement. Both neurostimulation technologies and psychoactive drugs can, in fact, be used to augment human cognitive capacities such as attention, focus and memory (but also mood,
personality traits and behaviour). To sum up, therefore, neuroscientific technologies promise to: (a) “read” the mind of people; (b) change mood and personality; (c) induce behaviour modification; (d) alter memory formation and consolidation; (e) augment cognitive ability (or capacity). These are the reasons why the field of neuroethics calls for an ever-increasing consideration of the social and ethical implications of neuro-technological discoveries (Sententia 2004, 223).
7.3 Radicals & Reformists The Nineties, the so-called Decade of the Brain (Presidential Proclamation 6158, 1990), saw neurosciences beginning to expose themselves in the social arena, searching for notoriety (Cacioppo 2002; Franks 2010; O’Connor et al. 2012). In an article focused on how neurosciences can provide for an effective and fair administration of Justice (Jones et al. 2013), the authors assume that neurosciences have by now supplanted all other scientific disciplines and have radically transformed the traditional ways of conceiving the anthropological events on which legal systems are based. In this way, neuroscientists can become the most qualified experts for all those in-depth technical examinations that take place in Court, replacing all other kinds of consultation. In addition to this, we can see the gradual creation of stereotypes and commonplace ideas able to influence social and political debates (Kolber 2014; Vincent 2013). The project is the coordination of the new neuroscientific achievements with traditional anthropological/moral concepts (for instance, free will and conscious moral action) that constitute the grounds of every legal order (Farah 2004). The final outcome is very simple: legal orders must be modified according to new neuroscientific achievements.2 Many neuroscientists share this assumption, but they differ in the way of transforming legal orders. On the one hand, we have the maximalists (or radicals), for whom the insertion of neurosciences into the field of juridical thought will necessarily lead to a revolution of the legal orders (Gazzaniga and Steven 2004). On the other hand, we find the reformists, who regard it as more useful to work gradually, by means of limited but constant slight changes to the existing legal orders, without stressing society (Vincent 2010).
In any case, I think Greene and Cohen (2004, 1780) are the most representative authors of the radical position, because they justify their support for the insertion of neuroscientific technologies into legal orders precisely with the elimination of free will and, along with it, of the concept of responsibility as outlined in the theories of punishment, in particular the retributivist one (Pardo and Patterson 2013). So
2
http://neuroethics.upenn.edu/index.php/penn-neuroethics-briefing/responsibility-a-brain-function. Accessed 22 June 2018.
far though, the enthusiasts have not yet explicitly set out the standards according to which a subject should be treated, preferring practical solutions (Kolber 2014). On the other hand, Stephen Morse certainly belongs to the ranks of the calm reformists, who recognise some usefulness in the introduction of neuroscience without praising it uncritically. Morse does not believe in the capability of the new achievements to revolutionise the legal order. As a matter of fact, he thinks the rapid evolution of neurosciences cannot modify legal systems in the short term, or at least not in a direct, heavy way. Morse says that not punishing someone for a bad action, on the exclusive ground that “his/her brain did it,” is a mistake. In fact, every act of each one of us is somehow produced by the brain; therefore, unless some altered state is identified, individual responsibility cannot be questioned. This is because law is based on a commonsense psychology that cannot be overturned by neuroscientific outcomes. This is particularly valid for criminal sanction, which “presupposes a ‘folk-psychological’ view of the person and behaviour” (Morse 2013, 107). This psychological theory explains behaviour as caused in part by mental states such as desires, beliefs, intentions and plans, and in part by biological, psychological and sociological variables. To sum up, folk-psychology considers mental states fundamental to a full causal explanation and understanding of human action. Lawyers, philosophers, and scientists argue about the definitions of mental states and theories of action, but that does not undermine the general claim that mental states are fundamental. “Folk-psychology presupposes only that human action will at least be rationalised by mental state explanations or will be responsive to reasons—including incentives under the right conditions” (Morse 2011a, 598–599).
This is why, in Morse’s view, neurosciences will not modify law in a revolutionary way, as the latter is founded on premises tied to common sense and not to techno-scientific explanations.
7.4 Neurolaw The new neurolaw, that is, the transformation of society into a neuroscientifically based one, is composed of two elements. The first tries to establish new legal rules on the basis of the achievements of neurosciences; the second is the possibility of intervening directly (in a sanctioning/therapeutic way) on someone’s brain (Opderbeck 2013). As for the first element, we can say that it is accomplished through the replacement of traditional sources of law with new neuroscientific standards. This is because, for neuroscientists, law, understood as a traditional technique of social control, has failed, since it was not able to guarantee the maintenance of order in society. On this topic, Brian Tamanaha (2006, 5, 2007) wrote: “Under a scientific view, law would come instead to be seen as the source of social order—to produce social order is the function or purpose or end of law. In turn, this new perspective, over time, would open up questions about the efficiency and utility of law in carrying out its functions. The
subtle but fundamental difference can be put thus: law is order, versus law maintains order.” As for the second element of neurolaw, David Eagleman states that criminal subjects should be treated as individuals with severe diseases or cognitive deficits. The rehabilitative, and not punitive, methods proposed by Eagleman are based on non-invasive forms of behaviour modification, such as neuroimaging techniques: a sort of biofeedback that allows individuals to observe their brain images and learn to better control their behaviour. He wrote: “To this end, we have begun leveraging real-time feedback to participants during brain imaging. This technique allows them to see when their brain is craving, and to learn how to control (in this case to lower) that neural activity by strengthening other, long term decision-making mechanism” (Eagleman and Isgur Flores 2012, 165). Nevertheless, this new “neurolaw” is based on neuro-standards that still reveal great confusion; put to the test, these postulates turn out to be mere subjective options (although interesting as scientific hypotheses). This is why the definition of shared standards represents one of the most delicate themes of neurolaw. In other words, we do not have a shared definition of what is ‘criminal’ and what is not. For example: is it a question of a will to break moral-legal rules, of biological determination, or of a kind of sickness? Perhaps we are falling into a new normal/not-normal alternative, and a very problematic one.
7.5 Neurorights The most widespread framework in neurolaw is very simple: human behaviour has a biological basis, and since this basis can be modified, it is possible to control the biological matrices of socially unacceptable behaviour (Raine and Yang 2006; Nagera 2013). The way is very simple: it is sufficient to treat a social abnormality as a biological (brain) abnormality, conceptualizing both in a single category: that of mental disorder/disease. Since the law (especially criminal law) does not appear to be the most effective instrument to solve the problem, neuroradical societies have to make way for other, more effective techniques (of neurolaw). Through these new ‘therapies’ (surgery, medicines, grafts, etc.) it is possible to obtain a variety of modifications of mental states and of the deep structure of the brain in order to control behaviour, eliminating unlawful actions better than jail or other sanctions do. In other words, it seems that the path towards a more neuro-standardised society will be very simple and soft (Singh et al. 2013): “deviance” will become a simple health problem (Markowitsch and Seifer 2007), defined by neurostandards (neurorules) and treated with neurotechniques. So, I think it is necessary to stress how this scientific framework can take a dangerous turn for personal liberty, because of the absence of consent. As is known, in many legal orders there are forms of intervention carried out entirely regardless of the acceptance of those who undergo them; for example, involuntary healthcare treatments. I believe that, over the next few years, this blind spot could represent a
picklock to test new forms of normalisation inspired by neuro-civilization (Meynen 2013) or, better, neuronormalization. In 2012 Hank Greely, one of the most enthusiastic neuro-civilizers, tried to open the debate on the use of involuntary treatments precisely for the modification/elimination of antisocial behaviour, as well as for the treatment of diseases and of psychic distress (Greely 2012). He asserts the daring thesis that neurosciences will provide the ability to modify undesired behaviour by changing the neurological basis of the agents involved. The reasoning is very simple: if we agree that we are willing to intervene directly on the brain of a subject in case of severe disease or disablement, there is no reason to disagree on treating the brain-related causes of socially undesired behaviour. Greely proposes safety and effectiveness as the standards by which to evaluate these kinds of neuro-treatment. He asserts that traditional forms of direct brain intervention (for example, lobotomy) are unduly simplistic solutions for a very complex problem, since they are neither safe nor effective (Greely 2008). Therefore, it is necessary to test new forms of safe and effective intervention, in order to eradicate socially unacceptable behaviour through behaviour control (Greely 2012), provided that the interventions are safe, effective and not improper. He thinks that if we can serenely send someone to jail, unsuccessfully attempting to modify their behaviour, there is no reason for the scandal caused by a certainly more effective modification of their brain. The problem involves individual freedom, that is, the “resistibility” of traditional means, which leave residual autonomy to the subject; an autonomy that the new means of direct intervention would not leave. In this regard, Greely asserts the need to identify a space of unattainable “cognitive liberty”: a sort of privacy threshold below which one should not go.
He wrote: “A ‘resistible’ treatment, such as a prison rehabilitation effort, still seems to leave some freedom for choice; the more effective (and irresistible) the treatment, the greater the invasion of liberty. I feel that there should be some protected space of cognitive liberty, but, given that all interventions affect the brain, it is hard to see why mandatory brain interventions should be impermissible only if they are direct” (Greely 2012, 164). But, even granting this sort of unattainability, it is difficult to assert that direct brain intervention could not become a commonly used practice to modify behaviour that is socially unfavourable or not accepted by the community or, conversely, to facilitate accepted behaviour. These considerations open up an interesting field of detailed study, of which we can trace only a first outline here. In effect, the concept of cognitive liberty (or Right to Mental Self-Determination) has appeared only very recently in the international debate (Bublitz 2013). Linked to the concept of sovereignty over one’s “cognitive heritage,” cognitive liberty would consist of a right similar to the inviolability of the brain from the state or from third parties. Nevertheless, it includes the freedom to agree to direct interventions appropriate to enhance one’s cognitive structure3 (Sententia 2004). 3
Wrye Sententia and Richard Glen Boire are the founders of the Centre for Cognitive Liberty and Ethics (CCLE).
7.6 Cognitive Liberty as a Neuroright The term “Cognitive Liberty” is often used to expand the traditional notion of “liberty of thought.” Nowadays, however, some scholars are using that term in order to challenge the legal systems of democratic societies to integrate such a right into their constitutions (Sententia 2013). This latter definition, indeed, expresses three conceptual points of interest:
• Privacy: the content of our thoughts must remain private until one decides to share them.
• Autonomy: every human being must be able to think independently and to use the full spectrum of their mental faculties.
• Choice: the abilities of the human mind should not be limited.
In any case, however, it should be noticed that presenting the possibility of a brain intervention (permanent or not) as an alternative to imprisonment involves an implicit coercion of the individual’s will (Farah 2004). By contrast, a positive formulation of Cognitive Liberty argues that existing neuro-technologies should be widely available to anyone who wants them. The main cases on this theme concern the free personal use of psychoactive substances and cognitive devices (such as transcranial direct current stimulators or neurofeedback equipment) (Maslen et al. 2014), which may lead to cognitive enhancement (Bostrom and Sandberg 2009, 311), even though the concept of enhancement may be related both to a hypothetical individual level (such as, for instance, the increase of one’s own memory) and to a hypothetical general level (such as, for instance, drug treatment for academic exams).
This new position states as follows: unless a person directly damages others, governments should not prohibit cognitive enhancement or the realization of any other mental state.4 However, if, on the one hand, the use of such “treatments” may be considered ethically permissible by society, on the other hand, the limited evidence regarding their efficacy and the potential long-term safety problems might suggest being careful with their use. This dichotomy is also the basic brick on which the debate between transhumanists and bio-conservatives is built. In fact, while the former aim to “create the opportunity to live much longer and healthier lives, to enhance our memory and other intellectual faculties, to refine our emotional experiences and increase our subjective sense of well-being, and generally to achieve a greater degree of control over our own lives” (Bostrom 2003, 493), the latter argue that the use of cognitive enhancement might have deep and unpredictable consequences for society, because it could allow people to create cognitive structures of a type that does not occur within the range of normal human experience (Lynch et al. 2011). This is precisely the point: today a shared concept of “normality” has still not been elaborated. In other words, what is neuro-normality? Indeed, as I wrote in another 4
Cf. Center for Cognitive Liberty & Ethics (CCLE), http://www.cognitiveliberty.org. Accessed 04/23/2021.
article, “In the neuro-scientific context, there are at least two formulae referring to normality: the statistical model, based on the observation of uniformity of behaviour, and the socio-biological, or evolutionary, model” (Sommaggio 2016). Nevertheless, both conceptions may be criticised. The first on the ground that empirical observation suffers from the statistical syndrome of the bell curve: in a standardised data distribution, for each genius there can be an idiot, with the resulting defeat of any definition of normality. The second may be criticized on the ground that it leads to a blind alley, where we are unable to highlight the reasons why certain behaviour amounts to a “bad” or “good” adaptation to the social environment. Obviously, there are many arguments in favour of seeing the use of cognitive potentials legitimised by the right to autonomously determine our own identity and conscience. However, even if we assume a libertarian point of view, a common criticism of cognitive enhancement arises. Indeed, the better off will have access to cognitive enhancement while the poor will not, thus resulting in wider disparities in society, since it represents a potentially huge market, not only for drug companies but also for the physicians who might enter this lucrative market, especially that of cosmetic neurology (Giordano 2010; Larriviere et al. 2010). Moreover, one may wonder whether the availability of enhancers might not create professional duties for individuals in high-risk professions (such as surgeons or pilots) to utilize them, even if a reasonable doubt about their efficacy and their possible negative effects persists (Maslen et al. 2014).
7.7 Neuro Human Rights As we have seen, neuro-technologies have the potential to impact and redefine legal systems, even though, up to now, international human rights law makes no reference to neuroscience. And this is actually a problem (Sommaggio and Mazzocca 2020). Taking a wider view, we can say that neuro-technologies have the potential to redefine our conception of global society. This is because they are able to affect every human person; in other words, they are able to modify the inner cognitive structure of every human being, and this is an international question of rights; indeed, a question of human rights. I am not interested here in the discussion about the foundation of human rights; in this paper I will use Beitz’s definition. He considered human rights as “requirements whose object is to protect urgent individual interests against predictable dangers (‘standard threats’) to which they are vulnerable under typical circumstances of life in a modern world order composed of states” (Beitz 2011, 109). I think Cognitive Liberty has all the features required by Beitz. This is because, in my opinion, Cognitive Liberty can be thought of as a requirement to protect the mind’s self-determination against the intervention of other parties (or of the state), and this interest forms a common core shared across the whole world.
The same considerations may be used to avoid so-called “rights inflation,” which is the traditional objection against the recognition or creation of new human rights. I use a justificatory test for these new kinds of rights, to check whether they are properly human rights. I think that no one could deny that Cognitive Liberty not only deals with a very important good but also responds to a common and serious threat to that good. On the other hand, no one is able to claim that it imposes undue burdens on practice or to deny its feasibility in most countries. This may be considered the Nickel test (Nickel 2014), and Cognitive Liberty has the features it requires. For this reason, I think Cognitive Liberty is able to successfully overcome the problem of human rights inflation. Similarly, in the examination led by Ienca and Andorno, the focus is the refusal of the coercive use of neuro-technologies and the development of the legal category of Cognitive Liberty, which must be supported by the reconceptualization of existing human rights or the introduction of new human (neuro) rights:
– the right to cognitive liberty;
– the right to mental privacy;
– the right to mental integrity;
– the right to psychological continuity.
They argued that: “For the purposes of our analysis, in this article we will focus exclusively on the negative formulation of the right to cognitive liberty, namely as the right to refuse coercive uses of neuro-technology. In addition, while we welcome the introduction of the right to cognitive liberty, we argue that this notion is not alone sufficient to cover the entire spectrum of ethical and legal implications associated with neuro-technology. Rather, the establishment of cognitive liberty as a human right should be coordinated with a simultaneous reconceptualization of existing rights or even the creation of other new neuro-specific rights. This is the right to mental privacy, the right to mental integrity and the right to psychological continuity” (Ienca and Andorno 2017, 11). With regard to the first point, the question which arises is whether current standards of privacy protection cover the information contained in, or generated by, our minds.5 Another problem relates to attacks on the brain by criminal groups, which can directly manipulate mental capabilities, and thus mental integrity, through the use of neurological devices, much as computer hackers do. As everyone knows, physical and psychological integrity is currently safeguarded by Article 3 of the Charter of Fundamental Rights of the European Union, which emphasises the 5
A possible protection is provided by the European Convention on Human Rights in Article 8, which recognizes the right to respect for family life, domicile and correspondence; paragraph 2 states: “There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.”
right in medicine and biology.6 Mental integrity should both ensure that patients with mental health issues can easily access psychiatric treatment and support, and protect the mental dimension from possible harm by others. This reconsideration of mental integrity should lead to specific regulatory protection from possible neurochemical interventions designed to irreversibly modify individual personality, with direct cognitive impairment. The right to psychological continuity would be a specification, of a neural nature, of the right to personal identity developed by the European Court of Human Rights on the basis of Article 8 of the European Convention on Human Rights and recognized by the Universal Declaration of Human Rights, where the right to personal fulfilment and the full development of personality is set out in Arts. 22 and 29. In any case, the utmost attention and public debate are imperative before authorising intentional intrusions into the personal sphere (Ienca and Andorno 2017). Therefore, although in their opinion Cognitive Liberty is also a prerequisite of all the neuro-focused rights, we think that, following their reasoning, the better approach is to abandon the idea of introducing a new neuro-orientated right into the current declarations of Human Rights and to focus instead on a totally new Declaration of Human Neuro-rights. I think that, analogously with the concept of the Human Genome, we could claim a Universal Declaration on Neuro-Rights. This is because the adaptive ability that human rights law has shown in responding to the challenges posed by genetic technology suggests that it may be a useful tool to anticipate how this issue will evolve in the next few years. The path may be like a stairway of steps, analogous with the course taken on genetic issues, which achieved international protection within a few years.
In 1997, the Universal Declaration on the Human Genome and Human Rights was adopted to pursue genetic privacy (against uses incompatible with human rights) and to protect the human genome. These principles were developed in the International Declaration on Human Genetic Data. The close connection between bioethics issues and human rights was further fixed in the Universal Declaration on Bioethics and Human Rights. Therefore, contrary to what is written by Bublitz and by Ienca and Andorno, we think that it is not only preferable and just, but also easier, to solve the problems related to the concept of Cognitive Liberty through a new Declaration of Human Neuro-Rights, following the path that has already been traced with regard to the Human Genome.
6
However, it is necessary to recognize that the rights of the Charter apply only to the institutions, agencies and bodies of the Union, respecting the principle of subsidiarity, as well as to Member States in the implementation of Union law, as stated in art. 51.
7.8 Conclusion The authors I have examined share a neuroscientific project: a social and legal enhancement. A sort of neurocivilization: the substitution of the sources of law with new neuroscientific standards and the use of direct brain intervention in order to eliminate unlawful behaviour. A neuroscientifically based social organization promises a harmonious future for an “improved” society through the stigmatisation of unacceptable (non-normal) behaviours. This is the point: the neurosciences promise to solve social problems through direct and modifying interventions, where the traditional humanities have failed (Bowart 1978; Taylor 2004). Nevertheless, they are still not able (or not willing) to provide a common social model to aspire to, in order to define the criteria of normal/abnormal behaviour. They generally mention only an undefined undesirability which, however, leaves space for questionable, if not risky, subjective solutions. As I have shown, neuroscientists are leaving the laboratories and participating in debates about the future of society (and of law), with the purpose of providing an apparently “neutral” point of view, while aiming nevertheless at a big transformation. This transformation can be traumatic, as radical neuro-enthusiasts assert, or diluted over time and less invasive, as tepid neuro-reformists assert. According to the latter, technologies and neuroscientific studies will simply generate a progressive improvement of society. In this new context, neurolaw will continue to be considered a control technique, reduced to an instrument for social evolution in light of scientific standards rather than ethical values. The state of infirmity is thus identified with that of social dangerousness, and both are equated, in a single concept, with “mental disorder”: behaviour that can find no refuge in a neuroscientifically based society, because it is an evident symptom of neuronal barbarity (Morse 2011b).
Therefore, it is not a matter of saying yes or no to neuro-civilization, but of identifying the fields in which it would be accomplished without due respect for freedom, or in which direct brain intervention would be forced through in order to eliminate neuro-deviance. We must not forget that the figure of the deviant has an important role: it also represents the critical opposition to the social order that forces society to reflect on itself. This was arguably the task of that most famous of deviants, Socrates. Even in a neuroscientifically based society, I believe, it is necessary to find a stage for this actor. In conclusion, I showed the pivotal role that Cognitive Liberty plays in this new neuro-oriented society. As a first step, I described the importance and the features of the concept of Cognitive Liberty, understood as a necessary condition for all other liberties, since it is their neuro-cognitive substrate. As a second step, I reported how other proponents of Cognitive Liberty suggest considering it as a fundamental human right as well as a central legal principle guiding the regulation of neurotechnologies. In this regard, we should remember, as Bublitz (2013) argued, how "hard it is to conceive of any conception of a legal subject in which the mind and
mental capacities (e.g., acting from reasons, deliberation) are not among its necessary constitutive conditions." Subsequently, as a third step, I argued that Cognitive Liberty has all the features needed to make it a key concept from which new human rights are able to emerge: it cannot simply be reduced to existing rights, but may be considered the basis of all liberties, internal and external. Indeed, since Cognitive Life, in its various forms and degrees, is inherent to all human beings, cognitive liberty is consistent with a definition of human rights as inalienable rights "to which a person is inherently entitled simply because she or he is a human being" (Sepulveda et al. 2004), regardless of nation, location, language, religion, ethnic origin or any other status. As a fourth step, I showed how the integration of Cognitive Liberty into the human rights framework would enable the protection of constitutive features of human beings that are not entirely protected by existing rights. In this paper, my proposal was to consider these steps as parts of a metaphorical stairway to national and international protection of the inner sphere of every human being. In this sense, Cognitive Liberty may be a key concept for a new kind of "habeas corpus": a recourse in law through which a person can report an unlawful intervention into their inner world. It is a new "habeas mentem" that would mean "my mind is free": free from the interventions of others, and free to change as I choose, as I think fit. To sum up, I call for the legal recognition of neuro-cognitive issues, in both a defensive and a proactive sense. I am not concerned with what form these neuro-rights will take; I am interested in unearthing this problem and in putting Cognitive Liberty at the centre of this conceptual turning point of our future international society.
References

Beitz CR (2011) The idea of human rights. Oxford University Press, Oxford
Binder MD et al (2009) Encyclopedia of neuroscience. Springer, Dordrecht
Bostrom N (2003) Human genetic enhancements: a transhumanist perspective. J Value Inq 37(4):493–506
Bostrom N, Sandberg A (2009) Cognitive enhancement: methods, ethics, regulatory challenges. Springer, London
Bowart W (1978) Operation mind control. Collins Sons & Co., Ltd, Glasgow
Bublitz JC (2013) My mind is mine!? Cognitive liberty as a legal concept. In: Hildt E, Franke AG (eds) Cognitive enhancement. Springer, Dordrecht, pp 233–264
Cacioppo JT (2002) Foundations in social neuroscience. MIT Press, Cambridge
Eagleman DM, Isgur Flores S (2012) Defining a neurocompatibility index for the criminal justice system: a framework to align social policy with modern brain science. In: Muller S et al (eds) The law of the future and the future of the law, vol II. Torkel Opsahl Academic EPublisher, The Hague
Farah M (2004) Emerging ethical issues in neuroscience. Nat Neurosci 5:1123–1130
Franks DD (2010) Neurosociology: the nexus between neuroscience and social psychology. Springer, Dordrecht
Gazzaniga MS, Steven MS (2004) Free will in the twenty-first century. In: Garland B (ed) Neuroscience and the law: brain, mind and the scales of justice. Dana, New York
Giordano JJ (2010) Neuroethical issues in neurogenetic and neuro-transplantation technology: the need for pragmatism and preparedness in practice and policy. Stud Ethics Law Technol 4(3), Article 4
Greely HT (2008) Neuroscience and criminal justice: not responsibility but treatment. Univ Kansas City Law Rev 56:1103–1138
Greely HT (2012) Direct brain interventions to "treat" disfavoured human behaviours: ethical and social issues. Clin Pharmacol Ther 91:163–165
Greene J, Cohen J (2004) For the law, neuroscience changes nothing and everything. Philos Trans R Soc Lond B Biol Sci 359:1775–1785
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13:1–27
Jones OD et al (2013) Neuroscientists in court. Nat Rev Neurosci 14:730–736
Jotterand F, Giordano J (2011) Transcranial magnetic stimulation, deep brain stimulation and personal identity: ethical questions, and neuroethical approaches for medical practice. Int Rev Psychiatry 23(5):476–485
Kandel ER (1981) Principles of neural science. Elsevier, Amsterdam
Kolber A (2014) Will there be a neurolaw revolution? Indiana Law J 89:807–845
Larriviere D et al (2010) Neuroenhancement: wisdom of the masses or false phronesis? Clin Pharmacol Ther 88(4):459–461
Lynch G et al (2011) The likelihood of cognitive enhancement. Pharmacol Biochem Behav 99(2):116–129
Markowitsch HJ, Siefer W (2007) Tatort Gehirn: auf der Suche nach dem Ursprung des Verbrechens. Campus, Frankfurt am Main
Maslen H et al (2014) The regulation of cognitive enhancement devices: extending the medical model. J Law Biosci 1(1):88–93
Meynen G (2013) A neurolaw perspective on psychiatric assessments of criminal responsibility: decision-making, mental disorder, and the brain. Int J Law Psychiatry 36:93–99
Morse SJ (2011a) The status of neurolaw: a plea for current modesty and future cautious optimism. J Psychiatry Law 39:595–626
Morse SJ (2011b) Mental disorder and criminal law. J Crim Law Criminol 101:885–968
Morse SJ (2013) Compatibilist criminal law. In: Nadelhoffer T (ed) The future of punishment. Oxford University Press, Oxford
Nagera H (2013) Reflections on psychoanalysis and neuroscience: normality and pathology in development, brain stimulation, programming and maturation. Neuropsychoanalysis 3:179–191
Nickel J (2014) Human rights. Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/spr2017/entries/rights-human. Accessed 22 June 2018
O'Connor C et al (2012) Neuroscience in the public sphere. Neuron 74:220–226
Opderbeck DW (2013) The problem with neurolaw. Saint Louis Univ Law J 58:497–540
Pardo M, Patterson D (2013) Neuroscience, normativity and retributivism. In: Nadelhoffer T (ed) The future of punishment. Oxford University Press, Oxford
Raine A, Yang Y (2006) Neural foundations to moral reasoning and antisocial behaviour. Soc Cogn Affect Neurosci 1:203–213
Roskies A (2002) Neuroethics for the new millennium. Neuron 35:21–23
Sententia W (2004) Neuroethical considerations: cognitive liberty and converging technologies. Ann N Y Acad Sci 1013:223
Sententia W (2013) Freedom by design: transhumanist values and cognitive liberty. In: More M, Vita-More N (eds) The transhumanist reader: classical and contemporary essays on the science, technology and philosophy of the human future. Wiley, Hoboken, pp 356–357
Sepulveda M et al (2004) Human rights reference handbook. University for Peace, Costa Rica
Singh I, Sinnott-Armstrong WP, Savulescu J (2013) Bioprediction, biomarkers and bad behavior: scientific, legal and ethical challenges. Oxford University Press, Oxford
Sommaggio P (2014) Neurocivilizzazione. Ethics Politics XVI(2):130–168
Sommaggio P (2016) Neuro-civilization: a new form of social enhancement. In: ATINER'S conference paper series, SOS2016-2106, pp 3–18
Sommaggio P et al (2017) Cognitive liberty: a first step towards a human neuro-rights declaration. BioLaw J 5:27–45
Sommaggio P, Mazzocca M (2020) Cognitive liberty and human rights. In: D'Aloia A, Errigo MC (eds) Neuroscience and law. Springer, London
Tamanaha BZ (2006) Law as a means to an end. Cambridge University Press, Cambridge
Tamanaha BZ (2007) How an instrumental view of law corrodes the rule of law. De Paul Law Rev 56:1–52
Taylor K (2004) Brainwashing: the science of thought control. Oxford University Press, Oxford
Vincent N (2010) On the relevance of neuroscience to criminal responsibility. Crim Law Philos 4:77–98
Vincent NA (2013) Neuroscience and legal responsibility. Oxford University Press, Oxford
Part II
Neurotechnologies and Ethics: Main Problems
Chapter 8
A Conceptual Approach to the Right to Mental Integrity

Elisabeth Hildt
Abstract In this chapter, I reflect on the right to mental integrity from an ethics perspective. Against the background of some conceptual considerations, I discuss the chances and limitations of a right to mental integrity. The right to mental integrity stresses a person's right to control their brain states. It is often conceived primarily as a negative right to protect against unauthorized brain interventions. While this certainly emphasizes a very important aspect, I argue that the right to mental integrity would benefit considerably from reflections on what is specific about it, compared to, for example, the right to bodily integrity or the notion of informed consent. For this reason, after introducing and discussing the right to mental integrity, the notion of "mental integrity," and the concept of informed consent, I sketch the implications of neurotechnologies for privacy, agency, individual characteristics, identity, authenticity, and autonomy. I then highlight some implications of the right to mental integrity in the context of neurotechnologies.

Keywords Mental integrity · Neurotechnologies · Informed consent · Autonomy · Privacy
8.1 Introduction

Neurotechnologies such as deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), transcranial direct current stimulation (tDCS), or brain-computer interfaces (BCIs) offer a multitude of applications in clinical and non-clinical contexts (Chaudhary et al. 2016; Espay et al. 2016; Garnaat et al. 2018; Roelfsema et al. 2018; Cagnan et al. 2019; Cinel et al. 2019; Chase et al. 2020; McFarland 2020). For example, DBS has been used for decades as a treatment to alleviate motor symptoms in patients with Parkinson's disease or essential tremor, and brain-computer interfaces allow patients with motor impairments to control devices such as computer cursors or to navigate prostheses.

E. Hildt (B) Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_8
As they directly interact with the human brain, neurotechnologies offer a broad spectrum of treatment options, but they also come with risks. In order to protect users from potential harm in the context of neurotechnologies, several authors have suggested a reconceptualization of existing rights, or have identified the need for new brain-related rights, or neurorights. Rights suggested include a right to cognitive liberty, a right to mental privacy, a right to mental integrity, and a right to psychological continuity (Ienca and Andorno 2017; Yuste et al. 2017; Lavazza 2018). In this chapter, I reflect on the right to mental integrity from an ethics perspective. Against the background of some conceptual considerations, I discuss the chances and limitations of a right to mental integrity. The right to mental integrity stresses a person's right to control their brain states. It is often conceived primarily as a negative right to protect against unauthorized brain interventions. While this certainly emphasizes a very important aspect, I argue that the right to mental integrity would benefit considerably from reflections on what is specific about it, compared to, for example, the right to bodily integrity or the notion of informed consent. For this reason, after introducing and discussing the right to mental integrity, the notion of "mental integrity", and the concept of informed consent, I sketch the implications of neurotechnologies for privacy, agency, individual characteristics, identity, authenticity, and autonomy. I then highlight some implications of the right to mental integrity in the context of neurotechnologies.
8.2 The Right to Mental Integrity

Various authors have stressed the need for a right to mental integrity, and a number of definitions of this right have been given. Ienca and Andorno (2017) point out that, in view of recent neurotechnological developments, it is necessary to reconceptualize Article 3 of the EU's Charter of Fundamental Rights, which stresses everyone's "right to respect for his or her physical and mental integrity" and understands mental integrity as a right to mental health. Accordingly, they write: "Mental integrity in this broader sense should not only guarantee the right of individuals with mental conditions to access mental health schemes and receive psychiatric treatment or support wherever needed. In addition to that, it should also guarantee the right of all individuals to protect their mental dimension from potential harm. This reconceptualized right should provide a specific normative protection from potential neurotechnology-enabled interventions involving the unauthorized alteration of a person's neural computation and potentially resulting in direct harm to the victim. For an action X, to qualify as a threat to mental integrity, it has to: (i) involve the direct access to and manipulation of neural signaling (ii) be unauthorized—i.e., must occur in absence of the informed consent of the signal generator, (iii) result in physical and/or psychological harm" (Ienca and Andorno 2017, 18). According to this definition, the focus of the right to mental integrity is on protecting individuals against unauthorized and potentially harmful interventions
in their brain processes. The central aim is to prevent physical and/or psychological harm. It is a negative right that protects a person against interventions he or she has not consented to. Other authors have likewise stressed the right of individuals to be protected against unauthorized interventions. Referring to the rights to mental and bodily integrity, Elizabeth Shaw (2018) argues against the mandatory use of neurointerventions in criminal sentencing. Similarly, in the context of administering neurointerventions to an offender as an alternative to incarceration, Birks and Buyx (2018, 136) describe an "interest in mental integrity" as "a person's interest in not having at least some of his mental states intentionally altered by others in certain ways." Thomas Douglas and Lisa Forsberg use a definition of the "right to mental integrity" that does not explicitly mention protection from potential harm. They understand a right to mental integrity "as a right against (certain kinds of) nonconsensual interference with the mind" (Douglas and Forsberg 2021, 182). In these latter approaches, the focus is on a person's right not to have their mental states intentionally altered by others. Andrea Lavazza gives a broader definition, central to which is the mastery of one's own mental states and brain data. He defines mental integrity in the following way (Lavazza 2018, 4): "Mental Integrity is the individual's mastery of his mental states and his brain data so that, without his consent, no one can read, spread, or alter such states and data in order to condition the individual in any way." This definition centers on the individual's mastery of their brain states and brain data, insofar as it encompasses not only interventions that aim at altering brain states but also issues related to privacy, data protection, and data sharing. In addition, the focus is not on preventing harm but on being in control of one's mental states and brain data.
This includes a positive right to modify one's brain states. In this respect, the broad conception of the right to mental integrity suggested by Lavazza bears similarities to the concept of "cognitive liberty." Wrye Sententia (2004, 223) describes cognitive liberty as "every person's fundamental right to think independently, to use the full spectrum of his or her mind, and to have autonomy over his or her brain chemistry." While the notion of cognitive liberty is clearly both about being free to modify one's brain states and about having the right to fight off unauthorized brain interventions, in the interdisciplinary debate around brain interventions and enhancement the concept of cognitive liberty is often employed to stress the right of individuals to modify their brain states, whereas the right to mental integrity usually serves to stress the right to protection from unauthorized interventions. Several authors have given examples of breaches of the right to mental integrity (Ienca and Andorno 2017; Birks and Buyx 2018; Lavazza 2018). Accordingly, threats to mental integrity can result from criminal activities that interfere with neural computation in the brains of neurodevice users (malicious brainhacking), from certain military or anti-terrorism uses of neurotechnology devices, or from the administration of neurointerventions to criminal offenders.
8.3 The Right to Mental Integrity and Informed Consent

The right to mental integrity is a strong concept that stresses a person's right to control their brain states. As such, it is closely related to the concept of informed consent. Traditionally used in the context of medicine, the concept of informed consent requires the patient to formally agree to undergo a medical intervention after having been thoroughly informed of its benefits and risks. Central aspects of informed consent in medical contexts and in research involving human subjects include (Faden and Beauchamp 1986; Mason and O'Neill 2017; Hendriks et al. 2019):
– consent is given voluntarily, without any form of external pressure being exerted;
– information is provided in a comprehensive and transparent way, so that the patient or research participant is able to understand the relevant aspects and the benefits and risks of the intervention;
– the individual patient or research participant is capable of decision-making in the respective context;
– any form of intervention is only legitimate after free and informed consent has been obtained, i.e., informed consent has a gatekeeper function;
– patients and research participants are free to quit at any time, without having to fear negative consequences;
– data and specimens collected are to be used only for the purpose or purposes specified; any uses in additional contexts require consent;
– protections are in place for individuals not competent to consent.
All of these aspects are also of crucial importance to the right to mental integrity. To begin with, any intervention performed without free and informed consent having been obtained, or any intervention for which the person involved has retracted their consent, is an unauthorized intervention.
Neurointerventions exerted on a person without their knowledge are clearly intrusions on the right to mental integrity, as are interventions on individuals who are not competent to consent or who are in a situation that does not allow free decision-making. The right to mental integrity presupposes that everything that matters about an intended intervention is revealed and explained. Transparency is important. This includes mentioning all aspects that are not yet clearly known, as well as uncertainties pertaining to the respective context. While the concept of informed consent is a clear standard in medicine, the situation can be much more blurred, or even opaque, outside the medical framework. Neurohacking and other forms of misuse are interferences by malicious intruders characterized by the absence of consent. Interventions on soldiers, offenders, or prison inmates raise complex and persistent questions concerning the extent to which voluntary consent is possible under these conditions. The right to mental integrity certainly is of enormous importance in that it stresses the individual person's decision-making authority over any form of intervention into their brain. However, beyond the procedural decision-making aspect that is clearly in line with the concept of informed consent, it lacks content in that it does not provide
any criteria on the quality, reasonableness, or desirability of brain interventions. Not every intervention a person has consented to is unproblematic, ethically legitimate, or desirable. In order to fill this void, the notion of mental integrity, and concepts of relevance in the context of mental integrity, will be discussed in the following section.
8.4 Mental Integrity

While several rationales for a right to mental integrity have been suggested (Douglas and Forsberg 2021), what I am interested in is not so much the question of a right to mental integrity but the question of what it is that matters in the context of mental integrity. In the context of the right to mental integrity, the concept of "mental integrity" is used to stress a person's authority to control their brain activity and to fight off unwanted brain interventions. A closer look at the words may help to clarify the notion of mental integrity. Beyond the definition of integrity as "the quality of being honest and having strong moral principles that you refuse to change," i.e., integrity in the moral sense, which is not directly relevant here, the online Cambridge Dictionary defines integrity as "the quality of being whole and complete" (https://dictionary.cambridge.org/us/dictionary/english/integrity). Among the definitions given by the Merriam-Webster online dictionary for "integrity" are (https://www.merriam-webster.com/dictionary/integrity): "an unimpaired condition: soundness" and "the quality or state of being complete or undivided: completeness." Central to these definitions of integrity is the state of being whole, unimpaired, or undivided. Mental integrity, then, can be understood as a mental state of being whole, unimpaired, or undivided; being whole, unimpaired, or undivided with regard to one's brain and brain-related factors; or being mentally whole, unimpaired, or undivided. Words usually used to describe states like these include being herself, being her genuine self, being unimpaired compared to how he or she usually is, being how he or she wants to be, having a uniform personality, or having a consistent and coherent life. All these characterizations are closely related to concepts like authenticity, identity, individuality, personality, autonomy, and agency. Another central concept is personhood.
Any intervention that endangers personhood, and capabilities central to personhood such as rationality, autonomy, or communication, would clearly be extremely problematic (Müller and Rotter 2017). In this sense, mental integrity can be seen as an umbrella term that covers concepts like privacy, identity, individuality, authenticity, autonomy, rationality, and agency (Fuselli 2020). Before I sketch the role of some of these concepts in the context of neurotechnological interventions, it is worth noting that, in contrast to mental integrity, the notion of brain integrity can be understood to be about structural modifications of the brain.
These could be surgical brain modifications or structural changes following invasive interventions related to implanted devices, immunoreactions, or biocompatibility issues. Structural modifications may or may not go along with mental effects.
8.4.1 Privacy

Brain privacy in the context of neurotechnologies is about a person's ability to master their brain data, i.e., to control the recording, storing, sharing, and use of brain data. Privacy is particularly crucial when personally identifiable information can be revealed, such as brain-related data that provides information about a person's location, activity, individual characteristics, or affective states. With wearable neurotechnological devices connected to the internet and information stored in the cloud, additional privacy-related issues arise (Hernandez 2016; Ienca and Andorno 2017; Ienca et al. 2018). Brain privacy will require effective measures to prevent others from gaining unauthorized access and to safeguard neurosecurity.
8.4.2 Agency

Neurotechnologies can restore or support agency in that they help restore motor functions. For example, DBS in Parkinson's disease patients facilitates movement, increases mobility, and reduces rigidity, and brain-computer interfaces enable persons with severe motor impairments to navigate computer keyboards. However, concerns regarding neurotechnological influence on agency have also been raised. A stimulation device may lead to the impression of no longer being the one in control of one's characteristics or movement capabilities, but rather of being controlled by the device. For example, with DBS, users have reported not being sure about the authorship of their behaviors and feelings (Lipsman and Glannon 2013; Goering et al. 2021). With BCIs, issues related to shared control and shared agency have been discussed (Burwell et al. 2017). All these considerations relate to questions of the extent to which behavioral or emotional effects were brought about by the users themselves or by the technology. Some of these questions may be very difficult, if not impossible, to resolve. In this context, Goering et al. (2021) stress the concept of relational agency. They argue that neurotechnological devices provide agential assistance similar to the assistance provided by caregivers or family, and see users and devices as participating in a kind of co-agency.
8.4.3 Modifications in Individual Characteristics

Neurotechnological interventions like DBS or tDCS have the power to modify individual behavior, mood, or personality. Some of these modifications are intentionally brought about, for example, when neurotechnological procedures are used to treat psychiatric disorders. Others are unintended or unexpected. While with DBS most often only minor side effects have been experienced, in some cases more substantial behavioral effects have been reported, such as increases in impulsivity, aggressiveness, or mania. With BCIs, body schema-related questions may arise, for example, when users navigate a prosthetic arm (Hildt 2006; Burwell et al. 2017; Fuselli 2020; Goering et al. 2021). Neurotechnological interventions that alter individual characteristics and personality may be experienced as alienating and difficult to deal with, not only by the persons undergoing the intervention but also by their families and friends. Questions to address in this context include: What are acceptable effects and side effects of neurointerventions? What types of modifications in individual characteristics are more desirable, or easier to tolerate, than others? Which characteristics are more crucial to an individual's personality than others? When would a modification be considered so dramatic that it would pose a threat? Are there characteristics that people had better not attempt to modify? While it may be easier to adjust to intentional modifications than to unintended side effects, it must be recognized that it is very difficult to know, before undergoing an intervention, what it would be like to exhibit modified behavior, mood, or personality. Technologies that are flexible and easy to modulate have an obvious advantage in this context. Non-invasive neurotechnological interventions that can easily be discontinued and unplugged are clearly less problematic than invasive technologies that are more difficult to remove.
8.4.4 Personal Identity

Changes in personality traits following neurotechnological interventions have sparked an interdisciplinary discussion around personal identity. For example, with DBS in Parkinson's disease patients, unintended consequences have been reported that raise questions concerning the person's identity (Hildt 2006; Jotterand and Giordano 2011; Lipsman and Glannon 2013; Fuselli 2020). Personal identity is a complex philosophical concept that includes questions about what it is that makes a person persist through time. Personal identity-related questions around neurotechnologies concern what is central to a person's identity and whether, in view of certain changes in individual characteristics, a person is still the same as before the neurotechnological intervention. Are the modifications experienced as alienating, or are they in line with a person's "true self" or "authentic self"?
While some authors have stressed the importance of preserving and/or restoring personal identity in neurotechnological interventions (Jotterand and Giordano 2011), others have suggested a relational account of personal identity that considers personal identity as a dynamic interpersonal activity based in narrative (Baylis 2013; Postan 2020). Without doubt, the response to the question of whether a neurotechnological intervention poses a threat to identity depends on the philosophical approach to personal identity. While it is true—and somewhat trivial—that after a neurotechnological intervention, no matter what the consequences might be, life goes on as long as the person physically survives, some modifications certainly are more problematic to cope with than others.
8.4.5 Authenticity and Autonomy

A broad conception of the right to mental integrity that stresses the right to master one's brain states implies that individuals have access to neurotechnological treatments they consider necessary, useful, or desirable. If the right to mental integrity is understood not only as a protective right against unauthorized interventions but also as a right to govern one's brain states, factors of authenticity, in the sense of shaping one's self and being in line with one's true self, gain relevance. From this perspective, neurotechnologies can be seen as empowering people to shape themselves and the course of their lives. Individuals undoubtedly have different views on what sort of neurotechnological intervention they consider desirable, acceptable, or unacceptable. Authenticity-based considerations are about how a person wants to be and whether the modifications brought about by a neurotechnological procedure are in line with their authentic self.
8.5 The Right to Mental Integrity—Some Practical Implications

The right to mental integrity has two directions: the negative right to protect against unauthorized brain interventions and the positive right to seek interventions to shape one's brain states. Central to the right to mental integrity are the right and the capability to prevent unauthorized brain interventions from happening, and to stop unauthorized interventions of any kind in case an intervention has already been initiated. This includes privacy intrusions, data leakage, and unauthorized data sharing. Exercising this right presupposes having knowledge of the respective intervention and of what it implies. With authorized brain interventions, things are different. While a person may have intentionally sought and consented to a neurotechnological procedure, it may
8 A Conceptual Approach to the Right to Mental Integrity
95
turn out to come with negative or unexpected consequences. No matter whether these modifications are perceived as identity threatening or just annoying and not welcome, from the point of view of mental integrity, it will be important for users to be able to control their brain states and to be able to take measures to counteract any unwelcome effects or modifications. A right to mental integrity in the context of neurotechnologies presupposes access to neurotechnology-related services. This includes access to services that serve to adjust the system to the individual person and situation. Mental integrity requires that a person does not have to wait too long to have the system adjusted in case of problems. With non-invasive neurotechnologies, the advantage is that the technology can be easily removed or turned off. With invasive technologies, while turning off the device is possible, removing may be more difficult in that surgery is required, which is never without risks. A right to mental integrity also implies that it is feasible and realistic for persons to discontinue a chosen treatment or to remove a device if they wish so. In the case of invasive neurotechnologies, this includes the availability of low-risk procedures that allow the removal of devices with only minimal health risks.
8.6 Conclusion

So far, most authors have discussed the right to mental integrity primarily in the context of protection against unauthorized brain interventions. Protection against brain intrusions certainly is very important. However, this conception of the right to mental integrity is very narrow in that it primarily stresses individual decision-making, albeit with regard to brain technologies. In order to consider brain-related aspects of the right to mental integrity more thoroughly, there is a clear need to reflect further on mental integrity, a notion that can be understood as an umbrella term covering concepts like privacy, identity, individuality, authenticity, autonomy, and agency. Seen from this conceptual perspective, a right to mental integrity is a right to decide about aspects related to mental integrity in the context of neurotechnologies. As described, this comes with a number of implications, not only with regard to unauthorized brain interventions, but also in the context of deliberately chosen neurotechnological procedures.
E. Hildt
References

Baylis F (2013) "I Am Who I Am": on the perceived threats to personal identity from deep brain stimulation. Neuroethics 6:513–526
Birks D, Buyx A (2018) Punishing intentions and neurointerventions. AJOB Neurosci 9(3):133–143
Burwell S, Sample M, Racine E (2017) Ethical aspects of brain computer interfaces: a scoping review. BMC Med Ethics 18:60
Cagnan H, Denison T, McIntyre C, Brown P (2019) Emerging technologies for improved deep brain stimulation. Nat Biotechnol 37(9):1024–1033
Chase HW, Boudewyn MA, Carter CS, Phillips ML (2020) Transcranial direct current stimulation: a roadmap for research, from mechanism of action to clinical implementation. Mol Psychiatry 25(2):397–407
Chaudhary U, Birbaumer N, Ramos-Murguialday A (2016) Brain-computer interfaces for communication and rehabilitation. Nat Rev Neurol 12(9):513–525
Cinel C, Valeriani D, Poli R (2019) Neurotechnologies for human cognitive augmentation: current state of the art and future prospects. Front Hum Neurosci 13:13
Douglas T, Forsberg L (2021) Three rationales for a legal right to mental integrity. In: Ligthart S et al (eds) Neurolaw. Palgrave Studies in Law, Neuroscience, and Human Behavior, pp 179–201
Espay AJ, Bonato P, Nahab F, Maetzler W, Dean JM, Klucken J, Eskofier BM et al (2016) Technology in Parkinson disease: challenges and opportunities. Mov Disord 31(9):1272–1282
Faden RR, Beauchamp TL (1986) A history and theory of informed consent. Oxford University Press, Oxford
Fuselli S (2020) Mental integrity protection in the neuro-era: legal challenges and philosophical background. BioLaw J 1/2020:413–429
Garnaat SL, Yuan S, Wang H, Philip NS, Carpenter LL (2018) Updates on transcranial magnetic stimulation therapy for major depressive disorder. Psychiatr Clin North Am 41(3):419–431
Goering S, Brown T, Klein E (2021) Neurotechnology ethics and relational agency. Philos Compass 16:e12734
Hendriks S, Grady C, Ramos KM et al (2019) Ethical challenges of risk, informed consent, and posttrial responsibilities in human research with neural devices: a review. JAMA Neurol 76(12):1506–1514
Hernandez A (2016) Brain waves technologies: security in mind? I don't think so. IOActive. https://ioactive.com/brain-waves-technologies-security-in-mind-i-dont-think-so/
Hildt E (2006) Electrodes in the brain: some anthropological and ethical aspects of deep brain stimulation. Int Rev Inf Ethics 5:33–39
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13:5
Ienca M, Haselager P, Emanuel EJ (2018) Brain leaks and consumer neurotechnology. Nat Biotechnol 36(9):805–810
Jotterand F, Giordano J (2011) Transcranial magnetic stimulation, deep brain stimulation and personal identity: ethical questions, and neuroethical approaches for medical practice. Int Rev Psychiatry 23(5):476–485
Lavazza A (2018) Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front Neurosci 12:82
Lipsman N, Glannon W (2013) Brain, mind and machine: what are the implications of deep brain stimulation for perceptions of personal identity, agency and free will? Bioethics 27(9):465–470
Mason NC, O'Neill O (2017) Rethinking informed consent in bioethics. Cambridge University Press, Cambridge
McFarland DJ (2020) Brain-computer interfaces for amyotrophic lateral sclerosis. Muscle Nerve 61(6):702–707
Müller O, Rotter S (2017) Neurotechnology: current developments and ethical issues. Front Syst Neurosci 11:93
Postan E (2020) Narrative devices: neurotechnologies, information, and self-constitution. Neuroethics. Published online: https://doi.org/10.1007/s12152-020-09449-1
Roelfsema R, Denys D, Klink PC (2018) Mind reading and writing: the future of neurotechnology. Trends Cogn Sci 22(7):598–610
Sententia W (2004) Neuroethical considerations: cognitive liberty and converging technologies for improving human cognition. Ann N Y Acad Sci 1013:221–228
Shaw E (2018) Against the mandatory use of neurointerventions in criminal sentencing. In: Birks D, Douglas T (eds) Treatment for crime: philosophical essays on neurointerventions in criminal justice (engaging philosophy). Oxford University Press, Oxford, pp 321–337
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551:159–163
Chapter 9
Mental Integrity, Vulnerability, and Brain Manipulations: A Bioethical Perspective Luca Valera
Abstract

When discussing possible brain manipulations and interventions, the protection of mental integrity is especially relevant at the anthropological level, since human identity may be affected. In this sense, I argue that mental integrity is constituted as a right because it is the condition of possibility for other human dimensions, such as freedom, autonomy, and agency. In this regard, we must protect mental integrity in order to safeguard human intimacy. Nevertheless, since the human being is a situated being, with a strong relationship with his/her environment, protecting the mental integrity of individuals also means protecting their environment. In this regard, a more complex and integrative view of the human being is necessary. One of the dimensions that current brain manipulations and interventions may affect, at the anthropological level, is human vulnerability, which maintains a strong link with our integrity. Indeed, the mitigation of (or the respect for) our vulnerability is a prerequisite for maintaining our integrity (which is linked to personal identity). Vulnerability thus creates ethical concerns for two main reasons: 1. We must protect our vulnerability because we need to preserve our integrity and, therefore, our dignity; and 2. We have to protect human vulnerability because we are its main cause: our technological power is probably the main source of our current vulnerability. In this sense, the concept of vulnerability lies at the intersection between power and duty and, for this reason, may constitute a powerful (bio)ethical indicator for assessing current neurotechnologies and their impact on our lives.

Keywords Mental integrity · Vulnerability · Neurotechnologies · Intimacy · Human enhancement · Bioethics
L. Valera (B) Center for Bioethics, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile e-mail: [email protected] Department of Philosophy, Universidad de Valladolid, Valladolid, Spain © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_9
9.1 Mental Integrity and Brain Manipulation: Some Anthropological Concerns

Recent technological advances for manipulating brain activities, which may also alter personal identity, have resulted in various attempts to protect human identity through both national and international regulations. One such initiative is the 2005 proposal by the "European Group on Ethics in Science and Technology (EGE)," which states "that ICT implants in the human body should not be used to alter personal identity and manipulate mental functions. This view is motivated on the basis of procedures for responsibility ascriptions and, more crucially, the right of having one's dignity respected, which carries with it the right to respect for one's physical and mental integrity" (Lucivero and Tamburrini 2008). It is also important to remember the 2008 EU Charter of Fundamental Rights (Title I, Article 3—Right to integrity of the person), which states that "everyone has the right to respect for his or her physical and mental integrity." More recently, the Morningside Group, which is led by Prof. Yuste and brings together international experts in neuroscience, machine learning, ethics, and engineering, proposed four NeuroRights to protect individuals in the technological era (Yuste et al. 2017). Due to the recent increase in technological interventions in the human brain, legal protection of a person's mental integrity is surely an important issue.

In this section, I will attempt to clarify the concept of mental integrity from a bioethical perspective rather than from a legal one. I will explain the relationship that mental integrity may have with human identity (Sect. 2) and, therefore, the necessity of protecting it (Sect. 3). In this regard, particular attention will be paid to the right to a healthy and safe environment. Then, in Sect. 4, I will propose the concept of vulnerability as an indicator with which to evaluate, ethically and bioethically, any possible interventions in the human brain (as well as in the human body) that aim to improve the cognitive possibilities of the human subject itself (Sect. 5).

However, it is important to first explain why, from my perspective, mental integrity is especially relevant at the anthropological level when discussing brain manipulations. This concept describes the individuality of the person in a novel way by focusing on a specific part of the body (we no longer speak of bodily integrity), and it refers in a peculiar way to the identity of the human subject. On the other hand, the focus on the "mental" rather than the "cerebral" already suggests a certain resistance to reducing human mental properties and states to brain processes, or to identifying them completely with such processes. As Craig (2016, 111) correctly highlights: "Mental properties are significantly different from physical properties of the body and brain because of the content of the mind, the cognitive, affective, and volitional states that are necessary for rational and moral agency that reflect the values and interests that motivate actions. Mental states have a normative aspect that physical states of the body lack. As such, the right to mental integrity does not seek to reduce brain states to mental states. Instead, it accounts for the contents of those states—contents which extend into a broader natural, normative, and social environment, and are closely tied to critical capacities necessary for autonomous human agency." The radical
difference between the mental state (or activity) and brain processes highlights the importance of separating these two facets, which concern different spheres of human existence: the body, on the one hand, and the capacity for agency, on the other. Although there are obviously many relationships between these two spheres (and, given the psychophysical unity of the human being, it cannot be otherwise), it is convenient, and at least logical, to maintain a clear separation for now. In this paper, I will refer only to mental integrity, and not to brain or body integrity, for two main reasons: 1. neurotechnologies focus precisely on a specific part of the human being, and not on the whole body (it would be different if we had to address the problem of human enhancement in general, and not just cognitive enhancement); and 2. brain integrity takes on meaning and importance in light of mental integrity (i.e., we are concerned with brain integrity because we want to protect mental integrity in order to preserve the possibility of human identity and agency). As I have already highlighted, the hypothesis from which this paper starts (and which I cannot detail here) is that the human being is a psychosomatically unitary individual (Valera 2013; Güell and Murillo 2015, 113; Marcos 2019, 2142–2143) and that its mental state cannot be reduced to its brain state, just as the mind cannot be reduced to the brain (or body). Or, better, mental states cannot be reduced to mere epiphenomena of material states, such as chemical reactions in the brain. Ultimately, to recognize this psychosomatic unity of the human being, we must refer to the "living body, which is both a material being and an entity that exhibits signs of inwardness. The living body testifies to the fact that the two fields ultimately are not separate and that they do not exist in separation from each other" (Kampowski 2014, 6).
In this sense, it is important to consider the theory of the living being presented by Hans Jonas (2001) in The Phenomenon of Life, where he introduces "the fact of life" as an example of the "psychophysical unity which the organism exhibits," which "renders the separation illusory" (Jonas 2001, 17–18). He adds: "The actual coincidence of inwardness and outwardness in the body compels the two ways of knowledge to define their relation otherwise than by separate subjects" (Jonas 2001, 18). This Jonasian point of view is the starting point of this paper, and I will return to it later to offer insights on the issue of vulnerability.
9.2 Mental Integrity and Human Identity

First, we should return to the importance of mental integrity in defining human identity. Although it is not the only dimension necessary to describe the human being—as I have already clarified in the previous section by highlighting the psychosomatic unity of the human being from a Jonasian perspective—it is certainly an aspect that takes on great importance. I will attempt to clarify its importance by reflecting on the rights attributed to human beings that specifically concern mental integrity. Due to the economy of the current text, I will not endeavor to provide an ontological justification for the matter. In the context of this logic of rights, Craig (2016, 111) writes: "The right to mental integrity is closely related to general notions such as liberty and autonomy
that enjoy a special degree of importance in our philosophical, political, and legal traditions." Accordingly, "the right to mental integrity protects the inner-sphere of the human person, and it is well-grounded in general notions of liberty and autonomy" (Craig 2016, 112). In summary, we can affirm that "the right to mental integrity is, most importantly, a right that protects human agency" (Craig 2016, 112). In this sense, mental integrity is constituted as a right because it is the condition of possibility for other human dimensions, such as freedom, autonomy, and agency.1 In short, it is necessary to protect mental integrity because, in protecting it, we also protect human intimacy (the "inner-sphere of the human person"). The underlying issue related to neuroprotection, then, is the human identity that is reflected in the intimacy of the subject itself. In what sense is intimacy linked to human identity? Obviously, not in a properly "ontological" sense, but rather in an existential one. Reflection, as the possibility of returning to oneself, is the point of access to such intimacy: the self finds itself—as if it were "another self" (Petrosino 2010, 98)—only through the act of looking deeply at itself. I will not expound upon this further, as the reflections of the Lithuanian philosopher Lévinas (1984) clearly and explicitly express this concept. It is, however, worth noting the relationship between integrity, identity, and intimacy. Each of these dimensions is a condition of possibility for the others. In this regard, "the idea of integrity shapes the properly constitutive feature of individuality, because only what is integral—what is not fragmented, not divided, not disjointed, not dispersed—can be properly called individual. Only what is not disjointed because of its individuality can have its own identity and autonomy in relating to itself and to somebody else or something else" (Fuselli 2020, 423).
Individuality needs integrity: a subject is an individual (from the Latin individuus, i.e., indivisible) only if its mental ecology is intact. It should be noted, then, how precisely "the development of neurodevices seems to put into light the essential features of individuality. The individual is neither the entity who is isolated and indifferent to any relationship nor the one who is not affected by a split, but the one who does not lose her relationship capacity even in the case of the deepest and hardest splits because the 'otherness' is not something she goes into relationship with but is constitutive of her own self. Preserving and protecting mental integrity is the same as preserving and protecting this structural being-in-relationship-with of each individual. This could be the criterion for evaluating neurodevices and for establishing possible constraints to their use" (Fuselli 2020, 427).2 I have not used the word "ecology" accidentally.3 Since the human being is a relational being (Valera 2018)—that is, an open being—its individuality is always related to the possibility of interacting with other individuals and, at the same time, with itself. The human being is, in fact, a being that is always in relation to otherness, both with the otherness of the other and with its own otherness (Petrosino 2010). As we can immediately understand, the above-mentioned aspects are closely linked to the question of human identity: intimacy, integrity, individuality, relationality, and agency refer especially to the personality of the human being. As these very aspects (which, for the economy of this text, I will not delve into too deeply) are "threatened" by recent neurotechnologies, we can conclude that it is precisely these neurotechnologies that are currently challenging our human personality.4 As the European Group on Ethics (2005, 32) states, "personal identity is crucial for the attribution of moral responsibility according to many ethical theories. ICT devices should therefore not be used to manipulate mental functions or change personal identity. The right to respect of human dignity, including the right to the respect of physical and mental integrity, is the basis for this." In summary, intervening in (or harming) mental integrity specifically affects the personal identity of the human being and, with it, undermines the possibilities for autonomy, agency, and intimacy. The need to recognize these fundamental aspects of human nature as an inalienable right of every human being emerges precisely from this point.

1 As Craig (2016, 111) writes, with reference to the importance of autonomy in the context of current neurotechnologies, "in bioethics, autonomy has traditionally been seen as a fundamental principle that precludes paternalistic interference. Further, in past years, discussion in the emerging neuroethics discourse has turned attention to core features of human agency and autonomy."
2 In order to understand this quotation, it would be useful to recall the famous definition by Rendtorff (2002, 237): "In bioethics and biolaw the idea of integrity as an untouchable core, the personal sphere, which should not be subject to external intervention, is the most important. […] Integrity expresses bodily completeness in a private sphere."
3 Indeed, in the next section, I will show the relevance of the environment to human agency in the digital era.
9.3 Protecting the Mind (and Its Environment): From Neuro-Ethics to Neuro-Rights

What is this neuroprotection? As I have already mentioned, there has been a growing number of proposals in recent years for the recognition of rights associated with neurotechnologies. In this sense, Yuste et al. (2017, 162) wrote: "As neurotechnologies develop and corporations, governments and others start striving to endow people with new capabilities, individual identity (our bodily and mental integrity) and agency (our ability to choose our actions) must be protected as basic human rights." Likewise, Ienca and Andorno (2017) stated: "In contrast to other biomedical developments, which have already been the subject of standard-setting efforts at the domestic and international level, neurotechnology still largely remains a terra incognita for human rights law. Nonetheless, the implications raised by neuroscience and neurotechnology for inherent features of human beings, urge a prompt and adaptive response from human rights law." These proposals thus imply an advance in the "neurosciences," that is, a clear transition from neuro-ethics to neuro-law. In recent years, all the neuro-ethical (and neuro-bioethical) scientific research and production have, in a certain way, converged in neuro-law, which offers a possible way of solidifying efforts in the fields of ethics and bioethics to defend both the autonomy and agency of the human person. As Neil Levy (2011, 179) states, "if we have the right to a sphere of liberty, within which we are entitled to do as we choose, our minds must be included within that sphere."5

This transition from neuro-ethics to neuro-rights can be observed in the words that recur in the major neuro-protection declarations. "Privacy" is perhaps the most important and recurrent among them, since it "is one of the conditions for the exercise of personal freedom and autonomy. The concept of privacy generally regards the protection of a space of non-interference, based on a principle of 'inviolate personality' which […] is a part of the person's general right of immunity" (Lavazza 2018). It is interesting to note how the neuro-rights discourse, which must have a solid foundation in the neuroethical literature, points to the question of defending the individuality of a subject against the possible interference of a third party in their privacy. With these words, Ienca and Andorno (2017) detail the meaning of these third-party non-interference rights: "The establishment of cognitive liberty as a human right should be coordinated with a simultaneous reconceptualization of existing rights or even the creation of other new neuro-specific rights. These are the right to mental privacy, the right to mental integrity and the right to psychological continuity." In this sense, neuro-rights are based precisely on the statement that no one has a right over another's state of mind (Bublitz and Merkel 2014, 68).

4 Of course, in order to fully understand the subject, it would be necessary to have "a relatively clear idea of what is meant by the expressions 'personal identity' and 'change of personal identity'" (Lucivero and Tamburrini 2008).
This negative statement also has a positive dimension, that is, the right of all individuals to protect their mental dimension from potential harm (Ienca and Andorno 2017).6 Finally, that same statement carries an important consideration (one that is not well highlighted but is of great relevance in the era of technological civilization): protecting the mental integrity of individuals also means protecting their environment. In this regard, Ienca and Andorno (2017) affirm: "As neurotechnology becomes part of the digital ecosystem and neural computation rapidly enters the infosphere, the mental integrity of individuals will be increasingly endangered if specific protective measures are not implemented." This statement forces me to delve a bit into the issue of the infosphere, and also into an appropriate paradigm of technology, in order to address these issues in the current context. In a previous text (Valera 2020, 30), I explained that "it could no longer simply be asserted that 'technology has drastically changed the human environment;' rather, even further, technology has become the human environment. Thus, a 'natural' environment separate from technology does not exist: our environment overlaps with the technological environment, in which 'natural' and 'artificial' elements co-exist." The following consequence emerges from this assumption: "If technology has become an environment for us, we can no longer 'stop interacting' with it, precisely because we live and carry out our lives within it. Having said this, the relationship no longer depends on us because it is always occurring given that we live in this environment. The effort of thinking etsi tecnologia non daretur, thus, would likely be hermeneutically interesting but entirely inappropriate for the era in which we live. Eliminating all of our interactions with technology (with the environment) would actually mean eliminating a significant part of our daily experiences, thereby committing the tragic error of 'hypostasing' human beings. If the technological devices are our environment, thinking of human beings (and their relationships) outside of this environment would at the very least be an outdated philosophical operation" (Valera 2020, 40).

The question becomes enormously complicated when the brain data of a human subject are inserted into a potentially "uncontrolled" environment such as the infosphere (Floridi 2014, 2015). In this sense, while full and detailed control of this continually changing and evolving environment seems almost utopian, an effort to reduce inappropriate interventions seems timely. Again, Floridi (2021) highlighted: "It is time to take seriously the fact that the infosphere is humanity's commons and hence regulate its use with open and transparent rules, legally grounded on all human rights and on human dignity, to avoid arbitrariness, unaccountability, abuse, and discrimination." Floridi was referring to the ethical regulation of that "limited" space of the infosphere that is social networks. However, here, we are discussing the regulation of the data that may emerge from our relationship with neurotechnologies (which has gained a relevant space in the infosphere itself), and this seems to be a more complicated problem. In the following section, I will suggest a possible ethical indicator with which to interpret neuroprotection and to assess the probable threats involved in cognitive enhancement technologies: the concept of vulnerability.

5 In this sense, we may state that "Mental Integrity is the individual's mastery of his mental states and his brain data so that, without his consent, no one can read, spread, or alter such states and data in order to condition the individual in any way" (Ienca and Andorno 2017).
6 Ienca and Andorno (2017) explain what this harm consists of: "For an action X, to qualify as a threat to mental integrity, it has to: (i) involve the direct access to and manipulation of neural signaling, (ii) be unauthorized—i.e., must occur in absence of the informed consent of the signal generator, (iii) result in physical and/or psychological harm."
9.4 Vulnerability: An Ethical Approach to Neurotechnologies

One of the most important statements regarding human vulnerability is definitely Article 8 of the UNESCO Universal Declaration on Bioethics and Human Rights, entitled "Respect for human vulnerability and personal integrity." It states: "In applying and advancing scientific knowledge, medical practice and associated technologies, human vulnerability should be considered. Individuals and groups of special vulnerability should be protected and the personal integrity of such individuals respected" (UNESCO 2005). It is worth noting how this UNESCO statement directly relates human vulnerability to personal integrity in the context of medical practice and new technologies. To understand this link, it is necessary to delve deeper into the concept of vulnerability mentioned there, because the word itself may be interpreted in many different ways. It is helpful to consider the following interesting point on this issue: "The vulnerable is the perforable. In more basic terms, it has to do with an entity into which another can be inserted, which logically requires the distinction between inner and outer. The idea of functional damage is also suggested.
The insertion of something external into an entity is considered to be a wound if it causes functional damage in the entity in question. The characteristics mentioned, i.e., the distinction between an interior and an exterior, as well as the functionality, exist in living beings in a paradigmatic way. Living beings have an interior and an exterior, they possess semi-permeable barriers which identify them and separate them from their environments; at the same time, however, they communicate with that environment, which makes them functional but also, and at the same time, vulnerable” (Marcos 2016, 34). In this regard, the concept of vulnerability clearly envisages a frailty or weakness when confronted by a specific source of risk or even by the normal course of events. Simultaneously, however, the concept of vulnerability connotes dependency and openness. Once we view the issue in this way, we have to recognize that every human being, specifically as a living being, is necessarily vulnerable. Thus, this feature of vulnerability may be described as a basic anthropological category that tells us something about our fragility, dependency, and openness. Our corporeity is, indeed, the most obvious sign of this fragility and, also, of our vital subordination to individual and collective balances. As living beings, we depend both on the environment and on the complex relationships that necessarily take place with other living beings. Once again, the relational dimension of the human being is stressed: the environment cannot be considered merely accidental to the human condition.7 It is, rather, essential. If this openness to the environment is something constitutive of the human being, the possibility of being wounded is consubstantial to the human essence. Following the UNESCO (2013, 9) Declaration: “The human condition implies vulnerability. Every human being is exposed to the permanent risk of suffering ‘wounds’ to their physical and mental integrity. 
Vulnerability is an inescapable dimension of the life of individuals and the shaping of human relationships. To take into account human vulnerability acknowledges that we all may lack at some point the ability or the means to protect ourselves, our health and our well-being. We are all confronted with the possibility of disease, disability and environmental risks. At the same time, we live with the possibility that harm, even death, can be caused by other human beings."

This idea is not new. Hans Jonas put the idea of vulnerability at the center of his Imperative of Responsibility: "The object of responsibility centres on that which is vulnerable and perishable: 'Only for the changeable and perishable can one be responsible, for what is threatened by corruption, for the mortal in its mortality'" (Wheeler 2012, 103). This is the first aspect of vulnerability and deals mostly with our ontology. We may call it "the negative aspect of vulnerability." However, it is also worth considering a second "positive aspect of vulnerability" that follows the 1998 Barcelona
7
Indeed, the CIOMS (2016, 57) Declaration—which concerns the ethical aspects of human research—recalls the relevance of context for the human condition: “In some cases, persons are vulnerable because they are relatively (or absolutely) incapable of protecting their own interests […] In other cases, persons can also be vulnerable because some feature of the circumstances (temporary or permanent) in which they live makes it less likely that others will be vigilant about, or sensitive to, their interests.”
9 Mental Integrity, Vulnerability, and Brain Manipulations …
Declaration: "Vulnerability is the object of a moral principle requiring care for the vulnerable" (Patrão Neves 2009, 158). This point needs to be clarified. While using the concept of vulnerability to describe the human condition is generally accepted, using it as an ethical (or, better, a normative) concept is often regarded as controversial. In other words, it is widely accepted that human vulnerability may be considered the condition of possibility for human rights; that is, we need to protect human beings because of their vulnerability. Nevertheless, vulnerability is not the foundation of those rights themselves (Andorno 2016, 270). Consequently, if it is true that human vulnerability cannot be the foundation of our responsibility (or of any laws), it is also true that it is the privileged object of our responsibility.

In this regard, we have to return to my previous remarks about human integrity, identity, and dignity in order to understand their relationship with vulnerability. In this way, the ethical scope of vulnerability becomes more evident. What is the relationship between integrity and vulnerability? The UNESCO (2013, 3) Declaration is quite clear on this point: "That is also why human vulnerability and personal integrity, the other essential concept evoked in Article 8, relate to each other. When a part of our body is inappropriately 'touched,' […] our life itself, or at least our health, may be threatened. When our freedom is hampered, either by adverse circumstances or by the actions of others, we experience a 'wound' to our identity, to its value and dignity. Preservation of integrity implies protection against these kinds of intrusions, the capacity to 'say no' to any sort of impingement upon our freedom or to any sort of exploitation of our body and our environment. We are nonetheless committed at least to seek to ameliorate the effects of harms and disadvantages imposed by circumstances.
This is a prerequisite of human flourishing and self-fulfillment." If vulnerability refers to the possibility of being wounded (from the Latin vulnus),8 integrity deals with the impossibility of being touched (from the Latin verb tangere, which means "to touch," "to hit") (Patrão Neves 2009, 159). In this regard, the mitigation (or the respect) of our vulnerability is a prerequisite for maintaining our integrity (which is linked to personal identity, as argued in the previous sections). Kemp and Dahl Rendtorff (2008, 240) expressed this concept in the following way: "Vulnerability concerns integrity as a basic principle for respect for and protection of human and non-human life. It expresses the condition of all life as able to be hurt, wounded and killed. Vulnerability concerns animals and all self-organizing life in the world, and for the human community it must be considered as a universal expression of the human condition. The idea of the protection of vulnerability can therefore create a bridge between moral strangers in a pluralistic society, and respect for vulnerability should be essential to policy making in the modern welfare state. Respect for vulnerability is not a demand for perfect and immortal life, but recognition of the finitude of life and in particular the earthly suffering presence of human beings."
8
This possibility of being harmed has been expressed by CIOMS (2016, 57) in the following way: “Vulnerability involves judgments about both the probability and degree of physical, psychological, or social harm.”
These two dimensions, therefore, constitute the basis for the respect of human dignity and may be used both as the basic elements for the ethical assessment of (neuro)technologies and for the protection of the human person (for example, in human experimentation, medical practice, deep brain stimulation, and so forth).

The final point I want to make here relates specifically to this protection. Why do we have to protect people's vulnerability? Two possible reasons may be offered. First, we have to protect our vulnerability because we need to preserve our integrity and, therefore, our dignity. This is precisely the argument used by UNESCO (2013, 9) in its famous Declaration: "There is an integral relationship between respect for the integrity and dignity of persons on the one hand and the vulnerability of persons on the other." But, in my opinion, there is a more interesting answer that recalls the reflections of Hans Jonas. Vulnerability creates ethical concerns primarily because we are its main cause: our technological power is probably the main source of our current vulnerability (Bazin 2004, 4). In this regard, vulnerability is ethically relevant precisely because it may depend on our power over it.

In other words, by revisiting Hans Jonas' (1985) insights, we can see that we are responsible for natural and human vulnerability, and this responsibility is proportionate to human power (Wheeler 2012, 103). Even though Jonas' Imperative of Responsibility refers mainly to our environmental impact, I think we should also apply his ethical insights to neurotechnologies: not only do "humans have a special obligation to the earth in its vulnerability caused by human power" (Joldersma 2009, 479), but they have an equal obligation to human nature (and to the human environment, considering our previous reflections).
Following these considerations, we are “called to such responsibility because it is becoming clear that a significant part of earth’s vulnerability is the result of human action” (Joldersma 2009, 480), and the same may be stated regarding our vulnerability. As Patrão Neves (2007, 185) correctly pointed out, “humankind is not only the perishable and therefore vulnerable, but its members also have the power to harm other beings, including other humans, in their vulnerability, and so it becomes a duty, implied by power, to answer for the vulnerability of others. Having established the relation between ‘power’ and ‘duty’ Jonas’ vulnerability gains a positive ethical meaning, that is, it determines an effective obligation: that of defending and protecting, caring for and taking responsibility for those who are vulnerable. In Jonas, ‘vulnerability’ is basically ‘concern, recognized as a duty’, it is responsibility before a vulnerability which, when threatened, becomes the object of care.” In this regard, from the Jonasian perspective, the concept of vulnerability lies at the intersection between power and duty and, for this reason, may constitute a powerful ethical indicator.
9.5 Conclusions: Cognitive Enhancement and Vulnerability

This paper does not aim to offer a definitive ethical answer regarding the legitimacy of cognitive enhancement. Rather, it aims to show how our vulnerability—which deals with both our identity or integrity on the one hand, and our power and duties on the
other—may be used as a relevant ethical indicator when addressing the possibility of enhancing human cognitive capabilities.

An initial response regarding the relationship between vulnerability and enhancement (including cognitive enhancement) is a positive one: to reduce vulnerability, we have to enhance people's autonomy and capabilities, as highlighted by ten Have (2016, 57): "The response to vulnerability therefore is to enhance autonomy in order to increase adaptive capacity or to reduce exposure by minimizing harm." But the same author adds: "However such explanation does not consider that vulnerability is often not an individual affair. The terminology of 'vulnerable populations' for example indicates that it is associated with common and shared conditions beyond the individual situation" (ten Have 2016, 57–58). He is obviously referring to justice and the possible forms of discrimination emerging from the possibility of enhancement. Furthermore, in a certain way, he is also talking about "appropriate environments" (ten Have 2016, 174), as I pointed out in the previous sections. To care for our vulnerability also implies the creation of healthy environments where all human beings can flourish (ten Have 2016, 174).

In this context, the criticisms of enhancement as an individualistic issue make sense. In a globalized world, an individualistic approach that is mainly focused on individual rights and interests is no longer sustainable (Valera and Castilla 2020). For this reason, many authors (e.g., ten Have 2016; Patrão Neves 2007) have recognized that the "theoretical framework to address vulnerability is provided by care ethics. This framework insists on the interdependency of human beings (thus the notion of anthropological vulnerability). It has also pointed out the unequal allocation of vulnerability according to various contexts (thus the notion of special vulnerability).
This theoretical backdrop notwithstanding, care ethics is only recently expanding into a global theory of care. It is obvious that globalization affects care practices worldwide. The challenge is to embed care practices that are often concrete, personal and inter-relational, within broader social structures. Broadening of core notions of care ethics such as relatedness, dependency, responsibility, cooperation, and contextuality can help to develop a global theory" (ten Have 2016, 174). This new ethical framework may help us to understand the relevance of thinking systemically and beyond individual interests and benefits. This is especially evident if we consider that, in a context of interdependence, individual self-realization depends on the self-realization of others (Valera 2018). The recognition of our common vulnerability, then, represents the first step towards our common self-realization.

Lastly, there is an interesting point that concerns enhancement and vulnerability. I have previously argued (Valera 2018a) that the idea of enhancement implies both "a starting point (what do we want to improve?) and an objective/an end (what model do we want to pursue?)." Obviously, "our starting point" is our "corporeal imperfection, because if we were perfect, we would have no reason for enhancement" (Valera 2018a, 9). On the other hand, the aim of enhancement is perfection, which is something unattainable (Valera 2018a) because we are corporeal beings. In this regard, the idea of perfection is quite the opposite of the idea of vulnerability, since perfection implies the elimination of any limits in order to achieve a "perfect model." As I have previously argued (Valera 2018a, 4), this idea
of perfection compromises "the possibility of a true human 'authenticity,' which is threatened by social and cultural models related to the new technologies," and, more concretely, to the regulative idea of perfection. In this regard, what is at stake here is the possibility of being oneself, which Habermas (2003, 5) conceived as a condition for a "good life" in his popular book, The Future of Human Nature. I argue that this possibility "is being threatened by unattainable cultural models proposed by recent (bio)technological developments, based on a 'non-justified abstention' from all of us. In other words, nowadays 'being-able-to-be-oneself' means 'to be other selves,' and in this constant mismatch of each of us with ourselves lies perhaps our strongest concern" (Valera 2018a, 12). For interrelated vulnerable beings living in a social and technological environment, the idea of perfection implied in cognitive enhancement—which is always an extrinsic perfection, as it deals with external, globalized and imposed models—may threaten our self-realization.

Acknowledgements This research has been supported by the "Beca Santander Profesores 2021", the programme "Erasmus Plus 2019-KA107 International Credit Mobility" (PUC-Università degli Studi di Torino), and the project ANID/Fondecyt Regular n. 1210081.
References

Andorno R (2016) Is vulnerability the foundation of human rights? In: Masferrer A, García-Sánchez E (eds) Human dignity of the vulnerable in the age of rights. Springer, London, pp 257–272
Bazin D (2004) A reading of the conception of man in Hans Jonas' works: between nature and responsibility. An environmental ethics approach. Éthique et économique/Ethics Econ 2(2):1–17
Bublitz JC, Merkel R (2014) Crimes against minds: on mental manipulations, harms and a human right to mental self-determination. Crim Law Philos 8:51–77
Council for International Organizations of Medical Sciences (CIOMS) (2016) International ethical guidelines for health-related research involving humans, 4th edn. WHO Press, Geneva
Craig JN (2016) Incarceration, direct brain intervention, and the right to mental integrity—a reply to Thomas Douglas. Neuroethics 9:107–118
European Group on Ethics (EGE) (2005) Ethical aspects of ICT implants in the human body. Opinion presented to the Commission. https://ec.europa.eu/commission/presscorner/detail/en/MEMO_05_97
Floridi L (2014) The 4th revolution: how the infosphere is reshaping human reality. Oxford University Press, Oxford
Floridi L (2015) The onlife manifesto: being human in a hyperconnected era. Springer Open, Oxford
Floridi L (2021) Trump, Parler, and regulating the infosphere as our commons. Philos Technol. https://doi.org/10.1007/s13347-021-00446-7
Fuselli S (2020) Mental integrity protection in the neuro-era: legal challenges and philosophical background. BioLaw J 1:413–429
Güell F, Murillo JI (2015) Una aproximación al problema mente-cerebro desde Xavier Zubiri a la luz del pensamiento de Leonardo Polo. Studia Poliana 17:101–128
Habermas J (2003) The future of human nature. Polity, Cambridge
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13:5
Joldersma CW (2009) How can science help us care for nature? Hermeneutics, fragility, and responsibility for the earth. Educ Theory 59(4):465–483
Jonas H (1985) The imperative of responsibility: in search of an ethics for the technological age. The University of Chicago Press, Chicago
Jonas H (2001) The phenomenon of life: toward a philosophical biology. Northwestern University Press, Evanston
Kampowski S (2014) A greater freedom: biotechnology, love, and human destiny. In dialogue with Hans Jonas and Jürgen Habermas. The Lutterworth Press, Cambridge
Kemp P, Dahl Rendtorff J (2008) The Barcelona declaration. Towards an integrated approach to basic ethical principles. Synth Philos 46(2):239–251
Lavazza A (2018) Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front Neurosci. https://doi.org/10.3389/fnins.2018.00082
Lévinas E (1984) De l'existence à l'existant. Vrin, Paris
Levy N (2011) Hard luck: how luck undermines free will and moral responsibility. Oxford University Press, Oxford
Lucivero F, Tamburrini G (2008) Ethical monitoring of brain-machine interfaces. AI Soc 22(3):449–460
Marcos A (2016) Vulnerability as a part of human nature. In: Masferrer A, García-Sánchez E (eds) Human dignity of the vulnerable in the age of rights. Springer, Cham, pp 29–44
Marcos A (2019) La creatividad humana: una indagación antropológica. Rev Port Filos 75(4):2137–2154
Patrão Neves M (2007) The new vulnerabilities raised by biomedical research. In: Häyry M, Takala T, Herissone-Kelly P (eds) Ethics in biomedical research: international perspectives. Rodopi, New York, pp 181–192
Patrão Neves M (2009) Article 8: respect for human vulnerability and personal integrity. In: UNESCO, Universal declaration on bioethics and human rights: background, principles and application. UNESCO Publishing, Paris
Petrosino S (2010) La scena umana: grazie a Derrida e Lévinas. Jaca Book, Milano
Rendtorff JD (2002) Basic ethical principles in European bioethics and biolaw: autonomy, dignity, integrity and vulnerability—towards a foundation of bioethics and biolaw. Med Health Care Philos 5:235–244
ten Have H (2016) Vulnerability: challenging bioethics. Routledge, London and New York
United Nations Educational, Scientific and Cultural Organization (UNESCO) (2005) Universal declaration on bioethics and human rights. http://portal.unesco.org/en/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html
United Nations Educational, Scientific and Cultural Organization (UNESCO) (2013) The principle of respect for human vulnerability and personal integrity. Report of the International Bioethics Committee of UNESCO (IBC). Paris
Valera L (2013) Ecologia umana: le sfide etiche del rapporto uomo/ambiente. Aracne, Roma
Valera L (2018) Home, ecological self and self-realization: understanding asymmetrical relationships through Arne Næss's ecosophy. J Agr Environ Ethic 31:661–675
Valera L (2018a) Against unattainable models: perfection, technology and society. Sociología y tecnociencia 8(1):1–16
Valera L (2020) New technologies: rethinking ethics and the environment. In: Valera L, Castilla JC (eds) Global changes: ethics, politics and the environment in the contemporary technological world. Springer, Cham, pp 29–43
Valera L, Castilla JC (eds) (2020) Global changes: ethics, politics and the environment in the contemporary technological world. Springer, Cham
Wheeler S (2012) Climate change, Hans Jonas and indirect investors. J Hum Rights Environ 3(1):92–115
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551(7679):159–163
Chapter 10
Neurotechnology, Consent, Place, and the Ethics of Data Science Genomics in the Precision Medicine Clinic

Andrew Crowden and Matthew Gildersleeve
Abstract In this chapter we briefly outline key philosophical and bioethical dimensions of a genomic medicine case that begins with typical clinical encounters between a patient and members of a treating team. The case concerns a positive Huntington's Disease (HD) diagnosis and the diagnosed person's response to potential neurotechnology and other interventions. It is of interest because it illustrates how consent and a consideration of place can become challenging when clinical genetic medicine, genomics research, data science and neurotechnology intersect within the clinic. First, key factual aspects of the disorder are described, and the case is outlined. The nature of precision medicine in genetics, and how patients, clinicians, researchers, scientists, and others are impacted by data science and neurotechnology decisions, is explained. Distinct consent-based ethics dimensions are identified, and potential resolution pathways suggested. Doing so also makes it clear that, even when a genetic disorder is a relatively straightforward autosomal dominant monogenic mutation of a single gene, as is the case with HD, the practical ethics dimensions can be complex. We argue that a person-centred, place-influenced ethics framework can provide an additional way to better understand a situation, support meaningful decision-making, and implement practical person-centred outcomes. We make some practical suggestions about the case and recommend ways to apply our framework to similar situations.

Keywords Consent · Ethics · Genomics · Philosophy · Place
A. Crowden (B) · M. Gildersleeve
School of Historical and Philosophical Inquiry, University of Queensland, Brisbane, Australia
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022
P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_10

10.1 Introduction

Progress in human biology and technology is accelerating at an unprecedented rate. Innovative discoveries in biology continue to reveal new secrets about the human body (Davis 2021). As science and biological technology develop, humanity will be increasingly confronted with complex and difficult decisions about medicine and healthcare. Should we boost our, or our children's, cognitive abilities?; should depression
be relieved by neurotechnological means?; should we attempt to cure genetic disorders using gene editing technology?; is cognitive enhancement a right for those with mental disorders and/or disabilities? All are examples of questions that the new science of the human body will require us to answer. To be sure, when scientific discovery combines with new technologies, the impact on human health is potentially enormous.

New neurotechnology will create many options, but also ethical challenges, in the clinic. These challenges become clearly identifiable when we look at the relationships between patients, consumers, genomic services, clinicians, researchers, data scientists and the hybrid infrastructures of banked data and information that are increasingly providing the correlations that inform more precise decisions and interventions in medicine and health care. We outline an example from the clinic to illustrate the complex health care decision-making that will confront many of us as we experience the possibilities offered by neurotechnology.

First, before outlining the case, it is important to ensure that its terrain and context are clear. Important contextual factors that inform the analysis are thus identified. These few preliminaries provide definitional clarity about neurotechnology, precision medicine and human data, genomics, genetics, and genetic disorders per se.
10.2 Contextual Factors

10.2.1 Neurotechnology

A potential neurotechnology intervention is central to our case. Neurotechnology has been defined as "the assembly of methods and instruments that enable a direct connection of technical components with the nervous system. These technical components are electrodes, computers, or intelligent prostheses. They are meant to either record signals from the brain and 'translate' them into technical control commands, or to manipulate brain activity by applying electrical or optical stimuli. Closed-loop interactions of readout and stimulation systems (control circuits) are subject of current research as well" (Müller and Rotter 2017, 1).

This definition captures key aspects of neurotechnology. The term itself is generally used to describe technology that helps humans to understand brain function and/or directly connects with the nervous systems of humans and/or other non-human animals. Such neurotechnology methods offer enhanced brain imaging techniques and associated technological innovations that will potentially create pathways to more accurate diagnostics, better treatments, and improved care across many different medical fields.

In clinical situations, neurotechnology does not happen in isolation. As our genomic medicine case illustrates, in practice the new science of the body, clinical medicine and healthcare, research, data science and neurotechnology all likely (and rightly) intersect within the clinic. Enabled by advances in science, technology, genomics, data, data availability, and artificial intelligence, healthcare can be finely
tuned to each person within the clinical lifeworld of precision medicine and genomics. Precision medicine offers the potential to shift healthcare's focus toward prevention as well as treatment. There are potentially great benefits, but also practical ethical challenges concerning consent, choice, privacy, equity, and the need to carefully determine risk versus benefit (Finkel et al. 2018).
10.2.2 Precision Medicine and Human Data

Precision medicine aims to analyse a wide range of patient information, such as clinical observations, genomic and other biomarkers, as well as patient-generated data. All of this information is considered within the context of lifestyle, behaviour, environment, and medical history to inform and personalise prevention, diagnosis and treatment at the individual, patient and population level. It is often claimed that, as an emerging clinical practice, precision medicine will adapt and transform over time by incorporating innovation and discovery, resulting in better-targeted, bespoke care.

The collection, storing, accessing and linking of human data is essential to precision medicine. Data is potential information, raw and unprocessed, prior to anyone being informed by it (Pomerantz 2015, 26). Data science uses Artificial Intelligence (AI) methods to create automated decision-making algorithms to collect, store and access data, which is analysed to identify new clinically relevant information. In precision medicine and health research such data and information is rarely, if ever, a static entity. The identifiability of data and information exists on a continuum: "This continuum is affected by contextual factors, such as who has access to the information and other potentially related information, and by technical factors that have the potential to convert information that has been collected, used or stored in a form that is intended to protect the anonymity of individuals into information that can identify individuals. Additionally, contextual and technical factors can have a compound effect and can increase the likelihood of re-identifiability and the risk of negative consequences from this in ways that are difficult to fully anticipate and that may increase over time" (NHMRC 2007, updated 2018, 33–34). Thinking about data identifiability in this way is realistic.
The identifiability of data and information is particularly pertinent once it is recognised that biospecimen data is always potentially re-identifiable. Moreover, data mining and, in particular, machine-learning-driven automated decision-making realise the impact of AI. It has been recognised that this in itself may be one of the biggest questions for humanity: "How to exist qua human beings individually, socially, collectively, in a world governed in large measure by algorithms" (Meyran 2021, 11). The merger of biotech and infotech potentially confronts us with one of the biggest challenges humankind has ever encountered (Harari 2017). A key observation by the philosopher, psychoanalyst and epidemiologist Miguel Benasayag is also important here. Benasayag rightly claims that what we call AI is badly named, because a machine can never really be intelligent in a human sense. Machines
make decisions independent of meaning. Thus, the difference between humans and machines is not quantitative; it is qualitative. It is therefore essential to distinguish a machine's functioning from the intelligence of human beings, because living intelligence is not merely a calculating machine (Meyran 2021, 16).

While machines and human-made AI inform precision healthcare, it is clear that humans must remain at the centre of care. Good healthcare demands that humans take care to check before applying automated AI decisions in the clinic. Human-created AI needs human supervision to ensure the intelligibility, fairness, diversity and transparency of its methods and assumptions. Facilitating meaningful ways for people to engage with, and when necessary challenge, automated decisions and digital assumptions will also increasingly be needed. Such processes are particularly important in genomics and within the genetics clinic.
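The identifiability continuum described above can be made concrete with a toy linkage sketch. Everything in it is hypothetical and invented for illustration (the record fields, names and values are ours, not drawn from any real dataset or library); the point is only that a "de-identified" record whose quasi-identifiers match exactly one entry in an auxiliary dataset is thereby re-identified.

```python
# Toy illustration of why "de-identified" data sits on an identifiability
# continuum: quasi-identifiers shared with an auxiliary dataset can
# re-identify individuals when the two sources are linked.

deidentified_genomic = [
    {"postcode": "4000", "birth_year": 1996, "sex": "M", "hd_result": "positive"},
    {"postcode": "4000", "birth_year": 1970, "sex": "F", "hd_result": "negative"},
]

public_register = [
    {"name": "K.", "postcode": "4000", "birth_year": 1996, "sex": "M"},
    {"name": "J. Smith", "postcode": "4051", "birth_year": 1970, "sex": "F"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(records, register):
    """Re-identify records whose quasi-identifiers match exactly one person."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        hits = [p for p in register
                if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(hits) == 1:  # a unique match is a re-identification
            matches.append((hits[0]["name"], rec["hd_result"]))
    return matches

print(link(deidentified_genomic, public_register))  # [('K.', 'positive')]
```

The second record survives linkage only because its quasi-identifiers happen not to match uniquely; adding one more auxiliary source could change that, which is the compound, time-dependent effect the NHMRC passage warns about.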
10.2.3 Genetics and Genomics

Genetics is a broad field of study that is concerned with heredity and how particular qualities, or traits, are passed on from parents to their offspring. In the context of human health, genetics examines single genes and how their function and composition can affect growth and development. With the development of new technologies, the traditional focus on genetics and single genes is shifting towards study of the whole genome, including the study of multiple genes and their relationships with one another (genomics), as well as the relationships between genes and the environment (epigenetics). Rapid advancements in genetics, genomics and related technologies (in neuroethics, data science/analytics, and in artificial intelligence and automated decision-making) are creating enormous opportunities for the understanding, prevention, treatment and cure of human diseases.

In genetic medicine there are three types of disorders. The first is identified as monogenic, so-called because there is a mutation of a single gene. There are three types of monogenic disorder: autosomal dominant (as in Huntington's Disease), where a mutation in one copy of a gene is inherited from either parent; autosomal recessive (as in Cystic Fibrosis), where there is a mutation in two copies of a gene and an affected child inherits one copy from each parent; and sex-linked, where the mutation is traceable not to the 22 autosomes but to the sex chromosomes X and Y. Most such mutations are X-linked, meaning that one in two male children of mothers who carry the mutation will be affected, and half of the daughters will be unaffected carriers (as in Duchenne muscular dystrophy and haemophilia, where there is high penetrance: close to a 100% chance of developing the disease). The second type of genetic disorder is identified as polygenic, where there is mutation of several or more genes (as perhaps in some forms of cancer).
The third is identified as multifactorial where there is interaction of several genes and the environment (as in most diseases). It is important to note that only a small percentage of diseases are caused by a mutation in a single gene. Because polygenic and multifactorial disorders are very
complex, for ease of analysis we have chosen Huntington's Disease, a monogenic autosomal dominant genetic disorder.
10.2.4 Huntington's Disease

Huntington's disease (HD) is a monogenic autosomal dominant disorder: a neurological degenerative disease with an onset, in most people, between the ages of 30 and 50. There is no cure for this condition, and it is progressive. Symptoms include deterioration in movement, cognition and generalised functioning. Death usually results from respiratory illness. HD is an inherited condition. A child of an affected person has a 50% chance of inheriting the faulty gene that causes the condition. Genetic predictive testing is now available for persons over the age of 18 who have an affected parent or relative; in almost all cases it will tell them whether they will develop the disease at some stage in their life. Worldwide, only around 15% of those eligible for the test have taken up the option of testing.
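The transmission arithmetic behind these figures can be sketched in a few lines. The helper below is purely illustrative (the function name and the scenario are ours, not from any genetics library): for an autosomal dominant disorder such as HD, an affected heterozygous parent passes the faulty allele to each child independently with probability one half.

```python
# Illustrative Mendelian arithmetic for an autosomal dominant disorder such
# as HD, where one faulty allele copy suffices to cause the disease.

def carrier_risk(parent_carrier_prob: float) -> float:
    """Chance a child carries the faulty gene, given the probability that
    the at-risk parent carries it (single affected lineage)."""
    return 0.5 * parent_carrier_prob

# Before any testing: an affected grandparent gives the parent a 50% risk,
# and the grandchild a 25% prior risk.
parent_prior = carrier_risk(1.0)               # 0.5
grandchild_prior = carrier_risk(parent_prior)  # 0.25

# If the parent then tests positive, their carrier status is certain and the
# grandchild's risk reverts to the plain 50% transmission probability.
grandchild_updated = carrier_risk(1.0)         # 0.5

print(parent_prior, grandchild_prior, grandchild_updated)
```

This is exactly the shift seen in the case below: a positive predictive test for the parent raises the child's risk from a 25% prior to the full 50% transmission probability.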
10.3 Case: Kunmanara and the Huntington's Disease Test

Kunmanara (K) is a physically active 25-year-old man whose grandfather died ten years ago from Huntington's disease.1 His mother has a 50% chance of developing HD. She recently decided to have the HD genetic test and has been shown to have the faulty gene. She will definitely develop HD at some time. K now has a 50% chance of developing HD. He is a civilian air traffic controller, working at a large international Australian airport. He loves his job, and he feels he could perform his duties well for many years, irrespective of whether he carries the faulty gene for HD or not. His employer and his fellow air traffic controller colleagues are unaware of his family history. After a long discussion with his mother—and much thought—K decided that he wanted to know if he has the HD gene. He made an appointment to be tested at a specialist genetic disorders clinic. At the clinic K saw a GP and gave written consent to having the test. K then saw a clinical geneticist and gave written consent for his clinical and other information to be used by researchers in an international research project. He also agreed in writing to an extended consent for his data and information to be used by the researchers in other research projects conducted by them. K also gave written unspecified consent for his data and information to be stored on an HD international database so that it may be accessed by other researchers (with a nominated data custodian and processes approved by a local HREC as well as an international IRB). During the consultation the clinical geneticist spoke excitedly about research developments in HD. The clinician said: "Research in HD is really looking positive, with new treatments. As well as gene-editing possibilities there is new research, and trials into innovative neurotechnical HD interventions where certain proteins are lowered. We aim to reduce production of the huntingtin protein that damages sufferers' cells—it's called huntingtin lowering! We are likely to be able to destroy your RNA copies as they are produced but leave the DNA blueprint intact. We also can modify the message of the DNA blueprint, so it either can't be copied into RNA or contains new instructions to help destroy the RNA." "This approach is what we refer to when we say gene therapy," he explained. "It changes what is made from the blueprint without altering it—a really effective targeted brain intervention!" The clinician laughed. K didn't quite follow and was a bit confused when he left the clinical geneticist's office. He was a bit unsure: was it his DNA that was to be destroyed, or was it the RNA? K then had a consultation with a data scientist/researcher and gave his consent for the data scientist to access his data for another project, using an algorithm that seeks to identify similar patterns of behaviour in people who seek HD tests. Then K saw another local researcher and agreed to give consent to participate in a Ph.D. project on nutrition, exercise and the families of people with an HD diagnosis. Finally, he had a meeting with a Registered Nurse and Genetic Counsellor who re-explained the reasons for all the different consents, gave further information about HD support groups, likely treatments and prognosis, and answered questions.

1 An original case about a Mr. H was developed for Fact Sheet 19 on Ethical Issues in Genetics and Genomics by the NSW Government Centre for Genetics Education (recently updated in March 2021). Mr. H has been used widely across many settings. We have been using Mr. H and other similar cases in different guises for many years in educational settings. K is a similar case to Mr. H, but we have added complexity, including identity as an Australian Aboriginal (Pitjantjatjara), and a more detailed story so as to better capture the relationships between precision medicine, data, research, genetics, genomics and place.

A. Crowden and M. Gildersleeve
For each consent K was provided with targeted information. He appeared competent to understand all the information, and the team were certain that he had voluntarily made an informed decision for each different consent request. K left the clinic in a reasonably good mood. Two weeks later K had a few celebratory drinks with visiting friends and family from his hometown. He told them about the HD test, the research, the international databank and the potential neurotechnology interventions that the clinical geneticist was so excited about. His friends said he needed to be careful, as he might be in danger of losing his job if the test was positive. His friends and family were worried about sending his data and information internationally. Several said that they were concerned about possible "brain interventions." Afterwards, at home, reflecting on his friends' concerns, Kunmanara regretted his decision to go to the clinic. The next morning K phoned the clinic and requested that all the test data and information be destroyed. He stated that he had changed his mind and did not want to know the result. K wanted to withdraw his consent to the test and to all research, and did not want his data and information stored on a database where others could access it. He said he was really worried about the possibility of "brain interventions." To complicate things, K's HD test result was positive: K will certainly develop HD during his life. The clinic treating team (the GP, the Clinical Geneticist, the Genetic Counsellor, the data scientist/researcher and the nurses) have different views about how to progress. They refer the case to the Clinical Ethics Committee (CEC),
the Human Research Ethics Committee (HREC) and a specialist HREC for advice. And, because Kunmanara is a Pitjantjatjara man, originally from Amata in the far north-west of South Australia, the case is also referred to a specialist HREC with particular expertise in targeted Aboriginal and Torres Strait Islander research for advice.
10.4 Response

This case raises many questions. Some that are usually asked in such situations include: when is the right time to decide to have predictive/pre-symptomatic testing? Do employers in industries involving public safety have the right to demand family health history information? In cases where genetic predictive testing is available for conditions that may impact on public safety, do employers have a right to predictive testing information about an individual whose current health status is excellent? Who actually "owns" this information, and who should decide who can access it? What if the situation was reversed and Kunmanara wanted testing but his mother had refused? What responsibility is there to offer testing to an individual when the result may indirectly reveal the genetic status of a relative? Are there implications for his reproductive choices? What are the ethical obligations of the data scientist/clinician/researcher roles? Is clinical consent different from research and database consent? Can the data be kept without consent for the greater good? (NSW Government 2021). Space does not allow us to give a comprehensive response to each of these questions here. Instead, we will focus on how the philosophy of place impacts practical aspects of consent in the clinic.
10.4.1 Consent

Consent occurs when permission is given for something to happen. Consent is "informed" when a competent person understands information about possible options and voluntarily makes an informed choice. There are reasons to accept some version of the consent principle: "This principle may be too demanding, and there may be some other ways in which it should be revised. But at least in most cases, it is wrong to act in ways to which anyone could not rationally consent" (Parfit 2011, 211). Derek Parfit is right. Consent aims to protect people, prevent harm, facilitate trust, ensure personal integrity and respect autonomy (Eyal 2019). While a valid consent can be hard to facilitate in some situations, we should always strive to seek it. Moreover, the philosophical reasons for consent, and particularly the relationship between autonomy and consent, remain important. Autonomy is enacted through consenting processes in everyday life and in professional practices, and has an enduring connection to law. The idea that a human being should be respected as a person, i.e., as a
conscious social animal that deliberates, reasons, and chooses, that is possessed of an evolving or continuous—but not permanent or immutable—identity, and that seeks to live morally and meaningfully, is valued across many societies (Flanagan 2010). Autonomy implies respecting that people have governance over their own agency. People should not be treated as a mere means, but always also as an end in themselves (Gregor 1997). Respecting a person's individual autonomy in this way recognises the importance of individual self-determination. Mill famously and rightly claimed that over themselves, over their own body and mind, the individual is sovereign (Skorupski 2006). In essence we should have, as human beings, the right to control or determine our own lives and to decide how we shall live, subject, of course, to our not infringing the rights of others to do the same (Charlesworth 1989). Respecting a person's autonomy in such a manner recognises that autonomy has both intrinsic and instrumental value. Intrinsically, it is necessary for personal integrity, and it allows one to adopt the reactive attitudes that instil life with humanity. Instrumentally, it is important because it is beneficial to the wellbeing of those with an internal locus of control; it allows people to attain goods and learn from mistakes, and serves as protection against potential tyranny of the State (Maclean 2009, 48). Individual autonomy can thus be viewed as a character ideal which continues to have high instrumental and intrinsic value (Young 1986). It is also important to recognise that there are different ways of conceptualising autonomy, which give rise to different approaches to the nature of autonomous decision-making. One approach that generates much sympathy acknowledges that it is inappropriate to focus on an individual as entirely separate from all others.
The rights and responsibilities of the individual include not only freedom of choice but also concern for the impact of decisions on others, as well as the responsibilities associated with such decisions (McLean 2010). Accepting that people live in communities and are often intrinsically connected to others in overt, and often covert, ways does not, however, negate the importance of individual autonomy. Processes that respect people facilitate individuals' autonomous preferences by ensuring choice and informed consent in both research and clinical practice. Respecting human beings in this way involves giving due scope to people's capacity to make their own decisions. This requirement is often seen as having the following conditions: "Consent should be a voluntary choice, and should be based on sufficient information and adequate understanding of both the proposed research and the implications of participation in it" (NHMRC 2018, 16). In both clinical and research contexts consent has traditionally involved a "one study/procedure-one consent" model and has been classically defined as "an autonomous action by a subject […] that authorizes a professional to involve the subject in research […] An informed consent is given if a subject with (i) a substantial understanding and (ii) in substantial absence of control by others (iii) intentionally (iv) authorizes a professional to do X" (Faden and Beauchamp 1986, 278). More recently, a seven-element process of informed consent has been defended, comprising Threshold Elements (competence and voluntariness), Information Elements (disclosure, recommendation of a plan, understanding) and Consent Elements (decision in favour of a plan, authorization of the chosen plan) (Beauchamp
and Childress 2019). These elements reflect our view, which extends, adds to and reformulates earlier versions (Appelbaum and Grisso 2001). Competence to grant "informed consent" is a function of understanding, abilities and voluntariness, formulated as follows:

Consent in Clinical Practice and Research
I + (U + A) → C
C + V → ID
Information + (Understanding* + Abilities**) → Competence
Competence + Voluntariness → Informed Decision2

In the clinic, clear, accurate information about possible treatment options or research participation is disclosed to a person. If the person competently understands that information, they are then able to make a voluntary informed choice: to accept (or refuse) treatment, or to be (or not to be) a participant in a research project. The formulation illustrates that abilities are synonymous with the capacities that make decision-making possible, whereas competence relates to a state of intact decision-making abilities such that a decision should be honoured (Grisso and Appelbaum 1998, 11). The one study/procedure-one consent model is not useful in genomics (Clark 2016; Greenwood and Crowden 2021). Future re-use of data and samples cannot be accurately predicted, which means that participants must be re-contacted numerous times for consent to specific new studies. Several alternative models of consent have been suggested in light of these practical difficulties. Recently, the Australian Productivity Commission (2017) recommended open consent (which requires participants to consent to the access, sharing and linking of their personal information to sets of information, in the knowledge that the full purposes of future studies and the extent of further usage cannot be foreseen and that their confidentiality cannot be guaranteed), specific consent (equal to one study/procedure-one consent) and expanded consent (which includes broad, dynamic and meta-consent).
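The formulation can be rendered as a minimal executable sketch. It illustrates the logic only (it is not a validated assessment instrument), and all names here are our own:

```python
from dataclasses import dataclass

@dataclass
class ConsentAssessment:
    information_disclosed: bool      # Information element (I)
    understands_information: bool    # Understanding* of the disclosed information (U)
    # The four abilities** (A), i.e., the functional capacities:
    can_understand_relevant_info: bool
    can_appreciate_situation: bool
    can_reason_with_info: bool
    can_communicate_choice: bool
    voluntary: bool                  # Voluntariness (V)

    @property
    def competent(self) -> bool:
        # I + (U + A) -> C
        abilities = (self.can_understand_relevant_info
                     and self.can_appreciate_situation
                     and self.can_reason_with_info
                     and self.can_communicate_choice)
        return self.information_disclosed and self.understands_information and abilities

    @property
    def informed_decision(self) -> bool:
        # C + V -> ID
        return self.competent and self.voluntary

# As the clinic team judged K's situation: every element satisfied.
k = ConsentAssessment(True, True, True, True, True, True, voluntary=True)
print(k.informed_decision)  # True
```

The conjunctive structure makes the point of the formulation visible: if any single element fails (say, voluntariness is compromised), no informed decision results, however strong the remaining elements are.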
2 Where understanding* refers to understanding of disclosed information and abilities** refer to four abilities or functional capacities: the ability to understand relevant information; the ability to appreciate a situation and its likely consequences; the ability to manipulate information, i.e., to reason; and the ability to communicate a choice (Crowden 1993, 66; Appelbaum and Grisso 2001; Greenwood and Crowden 2021).

Practically, in the health care context relevant to this case, consent can be implied (K comes to the clinic to have the HD test and goes to the testing area), oral (K orally states that he agrees to have the HD test) or written (K is given accurate information about the HD test or a research project, makes an informed decision and signs a consent form agreeing to have the test or be a research participant). Consent can also be understood as being specific (K agrees to have one test, one specific treatment or to be a research participant in one specific project), extended (K agrees for his data and information to be used in a related future project) or unspecified (K gives consent for his data and/or information to be used in any future research). For some research the need for consent can also be waived. In the case of personal information in genomic medicine, a research ethics committee would need to review the justifications when there is a request for a waiver before granting approval. The researchers would need to convince the reviewing committee that the project is low risk, there is clear benefit, it is impracticable to obtain consent, participants would have consented if asked, privacy and confidentiality are protected, benefits including any financial gain are shared with participants, and that the research is consistent with all relevant laws (NHMRC 2007, updated 2018, 21). It is worth noting that a waiver of consent is not relevant to this case now, but it may be in the future. Consent can usually be withdrawn at any time, though in clinical research projects participants are likely to be asked to consent to allowing researchers to keep data if they choose to withdraw before a study is completed. From a philosophical perspective, each type of consent is, or is not, ethically justified depending on the context, the situation and the reasons given. While they are not always asked to do so (particularly in the clinic), clinicians and/or researchers are expected to be able to justify their reasons for determining and accepting a valid informed consent decision. The practical aspects of K's case provide an excellent illustration of why, even when we consider relatively identifiable disorders, consent in genomic medicine can be complicated. It is always important to establish the facts. When K left the clinic, he had willingly given consent for several different things to happen with his data and information:

1. Written specific consent with a GP, agreeing to have the HD test.
2. Written specific consent with a Clinical Geneticist, to be a participant in an international research project.
3. Written extended consent for his data and information to be used by the researchers in other research projects conducted by them.
4. Written unspecified consent for his data and information to be used by other researchers in other research projects.
5. Written unspecified consent for his data and information to be stored on an HD international database so that it may be accessed by other researchers (with a nominated data custodian and processes approved by a local HREC as well as an international IRB).
6. Written specific consent to being a participant and contributing his data to a local research project (Ph.D. project).
7. Written specific consent to being a participant and contributing his data to a local research project (data science project).
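Schematically, these seven consents, and K's subsequent blanket withdrawal, could be represented as simple records. This is a toy sketch with names of our own invention, not a design for a real consent-management system:

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    SPECIFIC = "specific"        # one test, treatment or named project
    EXTENDED = "extended"        # related future projects by the same researchers
    UNSPECIFIED = "unspecified"  # any future research

@dataclass
class ConsentRecord:
    purpose: str
    scope: Scope
    withdrawn: bool = False

consents = [
    ConsentRecord("HD test (GP)", Scope.SPECIFIC),
    ConsentRecord("international research project (Clinical Geneticist)", Scope.SPECIFIC),
    ConsentRecord("other projects by the same researchers", Scope.EXTENDED),
    ConsentRecord("use by other researchers", Scope.UNSPECIFIED),
    ConsentRecord("storage on the international HD database", Scope.UNSPECIFIED),
    ConsentRecord("local Ph.D. project", Scope.SPECIFIC),
    ConsentRecord("local data science project", Scope.SPECIFIC),
]

# K's phone call: he asks to withdraw every consent and have the data destroyed.
for record in consents:
    record.withdrawn = True

still_active = [r for r in consents if not r.withdrawn]
print(len(still_active))  # 0
```

In practice, of course, withdrawal does not map this cleanly onto each record: data already shared under unspecified consent with an international database may be difficult or impossible to retrieve, which is part of what makes K's request so hard for the treating team.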
K was unclear about all the different consents, and even thought that, when with the Clinical Geneticist, he had given consent to neurotechnological treatment. He had mistaken the clinician's enthusiasm about possible treatments for actual treatment. K's fear of potential treatments likely contributed to his change of heart. Clinicians should take care to ensure that their communications with patients are clear and considered.
Overall, as with any clinical challenge, it is important to stress that there are options in this case. An ongoing therapeutic relationship between K and the treating team must be developed and sensitively managed. K is asymptomatic now. However, the treating team is aware that K's test was positive. They are rightly concerned that, even though K is currently asymptomatic, he will pose a significant risk to others in his role as an air traffic controller as the disease progresses. They would like to talk to K, clarify the different consents, and explore his reasons for having the test and participating in research. They believe that it may be possible to separate the different consents, perhaps not tell K the test result, but retain his data and information for research. One way of doing this is through a person-centred approach influenced by the philosophy of place. Such an approach is particularly relevant to K, who is an Australian Aboriginal with a keen awareness of his connection to land and place. It has been well documented that, before colonial conquest, Australia's First Peoples lived a sophisticated hunter-gatherer life shaped by spiritual propagation (Sutton and Walshe 2021). On this view, for Australian Aboriginals, nature and society were mutually dependent. The First Australians created societies in which economic and spiritual adjustment to their environment flourished, and these lasted until colonial activity profoundly influenced the degradation of that environment. Australian Aboriginal and Torres Strait Islanders know that their Old Peoples' lived experiences were intrinsically connected to place. The Old People were clearly ecologically and spiritually at one with the land: a notion that we argue is potentially applicable to all humans and is therefore clinically relevant, not just for Kunmanara, but for all people.
As Seamon (2018, 45) notes: “human beings are always immersed in their world and that one central facet of this immersion is ‘being emplaced’ and situated via place.”
10.4.2 Place, Neurotechnology, and Bioethics

In this section we outline several important considerations for the case that has been illustrated. First, we argue that any response to this situation requires an understanding of all the contextual and interacting factors in order to come to the optimal solution. Our previous research has focused on the philosophy of place (e.g., Gildersleeve and Crowden 2019), and we believe this is a fundamental consideration for any situation where there are ethical challenges, conflicts or dilemmas such as the one that has been described. Place needs to be involved in the conversation because "no two people will have exactly the same experience of place" (Price 2013, 122). In other words, how we deal with the ethics of genetic testing involves looking at each individual as a unique case with a number of different factors to consider. Our philosophy of place emphasises that ethical problems need to be understood as a "fluid, dynamic field of constantly interacting elements", and coming to the best solution requires understanding how others and "how we are emplaced" (Seamon and Mugerauer 2012, 18). Coming to the right decision on genetic testing in the example given involves looking at the various interrelationships for the individual involved. For example, Valera and Bertolaso (2016, 46) say "we live in an intertwined
set of relationships, which continuously constitute and shape us." Furthermore, every individual "differs from another one depending on the kind of the specific interdependent relationships between the entities" (Valera and Bertolaso 2016, 46). This work is important because it helps us to avoid being misled toward reductionist views that ignore place. We support Valera and Bertolaso's (2016, 46) view that "the interpretation of the human being as a living being who is totally isolated, unfamiliar to the surrounding environment and not contaminated by the outside world is completely reductionist and considered inappropriate." In the case provided, a number of factors can change the solution to the equation, and that is why it is important to take a holistic view that considers the individual in their unique place in the world. For example, it can be imagined that the ethical outcome of this genetic testing would change if K's occupation were different. Would there still be the urgency to demand health information if K were not an air traffic controller? What if K were a musician? Would he be put under the same pressure to disclose this personal information? Surely there are interacting factors too, where a combination of the probability of developing a condition, the type of condition and the occupation of the individual will lead to situations where this pressure to disclose personal genetic information is always contextually different. This highlights that we cannot make universal, sweeping claims when it comes to the ethics of genetic testing and cases like the one we have provided. We need to look at things on a case-by-case basis, considering all the interacting elements that make up the unique place of the individual.
In other words, applying place to ethics is similar to the hermeneutic circle, where "the elements that make up a text can only be understood in relation to the unity of the text as a whole, while the unity of the text is only to be understood in terms of the elements that contribute to that unity" (Malpas 2008, 60). The second point that we would like to emphasise is that we are inclined to take a person-centred view of the sharing of genetic health information such as K's. We take this stance for a number of reasons, which again have a basis in our philosophy of place but are also linked to autonomy and Foucault's work on disciplinary power and biopolitics. As we have said, place is unique to each individual, and each individual should have the right to autonomy over their own place. In other words, an individual has a right to be authentic to who they want to be and to how they construct their place in the world, which aligns with the idea of autonomy, which "refers to self-government (auto-nomos) or the ability to direct one's actions and life according to one's own values" (Mackenzie 2014, 275). Obviously, the case presented highlights a conflict between K's individual choice and the treating team. How should such a conflict be thought about and resolved? We argue for a person-centred view because there needs to be respect for the autonomy of the individual "to make decisions of practical importance to their lives and to determine the direction of their lives in accordance with their beliefs, principles, and values" (Mackenzie 2014, 275). In other words, we need to respect the uniqueness of an individual to create their own place in the world, away from the pressure of disciplinary power or what Foucault also calls biopolitics. Pressuring an individual to give away their personal genetic information, as is a possibility in the case provided, can be considered biopolitical control and management. Biopower or
biopolitics is a focus on controlling and managing the "birth rate, education, discipline, health, and longevity of its population" (Deveaux 2010, 218). Foucault's work on biopolitics is very important because it reveals "the mechanisms for the control and regulation of our bodies" (Deveaux 2010, 218), where the "basic biological features of the human species became the object of a political strategy" (Foucault 2007, 1). Interestingly, a recent publication related to the case we present has appeared in this area, entitled Conquest of Body: Biopower with Biotechnology (Tratnik 2017). It is important to integrate biopolitics into the discussion because it allows us to investigate relationships between "individual freedom, governmentality and subjectivity" (Mayes 2015, 5). In other words, biopolitical mechanisms applied to neurotechnology can lead to ignoring the unique individual and their place in favour of biological categorisation and probabilities for "epidemiological, statistical and economic knowledge" (Foucault 2007, 350). This forgetting of the unique individual occurs because the "rise of the biological sciences from the eighteenth century challenges the idea that individuals can be separated from the whole. The idea that humans are not only part of the natural world, but biologically connected to each other destabilized the borders separating the individual-organism from the population-species" (Mayes 2015, 22). Therefore, "no longer conceived as a collection of distinct individuals with interests and rights, the population comes to be understood as a set of natural phenomena" (Blencowe 2012, 63). This biopolitical knowledge enabled "the regulation and government of life through town planning, insurance schemes and vaccination programmes" (Mayes 2015, 23).
10.5 Conclusion

In this chapter we have shown how relatively straightforward clinical encounters can quickly become philosophically complex. K's case illustrates the need for clinicians, healthcare teams and researchers to be ethically sensitive to those who seek their care and advice. People should be respected as living beings who are connected to others and to the surrounding environment. We suggest that a person-centred approach influenced by the philosophy of place is one way of ensuring that people's choices are respected, targeted consent processes are assured, and right decisions pertinent to specific contexts are made. Our point should be clear. If we determine that advances in neurotechnology, including the uninterrupted collection of genetic information in the name of research and scientific biodata banking, override stated individual preferences, as the case we have provided illustrates, then we may turn the individual into an object for the scientific gaze, losing sight of the person, place and unique individual behind the data. We therefore caution against the pressure to collect and use this personal data and information, as it could be equated with a biopolitical mechanism for control and regulation of the population. We should not ignore or misunderstand the importance of place, identity, autonomy and choice for individuals and communities.
References

Appelbaum PS, Grisso T (2001) MacCAT-CR: MacArthur competence assessment tool for clinical research. Professional Resource Press, Sarasota
Beauchamp TL, Childress JF (2019) Principles of biomedical ethics. Oxford University Press, Oxford
Berg JW, Appelbaum PS (1999) Subjects' capacity to consent to neurobiological research. In: Pincus HA, Lieberman J, Ferris S (eds) Ethics in psychiatric research: a resource manual for human subjects protection. American Psychiatric Publishing, Washington DC, pp 81–106
Blencowe C (2012) Biopolitical experience: Foucault, power and positive critique. Palgrave Macmillan, New York
Charlesworth M (1989) Life, death, genes and ethics: biotechnology and ethics. Boyer Lectures. Australian Broadcasting Corporation, Crows Nest, NSW
Clark A (2016) Genetics, genomics and society: challenges and choices. In: Kumar D, Chadwick R (eds) Genomics and society: ethical, legal, cultural and socioeconomic implications. Academic Press, London, pp 21–37
Commonwealth of Australia, Productivity Commission (2017) Data availability and use. Report no. 82
Crowden A (1992) Patient competence and informed consent. Masters thesis, Monash University, Australia
Davis D (2021) The secret body: how the new science of the human body is changing the way we live. The Bodley Head, London
Deveaux M (2010) Feminism and empowerment: a critical reading of Foucault. In: Hekman S (ed) Feminist interpretations of Michel Foucault. Penn State University Press, University Park
Eyal N (2019) Informed consent. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, Spring 2019 edn. https://plato.stanford.edu/archives/spr2019/entries/informed-consent
Faden R, Beauchamp TL (1986) A history and theory of informed consent. Oxford University Press, Oxford
Finkel A, Wright A, Shafique P, Williamson R (2018) Precision medicine. Office of the Australian Government Chief Scientist Occasional Paper, Australia
Flanagan O (2010) The problem of the soul: two visions of mind and how to reconcile them. Basic Books, New York
Foucault M (2007) Security, territory, population. Palgrave Macmillan, London
Gildersleeve M, Crowden A (2019) Genetic determinism and place. Nova Prisutnost 17(1):139–162
Greenwood J, Crowden A (2021) Thinking about the idea of consent in data science genomics: how "informed" is it? Nurs Philos 22(3). https://doi.org/10.1111/nup.12347
Gregor M (trans) (1997) Kant I: Groundwork of the metaphysics of morals. Cambridge University Press, Cambridge
Grisso T, Appelbaum PS (1998) Assessing competence to consent to treatment: a guide for physicians and other healthcare professionals. Oxford University Press, New York
Harari YN (2017) Homo deus: a brief history of tomorrow. Random House, New York
Mackenzie C (2014) Autonomy. In: Arras JD, Fenton E, Kukla R (eds) Routledge companion to bioethics. Routledge, London
Maclean A (2009) Autonomy, informed consent and medical law: a relational challenge. Cambridge University Press, Cambridge
Malpas J (2008) Heidegger's topology: being, place, world. MIT Press, Cambridge
Mayes C (2015) The biopolitics of lifestyle: Foucault, ethics and healthy choices. Routledge, London
McLean SAM (2010) Autonomy, consent and the law. Biomedical law and ethics library. Routledge-Cavendish, Oxon
Meyran R (2021) The tyranny of algorithms: a conversation with Régis Meyran/Miguel Benasayag. Europa Editions, London
Müller O, Rotter S (2017) Neurotechnology: current developments and ethical issues. Front Syst Neurosci 11:93
10 Neurotechnology, Consent, Place, and the Ethics …
127
National Statement on Ethical Conduct in Human Research (2007, updated 2018) The National Health and Medical Research Council, the Australian Research Council and Universities Australia. Commonwealth of Australia, Canberra NSW Government Centre for Genetics Education (2021) Fact sheet 19: ethical issues in genetics and genomics. Government Centre for Genetics Education, Australia Parfitt D (2011) On what matters, vol One. Oxford University Press, Oxford Pomerantz J (2015) Metadata. MIT Press, Cambridge Price PL (2013) Place. In: The Wiley-Blackwell companion to cultural geography. Wiley Rollin BE (2006) Science and ethics. Cambridge University Press, Cambridge Seamon D, Mugerauer R (2012) Dwelling, place and environment: towards a phenomenology of person and world. Springer, London Seamon D (2018) Merleau-Ponty, lived body and place: toward a phenomenology of human situatedness. In: Hünefeldt T, Schlitte A (eds) Situatedness and place. Springer, Cham, pp 41–66 Skorupski J (2006) Why read mill today. Routledge, London Sutton P, Walshe K (2021) Farmers or hunter gathers?: The dark emu debate. Melbourne University Press, Australia Tratnik P (2017) Conquest of body: biopower with biotechnology. Springer, London Valera L, Bertolaso M (2016) Understanding biodiversity from a relational viewpoint. Tópicos (méxico) 51:37–54 Young R (1986) Personal autonomy: beyond negative and personal liberty. Croom Helm Ltd, Kent
Chapter 11
Neuro-Rights and Ethical Ecosystem: The Chilean Legislation Attempt Enrique Siqueiros Fernández and Héctor Velázquez Fernández
Abstract The challenges of the Fourth Industrial Revolution and its global automation of processes and services imply new risks of violations of our privacy and free decision-making, given new technologies that would analyze the brain functioning underlying our emotions, thoughts, and reactions. Neurorights aim to protect our personal identity and the exercise of free will through new legislation, as in the case of Chile, that regulates this type of technology. In this text we argue that neurorights must involve an ethical ecosystem and a certain conception of the human being, one that considers all global technological, legal, and anthropological interactions, to guide us on what to preserve or remove from our relationship with new technologies.

Keywords Neurorights · Ethical ecosystem · Free will · Artificial intelligence · New technologies

E. S. Fernández Facultad de Filosofía, Universidad Panamericana, Mexico City, México

H. V. Fernández (B) Centro Sociedad Tecnológica y Futuro Humano, Facultad de Humanidades, Universidad Mayor, Santiago de Chile, Chile

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_11

Along with the undeniable human benefits of the Fourth Industrial Revolution, the development of so-called artificial intelligence has not only brought well-known threats to human rights, such as job automation, privacy issues, surveillance, automated decision-making (from algorithmic discrimination to autonomous weaponry), and the collective manipulation and imitation of people, among others (Stahl 2021), but has also intensified, on the one hand, social inequalities (Burrows and Mueller-Kaler 2021) and, on the other, competitive relations within and between political and business organizations. We have seen how the technological acceleration achieved since the origins of the internet has greatly disrupted social organization. In the legal machinery, for instance, it has opened, and will continue to open, numerous legal loopholes of a political, civic, and business nature (Liu et al. 2008). At the same time, new sources
of ethical reflection have emerged outside the traditional academic centres (Fjeld et al. 2020), from leading companies in the field to cosmopolitan associations. So, in this volatile environment, to keep up with regulating technology, there have emerged in recent years, alongside the control networks known as governance and the business efforts to adapt to legal norms known as compliance, ethical principles and strategies that guide the former (Fjeld et al. 2020). These principles and networks of ethical reflection, beyond amendments or patches to protect national, corporate, or individual interests, are designed within humanistic frameworks not only to constrain technological developments in order to protect human rights, but also to understand their reach and power and to help shape them from the bottom up to promote the well-being of our already digital societies; that is, they are ethical by design. We anticipate that, in the analysis of ethical issues, principles and mechanisms will be evaluated according to their level of complexity (Velázquez 2020), since we are dealing with a complex problem: from the most mechanical to the most organic.

The main difference between compliance and ethical design, two moments in the same process of humanizing technology, lies in three aspects of their nature: their main purpose, their implementation phase, and their operational logic. The main purpose of compliance is to protect the organization and its shareholders' interests; its implementation occurs when they are at risk of penalties; and its operational logic is linear or mechanical, since it follows analogues of Newton's first and third laws: (1) in general, as long as there is no risk of penalty, there are no regulations blocking the free flow of technological development, and (2) for each problem there is an artificial, vertical creation of an instance, norm, process, or action that counteracts it. For the same reason, it is more corrective than preventive.
On the other hand, ethics by design takes into account the interests not only of shareholders but of all stakeholders, and it operates with a more complex or organic logic, a networked (Velázquez 2020) or rhizomatic structure (Deleuze and Guattari 1994), since it considers environments and proposes principles, relationships, actions, and reactions in the medium and long term, so that ethical behaviours can emerge that benefit society and prevent human rights violations. The legal exercise is also mechanical in its implementation. Since the end of the twentieth century, according to Hirokiyo Furuta, the term compliance has been used to designate a legal area in charge of protection through regulation. "It became a slogan in the American corporate world: in a legal context, this means to obey legal orders and fully realize the rule of law. [...] With the goal of increasing business efficiency, reducing risk, and heightening confidence from the market. This is all based on strong trust in law" (Furuta 2015, 1). The objective of compliance is to create a legal machinery that protects the interests of a company and its shareholders against the risks of legal non-compliance. We say that compliance is a legal machine because it works mechanically, on a linear logic driven by the individual force of self-interest in the face of the "risk that an organization may suffer sanctions, fines, financial losses or losses of its reputation as a result of the non-compliance with laws, regulations, self-regulation norms or codes of conduct" and the consequent governmental, national, and international penalties: nationally, based on laws, and internationally, based on
memberships such as the OECD and certifications such as ISO. In this paper we will analyse the idea of neuro-rights in the Chilean legislative attempt from the perspective of the evolution of an ethical ecosystem.
11.1 What is an Ethical Ecosystem?

Although we are aware of the epistemological limitations and ethical risks of the ecosystem notion, we chose the expression because "it helps us understand the complexity of the debate and gives perspective for practical interventions" (Stahl 2021, 3), in contrast to the mechanical, rationalist, and therefore overly reductionist approach favored by the civil law tradition (Dainow 1966) prevailing in Chile. This complex way of reading reality responds to one of the contemporary scientific paradigms: "complexity as a method," which implies "reading reality as a reticular whole [...] trying to identify global interactions, rather than deductions from one cause to another," and "complexity as a worldview," which seeks "to understand that the world as a whole is a kind of changing organism where each element plays equally a leading role" (Velázquez 2020, 187). Having set out our method and commitments, we will clarify the terms, starting from the science predominant in this complex worldview: biology. In biology, an ecosystem is "a geographic area where plants, animals, and other organisms, as well as weather and landscapes, work together to form a bubble of life" (Rutledge et al. 2011, in Stahl 2021, 82), that is, an interconnected environment where biotic relationships occur. "Biotic interactions [competition, cooperation, predation, etc.] are those relationships established between two or more organisms [...] that may be benefited, harmed or not affected, depending on the context in which they occur" (Boege and del Val 2011, 1). So, from a technological perspective: "All AI ecosystems [...] cover technical, policy, economic, legal, social, ethical and other aspects that closely interact with AI and are very influential in the way ethical and related issues materialise, are perceived and can be addressed" (Stahl 2021, 92).
This approach will help us analyse, from a broader perspective, the attempt to establish neuro-rights in Chilean law, and will contribute to the dialogue between the various disciplines involved in the development and governance of artificial intelligence.
11.2 The Case of Neuro-Rights in Chilean Law

Since 2017, the neuroscientist Rafael Yuste, along with other academics from Columbia University, has expressed concern about the development of technologies that interact with the brain, considering it essential to frame them within ethical criteria. Yuste has worked for years on projects that seek new methods to study brain activity, not in the individuality of each neuron but in the complex systemic interaction of neurons with one another, with the intention of understanding what
happens at the brain level and being able to intervene in it to treat disorders such as Alzheimer's, schizophrenia, or depression. But in the search for a better understanding of brain processes through new neurotechnologies, possibilities of neuronal intervention and modification of the human mind arise, which ethically and socially challenge this research. Hence the emergence of the concept of neuro-rights, which contemplates the protection of mental activity against future technical developments in brain intervention, especially those that could compromise activities of the mind related to our personal identity, our reasoning abilities, or the exercise of our freedom. Faced with the irreversible development of neuro-technologies, Yuste proposes that neuro-rights go beyond the protection of the human being already provided by the 1948 Universal Declaration of Human Rights. He also proposes drafting a neurotechnological oath which, in imitation of the Hippocratic oath, would provide guidelines for biotechnologists to express their ethical commitment to the limits and scope of their research, so that it works in support of the human being and not against them. The concept of neuro-rights has quickly entered the discussion on the desirability of ethical and legal protection of human mental identity, both in the European Union and in the United States, and particularly in Chile. In this South American country, the Senate, through its Future Challenges Commission, has presented and approved for debate a bill to modify Article 19, number 1, of the Chilean Constitution, with which, for the first time worldwide, an attempt is made to regulate current or future technologies that interact with the brain and that could eventually violate sensitive elements of the person without their consent.
This law aims to protect mental data and "mental integrity in relation to the advancement of neuro-technologies"; it seeks to cover different areas of brain intervention, whether for health reasons or of any other nature, through the rights to mental privacy, physical integrity, decision-making capacity, equality in the face of neuro-enhancement technologies, and protection against ideological biases. The presentation of this Chilean law coincided with Elon Musk's announcement of new brain-brain and brain-machine interfaces which, by having access to people's thoughts, could put the mental privacy of users at risk, as could Facebook's data helmets. The idea of a legal project like this is to prevent artificial intelligence algorithms from containing biases and from capturing brain information from which models can be built that predict a person's mental reactions or responses to various stimuli, such as political messages, advertising campaigns, or news items. This law received an unusual unanimous endorsement from all the political forces of Chile. Its promoters recognize the potential of neurosciences and neurotechnologies in the care of illnesses such as Alzheimer's, Parkinson's, or schizophrenia, but at the same time they are concerned that developments could be carried out that manipulate the brain, introduce thoughts, or access the unconscious. The bill thus seeks to establish neuro-rights and their protection as an extension of human rights, with autonomy and free will as essential elements of persons. The law establishes that neural data have the same status as the organs of the body and therefore penalizes their trafficking or manipulation unless there is a medical indication.
The intention is to legislate in advance of the application of technological developments and not a posteriori, as has happened with digital platforms, which have undergone different types of regulation, especially by the European Union, in the face of the non-consensual commercialization of internet users' browsing logs. The project to protect mental integrity in relation to the advancement of neurotechnologies (Senado de Chile 2020a) affirms that "Physical and mental integrity allows people to fully enjoy their individual identity and freedom. No authority or individual may, through any technological mechanism, increase, decrease or disturb said individual integrity without due consent." And the bill on the protection of neurorights and mental integrity and the development of research and neuro-technologies (Senado de Chile 2020b) "prohibits any intrusion or form of intervention in neuronal connections at the brain level by means of neurotechnology, brain-computer interfaces or any other system or device that does not have the free, express, and informed consent of the person or user of the device, even in medical circumstances. It prohibits any system or device, be it neurotechnology, brain-computer interface or other, whose purpose is to access or manipulate neuronal activity, invasively or non-invasively, if it can damage the psychological and psychic continuity of the person, that is, their individual identity, or if it diminishes or damages the autonomy of their will or their ability to make free decisions." Both formulations were celebrated by legislators and experts as a sign that it is possible to anticipate the risks posed by new technologies, unlike what commonly occurs, when legislation lags far behind the appearance of not entirely desirable consequences of scientific and technological advances.
Despite the broad approval of this initiative by sectors as diverse as Chilean politics, the legislature, academia, and the technology business, there have been critical voices that draw attention not so much to the laudable intention of protecting human vulnerability in every person and its dimensions, but to the foundations on which the argumentation rests. In the next sections we address critical approaches to this legislation from very different points of view.
11.3 Chilean Neuro-Rights from a Juridical Perspective

Legislating a phenomenon that we do not fully understand and that is not yet an actual threat complicates the legal exercise even further, for several reasons. Overregulation can bring institutional paralysis, because authorities may lose clarity about their functions; it can also slow down technical and social development and could foster corruption (Ibarra 2012). An organic approach would promote fewer laws and more exercise of jurisprudence.
In the text "¿Neuroderechos? Razones para no legislar" (Neurorights? Reasons for not legislating), Alejandra Zúñiga Fajuri, Luis Villavicencio Miranda, and Ricardo Salas Venegas, from the Research Center for Philosophy of Law and Criminal Law of the University of Valparaíso, argue that with this initiative on neuro-rights legislators do not seem to understand exactly what they want to protect (Zúñiga et al. 2020). Current and developing neuro-technologies allow the nervous system to be connected to technical devices, and neuro-rights seek to protect people from the various risks that would arise from these neuro-technologies. But for these authors the first problem is that the notion of neuro-rights is a conceptually vague formulation; since these rights are not recognized worldwide, they can only be understood as the search for protection against the risks of neuro-technologies. And in the opinion of these authors, this law seeks such protection in the wrong way, because the ambiguity of the notion of neuro-rights does not allow us to distinguish it from a precaution against new threats to rights that are already legally protected in Chile through the Constitution, international treaties on fundamental rights, or local laws. This is not uncommon: ordinarily, rights already guaranteed face new threats, as happens with the right to privacy, which, although it comes from medieval legal thought, maintains its validity despite its variations over time: first it regulated the interference of a minstrel who spied on the court of a king, and now it addresses the violation that facial-recognition cameras installed on a drone may commit when prying on bathers in a penthouse pool.
Thus, according to these authors, if a transnational company or the State used some device, method, or instrument to connect with the nervous system and read, for example, a person's thoughts, it would not be violating a new right but the already recognized right to privacy or, where appropriate, to physical or mental integrity. Suggesting that protecting human beings against these new threats requires new rights would therefore make those rights legally redundant. The Chilean Constitution guarantees in its first article equal freedom in dignity and rights to all people, while Article 19 establishes the right to life and to physical and mental integrity, respect for personal data, freedom of conscience, individual security, health protection, property rights, and even intangible assets, among many others. Likewise, Chilean law already in force before the neuro-rights bill contemplates the protection of data concerning personal privacy, personal habits, ideologies, beliefs, and convictions, as well as the right to have previous data forgotten or cancelled. In the opinion of the authors of this criticism, claiming the recognition of new rights, when there is a robust list of fundamental rights that may receive new forms of threat, trivializes the civilizational conquest achieved by the protection of those fundamental rights.
11.4 Cartesian Fallacy in the Understanding of Neuro-Rights

In Why AI is Harder Than We Think (2021), Melanie Mitchell addresses four popular fallacies regarding our understanding of AI. In fallacy number four, "Intelligence is all in the brain," she points out the mistake of approaching the complexity of human intelligence from a Cartesian dualism. Zúñiga, Villavicencio, and Salas point out the same problem: the proposed new rights aim to protect a specific part of the body, the brain, when the actions attributed to it, such as thought or sensation, properly speaking belong not to the brain itself but to the entire subject, since the subject is a psychophysical unit that reasons, has emotions, uses language, and is self-conscious. They conclude that it is also "infrequent and inappropriate to create legal norms in order to regulate a field of knowledge with such incipient findings." They see a serious problem in assuming concepts such as the "psychological and psychic continuity of the person," whose legal, philosophical, or biological reference is imprecise, because no discipline provides an objective definition to serve as a foundation. In their opinion, beyond the good intention of protecting the human being from possible improper uses of new technologies, a proposal such as the protection of neuro-rights basically stems from a certain search for innovation or legal originality to place Chile at the vanguard, in the face of a problem whose legal treatment is poorly framed. Regardless of the conditions of this legal initiative, its possible legal defects, or the hidden or declared intentions of those who proposed it, it is interesting to note how an ethical environment is being articulated around the new technologies. It is understandable that this requires new notions, with disruptive semantics and complex approaches that force us to think outside the box.
However, these notions are also a warning about the need for precision in conceptualization when we speak of a human phenomenon, because we cannot trivialize any of its aspects; they are all relevant.
11.5 Ecosystemic Approach

From an ecosystemic approach, that is, the consideration of full environments and their interactions, we can list the following problems with early regulation in this matter:

• It hinders technological development that can contribute to social well-being, and it can leave Chile behind in the technological race. In the article "Is Regulation Killing Innovation in Health Care?", Gideon Kimbrell (2018) states that "innovation is the cornerstone of business, but in today's era of ultra-fast evolution, governmental regulation can create unfortunate drags on innovation." Regarding the technological race, in the article "Analyzing artificial intelligence plans in 34 countries,"
Fatima et al. (2021) show that the countries less advanced in artificial intelligence tend to spend more resources on regulating technology than on developing it.

• It does not understand the nature of this development and its impact in the context of Chilean society, a fundamental concern from an ecosystem perspective: "All AI ecosystems are embedded in environments which partly shape them but in turn are shaped by them" (Stahl 2021, 92).

• It does not understand the true nature of the problem, nor the radical and organic responses that can shield the population from the reach of new technologies, for instance by taking the path of education. In his project "Experiencias de innovación en la enseñanza de ética para ingenieros" (Experiences of innovation in teaching ethics to engineers), Gonzalo Génova, a member of the Computer Science Department of Universidad Carlos III, describes the experience of teaching ethics to engineers so that they are trained in jurisprudence and critical thinking (Génova and González 2014, 3).

As we have shown, this Chilean initiative for a law on neuro-rights involves several lines of protection that are not entirely free of brain-centric reductionisms (Velázquez 2016). It speaks of the protection of mental privacy as if consciousness, that act of doubling by which we can become distant spectators of ourselves, could be reduced to a set of information that can be encoded and deciphered by an algorithm. It speaks of the protection of decision-making capacity under the assumption that neurophysiological conditioning is to some extent deterministic, as Benjamin Libet-style experiments assumed. But there is a point at which this law shows the difficulty of keeping a balance between assimilating the benefits of biotechnological advances and, at the same time, seeking to surround them with an ethical ecosystem: the law sets out to guarantee equality in the face of neuro-enhancement technologies.
The aim is that there should be no sectors that benefit from technologies seeking to increase cognitive, behavioural, or physical capacities while others, not receiving those benefits, become second-class citizens. It is striking that, on the one hand, equity of access to biotechnological enhancement technologies is sought while, on the other, the law intends to safeguard personal identity, mental content, and free will, when precisely some of the interventions in question seek an AI-based super-intelligence, a super-happiness that modulates human behaviour through substances and drugs to eliminate angry or overly passionate responses, or super-longevity. All these objectives involve an intervention in the exercise of freedom, in personal identity, or in the generation of our thoughts. You cannot serve two masters: either you seek protection against biases, or you seek equitable access to technologies that contain biases. Every ethical ecosystem that wishes to articulate global criteria for valuing the human being in a technological environment must take on the challenge of deciding what it understands by being human, what it wants to preserve of it, what it seeks to modify, from what status and towards what condition (Velázquez 2021). Avoiding this discussion only trivializes what an ethical environment means in the context of the current knowledge and technology society.
References

Boege K, del Val E (2011) Bichos vemos, relaciones no sabemos: diversidad e importancia de las interacciones bióticas. Ciencias 102:5–11. https://www.redalyc.org/pdf/644/64421308001.pdf
Burrows M, Mueller-Kaler J (2021) Scenarios for a future AI world. Atlantic Council
Dainow J (1966) The civil law and the common law: some points of comparison. Am J Comp Law 15(3):419–435
Deleuze G, Guattari F (1994) Rizoma. Introducción. Pre-Textos, Coyoacán, México
Fatima et al (2021) Analyzing artificial intelligence plans in 34 countries. https://www.brookings.edu/blog/techtank/2021/05/13/analyzing-artificial-intelligence-plans-in-34-countries/
Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M (2020) Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society
Furuta H (2015) Origins and history of compliance. Chuo University Press, Japan
Génova G, González M (2014) Experiencias de innovación en la enseñanza de ética para ingenieros. Jornadas Internacionales de Innovación Docente en la Enseñanza de la Filosofía, Madrid, 5–7 November 2014. https://gonzalogenova.files.wordpress.com/2015/06/experiencias-epi.pdf
Ibarra LG (2012) La sobrerregulación: freno al desarrollo y causa de corrupción. Seminario de Estudios Jurídicos Económicos, UNAM, México
Kimbrell G (2018) Is regulation killing innovation in health care? Forbes Technology Council, Forbes, 19 March 2018
Liu H-Y et al (2008) Artificial intelligence and legal disruption: a new model for analysis. Law Innov Technol 12(2):2–22
Mitchell M (2021) Why AI is harder than we think. Santa Fe Institute, Santa Fe
Senado de Chile (2020a) Boletín B1382719. https://www.senado.cl/appsenado/templates/tramitacion/index.php?boletin_ini=13827-19
Senado de Chile (2020b) Boletín B1382819. https://www.senado.cl/appsenado/templates/tramitacion/index.php?boletin_ini=13828-19
Stahl BC (2021) Artificial intelligence for a better future: an ecosystem perspective on the ethics of AI and emerging digital technologies. Springer, London
Velázquez H (2016) Persona, identidad y naturalismo: del personhood al brainhood. In: Carbonell C, Flamarique L (eds) De simios, cyborgs y dioses. La naturalización del hombre a debate. Biblioteca Nueva, Madrid, pp 225–231
Velázquez H (2020) ¿Qué es la naturaleza? Introducción filosófica a la historia de la ciencia. Porrúa, México
Velázquez H (2021) La revaloración del concepto de naturaleza humana en la sociedad tecnológica. Una introducción a propósito del human enhancement. Persona y Derecho 84(1):373–395
Zúñiga A, Villavicencio L, Salas R (2020) ¿Neuroderechos? Razones para no legislar. https://www.ciperchile.cl/2020/12/11/neuroderechos-razones-para-no-legislar/
Part III
Neuroprotection and Human Rights: New Challenges
Chapter 12
Mental Privacy and Neuroprotection: An Open Debate Abel Wajnerman and Pablo López-Silva
Abstract Current advances in neurotechnology are allowing the gradual decoding of the neural information at the basis of a number of conscious mental states with an unprecedented level of accuracy. Such developments, it is suggested, might give scientists the possibility of 'reading minds', opening the debate about how to protect mental privacy, that is, the control that subjects have over access to their own neural data and to all relevant information about their mental processes and states that can be obtained by analyzing such data. In this chapter, we oppose those who deny the urgent need for a discussion of this issue, offering some arguments to motivate and inform the debate. Finally, we examine some of the problems contained in the organic approach to mental privacy, namely, the idea that neural data should be protected by the laws for organ transplantation and donation.

Keywords Mental privacy · Neuroprotection · Neurotechnologies · Neurorights
A. Wajnerman (B) Faculty of Philosophy and Humanities, Universidad Alberto Hurtado, Santiago de Chile, Chile

P. López-Silva School of Psychology, Universidad de Valparaíso, Valparaíso, Chile

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_12

12.1 Mental Privacy and the Challenge of Consensual Regulatory Frameworks

During the last decade, different government agencies around the world have led a neurotechnological revolution. The creation of the U.S. BRAIN Initiative in 2013 aimed at developing novel ways to record and manipulate the neural activity of brains with unprecedented single-neuron specificity (Alivisatos et al. 2013, 2015). Motivated by this, China, Korea, the European Union, Japan, Canada, and Australia created similar large-impact projects in the following years (Rommelfanger et al. 2019). In 2017, all these initiatives converged in the creation of the International Brain Initiative, catalyzing and advancing neurotechnological research through international collaboration and knowledge sharing (International Brain Initiative 2020). Importantly, current developments associated with these initiatives are leading to the
creation of neurotechnologies that might help decode the specific information in the brain at the basis of the production of conscious mental states such as perceptions, voluntary bodily movements, and thoughts, among many others (Alivisatos et al. 2013, 2015). Accordingly, such technologies might gain direct access to neural data with an unprecedented degree of accuracy. In this respect, Yuste (2020a), one of the lead scientists of the U.S. BRAIN project, has recently warned the international community that such advances might give us the unprecedented ability, both in terms of scope and reliability, to read mental states. The idea is that these technologies could allow the decoding of specific information about mental states or processes by analyzing and interpreting data about neural activity patterns and, as a consequence, that mental states could be "written," "registered," and "decoded" by modulating neural computation. This very possibility raises different conceptual, ethical, and legal worries in light of the lack of international regulatory frameworks (laws or treaties) covering potential commercial and military applications (e.g., Ienca et al. 2018; López-Silva and Madrid 2021). For this reason, the development of legal regulatory frameworks has become a global priority (Yuste et al. 2021). Taking this issue into consideration, during 2020 the Senate of the Republic of Chile pioneered a legal framework seeking to regulate neurotechnological developments and applications. The proposal consists of a constitutional reform bill (Senate of Chile-Bulletin 13.827-19, 2020b) and a bill for the creation of specific neurorights laws (Senate of Chile-Bulletin 13.828-19, 2020a).
In the same vein, Article 24 of the Spanish Charter of Digital Rights, recently announced by the Secretary of State for Digitalization and Artificial Intelligence of the Government of Spain, is a step toward establishing specific rights for the regulation of neurotechnology. Both the Chilean and the Spanish proposals are strongly inspired by the framework developed by the Morningside Group, an interdisciplinary group that aims to motivate theoretical and legal discussion around the concept of neurorights (Yuste et al. 2017). Specifically, the group has proposed the creation of five key neurorights: the right to personal identity, the right to free will, the right to mental privacy, the right to equal access to cognitive enhancement technologies, and the right to protection against algorithmic bias (Yuste et al. 2017). These rights would expand or specify already existing international human rights for the protection of human dignity, liberty and security of persons, non-discrimination, equal protection, and privacy. The guiding idea behind this proposal is that, in their previous versions, these rights address ethically relevant dimensions of human life in a very generic way, often subject to broad interpretation. For this reason, the regulation of the use of neurotechnology in clinical and non-clinical contexts would require greater specificity. Due to the global impact of the potential misuses of neurotechnologies, Yuste et al. (2021) have recently identified the UN as an appropriate forum for the discussion about the creation of an international neuroprotection framework. They suggest the creation of an “International Science and Law Expert Commission on Neurorights” in order to develop an international consensus definition of neurorights and a new international human rights treaty. For Yuste, Genser, and Herrmann, this process should include regular consultations with key countries that created advanced neurotechnological research
programs, as well as with countries that already have neuroprotection regulations, such as Chile and Spain (López-Silva and Madrid 2021). However, the debate about the creation of consensual concepts underlying legal neurorights-focused frameworks seems far from simple. A good example of this problem is the discussion about which definition of the concept of mental privacy we should endorse. This is, simply put, the idea that we should have control over access to our neural data and to the information about our mental processes and states that can be obtained by analyzing it (e.g., Ienca and Andorno 2017; Lavazza 2018; Wajnerman-Paz 2021). Arguably, this notion can be regarded as one of the most relevant concepts within the current debate about the creation of specific neurorights-focused laws, which articulate different interpretations of mental privacy. Finding global consensus about the definition of mental privacy may be especially challenging. The remainder of this chapter is devoted to examining some of the most fundamental aspects of this specific discussion.
12.2 Exploring Mental Privacy

12.2.1 Contextualizing the Debate: Neuroprotection and the Role of Neuroethics

Although some of the main concepts related to our target discussion have been debated for decades in philosophy, the concept of “neurorights” constitutes a recent resource created to face the particular challenges that the potential misuses of neurotechnologies impose on ethical and legal debates. Since its origins a couple of decades ago, neuroethics has been concerned with conceptual updating in this specific context. One of the particularities of this area is that it cannot be understood merely as an applied ethics (an “ethics of neuroscience”), which employs some philosophical ethical theory to determine what to do in a particular clinical or research context, such as deciding whether a patient with a given psychiatric condition has the capacity to consent to treatment or whether to preserve the life support of a patient in a reduced state of consciousness. Neuroethics, additionally and fundamentally, involves a “neuroscience of ethics” (Roskies 2002) or a “conceptual neuroethics” (Farisco et al. 2018), which can be understood as a critical application and philosophical elaboration of neuroscientific concepts or models to deepen our understanding of key ethical concepts, such as those of conscience, autonomy, agency, identity, and responsibility. Thus, neuroscience (e.g., studies on the neural mechanisms of decision making or social cognition) together with philosophy (e.g., relational approaches to agency or enactivist theories of cognition) could help to deepen a concept such as mental privacy. The application of such a concept by the neuroethicist (e.g., to determine whether a neurotechnological intervention constitutes a violation of a patient’s mental privacy) will always be mediated by this process of scientific and philosophical reformulation of ethics.
In this case, the motivation for conceptual change is not merely given by the progress of neuroscientific research on our ethically relevant cognitive capacities, but mainly by the new ethical risks imposed by the development of neurotechnology. Experts in law, philosophy, and neuroscience have recently argued that the safeguards provided by existing human rights constitute insufficient protection against the threats or risks imposed by emerging neurotechnology, and that a “neurocognitive” update of some of these rights is necessary (Yuste et al. 2021). Neurorights such as mental privacy, psychological continuity, or cognitive freedom extend or deepen fundamental ethical and legal concepts to include aspects of human life that had not previously been considered and that new technologies could jeopardize. For Ienca and Andorno (2017), this process is comparable to the human-rights reformulation that occurred some decades ago in response to the challenges imposed by emerging technologies for processing genetic data and manipulating the human genome. Although the conceptual updating of ethical and legal notions has been successfully accomplished before, several issues concerning the efficiency and feasibility of the neurorights proposal still need to be addressed. For example, it has been debated to what extent pre-existing human rights and ethical notions are no longer sufficient to characterize and protect the dimensions of human life targeted by neurorights (e.g., Shen 2013). Additionally, it has been discussed whether the neurotechnologies to be regulated really put these dimensions at risk, to the point of warranting the development of new legal safeguards (e.g., Meynen 2019). In what follows we will explain how these issues are articulated in relation to mental privacy.
12.2.2 Skepticism About Mental Privacy

The accelerated development of neurotechnologies has provided unprecedented access (in terms of increasing scope and reliability) to the brain processes underlying mental and motor behavior. In a classic study, Haynes et al. (2007) showed that a person’s covert decision to perform one of two tasks can be decoded from activity in medial and lateral regions of the prefrontal cortex, a potential neural substrate for prospective memory. In this context, neurotechnological mind-reading comprises a wide variety of applications, including not only the interpretation of neural activity patterns in isolation, but also the use of neural responses to consciously perceived stimuli (e.g., P300 signals) to identify experiences of recognition (Rissman et al. 2010), and the use of subliminal stimuli for detecting sexual preferences (Wernicke et al. 2017) and empathic responses (Chiesa et al. 2017). These developments have raised a general concern about mental privacy in light of the potential undermining of our control over access to our neural data and mental information. However, a number of authors declare themselves skeptical about the real possibilities of non-experimental mind-reading. Let us examine this approach. Some have argued that, despite the accelerated development of neurotechnologies, we should not worry yet about how they can affect mental privacy. For instance, it
has been argued that many of the applications of this type of technology still have significant limitations in their ability to really “read minds.” For example, one of the main concerns regarding mental privacy is the possibility of neurotechnologically decoding the contents of our mental states, that is, whether we are thinking about a house or a boat, whether we feel love towards a particular person, or what particular event we are remembering. However, many of the studies that allow us to distinguish the content of thoughts have a multiple-choice design in which both the person and the decoding algorithm can only choose from predetermined options (Meynen 2019). We can call this an ecologically based skepticism about mental privacy, namely, the idea that the methods used to claim that neurotechnological devices can “read minds” are too far removed from real-life situations in terms of setting and scope. Commonly, a patient’s specific thoughts are “read” in neurotechnological experimental settings using a template of fixed categories (e.g., faces or houses). At the same time, the decoding of those mental states is based on previous measurements of the brain activity evoked in the subject by those categories (e.g., Haxby et al. 2001; Cox and Savoy 2003). Such methodological circularity would make it very hard to observe real-life mind-reading in situations lacking such a complex and predetermined setting. For this reason, some authors hold that these applications of fMRI are a far cry from a situation where, for example, authorities extract directly from a suspect’s mind all of her experiences, memories, thoughts, and feelings related to a crime. Consequently, it is suggested that such developments lack unlimited real-time access to just any specific content of the mind.
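To make the ecological worry concrete, template-based (“multiple-choice”) decoding can be sketched in a few lines. Everything below is toy, illustrative data: the patterns, the two-category set, and the nearest-template correlation classifier are our own simplifying assumptions, not the actual pipeline of Haxby et al. (2001). The point the sketch illustrates is that such a decoder can only ever answer with one of its predetermined categories.

```python
# Minimal sketch of multiple-choice decoding in the spirit of template-based
# MVPA. Toy numbers throughout; no real fMRI data is involved.

def mean_pattern(trials):
    """Average a list of activity patterns (lists of voxel values)."""
    n = len(trials)
    return [sum(vals) / n for vals in zip(*trials)]

def correlation(x, y):
    """Pearson correlation between two voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def decode(pattern, templates):
    """Return the predetermined category whose template best matches."""
    return max(templates, key=lambda cat: correlation(pattern, templates[cat]))

# "Training": brain activity evoked by the known categories is measured first.
training = {
    "face":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    "house": [[0.1, 0.9, 1.0], [0.2, 1.0, 0.8]],
}
templates = {cat: mean_pattern(trials) for cat, trials in training.items()}

# "Test": a new pattern is forced into one of the two known categories,
# whatever the subject was actually thinking about.
new_pattern = [0.95, 0.25, 0.15]
print(decode(new_pattern, templates))  # always "face" or "house", never more
```

Whatever the input, `decode` returns an element of the fixed template set, which is precisely the limitation the ecological skeptic points to.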
It has also been suggested that access to non-neural data poses a greater risk to mental privacy, providing access to very sensitive information about our mental processes, states, and dispositions. For this reason, mind-reading derived from neurotechnology should not be our prime focus of discussion. We can call this approach the priority skepticism about mental privacy. Its proponents claim that social media companies can identify not only their users’ social, political, and religious affiliations but also specific mental traits and states. A prime example of this is psychological targeting, a technique for influencing behavior through interventions based on psychological profiles (e.g., mental traits and states) extracted from our digital footprints (e.g., Facebook likes, posts, photos, etc.) (Matz et al. 2020). The potential risks of these techniques were brought to public attention by the Facebook-Cambridge Analytica data scandal. Additionally, our digital footprints may be used to identify a wide variety of mental conditions, such as postpartum depression, PTSD, anxiety, OCD, bipolar disorder, eating disorders, attention-deficit/hyperactivity disorder, and schizophrenia. The statistical analysis of features extracted from these data (e.g., the word frequency of first-person pronouns such as “I” or “me” or of second-person pronouns, the color composition of images, etc.) can be used to predict symptoms of mental disorders (Wongkoblap et al. 2017). Thus, even if we agree that the privacy of mental information is at risk, the safeguards we need may not be rights regulating neurotechnology but rather digital rights.
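As a rough illustration of the kind of feature extraction just described, consider computing the frequency of first-person pronouns across a set of posts. The example posts, the pronoun list, and the single-feature setup are illustrative assumptions of ours; real studies of the sort surveyed by Wongkoblap et al. (2017) combine many such features with trained statistical models rather than relying on any one of them.

```python
# Toy sketch: one linguistic feature (first-person pronoun frequency) extracted
# from digital-footprint text. Illustrative only; no diagnostic claim intended.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_frequency(posts):
    """Fraction of all word tokens that are first-person pronouns."""
    tokens = [w for post in posts
              for w in re.findall(r"[a-z']+", post.lower())]
    if not tokens:
        return 0.0
    hits = sum(1 for w in tokens if w in FIRST_PERSON)
    return hits / len(tokens)

posts = [
    "I can't sleep and I keep thinking about everything I did wrong",
    "great match last night",
]
print(round(first_person_frequency(posts), 4))  # → 0.1875 (3 of 16 tokens)
```

The privacy concern in the text is that features this cheap to compute are extracted from data users share publicly, with no neural recording involved at all.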
12.3 Against Skepticism: Mental Privacy Under Threat

Although the ecological and the priority arguments against mental privacy add important elements to the discussion, neither of them justifies abandoning the discussion about how to protect access to our neural information in light of current neurotechnological developments. Arguably, ecological constraints might be overcome in the future, and priority worries can be met by working on digital and neural data protection in parallel. Therefore, there are no good arguments for dismissing the relevance of our target discussion. However, even if we accept the current existence of neurotechnological mind-reading and its risks to mental privacy, a crucial question is whether we need new laws to regulate such practices. Third-party access to a person’s neural data is known to carry serious risks, such as denial of health insurance or bank credit (Glannon 2017). In this context, some have argued that current legislation related to informational privacy (e.g., the “reasonable expectation of privacy” safeguarded by the Fourth Amendment to the U.S. Constitution) may constitute sufficient protection (e.g., Shen 2013). The idea is that, in the first place, gathering mental information from neural data is not substantially different from the everyday behavior-based mind-reading through which we interpret other people’s thoughts and feelings. There may be no new dimension of privacy that is not already out in the open in our everyday interactions. Secondly, it has been suggested that the risks posed by neural data collection are similar to those associated with other types of data. If this were the case, neural data could be protected by the general right to informational privacy, and no new legislation would be needed. In the next section we explore this issue in order to motivate the idea that threats to mental privacy should be taken seriously and that such threats might require specific legal frameworks.
12.3.1 The Present and Future of Mental Privacy

There are good reasons to think that the current limitations of neurotechnological mind-reading should not deter us from legislating on mental privacy. First, the refusal to legislate on the basis of current technological limitations is arguably an instance of the so-called delay fallacy (Hansson 2004; Mecacci and Haselager 2019). Postponing a decision until we obtain more information is a usual reaction to situations in which we lack some of the information we would like to have in order to make that decision. In the case of mental privacy, the idea would be that, since robust mind-reading that would put mental privacy at risk is not yet possible, and we therefore do not know what features it will have, we should wait for such a technology to be fully developed before deciding how to regulate it. However, this reaction is problematic, among other things, because in the period when nothing is done the situation may become worse. In the case of mental privacy, by the time the technology is fully developed and massively embedded in society, the technical features and social
practices associated with it may have become too culturally entrenched to be easily modified, as is currently the case with some computer technologies. For instance, Matz et al. (2020) suggested that the powerful computational analysis of our digital footprints employed in psychological targeting has blurred the line between private and public information to the point that we have no control over what information about us can be digitally gathered, and therefore we can only focus on regulating its application (e.g., preventing unethical manipulations of behavior). As Goering and colleagues affirm, although it may be too late to restrict the acquisition of location data, video surveillance, commercial preferences, and behavioral data, “brain data may be one of the few remaining domains in which the most substantial invasions of privacy have not yet been realized” (Goering et al. 2021, 7). Therefore, regarding neurotechnological mind-reading, it may well be better to make an early decision with incomplete information than to make a more informed decision at a later time. This concern can also be framed as an instance of the so-called Collingridge Dilemma (Collingridge 1982): the impact of a technology cannot be easily predicted until the technology is extensively developed and widely used, but at the same time, controlling or changing the technology is difficult once it has become socially entrenched. This characterization of the problem emphasizes the difficulties of reasoning about the potential impacts of not-yet-fully-developed technologies. A crucial point is that, in doing so, we should avoid excessive expectations or unwarranted concerns (Mecacci and Haselager 2019). To legislate with reasonable projections in mind, we must determine, based on what we know now about mind-reading, whether it is possible for it to evolve in the not-too-distant future to the point of overcoming the aforementioned limitations.
In this regard, we believe there are good reasons to think that we are on a steady (albeit long) developmental path toward full-fledged mind-reading (Yuste 2020b). For example, some approaches are already overcoming the limitations of studies with predetermined options. Recently developed decoding methods define a semantic space in the cerebral cortex that represents thousands of different categories as a result of the very many possible combinations of neural activity patterns recorded by fMRI. The work of Jack Gallant’s team is an example of this approach. These methods have been shown to identify, from a large set of completely new natural images, which particular image a person has seen (e.g., Kay et al. 2008; Huth et al. 2012). More recently, it has been possible to decode the categories of objects perceived by a person while watching a movie or dynamic image (e.g., Huth et al. 2016; Wen et al. 2018). Another important line of research that might allow mind-reading in the future has to do with the task of “cracking the neural code” (Quiroga and Panzeri 2013). Returning to the analogy between genetic and neuroscientific technology, just as it was a milestone in biology to decipher the genetic code, identifying the relationship between the sequence of nucleotides in the genome and the structure of proteins, the ultimate goal of technological mind-reading research (and one of the central goals of cognitive neuroscience in general) is to decipher the neural code, that is, to understand the relationship between patterns of brain activity at the most fundamental level (that of neurons and the interactions between them) and mental processes
and observed behavior. Following a tradition that can be traced back to Lorente de Nó (a disciple of Ramón y Cajal), Hebb, Hopfield, and Abeles, there have been significant experimental developments regarding the hypothesis that the neural code is constituted by multicellular units that are coordinately activated, often called “neuronal assemblies” (Carrillo-Reid and Yuste 2020). Thanks to the recent development of optical methods that allow the visualization and manipulation of large neuronal circuits at cellular-level resolution (unlike low-spatial-resolution technologies such as fMRI, or “sparse” technologies that only record a few cells simultaneously, such as electrode arrays), it has been possible to provide direct experimental support for this proposal. While calcium-based imaging combined with two-photon microscopy has allowed the direct visualization of neuronal assemblies, biphotonic photoactivation of neurons and holographic optogenetics have made it possible to experimentally manipulate and artificially create assemblies to control perception and behavior in mice (Carrillo-Reid et al. 2016). Although we have not yet reached the stage of human applications, this is a promising line of research that sets us on the path to unraveling the physiological basis of learning, memory, perception, motor planning, and a number of conscious mental states (Yuste 2015). A full understanding of mental processing at this fundamental level would enable a virtually limitless technological reading of the mind. Finally, in addition to the fact that these advances give a good indication that the current limitations of mind-reading will be overcome in the not-so-distant future, there are existing technological applications that already allow unrestricted identification of complex conscious thoughts. This is the case with the aforementioned use of fMRI to identify recognition and non-recognition experiences. For instance, Rissman et al.
(2010) exposed subjects to a very large set of faces and were able to determine, from single-trial fMRI data using multivariate pattern analysis (MVPA), which of them were remembered, an approach that not only can be very useful in forensic contexts but also has no limitation regarding which mental (or specifically mnemonic) content can be detected. We believe that the aforementioned neurotechnological advances and their projected development give us good reasons to weigh the urgent need for discussing specific ways of tackling potential threats to mental privacy.
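The identification-style decoding discussed in this section (e.g., Kay et al. 2008) can be sketched abstractly as an encoding-model comparison: predict the brain response to each candidate stimulus and select the candidate whose prediction best matches the measured response. The linear “encoding model,” its weights, and the candidate set below are toy assumptions of ours, not the published model; the sketch only shows why such methods are not limited to stimuli seen during training.

```python
# Abstract sketch of identification via an encoding model. Because responses
# to candidates are *predicted* rather than looked up, the candidates can be
# images the model was never trained on. All numbers are toy data.

def predict_response(image_features, weights):
    """Linear encoding model: predicted activity of each voxel."""
    return [sum(w * f for w, f in zip(voxel_w, image_features))
            for voxel_w in weights]

def correlation(x, y):
    """Pearson correlation between two voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def identify(measured, candidates, weights):
    """Pick the candidate whose predicted response best matches the data."""
    return max(candidates,
               key=lambda name: correlation(
                   measured, predict_response(candidates[name], weights)))

# Toy fitted weights (3 voxels x 2 stimulus features) and novel candidates.
weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
candidates = {"beach": [0.9, 0.1], "forest": [0.1, 0.9]}
measured = [0.85, 0.15, 0.5]  # toy measured activity pattern

print(identify(measured, candidates, weights))  # → beach
```

The design choice that matters here is the direction of prediction: the model maps stimulus features to brain activity, so enlarging the candidate set requires no new brain measurements, which is what moves this approach beyond the multiple-choice designs criticized by the ecological skeptic.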
12.3.2 Why Mental Privacy?

It has been suggested that the specific risks associated with the collection, manipulation, and dissemination of neural data may constitute violations of a special form of privacy: mental privacy. In addition, this specific dimension of privacy might not be covered by current legislation and international treaties. For instance, Yuste et al. (2021) affirm that the concept is not covered by Article 17 of the ICCPR, which prohibits unlawful or arbitrary interference with privacy. The General Comment (i.e., the interpretation of Article 17) not only fails to mention technology but also does not speak of the privacy of a person’s thoughts. In addition to legal considerations, there are different philosophical reasons to think that this constitutes a special domain of privacy. Let us examine these reasons.
Given that all forms of privacy ultimately depend on mentally processing personal information, mental privacy can be regarded as the very source of privacy (Ienca and Andorno 2017; Goering et al. 2021; Wajnerman-Paz 2021). Privacy is partly defined by the control that persons have over the flow of information about them, i.e., being able to determine when, how, and to what extent information about them is communicated to others (Westin 1968). Most often, we say that information is under our control, at least in part, in a cognitive sense. For any piece of information, we can consciously grasp it, reason about its personal and social meaning and its potential applications, and finally decide whether, when, how, to whom, and to what extent we want to share it. Privacy depends on this cognitive process of rationally filtering and selectively sharing information about ourselves. Neurotechnological mind-reading may be especially threatening to privacy precisely because it bypasses this fundamental filtering process. Any given piece of information we have mentally considered and decided not to share will nevertheless be available to someone who has direct access to our mind. Technological mind-reading can thus potentially render meaningless the very cognitive mechanisms that define privacy. A closely related idea is that if we understand mental privacy as the psychological dimension of privacy, it follows that it is closely related to personal identity. One of the most influential approaches to privacy as a psychological capacity is perhaps Irwin Altman’s idea that privacy is a boundary regulation process (e.g., Altman 1975, 1976). Altman characterizes privacy as the regulation of social interaction aimed at achieving a momentary ideal level of interpersonal contact, which can range from wanting to be accessible to others to wanting to be alone.
Thus, the notion of privacy implies a flexible barrier or boundary between the self and the non-self (like a cellular membrane that becomes more or less permeable) whose purpose is to achieve a viable level of functioning. Crucially, Altman (1975, 1976) argues that the main function of this mental privacy is the construction of identity, specifically, our understanding of ourselves as beings (what MacKenzie and Walker (2015) refer to as “practical identity”). This process includes knowing where one begins and ends vis-à-vis others, which aspects of the physical and social environment are parts of the self, and which aspects are parts of others. An intuitive way in which a breach of mental privacy could affect our identity as a self-narrative is by revealing facts that affect how one sees oneself. There is plenty of information in our minds, such as subconscious tendencies and biases, to which we do not have immediate access. Some neurotechnological applications might be able to reveal unconscious mental information that is not epistemically accessible to us. A typical example would be detecting biases in prospective jurors (Meynen 2017; Greely 2013). Although “debiasing” our cognitive processes is in many contexts advisable, overruling the psychological defense mechanisms that prevent this from happening may alter our personhood and sense of identity, and therefore subjects should have some degree of control over this process. Another relevant idea in this context is that mental privacy requires more robust protection because it protects the foundation of all freedoms: freedom of thought, or cognitive freedom. If authorities were able to detect our mental processes, this could have worrying effects not only on our own actions but also on their very
source, namely, thought processing. This would deter people from thinking what they want, from having thoughts deemed indecent, immoral, or just plain annoying (Bublitz 2019). We might think that this is not a central problem if we understand thought as a mostly passive and automatic process. However, there are situations in which mental processing could be altered in response to being monitored. A prime example is the well-known application of EEG headbands to schoolchildren for monitoring their concentration. Given that this information may have an impact on their lives (data collected by the headbands is sent to parents, who could reward or punish them depending on their performance), the children may change how they process information in the classroom, potentially affecting their previous cognitive styles. Finally, another important point that has been made is that, unlike other types of information, mental information is not distinguishable or separable from its source: mental states. This is what has been called “the inception problem” (Ienca and Andorno 2017). The point is that many of our mental states are themselves (at least in part) information. In many cases, accessing mental information is not accessing information about the mind, but is equivalent to accessing sections of the mind itself. For example, accessing the memory of my first love affair is not merely accessing information about me (as would be, for example, accessing my identity card number) but primarily accessing a part of me, something that constitutes me psychologically as an individual. The blurry boundary between myself and my mental states qua information about myself motivates the idea that mental privacy might be better understood as a special type of privacy. Consequently, it seems plausible to claim that a specific definition and discussion of mental privacy is needed in order to protect it from the potential misuses of neurotechnologies with access to neural data.
Up to this point, we have tried to motivate the idea that the concept of mental privacy should be treated as a priority when it comes to weighing the consequences of potential misuses of neurotechnologies with access to neural data. However, even if we accept this idea, a consensual definition of the notion is far from settled in international opinion. In the following section we will briefly examine some of the problems derived from the so-called “organic approach to mental privacy.”
12.4 The Organic Approach to Mental Privacy: Some Critical Comments

Most non-skeptical approaches to mental privacy converge on the idea that neural and mental data are a special kind of information, intimately related to who we are. Therefore, they support the idea of protecting neural data as if they were part of ourselves. This is the idea underlying the Morningside Group’s organic proposal (crystallized in the Chilean neuroprotection bill) that neural data should be protected by the laws for organ transplantation and donation. This approach consists of two main aspects (Goering et al. 2021). Firstly, people not only have a right not to be compelled to give up brain data but, crucially, brain data collection requires explicit
“opt-in” authorization. As Goering et al. (2021, 14) propose: “Brain data should not be collected passively or rely on individuals to ‘opt-out’ if they do not wish their data to be collected. Rather, the default should require data collectors to obtain specific consent for not just data collection, but for how data will be used, for what purpose, and for how long.” Secondly, people have a right for their brain data not to be commercially transferred and used. That is, the commercial reading and writing of brain data is prohibited regardless of consent status. Now let us consider some potential objections to the organic approach to mental privacy. Although this approach might sound prima facie plausible, a number of difficulties arise when it is closely examined. First of all, we have what we can call “the analogy problem”: the analogy underlying the proposal seems to be problematic. As Wajnerman-Paz (2021) has suggested, a key disanalogy between neural data extraction and the harvesting of bodily organs is that the former does not involve any transfer of organic material. Unlike a biopsy, in which the tissue containing medical data is extracted and preserved for analysis, “harvesting” neural data (e.g., in EEG recording) is often similar to what is known as “information replication,” which does not consist in moving an information-carrying object from a source to a receiver, but rather in reproducing a new (materially different) copy of a message at another point of a network. In this sense, the original material message does not leave the source (Borgatti 2005). For instance, the data contained in neural waves of ions is replicated by a materially different signal constituted by electrons in EEG electrodes, and then again in a new material format by the output devices. This means that the physical neural data register that is kept by clinicians or researchers for analysis is often not constituted by any organic material at all.
How, then, can something that is not organically constituted be handled by an organ-based approach? Perhaps information-based approaches might be able to overcome this unsound comparison. However, developing such an approach here would take us far from our main target. A second issue is that, even if neural data were analogous to organic material, the two implications of the analogy are problematic. Firstly, we saw that the organic proposal allegedly entails that brain data collection requires explicit “opt-in” authorization. However, some of the countries that considered implementing this organic approach have “opt-out” regimes for organ donation and transplantation, where all citizens are donors by default. Most importantly, Chile, the key pioneer in implementing the Morningside Group’s organic proposal, approved Law 20.413 on January 6, 2010, introducing an opt-out policy on organ donation under which all people over 18 years of age are organ donors unless they state their unwillingness to be so (Senate of Chile-Ley 20.413, 2010). This leads to two possible scenarios: either the organic approach to mental privacy is inconsistent with the legal regulations of such countries or, by endorsing the organic approach, citizens’ neural data might be available as a default legal option, which is against the spirit of the proposal. Regarding the second implication of the organic approach, the prohibition or limitation of neural data commercialization may be too restrictive for people or countries that prioritize other neurorights, such as cognitive liberty (Bublitz 2013). Our right to cognitive self-determination plausibly entails that we can not only (electrically,
A. Wajnerman and P. López-Silva
pharmacologically, etc.) modulate our cognitive processes in the way we please, but also that there should be no restriction on how we collect, analyze, and apply information about these processes, including the possibility of its commercialization. Indeed, brain data gathered by neurogaming and self-tracking devices is turning into a new commodity that can fuel some form of “neuro-capitalism” (Samuel 2019). The current discussion of the neuroprotection bill in Chile seems to take this concern into consideration. The main observations on the original proposal by Senate members gravitate towards a softer reading of mental privacy, exclusively focused on the opt-in component of the organ transplant analogy. A possible way to tackle these difficulties is to articulate a sounder legal analogy that makes neural data protection more stringent than the protection of other kinds of personal information, yet is loose enough to be consistent with other key neurorights. For instance, Wajnerman-Paz (2021) recently argued that neural data could be protected by our right to psychological integrity (which may entail fewer restrictions than physical integrity) because they are analogous to neurocognitive properties of the brain. Specifically, he claims that neural data about a particular brain constitute an informational domain that is unique to that brain’s cognitive architecture. This idea would also be supported by approaches (such as Altman’s) that present mental privacy as a psychological capacity. Prima facie, psychological integrity is consistent with all forms of cognitive self-determination (e.g., Lavazza 2018), and therefore this analogy would not entail the prohibition of neural data commercialization. Additionally, this alternative analogy would avoid the problem posed by the opt-out Chilean regime on organ transplantation: presumed consent for the treatment of mental disorders is highly exceptional under Chilean law.
Certainly, more work is needed to reach consensus on how to understand the protection of mental privacy, but this might be a good place to start such an important process.
12.5 Conclusion

Current advances in neurotechnology are gradually allowing the decoding of key neural information at the basis of a number of conscious mental states, with unprecedented levels of accuracy. Such developments might give scientists the ability to “read minds,” threatening our mental privacy. In this chapter, we have examined the arguments of those who deny the existence of an urgent need for debating this matter. We have claimed that neither methodological-ecological nor priority worries are sufficient grounds for denying the urgency of this issue. In addition, we have offered a number of reasons to believe that threats to mental privacy derived from potential misuses of neurotechnologies with access to neural data are closer than the skeptics think. Finally, we have offered some critical comments on the view that aims at protecting neural data through the laws for organ transplantation and donation. At this point, it is clear that the discussion about how best to protect mental privacy is only beginning to be articulated. Achieving international consensus
12 Mental Privacy and Neuroprotection: An Open Debate
regarding mental privacy should be a priority in national and international research and development policy. Although Chile’s pioneering legislative proposal will help guarantee that technological developments have a positive social impact, it is critical to the scientific and economic development of the country that an international consensus around neurorights be formed in order to avoid potential political and economic disadvantages.
References

Alivisatos AP, Chun M, Church GM, Deisseroth K, Greenspan R, McEuen PRJ, Roukes ML, Sejnowski TS, Weiss P, Yuste R (2013) The brain activity map. Science 339:1284–1285
Alivisatos AP, Chun M, Church GM, Greenspan RJ, Roukes ML, Yuste R (2015) A national network of neurotechnology centers for the brain initiative. Neuron 88(3):445–448
Altman I (1975) The environment and social behavior: privacy, personal space, territory, and crowding. Brooks/Cole Pub, California
Altman I (1976) A conceptual analysis. Environ Behav 8(1):7–29
Borgatti SP (2005) Centrality and network flow. Soc Netw 27(1):55–71
Bublitz JC (2013) My mind is mine!? Cognitive liberty as a legal concept. In: Hildt E, Francke A (eds) Cognitive enhancement. Springer, Dordrecht, pp 233–264
Bublitz JC (2019) Privacy concerns in brain-computer interfaces. AJOB Neurosci 10(1):30–32
Carrillo-Reid L, Yang W, Bando Y, Peterka DS, Yuste R (2016) Imprinting and recalling cortical ensembles. Science 353(6300):691–694
Carrillo-Reid L, Yuste R (2020) Playing the piano with the cortex: role of neuronal ensembles and pattern completion in perception and behavior. Curr Opin Neurobiol 64:89–95
Chiesa PA, Liuzza MT, Macaluso E, Aglioti SM (2017) Brain activity induced by implicit processing of others’ pain and pleasure. Hum Brain Mapp 38(11):5562–5576
Collingridge D (1982) The social control of technology. St. Martin’s Press, New York
Cox DD, Savoy RL (2003) Functional magnetic resonance imaging (fMRI) “Brain Reading”: detecting and classifying distributed patterns of fMRI activity in human visual cortex. Neuroimage 19(2):261–270
Farisco M, Salles A, Evers K (2018) Neuroethics: a conceptual approach. Camb Q Healthc Ethics 27(4):717–727
Glannon W (2017) The evolution of neuroethics. In: Racine E, Aspler J (eds) Debates about neuroethics. Springer, London, pp 19–44
Goering S et al (2021) Recommendations for responsible development and application of neurotechnologies. Neuroethics. https://doi.org/10.1007/s12152-021-09468-6
Greely HT (2013) Mind reading, neuroscience, and the law. In: Morse SJ, Roskies AL (eds) A primer on criminal law and neuroscience. Oxford University Press, Oxford, pp 120–149
Hansson SO (2004) Fallacies of risk. J Risk Res 7(3):353–360
Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P (2001) Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293(5539):2425–2430
Haynes JD, Sakai K, Rees G, Gilbert S, Frith C, Passingham RE (2007) Reading hidden intentions in the human brain. Curr Biol 17(4):323–328
Huth AG, Nishimoto S, Vu AT, Gallant JL (2012) A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron 76(6):1210–1224
Huth AG, Lee T, Nishimoto S, Bilenko NY, Vu AT, Gallant JL (2016) Decoding the semantic content of natural movies from human brain activity. Front Syst Neurosci 10:81
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(1):1–27
Ienca M, Jotterand F, Elger BS (2018) From healthcare to warfare and reverse: how should we regulate dual-use neurotechnology? Neuron 97(2):269–274
International Brain Initiative (2020) International brain initiative: an innovative framework for coordinated global brain research efforts. Neuron 105(2):212–216
Kay KN, Naselaris T, Prenger RJ, Gallant JL (2008) Identifying natural images from human brain activity. Nature 452(7185):352–355
Lavazza A (2018) Freedom of thought and mental integrity: the moral requirements for any neural prosthesis. Front Neurosci 12:82
López-Silva P, Madrid R (2021) Sobre la conveniencia de incorporar los neuroderechos en la constitución o en la ley. Rev Chil De Derecho y Tecnol 10(1):53–76
Mackenzie C, Walker M (2015) Neurotechnologies, personal identity, and the ethics of authenticity. In: Clausen J, Levy N (eds) Handbook of neuroethics. Springer, London, pp 373–392
Matz SC, Appel RE, Kosinski M (2020) Privacy in the age of psychological targeting. Curr Opin Psychol 31:116–121
Mecacci G, Haselager P (2019) Five criteria for assessing the implications of NTA technology. AJOB Neurosci 10(1):21–23
Meynen G (2017) Brain-based mind reading in forensic psychiatry: exploring possibilities and perils. J Law Biosci 4(2):311–329
Meynen G (2019) Ethical issues to consider before introducing neurotechnological thought apprehension in psychiatry. AJOB Neurosci 10(1):5–14
Quiroga RQ, Panzeri S (2013) Principles of neural coding. Routledge, London
Rissman J, Greely HT, Wagner AD (2010) Detecting individual memories through the neural decoding of memory states and past experience. PNAS 107(21):9849–9854
Rommelfanger KS, Jeong SJ, Montojo C, Zirlinger M (2019) Neuroethics: think global. Neuron 101(3):363–364
Roskies A (2002) Neuroethics for the new millenium. Neuron 35(1):21–23
Samuel S (2019) Brain-reading tech is coming. The law is not ready to protect us. https://www.vox.com/2019/8/30/20835137/facebook-zuckerberg-elon-musk-brain-mind-reading-neuroethics
Senate of Chile (2010) Ley 20.413. https://www.bcn.cl/leychile/navegar?idNorma=1010132
Senate of Chile (2020a) Bulletin N°13.828-19. https://www.diarioconstitucional.cl/wp-content/uploads/2020/12/boletin-13828-19-nuroderechos.pdf
Senate of Chile (2020b) Bulletin N°13.827-19. https://www.diarioconstitucional.cl/wp-content/uploads/2020/11/Boletin13827-19-neuro.pdf
Shen FX (2013) Neuroscience, mental privacy, and the law. Harv JL Pub Pol 36:653
Wajnerman-Paz A (2021) Is your neural data part of your mind? Exploring the conceptual basis of mental privacy. Minds Mach. https://doi.org/10.1007/s11023-021-09574-7
Wen H, Shi J, Zhang Y, Lu KH, Cao J, Liu Z (2018) Neural encoding and decoding with deep learning for dynamic natural vision. Cereb Cortex 28(12):4136–4160
Wernicke M, Hofter C, Jordan K, Fromberger P, Dechent P, Müller JL (2017) Neural correlates of subliminally presented visual sexual stimuli. Conscious Cogn 49:35–52
Westin AF (1968) Privacy and freedom. Wash Lee Law Rev 25(1):166
Wongkoblap A, Vadillo MA, Curcin V (2017) Researching mental health disorders in the era of social media: systematic review. J Med Internet Res 19(6):228
Yuste R (2020a) Si puedes leer y escribir la actividad neuronal, puedes leer y escribir las mentes de la gente. Interview in “El País.” https://elpais.com/retina/2020/12/03/tendencias/1607024987_022417.html
Yuste R (2020b) Can you see a thought? Neuronal ensembles as emergent units of cortical function. IBM Distinguished Speaker Series. https://www.youtube.com/watch?v=QRr_2PuzTZU
Yuste R (2015) From the neuron doctrine to neural networks. Nat Rev Neurosci 16(8):487–497
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551(7679):159
Yuste R, Genser J, Herrmann S (2021) It’s time for neuro-rights. Horiz: J Int Relat Sustain Dev 18:154–165
Chapter 13
Neuro Rights: A Human Rights Solution to Ethical Issues of Neurotechnologies Clara Baselga-Garriga, Paloma Rodriguez, and Rafael Yuste
Abstract Advanced neurotechnologies, such as Brain-Computer Interfaces (BCIs), can connect the human brain to a computer or a machine. BCIs have great potential: for example, they can decode speech directly from individuals’ brains with impressive speed and accuracy, giving stroke patients with disabilities the promise of talking again. But while neurotechnologies will offer great insights into the human brain and potential therapeutic applications, they also pose privacy and other ethical concerns. A potential solution to the ethical issues of neurotechnologies is “NeuroRights,” five new human rights devised to protect individuals in the face of new neurotechnologies. The NeuroRights include the rights to personal identity, free will, mental privacy, equal access to mental augmentation, and protection from algorithmic bias. Through international advocacy, NeuroRights are being included in proposed legislation and soft law in different countries, including Chile and Spain. Keywords Human rights · Privacy · Agency · Identity · Bias · Augmentation
13.1 Ethical Issues of Neurotechnology

On March 20, 2020, at the onset of the Covid pandemic, Yuval Noah Harari, an Israeli historian, published an article in the Financial Times titled “The world after coronavirus” (Harari 2020). In it, Harari argued that the Covid pandemic would fast-forward historical processes and fundamentally change certain aspects of our society. One of these changes would be surveillance, since governments were relying on new technologies to trace and minimize contagion (Harari 2020). While Harari expressed concern about how widespread surveillance tools would become in countries that previously rejected them, his main worry was the transition from “over the skin” to “under the skin” surveillance (Harari 2020). Governments were no longer C. Baselga-Garriga · P. Rodriguez · R. Yuste (B) Neurorights Initiative, Columbia University, New York City, USA e-mail: [email protected] R. Yuste Donostia International Physics Center, San Sebastián, Spain © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_13
interested in watching citizens through cameras or tracing their browser history; rather, they were collecting biometric data that included body temperature, blood pressure, and heart rate. What would be the implications of amassing and harnessing individuals’ biometric data to the point of knowing what makes citizens happy, angry, sad, and bored? Harari’s question is especially salient in the context of neurotechnology for several reasons. Neurotechnology grants access not only to general biometric data but to brain data, thus capturing the essence of individuals’ being. It is one thing to contextualize a citizen’s physiological state during certain events, such as recording their heart rate during a political debate to find out how they feel about a specific candidate. It is another thing to have access, via a mind-reading device, to citizens’ specific thoughts during that same debate, decoding why they feel a certain way about a candidate. In a post-Covid world in which “under the skin” surveillance could become the norm, neurotechnology might heighten the scrutiny of individuals’ private thoughts and experiences. In addition, the field of neurotechnology is experiencing its Golden Age and, as a result, new BCIs might be introduced in hospital settings as well as in the market. In recent years, academic institutions and companies, including Facebook, Kernel, and Neuralink, have made impressive advances in both non-invasive and invasive brain-computer interfaces. For instance, in August 2020, Elon Musk, the CEO of Neuralink, hosted a live demonstration showcasing a dime-sized BCI chip implanted in a pig called Gertrude, which had undergone surgery months prior. As Gertrude romped around and her brain waves appeared on the screen, Musk not only demonstrated the functionality of his BCI but also showed that he could implant the chip without harming brain vasculature and without altering the pig’s behavior.
Gertrude appeared healthy and active, similar to a control pig that had already had its chip removed, thereby demonstrating the reversibility of the product. A few months later, in January 2021, Kernel, a company created in 2016 that aims to help consumers unlock and understand their brains, released the first 50 units of “Kernel Flow,” its latest product, to select partners. “Kernel Flow” is a non-invasive headset that uses infrared light to provide real-time brain data to consumers wearing the device. In academia, neurotechnological innovation is also remarkable. In March 2020, researcher Edward Chang and his colleagues at UCSF developed a BCI that can decode speech from neuronal activity at an unprecedented rate and with minimal errors (Makin et al. 2020). At Columbia University, electrical engineer Kenneth Shepard is designing a remarkably small, flexible, implantable, and wireless BCI that adapts to the folds of the cerebrum. In addition to the rapid innovation in the field, a major concern is that neurotechnology straddles the line between neurology, a purely medical arena, and technology, a consumer-focused space. The world of technology has historically embraced an ethos of disruption; as the industry is closely tied to capitalism, it has often adopted a mentality of “build fast, fix later.” Making innovative products available to consumers in a timely manner is invaluable in Silicon
Valley. If something isn’t perfect or if it breaks, there is always a newer model, a better update, or another product. The problem with applying the culture of technology to understanding the brain is that the stakes are much higher. One cannot afford to “break” the brain. One cannot afford to create algorithms with imperfect security standards, thereby sacrificing the privacy of individuals’ brains. One cannot afford to create a BCI that provides obvious cognitive advantages but can only be afforded by some, thus putting the economically privileged at a further advantage. A final issue with neurotechnology arises from a policy perspective, as there are no existing legal documents that address the risks posed by neurotechnology and AI. When the Universal Declaration of Human Rights was released in 1948, technological innovation, especially in the realm of neuroscience, was still nascent. While the Universal Declaration of Human Rights addresses justice, equity, and interference with privacy in purposefully general terms so that they apply to many fields, the specificity of neurotechnology and AI creates novel challenges. Given the reasons outlined above, it is apparent that the current ethical guidelines for tackling potential breaches in the world of neurotechnology are not sufficient. The Morningside Group, which brought together 25 international experts in neuroscience, machine learning, ethics, and engineering for a 3-day workshop on ethical concerns in the field of neurotechnology, realized this in 2017. After thorough discussions across disciplines and geographic boundaries, the group drafted five NeuroRights to protect individuals in the technological era (Yuste et al. 2017). A similar proposal was made by two additional investigators (Ienca and Andorno 2017).
While the NeuroRights were drafted to be universal, the Morningside Group did take into consideration that different cultures and traditions would have different perspectives on how to tackle the ethical challenges posed by AI and neurotechnology. For this reason, they proposed that each nation select a board of representatives from different fields so that these rights can be turned into policy and subsequently into “laws and regulations” (Yuste et al. 2017).
13.2 The Five NeuroRights

The first NeuroRight deals with the concern of personal identity. Current innovations in neurotechnology and AI could potentially disrupt a person’s sense of self. For instance, companies could use individuals’ neural information for targeted advertising algorithms. By establishing a connection between the individual and the digital network, a person’s consciousness could become inseparable from the technological inputs. Established ethical guidelines ought to prevent this outcome at all costs: people should be allowed to decide whether they wish their neural data to be shared or not. This would put neural data in a position similar to that of organs in most countries. In the same way that one has the right to decide whether to donate one’s organs, a person should be able to decide whether they want their neural information to be accessed and shared. This would involve a process of informed consent. Additionally, the Morningside Group suggested strict regulations to limit
the sale, use, and trading of neural data in order to close other routes of access to the information in an individual’s neural activity (Yuste et al. 2017). The second NeuroRight protects individuals’ free will by establishing that every person should have the right to make their own decisions without manipulation by external neurotechnologies. Given the complexity of today’s BCIs, it is not unfathomable that these devices could alter a person’s decision-making; indeed, basic research in our lab already enables the manipulation of mouse behavior, specifically the lick response, using optogenetics (Carrillo-Reid et al. 2019). The third NeuroRight states that every person should have a right to mental privacy. As discussed above, given the current socio-political climate, this NeuroRight is the most urgently threatened. Because surveillance tools are being legitimized in society as a means to provide broader health and safety, citizens in different contexts and democracies are becoming accustomed to sacrificing their right to privacy. The fourth NeuroRight aims to provide everyone with equal access to mental augmentation. As BCIs become more precise and effective, a temptation may emerge to use the newest technologies to enhance individuals’ sensory or mental capacities, especially in military settings (Yuste et al. 2017). This could create a new realm for discrimination that would exacerbate already existing socioeconomic and political divides, and do so to an extent that has never been experienced before. In essence, BCIs could be used to create “super humans.” This NeuroRight is based on the concept of justice and states that there ought to be guidelines, established at the international and national levels, regulating the development and application of mental-enhancement neurotechnologies (Yuste et al. 2017). Finally, the fifth NeuroRight establishes the right to protection from algorithmic bias.
Just like humans, algorithms in machine learning can sometimes be biased. As a result, technological innovation could make certain social groups more privileged than others through algorithmic bias. If scientific decision-making about the application of these technologies considers only a small realm of systemic, structural, or social concepts and norms (Yuste et al. 2017), certain groups could be discriminated against. For instance, according to a study by Datta et al. (2015), Google’s advertising algorithms showed women advertisements for lower-paying jobs than those shown to men. To address this challenge, measures to combat bias should be taken, such as allowing user groups, especially those that are already subordinated, to provide input into the design of algorithms (Yuste et al. 2017).
13.3 NeuroRights Advocacy and Implementation

To promote the NeuroRights, the NeuroRights Initiative (NRI) was founded at Columbia University. The NRI is a policy and advocacy group that strives to introduce NeuroRights to different stakeholders and implement them in policy. The NRI was officially launched in 2019 and has since helped advocate for NeuroRights legislation internationally. It is with the NRI’s support that, as of October 2021, the
Chilean Senate and Chamber have unanimously approved a Constitutional Reform that has been signed into law by President Piñera. In addition, the Senate has unanimously approved a Bill for Neuroprotection which, if approved by the Chamber, would make Chile the first country to add NeuroRights into its laws. Additionally, with advisory support from the NRI, a Digital Rights Charter has been drafted in Spain and presented to the Spanish government; it incorporates NeuroRights in one of its sections. Though originally composed by experts in the field, the document has now been made available to the public with the intention of taking suggestions. These examples of Chile and Spain are the beginning of what could become a general strategy for governments to ensure that novel neurotechnologies have “guard rails” to prevent potential abuses, and that they are developed and deployed for the benefit of humankind.
References

Carrillo-Reid L, Han S, Yang W, Akrouh A, Yuste R (2019) Controlling visually guided behavior by holographic recalling of cortical ensembles. Cell 178:447–457. https://doi.org/10.1016/j.cell.2019.05.045
Datta A, Tschantz MC, Datta A (2015) Automated experiments on Ad privacy settings. Proc Priv Enhancing Technol 1:92–112
Harari Y (2020) Yuval Noah Harari: the world after coronavirus. The Financial Times. https://www.ft.com/content/19d90308-6858-11ea-a3c9-1fe6fedcca75
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(5). https://doi.org/10.1186/s40504-017-0050-1
Johnson B (n.d.) Flow 50 livestream. Kernel. https://www.kernel.com/flow-50
Makin JG, Moses DA, Chang EF (2020) Machine translation of cortical activity to text with an encoder–decoder framework. Nat Neurosci 23:575–582. https://doi.org/10.1038/s41593-020-0608-8
Yuste R et al (2017) Four ethical priorities for neurotechnologies and AI. Nature 551(7679):159–163
Chapter 14
A Technocratic Oath María Florencia Álamos, Leonie Kausel, Clara Baselga-Garriga, Paulina Ramos, Francisco Aboitiz, Xabier Uribe-Etxebarria, and Rafael Yuste
Abstract In the last decades, novel neurotechnologies have enabled the collection and analysis of neuronal data, as well as the targeted alteration of brain activity. While this progress has the potential to help many patients with neurological or mental diseases, it also raises significant ethical and societal concerns, potentially putting the mental privacy, identity, and agency of citizens at risk. As one approach to providing ethical guidelines for novel neurotechnologies, we propose a “Technocratic Oath”: a pledge of simple, fundamental ethical core principles to be adopted by neurotechnology developers and the industry. Our proposed Technocratic Oath is anchored in seven ethical principles: beneficence, non-maleficence, autonomy, justice, dignity, privacy, and transparency. The Technocratic Oath is modelled after the Hippocratic Oath, a pledge taken by all physicians as they enter the medical profession. While legally non-binding, the professional weight of the Hippocratic Oath has historically led to responsible practices in the world of medicine. Similarly, the Technocratic Oath could help establish and propagate a core of ethical principles to ensure responsible innovation and to protect the fundamental human rights of patients and consumers.
M. F. Álamos · F. Aboitiz Centro Interdisciplinario de Neurociencias and Departamento de Psiquiatría, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile L. Kausel Centro de Investigación en Complejidad Social, Universidad del Desarrollo, Santiago de Chile, Chile C. Baselga-Garriga · R. Yuste (B) Neurorights Initiative, Columbia University, New York City, USA e-mail: [email protected] P. Ramos Center for Bioethics, Pontificia Universidad Católica de Chile, Santiago de Chile, Chile X. Uribe-Etxebarria Sherpa.ai, Bizkaia, Erandio, Spain R. Yuste Donostia International Physics Center, San Sebastián, Spain © The Author(s), under exclusive license to Springer Nature Switzerland AG 2022 P. López-Silva and L. Valera (eds.), Protecting the Mind, Ethics of Science and Technology Assessment 49, https://doi.org/10.1007/978-3-030-94032-4_14
Keywords Technocratic oath · Ethical principles · Education · Responsible work · Neurotechnologies
14.1 Ethical Implications of Neurotechnology

Neuroscience is an interdisciplinary field that includes biology, medicine, psychology, chemistry, genetics, computer science, physics, engineering, and mathematics. It pursues the study and understanding of the nervous system, composed of approximately 100 billion neurons interacting through trillions of synaptic connections (Kaiser 2014). For centuries, comprehending how the brain operates, generating the human mind and all cognitive experience, has been an enigma. Modern advances in technology and neuroscience open a new era that provides insights into these complex brain processes, opening a window into the functioning of the brain. Neurotechnology, defined as the interfacing of the nervous system with technology, is becoming a fledgling field of science and engineering. The techniques used by neuroscientists have expanded enormously in the last decades, allowing a better understanding of function from single cells to neural networks. In 1878, Richard Caton, a Liverpool physician and physiologist, described the transmission of electrical impulses through an animal’s brain (Caton 1970). In 1929, the German neuropsychiatrist Hans Berger (1873–1941) published the first report on human electroencephalography (EEG) (Berger 1929). This technique, widely used in clinical and research settings, measures electrical brain activity noninvasively by detecting voltage fluctuations. In recent years, a neurotechnological revolution has taken place in which non-invasive neuroimaging techniques have emerged as essential tools to map the brain and complement EEG recordings. Today, a broad spectrum of neuroimaging technologies has become clinically and commercially available, including magnetic resonance imaging (MRI), computerized axial tomography (CAT), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI).
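To illustrate the kind of signal analysis EEG makes possible, here is a minimal, hypothetical Python sketch: it estimates the power of a synthetic voltage trace at two frequencies using a one-bin discrete Fourier transform. The sampling rate, frequencies, and amplitudes are illustrative assumptions, not parameters of any real device or study.

```python
import math

def band_power(signal, fs, freq):
    """Estimate the power of `signal` at a single frequency by correlating
    it with sine and cosine reference waveforms (a one-bin DFT)."""
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

fs = 250                              # assumed sampling rate in Hz
t = [i / fs for i in range(fs * 2)]   # two seconds of sample times
# Synthetic "EEG": a dominant 10 Hz (alpha-range) rhythm plus a weaker
# 25 Hz (beta-range) component -- purely made-up amplitudes.
eeg = [1.0 * math.sin(2 * math.pi * 10 * x)
       + 0.2 * math.sin(2 * math.pi * 25 * x) for x in t]

alpha = band_power(eeg, fs, 10)
beta = band_power(eeg, fs, 25)
print(alpha > beta)  # the 10 Hz rhythm dominates: True
```

Real EEG analysis pipelines are of course far more involved (artifact rejection, multi-channel montages, windowed spectral estimates), but the sketch captures the basic idea that cognitively meaningful rhythms are read off as frequency components of recorded voltage fluctuations.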
Neuroimaging techniques were developed, and are still mostly implemented, within clinical practice and neuroscience research. However, in recent years, some applications have entered the market for commercial purposes outside the clinical setting. They are now incorporated into consumer-grade devices for healthy users with different purposes. The advances in neuroscience techniques open an unprecedented possibility for accessing, collecting, sharing, and manipulating information from human brains (Yuste et al. 2017). These can positively impact clinical practice, improving the well-being of patients with neurological and psychiatric disorders by offering new preventive, diagnostic, and therapeutic opportunities. Outside the clinic, however, the commercial uses of neurotechnology provide new possibilities for self-quantification, cognitive enhancement, personalized communication, and entertainment for regular users. This raises new ethical challenges if misused or inadequately implemented, such as the risk of creating distinctive forms of intrusion into people’s private lives, potentially causing physical or psychological harm, or allowing
undue influence on people’s behavior without their consent (Yuste et al. 2017; Ienca and Andorno 2017). In addition, over the last years data analysis methods have become more complex and integrated with artificial intelligence tools. The algorithms used to analyze big data sets are constantly being refined and are also evaluated to ensure that they follow correct statistical methods. The challenge of acquiring, managing, and analyzing these big data sets is enormous, and different initiatives, such as the BRAIN Initiative in the USA and the Human Brain Project in Europe, are generating guidelines in this respect (Salles et al. 2019; Shen 2013). In fact, neuroscience is becoming increasingly intertwined with big data and machine learning, which are used to analyze datasets for a better understanding of, and intervention in, the functioning of the human brain. Neuroscience big data require an intensive approach to research across the boundaries of different disciplines, where ethical features are integrated within a transparent, dialogical data governance process (Fothergill et al. 2019). Addressing these ethical issues implies becoming aware of the risks implicit in neurotechnology and consequently taking action in this regard. These actions can go along two lines: (i) the development of regulatory policies that promote the potential benefits of neurotechnologies without violating people’s rights (Yuste et al. 2017; Ienca and Andorno 2017), and (ii) the education of future professionals on the ethical issues imposed on us by technological development. This chapter focuses on the second approach: we analyze the risks that these technologies can bring about and the relevance of addressing this issue during the training of people who apply neurotechnologies, establishing a new ethical deontology for the profession.
14.2 Recent Advances in Neurotechnology and Brain-Computer Interfaces

Neurotechnology is experiencing a moment of unprecedented innovation. Brain-Computer Interfaces (BCIs), devices that directly connect the brain to a computer or machine in order to record or alter neuronal activity, are being developed for a wide range of purposes and audiences. In the following, we provide a summary of BCIs in both academia and industry in order to grasp the magnitude of the advancements in this field and the consequent challenges ahead. The origins of BCIs can be traced back to 1924, as mentioned previously, when Hans Berger collected the first human EEG, a technique that captures the brain's electrical activity, from a 17-year-old boy during a neurosurgery (Britton et al. 2016). Some decades later, in 1968, researchers Barry Sterman and Wanda Wyrwicka used EEG to record the sensorimotor rhythms of cats that had been conditioned to press a lever to receive a food reward, making the first associations between brain activity and behavior (Wyrwicka and Sterman 1968). The field quickly evolved with ensuing
M. F. Álamos et al.
seminal work from scientists such as Eberhard Fetz, who developed the first closed-loop brain-computer interface by recording the neural activity of monkeys trained to track visual targets and connecting the output of that recording to an external device, among others (Chaudhary et al. 2016). After decades of steady progress, another pivotal moment arrived when President Obama launched the ongoing US BRAIN Initiative, which aims to better understand the brain through the creation and application of neurotechnology (Insel et al. 2013). As of today, the NIH's BRAIN Initiative has awarded over 700 investigators and invested approximately $1.8 billion, with a $560 million appropriation in the 2021 budget (NIH 2021). Today, driven by the innovation ensuing from this initiative and from similar ones around the globe (Yuste and Bargmann 2017), patients with speech impediments due to stroke, paralysis, or other forms of neurological damage can hold onto the promise of one day talking again (Anumanchipalli et al. 2019). While the history of BCIs is recent, it is remarkable how, in the span of a few decades, neuroscience has evolved to decipher and alter brain contents with increasing effectiveness. BCIs are often categorized as invasive or non-invasive, depending on whether they require neurosurgical procedures. An example of an invasive method is intraoperative electrocorticography (ECoG), which places electrodes directly on the surface of the brain via surgical implantation. On the other hand, EEG and fMRI, which are more commonly known to patients and do not require surgery, are non-invasive methods (Klaes 2018). While non-invasive BCIs have the advantage of not requiring surgical intervention, thereby bypassing the risks and hassles of any operation, invasive BCIs offer more precision because they record brain activity closer to the action site.
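The closed-loop principle pioneered by Fetz (record activity, decode it, actuate an external device) can be sketched in a few lines. The signal model, threshold, and "cursor" here are invented purely for illustration:

```python
# Toy closed-loop "BCI": a simulated firing rate is thresholded
# (the decoder) and the decoded command drives a cursor (the
# external device). Purely illustrative, not a real neural model.

def read_activity(t):
    """Stand-in for a neural recording: firing rate in spikes/s."""
    return 20 + 15 * ((t % 10) >= 5)   # bursts every 5 time steps

def decode(rate, threshold=25):
    """Map neural activity to an intended command (+1 or -1)."""
    return +1 if rate > threshold else -1

def run_session(steps=20):
    cursor = 0
    trajectory = []
    for t in range(steps):
        rate = read_activity(t)        # record
        cursor += decode(rate)         # decode and actuate
        trajectory.append(cursor)
    return trajectory

traj = run_session()
print(traj)
```

In a real system the reading would come from implanted or surface electrodes and the decoder from a trained model, but the loop structure (record, decode, actuate) is the same.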
In academia, researchers are working to study and fine-tune the development of both invasive and non-invasive BCIs. For instance, Kenneth Shepard, an electrical engineer at Columbia University, has designed an implantable, wireless chip (Thimot and Kenneth 2017). Shepard and colleagues are experimenting with materials, flexibility, size, and electrodes to create the most efficient and least bulky BCI possible. As another example of invasive BCIs, Edward Chang, a neurosurgeon at the University of California, San Francisco, has created a neural decoder that can synthesize speech with 97% accuracy and unparalleled speed by recording, with implantable electrodes, the activity of the motor cortex responsible for moving the muscles involved in speaking (Anumanchipalli et al. 2019; Velasquez-Manoff 2020). Chang's work is significant because, historically, speech decoders have had limited accuracy and vocabulary size, since they focused on decoding sentences by identifying speech segments or phonemes (Anumanchipalli et al. 2019). Though originally developed to support patients in research and medical contexts, BCIs have recently expanded beyond the realm of medicine. Companies such as Facebook, Neuralink, Kernel, and CTRL-labs are interested in using this technology to improve productivity, collect data, and seamlessly connect humans with technology. Facebook, for instance, is currently developing a non-invasive BCI program called "Brain-to-Text" that would enable people to type onto an iPhone screen simply by consciously thinking about what they would like to
say (tech@facebook 2020). The prototype consists of a hat or other headwear lined with infrared sensors that collect optical measurements of cortical activity and then decode them into utterances (Moses et al. 2019). In 2017, when Regina Dugan originally presented the project to the public during a conference, she depicted the product as a way for the general public to maximize their productivity (tech@facebook 2020). Elon Musk's Neuralink is also developing a BCI (Musk 2019), though theirs is invasive and aims not only to tackle a long list of neurological issues, from hearing loss to depression and insomnia, but, importantly, to insert AI algorithms into healthy people in order to cognitively augment them. On August 25th, 2020, in a broadcast demonstration, Elon Musk showcased the company's coin-sized, Bluetooth-operated, wireless chip on "Gertrude," a pig whose brain recordings from the implanted chip were captured on a screen while she walked around the perimeter. Gertrude had undergone the surgical implantation of the chip a few months prior to the demonstration, performed by Neuralink's specially designed surgical robot. These innovations in academia and industry are exciting, as they enable researchers to strive for a world free of neurodegenerative diseases, mental health issues, and other brain illnesses; however, the current interdisciplinary innovation also poses significant challenges. The brain defines the essence of the human being: it is the only organ that allows humans to think and understand. While it is paramount that we protect the privacy and integrity of our minds in a cohesive and standardized manner, the question becomes: how? How do we find a way to encourage innovation that can relieve suffering without breaching ethical standards? How do we find a solution that fits both the world of academia and the world of industry? After all, every BCI is a world of its own, and so are the realms of academia and industry.
14.3 The Hippocratic Oath as an Example of a Professional Oath

One way to tackle the ethical issues of neurotechnology is to develop a personal pledge, or oath, that practitioners of neurotechnology take upon their conscience. A professional oath is different from a promise or an ethical code. Its importance lies in the fact that it is a personal and performative utterance that is voluntary, solemn, and public. It is validated by its transcendent appeal, and it is binding with regard to interpersonal loyalty both toward those practicing the same profession and toward the society for whom the profession is performed. Professional codes of ethics, by contrast, are sets of specific moral rules. The Hippocratic Oath is widely recognized as the model of a professional oath, and the main changes in its text are expounded below. It is interesting to note that the Spanish term for oath, "juramento," from the Latin "iuramentum," expresses compliance with loyalty according to what the laws of faithfulness and honor require. Faithfulness, for its part, is understood as loyalty, the observance of the faith that someone owes to another person; and honor is defined as a moral quality that leads to the fulfillment of our duties toward our neighbors and ourselves. Its
meaning is equivalent to that assigned to the word "honesty" (RAE 2001). Therefore, it can be argued that a person swearing an oath is obliged to comply with their professional duties honestly and faithfully to the values of their lex artis. The term "oath" also differs from "promise," as the former carries greater moral weight: it is not limited to a statement of intentions; it is performed in a context in which the audience serves as a witness; it is validated by a transcendent appeal, either to a deity or to something held in similar reverence or respect; and it seems to involve the person who swears more deeply (Sulmasy 1999, 31). Moreover, an oath is a personal and social act of free and personal commitment, formalized verbally with others and in front of others. The swearers "profess": they become subject to their profession's deontological rules, which can be of two types, ethical rules and others that are more social, both gathered in professional codes and whose violation implies a devaluation of the profession. The social dimension also appears because professions are practiced precisely by a group of people, colleagues, with whom the swearer forms a community that shares an ethos, aims, methods, and a language of its own, one that allows a doctor to be distinguished from a lawyer or a scientist. The importance and significance of the oath lie in the fact that it is a "performative" utterance, whereby the manifestation of this special commitment is more than a formality without consequences. It transforms the swearers, giving them a new identity and sense of belonging: before the oath they were "students," and afterwards they are doctors, scientists, lawyers. This statement must be carried out in a particular context and modality (performance). The latter is a condition of possibility for transformative actions, involving solemnity, a public setting, and the presence of a representative authority of the profession.
The oath implies not only that new professionals entering the profession assume certain obligations intrinsic to it, but also that the profession as a whole must oversee the correct implementation of the guidelines of its code of conduct. Therefore, infringements of the aforesaid values must be denounced in the first place by colleagues practicing the same profession, and then by society as well.
14.3.1 Hippocratic Oath—Model for Professions

Regarding its form and content, the oath establishes doctors' duties and responsibilities according to certain essential values of the medical lex artis, which can be summed up as the duty to help patients and the prohibition against harming them (Aparisi 2006). Originally, the Hippocratic Oath was taken by apprentices at the beginning of their medical studies (Askitopoulou and Vgontzas 2017). They recognized their personal obligations to their teachers, which were rather reciprocal. The apprentices then committed themselves to take great pains in the art of medicine, to transmit and bequeath their knowledge to their own students, and to use it only to relieve their patients' suffering, avoiding causing harm and guaranteeing confidentiality (Lemarchand 2010).
This oath has kept its validity and legitimacy through the centuries, and its values have been embraced by Christian, Jewish, and Muslim medicine. With the development of "scientific medicine," and after the medical experiments of World War II, the oath came to life again in the reformulation made by the World Medical Association in 1948 with the Declaration of Geneva. The text of the Hippocratic Oath has undergone changes, some of them substantial, such as the replacement of the word SWEAR with PLEDGE, as shown in Table 14.1 (Sanchez-Salvatierra and Taype-Rondan 2018). In 2020, the oath's performative nature stood out in doctors' conduct during the COVID-19 pandemic throughout the world, without exception, even at the expense of their lives. Otherwise, how can one understand the motivation of a group willing to put themselves in danger of contagion and death? It is not the obligation of responding to a job contract, nor their much-mentioned heroism. The difference lies in what happened in the life of each one of them before they started their professional practice and marked them for life: the Hippocratic Oath. It is there that one finds the reason for so much dedication, the innumerable sacrifices, and the multiple renunciations of every human being dedicated to the art and profession of medicine, most of the time without their considering that they are doing something special or worthwhile (Gamboa 2020, 8). The Hippocratic Oath has been taken as a model by different professionals as well as by the scientific community. Thus, in 1987 the MIT Biologists' Pledge and the Hippocratic Oath for Scientists (Nuclear Age Peace Foundation) were proposed, as well as the Buenos Aires Oath in 1988 and the Scientists' Pledge Not to Take Part in Military-Directed Research, among others.
In this regard, Lemarchand's (2010) work stands out, giving an exhaustive account of more than ninety initiatives of ethical oaths and commitments for scientists proposed throughout history. From the foregoing, both the urgent need for and the possibility of elaborating an ethical oath with universal validity for scientists and technologists become apparent.
14.4 The Technocratic Oath

As discussed in the previous section, an oath is a solemn, usually formal, word of honor that one sincerely intends to do what one says in a specific context. As stated, it is performative in the sense that it transforms the "student" into the doctor, the lawyer, or the scientist. It also gives people who work in these areas a new moral framework and a guideline of good practices. The idea of incorporating an oath for the people who will work with different neurotechnologies in academic, clinical, or industry settings aims to bring to this work an ethical and moral code that will have a transformative effect on those who swear. Our proposed Technocratic Oath (Table 14.2) is modelled after the Hippocratic Oath, which all new physicians swear as they enter the medical profession. In the case of the Technocratic Oath, it could be sworn by students and workers who are involved in the use of neurotechnologies and brain data analysis tools. The principles
Table 14.1 Hippocratic Oath (original text paired with the modern version as it appears in the Declaration of Geneva, revised in 2017)

Original: "I swear by Apollo Physician and Asclepius and Hygieia and Panaceia and all the gods and goddesses, making them my witnesses, that I will fulfill according to my ability and judgement this oath and this covenant."
Modern: "I solemnly pledge to dedicate my life to the service of humanity."

Original: "I will apply dietetic measures for the benefit of the sick according to my ability and judgment; I will keep them from harm and injustice."
Modern: "The health and well-being of my patient will be my first consideration."

Modern (no original counterpart): "I will respect the autonomy and dignity of my patient."

Original: "I will neither give a deadly drug to anybody who asked for it, nor will I make a suggestion to this effect. Similarly, I will not give to a woman an abortive remedy. In purity and holiness, I will guard my life and my art."
Modern: "I will maintain the utmost respect for human life."

Original: "I will not use the knife, not even on sufferers from stone, but will withdraw in favor of such men that are engaged in this work."
Modern: "I will not permit considerations of age, disease or disability, creed, ethnic origin, gender, nationality, political affiliation, race, sexual orientation, social standing, or any other factor to intervene between my duty and my patient."

Original: "What I may see or hear in the course of the treatment or even outside of the treatment in regard to the life of men, which on no account must be spread abroad, I will keep it to myself, holding such things shameful to be spoken about."
Modern: "I will respect the secrets that are confided in me, even after the patient has died."

Original: "Whatever houses I may visit, I will come for the benefit of the sick, remaining free of all intentional injustice, of all mischief and in particular of sexual relations with both female and male persons, be they free or slaves."
Modern: "I will practice my profession with conscience and dignity and in accordance with good medical practice."

Modern (no original counterpart): "I will foster the honor and noble traditions of the medical profession."

(continued)
proposed for incorporation in this oath correspond to seven ethical principles that are also widely used in artificial intelligence ethics guidelines (Jobin et al. 2019). First, non-maleficence, which means that the applied technology must not be intended to cause harm. Second, beneficence, whose goal is that the work done contribute to the common good. Third, autonomy. This is an important point, since it establishes that nothing can be carried out without the voluntary consent of the participants in any
Table 14.1 (continued)

Original: "To hold him who has taught me this art as equal to my parents and to live my life in partnership with him, and if he is in need of money to give him a share of mine, and to regard his offspring as equal to my brothers in male lineage and to teach them this art—if they desire to learn it—without fee and covenant; to give a share of precepts and oral instruction and all the other learning to my sons and to the sons of him who has instructed me, and also to pupils who have signed the covenant and have taken an oath according to the medical law, but no one else."
Modern: "I will give to my teachers, colleagues, and students the respect and gratitude that is their due."

Modern (no original counterpart): "I will share my medical knowledge for the benefit of the patient and the advancement of healthcare."

Modern (no original counterpart): "I will attend to my own health, well-being, and abilities in order to provide care of the highest standard."

Modern (no original counterpart): "I will not use my medical knowledge to violate human rights and civil liberties, even under threat."

Original: "If I fulfill this oath and do not violate it, may it be granted to me to enjoy life and art, being honored with fame among all men for all time to come; if I transgress it and swear falsely, may the opposite of all this be my lot."
Modern: "I make these promises solemnly, freely, and upon my honor."
given situation. Fourth, justice, whose purpose is to underline the importance of generating fair and unbiased results from the application of neurotechnologies. Fifth, dignity, which indicates that all people should be treated with respect and integrity. Sixth, privacy, which demands that all sensitive and identifiable information be removed from collected data. And finally, transparency, which aims to ensure that the algorithms used are as transparent and correctable as possible. The intention of this oath is to contribute to the emerging concerns about ethical guidelines for current and future neurotechnologies. While legally non-binding, the cultural weight of an oath has historically fostered responsible practices in the areas where it is implemented, as the Hippocratic Oath has in medical practice. It is to be expected that the Technocratic Oath will inspire an ethics-guided practice of neurotechnology in academic, clinical, and entrepreneurial environments.
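The privacy principle, removing sensitive and identifiable information from collected data, can be made concrete with a minimal, hypothetical de-identification step. The field names and the salting scheme below are invented for illustration and are not a substitute for a real anonymization protocol:

```python
import hashlib

# Hypothetical de-identification of a neural-data record: direct
# identifiers are dropped and the subject receives a pseudonymous ID
# that cannot be reversed without the secret salt.
IDENTIFIERS = {"name", "email", "birth_date"}

def deidentify(record, salt="study-secret"):
    clean = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    token = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12]
    clean["subject_id"] = token  # pseudonym replaces direct identifiers
    return clean

raw = {"name": "Jane Doe", "email": "jane@example.org",
       "birth_date": "1990-01-01", "eeg_mean_uV": 4.2}
print(deidentify(raw))
```

Note that pseudonymization of this kind reduces, but does not eliminate, re-identification risk; neural data themselves can be identifying, which is precisely why the principle needs institutional backing beyond any single technical step.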
Table 14.2 Proposed technocratic oath
14.5 Outlook

We are living in an exciting moment for neuroscience research. Neurotechnologies are allowing us to better understand the functioning of the body's most complex organ: the brain. Invasive and non-invasive BCIs are rapidly expanding, allowing the unprecedented possibility of recording from large assemblies of neurons and decoding their activity to extract information. At the same time, methods that stimulate the brain and influence its ongoing activity are also being refined (Roelfsema et al. 2018). These technological advances aim to improve therapeutic applications in neurological, mental, and neurodegenerative diseases, but they could also be used to expand the cognitive functioning of the general population. As such, it is important to raise awareness of the ethical guidelines that can help promote a positive use of these technologies. Our proposed Technocratic Oath aims to help establish core ethical principles that will ensure responsible innovation and protect the fundamental human rights of patients and consumers in the use of neurotechnologies. It morally obliges those who take the oath to carry out their professional duties honestly
and faithfully. Such an oath could be incorporated into different neurotechnology contexts and uses, in environments such as academia, the clinic, and industry, and, as the Hippocratic Oath did for medicine, permeate this new field with an ethical code of conduct. Acknowledgements Supported by the IBM—Columbia University Data Science Institute grant ("Noninvasive Brain Computer Interface Data: Ethical and Privacy Challenges;" R. Yuste and Ken Shepard, PIs.), NSF DBI 1644405 ("Coordinating Global Brain Project;" R. Yuste and C. Bargmann, PIs.) and of the Precision Medicine & Society Program, from Columbia University College of Physicians & Surgeons ("Genomic Data Regulation: A Legal Framework for NeuroData Privacy Protection;" R. Yuste, and G. Hripcsak, PIs), FONDECYT Postdoctorado 3190914 to Leonie Kausel.
References

Anumanchipalli GK, Chartier J, Chang EF (2019) Speech synthesis from neural decoding of spoken sentences. Nature 568:493–498
Aparisi A (2006) Ética y deontología para juristas. Eunsa, Pamplona
Askitopoulou H, Vgontzas AN (2017) The relevance of the Hippocratic Oath to the ethical and moral values of contemporary medicine. Part I: the Hippocratic Oath from antiquity to modern times. Eur Spine J. https://doi.org/10.1007/s00586-017-5348-4
Berger H (1929) Über das Elektrenkephalogramm des Menschen. Arch Psychiatr Nervenkr 87(1):527–570
Britton JW, Frey L, Hopp J, Korb P, Koubeissi M, Lievens W, Pestana-Knight E, St. Louis E (2016) Electroencephalography (EEG): an introductory text and atlas of normal and abnormal findings in adults, children, and infants. American Epilepsy Society, Chicago
Caton R (1970) The electric currents of the brain. Am J EEG Technol 10(1):12–14
Fothergill BT, Knight W, Stahl BC, Ulnicane I (2019) Responsible data governance of neuroscience big data. Front Neuroinform. https://doi.org/10.3389/fninf.2019.00028
Gamboa-Bernal GA (2020) Importancia e implicaciones de un juramento en tiempos de pandemia. Persona y Bioética 24(1):5–13. https://doi.org/10.5294/pebi.2020.24.1.1
Ienca M, Andorno R (2017) Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy 13(1):5
Insel TR, Landis SC, Collins FS (2013) The NIH BRAIN Initiative. Science 340(6133):687–688. https://doi.org/10.1126/science.1239276
Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2
Kaiser UB (2014) Editorial: advances in neuroscience. The BRAIN Initiative and implications for neuroendocrinology. Mol Endocrinol 28(10):1589–1591
Klaes C (2018) Invasive brain-computer interfaces and neural recordings from humans. In: Manahan-Vaughan D (ed) Handbook of in vivo neural plasticity techniques. Academic Press, San Diego, pp 527–539
Lemarchand G (2010) Ciencia para la paz y el desarrollo: el caso del Juramento Hipocrático para Científicos. UNESCO. http://www.centropaz.com.ar/publicaciones/paz/paz35.pdf#page=pag71-73
Moses DA, Leonard MK, Makin JG, Chang EF (2019) Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat Commun 10(1). https://doi.org/10.1038/s41467-019-10994-4
Musk E (2019) An integrated brain-machine interface platform with thousands of channels. BioRxiv, 703801
NIH (2021) The BRAIN Initiative. https://braininitiative.nih.gov/
RAE (2001) Definitions of the terms "juramento," "lealtad," "fidelidad," and "honor" in the twenty-second edition of the Diccionario de la Real Academia de la Lengua, 2001
Roelfsema PR, Denys D, Klink PC (2018) Mind reading and writing: the future of neurotechnology. Trends Cogn Sci 22(7):598–610
Salles A et al (2019) The Human Brain Project: responsible brain research for the benefit of society. Neuron 101(3):380–384
Sanchez-Salvatierra JM, Taype-Rondan A (2018) Evolution of the Hippocratic Oath: what has changed and why? Rev Med Chile 146(12):1498–1500
Shen H (2013) US brain project puts focus on ethics. Nature 500:6–7
Sulmasy DP (1999) What is an oath and why should a physician swear one? Theor Med Bioeth 20:329–346
tech@facebook (2020) https://tech.fb.com/
Thimot J, Kenneth LS (2017) Wirelessly powered implants. Nat Biomed Eng 1:1–2
Chaudhary U, Birbaumer N, Ramos-Murguialday A (2016) Brain–computer interfaces for communication and rehabilitation. Nat Rev Neurol 12(9):513–525. https://doi.org/10.1038/nrneurol.2016.113
Velasquez-Manoff M (2020) The mind readers. The New York Times
Wyrwicka W, Sterman M (1968) Instrumental conditioning of sensorimotor cortex EEG spindles in the waking cat. Physiol Behav 3:703–707
Yuste R, Bargmann CI (2017) Towards a global brain initiative. Cell 168:956–959
Yuste R et al (2017) Four ethical priorities for neurotechnologies and artificial intelligence. Nature 551:159–163
Chapter 15
Neurotechnologies and the Human Image: Open Questions on Neuroprotection

Pablo López-Silva and Luca Valera

P. López-Silva, School of Psychology, Universidad de Valparaíso, Hontaneda 2653, Valparaiso, Chile
L. Valera, Centre for Bioethics, Pontificia Universidad Católica de Chile, Av. L. Bernardo O'Higgins 340, Santiago de Chile, Chile; Department of Philosophy, Universidad de Valladolid, Plaza Campus Universitario, Valladolid, Spain

Abstract Current neurotechnological progress opens a number of ethical, philosophical, and political debates. In this chapter, we claim that such progress also implies a more fundamental change in the way we understand the concept of the human being. We explore the idea that analyzing the impact of neurotechnological progress on the way in which we understand and protect the human mind should also motivate a profound reflection on the way in which science has been changing our understanding of the human condition.

Keywords Neurotechnology · Ethics · Human condition · Human image

Current progress in the neurosciences has allowed the development of promising ways to deal with a number of neurofunctional and neuropsychiatric conditions. Certainly, research programmes of this kind should be promoted and supported by worldwide research agencies and local governments, as they might improve the quality of life of citizens suffering from those conditions. In this sense, neurotechnological progress is not only a technical achievement but also an enterprise with deep consequences for human health and society. Nevertheless, it is worth considering that neurotechnological progress comes with a number of practical and conceptual worries in medical and non-medical contexts. In medical contexts, neurotechnological progress raises several issues regarding, for example: (i) epistemic asymmetries between scientists (or bio-engineers) and patients (when it comes to issues such as consent, protective rights, data ownership, etc.); (ii) ethical decisions when testing new techniques and devices; (iii) debates about political and economic influences on research when discussing private insurance or government funding; and (iv) the conceptual views preferred by scientists. Within non-medical contexts, the development of, for
example, commercial neurotechnological devices with access to neural data in real time makes necessary the development of legal regulatory frameworks in order to protect the autonomy, free will, liberty, and privacy of subjects in light of unwanted intromission through potential device-hacking or neural-data exposure. Furthermore, the degree of progress and invasiveness of current neurotechnological devices should also motivate a broader and more profound reflection. Humans are minded beings inhabiting ecological niches composed of symbolic (social and individual) and material elements, and both are part and parcel of the experience of being human. During the last 20 years, technology has become a fundamental part of our everyday practical experience of the social and individual world, shaping a number of changes in the way we understand the human mind in its context. Moreover, current technological development implies—more fundamentally—changes in the way we understand the human being itself. In this regard, we believe that the analysis of the impact of neurotechnological progress on the way in which we understand and protect the human mind should also motivate a profound reflection on the way in which science has been changing our understanding of the human condition. It is not only a matter of changing "one of the parts" of the human being, but rather of modifying the "image of man" itself, recalling a famous expression of Jonas (1985). Along with the manipulation (or modulation) of our brains (or minds) emerges, then, the possibility of manipulating our image: although we are not reducible to our brain, we are also our brain. Any modification to our brain, in this sense, implies a change, more or less important, to our nature and image. Such a possibility of manipulation—which today is very real—calls into question the image we have of our humanity: Who am I, in the age of technological civilization (Jonas 1985) and of interventions in my brain?
Who should I be? These questions recall another concern: What image of the human being emerges here? The question about human identity, or the image of humanity, seems to make more sense today, especially if one thinks of the social consequences that such interventions could have: "Have we the right […] to manipulate the very image of the human being in order to create a superior viz. invulnerable species? And what if this eventually ended in the extinction of the existing viz. vulnerable species of homo sapiens?" (Becchi and Franzini Tibaldeo 2016, 108). Thus, the question of human vulnerability, and of its "ethical and legal content," emerges here: Do we have to respect and protect such vulnerability, which is an important part of the image of the human being, or can we simply eliminate it in the search for more perfect and improved human beings? Emerging technologies, and particularly neurotechnologies, call us to reflect on these points, which we intentionally want to leave open. Furthermore, for a long time the mind was thought to be the last fortress of autonomy, liberty, and privacy; in this regard, the human mind has always been protected from direct intromissions. We live in an age in which this idea seems to be changing very quickly. The potential exposure of neural data might put at risk some of the most fundamental aspects of our individuality and intimacy. Here, the challenge is twofold. On the one hand, there is a pressing need for deeper discussion of the legal, conceptual, ethical, and political aspects related to the impact of neurotechnological progress on the human mind and, more generally, on the human
person and personality. On the other, governments need to take action in light of potential misuses of these neurotechnologies by creating clear and well-informed legal frameworks to protect citizens from possible attacks or intrusions. Clearly, both tasks are complementary, as they inform each other; for this reason, philosophers, scientists, and politicians need to work together to face this challenge. Due to the complexity of the consequences of neurotechnological progress for current society and individual persons, the related debates in politics, philosophy, ethics, and law should consider this underlying issue from a cooperative and interdisciplinary standpoint. Naturally, this is not a simple task. Considering this challenge, Protecting the Mind: Challenges in Law, Neuroprotection, and Neurorights has been an active attempt to bring together views from different disciplines (philosophy, psychology, neurosciences, medicine, and anthropology, among others). This book will help in assessing the different concerns emerging from the impact of neurotechnological developments on our lives. It will also, we hope, reframe the way in which we understand concepts such as "human mind," "consciousness," "autonomy," "responsibility," "integrity," "consent," "intimacy," and "privacy," among others. Given the ongoing nature of these debates, we hope this collection of essays motivates further discussion to develop comprehensive concepts and to inform contextualized legal frameworks. This is the last question we want to leave open: Must we protect our minds and persons? Or is this protection only motivated by the fear of the unknown, e.g., the possible consequences that neurotechnologies may have on our society, and by our current lack of knowledge? In any case, what is clear here is the twofold role of neurotechnologies: "Neuro-technology is developing powerful ways to treat serious diseases, to improve lifestyles and even, potentially, to enhance the human body.
However, this progress is also associated with new self-understandings, existential challenges and problems never seen before" (Echarte 2016, 137). The bet we have to make concerns, then, both what we can gain and what we can lose: we could achieve qualitatively better lives while forgetting our vulnerability and losing our "image"; we could be better off, but we would possibly lose the possibility of experiencing authentically. We need to evaluate whether such a gamble is worthwhile and, above all, whether it is authentically human.
References Becchi P, Franzini Tibaldeo R (2016) The vulnerability of life in the philosophy of Hans Jonas. In: Masferrer A, García-Sánchez E (eds) Human dignity of the vulnerable in the age of rights. Interdisciplinary perspectives. Springer, Cham, pp 81–120 Echarte LE (2016) Biotechnologies inside the self: new challenges in clinical ontology. In: Masferrer A, García-Sánchez E (eds) Human dignity of the vulnerable in the age of rights. Interdisciplinary perspectives. Springer, Cham, pp 123–140 Jonas H (1985) The imperative of responsibility. In search of an ethics for the technological age. The University of Chicago Press, Chicago