The Unfit Brain and the Limits of Moral Bioenhancement (ISBN 9811696926, 9789811696923)


English, 273 pages [270], 2022

Table of contents :
Acknowledgments
Contents
List of Figures
1: Introduction
1.1 Biology Is Not Destiny
1.2 Overview of the Book
References
2: The Scope and Limits of Moral Bioenhancement
2.1 Conceptual Issues and Scientific Realities
2.1.1 Historical Perspective
2.1.2 The New Gospel: Salvation Through the Curing of Human Character
2.1.3 The Coming Dark Age
2.2 Becoming Fit for the Future
References
3: Moral Bioenhancement and the Clinical Ideal
3.1 The Challenge of Defining Enhancement
3.1.1 Various Conceptualizations of Enhancement
3.1.2 Therapy and Enhancement: Two Evolving Concepts
3.1.3 Disease as a Clinical Problem
3.1.4 Enhancement-Nondisabled Versus Enhancement-Disabled
3.2 The Clinical Ideal and Enhancement
3.3 Clinical Ideal and Psychiatric Disorders with Moral Pathologies
References
4: Neurobiology, Morality, and Agency
4.1 The Neurobiology of Morality
4.2 Moral Judgments and the Moral Self
4.2.1 Reason and Emotions: A False Dichotomy
4.2.2 Moral Agency: Moral Capacity and Moral Content
4.3 Phronesis and the Virtues
4.3.1 Aristotle on Phronesis
4.3.2 Good Deliberation
4.3.3 Judgment and Choice
4.4 The Autonomous Modern Self
4.4.1 The Creation of the Autonomous Self
4.4.2 The Emotivist Modern Self
4.4.3 The Disrupted Self
4.4.4 Moral Neutrality
4.4.5 Contentless Morality
4.5 The Pathologizing of Human Behavior
4.5.1 The Science of Morality
4.5.2 Moral Motivation
4.5.3 Three Main Criticisms
References
5: Techno-Science, Politics, and the Common Good
5.1 Post-academic Science
5.2 The Implications of Postmodernity for Science and Technology
5.2.1 The New Paradigm of Techno-Science
5.3 Beyond the Postmodern Cacophony: Deliberative Democracy
5.3.1 Recovering from the Balkanization of our Society
5.3.2 Deliberative Democracy
5.3.3 The Features and Purposes of Deliberative Democracy
5.4 Applying the Deliberative Democracy Paradigm
References
6: Neurotechnologies and Psychopathy
6.1 Psychiatry and Moral Bioenhancement
6.2 Psychopathy
6.2.1 From Moral Insanity to Neuro-anatomical Abnormalities
6.2.2 Current Conceptualization: The Influence of the Hare Psychopathy Checklist
6.3 The Diagnosis of Psychopathy
6.3.1 Neuro-Imaging Technologies
6.3.2 Neurogenetic Studies
6.4 Treatment of Psychopathy
6.5 Feasibility, Usefulness, and Limitations of Neurotechnologies
References
7: Punishment, Responsibility, and Brain Interventions
7.1 Retribution Versus Rehabilitation
7.2 Dangerousness and Prevention
7.3 Capacity and Responsibility
7.3.1 Incapacitation and Moral Agency
7.3.2 Capacity and Moral Reasons
7.3.3 Threshold of Capacities
7.4 Moral and Legal Responsibility
7.4.1 Moral Responsibility
7.4.2 Criminal Responsibility
7.4.3 Insanity Defense
References
8: Identity Integrity in Psychiatry
8.1 Technology and the Current Anthropological Identity Crisis
8.1.1 Technology and the Real
8.2 Homo Sapiens Interacting with Machines
8.3 Identity Integrity
References
9: Epilogue: Final Thoughts for the Path to Future Philosophical Explorations
References
Index


The Unfit Brain and the Limits of Moral Bioenhancement Fabrice Jotterand

The Unfit Brain and the Limits of Moral Bioenhancement

“Fabrice Jotterand offers his reader a robust and critically engaged examination of the problems incumbent to and the limits of the uses of neurotechnologies for psychiatry. The examination is interrogative of the imaginings and ideals of moral bioenhancement to resolve moral pathologies—and the interrogation is far-reaching. Jotterand expertly navigates the trans-disciplinary territory with due care, philosophical integrity, and constructive response. This is a book that must be read and carefully considered.” —Ashley John Moyse, PhD, McDonald Postdoctoral Fellow in Christian Ethics and Public Life, and Healthcare and Humanities Fellow, Faculty of Theology and Religion, University of Oxford, UK

“Developments in neurotechnologies may increase calls for the moral bioenhancement of those with brains that are thought to be unfit, but is such enhancement feasible or acceptable? This monograph is a valuable resource for those who wish to consider these and related questions. Fabrice Jotterand’s new work is impressive in the way he brings together a wide range of scholarship in order to engage in this timely reflection. Whilst this insightful book has something of interest to say to scholars in a variety of disciplines, including those in the mind sciences and philosophers, it also has significant upshot for those who are interested in considering how criminal justice might and should respond to the scientific and technological advances that the book engages with. The Unfit Brain & the Limits of Moral Bioenhancement is an important contribution to a discussion that is likely to become increasingly important.” —Allan McCay, JD, Deputy Director of The Sydney Institute of Criminology and an Academic Fellow at the University of Sydney's Law School, Australia

“Fabrice Jotterand’s book on moral bioenhancement presents a new perspective on the ethical challenges posed by the implementation of neurotechnologies for moral and criminal misconduct. Sensitive both to the need for public protection as well as the rights and humanity of the morally unfit individual, he frames his account in terms of a synergy between cognition and moral emotions, and the importance of respecting the complex personal identities of ‘morally unfit’ individuals. In doing so he moves us away from the fearsome doing-to narratives of science fiction, and moves us back to people and doing-together. Such work should be of wide interest to scientists, clinicians, policymakers, and philosophers.” —John Z. Sadler MD, The Daniel W. Foster, M.D. Professor of Medical Ethics, Distinguished Teaching Professor, Professor of Psychiatry & Population/Data Sciences, UT Southwestern, USA

“In The Unfit Brain and the Limits of Moral Bioenhancement, Prof. Jotterand explores the promise - and problems - of contemporary and near future enhancement technology, and the benefits, risks and responsibilities of the institutions and societies that seek putting method to practice. In so doing, he has written a volume that makes significant contributions to both the topic, and the fields of science, ethics, philosophy and sociology at-large.” —Prof. James Giordano, PhD, Georgetown University Medical Center, USA

“The Unfit Brain and the Limits of Moral Enhancement, by Fabrice Jotterand, is a thoughtful and fascinating exploration of a subject of great scientific and social significance.” —Prof. William B. Hurlbut, MD, Department of Neurobiology, Stanford University, USA

“This book is a significant contribution to the field of neuroethics. It engages key ethical questions and political challenges surrounding the use of neurotechnologies for moral enhancement. Jotterand’s constructive account of “Identity Integrity” to guard against what he takes as problematic aspects and faulty anthropological assumptions regarding forms of moral bioenhancement is bound to generate robust discussion and debate. And this is as it should be. It is a compelling work that demands a wide reading.” —Patrick T. Smith, PhD, Director of Bioethics Program, Trent Center for Bioethics, Humanities, and History of Medicine, Duke University, USA

Fabrice Jotterand

The Unfit Brain and the Limits of Moral Bioenhancement

Fabrice Jotterand Center for Bioethics and Medical Humanities Medical College of Wisconsin Milwaukee, WI, USA Institute for Biomedical Ethics University of Basel Basel, Switzerland

ISBN 978-981-16-9692-3    ISBN 978-981-16-9693-0 (eBook)
https://doi.org/10.1007/978-981-16-9693-0

© The Editor(s) (if applicable) and The Author(s) 2022

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: © Alex Linch shutterstock.com

This Palgrave Macmillan imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

To Silvia, Steven, Sarah, Michael, and Naomi for their continuous love and support In memoriam to my father, Claude Jotterand (1935–2020)

Acknowledgments

I would like to thank my former editor Joshua Pitt and my current editor Rachael Ballard at Palgrave Macmillan for their advice and support during the writing of this book. I am also thankful to the reviewers who provided helpful feedback on the proposal and the manuscript. Additionally, I would like to thank Connie Li, the editorial assistant, for facilitating the production process.

Parts of this book have been published as, or adapted from, articles and chapters in the following publications: Jotterand, F. (2006). “The Politicization of Science and Technology: Its Implications for Nanotechnology”, The Journal of Law, Medicine and Ethics 34(4):658–666; Jotterand, F. & Levin, S.B. (2017). “Moral Deficits, Moral Motivation and the Feasibility of Moral Bioenhancement”, Topoi, https://doi.org/10.1007/s11245-017-9472-x; Jotterand, F. (2017). “Cognitive Enhancement of Today May Be the Normal of Tomorrow” in J. Illes (Ed.), Neuroethics: Anticipating the Future. Oxford: Oxford University Press; Jotterand, F. (2019, published in 2020). “How AI Could Exacerbate our Incumbent Anthropological Identity Crisis” (Viewpoint), Bioethica Forum 12:1/2; and an unpublished manuscript: Jotterand, F., Pascual, J.M., and Sadler, J.Z. “The Can’t and Don’t of Psychopathy: Neuroimaging Technologies, Psychopaths and Criminal Responsibility.”

I want to thank wholeheartedly the following colleagues, who contributed to my early reflections as I developed this book: James Giordano, John Z. Sadler, Fred Grinnell, Juan Pascual, Jeffrey P. Bishop, Susan B. Levin, Veljko Dubljevic, Eric Racine, Tom Koch, Ann Helms, Patrick T. Smith, Jennifer McCurdy, Ralf Jox, Paul Brodwin, Rafael Yuste, Marcello Ienca, Philipp Kellmeyer, and Allan McCay. I am thankful for the support I received from my colleagues at the Medical College of Wisconsin, Center for Bioethics and Medical Humanities: Arthur Derse, Cynthiane Morgenweck, Ryan Spellecy, Garson Leder, and Mary Homan. Special thanks to Kris Tym for her careful editing of an early version of the manuscript and to Diane Kramer for her administrative assistance. I am also very appreciative of the support I received from the Kern Institute, in particular Adina Kalet and the members of my lab (Philosophies of Medical Education Transformation Laboratory, P-METaL), with whom I had many opportunities to discuss topics related to this book, especially Lauris Kaldjian, John Yoon, Jeff Fritz, Lana Mishew, Jeffrey Amundson, Karen Marcdante, and Chris Stawski. I want to thank my colleagues at the University of Basel, Bernice Elger and Tenzin Wangmo, for their continuous support and collaborative spirit. Many thanks to my former and current medical and doctoral students for their insights and collaboration: Clara Bosco, Quinn McKinnon, Justin Chu, Marcello Ienca, Olga Chivilgina, and Julia Bosco, who carefully read the manuscript before final submission. Special thanks to Ashley Moyse for his friendship and support and for providing invaluable feedback on an early draft of the manuscript, and to John Fulginiti for his friendship and continuous encouragement as I was writing this book. Finally, I am grateful for the support I received from my wife Silvia and my four wonderful children, Steven, Sarah, Michael, and Naomi. They continuously remind me of what is really important in life.

Contents

1 Introduction  1
  1.1 Biology Is Not Destiny  1
  1.2 Overview of the Book  7
  References  11
2 The Scope and Limits of Moral Bioenhancement  13
  2.1 Conceptual Issues and Scientific Realities  13
  2.2 Becoming Fit for the Future  23
  References  30
3 Moral Bioenhancement and the Clinical Ideal  33
  3.1 The Challenge of Defining Enhancement  33
  3.2 The Clinical Ideal and Enhancement  45
  3.3 Clinical Ideal and Psychiatric Disorders with Moral Pathologies  47
  References  52
4 Neurobiology, Morality, and Agency  55
  4.1 The Neurobiology of Morality  55
  4.2 Moral Judgments and the Moral Self  61
  4.3 Phronesis and the Virtues  66
  4.4 The Autonomous Modern Self  74
  4.5 The Pathologizing of Human Behavior  87
  References  98
5 Techno-Science, Politics, and the Common Good  107
  5.1 Post-academic Science  107
  5.2 The Implications of Postmodernity for Science and Technology  110
  5.3 Beyond the Postmodern Cacophony: Deliberative Democracy  115
  5.4 Applying the Deliberative Democracy Paradigm  127
  References  136
6 Neurotechnologies and Psychopathy  139
  6.1 Psychiatry and Moral Bioenhancement  139
  6.2 Psychopathy  140
  6.3 The Diagnosis of Psychopathy  149
  6.4 Treatment of Psychopathy  152
  6.5 Feasibility, Usefulness, and Limitations of Neurotechnologies  157
  References  161
7 Punishment, Responsibility, and Brain Interventions  171
  7.1 Retribution Versus Rehabilitation  172
  7.2 Dangerousness and Prevention  174
  7.3 Capacity and Responsibility  176
  7.4 Moral and Legal Responsibility  183
  References  189
8 Identity Integrity in Psychiatry  193
  8.1 Technology and the Current Anthropological Identity Crisis  193
  8.2 Homo Sapiens Interacting with Machines  201
  8.3 Identity Integrity  205
  References  214
9 Epilogue: Final Thoughts for the Path to Future Philosophical Explorations  219
  References  221
Index  251

List of Figures

Fig. 3.1 Graphic representation of the relationship between degrees of enhancement and degrees of quality of life. In an ideal scenario, the clinical ideal suggests that the increased degree of quality of life must be equal or superior to the degree of increased enhancement, as indicated by the grey arrow  49

Fig. 3.2 Graphic representation of various potential enhancement scenarios: the clinical ideal, where the increased degree of quality of life must be equal or superior to the degree of increased enhancement; the case of a disabled person where an increased degree of enhancement minimally influences quality of life; the case of a disabled person where an increased degree of enhancement decreases quality of life beyond a certain point; and the case of a disabled person (×) where an increased degree of enhancement negatively impacts quality of life  51

1 Introduction

1.1 Biology Is Not Destiny

On January 24, 1989, Theodore Robert “Ted” Bundy was executed in the State of Florida for the murder of 12-year-old Kimberly Leach, his last victim in a long series of crimes. Bundy, while perhaps never formally diagnosed as a psychopath in Hare’s sense, nevertheless exhibited many of the traits recognized as psychopathic. He was charming and educated but also a serial killer and a rapist. He confessed to approximately 30 murders of young women and girls between 1974 and 1978 across the United States, although it is estimated he had up to 100 victims. Earlier accounts of psychopathy depicted the disorder as a “derangement of the moral faculties” or “moral insanity” (Pridmore et al. 2005) but have evolved to refer to individuals suffering from emotional dysfunction and anti-social behavior. Bundy fits into the “derangement of the moral faculties” category and perhaps also into this more recent characterization. Recent developments in neurotechnologies (e.g., fMRI), however, have made it possible to identify neuroanatomical abnormalities in the brain structure of psychopathic individuals, which have challenged previous conceptualizations of the disorder (K.A. Kiehl et al. 2001; Benning 2003; R.J.R. Blair 2008, 2010; Yang et al. 2009; Ling and Raine 2018). Studies implicate the disruption of the functioning of the amygdala and the orbitofrontal cortex (OFC) in individuals with psychopathic tendencies (R.J.R. Blair 2008, 2010). In the absence of neuroscientific evidence pointing to aberrant neuroanatomy in the case of Mr. Bundy, we can only guess at what triggered him to commit these crimes and whether a biological explanation for his demeanor could be established.

Consider the case of Mr. Oft, a forty-year-old schoolteacher, happily married with a stepdaughter, and up to that point with no history of health issues or mental disorders. He suddenly began to visit prostitutes on a regular basis and started collecting child pornography. His life changed for the worse when he started to behave inappropriately with his stepdaughter, which escalated to sexual abuse. Mr. Oft was diagnosed as a pedophile, charged with child molestation, and given the option of being treated for pedophilia or going to jail. He opted for treatment, but during his stay at the rehabilitation center he solicited sexual favors. Ultimately, he was expelled and ended up in jail. However, the day before he was expected to go to prison, Mr. Oft went to the University of Virginia Hospital because of a headache. A brain scan revealed a tumor near Mr. Oft’s orbitofrontal cortex (OFC) which was impinging on the right prefrontal area of the brain (Burns and Swerdlow 2003). Following the removal of the massive tumor, he regained his normal self and no longer engaged in promiscuous sexual behavior. A few months later he relapsed into child pornography, but his neurologist suspected that the tumor had grown back, which was confirmed by a brain scan. The tumor was removed again, and a complete recovery ensued, with no further deviant behavior. This case demonstrates a causal link between the neuroanatomical abnormalities of Mr. Oft’s brain and his deviant behavior.
Would a scan of Ted Bundy’s brain have revealed some neuroanatomical abnormalities explaining his heinous crimes? Although we can only speculate at this point, one thing we do know is that biology is not destiny. In The Psychopath Inside (2013), James Fallon recounts the fascinating journey that followed his discovery that his own brain has the neuroanatomy of a psychopath. As a neuroscientist, Fallon was interested in understanding the brain structure of serial killers, and accordingly, he performed brain scans of criminal psychopaths over a period of ten years. His findings revealed that the individuals who committed dreadful deeds had brains with “a rare and alarming pattern of low brain function in certain parts of the frontal and temporal lobes—areas commonly associated with self-control and empathy” (Fallon 2013, 1). Considering the behavior displayed by psychopaths, it is not surprising to find that these areas of the brain, which are essential for moral judgment and behavior, exhibit neuroanatomical abnormalities. In addition to studying the neuroanatomy of psychopaths, Fallon’s research also focused on the neurological effects of Alzheimer’s Disease (AD). With his team, he decided to run genetic tests and take brain scans of AD patients. For the control group he used family members, including himself. While reviewing his family’s scans, he spotted some peculiarities reminiscent of the data he had outlined in a paper he had just submitted for publication, entitled “Neuroanatomical Background to Understanding the Brain of a Young Psychopath.” After double-checking that the scans belonged to his family, he made the shocking discovery that his own brain had the neuroanatomical characteristics of a psychopath. He could be considered a borderline psychopath, not in terms of behavior, since he never engaged in serious deviant behavior or committed any crime, but, as he admits, a prosocial one.

These three cases illustrate the complexities and challenges in determining what causes or motivates individuals to act in certain ways (good or bad) and commit horrendous crimes. Bundy exemplifies deviant behavior in an individual who otherwise seemed normal. The case of Mr. Oft demonstrates how a tumor can affect behavior, and the case of Fallon shows that an individual can have a brain “outside the norm,” with some neuroanatomical similarities to the brain of psychopaths, and nevertheless be a productive and honorable citizen.
There is yet another dimension to consider: the manipulation or alteration of brain structure and function, whether through (1) traumatic insult (Phineas Gage); (2) the purposive manipulation of brain structure (lobotomy); (3) the use of neurostimulation techniques (e.g., Deep Brain Stimulation (DBS), Transcranial Magnetic Stimulation (TMS), Brain-Computer Interfaces (BCIs)); or (4) the use of psychotropic drugs (e.g., Modafinil, Aricept, Prozac). The case of Phineas Gage provides a classic example. Gage was a railroad worker in his mid-twenties who suffered a traumatic insult to his left frontal lobe when a tamping iron pierced his skull. The incident did not affect his mental capacities or memory, and he managed to continue a somewhat normal private and professional life. However, Gage’s demeanor was affected, and he underwent a dramatic personality change. He became irascible and deceitful. His physician, Dr. J.M. Harlow, made the following report, worth quoting at length, about his health condition and demeanor:

His physical health is good, and I am inclined to say that he is recovered… The equilibrium or balance, so to speak, between his intellectual faculty and animal propensities, seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others appearing more feasible. A child in his intellectual capacity and manifestations, he has the animal passions of a strong man. Previous to his injury, though untrained in the schools, he possessed a well-balanced mind, and was looked upon by those who knew him as a shrewd, smart businessman, very energetic and persistent in executing all his plans of operation. In this regard his mind was radically changed, so decidedly that his friends and acquaintances said he was ‘no longer Gage’. (Harlow 1868)

The traumatic insult to the brain changed Gage’s demeanor and moral behavior, which indicates that there can be a correlation between the brain’s structure and function, on the one hand, and behavior, on the other. Moral bioenhancement falls into this same class of interventions, but through the purposive alteration of brain functions to produce behavioral changes, rather than alteration resulting from a traumatic insult. The four aforementioned accounts raise interesting philosophical questions about the neurobiological bases of morality, our understanding of normal behavior, moral and legal responsibility, and the possibility of altering and manipulating moral behavior (moral bioenhancement). This set of questions constitutes the main focus of my exploration with respect to moral bioenhancement.

More generally, this book reflects an ongoing interest in studying and understanding the “criminal mind,” or the “criminal brain.” I am neither a psychiatrist nor a psychologist but a philosopher and ethicist, and therefore my intellectual curiosity centers more on understanding why people behave the way they do than on determining the neurobiological basis of moral behavior, although in Chap. 4 I focus on the neurobiology of morality. What moral theory (or theories) best captures the essence of what it means to be a good and just person? Why do people act virtuously, act badly, or commit heinous crimes? How can the human mind, with all its complexities, beauties, and creative powers, engender atrocities such as the Holocaust, mass murders, or slavery? Should biotechnological means be used to alter behavior, either to enhance the populace or to address intractable psychiatric disorders with moral pathologies such as psychopathy? A possible answer to these questions could be the lack of moral education, an inadequate understanding of what the good life is, or the espousal of ideologies that undermine our humanity or discriminate against particular groups of people based on ethnicity, race, gender, or values. But the issue, as one can expect, is more complex and deserves careful analysis in light of the rapid advancements in neurotechnologies and their increasing availability in clinical and social contexts.

The emergence of the field of neuroethics has raised some interesting challenges concerning our understanding of morality and the boundaries of normal moral behavior. The philosopher and neuroscientist Adina Roskies published a seminal paper in 2002 in which she divides neuroethics into two domains: the ethics of neuroscience and the neuroscience of ethics (Roskies 2002). The former examines the ethical implications arising from the use of neurotechnologies to intervene in the brains of psychiatric patients or of individuals in the social context.
It includes issues such as the risk assessment associated with neurostimulation, the interpretation of data collected through neuroimaging techniques, Brain-Computer Interfaces (BCIs), and the implantation of neuroprosthetics. To a certain extent, the ethics of neuroscience is the ethics of research in neuroscience. The second domain, the neuroscience of ethics, examines concepts usually reserved for philosophical inquiry, such as free will, moral and legal responsibility, personal identity, and the nature of intentions, using knowledge gained through advances in neuroscience. The neuroscience of ethics also includes the investigation of the neural basis of moral cognition, or more plainly, how moral judgments occur in the brain and whether moral cognition is based on rational processes or emotions. In short: Are people hardwired to behave in a certain way, and can the brain be manipulated to alter, for instance, the moral behavior of individuals who pose a threat to public safety, or simply to improve the moral acumen of citizens?

While the traditional means of moral education and development (nurture) are constantly re-assessed, the field of criminology and reflections on moral bioenhancement have adopted biological explanations of crime and bad behavior (nature). Intrinsically, there is nothing problematic about incorporating the knowledge gained through neuroscience and epigenetics into our understanding of human behavior, although the shadows of Lombrosianism and eugenics are always present. The old debate about nature versus nurture could be put to rest if a truly biosocial model can be developed. As Nicole Rafter remarks,

[i]n the decades ahead, biological theories are likely to constitute a significant discourse within mainstream criminology. Should biological theories cling to the medical model, they might drag criminology back into the pathologization of individual offenders that used to characterize their research on crime. But they could also help the field toward becoming a truly biosocial criminology, one that investigates, much more precisely than in the past, the impact of negative environments on gene expression and behavior. (Rafter 2008, 251)

The tension between the potential pathologizing of bad behavior and the development of a “truly biosocial” approach is at the core of the issues addressed in this book. Preliminary findings suggest the possible use of deep brain stimulation to alter or even control criminal behavior (Hoeprich 2011; Canavero 2014; Merkel et al. 2007). Other techniques, such as real-time functional Magnetic Resonance Imaging Brain-Computer Interfaces (rtfMRI-BCI) or the use of Selective Serotonin Reuptake Inhibitors (SSRIs), have been shown in experimental settings to reduce psychopathic personality traits (Birbaumer et al. 2013; Birbaumer and Cohen 2007; Sitaram et al. 2007) and deviant sexual behavior (Renaud et al. 2011). At the same time, proponents of moral bioenhancement argue that traditional means of moral education, such as parental supervision, education, socialization, and the role of social institutions, have mostly failed, and that moral progress inevitably requires the use of biotechnological means to improve human character (Persson and Savulescu 2008, 2012; Douglas 2014; J. Harris 2007).

One last important point about this book: while my philosophical and ethical examination is informed by neuroscience and moral psychology, this book is speculative in nature and adopts an “anticipatory ethics” approach. This project examines the potential future trajectories of emerging neurotechnologies and scientific advances in order to consider their ethical and social implications. As the title of this book suggests, my intention is not to deny the possibility of altering or manipulating moral behavior through technological means. My goal is to emphasize that the scope of such interventions is limited, because the various options available to “enhance morality” improve, or simply manipulate, some elements of moral behavior and not the moral agent per se in all the elements constitutive of moral agency.

1.2 Overview of the Book Considering the potential novel applications of neurotechnologies in psychiatry and the current debate on moral bioenhancement, I contend that more conceptual work is needed to inform the scientific and medical community, and society at large, about the implications of moral bioenhancement before a possible, highly hypothetical at this point, broad acceptance, and potential implementation in areas such as psychiatry, or as a measure to prevent crime in society. To this end, this book is structured according to seven chapters. Chapter 2 starts with an outline of the current scientific realities associated with moral bioenhancement and a critical analysis of its feasibility from a neuroscientific and neurotechnological standpoint. Subsequently, a critique is offered of the conceptualization of human moral psychology by its proponents with a particular attention to the work of Julian Savulescu and Ingmar Persson. In a seminal article of 2008, they articulated the reasons why it is ethically desirable to enhance human moral capacities through technological means (Persson and
Savulescu 2008), which culminated in their book Unfit for the Future: The Need for Moral Enhancement (2012). In Unfit for the Future, they pathologize moral behavior (i.e., the lack of moral motivation due to the problem of weakness of the will) and contend that biotechnology, genetic manipulation, or drug treatments could in principle alter people's weak moral motivation. I argue that proponents of moral bioenhancement, like Persson and Savulescu, fail to account for the complexity of morality and all that it means to be a moral agent. Their focus on the affective dimensions of moral motivation undermines the role of cognition and volition in human moral psychology. On their account, moral bioenhancement at best simply controls moral emotions but does not enhance morality. Chapter 3 provides a potential framework, the clinical ideal, to justify a prudent use of neurotechnological techniques for clinical interventions. As previously noted, recent advances in neuroscience and neurotechnology have provided new insights regarding the neuroanatomy of the psychopathic brain that challenge prior understandings of the disorder and could provide novel opportunities for technological innovation and interventions. Some researchers have hypothesized that psychopathic subjects could learn how to reactivate brain regions implicated in fear conditioning using neurotechnologies, such as real-time functional magnetic resonance imaging brain-computer interface (rtfMRI-BCI) technology, that aim to alter the brain capacities and behavior of people suffering from psychiatric disorders with moral pathologies. In light of these advances, the clinical ideal allows us to evaluate potential justifications, if any, of brain interventions, such as the enhancement of one's disposition to respond morally (i.e., moral capacity). To what extent moral capacity can be manipulated is the focus of Chap. 4, which outlines and explains the latest findings in the neuroscience of moral judgments.
Certain brain areas are associated with basic and moral emotions and therefore could be subject to interventions to manipulate behavior and alter the moral self. In addition, I critically examine the implications of novel techniques that allow determining neuro-anatomical abnormalities in individuals suffering from mental
disorders with moral pathologies. I assert that these techniques must avoid a therapeutically nihilistic approach that classifies these individuals as beyond rehabilitation. As my subsequent analysis demonstrates, however, this is not to say that there is no connection between brain structures or neurochemicals and human behavior. The second part of the chapter offers a unified conception of moral agency that considers the affective, cognitive, and conative dimensions of decision-making. It offers a counternarrative to contemporary accounts of morality and moral agency that emphasize the role of moral emotions over moral reasoning and endorse moral neutrality (i.e., the absence of any claims concerning particular goods). I argue that the misconceptualization of morality by proponents of moral bioenhancement is the result of the abandonment of a unified conception of moral agency grounded in an Aristotelian framework (virtue ethics). Chapter 5 narrates how, in the last few decades, socio-cultural and politico-economic factors have transformed academic science to produce the context of post-academic science. While post-academic science does not deny the goals of academic science, the former diverges in three specific ways: (1) in how scientific knowledge is produced (i.e., emphasis on transdisciplinarity); (2) in how it is funded and promoted as a source of economic power (i.e., the marketability of knowledge); and (3) in the emphasis on the imperative to find technological applications (i.e., greater stress on the norm of utility). In this chapter, I investigate how the particular ethos of post-academic science (postmodern in nature) shapes not only the scientific and technological development of neurotechnologies but also ethical reflections regarding their social and ethical implications.
To anticipate the future trajectory of neuroscientific discovery and neurotechnological advances and more importantly in the context of this work, address their ethical and anthropological implications, I contend that the conception of deliberative democracy advanced by Gutmann and Thompson provides a fruitful approach to tackle disagreement in our pluralistic context by putting moral reasoning and moral disagreement at the center of the political discourse. In Chap. 6, psychopathy is used as an exemplar of a psychiatric condition with moral pathologies for which there is no truly efficacious
treatment available but that could become a target for the implementation of brain intervention strategies. This chapter critically evaluates the feasibility, usefulness, and limitations of techniques or neurotechnologies in the diagnosis and treatment of individuals with psychopathic traits. Neuroscientific developments have allowed, on the one hand, a better understanding of the structure and function of the nervous system, in particular the brain, providing the means to diminish the symptoms of some serious neuropsychiatric illnesses; but on the other hand, they have raised unprecedented ethical, legal, social, and clinical questions regarding possible risks of manipulation and abuse of neurotechnologies for brain interventions. The question of legitimate and illegitimate interventions in the brain by means of neurotechnologies is the focus of Chap. 7. The use of neurotechnologies is bound to expand. Therefore, it is crucial to establish the extent to which society has a moral obligation to protect its citizens by "repairing or altering sick minds," such as those of psychopaths, and to use neurotechnological means for prognostic purposes, to take early preventive measures, and/or to establish by coercion the diagnosis of psychopathy (and apply the related preventive interventions) in individuals with psychopathic traits who are not yet criminal offenders. Additionally, to what extent should psychopaths be held criminally responsible for their actions, and thus deserve punishment, or should they be considered offenders with extenuating circumstances due to neural abnormalities, and hence in need of treatment? With this in mind, I examine which theory of justice informs decisions in sentencing and the level of (moral and legal) responsibility incumbent upon criminal psychopaths.
In the final chapter, I argue that the increasing impetus to use neurotechnologies to alter, control, or manipulate human behavior reflects an anthropological identity crisis in Western culture in which key pointers to protect the integrity of human identity in all its dimensions (the brain and the mind) have been lost. I suggest what I call Identity Integrity as a potential framework to avoid human beings becoming orderers and orderables of technological manipulations.


References

Benning, T.B. 2003. Neuroimaging Psychopathy: Lessons from Lombroso. The British Journal of Psychiatry: The Journal of Mental Science 183 (Dec.): 563–564.

Birbaumer, Niels, and Leonardo G. Cohen. 2007. Brain-Computer Interfaces: Communication and Restoration of Movement in Paralysis. The Journal of Physiology 579 (Pt 3): 621–636. https://doi.org/10.1113/jphysiol.2006.125633.

Birbaumer, Niels, Sergio Ruiz, and Ranganatha Sitaram. 2013. Learned Regulation of Brain Metabolism. Trends in Cognitive Sciences 17 (6): 295–302. https://doi.org/10.1016/j.tics.2013.04.009.

Blair, R.J.R. 2008. The Cognitive Neuroscience of Psychopathy and Implications for Judgments of Responsibility. Neuroethics 1 (3): 149–157. https://doi.org/10.1007/s12152-008-9016-6.

———. 2010. Neuroimaging of Psychopathy and Antisocial Behavior: A Targeted Review. Current Psychiatry Reports 12 (1): 76–82. https://doi.org/10.1007/s11920-009-0086-x.

Burns, Jeffrey M., and Russell H. Swerdlow. 2003. Right Orbitofrontal Tumor With Pedophilia Symptom and Constructional Apraxia Sign. Archives of Neurology 60 (3): 437–440. https://doi.org/10.1001/archneur.60.3.437.

Canavero, Sergio. 2014. Criminal Minds: Neuromodulation of the Psychopathic Brain. Frontiers in Human Neuroscience 8. https://doi.org/10.3389/fnhum.2014.00124.

Douglas, Thomas. 2014. Enhancing Moral Conformity and Enhancing Moral Worth. Neuroethics 7: 75–91. https://doi.org/10.1007/s12152-013-9183-y.

Fallon, James. 2013. The Psychopath Inside: A Neuroscientist's Personal Journey into the Dark Side of the Brain. https://www.smithsonianmag.com/store/books/psychopath-inside-neuroscientists-personal-journey/?no-ist.

Harlow, J.M. 1868. Recovery after Severe Injury to the Head. Publications of the Massachusetts Medical Society 2: 327–346.

Harris, John. 2007. Enhancing Evolution: The Ethical Case for Making Better People. Princeton, NJ; Woodstock: Princeton University Press.

Hoeprich, M. 2011. An Analysis of the Proposal of Deep Brain Stimulation for the Rehabilitation of Criminal Psychopaths. Presentation for the Michigan Association of Neurological Surgeons 11.


Kiehl, K.A., A.M. Smith, R.D. Hare, A. Mendrek, B.B. Forster, J. Brink, and P.F. Liddle. 2001. Limbic Abnormalities in Affective Processing by Criminal Psychopaths as Revealed by Functional Magnetic Resonance Imaging. Biological Psychiatry 50 (9): 677–684.

Ling, Shichun, and Adrian Raine. 2018. The Neuroscience of Psychopathy and Forensic Implications. Psychology, Crime & Law 24 (3): 296–312. https://doi.org/10.1080/1068316X.2017.1419243.

Merkel, Reinhard, G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. 2007. Intervening in the Brain: Changing Psyche and Society. Ethics of Science and Technology Assessment. Berlin, Heidelberg: Springer-Verlag. www.springer.com/gp/book/9783540464761.

Persson, Ingmar, and Julian Savulescu. 2008. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy 25 (3): 162–177. https://doi.org/10.1111/j.1468-5930.2008.00410.x.

———. 2012. Unfit for the Future: The Need for Moral Enhancement. 1st ed. Oxford: Oxford University Press.

Pridmore, Saxby, Amber Chambers, and Milford McArthur. 2005. Neuroimaging in Psychopathy. Australian and New Zealand Journal of Psychiatry 39 (10): 856–865. https://doi.org/10.1111/j.1440-1614.2005.01679.x.

Rafter, Nicole. 2008. The Criminal Brain: Understanding Biological Theories of Crime. New York: NYU Press.

Renaud, Patrice, Christian Joyal, Serge Stoleru, Mathieu Goyette, Nikolaus Weiskopf, and Niels Birbaumer. 2011. Real-Time Functional Magnetic Imaging-Brain-Computer Interface and Virtual Reality: Promising Tools for the Treatment of Pedophilia. Progress in Brain Research 192: 263–272. https://doi.org/10.1016/B978-0-444-53355-5.00014-2.

Roskies, Adina. 2002. Neuroethics for the New Millennium. Neuron 35 (1): 21–23. https://doi.org/10.1016/S0896-6273(02)00763-8.

Sitaram, Ranganatha, Andrea Caria, Ralf Veit, Tilman Gaber, Giuseppina Rota, Andrea Kuebler, and Niels Birbaumer. 2007. FMRI Brain-Computer Interface: A Tool for Neuroscientific Research and Treatment. Computational Intelligence and Neuroscience 2007. https://doi.org/10.1155/2007/25487.

Yang, Y., A. Raine, P. Colletti, A.W. Toga, and K.L. Narr. 2009. Abnormal Temporal and Prefrontal Cortical Gray Matter Thinning in Psychopaths. Molecular Psychiatry 14 (6): 561–562, 555. https://doi.org/10.1038/mp.2009.12.

2 The Scope and Limits of Moral Bioenhancement

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_2

2.1 Conceptual Issues and Scientific Realities

2.1.1 Historical Perspective

The conceptualization of moral bioenhancement can be traced back to the seventeenth and eighteenth centuries in the writings of philosophers such as Francis Bacon (The New Atlantis, 1627; Novum Organum, 1620), René Descartes (Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, 1637), and the Marquis de Condorcet (Esquisse d'un tableau historique des progrès de l'esprit humain, 1795). They participated in the intellectual movement of the Enlightenment that promoted the advancement of new scientific discoveries for the betterment of the human condition and suggested the possibility of enhancing human capacities. Bacon, considered a precursor of human enhancement, outlines his philosophical project in Novum Organum (1620) and The New Atlantis (1627). In these works, he delineates the kind of community best suited to engage in scientific endeavors to promote the good of humankind, suggesting human dominion over nature through the means of science and the pursuit of new knowledge. Descartes is another thinker who believed in the
capacity of science to control and rule over nature. In the Discourse on Method (Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences), he claims that "we may find a practical philosophy by means of which ... we can ... render ourselves the masters and possessors of nature" (1637). We must wait until the end of the following century to see a more robust conceptualization of human enhancement that includes moral bioenhancement. De Condorcet, in Esquisse d'un tableau historique des progrès de l'esprit humain (Sketch for a Historical Picture of the Progress of the Human Mind, 1795), caught a glimpse of the possibility of improving human moral capacities. He writes:

may it not be expected that the human race will be meliorated by new discoveries in the sciences and the arts, and, as an unavoidable consequence, in the means of individual and general prosperity; by farther progress in the principles of conduct, and in moral practice; and lastly, by the real improvement of our faculties, moral, intellectual and physical, which may be the result either of the improvement of the instruments which increase the power and direct the exercise of those faculties, or of the improvement of our natural organization itself? (Condorcet 1979/1795)

For de Condorcet, faith in scientific progress meant that the human condition would improve not only at the material level but also through heightened moral sensitivity and improved physical, intellectual, and behavioral aptitudes. It is worth noting that contemporary reflections have not deviated from these original ideas. The nineteenth century witnessed two important shifts in science and in philosophy concerning the conceptualization of human nature, which reinforced the idea of the possibility, if not the desirability, of human enhancement, including moral bioenhancement. In 1859, Darwin published his Origin of Species, in which he challenged the idea of a fixed human nature and outlined the evolutionary mechanisms that resulted in the human species. Since, according to Darwin's theory of evolution, humanity is on a constant evolutionary path, albeit one based on biological processes, there is no reason to think that it cannot continue its evolution through technological means. This line of reasoning has been picked up by contemporary thinkers like Ray Kurzweil in The Singularity is Near: When Humans Transcend Biology (2005). He argues that biological
evolution has created a species able to generate technologies that resulted in "the new evolutionary process of technology, which makes technological evolution an outgrowth of—and a continuation of—biological evolution" (Kurzweil 2005, 42). On this account, homo sapiens is being replaced, through its own hands, by homo technicus. Another contemporary thinker to embrace this technological vision of the human future is philosopher John Harris, who adopts this approach to human evolution in his book Enhancing Evolution (2007). He advocates not only the use of human enhancement technologies for the betterment of human life overall (e.g., better health, longer life span, and a happier life) but also their use to control our very evolutionary path, to achieve what he calls the Holy Grail of enhancement: immortality (J. Harris 2007, 59). In the same vein, Kurzweil makes an even bolder prediction, arguing that the singularity will allow human beings to transcend the limitations of their biology and brains. In his mind, human biological limitations are not destiny because "we [human beings] will gain power over our fates. Our mortality will be in our own hands…" (Kurzweil 2005, 9). From a more esoteric perspective, but with a similar impact, Friedrich Nietzsche called for the overcoming of "man" (human beings). In his critique of nineteenth-century Western culture, he argues that to restore what he calls the "good," that is, the "will to power in man," a new type of human being is needed, a sort of superman (the Übermensch) who will reinstate the will (to power) as the origin of all values and who is able to carry out the revaluation of all values in Western culture (Nietzsche 2013). While not referring to scientific knowledge, Nietzsche's reasoning provided a philosophical basis for the call to transform human beings that has inspired current proponents of human enhancement (More 2010).
In the twentieth century, we saw an acceleration of the pace toward the realization of true human enhancement. Julian Huxley, in “Transhumanism” (1957), envisioned a humanity entering a new era of existence, stating that the human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity…the human species will be on the threshold of a new
kind of existence, as different from ours as ours is from that of Pekin man. It will at last be consciously fulfilling its real destiny. (Huxley, 1957)

In the following decades, other commentators made similar claims. Marvin Minsky, anticipating the advent of Artificial Intelligence in the 1970s, claimed that "extended lives" would require the replacement of our biological brains with computational devices. F.M. Esfandiary (also known as FM-2030) introduced in his book Are You a Transhuman? (1989) the concept of the "transitional human," a human being using technology toward becoming a posthuman, and suggested the betterment of human character. Finally, the World Transhumanist Association (founded in 1998, now called Humanity+) follows the same trend of thought and "envision[s] the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering…" (Humanity+ 2009). These various assertions about the desirability and feasibility of human enhancement did not, with the exception of Humanity+, reflect the state of scientific and technological development required to transcend human biological and cognitive limitations. As with other domains of human knowledge, they are the outcome of creative minds envisioning a future for the human species shaped and determined by technology. It is only in the last several decades that progress in biotechnology, neuroscience, and bioengineering has provided the scientific backing necessary to move these visions outside the sci-fi milieu and into the mainstream, although there remains a lot of hype regarding the feasibility of human enhancement in general, and of moral bioenhancement in particular, as envisioned by their proponents.

2.1.2 The New Gospel: Salvation Through the Curing of Human Character

Between 2006 and 2008, the debate over moral bioenhancement really took off as the result of two separate publications, by Ingmar Persson and Julian Savulescu (2008) and by Thomas Douglas (2008) respectively, and a presentation by James Hughes (Hughes 2006). In 2006, Hughes envisioned the possibility of implementing technological means to
expand the moral horizon of the human species. In the future, he predicted in 2006, we will have many technologies that will allow us to modify and assist our emotions and reasoning. One of the purposes we will put these technologies to is to assist our adherence to self-chosen moral codes and citizenship obligations. For instance, we will be able to suppress unwelcome desires, enhance compassion and empathy, and expand our understanding of our social world and the consequences of actions. (Hughes 2006)

Reflections outside a merely transhumanist framework (Hughes) were recast in two seminal articles. In their respective articles, Persson and Savulescu, and Douglas, attempt to justify why enhancing human moral capabilities and sensitivity through technological means is ethically desirable, if not mandatory, for the survival of the human species (Persson and Savulescu 2008; Douglas 2008; see also Harris 2010). Whereas Persson and Savulescu predict a global collapse of human society if no moral bioenhancement takes place, Douglas argues that there are no good objections to moral bioenhancement and that, morally speaking, there is therefore no reason for individuals not to undergo psychological alterations to ameliorate their moral capacities if they wish to do so (Douglas 2008). Combined, these perspectives have provided, it is argued, a pragmatic, political, and ethical justification to move forward with an agenda that promotes the alteration of human moral capacities. With these overarching rationales in mind, let's take a deeper dive into their arguments.

2.1.3 The Coming Dark Age

In Unfit for the Future (2012), Persson and Savulescu argue that the environmental and geopolitical problems we are facing as a species can be resolved only through an additional biotechnological intervention to secure the future of subsequent generations. Traditional methods such as education and socialization through parental supervision and the role of social institutions are deemed insufficient to guarantee the improvement of human character and ultimately, due to threats from
weapons of mass destruction, climate change, and environmental degradation, the survival of the human species (Persson and Savulescu 2008, 2012, 2013). Our "motivational drives … are too recalcitrant," they write, and consequently, "human beings are not by nature equipped with a moral psychology that empowers them to cope with the moral problems that these new conditions of life create. … [T]he current predicament of humankind is so serious that it is imperative that scientific research explore every possibility of developing effective means of moral bioenhancement, as a complement to traditional means" (Persson and Savulescu 2012, 1–2). On their account, traditional moral education cannot overpower our biases and impulses, such as selfishness, xenophobia, and nepotism; therefore, alteration of the human genome and moral psychology must occur.1 Persson and Savulescu suggest a paradigm shift in addressing the challenges we are facing as a species, and it is therefore crucial to understand how change occurs in society, especially when a paradigm shift might require revolutionary methods. The idea of revolution has been used to describe many areas of human activity, such as industry (the Industrial Revolution, beginning in the second half of the eighteenth century) and science (the Scientific Revolution, 1550–1700). Originally, however, it was a political concept first developed by Aristotle in The Politics in relation to forms of government. Aristotle argues that any form of government implies specific conceptions of justice and equality. In Ancient Greece, two political factions, the democrats and the oligarchs, fought over the question of the status of each individual in society. On the one hand, democrats stressed that individuals were equal in all respects; on the other, oligarchs asserted that if an individual was unequal in some respect (in property, for instance), then he was unequal in all respects. Because of this ideological clash, revolutions occur.
As Aristotle puts it, “[a]ll these forms of government have a kind of justice, but tried by an absolute standard, they are faulty; and, therefore, both parties, whenever their share in the government does not accord with their preconceived ideas, stir up revolution” (Aristotle The Politics, 5.1). Subsequently, other political philosophers addressed the question of revolutions by stressing the importance of creating a form of government able to protect the state against revolutions while at the same time
recognizing that some changes are sometimes necessary to ensure political stability (Niccolò Machiavelli [1469–1527]).2 For instance, English writer John Milton (1608–1674) insists that a revolution is a right of society to protect itself against abusive power and to establish a new government on new principles that would promote the well-being of citizens.3 Milton sees revolutions as the means to achieve freedom, as was the case in the French Revolution (freedom from an oppressive class) and the American Revolution (freedom from English tutelage).4 G.W.F. Hegel is another influential figure in the development of the notion of revolution, a notion adopted by Karl Marx, who used Hegelian political theory to advocate freedom in a classless society. More importantly, Hegel developed a philosophy that explained revolutions in terms of a change in categories, either in society or in scientific explanation. In his Philosophy of Nature, he addresses the question of revolutions in terms of a process in which a change of categories occurs: "All cultural change reduces itself to a difference of categories. All revolutions, whether in the sciences or world history, occur merely because spirit has changed its categories in order to understand and examine what belongs to it…" (Hegel 1970, 202). Having introduced these caveats, we can make the following distinctions among elements, markers, or conditions for a revolution. Even when a particular state of affairs may be neither a necessary nor a sufficient condition for a scientific or technological revolution, it might nevertheless serve as a useful marker aiding in the identification of important distinctions and levels of scientific development or social progress. Three varieties of conditions or markers of scientific development or social progress deserve special emphasis:

1. the introduction of new descriptive categories for an area of science and society (i.e., experience is described and, in some sense, experienced in a different way);
2. the introduction of new explanatory categories or frameworks for an area of science and/or technology; and
3. the introduction of a new way of organizing or systematizing categories or frameworks.


These three conditions are jointly sufficient for identifying a revolution. When an area of reality is described, explained, and its explanations organized in new ways, a dramatic revolution has occurred. However, each of the conditions on its own may be sufficient for indicating a certain level of revolutionary change. The character of scientific knowledge or social facts may be such that when the second condition is satisfied the first must be as well. So, too, when the third condition is satisfied, the first and second must be as well. That is, new explanatory categories may change the way in which we experience reality, and new ways of organizing or systematizing the categories of explanation may reshape those categories and, in reshaping them, change how we both know and manipulate what they describe. When we apply these three conditions to the debate on moral bioenhancement, the following picture emerges. First, regarding new descriptive categories, on the account of the proponents of moral bioenhancement, morality is no longer experienced exclusively as a process of personal growth. The moral experience becomes, in part, a process of submission in which our brain and mind are subjected to technological manipulations. This in turn reduces morality to a biological phenomenon, the new explanatory category, sidelining traditional understandings of moral development and character formation. The ensuing outcome is a new social order in which personal and public affairs are organized and systematized according to these categories. This impetus to use biotechnology in the context of political expediency is reflected in Persson and Savulescu's account.
In their view, it is only through the "conversion of the majority" that liberal democracies will be able to address existential threats and secure the survival and prosperity of the human species.5 The human mind is malleable, and consequently there is an imperative to instill in individuals the norms and values that will secure the human future. They even go as far as to promote state interventions. Liberal democracies have to "take the step from a social liberalism, which acknowledges the need for state interference to neutralize the glaring welfare inequalities within a society, to a global(ly responsible) liberalism, which extends welfare concerns globally and into the remote future" (Persson and Savulescu 2012, 102). State intervention, on their account, is justified on the basis that the majority of humans do not have the moral strength to set limits
on their consumption or to abstain from objectionable behaviors. Even if some people were to seek and achieve a certain level of altruism or demonstrate a high level of compassion or respect, Persson and Savulescu argue, those displaying such behavior would remain a minority of "eccentrics." To be fair, it is important to note that Persson and Savulescu do not suggest the replacement of traditional or cultural means of moral development. However, they are quick to point out that the usual means of moral education have not been as "effective and quick" as the use of cognitive enhancement techniques to alter and manipulate human behavior. The reason is that, while individuals know what is right, they do not do what is right (Persson and Savulescu 2008). For Persson and Savulescu, knowing the good is insufficient. What is needed is a strong motivation to do good, not only to regulate impulses and desires but also to enhance the capacity to be altruistic or to feel sympathy for social issues such as poverty, inequality, and so on (Persson and Savulescu 2016). Thus, they promote the alteration of the psychological make-up of individuals through biological interventions, since the capacity for sympathy or altruism appears to have a biological basis. They even argue that since women tend to be more caring and have a greater capacity for sympathy than men, changing the psychological characteristics of men to make them more like women with regard to these attributes is justifiable (Persson and Savulescu 2012, 2016). John Harris is even more radical in his proposal. He suggests "a radical feminization of the world's future population," pointing out that a good measure of the evils of the world had been brought about both by the dominance of men, and by the dominance of certain distinctively (though not exclusively) male characteristics.
It might … seem a rational and progressive step to attempt to create a society from which these disastrous characteristics [egocentrism, aggression, competitiveness, intolerance], and the characters that possessed them, had been eliminated. (Harris 1985, 166, see also 2016)6

When Harris wrote this statement in 1985, the biotechnological means to alter human psychology did not have the power and range of interventions available today, but his publication paved the way for contemporary
reflections on moral bioenhancement. He envisions reforms for a new type of society based not on the development and implementation of novel political agendas or moral theories but rather on a new type of citizenry. The society that would be able to produce such citizens "would be founded on and embody, not a political so much as, say, a eugenic theory" by attempting to alter human behavior to remove aggression from the human psychological make-up through genetic engineering or other biotechnological means (J. Harris 1985, 167). Similar views have been defended by Paul Zak and colleagues with regard to testosterone, which they see as the opposite of oxytocin, the "moral molecule": There is an extensive literature associating male aggressive and antisocial behaviors with elevated testosterone (T). … These correlations should be viewed with caution as T is highly dependent on a variety of environmental conditions…Correlational studies of salivary T in humans have found that high T males are more likely to have physical altercations, divorce more often, spend less time with their children, engage in competitions of all types, have more sexual partners, face learning disabilities, and lose their jobs more often, suggesting that high T men may behave differently than other men. (Zak et al. 2009)

They conclude that high levels of testosterone cause men to behave antisocially. The radicality of these claims should not be a surprise in light of current debates about moral bioenhancement. When morality is reduced to a biological phenomenon, radical views such as those advanced by Harris, Persson and Savulescu, and Zak and colleagues make sense a priori, because morality then becomes subject to technological manipulation. The implementation of a hypothetical agenda that would transform the (male) human psyche should be a source of serious concern. Even if it is true that male behavior has been the cause of great evil in the world, the same behaviors, directed toward other goals, have also been the root of moral progress in the world. Fighting injustice, combatting Nazi ideology, or protecting one’s family might require aggressive behavior toward perpetrators of evil. Persson and Savulescu, Harris, and Zak and colleagues make a category mistake by confounding attributes of

2  The Scope and Limits of Moral Bioenhancement 

23

human behavior that are not intrinsically bad, and are even necessary for survival, with misguided behavior that stems from deviant ideologies and wrong dispositions. In addition, the notion that we absolutely need moral progress to survive as a species is contradicted by recent scholarship. Steven Pinker, for instance, points out that despite the undeniable challenges we see all around the world, human civilization has made remarkable progress in areas such as democracy, quality of life, peace, and equal rights (Pinker 2018).

2.2 Becoming Fit for the Future

According to proponents of moral bioenhancement, moral progress should occur through the enhancement of the moral capacities of human beings by means of biomedical treatment. (There are anthropological assumptions, based on functionalism and dualism, that need to be acknowledged. These assumptions are addressed more fully in Chap. 3, regarding the science of morality, and in Chap. 8, in relation to what I call Identity Integrity.) Using the examples of smoking and unhealthy diet for purposes of comparison, Persson and Savulescu argue that techniques like cognitive psychotherapy might help but are not sufficient. In some cases, people are genetically predisposed to nicotine or sugar addiction, and therefore a biomedical intervention is warranted. By analogy, Persson and Savulescu cite the use of oxytocin, a neurotransmitter and hormone involved in reproductive and social behavior, for moral bioenhancement, since it has been demonstrated that administering oxytocin to subjects via nasal spray increases cooperation and trusting behavior (Persson and Savulescu 2012; see also Kosfeld et al. 2005). Many claims have been made in the media and the scientific literature about oxytocin, from oxytocin as the trust hormone7 (Honigsbaum 2011) to the “moral molecule” as “the new science of what makes us good or evil” or “the source of love and prosperity” (Zak 2001, 2012). Oxytocin has also been connected to human trustworthiness and increased generosity (Zak et al. 2005; Zak et al. 2007; Barraza and Zak 2009; Barraza et al. 2011). Paul J. Zak popularized the notion of oxytocin as the moral molecule. Building on a social experiment called “The Trust Game,” which was
developed by Berg et al. (Berg, Dickhaut, and McCabe 1995) and measures the level of trust in economic decisions, Zak observed that people who shared more (altruism) and were more trusting had higher levels of oxytocin in the brain, and he concluded that oxytocin was the key factor explaining these differences; variations in personality and character traits did not seem to him to be relevant factors. On his account, “in stable and safe circumstances, oxytocin makes us mostly good. Oxytocin generates the empathy that drives moral behavior, which inspires trust, which causes the release of more oxytocin, which creates more empathy” (Zak 2012, 64). These claims, however, have been challenged on three grounds: 1) there is a lack of scientific evidence that trust in humans is associated with or caused by oxytocin; 2) many studies cannot replicate the results of previous studies, or are implicitly biased and report false positives; and 3) oxytocin can have the opposite effect, promoting aggressiveness and defensiveness (Nave et al. 2015; Walum et al. 2016; Shen 2015; De Dreu et al. 2010).8 Persson and Savulescu recognize that the examples they provide (oxytocin and serotonin) do not constitute biomedical means to enhance morality. They emphasize that both illustrations demonstrate that “manipulations of biology can have moral effects.” Therefore, they claim that “[t]here are then prospects of moral bioenhancement, even if so far no biomedical means of moral enhancement with sufficiently precise effects have been discovered” (Persson and Savulescu 2012, 121; see also Savulescu and Persson 2012; Douglas 2008). In their view, while it might be the case that the magic pill for moral bioenhancement will never be developed, research in this area is recent and limited in scope. It might just be a question of time until pertinent biotechnologies emerge (Persson and Savulescu 2012, 121).
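Since the Trust Game recurs throughout this literature, a minimal sketch of its payoff structure may help readers unfamiliar with the design. The endowment of 10 units and the tripling of the transfer follow the Berg, Dickhaut, and McCabe setup; the sample decisions below are illustrative assumptions, not data from any study.

```python
# A minimal sketch of the payoff structure of the Berg, Dickhaut, and
# McCabe (1995) "Trust Game." The investor sends part of an endowment;
# the amount is multiplied before reaching the trustee, who may return
# a share. The sample decisions below are invented for illustration.

def trust_game(endowment, sent, returned_fraction, multiplier=3):
    """Compute final payoffs (investor, trustee) for one round."""
    assert 0 <= sent <= endowment
    received = sent * multiplier              # amount the trustee receives
    returned = received * returned_fraction   # amount sent back
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

# A fully trusting investor paired with a reciprocating trustee:
print(trust_game(10, 10, 0.5))   # -> (15.0, 15.0)

# A distrustful investor who sends nothing:
print(trust_game(10, 0, 0.5))    # -> (10.0, 0.0)
```

The point Zak builds on is visible in the arithmetic: mutual trust leaves both parties better off than the no-trust baseline, so the amount sent (and returned) serves as a behavioral measure of trust and reciprocity.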
Regardless of whether such technology will ever be available, they are committed to requiring moral bioenhancement should appropriate means become available and prove safe and effective (Persson and Savulescu 2008). Key to understanding their argument about the necessity of moral enhancement is how human culture has developed over the centuries. The growth of scientific knowledge and its corollary, education, have provided the means to improve not only the human condition (e.g., standards of living, health, political structures, etc.) but also the power of
the human mind. Without any interventions through biological or genetic means, the human mind has generated unprecedented progress in various domains of knowledge, science, technology, and cultural achievement. Increasing scientific advances in genetics, neuroscience, and neurotechnology have allowed the manipulation of the human genome and the alteration of brain structure and function, which, they hypothesized in 2008, could result in genetic memory enhancement, memory-enhancing drugs, improved working memory, better self-control, and heightened mental energy and wakefulness (Persson and Savulescu 2008). Buying into the apocalyptic scenario suggested by Persson and Savulescu, Parker Crutchfield, under the guise of the potential “ultimate harm” argument and a misapplication of population health, advocates for compulsory moral bioenhancement. What is most troubling is not necessarily the claim that moral bioenhancement should be compulsory, although I have serious objections on this point. What Crutchfield proposes is that it should also be covert. In a typically pragmatic and utilitarian fashion characteristic of proponents of moral bioenhancement, he asserts that the argument … is intended to support the claim that if moral bioenhancement ought to be compulsory, it ought to be covert… Given that the costs of not preventing ultimate harm are indefinitely high, there is no intervention the costs of which would outweigh utility of the prevention of ultimate harm. Thus, if an intervention is necessary to prevent ultimate harm, and the intervention will actually prevent ultimate harm, then that intervention ought to be carried out, because the cost of not doing so is indefinitely high. Moral bioenhancement is necessary, because as cognitive enhancement makes causing ultimate harm more accessible to nefarious moral agents, ultimate harm is much more likely, unless everyone is enhanced.
Where it used to require an extraordinarily coordinated effort to cause ultimate harm, now, or in the near future, it only takes one person. Thus, moral bioenhancement ought to be compulsory for everyone. (Crutchfield 2019, 113)
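The decision-theoretic skeleton of this argument can be rendered as follows (my reconstruction; the symbols $c_i$, $p$, and $U$ are not Crutchfield's notation):

```latex
\[
  \underbrace{c_i}_{\text{cost of intervention } i} \;<\; \infty,
  \qquad
  \underbrace{p \cdot U}_{\text{expected cost of inaction}},
\]
% where $p > 0$ is the probability of ultimate harm absent intervention
% and $U$ is its disvalue. If $U$ is "indefinitely high," then
% $p \cdot U$ exceeds any finite $c_i$, so any effective intervention
% is deemed warranted.
```

Seeing the argument in this form makes plain that everything turns on two contestable moves: taking $U$ to be unbounded, and inferring from this utility calculus that the intervention may be imposed covertly.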

The second objectionable claim Crutchfield makes is that he pits utility against liberty and transparency. He argues that if a program of compulsory bioenhancement were covert, there would be no liberty restrictions “as people would be unaware of the intervention in the first
place and so there would be no need for such policies to compel participation” (Crutchfield 2019, 116). Indeed, liberty restrictions would cease to be an issue, but not because people would be free to accept whatever interventions they deem appropriate; rather, Crutchfield simply denies that people are free moral agents who ought to decide for themselves. Referring to population health, he argues that “a covert program would also better promote population health than an overt program” (Crutchfield 2019, 117). In the face of more than two centuries of debate and public discussion over population health, it is astonishing to promote such a line of reasoning, and it is difficult to find good examples of covert interventions that did not provoke public uproar. Crutchfield’s proposal again epitomizes the move of pathologizing human moral behavior and the attempt to justify interventions on the masses, as if citizens cannot make their own choices and should be forced to behave in ways decided by alleged experts. Not only is it unclear who is to say how people should behave and according to which values and norms, but such a proposal is dangerous, irresponsible, and has no legal basis. The claim that moral bioenhancement should be mandatory is somewhat tempered by Douglas, who asserts that “under certain conditions that may come close enough to obtaining, individuals may permissibly use biomedical technologies to morally enhance themselves” (Douglas 2013, 161; 2008). Whether one mandates or merely permits such interventions does not change the fundamental evaluation of the feasibility and, more importantly, the desirability of moral bioenhancement, as I will make clear in the next chapter.

Notes

1. It should be noted that the attempt to apply scientific explanation and objectivity to morality goes back to the seventeenth century. According to Hunter and Nedelisky, “the first self-consciously scientific approaches to morality” occurred as the result of the work of Hugo Grotius (1583–1645) and Samuel von Pufendorf (1632–1694), who “explicitly recognized a need for a moral theory, rooted in scientific objectivity, that could create a stable political society in the face of disagreement, skepticism, and the pragmatic failure of scholastic philosophy” (Hunter and Nedelisky 2018,
38–39). See also the analysis by Schneewind on Grotius’ approach to morality in The Invention of Autonomy (Schneewind 1997). Grotius is considered to be the first modern thinker to argue that ethics does not need to be grounded on God (Hunter and Nedelisky 2018, 39–40, see also Schneewind 1997, 72). 2. Machiavelli recognizes the necessity to establish a mechanism that would ensure the political stability of a country. In his Discourses, he notes that “among the more necessary things instituted by those who have prudently established a Republic, was to establish a guard to liberty, and according, as this was well or badly place, that freedom endured a greater or less (period of time)…No more useful and necessary authority can be given to those who are appointed in a City to guard its liberty, as is that of being able to accuse the citizen to the People or to any Magistrate or Council, if he should in any way transgress against the free state. This arrangement makes for two most useful effects for a Republic. The first is, that for fear of being accused, the citizens do not attempt anything against the state, and if they should (make an) attempt they are punished immediately and without regard (to person). The other is, that it provides a way for giving vent to those moods which in whatever way and against whatever citizens may arise in the City. And when these moods do not provide a means by which they may be vented, they ordinarily have recourse to extra ordinary means that cause the complete ruin of a Republic. 
And there is nothing which makes a Republic so stable and firm, as organizing it in such a way that changes in the moods which may agitate it have a way prescribed by law for venting themselves…Whoever becomes Prince either of a City or a State, and more so if his foundations are weak, and does not want to establish a civil system either in the form of a Kingdom or a Republic, (will find) the best remedy he has to hold that Principality is (he being a new Prince) to do everything anew in that State; such as in the City to make new Governors with new titles, with new authority, with new men, (and) make the poor rich, as David did when he became King, who piled good upon the needy, and dismissed the wealthy empty-handed. In addition to this he should build new Cities, destroy old ones, transfer the inhabitants from one place to another, and in sum, not to leave anything unchanged in that Province, (and) so that there should be no rank, nor order, nor status, nor riches, that he who obtains it does not recognize it as coming from him” (Machiavelli 1517, chapters 5, 8, 26).
3. See for instance in The Tenure of Kings and Magistrates where Milton argues that the power of kings and magistrates comes from the people themselves: “…since the King or Magistrate holds his authority from the people, both originally and naturally for their good in the first place, and not his own, then may the people as often as they shall judge it for the best, either choose him or reject him, retain him or depose him though not a Tyrant, merely by the liberty and right of free born Men, to be governed as seems to them best.” However, when tyrants (a tyrant is defined by Milton as “he who regarding neither Law nor the common good reigns only for himself and his faction”) abuse their power, the people who elected them have the right to reject their authority: “Because his power is great, his will boundless and exorbitant, the fulfilling whereof is for the most part accompanied with innumerable wrongs and oppressions of the people, murders, massacres, rapes, adulteries, desolation, and subversion of cities and whole provinces, look how great a good and happiness a just King is, so great a mischief a Tyrant. As he the public father of his Country, so this is the common enemy … Against whom what the people lawfully may do, as against a common pest, and destroyer of mankind, I suppose no man of clear judgment need go further to be guided by the very principles of nature in him” (Milton 1650). 4. Melvin Lasky remarks that a (political) revolution can be both a means (for change) and an end (creation of a new society according to new principles). As he writes, “revolution, taken only as means, suggests that there are situations of political and social crisis where, all available methods of change—peaceful or gradual or reasonable, or any other approach associated with concessions, compromise, and reconciliation—having proved futile, only a sharp break and basic overturn can now serve to defend justice and establish a tolerable order. 
So it is that not only radicals but also moderates, reformers, liberals, and even conservatives, find themselves enlisted in a revolution. Revolution, as an end, suggests that a new society is conceivable, and is indeed possible, only on the basis of fundamentally different social, political and cultural principles” (Lasky 1976, 79). A revolution in science is a revolution of the latter type, that is, it conceives new models of explanation and establishes new principles (or categories to use Hegel’s terminology) accordingly. 5. The impetus to build a more peaceful society through the mastering of human nature was already on the mind of Enlightenment thinkers. Thomas Hobbes (1588-1679) witnessed the impact of the French Wars of
Religion (1562–1598) and the Thirty Years’ War (1618–1648) in Central Europe and famously stated that “every man is enemy to every man” and that the “…life of man [is] solitary, poor, nasty, brutish, and short” (Hobbes 1651), suggesting that people in a “state of nature” are selfish and at war with one another. To address this state of affairs, Hobbes, in the Leviathan (1651), concludes that 1) the “state of nature” must be replaced by civil society, 2) the rule of reason commands people to seek peace through mutually beneficial agreements, and 3) the Laws of Nature constitute the basis of civil society. “These are the laws of nature, dictating peace, for a means of the conservation of men in multitudes; and which only concern the doctrine of civil society. … These laws of nature are immutable and eternal; for injustice, ingratitude, arrogance, pride, iniquity, acception of persons, and the rest, can never be made lawful” (Leviathan, chapter 15). Once he establishes the Laws of Nature, he outlines his scientific approach to ethics in these terms: “The science of them, is the true and only moral philosophy. For moral philosophy is nothing else but the science of what is good, and evil, in the conversation, and society of mankind…Now the science of virtue and vice is moral philosophy, and therefore the true doctrine of the laws of Nature is the true moral philosophy” (Leviathan, chapter 15). 6. Similar views have been defended by Paul Zak with regard to testosterone, which he sees as the opposite of oxytocin, the “moral molecule”: “There is an extensive literature associating male aggressive and antisocial behaviors with elevated testosterone (T).
… These correlations should be viewed with caution as T is highly dependent on a variety of environmental conditions…Correlational studies of salivary T in humans have found that high T males are more likely to have physical altercations, divorce more often, spend less time with their children, engage in competitions of all types, have more sexual partners, face learning disabilities, and lose their jobs more often, suggesting that high T men may behave differently than other men.” They conclude that high levels of testosterone cause men to behave antisocially (Zak et al. 2009). 7. In an interview, Paul Zak stated that “oxytocin is primarily a molecule of social connection. It affects every aspect of social and economic life, from who we choose to make investment decisions on our behalf to how much money we donate to charity. Oxytocin tells us when to trust and when to remain wary, when to give and when to hold back” (Honigsbaum 2011). Paul Zak also talks about trust being chemical (Nowack and Zak 2017).
8. In an article in The Atlantic, Ed Yong reports that Ernst Fehr, who led the original study, has had second thoughts about the evidence available on the effects of oxytocin on human behavior. Yong points out that “Ernst Fehr from the University of Zurich, who led the original Nature study, notes that none of the other studies cleanly replicated his own. Each introduced tweaks, and sometimes flaws, that would have changed the results. Still, he says, ‘What we’re left with is a lack of evidence. I agree that we have no robust replications of our original study, and until then, we have to be cautious about the claim that oxytocin causes trust’” (Yong 2015).

References

Barraza, Jorge A., and Paul J. Zak. 2009. Empathy toward Strangers Triggers Oxytocin Release and Subsequent Generosity. Annals of the New York Academy of Sciences 1167 (June): 182–189. https://doi.org/10.1111/j.1749-6632.2009.04504.x.
Barraza, Jorge A., Michael E. McCullough, Sheila Ahmadi, and Paul J. Zak. 2011. Oxytocin Infusion Increases Charitable Donations Regardless of Monetary Resources. Hormones and Behavior 60 (2): 148–151. https://doi.org/10.1016/j.yhbeh.2011.04.008.
Condorcet, Jean-Antoine-Nicolas de Caritat. 1979. Sketch for a Historical Picture of the Progress of the Human Mind. Westport, CT: Greenwood Press.
Crutchfield, Parker. 2019. Compulsory Moral Bioenhancement Should Be Covert. Bioethics 33 (1): 112–121. https://doi.org/10.1111/bioe.12496.
De Dreu, Carsten K.W., Lindred L. Greer, Michel J.J. Handgraaf, Shaul Shalvi, Gerben A. Van Kleef, Matthijs Baas, Femke S. Ten Velden, Eric Van Dijk, and Sander W.W. Feith. 2010. The Neuropeptide Oxytocin Regulates Parochial Altruism in Intergroup Conflict among Humans. Science 328 (5984): 1408–1411. https://doi.org/10.1126/science.1189047.
Douglas, Thomas. 2008. Moral Enhancement. Journal of Applied Philosophy 25 (3): 228–245. https://doi.org/10.1111/j.1468-5930.2008.00412.x.
———. 2013. Moral Enhancement via Direct Emotion Modulation: A Reply to John Harris. Bioethics 27 (3): 160–168. https://doi.org/10.1111/j.1467-8519.2011.01919.x.
Harris, John. 1985. The Value of Life. London: Routledge.
———. 2007. Enhancing Evolution: The Ethical Case for Making Better People. Princeton, NJ: Princeton University Press.
———. 2016. Moral Blindness—The Gift of the God Machine. Neuroethics 9 (3): 269–273. https://doi.org/10.1007/s12152-016-9272-9.
Harris, Sam. 2010. The Moral Landscape: How Science Can Determine Human Values. New York: Free Press.
Hobbes, Thomas. 1651. Leviathan. New York, NY: Collins.
Honigsbaum, Mark. 2011. Oxytocin: Could the ‘Trust Hormone’ Rebond Our Troubled World? The Observer, August 20, sec. Science. https://www.theguardian.com/science/2011/aug/21/oxytocin-zak-neuroscience-trust-hormone.
Humanity+. 2009. Transhumanist Declaration. https://humanityplus.org/philosophy/transhumanist-declaration/.
Hunter, James Davison, and Paul Nedelisky. 2018. Science and the Good: The Tragic Quest for the Foundations of Morality. New Haven and London: Yale University Press.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books.
Lasky, Melvin J. 1976. Utopia and Revolution. Chicago, IL: The University of Chicago Press.
Machiavelli, Niccolò. 1517. Discourses on Livy. Translated by Henry Neville. London: Tommaso Davies.
Milton, John. 1650. The Tenure of Kings and Magistrates. London: Matthew Simmons.
More, Max. 2010. The Overhuman in the Transhuman. Journal of Evolution and Technology 21 (1): 1–4.
Nave, Gideon, Colin Camerer, and Michael McCullough. 2015. Does Oxytocin Increase Trust in Humans? A Critical Review of Research. Perspectives on Psychological Science 10 (6): 772–789. https://doi.org/10.1177/1745691615600138.
Nietzsche, Friedrich. 2013. The Anti-Christ. SoHo Books.
Nowack, Kenneth, and Paul J. Zak. 2017. The Neuroscience in Building High Performance Trust Cultures. Chief Learning Officer—CLO Media (blog), February 9. https://www.clomedia.com/2017/02/09/neuroscience-building-trust-cultures/.
Persson, Ingmar, and Julian Savulescu. 2008. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy 25 (3): 162–177. https://doi.org/10.1111/j.1468-5930.2008.00410.x.
———. 2012. Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.
———. 2013. Getting Moral Enhancement Right: The Desirability of Moral Bioenhancement. Bioethics 27 (3): 124–131. https://doi.org/10.1111/j.1467-8519.2011.01907.x.
———. 2016. Moral Bioenhancement, Freedom and Reason. Neuroethics 9 (3): 263–268. https://doi.org/10.1007/s12152-016-9268-5.
Pinker, Steven. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York, NY: Viking.
Savulescu, Julian, and Ingmar Persson. 2012. Moral Enhancement, Freedom and the God Machine. The Monist 95 (3): 399–421.
Schneewind, Jerome B. 1997. The Invention of Autonomy: A History of Modern Moral Philosophy. Cambridge: Cambridge University Press.
Shen, Helen. 2015. Neuroscience: The Hard Science of Oxytocin. Nature 522 (7557): 410. https://doi.org/10.1038/522410a.
Walum, Hasse, Irwin D. Waldman, and Larry J. Young. 2016. Statistical and Methodological Considerations for the Interpretation of Intranasal Oxytocin Studies. Biological Psychiatry 79 (3): 251–257. https://doi.org/10.1016/j.biopsych.2015.06.016.
Yong, Ed. 2015. The Weak Science Behind the Wrongly Named Moral Molecule. The Atlantic, November 13. https://www.theatlantic.com/science/archive/2015/11/the-weak-science-of-the-wrongly-named-moral-molecule/415581/.
Zak, Paul J. 2001. The Moral Molecule: The New Science of What Makes Us Good or Evil. London: Corgi Books.
———. 2012. The Moral Molecule: The Source of Love and Prosperity. New York: Dutton.
Zak, Paul J., Robert Kurzban, and William T. Matzner. 2005. Oxytocin Is Associated with Human Trustworthiness. Hormones and Behavior 48 (5): 522–527. https://doi.org/10.1016/j.yhbeh.2005.07.009.
Zak, Paul J., Angela A. Stanton, and Sheila Ahmadi. 2007. Oxytocin Increases Generosity in Humans. PLoS ONE 2 (11): e1128. https://doi.org/10.1371/journal.pone.0001128.
Zak, Paul J., Robert Kurzban, Sheila Ahmadi, Ronald S. Swerdloff, Jang Park, Levan Efremidze, Karen Redwine, Karla Morgan, and William Matzner. 2009. Testosterone Administration Decreases Generosity in the Ultimatum Game. PLoS ONE 4 (12). https://doi.org/10.1371/journal.pone.0008330.

3 Moral Bioenhancement and the Clinical Ideal

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_3

3.1 The Challenge of Defining Enhancement

Now that I have provided an overview of the historical, socio-political, and scientific context in which the conceptualization of moral bioenhancement is taking place, let us turn to its definition. A straightforward definition of moral bioenhancement is more challenging than it appears. First, there is the question of whether moral bioenhancement represents an intervention that will guarantee the improvement of moral behavior or an intervention that makes people more motivated to act morally. This distinction has been debated at length between Harris, on the one hand, and Persson and Savulescu, on the other (for an overview of the discussion see J. Harris 2016b; Persson and Savulescu 2016; J. Harris 2016a). Briefly, though, it should be noted that Persson and Savulescu hold a somewhat ambiguous position on moral bioenhancement. It is not clear whether the goal of the enhancement is to control outcomes (Savulescu and Persson 2012) or to improve moral motivation (Persson and Savulescu 2012). In their article entitled “Moral Enhancement, Freedom and the God Machine” (2012), Persson and Savulescu envision a futuristic society in which advances in the science of morality and computing have made it possible to forestall bad behavior. In their thought experiment, they imagine the design of the God Machine, which would be able “to monitor the thoughts, beliefs, desires and intentions of every human being.” It would be able to modify these beliefs, desires, and intentions “within nanoseconds, without the conscious recognition by any human subjects.” However, the God Machine would intervene only when some threshold affecting a “sentient being’s interests” would be breached (Savulescu and Persson 2012, 412–413). The God Machine serves as an instrument to assess the desires and intentions of individuals, the area of human mental capacities that, according to Persson and Savulescu (or the moral emotions, per Douglas 2008), needs enhancement. Considering the highly speculative nature of the issues Harris and Persson and Savulescu are debating, I will not elaborate further, because such a discussion assumes that the biotechnological means currently available provide evidence either that an intervention could make people behave more morally (whatever that means) or that people could be made more motivated to act morally (whatever that means). Given the skeptical posture I adopt in this book, I do not feel compelled to investigate further what I take to be a set of highly hypothetical and hyperbolic assertions. A more interesting question, because it is less dependent on speculation, is whether moral bioenhancement falls into the category of enhancement proper or whether, on Persson and Savulescu’s account, moral bioenhancement is a type of therapeutic intervention. Providing clarity on this question is paramount for my subsequent discussion of the potential use of neurotechnologies to address psychiatric disorders with moral pathologies. The challenge is how to construe moral bioenhancement if we define enhancements as “interventions designed to improve human form or functioning beyond what is necessary to sustain or restore good health” (Juengst 1998, 29).
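As a purely illustrative aside, the conditional logic of the God Machine thought experiment (monitor every intention; modify only those projected to breach a harm threshold) can be sketched in a few lines of code. Every name and number here is invented for illustration and corresponds to no real or proposed technology.

```python
# A toy rendering of the God Machine's decision rule: intentions are
# monitored continuously, and only those whose projected harm exceeds
# a threshold are "modified." The threshold and the harm scores are
# invented for illustration.

HARM_THRESHOLD = 0.8  # assumed cutoff on a 0-1 scale of projected harm

def god_machine(intentions, threshold=HARM_THRESHOLD):
    """Return intentions with those above the harm threshold replaced
    and all others left untouched."""
    result = []
    for description, projected_harm in intentions:
        if projected_harm > threshold:
            # the machine intervenes, neutralizing the intention
            result.append(("modified: " + description, 0.0))
        else:
            # below threshold: the agent's intention is left alone
            result.append((description, projected_harm))
    return result

agents = [("donate to charity", 0.0),
          ("petty lie", 0.3),
          ("plan a murder", 0.95)]

print(god_machine(agents))
# -> [('donate to charity', 0.0), ('petty lie', 0.3),
#     ('modified: plan a murder', 0.0)]
```

The sketch makes the philosophical point vivid: everything below the threshold, including morally dubious intentions, passes unmodified, so the ethical weight of the proposal rests entirely on who sets the threshold and the harm metric.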
Based on Juengst’s definition, moral bioenhancement does not fall into the category of enhancement per se. Persson and Savulescu do not see the human race as being in a healthy state in terms of moral acumen, since most of the human population, on their account, could benefit from moral bioenhancement. A definition of enhancement that refers to the “augmentation of biological capacities beyond what is species typical” in healthy subjects (Jotterand 2016) would simply undermine Persson and Savulescu’s claim that “drug treatment” is necessary
because such an intervention would fall within the therapeutic domain. For this reason, we need to take a deeper dive into how notions of enhancement are currently being conceptualized.

3.1.1 Various Conceptualizations of Enhancement

Contemporary debates on human enhancement often characterize the notion of enhancement as morally troubling because it undermines deeply held beliefs concerning humanity and notions of moral agency and responsibility, and because it challenges what some deem the appropriate use of biotechnologies and neurotechnologies to serve human ends (Jotterand and Bosco 2020). Most techniques and procedures that go beyond therapeutic aims raise the eyebrows of those concerned with the potentially harmful implications of enhancement technologies or their abusive implementation. Fewer people might oppose the enhancement of human physiological and mental capacities if these techniques addressed specific diseases and mental disorders. If the goal of an intervention is therapeutic, it falls under the purview of medicine. Beyond therapy, where the aims of enhancement are not well defined, the moral landscape becomes less clear. To say that human enhancement strives for the betterment of the human condition or for personal fulfillment does not offer a normative claim robust enough to justify why we ought to embrace any type of enhancement. In addition, enhancement is often gauged against therapy, both to provide the backdrop necessary to discriminate what belongs to the domain of medicine and health care and to provide conceptual clarity. However, the conceptual distinction between therapy and enhancement has limitations that restrain our ability to demarcate the types of enhancements that could indeed benefit some individuals. Moral bioenhancement falls into that category. In light of these considerations, in this work I consider the use of techniques to alter behavior in individuals suffering from psychiatric disorders with moral pathologies (e.g., psychopathy) who are otherwise healthy and in stable condition (as a preventative measure, for instance).
The use of moral bioenhancement techniques in healthy individuals with psychopathic traits illustrates the necessity of making more nuanced distinctions when evaluating technologies for moral bioenhancement, in order to harness their potential clinical benefits, if any and if ethically permissible. The main objective of my analysis is twofold: 1) to outline some of the problems associated with the attempt to distinguish the concept of enhancement from therapy. Both concepts presuppose an understanding of health and normality that is difficult to establish. I argue that these two concepts cannot serve as the basis for assessing the use of moral bioenhancement techniques in people with psychopathic traits, because the restoration of healthy behavior or the attainment of normal behavior falls along a behavioral spectrum that is difficult to determine (for instance, some individuals might not be criminals yet still have psychopathic traits; it is estimated that between 4% and as high as 12% of CEOs exhibit psychopathic traits (McCullough 2019)); and 2) to show the relevance of the distinction between two types of enhancement in order to demonstrate why the notion of human enhancement (including moral bioenhancement), if it aims at improving the quality of life of individuals, might become part of the therapeutic language of tomorrow, and why we therefore ought to be clear about what such language and enhancement imply in the clinical context. To this end, I introduce the concept of the clinical ideal to justify my claim about the use of neurotechnologies.

3.1.2 Therapy and Enhancement: Two Evolving Concepts

This section examines the various conceptualizations of enhancement found in the literature, especially as outlined by Ruth Chadwick and Nicholas Agar. These two scholars present helpful categorizations of enhancement that will provide the basis for the development of the concept of the clinical ideal. Chadwick suggests four approaches to enhancement that help us understand how the concept is used in current debates, whereas Agar offers a valuable distinction between the objective ideal and the anthropocentric ideal. Building on the work of Agar, I argue that both ideals have limited clinical relevance and favor the clinical ideal, which allows for the evaluation of the concept of enhancement in the context of clinical interventions. Finally, I look at the implications of the

3  Moral Bioenhancement and the Clinical Ideal 


clinical ideal regarding the hypothetical use of moral bioenhancement techniques in the clinical context. I contend that the moral evaluation of (moral bio)enhancement must take place within the context of the patient's clinical condition and in relation to the notion of quality of life. Here, I assume that these interventions are deemed applicable, safe, and efficacious and that the psychopath undergoing the procedure is always intrinsically a patient, a claim I will challenge in Chap. 6.

Contrary to the theoretical nature of early reflections by Bacon, Descartes, and de Condorcet, some of the contemporary conceptualizations of human enhancement can be validated through the use of specific technologies, either under development or already implemented. For this reason, we need to take a closer look at how the concept of enhancement is used in the literature in light of current technological and neuroscientific developments. To this end, I will first turn to the work of Ruth Chadwick. In her examination of the various interpretations of enhancement, Chadwick (2008) suggests four categories. The beyond therapy view (President's Council on Bioethics 2003) holds that any procedures beyond therapeutic interventions are considered a type of enhancement. This interpretation, however, has deficiencies, since some interventions might be viewed as enhancement but might also yield therapeutic benefits. In other words, the challenge is not only to define enhancement in relation to therapy. Therapy itself is likewise a difficult concept to describe, and to suggest that the intentions behind an intervention determine whether that intervention is therapeutic does not provide the criteria necessary to explain the difference between therapy and enhancement. The additionality view holds that enhancement is understood quantitatively.
It signifies that “to enhance x is to add to, or exaggerate, or increase x in some respect.” But in this case, without being “characteristic-specific,” it is not possible to establish what counts as therapy or enhancement. To enhance cannot be understood in generic terms but requires focusing on particular capacities for moral evaluation (Chadwick 2008, 28–29). The improvement view (enhancement understood qualitatively) holds that to enhance is to improve or to make better. This view is not very useful because it does not provide the criteria necessary to determine what counts as an improvement. Any knowledge of the goals and intentions


behind a request for improvement (i.e., enhancement) is contingent upon qualitative judgments, which makes the request difficult to assess from a moral standpoint. The fourth category is the umbrella view. Considering the challenges of the previous three approaches, this view offers an avenue for assessing enhancement case by case, since it considers various potential changes: enhancement may be therapeutic (contra the beyond therapy view), enhancement may not add anything (contra the additionality view), and enhancement may not be an improvement (contra the improvement view) (see Table 3.1 for a side-by-side comparison of different definitions of enhancement). While Chadwick's categorization of enhancement is helpful in many respects (especially the umbrella view), Nicholas Agar presents a more fruitful classification for defining human enhancement with regard to the focus of this chapter: the objective ideal and the anthropocentric ideal. The objective ideal states that “an enhancement has prudential value commensurate with the degree to which it objectively enhances a human capacity.” In other words, this approach holds that, all things being equal, technologies that produce greater enhancements are intrinsically more valuable than technologies that generate lesser enhancement outcomes. This position is favored by transhumanists, since they seek to transcend biological limitations, hence the maximization of human transformation. On the other hand, the anthropocentric ideal states that “some enhancements of greater objective magnitude are more prudentially valuable than enhancements of lesser magnitude … or [it] assigns value to enhancement relative to human standards” (Agar 2014, 17, 27). This view asserts that the normal range of human capacities can be determined and that an enhancement can be evaluated by balancing the instrumental and intrinsic value of the enhanced capacity.
The enhancement of a human capacity has more intrinsic value if it “instantiat[es] more valuable internal goods” (i.e., an individual can directly benefit from an enhancement only if that person enhances her- or himself), whereas it has more instrumental value if it produces qualitatively and quantitatively superior external goods (goods that depend on social circumstances) (Agar 2014, 29). The objective ideal is a vision of enhancement supported by transhumanism but is somewhat at the margin of reflections pertaining to the clinical setting, since it focuses on the radical enhancement of the human species. Transhumanists do not seek to


Table 3.1  Comparative table for the definition of enhancement

Views (Chadwick)

Beyond therapy view
Definition: Any procedures beyond therapeutic interventions are considered a type of enhancement.
Key features and challenges: Some interventions might be viewed partly as enhancement and partly as therapy.

Additionality view
Definition: Enhancement is understood quantitatively: “To enhance x is to add to, or exaggerate, or increase x in some respect.”
Key features and challenges: Requires focus on specific capacities to establish what defines therapy or enhancement. Requires focus on particular capacities for moral evaluation.

Improvement view
Definition: Enhancement is understood qualitatively: To enhance is to improve or to make better.
Key features and challenges: Does not provide the criteria necessary to determine what is considered an improvement. Contingent upon qualitative judgments. Difficult to assess from a moral standpoint.

Umbrella view
Definition: Enhancement is assessed case by case to determine its nature.
Key features and challenges: Too generic and does not provide a proper framework for conceptual clarity. Enhancement may be therapeutic (contra the beyond therapy view). Enhancement may not add anything (contra the additionality view). Enhancement may not be an improvement (contra the improvement view).

(continued)


Table 3.1 (continued)

Ideals

Objective ideal (Agar)
Definition: All things being equal, technologies that produce greater enhancements are intrinsically more valuable than other technologies that generate fewer enhancement outcomes.
Key features and challenges: Prudential value of enhancement is based on the degree to which it objectively enhances human capacity. Requires setting objective standards to evaluate levels of enhanced capacities.

Anthropocentric ideal (Agar)
Definition: Enhancement is evaluated based on a balance between the instrumental and intrinsic value of the enhanced capacity.
Key features and challenges: Assigns value to enhancement relative to human standards and whether or not an enhancement is beneficial to the recipient based on standardized norms. Requires a normative framework to determine how enhancement advances the well-being of the human species within the normal range of human capacities.

Clinical ideal (Jotterand)
Definition: Enhancement is evaluated based on whether it enhances the physical, mental, and social capacities and the overall quality of life of an individual with mental impairment, where the baseline is disabled, and in the context of life (contextual standards).
Key features and challenges: Requires setting contextual standards. Necessitates the development of criteria to evaluate degrees of enhancement in relation to quality of life.

address disorders or illnesses as such but rather to improve the condition of the human species through technological means in order to move into a posthuman world. Efforts to develop enhancement technologies, on their account, should be geared toward transcending the limitations of our


brains and bodies, as the survival of our species is closely linked to a technological future. The anthropocentric ideal represents a more promising conceptualization regarding the issue addressed in this chapter, since enhancement is understood within the context of personal benefits (internal goods) and social benefits (external goods). Accordingly, the value of enhancement is not contingent upon an evaluative framework that fosters the maximization of the enhancement of human capacities, but rather on the enhancement of the capacities of a particular individual based on his or her unique personal goals and values. However, according to Agar, the value of an enhancement is relative to its degree of prudential value in relation to the objective degree of enhancement within the normal range of human capacities. Beyond a certain point, he contends, enhancement becomes highly hypothetical, if not unrealistic, for it lacks objectivity with regard to its feasibility; its degree of prudential value therefore decreases, owing to the uncertainty of how individuals could be affected negatively by specific enhancements. The anthropocentric ideal offers a potential way to evaluate the use of moral bioenhancement technology in people suffering from psychiatric disorders with moral pathologies. Contrary to the objective ideal, the anthropocentric ideal specifically focuses on the established range of normal human capacities to evaluate the prudential value of particular enhancing technologies. As Agar puts it: “enhancement beyond human norms encompasses interventions whose purpose is to boost levels of functioning beyond biological norms” (Agar 2014, 19). Thus, he assumes that the normal range of human capacities can be determined based on biological attributes of normal functioning, a claim I will challenge in the next section.
While Agar’s distinction is valuable in many ways, the objective ideal and the anthropocentric ideal have, in my estimation, limited clinical relevance. I therefore propose a third way of evaluating enhancement, which I call the clinical ideal, based on the conceptualization of diseases as clinical problems (H. T. Engelhardt 1984).


3.1.3 Disease as a Clinical Problem

In order to explain what I mean by the clinical ideal, I will turn first to the concept of disease. Elsewhere, I outline why describing diseases as clinical problems (H. T. Engelhardt 1984) represents a more satisfactory account than descriptive explanations such as the ones adopted by Christopher Boorse (Boorse 1975, 1977) and Leon Kass (Kass 1985) (Jotterand and Wangmo 2014). Usually, the term disease refers to a condition that falls outside a set of functional standards within human biology. Boorse, for instance, states that the concept of disease can be understood in relation to the notion of species-typical levels of species-typical functions (Boorse 1975, 1977). Living organisms have an organization of biological functions that Boorse defines according to the following four criteria: 1) the reference class is a natural class of organisms of uniform functional design; specifically, an age group of a sex of a species; 2) normal function of a part or process within members of the reference class is a statistically typical contribution by it to their individual survival and reproduction; 3) a disease is a type of internal state which is either an impairment of normal functional ability, that is, a reduction of one or more functional abilities below typical efficiency, or a limitation on functional ability caused by environmental agents; 4) health is the absence of disease (Boorse 1997, 7–8). These criteria characterize a framework based on biostatistical analysis that provides standards of normal functioning (or healthy states), any deviation from which constitutes a state of disease. Kass likewise suggests that health, or the absence thereof, is based on biological standards. Each species displays specific bodily functionalities that can be recognized or determined by the specificities of its organism (Kass 1985).
Engelhardt, however, rejects the idea that diseased states can be established solely in terms of species-typical levels of species-typical functions. The indeterminateness of the forces of nature does not allow us to establish a taxonomy of disease in relation to the functional organization of the body (H. T. Engelhardt 1996). In addition, a species, the human species for instance, displays a range of characteristic polymorphisms that demand a specific understanding of what we mean by human nature, its purpose, and its values, which in turn justify the normality or abnormality of


biological attributes. These values operate in a proscriptive manner, since they inform which behaviors and habits (e.g., smoking, an unhealthy diet) are considered detrimental to a person’s health. In short, conceptualizations of health and disease are the result of a complex interaction between values and norms, in addition to biological standards, that qualify specific understandings of well-being and human flourishing (H. T. Engelhardt 1996). In contrast to a descriptive account of disease, Engelhardt suggests considering diseases as clinical problems. The focus of clinical medicine is to address questions of pain, expectations concerning human form and grace, physical aptitudes, and mental capacities insofar as they affect a person’s ability to achieve specific human goals and ends (H. T. Engelhardt 1984). This means that diseases are not considered abnormalities outside the scope of species-typical levels of species-typical functions. Diseases, as clinical problems, are contextualized based on the goals, ends, and notions of human flourishing held by individuals suffering from incapacitating ailments. In light of the above analysis, I would argue that enhancement cannot be understood outside a normative framework; without contextualizing enhancement in relation to notions of human flourishing, goals, and ends, debates will remain entrenched in ideological positions rather than careful critical enquiry. In addition, as I stated earlier, my goal is to examine the concept of enhancement within the clinical context. To this end, I suggest that describing the notion of enhancement as a clinical ideal allows us to disentangle enhancement from extreme applications such as transhumanism, and from a narrow explanation based on a normal range of human capacities.
In so doing, I consider the possibility of enhancement as a means to address clinical problems rather than as a way to promote the agenda of transcending the boundaries of human biology to achieve a posthuman state. But in order to justify such a claim, an important distinction between two types of enhancement must be made, as I show in what follows.


3.1.4 Enhancement-Nondisabled Versus Enhancement-Disabled

Elsewhere I make an important conceptual distinction between enhancement-nondisabled and enhancement-disabled (Jotterand et al. 2015; Jotterand 2017). Enhancement-nondisabled concerns the enhancement of healthy people with no (cognitive) impairments. These individuals are nondisabled at baseline (normal function): interventions in unhealthy individuals within this group count as therapy if the goal is to restore baseline function, whereas in healthy individuals any intervention resulting in the enhancement of cognitive capacities is equated with enhancement-nondisabled. This is how the distinction between therapy and enhancement is usually drawn in the bioethics literature. Enhancement-disabled, however, starts with a disabled baseline, or dis-ease (limited function or dysfunction). These individuals are healthy but have an impairment (physical, psychological, behavioral) that limits their ability to function normally in major life activities (e.g., speaking, learning, communicating, interacting with others, caring, walking, sitting, and standing). This patient population is affected by genetic disorders, illnesses, or injuries that result in neurological conditions (e.g., cerebral palsy with accompanying developmental delay) or psychiatric disorders. Therapeutic interventions are warranted if these people have other underlying conditions that make them unhealthy, but individuals who do not suffer from any ailment and have behavioral impairments might benefit from the use of techniques to address moral pathologies (enhancement-disabled). The distinction between enhancement-nondisabled and enhancement-disabled is critical to the primary argument presented in this book.
Conceptually, it demonstrates that enhancement ought not to be limited to issues related to a posthuman agenda but could, in principle, become part of the therapeutic language of the future under specific and strict conditions. When enhancement is associated with the idea of transcending the limitations of our brains and bodies, it implies particular values and a vision of the human future that does not capture the realities and frailties of this life in contrast to a potential enhanced existence


(posthuman condition). Such discourse relegates discussion about the clinical implications of (moral bio)enhancement to concerns about the current human condition, which is exactly what the strongest proponents of transhumanism want to transcend. In their view, human nature is improvable, which can legitimately justify “reform[ing] ourselves and our nature in accordance with human values and personal aspirations” (Bostrom 2005). This perspective on the enhancement project does not carefully consider any clinical aspects qua clinical practice. It polarizes the debate over enhancement in ways that do not consider how some enhancements could actually be beneficial for the improvement of the quality of life of individuals whose levels of functioning cannot be regained or achieved in comparison to the nondisabled.

3.2 The Clinical Ideal and Enhancement

For this reason, I propose a potential, albeit highly speculative, framework, the clinical ideal, to justify a prudent use of various enhancement techniques in the context of clinical interventions. The clinical ideal evaluates an enhancement based on whether it enhances the physical, mental, and social capacities and the overall quality of life of an individual with a mental impairment or psychiatric disorder, where the baseline is disabled, and in the context of one’s life. To reiterate Agar’s earlier distinction, the objective ideal argues that the prudential value of enhancement is based on the degree to which it objectively enhances human capacity and therefore adopts a position that demands a set of objective standards to evaluate levels of enhanced capacities. On the other hand, the anthropocentric ideal assigns value to enhancement relative to human standards, that is, whether or not it benefits the recipient of an enhancement based on standardized norms. This approach assumes a normative framework that determines how enhancement advances the well-being of the human species. In contrast to these two ideals, the clinical ideal assigns value to enhancement if it enhances the physical, mental, and social capacities and the overall quality of life of an individual with a mental impairment or behavioral disorder (baseline disabled) in the context of life (contextual standards). The clinical ideal


facilitates the introduction of the notion of enhancement into clinical language without the need to contrast it with or oppose it to the notion of therapy. It limits the scope of considerations to the life of a particular individual whose quality of life and well-being could be improved. Ordinarily, therapy refers to the restoration, partial or complete, of the functions of an organism affected by a disease, a disability, or an impairment using appropriate treatment. Therapy is a useful concept that helps clinicians determine the scope of medical practice based on established baseline criteria. In addition, the language of therapy implies an understanding of what a healthy organism is and of the scope of normal functioning. But providing clear definitions and conceptual clarity for notions such as health, normality, impairment, and abnormality has been a notoriously difficult task. Notions of health and normality have evolved, and continue to do so, as progress in the biomedical sciences constantly reshapes the boundaries of what health is or what normal functioning for a biological organism entails. Health determinants are a complex combination of environmental, social, educational, and biological factors, and therefore the expectation of achieving a healthy status is in constant flux. That said, the practice of medicine requires the establishment of standards that allow practitioners to help their patients achieve the optimization of functioning based on standards of functionality within the boundaries of human biology. On the other hand, enhancement #1 refers to the use of the power of neurotechnologies (and biotechnologies) to transcend the normal scope of species-typical human capacities in order to improve biological abilities. Therapy is limited to the scope of medicine, whereas enhancement #1 applies to a different domain, since its application focuses on healthy individuals who want to enhance some of their capacities.
Yet the line of demarcation between what constitutes therapy and enhancement respectively is increasingly fuzzy, because some therapeutic interventions, devices, or drugs have dual effects or, as I pointed out, enhancement #2 could be employed in the context of individuals whose levels of capacities are not consistent with those of the nondisabled. Thus, one way to disentangle the therapy-enhancement discourse with regard to moral bioenhancement is to introduce the notion of enhancement into clinical language without contrasting it with the language of therapy, but such an approach would require the


assumption that some (most) individuals have “moral capacities” below what is species-typical and that an acceptable threshold of moral behavior has been predetermined. This claim about some (most) people would be difficult to support unless we specifically refer to individuals with psychiatric disorders exhibiting moral pathologies (e.g., psychopathy). Outside the particular context of medicine, moral bioenhancement refers to the hypothetical use of biotechnology and neurotechnologies for the improvement of moral capacities or character traits such as justice, shame, forgiveness, empathy, and solidarity (Persson and Savulescu 2012; J. Harris 2011; Douglas 2008). It aims at the enhancement of the affective, motivational, and cognitive faculties that bear on human moral behavior (Harris’s critiques of Persson and Savulescu focus on the last of these (J. Harris 2011, 2016c)).

3.3 Clinical Ideal and Psychiatric Disorders with Moral Pathologies

Now that a general framework has been outlined as to the nature of the clinical ideal, we need to dive further into its implications for individuals suffering from mental disorders with moral pathologies. The overall purpose of the use of neurotechnologies in individuals suffering from psychiatric conditions with moral pathologies is to increase their quality of life. Various factors should be considered in the evaluation of quality of life in patients: 1) the nature of the illness and the side effects of treatment, including neurointerventions; 2) the patient’s ability to perform basic everyday life activities; 3) the patient’s levels of independence, privacy, and dignity; and 4) the patient’s experience of happiness, pleasure, pain, and suffering (Lo 2000, 30). Typically, competent patients are able to communicate how they experience their own quality of life and to make decisions with regard to treatment options, duration, and possible cessation. However, there are challenges in evaluating the quality of life of others, either when these individuals are not deemed competent, when they cannot voice their opinion, or when clinicians and family members make claims about the patient’s quality of life. Studies have shown that there are


discrepancies between how patients perceive their quality of life and what others say about it, especially since patients learn how to cope with illness, find support elsewhere, and in many instances remain positive about life in general and still find pleasure despite their situation (Lo 2000, 30). Consequently, as Lo rightly states, “quality of life judgments by others might be inaccurate and biased unless they reflect the patient’s own assessment of quality of life” (Lo 2000, 30). With these considerations in mind, I turn now to outline the criteria bounding the quality of life of a person potentially undergoing a brain intervention aimed at behavioral modification. One specific aim of a brain intervention would be to increase an individual’s social capacity, that is, to increase a person’s ability to socialize and enter into relationships with others, including a broadening of personal choices, self-expression, and an optimization of how to engage with the world (Jotterand et al. 2015). Does the (hypothetical) improvement in quality of life of individuals suffering from psychiatric disorders with moral pathologies (e.g., psychopathy) justify the use of neurotechnologies aimed at behavioral change? To answer this question, it is essential to stress that context is crucial. The clinical ideal asserts that an enhancement that improves the quality of life of an individual with a behavioral problem would be justified if it takes into account the particular situation of the person involved. From a pragmatic standpoint, such interventions could be justifiable. However, as I will argue in Chap. 6, individuals with psychopathic traits are not always patients. For this reason, other considerations must be taken into account in the assessment of a hypothetical “social” use of neurotechnologies.
This means that no general claims can be asserted as to the widespread usage of neurotechnologies or as to a moral obligation to use them, even if these technologies have been proven to improve overall quality of life. The prudential value of enhancement is determined by how clinical procedures—the degree of enhancement—meet realistic expectations set by the person suffering from a psychiatric condition with moral pathologies, in proportion to the improvement of quality of life, the enhanced mental functionality, and the increased social capacity. Such a framework does not require setting any threshold, since its goal is to promote the well-being of the individual subjected to these enhancements.


There is always a danger in using graphics to represent questions of enhancement and quality of life, given the difficulty of quantifying these concepts. However, the following representation might help visualize the concept of the clinical ideal. The point here is to make a normative statement about the nature of enhancement with regard to quality of life. Consider the Y-axis representing quality of life on a scale of 0 to 10 and the X-axis representing degrees of enhancement on a scale of 0 to 10. In addition, consider this graphic as a visual aid used by the person deciding whether to allow brain interventions under his or her responsibility. While I recognize that quantifications of enhancement and quality of life are difficult to establish, Figure 3.1 offers a graphic representation of the relationship between degrees of enhancement and degrees of quality of life. In an ideal scenario, the clinical ideal suggests that the increased degree of quality of life must be equal or superior to the degree of increased enhancement, as indicated by the grey arrow. This means that quality of life should take priority over enhancement, to avoid an instrumentalization of patients through technological interventions.

Fig. 3.1  Graphic representation of the relationship between degrees of enhancement and degrees of quality of life. In an ideal scenario, the clinical ideal suggests that the increased degree of quality of life must be equal or superior to the degree of increased enhancement, as indicated by the grey arrow. [Figure: X-axis, degrees of enhancement (0–10); Y-axis, quality of life (0–10).]

Unless enhancement technologies are safe, the focus should be on the improvement of quality of life and not on increased capacities. In a non-ideal scenario (degree of enhancement higher than degree of quality of life; see Fig. 3.1), where the improvement of the quality of life of an individual requires extensive enhancements, each case must be evaluated on its own facts, using ethical guidelines that prevent the use of enhancement technologies that objectify the patient and that promote his or her welfare and safety. The clinical ideal does not necessitate setting a normal range of human capacities, because the patient population concerned has a disabled baseline that is outside the normal range of human abilities. Therefore, any moral assessment as to the acceptability of an enhancement must be established on the subjective degree of improvement of quality of life. Furthermore, to protect and maximize the well-being of the recipient of an enhancement procedure, the degree of improved quality of life must be equal or superior to the degree of enhancement. In other words, if the degree of enhancement in relation to the disabled baseline is 4, ideally the degree of quality of life should be deemed to be 4 or higher. The individualist approach of the clinical ideal is represented in Fig. 3.2. It shows each individual starting at a different stage with regard to how quality of life is experienced or expressed, and that the degree of enhancement does not necessarily improve well-being. In the diagram we have three individuals with a psychiatric disorder displaying moral pathologies whose quality of life at the disabled baseline has been determined to be 2 for the first person, 3 for the second, and 4 for the third.
The clinical ideal does not provide a normative framework within specific parameters other than the fact that enhancements that do not increase quality of life above baseline line should be dismissed. In addition, quality of life is closely related to the condition of a particular person, how it is experienced and interpreted. Figure 3.2 reveals that there are different scenarios to consider: in certain cases, cognitive enhancement might indeed not improve quality of life (person × ), in other cases the magnitude of enhancement only slightly changes the quality of life (person  ) whereas in other

3  Moral Bioenhancement and the Clinical Ideal 


[Fig. 3.2 chart: title "Individualist Approach of the Clinical Ideal"; x-axis "Degrees of Enhancement" (0–10); y-axis "Quality of Life" (0–10)]

Fig. 3.2 Graphic representation of various potential enhancement scenarios. The first curve represents the clinical ideal, in which the increased degree of quality of life must be equal or superior to the degree of increased enhancement; the second represents the case of a disabled person in which an increased degree of enhancement minimally influences quality of life; the third, the case of a disabled person in which an increased degree of enhancement decreases quality of life beyond a certain point; and person ×, the case of a disabled person in which an increased degree of enhancement negatively impacts quality of life

instances there might be a dramatic improvement of quality of life up to a certain level of enhancement, followed by a drastic drop. This is not a comprehensive list of scenarios, nor is it based on empirical data, but it illustrates that each individual should be treated as a unique case, with specific needs and a particular experience and expression of well-being. To summarize, the clinical ideal avoids the challenge of determining the normal range of human capacities. It simply states that, with regard to a person with psychopathic traits, the moral evaluation of enhancement #2 can take place within the context of the person's condition and in relation to the notion of quality of life. There is a subjective and evaluative dimension that escapes any attempt to standardize enhancement #2. A brain intervention is justified if it improves the overall well-being of the person.
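The constraint just summarized can be expressed as a small decision rule. The sketch below is purely illustrative: the function name, the 0–10 scale, and the threshold logic are assumptions drawn from Fig. 3.2, not a formalization the author provides.

```python
def satisfies_clinical_ideal(enhancement_degree: float, qol_improvement: float) -> bool:
    """Return True when a proposed intervention falls within the clinical
    ideal of Fig. 3.2: the gain in quality of life (relative to the disabled
    baseline) must be equal or superior to the degree of enhancement."""
    if qol_improvement <= 0:
        # Enhancements that do not raise quality of life above the
        # disabled baseline are dismissed outright.
        return False
    return qol_improvement >= enhancement_degree

# Three hypothetical patients, echoing the scenarios of Fig. 3.2:
print(satisfies_clinical_ideal(4, 5))   # large QoL gain -> True
print(satisfies_clinical_ideal(6, 1))   # marginal QoL gain -> False
print(satisfies_clinical_ideal(3, -2))  # QoL decreases -> False
```

The sketch captures only the outer constraint; as the chapter stresses, quality of life is subjective and case-specific, so such a rule cannot substitute for case-by-case evaluation.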


F. Jotterand

The above analysis offers a potential framework to evaluate justifiable brain interventions, but it does not offer a rationale with regard to the feasibility and desirability of moral bioenhancement (does it enhance morality in the fullest sense?). In addition, brain interventions in the clinical context might be justifiable, but as I will argue later in this book, there are strong reasons to question such interventions in the case of individuals with psychopathic traits whose rational capacities remain adequate for decision-making.

References

Agar, Nicholas. 2014. Truly Human Enhancement. The MIT Press. https://mitpress.mit.edu/books/truly-human-enhancement.
Boorse, Christopher. 1975. On the Distinction between Disease and Illness. Philosophy & Public Affairs 5 (1): 49–68.
———. 1977. Health as a Theoretical Concept. Philosophy of Science 44 (4): 542–573.
———. 1997. A Rebuttal on Health. In What Is Disease? Biomedical Ethics Reviews, ed. James M. Humber and Robert F. Almeder, 1–134. Totowa, NJ: Humana Press. https://doi.org/10.1007/978-1-59259-451-1_1.
Bostrom, Nick. 2005. In Defense of Posthuman Dignity. Bioethics 19 (3): 202–214. https://doi.org/10.1111/j.1467-8519.2005.00437.x.
Chadwick, Ruth. 2008. Therapy, Enhancement and Improvement. In Medical Enhancement and Posthumanity, 25–37. Dordrecht: The International Library of Ethics, Law and Technology; Springer. https://doi.org/10.1007/978-1-4020-8852-0_3.
Douglas, Thomas. 2008. Moral Enhancement. Journal of Applied Philosophy 25 (3): 228–245. https://doi.org/10.1111/j.1468-5930.2008.00412.x.
Engelhardt, H. Tristram. 1984. Clinical Problems and the Concept of Disease. In Health, Disease, and Causal Explanations in Medicine, Philosophy and Medicine, ed. Lennart Nordenfelt, B. Ingemar, and B. Lindahl, 27–41. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-009-6283-5_5.
———. 1996. The Foundations of Bioethics. 2nd ed. New York: Oxford University Press.
Harris, John. 2011. Moral Enhancement and Freedom. Bioethics 25 (2): 102–111. https://doi.org/10.1111/j.1467-8519.2010.01854.x.
———. 2016a. How to Be Good: The Possibility of Moral Enhancement. Oxford; New York: Oxford University Press.
———. 2016b. Moral Blindness—The Gift of the God Machine. Neuroethics 9 (3): 269–273. https://doi.org/10.1007/s12152-016-9272-9.
Jotterand, Fabrice. 2016. Moral Enhancement, Neuroessentialism, and Moral Content. In Cognitive Enhancement: Ethical and Policy Implications in International Perspectives, ed. Fabrice Jotterand and Veljko Dubljević. Oxford; New York: Oxford University Press.
———. 2017. Cognitive Enhancement of Today May Be the Normal of Tomorrow. In Neuroethics: Anticipating the Future, ed. Judy Illes. Oxford University Press.
Jotterand, Fabrice, and Clara Bosco. 2020. Keeping the 'Human in the Loop' in the Age of Artificial Intelligence. Science and Engineering Ethics, July. https://doi.org/10.1007/s11948-020-00241-1.
Jotterand, Fabrice, and Tenzin Wangmo. 2014. The Principle of Equivalence Reconsidered: Assessing the Relevance of the Principle of Equivalence in Prison Medicine. The American Journal of Bioethics 14 (7): 4–12. https://doi.org/10.1080/15265161.2014.919365.
Jotterand, Fabrice, Jennifer L. McCurdy, and Bernice Elger. 2015. Cognitive Enhancers and Mental Impairment: Emerging Ethical Issues. In Rosenberg's Molecular and Genetic Basis of Neurological and Psychiatric Disease, 5th ed., ed. Roger N. Rosenberg and Juan M. Pascual, 119–126. Boston: Academic Press. https://doi.org/10.1016/B978-0-12-410529-4.00011-5.
Kass, Leon R. 1985. Toward a More Natural Science. Free Press.
Lo, Bernard. 2000. Resolving Ethical Dilemmas: A Guide for Clinicians. 2nd ed. Philadelphia: Lippincott Williams & Wilkins.
McCullough, Jack. 2019. The Psychopathic CEO. Forbes. https://www.forbes.com/sites/jackmccullough/2019/12/09/the-psychopathic-ceo/.
Persson, Ingmar, and Julian Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. Oxford University Press.
———. 2016. Moral Bioenhancement, Freedom and Reason. Neuroethics 9 (3): 263–268. https://doi.org/10.1007/s12152-016-9268-5.
Savulescu, Julian, and Ingmar Persson. 2012. Moral Enhancement, Freedom and the God Machine. The Monist 95 (3): 399–421.

4 Neurobiology, Morality, and Agency

4.1 The Neurobiology of Morality

The focus of this chapter is to evaluate to what extent morality can be manipulated and enhanced. In what follows, I outline and explain some of the latest findings in the "neuroscience of ethics" (also known as the neuroscience of moral judgments). My goal is to describe how certain brain areas are associated with basic and moral emotions (including the amygdala, the thalamus, the upper midbrain, the medial orbitofrontal cortex, the medial frontal gyrus, and the right posterior superior temporal sulcus) and therefore could be subject to interventions for the manipulation of behavior and the alteration of the moral self. A reductionist interpretation of these findings can lead to a problematic neuroessentialist approach, which holds that mental states, behavior, sense of self, and personal identity depend essentially on the structure, chemistry, and neural activity of the brain. Neuroessentialism reduces human behavior to neurobiology, thereby misconceptualizing human moral psychology. For this reason, these findings must be interpreted critically, but without denying their significance for our understanding of how morality works in the brain.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_4



The neuroscientific evidence that specific brain areas are associated with behavior, in conjunction with the availability of neurophysiological interventions, suggests that human behavior, and by extension morality, is determined by natural phenomena occurring in the brain. Any variations in behavior from what is considered normal are the result of individual variations at the level of neuronal activity. To experience the world through our senses, or to feel particular emotions when making hard (moral) decisions or watching a sunset, "[we] are relying entirely on the brain's biological machinery. [Our] brain makes [us] who [we] are" (Kandel 2018, 8). But the brain poses a real challenge to the scientific community's understanding of human behavior. As Eric R. Kandel further notes, "the greatest challenge in all of science is to understand how the mysteries of human nature … arise from the physical matter of the brain. How do coded signals, sent out by billions of nerve cells in our brain, give rise to consciousness, love, language, and art?" (Kandel 2018, 7). He extends his questioning to our "sense of identity," and, in the context of this book, I would argue we can include the nature of our identity as moral agents. In short, how are values or ethical norms encoded in the brain so as to be understood, internalized, and translated into specific behaviors? It is likely that future improvements in the anatomical resolution of neuroimaging techniques will offer new ways to correlate cognition and behavior with specific brain regions (Hart, Jr. 2015). However, a robust body of research on neuroanatomy already provides a solid understanding of the neurobiological basis of cognition and behavior. It has been established that the transmission of information for cognitive operations occurs through a web of neuroanatomical regions communicating with each other. Neurons send signals to adjacent neurons involved in cognitive operations.
This process is termed the anatomic-processing schema and refers to "the way that neuronal firing patterns (spatially and/or temporally) in a brain region are linked to other anatomic regions to transfer information between regions" (Hart, Jr. 2015, 10). Information such as the concept of "car" or the statement "the sky is blue" is transferred via the encoding of units of knowledge through neurons that transmit signals (across synapses) to other neurons, not necessarily neighboring the neuron that initiated the signal. Engaging in cognitive processing or in the representation of cognitive constructs is the result of
the interaction between "multiple cognitive units supported by multiple brain regions in a variety of different combinations … linked together in a network, processing information in order to perform a specific cognitive task" (Hart Jr. 2015, 15). These cognitive units are best described as nodes and are organized in multiple networks to produce cognition and behavior. The disruption of these networks, or of nodes within a network (through lesions, for instance), results in abnormal behavior, which can be the product of three main factors: (1) which node is affected; (2) which neurons within a node are destroyed; and (3) what cognitive function a node might be involved in or how it might participate in other nodes (Hart Jr. 2015). In other words, pathological behavioral dysfunction can be the result of a damaged cognitive processing stream. The complexity of human cognition and behavior, and of its mechanisms in the brain, should not be underestimated. Breaking down which brain areas correspond to which cognitive and behavioral processing relies heavily, as already stated, on technological improvements in neuroimaging. For now, though, suffice it to say that the brain-behavior relationship can be described as an intricate set of cognitive processes and neural encodings, and therefore we should resist reducing it to a single process in the brain. As Hart states, the entire cognitive system in the brain is not conducive to a simplistic one-processing-model account. It is acknowledged that the brain-behavior relationships as a whole exist as a hybrid of multiple models of cognitive processing and neural encodings in brain regions associated with cognition—from a discrete circumscribed brain region with a specific function to a spatially and temporally distributed network of neurons. (Hart Jr. 2015, 21)

The brain-behavior relationship can then be further examined by taking a deeper dive into the neurobiological basis of morality. To this end, in what follows, I offer a brief overview of the various brain structures associated with behavior (Pascual et al. 2013). These comprise: (1) the frontal lobe, which includes the ventromedial prefrontal cortex (VMPFC)—engaged in moral judgment and in the regulation of the role of emotions in processing moral dilemmas (Jorge Moll, de
Oliveira-Souza, et al. 2002b; Prehn et al. 2008; Harada et al. 2009; Young and Koenigs 2007), and in adherence to social norms and their violation (Jorge Moll et al. 2005; Prehn et al. 2008); the orbitofrontal cortex (OFC)—implicated in assessing reward and punishment (O'Doherty et al. 2001; Shenhav and Greene 2010) and in processing emotionally charged assertions with regard to moral value (Jorge Moll, Oliveira-Souza, et al. 2002b); the dorsolateral prefrontal cortex (DLPFC)—associated with cognition and problem solving in moral judgments (Greene et al. 2004); and the anterior cingulate cortex (ACC)—involved in the processing of moral dilemmas requiring the violation of personal morality on utilitarian grounds (Greene et al. 2004; Young and Koenigs 2007); (2) the parietal lobe, which includes the inferior parietal region—closely engaged in cognition and memory in moral judgment (Greene et al. 2004); the superior temporal sulcus (STS)—thought to play a role in the perception of social cues (Allison et al. 2000); and the temporo-parietal junction (TPJ)—which plays an important role in moral intuitions (Young and Saxe 2008; Young and Dungan 2012); (3) the temporal lobe, which includes the superior temporal sulcus (STS)—involved in moral judgments (Allison et al. 2000; Jorge Moll, de Oliveira-Souza, et al. 2002b; C. L. Harenski et al. 2008); the anterior/middle temporal gyrus—also implicated in moral judgments (Moll et al. 2001; C. L. Harenski et al. 2008); and the angular gyrus—engaged in the process of evaluating personal moral dilemmas (Schaich Borg et al. 2006; Funk and Gazzaniga 2009); (4) the limbic lobe, which comprises the posterior cingulate cortex (PCC)—involved in memory, social awareness, forgiveness, and empathy (Sestieri et al. 2011; Greene et al. 2004; Völlm et al. 2006; Farrow et al. 2001); and the insular cortex—associated with various moral tasks such as emotional processing, empathy, disgust, and the witnessing of inequity (Wicker et al.
2003; Jorge Moll, Oliveira-Souza, et al. 2002b; Decety et al. 2012; Hsu et al. 2008); (5) the subcortical structures such as the hippocampus involved in fear conditioning, social emotions, and interpretation of emotional facial expressions (Tsetsenis et al. 2007; Immordino-Yang and Singh 2013; Fusar-Poli et al. 2009); the amygdala, key in moral judgments and moral learning (Mendez 2006; Greene et al. 2004); the thalamus and the septum have also been recognized in the perception and evaluation of difficult situations resulting in empathy (Jackson et al. 2005; Decety et al.
2012), as well as when individuals demonstrate generosity (Jorge Moll et al. 2006) or exhibit psychopathic behavior (Kent A. Kiehl 2006). There is strong evidence that insult to any of these brain regions due to trauma, disease, or purposive neurological intervention can alter behavior and personal identity, as exemplified by cases such as Phineas Gage or by invasive procedures (e.g., lobotomies and deep brain stimulation). According to Moll et al., for instance, damage to the anterior areas of the prefrontal cortex "at an early age seems to prevent the acquisition of moral attitudes and values, leading to gross disturbances of social conduct" (Jorge Moll et al. 2008, 6). A superficial interpretation of neuroscientific evidence about morality could result in two main conclusions (Jotterand 2016). The first conclusion is that human behavior can be reduced to neurobiology, a position known as ethical naturalism. Morality can be explained as a natural occurrence, as the product of evolutionary processes like other biological phenomena. Ethical naturalism finds its origins in naturalism—the view that the world and how humans relate to it can only be understood in terms of the laws of nature—and therefore rejects supernatural sources as an explanation of morality. According to Flanagan et al., ethical naturalism also has other methodological commitments worth noting. First, moral philosophy should not "employ a distinctive a priori method of yielding substantive, self-evident and foundational truths from pure conceptual analysis." The outcome of philosophical investigation on moral matters should always be subject to empirical testing, and morality should be assessed in a continuum with the other sciences. Second, naturalism denies "the existence of irreducible and non-natural moral facts or properties." Moral truths cannot be discovered outside the study of human nature, and therefore a metaphysics of ethics appealing to "transcendental rationales" is not a plausible account of morality.
Third, naturalism rejects "the notion … that humans have metaphysical freedom of the will" because any conception of free will that is non-naturalistic violates basic natural laws (Flanagan et al. 2007, 5–9). These three commitments would require a more in-depth analysis to give a full picture of the essence of ethical naturalism. For the sake of this work, suffice it to say that ethical naturalism defines morality in terms of naturalistic premises. Morality is not discovered as a reality outside biological constraints but is rather a continuous
process of creation according to the survival needs of the social group. For instance, primatologist Frans de Waal argues that the origins of morality lie in the psychological instinct to survive. On his account, "the moral law is not imposed from above or derived from well-reasoned principles; rather, it arises from ingrained values that have been there since the beginning of time. The most fundamental one derives from the survival value of group life" (de Waal 2013, 228). The explanatory shift of the sources of morality from moral philosophy to moral psychology (i.e., "ingrained values") marks an important turn in the process of ethical naturalism (Flanagan et al. 2007).1 Philosopher Patricia S. Churchland epitomizes this shift in philosophical inquiry. She contends that "morality seems… to be a natural phenomenon—constrained by the forces of natural selection, rooted in neurobiology, shaped by the local ecology, and modified by cultural developments" (Churchland 2012, 191). A similar perspective is adopted by psychologist Michael Gazzaniga, with an emphasis on moral intuitions: "[m]ost moral judgments are intuitive…we have a reaction to a situation, or an opinion, and we form a theory as to why we feel the way we do…moral ideas are generated by our own interpreter, by our brains, yet we form a theory about their absolute 'rightness'" (Gazzaniga 2006, 172). Thus, the question is whether the formulation of moral theories is a justification of our moral intuitions or whether our moral intuitions are shaped by moral frameworks that operate as sounding boards to guide and correct behavior. In the field of neuroscience, some likewise promote the view that morality is mostly a natural phenomenon. David Redish espouses the language of the "science of morality", which, in his view, "[a]t this point … is concentrating on the descriptive question of What do humans do?
The evidence seems to be that we have evolved a host of 'moral' judgments that allow us to live in (mostly) peaceful communities" (Redish 2013, 225). This is not an exhaustive list of perspectives that adopt a biological stance on morality. The goal here is to provide a few examples of how widely ethical naturalism is accepted across disciplines and to underline how it is moving toward establishing itself in scientific terms as the science of morality. The second conclusion, treated only briefly here since it will be addressed more fully in the next section, is the disproportionate emphasis
on the role of emotions or intuitions in human moral psychology and in the conceptualization of morality. Contra Gazzaniga, who argues that moral theories are developed to justify our moral intuitions, moral intuitions are always informed by a normative framework for their justification and moral formation. Moral intuitions develop through a process of engaging in the world and learning about the good, the right, and the just. Determining whether moral beliefs or moral intuitions are justified requires moving "beyond psychological description to the normative epistemic issue of how we ought to form moral beliefs" (W. Sinnott-Armstrong 2008, 48), an issue I address more fully in the subsequent section.

4.2 Moral Judgments and the Moral Self

4.2.1 Reason and Emotions: A False Dichotomy

The debate about the role of reason versus the role of emotions can be traced back to Immanuel Kant (1724–1804) and David Hume (1711–1776). Hume asserts that reason cannot validate moral behavior or encourage us to conduct ourselves in any particular manner. The only function of reason is to inform one's ability to discern the best way to fulfil the goals determined by the passions. This conception of moral inquiry moves moral discourse into moral psychology, as Hume attempts to explain how inherent emotions and perceptions direct human behavior. On the other hand, philosophers like Kant stress the rational aspect of moral investigation and conclude that because each human being has the same faculty of reason, each individual ought to reach the same conclusions with regard to morality. This debate has been somewhat eclipsed, as there is an increasing consensus among many scholars in moral philosophy and moral psychology that moral judgments are the outcome of two different types of processes in the brain: (1) rational/explicit and (2) emotional/intuitive. Case in point: in their discussion, Pascual and colleagues conclude that this dual dimension of moral processes and the various brain structures implicated
in moral judgments do not warrant the claim that there is a center in the brain for morality. The neural circuitry of moral behavior is closely connected with other types of behaviors: [t]he moral brain does not exist per se; rather, moral processes require the engagement of specific structures of both the "emotional" and the "cognitive" brains, and the difference with respect to other cognitive and emotional processes may lie in the content of these processes, rather than in specific circuits. … [M]orality is supported not by a single brain circuitry or structure, but by a multiplicity of circuits that overlap with other general complex processes. (Pascual et al. 2013, 5–6)

The nature of the interaction between these two domains of neural activity remains unsettled, and at least three main theories have been proposed to explain how these two dimensions of moral judgments operate (Kent A. Kiehl 2006): (1) the social intuitionist theory: most moral judgments are the result of intuitive processes (Haidt 2001); (2) the cognitive control and conflict theory: responses to moral dilemmas depend on which brain areas, emotion-related or cognition-related, are activated, which determines outcomes (Greene et al. 2004); and (3) the cognitive and emotional integration theory: behavior is the outcome of both cognitive and emotional processes, hence rejecting the idea of a dichotomy between reason and emotions (Jorge Moll et al. 2003). The last approach seems the most plausible, especially in light of the work of James Woodward (2016). What he calls the "rationalist dichotomy" (RD) position features a sharp distinction between rational and emotional capacities and foregrounds the place of reason in moral judgments in a way that well outpaces the findings of current brain research (Woodward 2016, 87–88; 106–8). This stance has been popular among many philosophers, including utilitarians and deontologists, such as Immanuel Kant, Joshua Greene, Peter Singer, and Derek Parfit, who are committed to the view that moral judgments are grounded in reason (cognition), a faculty separate from emotion. The RD view carries three main assumptions about reasoning abilities: they are (1) unique to humans or a recent occurrence in evolutionary terms, (2) sophisticated in their processing capabilities, and (3) flexible in their ability to respond to a wide range of situations. Conversely, emotions are
considered (1) ancient in evolutionary terms, (2) primitive in their level of information-processing sophistication, and (3) inflexible in their implementation (Woodward 2016, 88). For these three reasons, according to Woodward, many deem the role of emotions detrimental to the decision-making process that leads to moral judgments. However, as recent brain research has demonstrated, the view that emotional processes do not participate in the formation of moral judgments is inaccurate. As Woodward rightly points out, the question is not whether emotions are involved in moral judgments but rather how they participate in the formation of moral judgments. He suggests an alternative view, the "integrative nondichotomist" (IND) position, which is supported by neuroscientific evidence, particularly concerning brain areas like the orbital frontal cortex (OFC) and the ventromedial prefrontal cortex (VMPFC), which are involved in the processing of complex information and computation (Woodward 2016). The IND view holds that emotions and cognition ought not to be viewed as two distinct dimensions of mental life but as two co-existing processes in moral decision making and moral judgments: "emotion and cognition are not sharply distinct, and emotional processing, properly understood, plays (and ought to play, on virtually any plausible normative theory, including utilitarianism) a central role in moral judgment" (Woodward 2016). The neural structures usually associated with emotional processing—including the ventromedial prefrontal cortex (VMPFC) and the orbital frontal cortex (OFC), as already stated, but also the anterior cingulate cortex (ACC), the insula, the amygdala, and other structures associated with reward processing (ventral striatum)—are not limited to emotion; they also accomplish cognitive work (Woodward 2016, 87–89; 114n9). This point meshes well with the latest neuroscientific research.
Neural structures such as the OFC and the VMPFC are engaged in information processing and computation, and "are highly flexible and capable of sophisticated forms of learning, particularly in social contexts, which are often informationally very complex" (Woodward 2016, 89). In addition to their learning potential, these structures participate in calculation and computation, and are representational in the sense of providing the ability to quantify reward in moral judgments (Woodward 2016, 91–95).
Because of the nature of these brain structures (their flexibility and their capability to undertake sophisticated and complex learning tasks) and their anatomical and functional differences2 compared to non-human animals, it is sound, according to Woodward, to affirm three fundamental facts about how emotions and reason are integrated in moral judgments. First, this framework suggests that emotions and reason do not work as separate entities in the formation of moral judgments but rather work in synergy while maintaining their intrinsic specificities. Second, morality cannot be based on reason alone or treated as something separate from emotion. And third, there is no evidence that moral judgments and decision-making processes are qualitatively affected by emotional distortions—such distortions can happen, but not inherently (Woodward 2016, 89).

4.2.2 Moral Agency: Moral Capacity and Moral Content

In light of my analysis, we can state that the distinction between reason and moral emotions is a dichotomy that should be rejected, as it is not supported by the latest research in brain science. Each dimension contributes to and participates in the two other aspects of the formation of moral judgments and the decision-making process: moral capacity and moral content. Moral content refers to particular conceptions of the good, the right, and the just, and to beliefs and ideas about the good life, which are determined by one's willingness and ability to reason about moral dilemmas and human flourishing. Moral capacity, on the other hand, refers to one's ability or disposition to respond morally, which involves the affective (i.e., emotional) dimension of moral judgments and moral actions (Jotterand 2011; Jotterand and Levin 2017).3 In other words, three main dimensions characterize human moral psychology: the affective (desire), the motivational (volition), and the cognitive (reason). Each dimension reflects mental attributes or behavioral characteristics of human conduct. Desire reflects the capacity of an individual to act morally based on moral beliefs (affective), whereas reason is the anchor that justifies grounds for action (cognitive) (Woodward 2016, 113). We should, then, be skeptical of the view that morality can be established on rational grounds alone
and appeal to what reflects a true human experience in moral evaluations. The neuroscience of ethics has proven to be an excellent resource for understanding how brain structures participate in moral judgments. What seems to emerge from this research is a picture of morality that describes moral agency, regardless of whether one adopts a utilitarian or a deontological approach, as a trifecta. As Woodward rightly sums up the issue, "[t]he way forward is…to recognize that moral requirements that we find appealing and that are suitable for regulating our lives will reflect facts about our affective and motivational commitments, as well as our capacities for reasoning" (Woodward 2016, 113). That said, the question remains whether desires to act morally, based on the evaluation of particular situations, themselves have enough weight to motivate an individual to act accordingly. Before we can go further in our analysis, we must define the term "moral motivation." Motivation is "a property central in motivational explanations of intentional conduct" that involves an interaction between two attitudes: desire and belief (Mele 1999, 591). Desire is an inclination (having, as such, an emotional dimension) to bring about the satisfaction of that very attitude, whereas belief provides a justification to pursue the desire. It follows that motivation is constituted by both the cognitive and the affective dimensions of human psychology. Let me illustrate this claim. Imagine you are walking in downtown Milwaukee during a blizzard, and you encounter a homeless person begging for change in order to buy some food. A compassionate response might incite you to buy that person a hot meal based on your desire to do the right thing (an emotional response to human need and misery), but also on your evaluation of the predicament of an individual in the middle of a winter storm (a justification for your action).
In this case, your motivation is the product of moral emotions regarding human misery and rational deliberation as to why a course of action is justified (“It is dangerous to be outside in cold weather without warm clothes, shelter and food”). In other words, compassion as a moral emotion needs a framework for its justification; otherwise a constant emotional response would lead to moral blindness. In the above case, the response to homelessness must avoid two extremes: a disproportionate emotional response that would result in a constant state of emotional distress when witnessing misery and a disproportionate rational response
such as a rationalization for not intervening or a rule to always take care of every individual in need regardless of the situation. The perspective just outlined is an attempt to provide a unified conception of moral agency that takes into account the fullness of the human condition within the context of moral inquiry. It reacts against contemporary accounts of morality that display a void in terms of moral content as a result of their quest for moral neutrality (i.e., the absence of any claims concerning particular goods). In the next section, I will argue that the way proponents of moral bioenhancement conceptualize moral behavior is the outcome of the abandonment of a unified conception of moral agency grounded in the Aristotelian concept of phronesis.

4.3 Phronesis and the Virtues

4.3.1 Aristotle on Phronesis

Phronesis (practical wisdom) is, according to Aristotle, the art of deliberating well, that is, of making the appropriate choice and establishing the right means to achieve a particular moral end through a specific good action. Phronesis should be differentiated from deinotes, defined as “shrewdness.” According to Aristotle, “there is a power which is called ‘shrewdness’, and this is such as to enable us to act successfully upon the means leading to an aim we set before us. If the aim is noble, that power is praiseworthy, but if the aim is bad, the power is called ‘unscrupulousness’” (Aristotle, Nicomachean Ethics, 1144a 24–27). In short, phronesis always concerns moral matters, whereas the moral worth of deinotes depends on the aim. This choice, however, is closely related to virtue, which is defined as “a habit, disposed toward action by deliberative choice, being at the mean relative to us, defined by reason and as a prudent man would define it” (Aristotle, Nicomachean Ethics, 1111b 5; 1106b 36). Before we can investigate the nature of phronesis in Aristotle’s work, we need to examine the concept of virtue, because without some background on the nature of the virtues we would partially misrepresent the concept of phronesis. In the Nicomachean Ethics, virtue is described as a habitus or a disposition of character acquired by the continuous molding of one’s behavioral
apparatus through the practice of morally good deeds. For Aristotle, “every virtue or excellence renders good the thing itself of which it is excellence [man’s state of character] and causes it to perform its function well [man’s capacity to act well]” (Aristotle, Nicomachean Ethics, 1106a 15). Accordingly, this disposition of character serves the telos of an individual’s life, which is happiness—or eudaimonia. This is “some kind of activity of the soul in conformity with virtue” (Aristotle, Nicomachean Ethics, 1099b 26). Contrary to Plato, however, Aristotle does not perceive virtue as an intrinsic quality of the soul or as derivative from metaphysical speculation. The quest for virtue is not just for “the sake of contemplation, like other theoretical inquiries—for we are inquiring what virtue is, not in order [just] to know it, but in order to become good, since otherwise there would be no benefit from that inquiry” (Aristotle, Nicomachean Ethics, 1103b 26–28). Thus, for Aristotle, the virtuous self sits at the intersection of knowing and doing. The knowing is illustrated by his insistence on good upbringing and right education, which train individuals to master the passions of their souls:

Ethical virtue is concerned with pleasures and pains; for we do what is bad for the sake of pleasure, and we abstain from doing what is noble because of pain. In view of this, we should be brought up from our early youth in such a way as to enjoy and be pained by the things we should, as Plato says, for this is the right education. (Aristotle, Nicomachean Ethics, 1104b 10–13)

An important point to stress, however, is that the formation of character is not limited to its cognitive dimension. Practicing virtuous actions is an important aspect of the development of character. As a musician practices to achieve excellence in music, so must an individual who wants to achieve moral excellence practice and learn how to reach this goal. This lengthy passage in the Nicomachean Ethics underscores the necessity of practicing virtuous deeds:

In the case of the virtues … we acquire them as a result of prior activities; and this is like the case of the arts, for that which we are to perform by art after learning, we first learn by performing, e.g., we become builders by building and lyre-players by playing the lyre. Similarly, we become just by doing what is just, temperate by doing what is temperate, and brave by
doing brave deeds. … Such indeed is the case with virtues also; for it is by our actions with other men in transaction that we are in the process of becoming just or unjust, and it is by our actions in dangerous situations in which we are in the process of acquiring the habit of being courageous or afraid that we become brave or cowardly, respectively …. In short, it is by similar activities that habits are developed in men; and in view of this, the activities in which men are engaged should be of [the right] quality, for the kinds of habits which develop follow the corresponding differences in those activities. So, in acquiring a habit it makes no small difference whether we are acting in one way or in the contrary way right from our early youth; it makes a great difference, or rather all the difference. (Aristotle, Nicomachean Ethics, 1103a 31–1103b 25)

Even though Aristotle’s argument seems to form an endless loop—an individual cannot be virtuous except by acting virtuously, but to act virtuously one must be virtuous—the stress on responsible agency is central to moral development. While Aristotle affirms that a good education and upbringing result in the formation of the virtuous self (the virtues are always developed in dialogue with others, e.g., a teacher, a mentor, moral exemplars in literature, etc.), he emphasizes the primordial role of agency, that is, the obligation to develop the habits conducive to the development of a virtuous self (Aristotle, Nicomachean Ethics, 1114b 13–25). Thus, virtue is not simply a matter of personal feelings, taste, or psychological traits of character. Virtue has certain characteristics of a habit, but the Aristotelian understanding of virtue does not convey the notions of a “Pavlovian reflex,” an “automatic reflex,” or an “intuitive response to innate knowledge of the good” (Pellegrino and Thomasma 1993, 5). According to Aristotle, a virtue is a behavioral disposition under the supervision of reason that can be taught and learned. Aristotle’s ethics, particularly his understanding of phronesis, cannot be adequately comprehended without also examining some of his anthropological presuppositions. The first, stated succinctly, is the “power of reason” intrinsic to human beings. For Aristotle, human rational ability is an important aspect of his account of moral agency. It implies two elements: the first relates to the essence of humans, that is, the capacity to reason is what determines human nature and the right functioning of one’s existence (this is not to say that a human being lacking the capacity
to reason due to a mental impairment is not human). As Aristotle puts it, “the proper function of man…consists in an activity of the soul in conformity with a rational principle” (Aristotle, Nicomachean Ethics, 1098a 2–8). This rational principle is twofold. One part of the soul has reason in the sense that it is able to obey the rules of reason; it is the part in which desires and their corollary virtues take place (or capacity in the model outlined previously). The other dimension is the thinking part that possesses and conceives rational rules, that is, the intellectual virtues or their opposites (or content in the model outlined previously) (Aristotle, Nicomachean Ethics, 1098a 4–5). The second element intrinsic to an Aristotelian anthropology is that an individual is able to set his or her goals and standards in living (Aristotle, Nicomachean Ethics, 1098a 13–17). This decisional capacity conveys the notion that agents are morally responsible. It also indicates that human rational activity is a dynamic process that expresses not only a universal characteristic of the human condition but also “the power of particular determination” (Hauerwas 1994, 46). The unity between the particularity and universality of human rational agency provides insights regarding the nature of phronesis: it is the art of good deliberation between desires (particularity) and rational rules/reason (universality) in conjunction with particular ends. Thus, according to Aristotle, phronesis

is concerned with human affairs and with matters about which deliberation is possible. As we have said, the most characteristic function of a man of practical wisdom is to deliberate well: no one deliberates about things that cannot be other than they are, nor about things that are not directed to some end, an end that is a good attainable by action. In an unqualified sense, that man is good at deliberating who, by reasoning, can aim at and hit the best thing attainable to man by action.
(Aristotle, Nicomachean Ethics, 1141b 8–13)

Danielle Lories remarks, though, that phronesis should be distinguished from other parallel concepts such as sophia, episteme, and techne. She notes that phronesis is distinct because, contrary to the other terms, it is relative to praxis and its object is the particular and the perishable rather than the immutable (Lories 1998).4 In addition, in reference to Pierre
Aubenque, she underscores the fact that the practical disposition of phronesis relates to “la règle du choix” (the rule of choice) rather than the choice itself. Phronesis is not concerned with the rectitude of the action per se, but rather with the appropriateness of the criterion used to implement a particular action.5 The definition of phronesis provided above and its particular relation to action point to three further concepts we need to examine: (1) phronesis requires good deliberation (bouleusis, euboulia); (2) deliberation implies judgment (gnome); and (3) deliberation is a rational process (sunesis, eusunesis). Each of these elements of phronesis is examined below in order to provide a full picture of human moral agency.

4.3.2 Good Deliberation

In his discussion of good deliberation, Aristotle presumes the pivotal role of human agency as the moving principle of one’s actions. As a corollary, deliberation is first and foremost about objects of choice attainable by human actions, whose function (i.e., the function of human actions) is to serve as means for ends other than the actions themselves (Aristotle, Nicomachean Ethics, 1112b 31–32). Thus, deliberation is a rational process in which an agent selects or chooses the appropriate means in order to achieve a desired end. In Aristotle’s words, “we deliberate not about ends but about the means to attain ends” (Aristotle, Nicomachean Ethics, 1112b 31–32). In other parts of his work, he emphasizes choice as a deliberative process involving desires and reasoning directed toward some end (Aristotle, Nicomachean Ethics, 1139a 32–33). This dimension is particularly obvious in Book VI, where he explains the process by which an individual achieves the right action according to the combination of “intelligence motivated by desire or desire operating through thought.” Thus, the starting point of any action is choice, which is directed by desire and reasoning. Aristotle collapses the two elements into one entity (i.e., deliberation). The colliding of these two dimensions, however, raises a difficulty. If desires and reason are so well perfected (after all, the fully virtuous person is the one who can act according to the right response to a specific situation, and in such a way that there is no inner conflict
because desires have been directed toward a good end according to reason), the working out of a moral dilemma does not require the agent’s participation in an intentional attempt to select the right means to an end. Consequently, the moral agent’s decisional process does not imply the examination of a specific situation but rather a kind of “intuitive” response. The question, then, is what deliberation entails for a perfectly virtuous person. In the Nicomachean Ethics, we are confronted with two different accounts of deliberation. On the one hand, deliberation is depicted as a struggling process in which a moral agent attempts to discover the appropriate means to a particular end. On the other hand, deliberation could signify that the means to an end is almost implicit in the final outcome of an action. Is Aristotle inconsistent in his conceptualization of deliberation? It is worth considering Julia Annas’s observations on this particular issue. She argues that Aristotle does not demonstrate inconsistencies between the two books (Book III and Book VI), nor does he envisage moral life as a mechanical or intuitive response to moral dilemmas (Annas 1993). The seeming contradiction ought to be viewed as two different stages in moral development. Book III depicts the person acquiring the indispensable virtues that contribute to the development of practical wisdom (phronesis). The individual is in a learning process and does not take his or her ends for granted. Therefore, one must, in this stage, scrupulously assess and work out the varied factors influencing his or her choices and actions. By contrast, in Book VI we have the portrayal of a fully virtuous person whose ends are in accordance with his or her feelings and judgments.
In Annas’s words, “Book III suggests that the virtuous agent has to work out what to do before deciding to do it, whereas in Book VI there seems no room for this: the fully virtuous person embodies harmony of thought and feeling that she does not have to work out what to do: she is immediately sensitive to it” (Annas 1993, 90). Phronesis, then, should be understood as a “unified basis” in which feelings and judgments develop into a harmonious entity (Kristjánsson 2015). For Annas, these two accounts represent two different patterns of practical thinking, one in the learner and the other in the virtuous person. She recognizes that this conclusion might seem abrupt, but she asserts that in Aristotle’s moral philosophy even a fully virtuous person may struggle to make the right
decision “when faced by problems that do not spring from reasons of virtue” (Annas 1993, 90). For instance, there are problems created not only by a lack of information but also by what she calls non-moral factors. Therefore, even the fully virtuous person will not be able to avoid deliberating about what to do in certain situations.

4.3.3 Judgment and Choice

Having defined deliberation, we can now turn to the object of deliberation: choice. To this end we need to look at Aristotle’s definition of proairesis (choice), which provides insights concerning the requirements of moral agency in the act of deliberating well. According to Aristotle, choice is the starting point of action:

it is the source of motion but not the end for the sake of which we act, i.e., the final cause. The starting point of choice, however, is desire and reasoning directed toward some end. That is why there cannot be choice either without intelligence and thought or without some moral characteristic; for good and bad action in human conduct are not possible without thought and character. Now thought alone moves nothing; only thought which is directed to some end and concerned with action can do so. And it is this kind of thought also which initiates production. For whoever produces something produces it for an end. The product he makes is not an end in an unqualified sense, but an end only in a particular relation and of a particular operation. Only the goal of action is an end in the unqualified sense: for the good life is an end, and desire is directed toward this. Therefore, choice is either intelligence motivated by desire or desire operating through thought, and it is as a combination of these two that man is a starting point of action. (Aristotle, Nicomachean Ethics, 1139a 30–1139b 5)

A superficial reading of this passage could suggest that Aristotle’s account of practical wisdom emphasizes exclusively particular action (“an end only in a particular relation and of a particular operation”) rather than universal concerns about the good life. Elsewhere in the Nicomachean Ethics, however, Aristotle makes it clear that the phronimos displays his or her moral agency through deliberation not only about “what is good
or advantageous for oneself,” but also about “what sort of thing contributes to the good life in general” (Aristotle, Nicomachean Ethics, 1139b 25–28). More importantly, this passage addresses two important features intrinsic to moral agency: the role of desire and the role of reasoning. The first element, desire, is the dynamic part of the act of deliberating about potential choices. Choices are directed toward some end, but without a “source of motion” choice would remain inert, a merely rational concept deprived of its spatio-temporal realization. The interface between choice and desire can be best described as follows: choice is “a deliberate desire for things that are within our power: we arrive at a decision on the basis of deliberation, and then let the deliberation guide our desire” (Aristotle, Nicomachean Ethics, 1113a 10–13). Making the right choice in deliberation requires reason but also the internalization of moral virtues that will produce right desires (gnome) and consequently right ends. Aristotle resists an “intellectualism” that limits moral deliberation to reason and depicts virtue as essentially the corollary of knowledge, on which view the virtuous person is the one who has increased his or her knowledge of what constitutes the ultimate good for humans, that is, eudaimonia or happiness, and evil is the absence of such knowledge. Against such intellectualism, Aristotle incorporates the emotional dimension (desires and feelings) into the cognitive element of moral reasoning. The necessity of gnome (the ability to judge with clarity, comprehension, and equity according to moral virtues) (Lories 1998)6 in Aristotle’s understanding of moral deliberation is precisely what makes excellence in character a condition for good deliberation. The reason is that our character is a “qualification of our agency” that shapes how we envisage, describe, and intend our actions (Hauerwas 1994, 61).
Indeed, the very idea of character assumes that the self can be morally determined and therefore any action depends on the kind of moral virtues the phronimos possesses. The second aspect in the decisional process of deliberation is the role of reason (sunesis). The etymology of sunesis refers to two different meanings: it can suggest an “insight or understanding in the religio-ethical realm” or it can signify the faculty of comprehension, intelligence, acuteness, or shrewdness (Bauer 2001). In the context of Aristotle’s work, sunesis designates the latter. It is the ability to criticize and formulate an
opinion in relation to concrete situations. René Antoine Gauthier and Jean Yves Jolif, in their translation of Aristotle, refer to sunesis as “[un] jugement normatif qui guide notre action” (a normative judgment that guides our action) (Gauthier et al. 1970, II.2 526). It follows that sunesis is an essential component in deliberation because it determines one’s understanding of the good, the right, and the just. The moral agent cannot simply rely on his or her desires, because desires do not address the question of the moral justification of an action—that is, “le jugement normatif” or moral content—but, as pointed out earlier, supply only the essential motives for an action (capacity). Furthermore, in order to avoid a “mechanical behavior” or an “intuitive morality,” sunesis refers to the agent’s capacity to examine and question each concrete situation in its particularities (Aubenque 1976, 151 cited in Lories 1998, 136). Sunesis not only requires excellence in thinking in order to acquire the right perception in relation to a specific action but is also an indispensable characteristic of the phronimos. The distinction between moral capacity and moral content is unmistakably established in Aristotle’s account of phronesis. It not only demonstrates the multifaceted nature of moral agency in its dynamic and essential dimensions, but also exhibits a contrast with the current conceptualization of morality as a natural phenomenon that recognizes no sources of moral justification outside the self. In other words, the “distinctively modern self” has emerged in a context in which emotivism is embraced. In the next section, I examine the factors that resulted in the emergence of the contemporary emotivist self and why moral bioenhancement is a symptom of a deeper problem in moral philosophy characterized by a transition from moral philosophy to moral psychology.

4.4 The Autonomous Modern Self

4.4.1 The Creation of the Autonomous Self

In The Invention of Autonomy, Jerome B. Schneewind provides a historical account of moral philosophy in the seventeenth and eighteenth centuries and demonstrates how Immanuel Kant initiated a major shift in moral
philosophy toward an understanding of morality as autonomy (Schneewind 1997). As Schneewind notes, “the idea that we are rational [autonomous] beings who spontaneously impose lawfulness on the world in which we live and thereby create its basic order is, of course, central to the whole [of] Kant’s philosophy.” He goes on to ask: “How did Kant come to invent such a revolutionary view, and to think that it could explain morality?” (Schneewind 1997, 484). Kant’s revolution in moral philosophy provided the foundation that paved the way to the modern emotivist autonomous self. His framework did not lead to the emotivist dimension of the modern self per se; rather, Kant laid the foundation for conceptualizing the self according to what Charles Taylor calls its “expressive power,” that is, “our contemporary notions of what it is to respect people’s integrity includes that of protecting their expressive freedom to express and develop their own opinions, to define their own life conceptions” (Taylor 1989, 25). Autonomy is then understood as respect, which in turn refers back to Kant’s conception of human dignity. According to Kant, the intrinsic worth or dignity of a person is derived from the rational and moral dimensions of human personhood. An individual can act as a moral agent and distinguish right from wrong based on reasons for action. Hence, he argues that moral agents are autonomous insofar as they follow the tenets of his categorical imperative: “Act so as to use humanity, whether in your own person or in others, always as an end, and never merely as a means.” For Schneewind, Kant’s conception of morality as autonomy has two specific characteristics. First, he remarks that this notion of autonomy is rooted in a “metaphysical psychology.” It signifies that the human psychological makeup endows each individual, through reason, with the ability to recognize the requirement of morality (duty) according to the categorical imperative.
Second, Schneewind notes that for Kant morality “presupposes that we are rational agents whose transcendental freedom takes us out of the domain of natural causation” (Schneewind 1997, 515). Reason, regardless of contextual considerations and one’s state of corruption, cannot lose its ability to provide reasons for action because it is determined by transcendental freedom that is beyond the realm of natural causation. The other factor that shaped the modern self is the shift in Western philosophy from moral philosophy to moral psychology—reflected in
Kant as noted above. David Hume (A Treatise of Human Nature [1739–1740]; see also Haidt 2003; Hauser 2006) adopted the theory of ideas advanced by John Locke but introduced the concept of perceptions. Perceptions, on Hume’s account, include impressions (knowledge gained through sense experience) and ideas (duplicates of former impressions). The mind, then, is simply a compilation of perceptions in which reason has a secondary role in cognition. This approach to epistemology resulted in a radical skepticism in moral philosophy that ultimately relegated reason to the role of an informer about how to achieve the aims set by the passions. Hume provided what he thought was a justification of how perceptions and emotions guide human behavior. Contrary to Kant, he advanced the idea that the basis of morality lies within human nature and that reason can neither motivate an individual to act morally nor validate moral behavior. Brought together, these two dimensions (morality as autonomy and the role of emotions) set the stage for the emergence of the modern self.

4.4.2 The Emotivist Modern Self

Alasdair MacIntyre, in Whose Justice? Which Rationality?, examines Aristotle’s understanding of practical rationality and contends that the modern account of morality is at odds with an Aristotelian account due to their diverging views about the nature of moral rationality. On the one hand, in Aristotle’s conception of practical rationality, the fully rational moral agent is directly informed about the premises of a foreseen end and therefore is immediately convinced of the moral validity of an action. On the other hand, the modern account is characterized by skepticism about how a moral rational agent should pursue a particular course of action, even though goods and compelling reasons are constitutive of moral deliberation. There is no set of practical reasons conclusive in the decision-making process about the outcome of a particular action (A. MacIntyre 1988, 140). Furthermore, modern moral philosophy typically rejects any particular claims about some good over others. This condition, MacIntyre asserts, creates moral dilemmas for which “no mode of rational resolution” is available to us (A. MacIntyre 1988, 141).


In addition to the inability to provide a rational resolution to moral dilemmas, our postmodern context disallows the significance of narratives or traditions in our construction of morality. Whether we are still in the modern era (or in hyper-modernity) or have entered the post-modern era is not that important, since “there is much more continuity than difference between the broad history of modernism and the movement called postmodernism.” We ought to “see the latter as a particular kind of crisis within the former” (Harvey 1990, 116). The more important point is that the Enlightenment inaugurated the project of modernity, characterized by an emphasis on empirical science, universal morality, and the autonomy of the individual (Juergen Habermas 1985). So, while postmodernity can be considered a continuation of modernity, there is nevertheless a unique element in postmodernity with respect to how history and its embedded variety of narratives provide purpose and meaning. Jean Baudrillard, in The Illusion of the End, eloquently captures this postmodern phenomenon, which is not surprising in an era of 24-hour news channels that report current events without offering any substantive historical perspective or meaning:

History has gradually narrowed down to the field of its probable causes and effects, and, even more recently, to the field of current events—its effects ‘in real time’. Events now have no more significance than their anticipated meaning, their programming and their broadcasting. Only this event strike constitutes a true historical phenomenon—this refusal to signify anything whatever, or this capacity to signify anything at all. This is the true end of history, the end of historical Reason. (Baudrillard 1994, 21–22)

The orientation of postmodernity is antithetical to the use of history and its by-products, tradition(s) and narrative(s), as paradigms for social and personal identity. Postmodernity, in one of its most radical forms, deconstructionism, maintains an anti-essentialist and highly unconventional ideology that constantly strives for the unforeseeable and for originality, in continual resistance to conservatism, an effort to be productive rather than reproductive. The implications for moral philosophy are multiple and would be worthy of a lengthy analysis.7 However, in the context of my inquiry, I want to focus on what P.J. Labarrière calls, as a
consequence of postmodernity, among other things, la disparition du sujet and la crise de la raison pratique et éthique (the disappearance of the subject and the crisis of practical and ethical reason) (Labarrière 1996).8 This crisis is translated into what MacIntyre describes as the lack of continuity between the tradition of moral reasoning and the “new” economy of postmodernity. Our Western heritage, he contends, has always understood ethics as a systematic reflection upon the nature and the goals of morality. The current “dominant view,” however, has purged its commitment to a systematic approach to moral decisions and transformed ethics into “the regulation of the relationships of anyone whatsoever with anyone else” (A. MacIntyre 1984b, 498). It follows that the role of morality has been supplanted by an a-historical, a-cultural, and a-philosophical perspective on human experience, leading to the rejection of tradition and narrative and the disappearance of the subject. In this context, morality does not engage the past, present, and future of one’s own life; rather, it regulates the ever-present relations between two parties. As MacIntyre notes, “the self had been liberated from all those outmoded forms of social organization which had imprisoned it simultaneously within a belief in a theistic and teleological world order and within those hierarchical structures which attempted to legitimate themselves as part of such a world order” (A. C. MacIntyre 1984a, 60). The move to free the individual from traditions, whether through the universalizing of moral values or through appeals to utility and intuitions, has resulted in continuous disputes without any resolution. These disputes have been the fertile soil out of which has emerged “no uncontested and incontestable account of what tradition-independent morality consists in and consequently no neutral set of criteria by means of which the claims of rival and contending traditions could be adjudicated” (A. MacIntyre 1988, 334).
The impetus to secure a neutral morality that would permit social consensus is in fact a shift in focus from the good to the right, which promotes individualism and undermines a common understanding of the good through rational inquiry. An additional element, related to but distinct from my previous point, is the conceptualization of the moral self according to the doctrine of emotivism. Within this framework, ethical decisions are equated with personal inclinations and intuitions. Emotivism is “the doctrine that all evaluative judgments and more specifically all moral judgments are
nothing but expressions of preference, expressions of attitude or feeling, insofar as they are moral or evaluative in character” (A. C. MacIntyre 1984a, 11–12). The modern self, then, requires not only a new social backdrop, free from the social structures that shape traditions and narratives, but also a variety of beliefs and concepts that are not always consistent. The transition from a “traditional self”—for lack of a better word—to the modern self is the history of the development of a set of conditions that ultimately created the individual, in its restrictive sense (A. C. MacIntyre 1984a, 31, 61; see also Trueman 2020). The emotivist self is left to its own self-rule as a point of reference and, therefore, does not need to be accountable to an outside moral authority:

The specifically modern self, the self that I have called emotivist, finds no limits set to that on which it may pass judgment for such limits could only derive from rational criteria for evaluation and, as we have seen, the emotivist self lacks any such criteria. Everything may be criticized from whatever standpoint the self has adopted, including the self’s choice of standpoint to adopt. It is in this capacity of the self to evade any necessary identification with any particular contingent state of affairs that some modern philosophers, both analytical and existentialist, have seen the essence of moral agency. To be a moral agent is, on this view, precisely to be able to stand back from any and every situation in which one is involved, from any and every characteristic that one may possess, and to pass judgment on it from a purely universal and abstract point of view that is totally detached from all social particularity. Anyone and everyone can thus be a moral agent, since it is in the self and not in social roles or practices that moral agency has to be located. (A. MacIntyre 1984a, 31–32)

In a recent analysis of the rise of the modern self, Carl R. Trueman argues that many of the socio-political and ethical issues we are facing are the result of a “psychologized, expressive individual that is the social norm today [which] is unique, unprecedented, and singularly significant.” This new view of the self in contemporary culture is an important shift in the history of the West, one that provides the conditions for a move away from “a mimetic view of the world as possessing intrinsic meaning to a poietic one, where the onus for meaning lies with the human self as constructive agent” (Trueman 2020, 70–71). The resulting framework is a transformation of


F. Jotterand

morality into a set of preferences in which a rational agreement on the nature of the good and the good life is rendered difficult. Reaching an agreement relies on rational argument, but because the modern view of morality operates on different rationalities, morality is reduced to subjective inclinations or desires located in an allegedly “neutral philosophical” framework. The self is not constructed in relation to others or to a particular tradition but in a self-referential fashion. As Jürgen Habermas rightly remarks, “because of the forces of modernism,” in the modern self “the principle of unlimited self-realization, the demand for authentic self-experience and the subjectivism of a hyperstimulated sensitivity have come to be dominant” (Habermas 1985, 6). Referring to Bell’s analysis in The Cultural Contradictions of Capitalism (1976), Habermas concludes that modern culture and its emphasis on hedonism are “altogether incompatible with the moral basis of a purposive, rational conduct of life” (Habermas 1985, 6). Modern culture exhibits not only competing rationalities but also irrational modes of existence.

4.4.3 The Disrupted Self

The picture of the self that has emerged so far has two main components: it is autonomous (as transcendental freedom) and emotivist in nature (self-referential). In this section, I consider a third dimension that will complete my description of what I call the disrupted self: disrupted because of its non-essentialist, non-relational (failing to understand that the identity of the self cannot be determined autonomously), and emotivist (the emotivist self is partially deprived of the rational basis for moral conduct) nature. In particular, I consider Charles Taylor’s Sources of the Self (Taylor 1989). His analysis focuses on the modern understanding of what it means to be a “human agent, a person, or a self” (Taylor 1989, 3). He makes two key claims pertaining to the modern notion of self: first, modern moral philosophy has shifted its focus from the good to the right or, as he puts it, to “defining the content of obligation rather than the nature of the good life” (Taylor 1989, 3). This view of morality has resulted in the removal of considerations related to the very nature of moral agency; stated differently, ontological perspectives have been


deemed problematic because they have provided, in some instances, the ground to justify racism, bigotry, and other repugnant claims about particular groups of humans. Hence, Taylor makes a second important assertion: selfhood and morality are themes that cannot be examined in isolation. The moral nature of human identity and the manifestation of moral intuitions in human behavior can only be understood from the perspective of a “given ontology of the human” (Taylor 1989, 5). The implication of these two insights about human agency is particularly salient when discussing the moral status or the dignity of various groups of individuals in modern society. Specifically, the moral assessment of human beings rests on the fact that they are capable of “some kind of higher form of life,” which provides the basis for the belief that “they are fit objects of respect, that their life and integrity is sacred or enjoys immunity, and is not to be attacked” (Taylor 1989, 25). Taylor argues that the dignity of human beings is defined according to the values we attach to particular forms of life or the rank we assign to them. But the very notion of dignity, in order to articulate what norms impart value and to assign immunity from outside attacks, depends on frameworks that are an integral part of the two dimensions outlined earlier (the connection between selfhood and morality, and the good before the right). These frameworks provide the background necessary for moral judgments to make sense regarding how human beings behave in moral spaces (Taylor 1989, 26).9 In contrast, the naturalistic explanation of morality does not need frameworks to clarify what moral agency entails. Modern culture has challenged the validity of, and expressed its skepticism toward, traditional frameworks, if not simply declared that all frameworks are problematic. Taylor rejects this thesis, and rightly so.
He asserts that without frameworks it is not possible to live a fulfilled life; they are the necessary condition for the creation of horizons that allow human beings to make sense of their lives and find purpose. Stated differently, the rejection of frameworks in modern society has resulted in an identity crisis about not only what it means to be a human being (ontologically, that is: What am I? Who am I as a moral agent? Where do I stand in the world?) but also what sources generate meaning and purpose. Taylor goes so far as to claim that the absence of such frameworks is damaging to the very fabric of human agency:


I want to defend the strong thesis that doing without frameworks is utterly impossible for us; otherwise put, that the horizons within which we live our lives and which make sense of them have to include these strong qualitative discriminations. … [T]he claim is that living within such strongly qualified horizons is constitutive of human agency, that stepping outside these limits would be tantamount to stepping outside what we would recognize as integral, that is, undamaged human personhood. (Taylor 1989, 27)

The modern impetus to reject frameworks, epitomized by the lack of a common understanding of the public good and the resulting moral pluralism, should be a subject of concern. The failure to acknowledge the importance of frameworks, traditions, and social practices is detrimental to the structuring of society and the creation of meaning and purpose in one’s life. A rhetoric (regardless of its ideological orientation) that constantly calls into question the framework(s) that shapes one’s identity and affords the basis for discriminating between what constitutes a source of meaning and purpose and what is damaging to one’s identity and sense of purpose can lead to an identity crisis, or what Taylor calls a “form of disorientation.”10 Individuals undergoing disorientation “lack a frame or horizon within which things can take on a stable significance, within which some life possibilities can be seen as good or meaningful, others as bad or trivial” (Taylor 1989, 27–28). The current socio-political context encouraging identity politics emphasizes identity denying rather than identity affirming (Mitchell 2020). What has become clear from Taylor’s insights is the necessity of an orientation of the moral agent toward a telos that provides meaning, purpose, and a motivation to pursue good ends. More importantly, this orientation is paramount in giving the agent a moral identity that orients him or her within various moral spaces. Reacting to the naturalist approach, Taylor critically observes that such a view would relegate

the issue of what framework to adopt…as an ultimately factitious question. But our discussion of identity indicates rather that it belongs to the class of the inescapable, i.e., that it belongs to human agency to exist in a space of questions about strongly valued goods, prior to all choice or adventitious cultural change. This discussion…throws up a strong challenge to the naturalist picture. (Taylor 1989, 31)


Not only does Taylor refute the naturalist view, but he also insists that individuals lacking frameworks are “in the grip of an appalling identity crisis.” They would not be able to determine where they stand on fundamental issues, would lack any sense of guidance, and would be unable to offer an answer (Taylor 1989, 31). The moral self cannot be divorced from the good and from the narratives, traditions, and practices that sustain it within “webs of interlocutors” (Taylor 1989, 36). The development of a moral self does not occur in isolation but requires an interdependence that is in sharp contrast to the modern conception of individualism (Trueman 2020).

4.4.4 Moral Neutrality

The complexity of moral life imposes upon contemporary society the difficult task of providing the values that sustain the public good apart from the traditional notions that have shaped Western culture for centuries. But the impetus of modern moral philosophy to reach “moral neutrality” represents a conceptual mistake. The hope to improve human character through technoscientific means implies the possibility of a philosophical and moral neutrality concerning one’s beliefs, attitudes, presuppositions, values, and so on. This means that moral decision-making processes would not need any particular philosophical and moral standpoint, since we could achieve moral enhancement through the manipulation of moral emotions. However, this neutrality is illusory because individuals order their lives according to a set of practices that imply particular ends or goods within a social order:

It is through initiation into the ordered relationships of some particular practice or practices, through education into the skills and virtues which it or they require, and through an understanding of the relationship of those skills and virtues to the achievement of the good internal to that practice or those practices that we first find application in everyday life for just such a teleological scheme of understanding. (MacIntyre 1998, 140)

Practices provide the context that locates a moral agent within a network of social interactions. In the process of ordering one’s life according to a set of practices, one participates in “particular social orders embodying


particular conceptions of rationality” (MacIntyre 1998, 121). One cannot escape one’s rationality in the hope of reaching moral neutrality. The terms moral and morality always presuppose an inescapable, content-laden conception of rationality situated in a particular social environment. Each individual has a set of norms guiding his or her behavior based on particular notions of the good, the right, and the just. Deliberating on a moral question demands an initiation into everyday moral judgments and activities, or practices. These practices are acquired through the learning of virtues and skills within a community or social setting, which in turn requires an understanding of the relationship between these virtues and skills and the means to achieve the particular (social) goods internal to these practices.

4.4.5 Contentless Morality

Within the framework I have outlined so far, it appears difficult to determine, on the account of proponents of moral bioenhancement, what level of emotional control is adequate for moral behavior or what degree of altruism, empathy, or solidarity ensures sociability. The way human beings make moral decisions requires the interaction of a complex network of emotional, cognitive, and motivational processes that cannot be reduced to moral emotions or technological control (moral capacity) alone but must also involve practical reasoning (i.e., the source of moral content). MacIntyre situates moral deliberation within a web of internal and external conditions necessary for moral agency. First, an individual develops as a moral agent within a narrative situated in a particular social context that informs and shapes one’s identity. Second, a moral agent cannot deliberate on moral questions without reflecting on the nature of the good. This deliberation includes the universal question “What is the good?” but also the particular question “What is my good?” (MacIntyre 1998, 138). Practices and their internal goods are determined by a particular vision of the good life, and therefore these two questions are essential in moral development. Third, in the process of developing as a moral agent, each individual goes through “a process of learning, making mistakes, correcting those mistakes and so moving towards the achievement of excellence”


(MacIntyre 1998, 140). In this process, practical wisdom (phronesis) is essential. It synergizes everyday moral judgments and activities with moral rules. Phronesis allows the affective, motivational, and cognitive processes to become an integrated whole. To avoid adherence to mere rules, the art of good deliberation connects everyday experiences, ethical theories, and ethical concepts. The fourth and final point demands more elaboration because it brings us to the core of the meaning of moral agency. For MacIntyre, one’s inability to integrate life experience (narrative/social context) and moral reasoning (theory) results in the development of bad character, which is a form of intellectual blindness on moral questions (MacIntyre 1998, 142).11 It is the failure to process adequately the “resources for judgment” that inform one’s moral decisions and provide reasons for action. “The marks of someone who develops bad character … [are that] she or he becomes progressively less and less able to understand what it is that she or he has mislearned and how it was that she or he fell into error” (MacIntyre 1998, 142). This is a crucial point in my critique of moral bioenhancement. It allows one to differentiate between character traits, understood as ways of performing certain activities, and having character, understood as a fundamental moral qualification of moral identity (Hauerwas 1981). Traits refer to various terms that describe different forms of behavior that do not necessarily encompass the moral significance of character in moral philosophy. For instance, a person might demonstrate diligence in his or her work but not necessarily exhibit the same trait in all his or her activities. On the other hand, having character describes a person’s moral strength to establish a set of behaviors deemed adequate in projected circumstances. It qualifies one’s moral agency and presupposes one’s capacity for self-determination.
Agency (reasons, motives, intentions) and action constitute the two elements that define having character. Hence, the question is whether “moral bioenhancement” results in enhancing character traits or in shaping one’s character (i.e., having character). Does bioenhancement as the manipulation of moral emotions contribute to a better control of human behavior (qua moral emotions) or to better moral behavior (moral behavior in the sense of a systematic reflection on the good, the right, and the just to provide guidance in making decisions)? The former appears to stress a form of outside constraint that influences one’s behavior, whereas the latter emphasizes an internal motive based on reasons for action. What is


paradoxical in the discourse of strong proponents of moral bioenhancement is the import of an aretaic conception of morality. I do not question the choice of the aforementioned aretaic categories. The issue concerns the nature, or content, of these virtues. A closer look at their moral framework reveals a misconceptualization of moral agency (i.e., a disrupted self), for it is unable to generate a content-full moral framework that qualifies the virtues. The moral bioenhancement agenda envisions a day in which individual moral capacities will be enhanced and controlled but says nothing about the nature of that morality, whether individual or social. From my analysis, I conclude that moral bioenhancement is unlikely to enhance people morally in the true meaning of the word. The development of neurotechnologies to alter and manipulate behavior will allow one to control moral emotions but not to generate any content for moral reasons for action. Without a systematic reflection on the nature of the good, the right, and the just, one would end up, using MacIntyre’s language, with bad character because of intellectual blindness. Moral agency requires understanding and the formation of right moral emotions. Moral emotions and moral reasoning constitute two inseparable elements in moral judgments. The latter provides an evaluative mechanism to assess whether moral emotions justify a particular behavioral response to a moral dilemma. The hope of controlling human moral emotions is insufficient for the formation of virtuous people. Moral agents are not engineered but trained, through the development of a vision of the good life and an understanding of human flourishing. The use of neurotechnologies for alleged moral bioenhancement leads to the disruption of the (moral) self, not to its development. The above analysis reveals, on MacIntyre’s account, that bad character is a question of intellectual blindness and a failure to develop the right moral emotions that lead to good actions.
In contrast, proponents of moral bioenhancement argue that bad character can be pathologized, because human behavior is a natural phenomenon subject to neurobiological interventions. This shift in our understanding of morality needs further explanation.


4.5 The Pathologizing of Human Behavior

4.5.1 The Science of Morality

Earlier in this chapter, I referred to Trueman’s characterization of the modern individual as a “psychologized self,” with a focus on the role of inner psychological convictions in the construction of one’s self, including moral identity. Since the psychological states that lead to behavior are based in neurobiology, it is not surprising to see proponents of moral bioenhancement conceptualize misconduct in terms of pathology, that is, the lack of some capacity. While Persson and Savulescu do not employ medical language to describe misconduct, they nonetheless pathologize human behavior. They see its cause as, among other things, a deficiency in human biology (genetics and neurobiology) that can be remedied through biotechnological interventions. This approach to explicating misconduct is not new. The philosopher Bertrand Russell already noted in the mid-twentieth century that “while upheavals and suffering have hitherto been the lot of man, we can now see, however dimly and uncertainly, a possible future in which poverty and war will have been overcome, and fear, where it survives, will have become pathological. The road, I fear, is long, but that is no reason for losing sight of the ultimate hope” (Russell 1953, 113). Moral bioenhancement reflects this philosophical posture on morality, whereby human behaviors leading to societal issues such as injustice, violence, and fear are pathologized rather than retaining their full moral dimensions. The human brain is unfit for the future, and therefore biological interventions are warranted. Traditional means of moral education are insufficient, if not inadequate, because “it is quite hard to internalize moral doctrines to the degree that they determine our behavior” (Persson and Savulescu 2012, 106). They consider “moral doctrines” to lack the motivational weight to ensure that people will act morally.
The role of moral education is restricted to promoting knowledge of the moral good, but such knowledge is not sufficient for moral betterment. To be morally good, an individual must be strongly motivated to act morally in order to overpower selfishness, biases, lack of empathy, and so on (Persson and Savulescu 2012, 116–17). To this end, recourse to biotechnologies is, in their estimation, a promising (and perhaps the only) approach.


Considering that human behavior has a genetic and neurobiological basis, it would be possible, in principle, to alter people’s moral conduct through genetic manipulation or drug treatment (Persson and Savulescu 2012, 107). Douglas operates under the same set of assumptions. He argues that it is sometimes permissible to directly manipulate emotions, through pharmaceutical agents for instance, if the interventions result in “morally better motives or conduct” (Douglas 2014, 76). He is a proponent of what he calls “emotional moral enhancements,” which consist of interventions that “will expectably leave an individual with more moral (viz., morally better) motives or behavior than she would otherwise have had … ‘noncognitive moral enhancement’ [refers] to moral enhancement achieved through (a) modulating emotions, and (b) doing so directly, that is, not by improving (viz., increasing the accuracy of) cognition” (Douglas 2013, 162). David DeGrazia likewise recognizes the primacy of emotions in the question of moral bioenhancement, especially as it relates to moral motivation, that is, better motivation to act upon what one judges to be right. He makes a distinction between motivational improvement and improved insight (DeGrazia 2014). The latter he deems highly cognitive; it may occur through cognitive enhancement, that is, the strengthening of the intellectual capabilities necessary to make good choices and act wisely. The former, motivational improvement, is affective, that is, it relates to emotions. For DeGrazia, this type of improvement is more closely related to moral improvement than is improved insight. Persson and Savulescu go some way toward recognizing human biological and moral diversity. On their account, many individuals have deficient moral acumen and motivation. They suffer from weakness of will and require the strengthening of their moral motivation.
In their view,

clearly the problem of weakness of will is a problem of motivation: it occurs because we are not sufficiently motivated to do what we are convinced that we ought to do, for instance, we do not feel sufficient sympathy for the global poor to aid them when we come up against temptations to satisfy our self-regarding desires. To enhance the capacity to feel such sympathy would be to enhance the probability that we do what we believe that we ought. This is an instance of moral enhancement, according to our view. (Persson and Savulescu 2016, 264)


There is certainly nothing wrong with promoting the moral betterment of the human species. In addition, they are correct in their assessment that, in terms of human history and techno-scientific development, we are at a stage of great ecological, social, and geopolitical uncertainty. The elaboration of strategies to ensure a secure and environmentally friendly world for future generations constitutes a moral imperative that should concern not only a select group of individuals with expertise in human behavior and ethics but also the public in general. The problem with their rhetoric is that it does not reflect the realities of human moral psychology and moral philosophy, which they misconstrue in three different ways, as I will outline shortly. Before I do so, I will briefly provide a deeper analysis of the concept of moral motivation.

4.5.2 Moral Motivation

A scientific understanding of morality (i.e., the science of morality) is at the core of Persson and Savulescu’s project. Morality, if it is to be subject to biotechnological manipulation, must be reducible to a biological phenomenon in the brain. This perspective has many proponents in disciplines like philosophy, psychology, and neuroscience, as already noted (Churchland 2012; Gazzaniga 2006; Redish 2013), but also its detractors. In their recent book, Science and the Good: The Tragic Quest for the Foundations of Morality (2018), James Davison Hunter and Paul Nedelisky propose a three-level framework for understanding how science could illuminate questions surrounding the nature of morality, its content, and its neurobiological basis. In “Level One,” scientific results “would provide specific moral commands or claims about what is genuinely valuable …” and would be supported by empirical evidence of what is deemed good, bad, right, or wrong and how people should live their lives (Hunter and Nedelisky 2018, 100). Level One would be considered the Holy Grail of scientific attainment because moral doctrines would be scientifically justified and therefore undisputed. Few scientists venture to claim that science will eventually provide Level One findings, with one notable exception: Sam Harris thinks that, just as there are right and wrong answers in physics, the maturing sciences of mind will eventually provide


scientific evidence for Level One findings (Harris 2010). He does not suggest that science provides “an evolutionary or neurobiological account of what people do in the name of ‘morality.’” He argues that “science can, in principle, help us understand what we should do and should want—and, therefore, what other people should do and should want in order to live the best lives possible…there are right and wrong answers to moral questions, just as there are right and wrong answers to questions of physics, and such answers may one day fall within reach of the maturing sciences of mind” (Harris 2010, 28). Owen Flanagan likewise argues that answering normative questions requires a comprehension of the nature, causes, and constituents of human flourishing. “Such normative inquiry,” he writes, “has very precise analogies in sciences such as engineering and botany. … Engineering exists by combining knowledge from physics, material science, and so on. … Botany and agriculture science work the same way. … Same with eudaimonics” (Flanagan 2009, 124). In “Level Two,” the science of morality would not be able to demonstrate empirically why some moral principles or moral doctrines are justifiable but would provide evidence supporting or refuting particular moral claims or theories (Hunter and Nedelisky 2018, 100). In other words, scientific evidence would provide the ground to reject some moral theories as false but would not be able to demonstrate why some moral claims are true or false. In “Level Three,” which is where most current work in the neuroscience of ethics fits, “findings would provide scientifically based descriptions of, say, the origins of morality, or the specific way our capacity for moral judgment is physically embodied in our neural architecture, or whether human beings tend to behave in ways we consider moral” (Hunter and Nedelisky 2018, 100).
In Level Three, findings do not provide insights concerning the content of morality but rather concern the processes behind the capacities of human beings to make moral judgments and act accordingly. In light of this three-level framework, the subsequent issue to examine is whether the science of morality has anything to contribute to moral formation beyond neurobiological interventions aimed at the alteration of mental states and the control of behavior. Scientific inquiry is certainly an important element in the development of the field of moral psychology and in our understanding of how the brain develops and of the


neuroanatomy of morality (Pascual et al. 2013). However, it would be a mistake to marginalize the role of the disciplines that inform one’s vision of human flourishing, the good life, or notions of the good and the right. An uncritical acceptance of an exclusively naturalistic approach to morality, supported by the “science of morality,” provides a truncated explanation of human moral agency and the nature of moral truths.12 Ultimately it could lead to a kind of moral nihilism. In their analysis of the current state of affairs of the “science of morality,” Hunter and Nedelisky conclude that

[t]oday’s moral scientists no longer look to science to discover moral truths, for they believe there is nothing there to discover. As they see it, there are no such things as prescriptive moral or ethical norms; there are no moral “oughts” or obligations; there is no ethical good, bad, or objective value of any kind. Their view is, ironically, in its net effect, a kind of moral or ethical nihilism. (Hunter and Nedelisky 2018, 21)

The increasing acceptance of moral bioenhancement as a means to address social issues and human concerns should not come as a surprise. It is the logical outcome of a long line of thought in the history of moral philosophy intersecting with science that, unfortunately, led to the marginalization of moral philosophy and the emergence of moral psychology as the main source of explanation for the phenomenon we call morality. This process has resulted in reducing ethics to “a social technology” in which particular brain functions are used inferentially to justify ethical novelties (Kitcher 2011, 262), as reflected in moral bioenhancement. In contrast to the truncated account of morality delineated above, I now turn my attention to providing a fuller picture of human moral psychology and assessing how proponents of moral bioenhancement conceptualize it. First, it is important to stress that the presence of a desire to act morally is not sufficient for one’s acting upon that very desire. The moral capacity (affective dimension) to bring about a particular outcome can be considered moral only insofar as moral content (cognitive aspect) provides reasons for action. The synergy between affective and cognitive processes leads to moral motivation. Furthermore, moral emotions can be modified or manipulated by various means such


as the use of substances or even ideologies, which can lead to the alteration of moral judgments. To ensure a continuous shaping of these moral emotions, an epistemic framework is necessary to supply their moral justification. This is particularly important with regard to radical ideologies, since they are most often based on justifications of racism, sexism, xenophobia, and so on, which create strong emotional responses toward particular groups of individuals based on race, gender, or country of origin. Without a justification (moral content), it becomes difficult to delineate when emotional uproar is defensible or even necessary to communicate moral indignation. Conversely, a cold moral detachment and cultural rationalization when witnessing, for instance, what we would deem the abuse of a child is likewise inadequate because, as we know too well, such explanatory frames can always justify, or purport to warrant, the most atrocious deeds. In sum, moral motivation includes affective and cognitive dimensions of human psychology that interact in the shaping of moral emotions and in the determination of the set of beliefs, values, and ideas about the nature of the good that moral agents develop through their life experience.

4.5.3 Three Main Criticisms

The conceptualization of morality by proponents of moral bioenhancement focuses mostly on the control or alteration of moral emotions (to address the problem of weakness of will, on Persson and Savulescu’s account), that is, on the emotional dimensions of moral motivation. There are strong reasons to question this perspective in light of current brain research and scholarship on human moral psychology. Morality is a process whereby moral agency requires a synergy between cognition and the formation of right moral emotions that shape a person’s responses to moral dilemmas. From this standpoint, Persson and Savulescu’s account of moral bioenhancement is subject to at least three main criticisms: (a) the inability to provide a precise threshold of acceptable motivation for people to be considered fully moral; (b) the failure to establish an explanation of the ultimate end of motivation other than external dimensions of human existence (environment, geopolitical threats), as opposed to internal dimensions of human existence (human flourishing, moral

4  Neurobiology, Morality, and Agency 

development, etc.); and (c) the lack of a substantive account of the means to achieve an augmented sense of justice and altruism, which Persson and Savulescu deem essential to our being wholly moral. Each critique is developed separately in what follows.

First, Persson and Savulescu are unable to indicate what threshold or level of motivation is adequate for a person to be wholly moral, other than stating that some individuals ought to undergo a treatment that increases their moral motivation. Their claim is that "those of us" who are less morally motivated should become as strong as "those of us" who are by nature strongly motivated to act morally (Persson and Savulescu 2012, 112–13). It is not clear what less morally motivated means or who sets the standards to decide. A member of PETA might be highly motivated to rescue abandoned dogs, while other people might be more inclined to coach a youth soccer team and make a difference in the lives of children. The dual dimension of moral judgments requires an equal development of emotional (emotional intelligence) and cognitive (cognitive intelligence) capacities for adequate moral growth. Some might view moral courage as a character trait necessary for the betterment of our society, but what level of courage is satisfactory and necessary for moral progress? A lack of courage can be seen in some instances as cowardice, while too much courage can be a sign of foolishness. Indeed, courage may no longer be in play at all, having given way to recklessness. Depending on which dimension bearing on morality is enhanced, individuals might give excess weight to emotions in moral judgments, leading to a lack of wisdom in acting courageously, or indulge in too much rationalization before acting, which would impede their acting in a timely fashion.
Second, properly addressing the issue of adequacy in levels of motivation requires answering the following question: motivation toward what? As Persson and Savulescu adamantly contend, the aim of moral bioenhancement is the survival of the human species (Persson and Savulescu 2008, 2012, 2013). This purpose diverges markedly from what enhancement proponents in general maintain, namely, that mere survival is neither required nor suitably elevated to serve as human beings' governing aspiration at our current developmental stage. Persson and Savulescu, and indeed any proponents, must provide an account of the

F. Jotterand

values, endeavors, and goals that would guide and justify bioenhancement beyond mere expediency and social engineering. Their approach to morality rests on two main pillars: (1) the phenomenon we call morality is the outcome of biological forces for survival (i.e., ethical naturalism) and (2) the Ultimate Harm, the idea that the power humans hold through science and technology, if abused or misused, could make "life forever impossible on this planet" (Persson and Savulescu 2012, 46). The Ultimate Harm, however, fails to provide the argumentative resources necessary to deliver the robust framework Persson and Savulescu hope for to tackle these matters head on. Rather, they assume that moral bioenhancement should accompany the movement of current democracies "from a social liberalism, which acknowledges the need for state interference to neutralize the glaring welfare inequalities within a society, to a global(ly responsible) liberalism, which extends welfare concerns globally and into the remote future" (Persson and Savulescu 2012, 102). This quote demonstrates that the "moral grounding" of their argument is in fact based on specific political values rather than moral considerations. The focus of my analysis is limited to the nature of motivation and how Persson and Savulescu mischaracterize it, not to the political values that should motivate us to address what could cause Ultimate Harm. Motivation is a matter of having certain dispositions, based on reasons to act, that justify a particular desire. Such desires provide a sense of pleasure, fulfill the conditions of a good action, or simply reflect one's behavioral disposition to act in a certain way. Without a particular (or broader, albeit defined and defended) end for moral action determined in advance (i.e., moral identity), increased motivation is blind.
For dispositions are always established according to the moral agent's comprehension of what it means to be moral and of the meaning of the good life, that is, human flourishing (Jotterand 2011; Jotterand and Levin 2017).

Finally, Persson and Savulescu contend that altruism and a sense of justice are two indispensable attributes for an individual to be fully moral (Persson and Savulescu 2012, 108–9). They doubt the idea advanced by Steven Pinker that "enhanced powers of reason" would bring about the moral betterment of our society. On Pinker's account, "[t]he kind of


reasoning relevant to moral progress" is precisely that recorded by the Flynn Effect involving IQ: "the cultivation of abstract reasoning…particularly [the] ability to set aside immediate experience, detach [our]selves from a parochial vantage point, and think in abstract terms" (Pinker 2011, 656, 660). The notion that the augmentation of reasoning capabilities will improve the human condition has been further developed by Pinker in his book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (Pinker 2018). In sharp contrast to Persson and Savulescu and their apocalyptic account, Pinker offers a positive outlook on the future of human civilization, citing data that demonstrate progress in areas such as health, prosperity, peace, knowledge, quality of life, and equal rights. Persson and Savulescu reject Pinker's account on the basis that reason is unable to expand a "circle of concern … without the assistance of the moral dispositions of altruism and a sense of justice" (Persson and Savulescu 2012, 107). Rather than relying on the powers of reason to address the pressing issues humans face for their survival, Persson and Savulescu advocate a vigorously enhanced "feel[ing of ] sympathy and a sense of justice" concerning people and the environment, alongside a "reduction of both the temporal bias and the commitment to a causally-based conception of responsibility" (Persson and Savulescu 2012, 108–9). Furthermore, in their view, augmented justice (and presumably altruism) will be achieved only if liberal democracies "inculcate norms that are conducive to the survival and prosperity of a world community of which their societies are integral parts" (Persson and Savulescu 2012, 102). In the abstract, their agenda sounds wonderful, but in their defense of moral bioenhancement they shift their argument from enhancing values like justice to altering disruptive tendencies.
So, while survival takes precedence over every other dimension of the human condition in their account, Persson and Savulescu mention prosperity as the only further purpose for implementing moral bioenhancement on a large scale. This is a rather thin justification. What is more, the inherently biopolitical dimension of neuroscience and neuroethics (Henry and Plemmons 2012; Jotterand and Ienca 2017) should warn us of the potential imposition, without any justification, of a particular philosophical interpretation of what it means to be fully moral and of what a just society would look like.


Notes

1. For a historical overview of the shift from moral philosophy to moral psychology, see Hunter and Nedelisky (2018), especially Part II, chapters 2–4.
2. See, for instance, the human insula. According to Woodward, "in nonhuman mammals (e.g., rats), this structure is involved, among other things, in assessment of taste and food intake that is potentially harmful (generating literal 'disgust' reactions), as well as the monitoring of interior bodily states. In human beings, this structure is involved in (it has been co-opted or reused for) a wide variety of other tasks, including empathetic identification, decisions regarding charitable donations, affective response to pain in self and in others, reactions to perceived unfairness in economic interactions, and assessment of risk" (Woodward 2016, 89).
3. I am indebted to John Sadler, who drew the distinction between moral content and moral capacity to understand the relationship between psychopathology and political extremism in a presentation at the 20th Annual Meeting of the Association for the Advancement of Philosophy and Psychiatry, May 4, 2008.
4. "Aristotelian phronesis is an excellence of dianoia that is not identical with sophia; nor is it epistēmē, or even technē. It is relative to praxis, and its object is particular rather than immutable, perishable and subject to time rather than eternal" (Lories 1998, 110).
5. "It should be emphasized, with Aubenque, that this practical disposition 'concerns the rule of choice' and not choice itself (proairesis), which belongs to ethical virtue, for 'what is at issue here is not the rightness of the action but the soundness of the criterion; this is why prudence is a practical disposition accompanied by true rule.' Restricted to the goods and evils relative to human beings, it is nevertheless distinct from the other intellectual virtue, sophia." Lories (1998, 110) quotes Pierre Aubenque (1976, 34).
6. Lories argues that the phronimos is inherently endowed with gnome: "The phronimos is inseparably endowed with gnōmē and sungnōmē, with clear-sightedness and indulgence; he sees clearly into a situation, but his gaze is favorably disposed toward the individuals concerned; he is open-minded, understanding, and thereby equitable" (Lories 1998, 137–8).
7. Edith Wyschogrod recognizes within the context of postmodernism six impulses or tendencies that bear upon the sphere of moral action: differentiality, double coding, eclecticism, alterity, empowerment (and its opposite), and materialism. For further analysis see Saints and Postmodernism (Wyschogrod 1990, xvi–xxii).
8. Labarrière characterizes postmodernity as follows: "'Postmodernity' is characterized by the generalization of the following commonplaces: the disappearance of the subject, the crisis of practical and ethical reason, the retreat of utopias, and the fragmentation of cosmologies, all combined with an exacerbation of geopolitical tensions tied to problems of development and, on the religious level, a disavowal of institutions and the concomitant explosion of 'spiritual' quests" (Labarrière 1996, 142).
9. Taylor defines frameworks as follows: "To articulate a framework is to explicate what makes sense of our moral responses. That is, when we try to spell out what it is that we presuppose when we judge that a certain form of life is truly worthwhile, or place our dignity in a certain achievement or status, or define our moral obligations in a certain manner, we find ourselves articulating inter alia what I have been calling here 'frameworks'" (Taylor 1989, 26).
10. According to Taylor, the lack of orientation of the moral self reflects a scientific posture with deep ramifications for fulfillment and meaning: "…behind the more-or-less question of mastery achieved lies an absolute question about basic orientation: the disengaged agent has taken a once-for-all stance in favour of objectification; he has broken with religion, superstition, resisted the blandishments of those pleasing and flattering world-views which hide the austere reality of the human condition in a disenchanted universe. He has taken up the scientific attitude. The direction of his life is set, however little mastery he may have actually achieved. And this is a source of deep satisfaction and pride to him" (Taylor 1989, 46).
11. MacIntyre equates bad character with the failure to understand "what it is that she or he has mislearned and how it was that she or he fell into error." In his view, "We need … an extended, practically usable answer to the question 'What is my good?' and 'How is it to be achieved?', which will both direct us in present and future action and also evaluate and explain past action. Such an answer will have to supply not only an account of goods and virtues, but also of rules, and of how goods, virtues and rules relate to one another" (MacIntyre 1998, 142).
12. See also Jesse Prinz, who is critical of an excessive confidence in neuroimaging studies: "That confidence … rests on undue faith in what brain scans
can reveal, independent of other sources of evidence, including both behavioral studies and theoretical considerations. When taken on their own, extant neuroimaging studies leave classic debates unsettled, and require other evidence for interpretation. This suggests that, at present, the idea that neuroscience can settle psychological and philosophical debates about moral judgments may have things backward. Instead, psychological and philosophical debates about moral judgements may be needed to settle the meaning of brain scans" (Prinz 2016, 45).

References

Allison, Truett, Aina Puce, and Gregory McCarthy. 2000. Social Perception from Visual Cues: Role of the STS Region. Trends in Cognitive Sciences 4 (7): 267–278. https://doi.org/10.1016/S1364-6613(00)01501-1.
Annas, Julia. 1993. The Morality of Happiness. New York: Oxford University Press.
Aubenque, Pierre. 1976. La Prudence chez Aristote. Paris: Presses Universitaires de France.
Baudrillard, Jean. 1994. The Illusion of the End. Stanford, CA: Stanford University Press.
Bauer, Walter. 2001. A Greek-English Lexicon of the New Testament and Other Early Christian Literature, ed. Frederick William Danker, 3rd ed. Chicago: University of Chicago Press.
Churchland, Patricia S. 2012. Braintrust: What Neuroscience Tells Us about Morality. Reprint edition. Princeton, NJ: Princeton University Press.
Decety, Jean, Kalina J. Michalska, and Katherine D. Kinzler. 2012. The Contribution of Emotion and Cognition to Moral Sensitivity: A Neurodevelopmental Study. Cerebral Cortex 22 (1): 209–220. https://doi.org/10.1093/cercor/bhr111.
DeGrazia, David. 2014. Moral Enhancement, Freedom, and What We (Should) Value in Moral Behaviour. Journal of Medical Ethics 40 (6): 361–368. https://doi.org/10.1136/medethics-2012-101157.
Douglas, Thomas. 2013. Moral Enhancement via Direct Emotion Modulation: A Reply to John Harris. Bioethics 27 (3): 160–168. https://doi.org/10.1111/j.1467-8519.2011.01919.x.
———. 2014. Enhancing Moral Conformity and Enhancing Moral Worth. Neuroethics 7: 75–91. https://doi.org/10.1007/s12152-013-9183-y.


Farrow, T.F., Y. Zheng, I.D. Wilkinson, S.A. Spence, J.F. Deakin, N. Tarrier, P.D. Griffiths, and P.W. Woodruff. 2001. Investigating the Functional Anatomy of Empathy and Forgiveness. Neuroreport 12 (11): 2433–2438.
Flanagan, Owen. 2009. The Really Hard Problem: Meaning in a Material World. Reprint edition. Cambridge, MA; London: A Bradford Book.
Flanagan, Owen, Hagop Sarkissian, and David Wong. 2007. Naturalizing Ethics. In Moral Psychology, Volume 1, The Evolution of Morality: Adaptations and Innateness, ed. Walter Sinnott-Armstrong and Christian B. Miller, 1st ed., 1–25. Cambridge, MA: The MIT Press.
Funk, Chadd M., and Michael S. Gazzaniga. 2009. The Functional Brain Architecture of Human Morality. Current Opinion in Neurobiology 19 (6): 678–681. https://doi.org/10.1016/j.conb.2009.09.011.
Fusar-Poli, Paolo, Anna Placentino, Francesco Carletti, Paola Landi, Paul Allen, Simon Surguladze, Francesco Benedetti, et al. 2009. Functional Atlas of Emotional Faces Processing: A Voxel-Based Meta-Analysis of 105 Functional Magnetic Resonance Imaging Studies. Journal of Psychiatry & Neuroscience 34 (6): 418–432.
Gauthier, René Antoine, Jean Yves Jolif, and Aristotle. 1970. L'éthique à Nicomaque. Louvain; Paris: Publications universitaires; Béatrice-Nauwelaerts.
Gazzaniga, Michael. 2006. The Ethical Brain: The Science of Our Moral Dilemmas. New York: Harper-Perennial.
Greene, Joshua D., Leigh E. Nystrom, Andrew D. Engell, John M. Darley, and Jonathan D. Cohen. 2004. The Neural Bases of Cognitive Conflict and Control in Moral Judgment. Neuron 44 (2): 389–400. https://doi.org/10.1016/j.neuron.2004.09.027.
Habermas, Juergen. 1985. Modernity: An Incomplete Project. In Postmodern Culture, ed. Hal Foster, 3–15. London: Pluto Press.
Haidt, Jonathan. 2001. The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Psychological Review 108 (4): 814–834. https://doi.org/10.1037/0033-295X.108.4.814.
———. 2003. The Moral Emotions. In Handbook of Affective Sciences, ed. R.J. Davidson, K.R. Scherer, and H.H. Goldsmith. Oxford: Oxford University Press.
Harada, Tokiko, Shoji Itakura, Xu Fen, Kang Lee, Satoru Nakashita, Daisuke N. Saito, and Norihiro Sadato. 2009. Neural Correlates of the Judgment of Lying: A Functional Magnetic Resonance Imaging Study. Neuroscience Research 63 (1): 24–34. https://doi.org/10.1016/j.neures.2008.09.010.


Harenski, Carla L., Olga Antonenko, Matthew S. Shane, and Kent A. Kiehl. 2008. Gender Differences in Neural Mechanisms Underlying Moral Sensitivity. Social Cognitive and Affective Neuroscience 3 (4): 313–321. https://doi.org/10.1093/scan/nsn026.
Harris, Sam. 2010. The Moral Landscape: How Science Can Determine Human Values. Reprint edition. New York: Free Press.
Hart, John, Jr. 2015. The Neurobiology of Cognition and Behavior. Oxford; New York: Oxford University Press.
Harvey, David. 1990. The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change. Oxford; Cambridge, MA: Wiley-Blackwell.
Hauerwas, Stanley. 1981. Vision and Virtue: Essays in Christian Ethical Reflection. Notre Dame, IN: University of Notre Dame Press.
———. 1994. Character and the Christian Life: A Study in Theological Ethics. 1st ed. Notre Dame: University of Notre Dame Press.
Hauser, Marc D. 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. New York: HarperCollins.
Henry, Stuart, and Dena Plemmons. 2012. Neuroscience, Neuropolitics and Neuroethics: The Complex Case of Crime, Deception and FMRI. Science and Engineering Ethics 18 (3): 573–591. https://doi.org/10.1007/s11948-012-9393-4.
Hsu, Ming, Cédric Anen, and Steven R. Quartz. 2008. The Right and the Good: Distributive Justice and Neural Encoding of Equity and Efficiency. Science 320 (5879): 1092–1095. https://doi.org/10.1126/science.1153651.
Hunter, James Davison, and Paul Nedelisky. 2018. Science and the Good: The Tragic Quest for the Foundations of Morality. New Haven and London: Yale University Press.
Immordino-Yang, Mary Helen, and Vanessa Singh. 2013. Hippocampal Contributions to the Processing of Social Emotions. Human Brain Mapping 34 (4): 945–955. https://doi.org/10.1002/hbm.21485.
Jackson, Philip L., Andrew N. Meltzoff, and Jean Decety. 2005. How Do We Perceive the Pain of Others? A Window into the Neural Processes Involved in Empathy. NeuroImage 24 (3): 771–779. https://doi.org/10.1016/j.neuroimage.2004.09.006.


Jotterand, Fabrice. 2011. 'Virtue Engineering' and Moral Agency: Will Post-Humans Still Need the Virtues? AJOB Neuroscience 2 (4): 3–9. https://doi.org/10.1080/21507740.2011.611124.
———. 2016. Moral Enhancement, Neuroessentialism, and Moral Content. In Cognitive Enhancement: Ethical and Policy Implications in International Perspectives, ed. Fabrice Jotterand and Veljko Dubljević. Oxford; New York: Oxford University Press.
Jotterand, Fabrice, and Marcello Ienca. 2017. The Biopolitics of Neuroethics. In Debates About Neuroethics, Advances in Neuroethics, 247–261. Cham: Springer. https://doi.org/10.1007/978-3-319-54651-3_17.
Jotterand, Fabrice, and Susan B. Levin. 2017. Moral Deficits, Moral Motivation and the Feasibility of Moral Bioenhancement. Topoi April: 1–9. https://doi.org/10.1007/s11245-017-9472-x.
Kandel, Eric R. 2018. The Disordered Mind: What Unusual Brains Tell Us About Ourselves. New York: Farrar, Straus and Giroux.
Kiehl, Kent A. 2006. A Cognitive Neuroscience Perspective on Psychopathy: Evidence for Paralimbic System Dysfunction. Psychiatry Research 142 (2–3): 107–128. https://doi.org/10.1016/j.psychres.2005.09.013.
Kitcher, Philip. 2011. The Ethical Project. Cambridge, MA: Harvard University Press.
Kristjánsson, Kristján. 2015. Phronesis as an Ideal in Professional Medical Ethics: Some Preliminary Positionings and Problematics. Theoretical Medicine and Bioethics 36 (5): 299–320. https://doi.org/10.1007/s11017-015-9338-4.
Labarrière, P.-J. 1996. Postmodernité et déclin des absolus. In La théologie en postmodernité, ed. Pierre Gisel and Patrick Evrard. Genève: Labor et Fides.
Lories, D. 1998. Le Sens Commun et Le Jugement Du Phronimos: Aristote et Les Stoiciens. Louvain-La-Neuve: Peeters Publishers.
MacIntyre, Alasdair. 1984a. After Virtue: A Study in Moral Theory. 2nd ed. Notre Dame, IN: University of Notre Dame Press.
———. 1984b. Does Applied Ethics Rest on a Mistake? The Monist 67 (4): 498–513.
———. 1988. Whose Justice? Which Rationality? 1st ed. Notre Dame, IN: University of Notre Dame Press.
———. 1998. Plain Persons and Moral Philosophy: Rules, Virtues and Goods. In The MacIntyre Reader, ed. K. Knight, 136–154. Cambridge: Polity Press.


Mele, A.R. 1999. Motivation. In The Cambridge Dictionary of Philosophy, ed. R. Audi, 591–592. Cambridge University Press.
Mendez, Mario F. 2006. What Frontotemporal Dementia Reveals about the Neurobiological Basis of Morality. Medical Hypotheses 67 (2): 411–418. https://doi.org/10.1016/j.mehy.2006.01.048.
Mitchell, Joshua. 2020. American Awakening: Identity Politics and Other Afflictions of Our Time. New York City: Encounter Books.
Moll, J., P.J. Eslinger, and R. Oliveira-Souza. 2001. Frontopolar and Anterior Temporal Cortex Activation in a Moral Judgment Task: Preliminary Functional MRI Results in Normal Subjects. Arquivos De Neuro-Psiquiatria 59 (3-B): 657–664.
Moll, Jorge, Ricardo de Oliveira-Souza, Paul J. Eslinger, Ivanei E. Bramati, Janaína Mourão-Miranda, Pedro Angelo Andreiuolo, and Luiz Pessoa. 2002a. The Neural Correlates of Moral Sensitivity: A Functional Magnetic Resonance Imaging Investigation of Basic and Moral Emotions. The Journal of Neuroscience 22 (7): 2730–2736.
Moll, Jorge, Ricardo de Oliveira-Souza, and Paul Eslinger. 2003. Morals and the Human Brain: A Working Model. Neuroreport 14 (3): 299–305.
Moll, Jorge, Roland Zahn, Ricardo de Oliveira-Souza, Frank Krueger, and Jordan Grafman. 2005. The Neural Basis of Human Moral Cognition. Nature Reviews Neuroscience 6 (10): 799. https://doi.org/10.1038/nrn1768.
Moll, Jorge, Frank Krueger, Roland Zahn, Matteo Pardini, Ricardo de Oliveira-Souza, and Jordan Grafman. 2006. Human Fronto-Mesolimbic Networks Guide Decisions about Charitable Donation. Proceedings of the National Academy of Sciences of the United States of America 103 (42): 15623–15628. https://doi.org/10.1073/pnas.0604475103.
Moll, Jorge, Ricardo de Oliveira-Souza, Roland Zahn, and Jordan Grafman. 2008. The Cognitive Neuroscience of Moral Emotions. In Moral Psychology, Volume 3, The Neuroscience of Morality: Emotion, Brain Disorders, and Development, ed. Walter Sinnott-Armstrong. Cambridge, MA: The MIT Press.
O'Doherty, J., M.L. Kringelbach, E.T. Rolls, J. Hornak, and C. Andrews. 2001. Abstract Reward and Punishment Representations in the Human Orbitofrontal Cortex. Nature Neuroscience 4 (1): 95. https://doi.org/10.1038/82959.
Pascual, Leo, Paulo Rodrigues, and David Gallardo-Pujol. 2013. How Does Morality Work in the Brain? A Functional and Structural Perspective of Moral Behavior. Frontiers in Integrative Neuroscience 7 (Sept.). https://doi.org/10.3389/fnint.2013.00065.
Pellegrino, Edmund D., and David C. Thomasma. 1993. Virtues in Medical Practice. 1st ed. New York: Oxford University Press.
Persson, Ingmar, and Julian Savulescu. 2008. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy 25 (3): 162–177. https://doi.org/10.1111/j.1468-5930.2008.00410.x.
———. 2012. Unfit for the Future: The Need for Moral Enhancement. 1st ed. Oxford University Press.
———. 2013. Getting Moral Enhancement Right: The Desirability of Moral Bioenhancement. Bioethics 27 (3): 124–131. https://doi.org/10.1111/j.1467-8519.2011.01907.x.
———. 2016. Moral Bioenhancement, Freedom and Reason. Neuroethics 9 (3): 263–268. https://doi.org/10.1007/s12152-016-9268-5.
Pinker, Steven. 2011. The Better Angels of Our Nature. New York: Viking/Penguin Group.
———. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Illustrated edition. New York, NY: Viking.
Prehn, Kristin, Isabell Wartenburger, Katja Mériau, Christina Scheibe, Oliver R. Goodenough, Arno Villringer, Elke van der Meer, and Hauke R. Heekeren. 2008. Individual Differences in Moral Judgment Competence Influence Neural Correlates of Socio-Normative Judgments. Social Cognitive and Affective Neuroscience 3 (1): 33–46. https://doi.org/10.1093/scan/nsm037.
Prinz, Jesse. 2016. Sentimentalism and the Moral Brain. In Moral Brains, ed. S. Matthew Liao, 45–73. Oxford University Press.
Redish, A. David. 2013. The Mind within the Brain: How We Make Decisions and How Those Decisions Go Wrong. Oxford University Press.
Russell, Bertrand. 1953. The Impact of Science on Society. 1st ed. New York: Simon & Schuster.
Schaich Borg, Jana, Catherine Hynes, John Van Horn, Scott Grafton, and Walter Sinnott-Armstrong. 2006. Consequences, Action, and Intention as Factors in Moral Judgments: An FMRI Investigation. Journal of Cognitive Neuroscience 18 (5): 803–817. https://doi.org/10.1162/jocn.2006.18.5.803.


Schneewind, Jerome B. 1997. The Invention of Autonomy: A History of Modern Moral Philosophy. Cambridge; New York, NY: Cambridge University Press.
Sestieri, Carlo, Maurizio Corbetta, Gian Luca Romani, and Gordon L. Shulman. 2011. Episodic Memory Retrieval, Parietal Cortex, and the Default Mode Network: Functional and Topographic Analyses. The Journal of Neuroscience 31 (12): 4407. https://doi.org/10.1523/JNEUROSCI.3335-10.2011.
Shenhav, Amitai, and Joshua D. Greene. 2010. Moral Judgments Recruit Domain-General Valuation Mechanisms to Integrate Representations of Probability and Magnitude. Neuron 67 (4): 667–677. https://doi.org/10.1016/j.neuron.2010.07.020.
Sinnott-Armstrong, W. 2008. Framing Moral Intuitions. In Moral Psychology, Volume 2, The Cognitive Science of Morality: Intuition and Diversity, ed. W. Sinnott-Armstrong. Cambridge, MA: The MIT Press.
Taylor, Charles. 1989. Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.
Trueman, Carl R. 2020. The Rise and Triumph of the Modern Self: Cultural Amnesia, Expressive Individualism, and the Road to Sexual Revolution. Wheaton, IL: Crossway.
Tsetsenis, Theodoros, Xiao-Hong Ma, Luisa Lo Iacono, Sheryl G. Beck, and Cornelius Gross. 2007. Suppression of Conditioning to Ambiguous Cues by Pharmacogenetic Inhibition of the Dentate Gyrus. Nature Neuroscience 10 (7): 896–902. https://doi.org/10.1038/nn1919.
Völlm, Birgit A., Alexander N.W. Taylor, Paul Richardson, Rhiannon Corcoran, John Stirling, Shane McKie, John F.W. Deakin, and Rebecca Elliott. 2006. Neuronal Correlates of Theory of Mind and Empathy: A Functional Magnetic Resonance Imaging Study in a Nonverbal Task. NeuroImage 29 (1): 90–98. https://doi.org/10.1016/j.neuroimage.2005.07.022.
de Waal, Frans. 2013. The Bonobo and the Atheist: In Search of Humanism Among the Primates. W. W. Norton & Company.
Wicker, Bruno, Christian Keysers, Jane Plailly, Jean Pierre Royet, Vittorio Gallese, and Giacomo Rizzolatti. 2003. Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust. Neuron 40 (3): 655–664.
Woodward, J. 2016. Emotion versus Cognition in Moral Decision-Making. In Moral Brains: The Neuroscience of Morality, ed. S.M. Liao, 87–116. Oxford University Press.
Wyschogrod, Edith. 1990. Saints and Postmodernism: Revisioning Moral Philosophy. 1st ed. Chicago: University of Chicago Press.
Young, Liane, and James Dungan. 2012. Where in the Brain Is Morality? Everywhere and Maybe Nowhere. Social Neuroscience 7 (1): 1–10. https://doi.org/10.1080/17470919.2011.569146.
Young, Liane, and Michael Koenigs. 2007. Investigating Emotion in Moral Cognition: A Review of Evidence from Functional Neuroimaging and Neuropsychology. British Medical Bulletin 84 (1): 69–79. https://doi.org/10.1093/bmb/ldm031.
Young, Liane, and Rebecca Saxe. 2008. The Neural Basis of Belief Encoding and Integration in Moral Judgment. NeuroImage 40 (4): 1912–1920. https://doi.org/10.1016/j.neuroimage.2008.01.057.

5 Techno-Science, Politics, and the Common Good

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_5

5.1 Post-academic Science

The role played by politics and economics in contemporary scientific culture reflects a transition observed over recent decades. Nearly two hundred years ago, academic science emerged with its own social and epistemic norms and as "a specific mode of knowledge production" (Ziman 2002, 60). In the last few decades science has undergone a radical transformation, resulting in what Ziman calls post-academic science, which "performs a new social role and is regulated by a new ethos and a new philosophy of nature" (Ziman 2002, 60). This change in scientific culture became a source of concern for Ziman, who saw the objectivity of science being threatened. Already in the mid-1990s, he voiced his unease about the transition. He refers to a "cultural revolution" in which the traditional ethos of science gives way to post-academic science, which in his view may be so different from a sociological and philosophical perspective that it will result in a different type of knowledge (Ziman 1996a). Ziman considers this new paradigm potentially unstable and not grounded in sound intellectual principles. The most radical feature of post-academic science is its "unselfconscious pluralism," and its diversity
of conceptual frameworks as well as an unashamed acceptance of possible inconsistencies. As he puts it, "[t]his pragmatism will no longer bar academic science from hybridizing with knowledge and belief systems that do not share the same intellectual values or standards of 'good science'" (Ziman 1996a, 753). This hybridization of knowledge and its accompanying pluralism is "defiantly post-modern" (Ziman 2002, 210), unafraid of inconsistency and pursuing an agenda that contrasts with that of academic science: academic science aims at constructing a body of knowledge, whereas post-academic science, in its postmodern outlook, focuses on its deconstruction (Parusnikova 1992). To understand this shift, we must immerse ourselves in the tortuous world of postmodern thought, examining its nature and identifying how it bears on the metaphysical and epistemic assumptions of scientific knowledge. First, postmodernity is a complex intellectual movement reacting to the totalizing account of modernity, resulting in a fragmented model of explanation of reality. As a philosophical endeavor it tends toward subjectivist and relativistic notions of perception and a fragmentation of knowledge, including science (Pluckrose and Lindsay 2020). For instance, Steven Pinker, a staunch critic of postmodernism, recognizes that science has been deployed as a tool for political propaganda and oppression, but he also observes that "scientistic methods" (i.e., quantification and systematic chronology) are not inherently evil, as often presupposed by Critical Theory and postmodernist ideology. "Genocide and autocracy," he writes, "were ubiquitous in premodern times, and they decreased, not increased, as science and liberal Enlightenment values became increasingly influential after World War II" (Pinker 2018, 397).
Pinker also notes the current malaise in the humanities characterized by anti-intellectual trends reflected in our culture and the commercialization of universities. The damage to the disciplines encompassed by the humanities, whether self-inflicted or the result of outside forces such as postmodern ideology, reveals a rather pessimistic appraisal of their intellectual depth and relevance, as well as their ability to counter a destructive cultural movement. As Pinker poignantly puts it,


The humanities have yet to recover from the disaster of postmodernism, with its defiant obscurantism, self-refuting relativism, and suffocating political correctness. Many of its luminaries … are morose cultural pessimists who declared that modernity is odious, all statements are paradoxical, works of art are tools of oppression, liberal democracy is the same as fascism, and Western civilization is circling the drain. (Pinker 2018, 406)

Rejection of modernity, the treatment of language and art as means of oppression, critique of democracy and Western culture, and the rejection of objective truth (all knowledge is socially constructed) are key features of postmodernity.1 Since the 1980s, many scholars have noted that contemporary Western culture is characterized by various irreconcilable ideologies, whether in political philosophy, under the terminology of “mutually exclusive visions of the public good” (Hunter 1994, 15); in moral philosophy, using the language of “interminable arguments” (MacIntyre 1984a); in bioethics, describing the participants in moral debates as moral strangers (Engelhardt 1996); or in scientific discourses, that is, scientific knowledge as a kind of discourse which requires a particular language game (Lyotard and Jameson 1984).2 More recent works in political philosophy, ethics, and cultural-social studies have likewise examined the fragmentation of our society (Sandel 2020; Mitchell 2020; Levin 2020; Gutmann and Thompson 1998, 2014). What these scholarly works reveal is a fragmented world where different epistemologies and moral and political frameworks compete for ideological hegemony. Postmodernity can be described as an attempt to break with the meta-narrative provided by modernity, which supported an account of history framed by a particular notion of progress. This break produced a crisis within modernity (Harvey 1990). Whereas modernity was a rationalistic project promulgating intellectual emancipation from the Judeo-Christian heritage of Western culture and the establishment of a rationally justified common morality, postmodernity is an attempt to break with this agenda. Modernity has failed to provide a common morality and an epistemological rationality that can be shown to govern all areas of human activity, including science and technology.3 Postmodernity is partly an acknowledgment of this failure but, importantly, offers no alternative.
On the contrary, postmodernity started a process of deconstruction by means of a combination of capitalism and technology, which alters social bonds, in turn alienating people from “the old poles of attraction” (Lyotard and Jameson 1984). The current socio-political context in the United States epitomizes this trend. Powerful tech companies use social platforms to reshape society in science, culture, education, the media, and how we relate to one another. Since the past does not provide individuals with an anchor for the development of their own identity, the structure of the postmodern social fabric requires novelty and self-adjustment. The implication is that the endless demand for novelty alienates the present condition from the past, including its narratives, traditions, and practices. A narrative construed in such terms projects the past into the present in such a way that the transition from the “then” to the “now” is negated in order to make room for novelty. History becomes an ever-present actuality that denies its genealogy and development and does not provide a sense of belonging, meaning, unity, and purpose.

5.2 The Implications of Postmodernity for Science and Technology

The contrast already developed between modernity and postmodernity brings out the cardinal issues bearing on science and technology: whether there is unity (scientific explanations are reducible to a limited number of theories and methodologies that unify science) or disunity (the postmodern pluralistic model of explanation, i.e., post-academic science), and how scientific knowledge can be legitimated or justified. In discussions pertaining to postmodernity in the philosophy of science, the debate concerning models of scientific explanation uncovers a dichotomy between scientific realism and scientific relativism (Rouse 1991). On the one hand, various forms of scientific realism are characteristic of modernity, particularly as they attempt to reduce scientific explanations to a cluster of universal laws and authoritative principles. Scientific realism aims at providing a meta-discourse able to describe, explain, and categorize the nature of things in universal terms. In contrast, a postmodern account (i.e., scientific relativism) is suspicious of totalizing concepts such as truth and reality in scientific explanations. Instead, a postmodern account stresses the local context of scientific inquiry and recognizes the plurality of scientific discourse (Rouse 1991; Murphy 1990; Pluckrose and Lindsay 2020). The conditions for scientific knowledge are not limited to “scientific knowledge” per se but are rather challenged or complemented by other “narrative knowledge” (Lyotard 1984, 7). Ultimately, scientific knowledge on the postmodern account (i.e., post-academic science) cannot be understood in terms of absolute and objective norms; it is instead a social construct that does not aim at conceptual coherence and credibility but finds its legitimation in applications. This point is highlighted by Jean-Jacques Salomon, who remarks that the idea of the social responsibility of scientists and researchers is rarely contested under the conditions of the “new contract” of science and technology (i.e., the contract between society and the scientific community). He notes that in the current context researchers have the responsibility not only to take into account the problems generated by scientific research but also, to some extent, to develop innovations oriented toward the resolution of social problems, such as famine, poverty, and the need for new sources of cheap and clean energy (Salomon 1999). The marriage between science and technology has thus been sealed, inaugurating the era of techno-science.

5.2.1 The New Paradigm of Techno-Science

The development of human civilization is marked by different stages of technological progress.4 Marx W. Wartofsky distinguishes four main technological revolutions: (1) “the revolution of the hand tool,” that is, the very creation of technology and the use of tools; (2) the “industrial revolution,” which marks the transition from “hand” to “machine” as a means of production; (3) the use of “calculating or computing machines in conjuncture with other machines,” which produced the automatization of production; and (4) the “politicization of technology,” in which science, technology, politics, economics, and values5 converge, technology playing a pivotal role in society at large. The fourth revolution is characterized by the central role of high technology (technology that involves highly advanced or specialized systems or devices) for social utility:

The measure of the fourth revolution’s dominance is the degree to which high technology has become central to the economy, to government policy, to the military, a life and death issue of international hegemony, of competitive edge, of military superiority or inferiority. This is, of course, tautological: what I mean by the fourth revolution is just this dominance of technology in national political and economic life. (Wartofsky 1992, 28)

The profound transformation that took place within science resulted in a shift in which the ideal of the quest for truth (pure science) has been replaced by science as a source of economic and, by extension, political power. (A good example of how economic growth, politics, and military supremacy are closely related is the way governments around the world invest in scientific research and in the development of emerging technologies such as Artificial Intelligence and neurotechnologies.) Contemporary science “is being pressed into the service of the nation as the driving force in a national R&D system, a wealth-creating techno-scientific motor for the whole economy” (Ziman 2002, 73). The politicization of technology has transformed science to such an extent that the culture and the epistemic structure of scientific research and development have entered a new era characterized by three important features.6 First, the new scientific ethos is highly trans-disciplinary (i.e., involving the integration of scientific and non-scientific knowledge, such as the humanities and social sciences) rather than merely inter-disciplinary (the collaboration of several disciplines). It ties the production of scientific knowledge to two key sub-elements. The first is the economic incentive: the aims of techno-scientific development are not the production of knowledge per se but economic development, the funding for which depends on governmental agencies or the private sector (e.g., companies supporting research or philanthropic organizations). Due to the nature of certain technological applications, particularly military applications, the logical outcome is geopolitical supremacy. The second element is the norm of utility. This norm is a moral concept determined by human ends, goals, and values within the social context, but it is primarily used to evaluate the usefulness of scientific projects in terms of economic and scientific validity, not in their moral dimensions. While scientific projects and discoveries are evaluated from various economic and scientific angles, discoveries are first evaluated commercially before they are validated scientifically. That is, research with no social value (i.e., the production of knowledge for its own sake) will be difficult to justify unless it can be proven to have potential economic and political benefits (Ziman 2002, 74). Future economic benefits play an important role in the funding of research and in the development of strategies for industrial applications. In light of the transformation of the ethos of science, the development of technology involves a change in the practice of science in which the ideals of “pure science” appear difficult to sustain. Rather, scientific research takes place in the context of application, in which the processes of discovery and fabrication are integrated and decisional centers are located in the broad military-industrial complex. Contemporary scientific research is not limited to the pursuit of knowledge as an aim in and of itself. It is necessarily bound to the imperatives of social utility, political power, and economic development set by governments and industry. This means that knowledge production is directly linked to the commercialization of knowledge, or the marketability of science and technology directly to consumers (e.g., consumer neurotechnologies), and thus requires not only the ability to generate useful and marketable knowledge but also an increasing collaboration with sources of knowledge production outside in-house resources, that is, knowledge available in a vast global network of exchange (Gibbons et al. 1994).
However, it is not application per se that is the primary focus of attention for industry and governments. Beyond the application imperative lies a broader set of interests, the outcome of the shift that occurred in science. This suggests that scientific projects framed in terms of their economic potential are also constrained by the imperative of innovation. As in other areas of economic development, scientists and researchers will determine the focus of their research according to the prospective financial benefits they might gain in the process of developing new technologies. Scientific projects remain under the obligation to meet certain criteria of objectivity and innovation inherent to science. At the same time, these criteria are always mediated by social practices (Ziman 1996b). The norm of utility is closely related to economic considerations and to the context of application, since within the new scientific culture discoveries are assessed in terms of their economic potential. However, the concept of utility within the new ethos of post-academic science encompasses considerations beyond economico-political concerns. Utility is a moral concept because it necessitates the inclusion of specific human values and concerns. Ideally these concerns should always be determined in relation to human existence and human ends. Whether or not one agrees with utilitarianism as a defensible account of morality, John Stuart Mill, in Utilitarianism (1863), raises a good point when he develops an argument for the moral dimension of utility, particularly in Chapter II, where he asserts that “this theory of morality” (utilitarianism) is grounded in “the theory of life” (i.e., the presence or absence of pleasure and pain). He also argues that while the principle of utility is largely limited to the private sphere as a “directive rule of human conduct,” on rare occasions, when the “multiplication of happiness” is possible, utility extends to the public sphere. The context of scientific research can be regarded as such “an occasion,” in which public utility must be assessed in relation to social accountability. The new culture of contemporary science must expand its epistemological horizon and better integrate the moral and social considerations related to the human good and human ends. These considerations are what Ziman calls “trans-epistemic factors” (human values and social interests) in the production of knowledge (Ziman 2002, 74, 210). The inclusion of social and moral concerns in science demands an interaction between the scientific community and society.
Increasing efforts to guarantee social accountability in techno-political controversies are principally due to two main factors (Gibbons et al. 1994, 36). First, governmental agencies are pressured to justify public expenditures on science. Social accountability, in this sense, is closely related to the context of application and the commercialization of knowledge, as pointed out above. That is, the financing of projects is legitimized on the basis of performativity and marketability. Hence, the production of proof, designed to gain social approval, does not lie, sadly, in the constraints of truth or coherence but rather in the constraints of performativity (Lyotard and Jameson 1984). This is not an endorsement of this state of affairs but rather a recognition of how the scientific ethos is being challenged by postmodernity and a subset of philosophical stances such as pragmatism and utilitarianism. Second, the scope of concerns is not limited to the practice of science but includes broad social concerns (e.g., the use of neurotechnologies to address criminality) bearing on the conduct of scientific research. The social and ethical issues raised by techno-science are no longer the responsibility of a “relatively closed bureaucratic-professional-legal world of regulation” but are relocated within the public arena, which is characterized by competing models of scientific justification and political legitimation (Nowotny et al. 2001, 23). When combined with postmodern philosophy, cultural-political factors in scientific research provide the justification necessary for the implementation of neurotechnologies on pragmatic and utilitarian grounds, often undermining any notion of truth and legitimacy. As a scientist colleague recently pointed out to me in a private email exchange, “data scientists don’t need scientific truth, they only need usable signals to sell algorithms to businesses.”

5.3 Beyond the Postmodern Cacophony: Deliberative Democracy

A critical analysis of the social and ethical implications of any technology is confronted with the difficult task of accounting for the pluralism of scientific discourses and the plurality of values and norms these discourses entail (the COVID-19 pandemic is a perfect example of the current confusion in how scientific information is communicated to the public). Despite the challenges this context represents, the current ethos of scientific research and development is also an opportunity to further address the ethical and philosophical issues raised by neurotechnologies. This context is characterized by complex relationships among industry, science, academia, economics, and political entities that determine what type of research and development ought to be pursued for economic, political, and social reasons.


The basic approach I am adopting in this work aims at anticipating the future trajectory of scientific discovery and neurotechnological advances, as well as determining their ethical and anthropological implications. To this end, we must first understand the state of our current public discourse and our inability, in the American context, to debate controversial topics and seek compromise in order to build consensus.

5.3.1 Recovering from the Balkanization of our Society

Alexis de Tocqueville (1805–1859) was a French diplomat, political scientist, historian, and politician. In 1831, he was commissioned by the French government, at that time a constitutional monarchy, to investigate the prison system in the United States. He spent nine months observing the American way of life and the political structure of the country, particularly its democracy, in light of the French context following the Revolution of 1789 and the subsequent forms of government that ensued: the establishment of the First Republic (1792–1804), followed by the First French Empire under Napoleon Bonaparte (1804–1814) and a constitutional monarchy (1815–1848). As a result of his sojourn on the American continent, Tocqueville wrote an official report on the US carceral system but, more importantly, the two volumes entitled Democracy in America (1835 and 1840). In these volumes, he analyzes what some have called the problem of democracy (Zetterbaum 1967), that is, the two democratic principles of equality and freedom. Tocqueville posed the problem of how to develop a sense of public morality in civil society on the basis of individual freedom and equality. One of the major threats to achieving a common morality is a type of individualism that promotes self-interest, materialism, concern for private welfare, and disregard for public affairs. As a corrective, Tocqueville suggests an individualism that reflects self-interest properly understood:

The doctrine of self-interest well understood does not produce great devotion; but it suggests little sacrifices each day; by itself it cannot make a man virtuous; but it forms a multitude of citizens who are regulated, temperate, moderate, farsighted, masters of themselves; and if it does not lead directly to virtue through the will, it brings them near to it insensibly through habits. (de Tocqueville 2002, 502)

Without a proper perspective on self-interest, citizens become isolated and unable to care for their own communities. Tocqueville’s insights support the idea that life in a democracy cannot be reduced to the aggregate of individuals regulating their interactions on the basis of particular interests, freedom, personal choices, and so on. This runs counter to the balkanization already observed in a world of political and intellectual populism that seeks solutions outside institutions and focuses on high-profile individuals, ideals, or movements to achieve social change (Levin 2020). But democratic institutions play a crucial role in maintaining social cohesion and shaping public opinion. Universities, companies, the media, and civic and professional associations are shaped by values, norms, and expectations that define, to a certain extent, practices and individuals within communities, regions, and nations. This means that in order to sustain civil society, individuals must participate in the continuous support and modeling of democratic institutions. “I have said,” Tocqueville writes, “that one must attribute the maintenance of democratic institutions in the United States to circumstances, to laws, and to mores” (de Tocqueville 2002, 292). Accordingly, a healthy democracy in which institutions and individuals participate in the building of a nation depends on physical circumstances (social context, the environment, and wealth), laws (the Constitution and statutes), and mores (customs, manners, intellectual habits, and values). Tocqueville considers the mores of the people the main factor in maintaining a democratic republic:

I said above that I consider mores to be one of the great general causes to which the maintenance of a democratic republic in the United States can be attributed.
I understand here the expression moeurs in the sense the ancients attached to the word mores; not only do I apply it to mores properly so-called, which one could call habits of the heart, but to the different notions that men possess, to the various opinions that are current in their midst, and to the sum of ideas of which the habits of the mind are formed. I therefore comprehend under this word the whole moral and intellectual state of a people. My goal is not to make a picture of American mores; I limit myself at this moment to searching among them for what is favorable to the maintenance of political institutions. (de Tocqueville 2002, 275)

Mores, or what Tocqueville calls habits of the heart, thus define the state of mind of a nation. A few key points regarding the mores deserve further explanation. First, Tocqueville mentions “different notions” and “various opinions” that individuals possess. These notions refer to conceptions of the good, the right, and the just, and to the values and norms people hold as moral agents. Second, he indicates that mores describe the “moral and intellectual state of a people.” Other translations of Democracy in America use the terminology of the “character of mind” of a people, which is close to the language of the virtues individuals embody in their everyday lives. Third, while Tocqueville regards individuals as important for the creation of communities, he also asserts that society, as an entity, displays a moral and intellectual identity that translates into particular social practices. Lastly, the aforementioned features, which can be summed up as the habits of the heart, represent the conditions for the maintenance of political institutions. Without these habits, democratic participation for the advancement of the public good becomes arduous, because either individualism as a form of equality leads to self-interest, materialism, concern for private welfare, and the failure to consider public affairs, or freedom results in an overemphasis on political independence and self-governance without any consideration for other individuals or communities.
For this reason, Tocqueville saw the importance of keeping equality and freedom in synergy, as together they are conducive to (1) public responsibility, which considers carefully the common good and the necessity of associating with others of different moral and political sensitivities; (2) responsible citizenship, leading to the cultivation of values not only for the sake of personal self-interest or private matters but also in seeking what is right, just, and good as a manifestation of good citizenship; and (3) the development of moral communities: while Tocqueville did not have a religious agenda, he saw religion as an anchor for morality and a regulator of domestic life. These three pillars are manifested in different manners. First, through the role of the family as the locus for the inculcation of the habits of the heart essential for human interactions. These mores shape one’s moral identity and therefore require belonging to a particular community where moral standards are taught and exemplified through practices. Life in smaller communities should lead to an understanding that we don’t live in isolation but depend on each other to create better communities through democratic participation. What Tocqueville observed almost two hundred years ago regarding the United States stands in sharp contrast with the current US context, characterized by social tensions, political and ideological rifts, and a rampant pluralism that undermines not only our democracy but also the promotion of the common good. What are the causes of the balkanization of American society, and is there a way out? Joshua Mitchell, in American Awakening: Identity Politics and Other Afflictions of Our Time (2020), offers a plausible explanation I would like to outline briefly. Key to his analysis is the distinction between citizens in a liberal democratic society and citizens embracing identity politics. Citizens in a liberal democratic society function according to the doctrine of “self-interest properly understood,” which provides a quasi-moral argument for the regulation of human affairs. The doctrine does not constitute a moral framework (by itself it cannot make a man virtuous) but should be conceived as a procedural approach to how citizens living in community are “regulated, temperate, moderate, farsighted, masters of themselves.” It signifies that society depends on free individuals willing to work toward a mutual goal and toward the promotion of the common good despite potential disagreement at the moral and political level. In contrast, identity politics, the idea that belonging to a specific community depends on strict criteria such as race, ethnicity, sexual orientation, and so on, “allows and encourages citizens to withdraw into themselves—to become ‘selfie man’” (Mitchell 2020, 21).
The shift from liberal democracy to a society built around identity politics has resulted in a fractured culture where citizens don’t need each other, undermining communities and their ability to work toward the common good. Instead of seeking compromise and making the necessary adjustments to create a peaceful society, as in a liberal democratic society, identity politics divides through quasi-religious political claims about social justice, privilege, and race, to name a few. Identity politics functions as an ideological tool to pinpoint what separates citizens and to justify why such divides must remain, rather than providing a rationale as to why living in identity isolation is deeply inadequate for forming healthy communities. It is through engagement with fellow citizens toward common goals that we come to realize our interdependence as human beings, given our vulnerability and finitude (Snead 2020; MacIntyre 2001). Furthermore, interactions with individuals of different cultural, moral, and political perspectives are an opportunity to question one’s prejudices toward others. Hence, when citizens stop working together, this process of integration gives way to a culture of suspicion and antagonism, which opens the door for the federal government to intervene to protect and regulate relationships between citizens with different identities (racial, ethnic, sexual, etc.). But as duly noted by Mitchell, this is the beginning of our troubles, because the process of regulating relationships between citizens is enforced and imposed through various training programs rather than through working together with fellow citizens. In his stern and insightful evaluation of our current American context, he warns us about the dissolution of our communities and the increasing role of the government in regulating our lives and communities. “Because citizenship in America no longer requires that we work with our neighbor on common projects,” Mitchell writes, “acrimony and distrust grows. The federal government is more than happy to step in and mediate” (Mitchell 2020, 40–41).

5.3.2 Deliberative Democracy

To recover from some of the deleterious effects of the postmodern cacophony, a second preliminary theme needs to be investigated: how to manage moral (and political) disagreement in a pluralistic world. This question is of paramount importance since science, technology, and medicine cannot adopt a sectarian approach, as they are built, to a certain extent, on universal principles that regulate their practice within the broader social context. For instance, the way health care is delivered to a particular patient ultimately depends on the clinician, but medicine also relies on universal scientific knowledge, skills, and concepts that define it as a discipline and profession. The tension between a “cosmopolitan medicine” and a “sectarian medicine” cannot be resolved in this work. It should be noted, however, that because medicine’s goals aim at the relief of suffering and at meeting the needs of humanity, there are core universal values intrinsic to its nature that transcend personal and cultural conventions (Hanson and Callahan 1999). These remarks lead directly to socio-political considerations and the relationship between freedom of scientific research and institutional and governmental regulation. Three main conceptions of freedom as a principle or source of order for the development of science and technology are relevant to my analysis (Ezrahi 1990). The first is the marketlike model, which implies interaction among free and responsible individuals who contractually agree on certain ends and goals. This model assumes that “freedom can generate order without the requirements of cooperation or purposeful public-mindedness on the part of the participants” (Ezrahi 1990, 19). This conception of freedom is defended by thinkers such as Adam Smith (The Wealth of Nations) and Friedrich Hayek (The Road to Serfdom), both of whom emphasize that the promotion of the public interest is, first of all, the outcome of individual actions organized around the principle of freedom. The second model presupposes a process of rational persuasion by which individuals adjust and cooperate on the grounds that actions can be informed and voluntary. This type of approach is developed by Jürgen Habermas in his essay “Discourse Ethics,” where he articulates the idea of “communicative action,” in which each participant in society coordinates his or her plan of action consensually. “By entering into a process of moral argumentation,” Habermas writes, “the participants continue their communicative action in a reflexive attitude with the aim of restoring a consensus that has been disrupted.
Moral argumentation thus serves to settle conflicts of action by consensual means” (Jürgen Habermas 1990, 67). The goal of modern society is to find consensus through argumentation so that participants attempt to convince others of the necessity of specific actions. In order to arrive at a consensus, there are two necessary conditions: the first is to be able to justify a norm. The validity of a norm is determined by the kind of agreement people have between them. Second, a norm is valid only if all those affected can accept the practical

F. Jotterand

consequences of its observance. Only then can a norm be acceptable within the social context. The third approach to the concept of freedom relates to the centralized socio-democratic administrative state. This model presupposes the governing of the few not through rational processes of persuasion and enlightenment but through publicly established standards within the political arena. “What supposedly ensure that the actions of the few are not arbitrary or subjective are the publicly established—extrapolitical—standards of adequate performance” (Ezrahi 1990, 21). These different approaches to freedom as a principle of order show that, depending on one’s political assumptions and understanding of the concept of freedom, the analysis of the ethical and philosophical issues related to (neuro)technology will take on different and particular orientations. Thus, the politicization of scientific research and technological development requires a careful consideration (or integration) of the socio-political assumptions one brings into the debate. The impossibility of establishing a universal moral account by sound rational argument is in part due to our inability to reach consensus at a foundational level. However, a procedural ethic allows the possibility of a managed consensus that permits people of various moral and political commitments to enter into dialogue without necessarily agreeing at the foundational level. A procedural ethic limits the role of the state and recognizes the limitations of social collaboration without necessarily ruling out the possibility of reaching, through a political process, a managed agreement.
This is the case in that the type of procedural ethics developed in this work identifies individuals, and not the state, as the source of moral authority; it likewise emphasizes that there are certain practices (medicine and science/technology, for instance) that cannot be understood apart from their social dimensions and that require moral (as opposed to political) reflection as a society despite competing moralities. Experts in philosophy and ethics have the arduous task of establishing the conditions for a pluralistic debate on moral issues pertaining to science and technology. The difficulty lies in the co-existence of competing ideologies in the face of the need to make concrete decisions. There is also an inevitable tension between allowing a pluralism of ideas to exist and the globalization of ethical norms. Pluralism has permeated moral

5  Techno-Science, Politics, and the Common Good 

reflection, as is expressed in seemingly interminable disagreements. That being said, the current trend is to settle moral controversies in a legal language of rights, which tends to universalize its authority beyond cultural and social diversities. It follows that the politicizing of morality emerges as the only option the social-democratic societies of the West can produce. John Rawls epitomizes this political move. He attempts to remove the discussion from endless debates by appealing to political values, which are, in his view, morally neutral. He distinguishes two kinds of doctrines: those that cannot be used in public discourse and those that, although incompatible, remain reasonable or belong to what he calls “public reason.” In Rawls’ work, the idea of “public reason” is closely related to the concept of justice in the sense that “public reason” reflects a political conception of justice outside moral doctrines. Although he recognizes that the concept of justice is a moral concept (i.e., the content of justice is provided by certain ideals which reflect certain political values and norms), he points out that justice is a moral conception for political, social, and economic institutions (Rawls 1993, 11). Hence, justice within the context of political liberalism does not refer to morality per se, but to a reasonable understanding of justice that attempts to provide the basic structure of society without any reference or commitment to other doctrines, that is, philosophical, religious, and moral doctrines (Rawls 1993, 11–13). Justice, then, is limited to the realm of the political (i.e., public reason) and is understood as the implementation of established rights and liberties so that each individual can justify his or her use of freedom.
It also implies that secular philosophical doctrines (what he calls “first philosophy”) and moral and religious arguments cannot provide public arguments, not only because they are susceptible to disagreements but also because they are at odds with the ideal of discourse in a constitutional social democracy. By recasting the notion of justice in terms of reasonableness, Rawls separates the realm of the political from the concept of the good (morality), thus grounding the social order exclusively on hypothetical consent and a new theory of the good independent of a common substantive understanding of the good for society.

Advances in science and the biomedical sciences concern society as a whole and raise questions that cannot be limited to their political dimensions in the Rawlsian sense. It is my contention that an integrated procedural approach provides the conditions for avoiding a Rawlsian model of “moral” (i.e., political) reflection characterized by the absence of moral arguments due to their contentious nature. Such an approach recognizes our particular pluralistic and secular context but at the same time focuses on re-integrating moral reasoning into the public arena. To this end, the conception of deliberative democracy advanced by Gutmann and Thompson offers a fruitful approach to address disagreement but, contrary to Rawls, by putting moral reasoning and moral disagreement back at the front and center of political discourse. So while social and political collaboration might be limited, a case should be made for the possibility, and even the necessity, of reaching some kind of managed agreement through a deliberative democracy form of government in order to justify decisions (Gutmann and Thompson 2004). Key to this approach is not only how to handle moral disagreement at the moral and political level. It is similarly about the willingness to continue cooperation through compromise within the boundaries afforded by a genuinely pluralistic public reason framework (Bohman 1996). It should be noted that deliberative democracy is now shaping the landscape of political discourse in many contexts. In their review article, Curato and colleagues argue that as a field of research, deliberative democracy has developed to become “(a) assertive in practice, (b) precise in theory, (c) global in reach and (d) ambitious in democracy” (Curato et al. 2020, 2). These four features form the core elements of the field today, as reflected in its implementation in various international contexts such as England, India, and Hong Kong (Curato et al.
2020; for an analysis of the use of deliberative democracy in the European Union see Blockmans and Russack 2020).

5.3.3 The Features and Purposes of Deliberative Democracy

Deliberative democracy asserts that citizens and their representatives must provide justifications that legitimize their decisions. To this end,
Gutmann and Thompson (2004) appeal to four qualities intrinsic to such a form of government: (1) the reason-giving requirement—this means that people interacting to make decisions are free and moral equals and that the reasons provided are not merely procedural (the majority of people agree that X is a good thing) nor merely substantive (X promotes the common good so necessarily it must be implemented); (2) accessibility—the reasons provided should always be accessible to all the people concerned and not limited to the private sphere. Any deliberations at the individual level should always consider what is right and good for the broader society, and their content should be relevant to every citizen (avoiding a partisan mindset); (3) the decisions that follow deliberations are binding only for a limited period. Once the deliberative process leads to a decision, the government or leaders enact what has been decided and deliberations should stop unless there are reasons to question the decision; hence, (4) deliberations are a dynamic process that allows a continuous dialogue between all stakeholders. This means that there is also room for disagreement, as every decision, while binding, remains provisional in the sense that it must be open to criticism. In light of these four characteristics, Gutmann and Thompson provide the following definition of deliberative democracy, which is

a form of government in which free and equal citizens (and their representatives), justify decisions in a process in which they give one another reasons that are mutually acceptable and generally accessible, with the aim of reaching conclusions that are binding in the present on all citizens but open to challenge in the future. (Gutmann and Thompson 2004, 7)

What this definition indicates is that at the core of deliberative democracy resides a strong emphasis on deliberation. Concretely, it implies that all institutions of government are responsible for providing the conditions and incentives for citizens and their representatives to engage in moral reasoning. I cannot stress this point enough. Moral values and political values must interact at every level of the social fabric of a nation. The practice of deliberation cannot be limited to political institutions and elected officials. Citizens should also seek “the cultivation of the virtues of deliberation” as a process of refining the basic values and moral
principles shaping public policy (Gutmann and Thompson 1998, 358–59). So political legitimacy cannot simply be reduced to political reasons but requires an engagement with normative theories of democracy. Good political governance must be legitimized and validated according to public reasons whose development takes place according to normative principles and must be available to public deliberation and argumentation, and be good for public uses (Kettner 2007). In addition to providing legitimization to collective decisions in light of moral and political disagreement, deliberative democracy has three other specific purposes (Gutmann and Thompson 2004). First, deliberative democracy aims at promoting “public-spirited perspectives on public issues.” In order to achieve this level of deliberation, all stakeholders must be willing to participate in uncomfortable discussions of public interest and to create a space where disagreement, and moral and political ambiguity, is tolerated if not encouraged, and where altruism is embodied in one’s demeanor and actions. In the era of identity politics, this point is crucial to making any progress in civil discourse and as a culture. We must resist the temptation to balkanize our public discourse into identitarian factions unable and unwilling to cope with moral and political disagreement and often eager to cut the very ties necessary to sustain social cohesion. The second aim is the promotion of “mutually respectful processes of decision-making.” As noted by many philosophers, most notably Alasdair MacIntyre, our culture is characterized by incommensurable moral (and political) disagreements often based on ideologies rather than sound rational arguments. In this context, building consensus through some level of compromise becomes highly challenging because each party sees the other as an opponent to beat rather than a partner to engage with.
The act of deliberation is perceived as an act of ideological war whose purpose is not to address questions of common interest. In contrast, a framework based on deliberative democracy principles, notably the cultivation of the virtues of deliberation, does not strive to remove moral and political disagreement from society but rather to negotiate peaceful arrangements that satisfy as much as possible all parties involved. But it would be a mistake, if not naïve, to think that irreconcilable values can become congruent simply through discursive reasoning. The aim is less ambitious, as it only seeks to promote “practices of mutual respect” to advance the
common good despite disagreement (Gutmann and Thompson 2004, 11). The third and final purpose of deliberative democracy relates to the dynamic process of deliberations. As citizens, elected officials, and institutions make mistakes in deliberations and in the implementation of specific decisions, there must be a process of continuous self-assessment at the individual and collective level to rectify public policies with harmful outcomes.

5.4 Applying the Deliberative Democracy Paradigm

As I have outlined so far, deliberative democracy offers a credible approach to manage disagreement in society. However, as one might expect, not all of its proponents agree on its core assumptions (Gutmann and Thompson 2004). For instance, we need to determine (1) whether deliberative democracy has instrumental or expressive value; (2) whether the process of deliberation is guided by mere procedures or substantive principles; (3) the nature of the agreement deliberative democrats seek to establish, consensual or pluralistic; (4) the means for the promulgation of public deliberations, that is, a representative or participatory approach; and (5) the role of institutions and the government as well as citizens in deliberative democracy (Gutmann and Thompson 2004, see especially 21–39). I will not outline each of these various perspectives but rather focus on the version of deliberative democracy I embrace in this work. More specifically, my approach has expressive and instrumental value, is guided by substantive principles, works toward building consensus, and includes the participation of all key stakeholders comprising citizens, institutions, and the government. This general framework characterizes the procedures by which deliberation occurs and the conditions for moral inquiry. This last point is paramount. Moral reasoning cannot be neglected, as it provides the space for normative analysis. As I already noted, the new ethos of scientific research and development is an opportunity to further address the ethical issues raised by science and technology. The potential changes that could happen not only in our way of life
but likewise in our very understanding of human nature require further philosophical investigation. This new context is characterized by complex relationships between industry, science, academia, economics, and politics that determine what type of research and development ought to be pursued on the ground of ethical, economic, political, and social reasons. It is in this particular environment that deliberation must take place. Its outcome cannot be limited to mere procedures leading to the creation of policies (instrumental value) but must likewise express a collective process (expressive value) that assumes some agreed-upon notion of the common good in the process of deliberation. The creation and implementation of laws, regulations, and public policies must be justified for their legitimacy, hence assuming the ability to find consensus despite potential disagreement as well as the commitment to respect each other. To avoid the unfruitful public debates associated with advances in neuroscience and neurotechnology, more sustained critical reflection must take place among all stakeholders. It is necessary to integrate ethical and philosophical considerations at the most preliminary stages of scientific development, especially philosophical anthropology, as I will argue later in the book. In recent years, the need to reflect on the interplay between science and technology, industry, economics, politics, and the academy has become more pressing. Case in point: the COVID pandemic has revealed a poor integration of scientific, ethical, and socio-political considerations in how the public has been informed. And this is only the tip of the iceberg.
Emerging neurotechnologies, artificial intelligence, and brain-computer interfaces are likely to raise controversies that will require transparency and accountability to avoid the creation of a culture of suspicion on the part of the scientific community, the scholars addressing the ethical, social, and anthropological concerns, and the government. Approaches that promote dialogue and collaboration are necessary. To this end, I suggest a Procedural Integrated Model (PIM). There is a cluster of three main issues that an integrated model addresses. The first is the problem of pluralism. An integrated model is not meant to solve the problem of moral pluralism per se but rather provides an avenue for reflecting on moral issues raised by science and technology in a fragmented moral world. It aims at further developing a
procedural process that allows rigorous moral reflection despite competing moral and political views. The second issue is the lack of assimilation of insights and knowledge between the cultures of science/technology, the humanities, and the social sciences or, more to the point, the question of how to build a bridge between the three cultures. In order to go beyond past controversies between the sciences, the humanities, and the social sciences, approaches for fruitful interaction between the various disciplines ought to be created in light of the latest advances in neuroscience and neurotechnology. The third and final issue is the question of consensus. While it would be difficult, in the current pluralistic context of moral reflection, to support the possibility of the type of social agreements at the moral and political level that occur between moral friends, deliberative democracy provides a substantive and procedural approach to reflect on the ethical, social, and regulatory implications of neurotechnologies. Thus, an integrated model addresses the question of how to partially solve ethical issues (how to reach consensus) in a pluralistic society while recognizing that the issues raised by (neuro)science and (neuro)technology include a broad range of considerations to be addressed across disciplines. Various “established” moral concepts, methods, and traditions in ethical theory have been proposed to solve moral issues in medicine and biotechnology with more or less success due to the competing models of ethical inquiry characterizing bioethics, none being authoritative. This state of affairs raises the question of the possibility of an “integrated ethic” in which diverse considerations and perspectives can provide a framework (or frameworks) for ethical and philosophical reflections that spring out of different fields of investigation within science, technology, and the humanities.
Paul Cilliers remarks that philosophical (and ethical) reflection can play an important role and be essential to scientific and technological development. Traditionally, the role of philosophy has been to provide a description of what is occurring in science and technology from outside this particular ethos. Cilliers, however, argues that the development of technology demands new ways of thinking that should include philosophical reflections at the core of scientific and technological practice. He writes,

The rise of powerful technology is not an unconditional blessing. We have to deal with what we do not understand, and that demands new ways of thinking. It is in this sense that … philosophy has an important role to play, not by providing a meta-description of that which happens in science and technology, but by being an integral part of scientific and technological practice. (Cilliers 1998, 2; italics mine)

Cilliers hints at the integration of philosophical inquiry within scientific and technological research. Ethical reflection should be at the forefront of neurotechnological development, not a discipline at the periphery that provides input reactively rather than proactively. In short, it is crucial to close the gap in time (the proactive-reactive distinction) and in intellectual input between science/technology and ethics/the humanities. This process of integration does not follow a mere procedural methodology. As stated previously, the justification of deliberation must rely on some substantive principles. Not only do they ensure that procedural principles are open to revision, but they also afford a way to ground deliberations on moral reasons (including concepts such as basic liberty, fair opportunity, mutual respect, etc., which aim at the promotion of the common good). These procedural and substantive principles are morally and politically provisional. In addition, political authority is legitimized by citizens if and when basic moral principles are vetted through the democratic process, which requires a sense of reciprocity obliging citizens to owe each other a moral and political justification of the enactment of the laws and policies agreed upon (Gutmann and Thompson 2004). The expectation of reciprocity implies a quest for the common good. The assumption is not that such a goal is achievable, as moral and political disagreement is inherent to life in society. Rather, despite the challenges, it is a worthy goal, if not, in light of the current socio-political context in the United States and many countries in the West, a moral necessity. Jonathan Sacks argues that the West has undertaken a “move from the ‘We’ to the ‘I’” which has resulted in hyper-individualism and the failure to consider society and institutions as essential in seeking the common good.
He rightly remarks that “[a] free society is a moral achievement, and it is made by us and our habits of thought, speech, and deed. Morality cannot be outsourced because it depends on each of us. Without
self-restraint, without the capacity to defer the gratification of instinct, and without the habits of the heart and deed that we call virtues, we will eventually lose our freedom” (Sacks 2020, 16). Yuval Levin also underscores the interconnectedness of the social and the personal as a condition for moral and social progress. In A Time to Build (2020), he notes that as we seek renewal in our culture we often think of it as a matter of social transformation. But in his view, this is only partially the case: “at its core it is a matter of personal transformation. Enduring progress happens soul by soul, but that is actually why it can only happen through the institutions of society, which touch and form each of us” (Levin 2020, 41). Social institutions are formative and provide a space where individuals encounter others, are socialized, and develop the habits of the heart that sustain communities. Faithfulness, loyalty, truth, seeking the common good, and integrity must be learned in contexts that are sometimes conflictual. Thus, pluralism can accommodate consensus through a process of deliberation to sustain a peaceful society aiming at a thin conception of the common good (Gutmann and Thompson 2004). Ideally, society should seek to develop and implement a robust notion of the common good, but the fragmentation of the moral code of the West might have run its course to its logical conclusion: identity politics, the atomization of the self, and the loss of community. In light of this state of affairs, it is imperative to reexamine the role of citizens in public deliberations. On the one hand, some deliberative democrats hold the position that there shouldn’t be any expectation for citizens to participate in political debates. Citizens should rely on representatives who will protect their interests and engage in these deliberations. In addition, citizens will hold representatives continuously accountable for their actions and the content of their debates.
This model has the main advantage that it puts in place individuals with the necessary experience and knowledge to navigate the political system. The downside is that the majority of citizens become passive, mere spectators, and there is no guarantee that their representatives will not fail in their duties. As Gutmann and Thompson put it, “representative democracy places a very high premium on citizens’ holding their representatives accountable … [but they] may fail to act responsibly, or even honestly” (Gutmann and Thompson 2004, 30). The current cynicism about, and a
lack of trust in, political institutions and politicians provides reasons to look at its alternative: the direct participation of ordinary citizens in political deliberations about public policies. This approach has disadvantages: most notably, it can only be implemented at the local level, otherwise it becomes impracticable, and the involvement of citizens might not result in the development and acceptance of the best laws and public policies. Ultimately, elected officials and representatives might be better deliberators based on their experience and the time they can allocate to carefully examine the issues at hand. That said, a more direct participation of citizens should be encouraged in the development of laws and public policies as a way “to develop the virtues of citizenship” and its related moral commitments such as mutual respect, fairness, reciprocity, freedom, and so on (Gutmann and Thompson 2004). The latter point leads to the question of the role of governmental institutions (a legislature, the military, some educational entities, etc.) and private organizations (a company, a university, a church, a school, etc.) in facilitating deliberations and participation in policy development. Institutions play a crucial role in the communication of values and habits to their members, and they have two distinct features that connect them (Levin 2020). First, they are durable. This means that institutions maintain their identity over time and have the capacity to shape their environment and the individuals within it. Any change is the result of a deliberate progression that provides cohesion between past and present accomplishments. The second feature is that institutions are a form of association. But an institution is not simply a group of individuals sharing mutual interests, enjoying each other’s company, or defending a common cause.
An institution is intrinsically formative: it shapes habits, expectations, and people’s character, and it “organizes its people into a particular form moved by a purpose, characterized by a structure, defined by an ideal, and capable of certain functions” (Levin 2020, 20). The current emphasis on self-realization, identity politics, and the dismantlement of many institutions is detrimental to the development of a stable social structure and the formation of a healthy citizenry motivated by common ideals and purposes. Debates related to advances in science and technology, their impact on society, and what it means to be human cannot be left to the government or individuals.
Middle-level entities, that is, institutions, must play a crucial role in shaping the type of discourse needed to develop laws and public policies, build consensus, and allow citizens to challenge potential governmental positions, regulations, and laws. Institutions must regain their formative functions in “the cultivation of the virtues of deliberation” while the government should provide and protect the moral and political space that allows robust debates beyond ideological positions. The key actors, citizens, cannot be mere spectators but must engage in various roles in building consensus at the moral and political levels. This is not to say that consensus will be achieved, and it is most likely that many issues will remain unresolved. The model of deliberative democracy within the context of deliberation regarding issues in science and technology requires a high tolerance for ambiguity and even sometimes moral and political uncertainty. Gutmann and Thompson put it well, stating that “the conception of deliberative democracy defended here puts moral reasoning and moral disagreement back at the center of everyday politics … While acknowledging we are destined to disagree, deliberative democracy also affirms that we are capable of deciding our common destiny on mutually acceptable terms” (Gutmann and Thompson 1998, 361). The model outlined in this work insists on the necessity of integrating moral reasoning and moral (and political) disagreement at the center of scientific and technological development. Both procedural and substantive principles must guide our reflections. This is especially crucial in light of the potential implementation of powerful and increasingly invasive neurotechnologies in clinical and social contexts. In the next chapter, I shall look more closely at the justifiability of the use of neurotechnologies to address criminal behavior.

Notes

1. The definition of postmodernity provided here is rather limited in scope but is sufficient for our purposes. For a more encompassing definition see The Cambridge Dictionary of Philosophy, which defines postmodern philosophy as follows: “Postmodern philosophy is … usefully regarded as a complex cluster concept that includes the following elements: an anti-(or
post-) epistemological standpoint; anti-essentialism; anti-realism; anti-foundationalism; opposition to transcendental arguments and transcendental standpoints; rejection of the picture of knowledge as accurate representation; rejection of truth as correspondence to reality; rejection of the very idea of canonical descriptions; rejection of final vocabularies, i.e., rejection of principles, distinctions, and descriptions that are thought to be unconditionally binding for all times, persons, and places; and a suspicion of grand narratives, metanarratives of the sort perhaps best illustrated by dialectical materialism.”

2. For an analysis of the transition from modernity to postmodernity see Harvey (1990), especially 39–65. In his discussion of the relationship between modernity and postmodernism, later in the book, he concludes that “there is much more continuity than difference between the broad history of modernism and the movement called postmodernism. It seems more sensible to me to see the latter as a particular kind of crisis within the former, one that emphasizes the fragmentary, the ephemeral, the chaotic side of Baudelaire’s formulation (that side which Marx so admirably dissects as integral to the capitalist mode of production) while expressing a deep skepticism as to any particular prescriptions as to how the eternal and immutable should be conceived of, represented, or expressed” (Harvey 1990, 116).

3. See in particular MacIntyre in After Virtue (1984a) and Whose Justice? Which Rationality? (1988). In After Virtue, MacIntyre presents the failure of modernity to provide a common morality as part of the failure of the Enlightenment project. As he points out, “the problems of modern moral theory emerge clearly as the product of the failure of the Enlightenment project. On the one hand, the individual moral agent, freed from hierarchy and teleology, conceives of himself and is conceived of by moral philosophers as sovereign in his moral authority.
On the other hand, the inherited, if partially transformed, rules of morality have to be found some new status, deprived as they have been of their older teleological character and their even more ancient categorical character as expressions of an ultimately divine law. If such rules cannot be found a new status which will make appeal to them rational, appeal to them will indeed appear as a mere instrument of individual desire and will” (MacIntyre 1984a, 62). The consequence is that moral reasoning is muted into competing expressions of rationality and morality, unable to provide “agreed rationally justifiable conclusions.” MacIntyre notes that “arguments …

5  Techno-Science, Politics, and the Common Good 

135

have come to be understood in some circles not as expressions of rationality, but as weapons, the techniques for deploying which furnish a key part of the professional skills of lawyers, academics, economists, and journalists who thereby dominate the dialectically unfluent and inarticulate. There is thus a remarkable concordance in the way in which apparently very different types of social and cultural groups envisage each other's commitments … We thus inhabit a culture in which an inability to arrive at agreed rationally justifiable conclusions on the nature of justice and practical rationality coexists with appeals by contending social groups to sets of rival and conflicting convictions unsupported by rational justification" (MacIntyre 1988, 5–6). See also Harvey, who contends that "the moral crisis of our time is a crisis of Enlightenment thought" (Harvey 1990, 41).
4. I am indebted to Marx W. Wartofsky for my analysis (Wartofsky 1992). Other scholars have different categories of technological revolutions. See particularly Rodney A. Brooks (2003), who provides the following categories: agricultural revolution (10,000 years ago); civilization revolution (5500 years ago); industrial revolution (eighteenth century—invention of the steam engine); information revolution (nineteenth century—invention of the telegraph); robotics revolution (current); and biotechnology revolution (current). Although these categories are helpful, they do not constitute the basis of my analysis.
5.
Values are not specifically mentioned by Wartofsky, but as I will point out they play an important role in the current research culture of post-academic science: "Until recently academic scientists could dismiss the call for 'social responsibility' by claiming that they knew—and cared—nothing about the applications of their work, and therefore need not be concerned whether it might be linked with war-making, political and economic oppression, environmental degradation or other shameful activities. Post-academic science, being much more directly connected into society at large, has to share its larger values and concerns" (Ziman 2002, 74).
6. The characterization by Ziman of contemporary science as post-academic is meant to capture the social concern to apply pure scientific knowledge to practical problems, that is, industrial applications. As Ziman remarks, "[h]aving observed the revolutionary capabilities of this knowledge in medicine, engineering, industry, agriculture, warfare, etc., people have become very impatient with the slow rate at which it diffuses out of the
academic world. Governments, commercial firms, citizen groups and the general public are all demanding much more systematic arrangements for identifying, stimulating and exploiting potentially useful knowledge” (Ziman 2002, 73). In other words, socio-economic constraints move scientists away from pure science and closer to pragmatic concerns, hence the emergence of techno-science.

References

Blockmans, Steven, and Sophia Russack, eds. 2020. Deliberative Democracy in the EU: Countering Populism with Participation and Debate. London: Rowman & Littlefield International.
Bohman, James. 1996. Public Deliberation: Pluralism, Complexity, and Democracy. Cambridge, MA: The MIT Press.
Brooks, Rodney. 2003. Flesh and Machines: How Robots Will Change Us. New York: Vintage.
Cilliers, Paul. 1998. Complexity and Postmodernism: Understanding Complex Systems. London; New York: Routledge.
Curato, Nicole, Jensen Sass, Selen A. Ercan, and Simon Niemeyer. 2020. Deliberative Democracy in the Age of Serial Crisis. International Political Science Review, August. https://doi.org/10.1177/0192512120941882.
Engelhardt, H. Tristram. 1996. The Foundations of Bioethics. 2nd ed. New York: Oxford University Press.
Ezrahi, Yaron. 1990. The Descent of Icarus: Science and the Transformation of Contemporary Democracy. Cambridge, MA: Harvard University Press.
Gibbons, Michael, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and Martin Trow. 1994. The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London; Thousand Oaks, CA: SAGE Publications.
Gutmann, Amy, and Dennis F. Thompson. 1998. Democracy and Disagreement. Cambridge, MA: Belknap Press of Harvard University Press.
———. 2004. Why Deliberative Democracy? Princeton, NJ: Princeton University Press.
———. 2014. The Spirit of Compromise: Why Governing Demands It and Campaigning Undermines It. Updated ed. Princeton, NJ: Princeton University Press.


Habermas, Jürgen. 1990. Moral Consciousness and Communicative Action. Cambridge, MA: MIT Press.
Hanson, Mark J., and Daniel Callahan, eds. 1999. The Goals of Medicine: The Forgotten Issues in Health Care Reform. Washington, DC: Georgetown University Press. http://press.georgetown.edu/book/georgetown/goals-medicine.
Harvey, David. 1990. The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change. Oxford; Cambridge, MA: Wiley-Blackwell.
Hunter, James Davison. 1994. Before the Shooting Begins. New York: Free Press.
Kettner, Mattias. 2007. Deliberative Democracy: From Rational Discourse to Public Debate. In The Information Society: Innovation, Legitimacy, Ethics and Democracy: In Honor of Professor Jacques Berleur s.j., ed. Philippe Goujon, Sylvain Lavelle, Penny Duquenoy, Kai Kimppa, and Véronique Laurent, 57–66. Boston, MA: Springer US. https://doi.org/10.1007/978-0-387-72381-5_7.
Levin, Yuval. 2020. A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream. New York: Basic Books.
Lyotard, Jean-François. 1984. The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. Minneapolis: University of Minnesota Press.
MacIntyre, Alasdair. 1984a. After Virtue: A Study in Moral Theory. 2nd ed. Notre Dame, IN: University of Notre Dame Press.
———. 1988. Whose Justice? Which Rationality? Notre Dame, IN: University of Notre Dame Press.
———. 2001. Dependent Rational Animals: Why Human Beings Need the Virtues. Chicago: Open Court.
Mitchell, Joshua. 2020. American Awakening: Identity Politics and Other Afflictions of Our Time. New York: Encounter Books.
Murphy, Nancey. 1990. Scientific Realism and Postmodern Philosophy. The British Journal for the Philosophy of Science 41 (3): 291–303. https://doi.org/10.1093/bjps/41.3.291.
Nowotny, Helga, Peter B. Scott, and Michael T. Gibbons. 2001. Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. London: Polity.


Parusnikova, Zuzana. 1992. Is a Postmodern Philosophy of Science Possible? Studies in History and Philosophy of Science Part A 23 (1): 21–37. https://doi.org/10.1016/0039-3681(92)90025-2.
Pinker, Steven. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York: Viking.
Pluckrose, Helen, and James Lindsay. 2020. Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity―and Why This Harms Everybody. Durham, NC: Pitchstone Publishing.
Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
Rouse, Joseph. 1991. The Politics of Postmodern Philosophy of Science. Philosophy of Science 58 (4): 607–627. https://doi.org/10.1086/289643.
Sacks, Jonathan. 2020. Morality: Restoring the Common Good in Divided Times. New York: Basic Books.
Salomon, Jean-Jacques. 1999. Survivre à la science. Paris: Albin Michel.
Sandel, Michael J. 2020. The Tyranny of Merit: What's Become of the Common Good? New York: Farrar, Straus and Giroux.
Snead, O. Carter. 2020. What It Means to Be Human: The Case for the Body in Public Bioethics. Cambridge, MA: Harvard University Press.
Tocqueville, Alexis de. 2002. Democracy in America. Translated by Harvey C. Mansfield and Delba Winthrop. Chicago, IL: University of Chicago Press.
Wartofsky, Marx W. 1992. Technology, Power, and Truth: Political and Epistemological Reflections on the Fourth Revolution. In Democracy in a Technological Society, Philosophy and Technology, ed. Langdon Winner, 15–34. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-1219-4_2.
Zetterbaum, Marvin. 1967. Tocqueville and the Problem of Democracy. Stanford, CA: Stanford University Press.
Ziman, John. 1996a. Is Science Losing Its Objectivity? Nature 382: 751–754.
———. 1996b. 'Post-Academic Science': Constructing Knowledge with Networks and Norms. Science & Technology Studies, January. https://sciencetechnologystudies.journal.fi/article/view/55095.
———. 2002. Real Science: What It Is, and What It Means. Cambridge: Cambridge University Press.

6 Neurotechnologies and Psychopathy

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_6

6.1 Psychiatry and Moral Bioenhancement

In the last two chapters, I outlined two main reasons why moral bioenhancement should be rejected: not only does it misconceptualize morality, but it also rests, at least as its proponents present it, on a particular socio-political metanarrative that is often uncritically accepted. My skepticism, however, should not deter the consideration of novel methods and approaches in psychiatry to address mental disorders with moral pathologies. In this chapter, I consider psychopathy as an exemplar of a psychiatric disorder with moral pathologies for which there are "no truly effective treatment programs" available (Brazil et al. 2018, 264) despite many attempts using various clinical approaches (Gibbon et al. 2010; G. T. Harris and Rice 2006; Salekin et al. 2010; Messina et al. 2003). Even in the absence of truly efficacious treatment for psychopathy, mental health professionals and society ought not to abandon individuals suffering from personality disorders. As the Danish psychiatrist Georg Stuerup pointed out about psychopaths, "Don't forget these people. They have no one, yet they are people. They are desperately lacking and in terrible pain. Those who understand this are so rare; you must not turn your back on
them” (Stuerup 1951 cited in Millon et al. 2003, 28). Neurointerventions could suggest a Baconian turn with regard to the human dominion over nature at all costs or awaken bad memories from the dark past of psychiatry, but it would be premature to draw the conclusion that treatments are ineffective in populations with high levels of psychopathy. There is evidence that distinct group of individuals with antisocial behaviors react differently to treatment approaches and therefore treatment options should be developed specifically tailored to individuals as opposed to the condition (Brazil et al. 2018, 265; Salekin et al. 2010; D’Silva et al. 2004). Therefore, further evaluation of the feasibility, usefulness, and limitations of techniques or neurotechnologies (e.g., real-time functional Magnetic Resonance Brain-Computer Interface (rtfMRI-BCI), Deep Brain Stimulation (DBS), and the use of Selective Serotonin Reuptake Inhibitors (SSRIs)) in the diagnosis and treatment of individuals with psychopathic traits should be encouraged. My analysis will first start with a brief historical and conceptual overview of the concept of psychopathy followed by a critical appraisal of the use of neurotechnologies to treat the disorder.

6.2 Psychopathy

6.2.1 From Moral Insanity to Neuro-anatomical Abnormalities1

Descriptions of what we now consider antisocial personality disorder can be traced back as far as Theophrastus (c. 371–287 BCE), the philosopher who succeeded Aristotle as head of the Lyceum, in an essay entitled "The Unscrupulous Man," in which Theophrastus offers a close portrayal of antisocial behavior.

The Unscrupulous Man will go and borrow more money from a creditor he has never paid. … When marketing he reminds the butcher of some service he has rendered him and, standing near the scales, throws in some meat, if he can, and a soup-bone. If he succeeds, so much the better; if not, he will snatch a piece of tripe and go off laughing. (Quoted in Millon et al. 2003, 3)


There are also references to psychopaths in various myths and literary works, such as Greek and Roman mythology, the Bible (Cain), and Shakespeare (Kent A. Kiehl and Hoffman 2011). However, it was during the late eighteenth and early nineteenth centuries that the disorder was conceptualized more precisely, a period that also coincided with important progress in psychiatry (Hervé 2007).2 The first individual to pay attention to the clinical features of antisocial personality was the French physician Philippe Pinel (1745–1826), who was put in charge of the insane at the Bicêtre in 1793. In his work on the etiology of insanity (la folie raisonnante), he observed the erratic and self-destructive behavior of some of his patients, who did not show any impairment in their reasoning abilities. He described these cases under the category of mania without delirium (manie sans délire) as follows: "I was not a little surprised to find many maniacs who at no period gave evidence of any lesions of understanding, but who were under the dominion of instinctive and abstract fury, as if the faculties of affect alone had sustained injury" (Pinel 1962, 9). According to Theodore Millon et al., Pinel's interpretation marked an important distinction in the conceptualization of insanity in that "there arose the belief that one could be insane (manie) without a confusion of mind (sans délire)" (Millon et al. 2003, 4). Benjamin Rush (1745–1813), considered the father of American psychiatry, is another important figure who helped shed light on psychopathy. His essay "The Influence of Physical Causes upon the Moral Faculty" (1786) is, according to Werlinder, the first account of mental illness based on reprehensible actions (Werlinder 1978). Rush saw psychopathic behavior as a disease of the moral faculty.
In Medical Inquiries and Observations upon the Diseases of the Mind (1812), he depicted socially deranged individuals with a clear mind as inherently depraved due to "probably an original defective organization in those parts of the body which are preoccupied by the moral faculties of the mind" (Rush 1812, 112). He describes the state in which the will ("the diseased will") acts without any particular motive, such that the passions lead to vicious actions. "The will," he writes, "might be deranged even in many instances of persons of sound understandings … the will becoming the involuntary vehicle of vicious actions through the instrumentality of the passions" (Rush 1812, 124).


James Cowles Prichard (1786–1848) has been credited with the first formulation of "moral insanity," although the medical historian Roy Porter points out that Prichard developed his concept from Pinel's description of mania without delirium (manie sans délire) (Porter 1999, 496). Prichard accepted the premise of Pinel's categorization but rejected the view that this class of disorders is morally neutral. Moral insanity, Prichard argued, reflects a defect in character that affects natural feelings while intellectual abilities remain mostly unaltered. Prichard describes this madness as consisting

in a morbid perversion of the natural feelings, affections, inclinations, temper, habits, moral dispositions, and natural impulses, without any remarkable disorder or defect of the intellect or knowing and reasoning faculties…There is a form of mental derangement in which the intellectual functions appear to have sustained little or no injury, while the disorder is manifested principally or alone in the state of the feelings, temper or habits. In cases of this nature the moral or active principles of the mind are strangely perverted or depraved; the power of self-government is lost or greatly impaired and the individual is found to be incapable…of conducting himself with decency and propriety in the business of life. (Prichard 1835, 6, 85)

According to Prichard, insanity cannot be attributed to defective reasoning but to faulty natural affections. Hence, the "moral" in moral insanity should be understood in moralistic terms, which was, according to Millon et al., an "intrusion of irrelevant philosophical and moralistic values upon clinical judgments" and led British psychiatry to rename Prichard's label "inhibitor insanity" (Millon et al. 2003, 6). German psychiatrists in the late nineteenth century distanced themselves from theories with moralistic overtones and focused on observational research. Julius Ludwig August Koch (1841–1908), an eminent psychiatrist who specialized in personality disorders, suggested replacing the terminology of "moral insanity" with "psychopathic inferiority," referring to an abnormal condition of the brain as the basis for the deviant behaviors observed in some individuals. As Koch noted, this label covers "all mental irregularities, whether congenital or acquired, that influence a man in his personal life and cause him, even in the most
favorable cases, to seem not fully in possession of normal mental capacity" (Koch 1891, 67, cited in Millon et al. 2003, 8). Koch attributed these mental abnormalities to physiological causes, noting that "[t]hey [personality disorders] always remain psychopathic, in that they are caused by organic states and changes which are beyond the limits of physiological normality. They stem from a congenital or acquired inferiority of brain constitution" (Koch 1891, 67, cited in Millon et al. 2003, 8). Koch's physical etiology was later picked up by another German psychiatrist, Emil Kraepelin (1856–1926), who authored an important book entitled Psychiatry: A Textbook, originally published in German (Psychiatrie: Ein Lehrbuch, first edition 1883, eighth edition 1915). Kraepelin described the "morally insane" as individuals with "psychopathic personalities…those peculiar forms of personality development which we have grounds for regarding as degenerative. The characteristic of degeneration is a lasting morbid reaction to the stresses of life" (Kraepelin 1887, 547, cited in Millon et al. 2003, 9–10). In the seventh edition (1904) of Kraepelin's textbook, psychopathic personalities are listed under four categories: (1) the born criminal (der geborene Verbrecher), individuals engaged in criminal activities who are unable to control their impulses; (2) the irresolute or weak-willed (die Haltlosen), individuals unable to commit to long-term life plans; (3) the pathological liars and swindlers (die krankhaften Lügner und Schwindler), who use charm to manipulate others and lack moral integrity; and (4) the pseudoquerulants (die Pseudoquerulanten), corresponding to the paranoid personality (Crocq 2013).
In the subsequent eighth edition (1915), he separates psychopaths into two main categories based on deficiencies in affect or volition: those with a morbid disposition, including the obsessive, the impulsive, and sexual deviants, and those with personality disorders, including the excitable, the unstable, the impulsive, the eccentric, the liars and swindlers, the antisocial, and the quarrelsome (Millon et al. 2003, 10). Millon and colleagues remark that only the last three groups would fit what we currently consider antisocial behavior (Millon et al. 2003). In the first part of the twentieth century, Karl Birnbaum (1878–1950) sought to describe the social factors associated with psychopathy and the causes leading to criminal behavior. In his book Die psychopathischen
Verbrecher (1926; The Psychopathic Criminals), he focused on the concept of the psychopathic criminal. Influenced by the theory of degeneration developed in France, he suggested that not all psychopaths are inclined to criminal activities but only those who inherited a disposition, which was, for Birnbaum, the decisive factor explaining deviant behavior. Another German psychiatrist, Kurt Schneider (1887–1967), also shaped the discussion of psychopathy. In 1923, he published an important book entitled Die psychopathischen Persönlichkeiten (The Psychopathic Personalities), in which he outlined his own taxonomy of psychopathic personalities. In his nosology, based on characterological deviations, he distinguishes ten categories: the hyperthymic, the depressive, the insecure, the fanatic, the attention-seeking, the labile, the explosive, the affectionless, the weak-willed, and the asthenic. Of these ten, only three traits (the attention-seeking, the affectionless, and the explosive) have been included in contemporary conceptualizations of psychopathy (Hervé 2007). Schneider's classification did not view psychopathy as a mental illness (i.e., due to a somatic injury or a disease process) but rather as a deviation from the norm manifested in antisocial behavior. He described these characterological deviations as features of abnormal personalities. "Psychopathic personalities," he writes, "are those abnormal personalities that suffer from their abnormality or whose abnormality causes society to suffer" (Schneider 1923, 6). He also observed that some individuals with psychopathic traits "were unusually successful in positions of either political or material power" (Millon et al. 2003, 12). In his later writings, he provided a definition of psychopathic personalities that is echoed in current definitions: "We mean personalities with a marked emotional blunting mainly but not exclusively in relation to their fellows.
Their character is a pitiless one and they lack capacity for shame, decency, remorse, and conscience. They are ungracious, cold, surly, and brutal in crime…" (Schneider 1958, 126). Unsurprisingly, Schneider's contribution to the conceptualization of psychopathy shaped the current DSM-5 and ICD-10 classification systems, which integrate many of his insights about psychopathic personalities (Sass and Felthous 2014, 56). As this brief overview demonstrates, German psychiatry was very influential in the late 1800s and early 1900s. Things started to change in the 1930s and 1940s under the leadership of
two psychiatrists, one from Scotland, David Henderson (1884–1965), and the other from the United States, Hervey Cleckley (1903–1984), who authored the important works Psychopathic States (1939) and The Mask of Sanity (1941), respectively. According to Kiehl and Hoffman, their work immediately prompted a reevaluation of the German school of thought on psychopathy. Rather than depicting psychopaths as deviant, both psychiatrists saw them as "often otherwise perfectly normal, perfectly rational, and perfectly capable of achieving [their] abnormal egocentric ends" (Kent A. Kiehl and Hoffman 2011, 6). As a result, a small group of psychiatrists began to reexamine the deficiency in moral reasoning among psychopaths while also seeking better diagnostic tools. Henderson, for instance, defined psychopathy as antisocial or asocial in nature, and stressed that the condition can be observed in early childhood and is caused by a separation of affective traits and their moral dimensions from the intellect. Contrary to psychiatrists like Schneider, he strongly believed that psychopathy had a biological cause, thus minimizing the role of social and psychological factors in the progression of the disorder. "The inadequacy or deviation or failure to adjust to ordinary social life," he writes, "is not a mere willfulness or badness which can be threatened or thrashed out of the individual so involved, but constitutes a true illness for which we have no specific explanation" (Henderson 1947, 17, cited in Hervé 2007, 38). This approach stood in sharp contrast to the German school of thought since it challenged the idea that psychopathic individuals are unable to follow social norms due to deficiencies in affective traits. According to James Blair, Derek Mitchell, and Karina Blair (J. Blair et al.
2005), the current description of psychopathy and its symptoms originated in Cleckley's The Mask of Sanity, in which he outlines 16 diagnostic criteria: (1) superficial charm and good intelligence; (2) absence of delusions and other signs of irrational thinking; (3) absence of "nervousness" or psychoneurotic manifestations; (4) unreliability; (5) untruthfulness and insincerity; (6) lack of remorse or shame; (7) inadequately motivated antisocial behavior; (8) poor judgment and failure to learn by experience; (9) pathologic egocentricity and incapacity for love; (10) general poverty in major affective reactions; (11) specific loss of insight; (12) unresponsiveness in general interpersonal relations; (13) fantastic and
uninviting behavior with drink and sometimes without; (14) suicide rarely carried out; (15) sex life impersonal, trivial, and poorly integrated; and (16) failure to follow any life plan (Cleckley 1941/1988). These 16 criteria can be classified according to three main categories (Patrick 2018): (a) Mask features include criteria 1, 2, 3, 14 (see above) and describe the façade psychopaths create to relate to others. As Cleckley puts it, [t]he surface of the psychopath … shows up as equal to or better than normal and gives no hint at all of a disorder within. Nothing about him suggests oddness, inadequacy, or moral frailty. His mask is that of robust mental health. Behind the exquisitely deceptive mask of the psychopath the emotional alteration we feel appears to be primarily one of degree, a consistent leveling of response to petty ranges and an incapacity to react with sufficient seriousness to achieve much more than pseudoexperience or quasi-experience. (Cleckley 1941/1988, 383)

(b) Behavioral deviance features include criteria 7, 8, 4, 13, 15, 16 (see above). According to Cleckley, the psychopath's disorder is related not to the adherence to a particular set of morals or ideas but rather to the inability to orient one's life socially:

By this is not meant an acceptance of the arbitrarily postulated values of any particular theology, ethics, esthetics, or philosophic system, or any special set of mores or ideologies, but rather the common substance of emotion or purpose, or whatever else one chooses to call it, from which the various loyalties, goals, fidelities, commitments, and concepts of honor and responsibility of various groups and various people are formed. (Cleckley 1941/1988, 371)

And (c) shallow deceptive features include criteria 5, 6, 10, 9, 11, 12 (see above). In Cleckley's words,

Although [the psychopath] deliberately cheats others and is quite conscious of his lies, he appears unable to distinguish adequately between his own pseudointentions, pseudoremorse, pseudolove, and the genuine responses of a normal person. His monumental lack of insight indicates how little he
appreciates the nature of his disorder…Superficiality and lack of major incentive or feeling strongly suggest the apparent emotional limitations of the psychopath. (Cleckley 1941/1988, 385, 393)

The five editions of Cleckley's Mask of Sanity (1941–1976) attest to the importance of his work in the conceptualization of the disorder. Drawing upon Cleckley's work, Robert Hare and his colleagues developed their own taxonomy, the Hare Psychopathy Checklist (PCL).

6.2.2 Current Conceptualization: The Influence of the Hare Psychopathy Checklist

As previously noted, early accounts of psychopathy referred to the "derangement of the moral faculties" or "moral insanity" (King 1999, 10). The current definition has moved away from the idea of a moral deficit to signify a personality disorder characterized by emotional dysfunction and antisocial behavior, as exemplified by the work of Robert Hare, who published his Psychopathy Checklist (PCL) in 1980 and revised it in 1991 and again in 2003 (PCL-R) (Hare 1980, 1991, 2003). Hare is considered one of the most influential theorists and researchers working on the psychopathy construct; he developed and empirically validated a rating scale for psychopathy considered "the gold standard in the assessment of psychopathy" (Hervé 2007, 50). Interestingly, psychopathy was not included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) due to comorbidities with Antisocial Personality Disorder (ASPD) and other disorders (Widiger 2006). In the latest edition of the manual (DSM-5), however, there is an attempt to connect characterizations of psychopathy with antisocial personality disorder in Section III, the Alternative Model for Personality Disorders (AMPD), which is based on "specific impairment in personality functioning and pathological personality traits" (Sass and Felthous 2014, 62). Hare breaks down the psychopathy construct into four clusters of symptoms.


PCL-R Items and Four Factors

Interpersonal
 • Glibness/superficial charm
 • Grandiose sense of self-worth
 • Pathological lying
 • Conning/manipulative

Affective
 • Lack of remorse or guilt
 • Shallow affect
 • Callous/lack of empathy
 • Failure to accept responsibility

Lifestyle
 • Need for stimulation/boredom prone
 • Parasitic lifestyle
 • Lack of realistic, long-term goals
 • Impulsivity
 • Irresponsibility

Antisocial
 • Poor behavioral controls
 • Early behavior problems
 • Juvenile delinquency
 • Revocation of conditional release
 • Criminal versatility

Note: Promiscuous sexual behavior and short-term marital relationships did not cluster under these four factors, perhaps because they encompass features of all four categories. Adapted from Hare and Neumann (2009).

Depending on the particular groupings and symptom endorsements within these clusters, many PCL-R psychopathic individuals fall outside of significant criminal activity. Psychopathy affects approximately 1–2% of the general population (1% to 2% among women and 2% to 4% among men) based on the Psychopathy Checklist: Screening Version (PCL:SV) (Compton et al. 2007; Neumann and Hare 2008; Kent A. Kiehl and Hoffman 2011; Cummings 2015; van den Bosch et al. 2018). Criminal psychopaths make up only 15–30% of the male and female prison population but commit 50% more crimes than non-psychopathic inmates (Sitaram et al. 2007). Psychopathy is considered a major predictor of criminal recidivism, and it is estimated that psychopathic offenders are approximately four times more likely to reoffend than nonpsychopathic offenders (Kent A. Kiehl and Hoffman 2011; Hemphill et al. 1998; C. Fine and Kennett 2004). The symptoms of psychopathy include emotional and interpersonal superficiality, egocentricity and grandiosity, lack of remorse or guilt, lack of empathy, deceitfulness and manipulation, and shallow emotions. The disorder also comprises social deviance manifested in impulsivity, poor behavior controls, need for excitement, lack of responsibility, early behavior problems, and adult antisocial behavior (Hare 1999). Psychopathy affects affective and cognitive functioning, which
translates into deficiencies in three main areas: (1) language (i.e., abnormalities in processing abstract or emotional word stimuli); (2) attention and orienting processes (i.e., lack of fear conditioning and unresponsiveness to potential punishment threats); and (3) affect and emotion (i.e., lack of empathy, guilt, or remorse, shallow affect, and behavioral characteristics such as impulsivity, poor behavioral control, irresponsibility, and promiscuity) (Kent A. Kiehl 2006).

6.3 The Diagnosis of Psychopathy

At present, three diagnostic tools are available for the determination of psychopathic traits or frank psychopathy: (1) the Psychopathy Checklist-Revised (PCL-R); (2) neuro-imaging techniques; and (3) neuro-genetics. Since the PCL-R has been addressed above, this section will mostly focus on neurotechnologies and neuro-genetics.

6.3.1 Neuro-Imaging Technologies

Neuro-imaging technologies are powerful tools designed to observe some of the most complex aspects of the human brain. They allow neuroscientists to gather information on unperturbed brain structure and function in vivo, in situ, and, most often, in a non-invasive fashion. Structural neuro-imaging techniques provide morphological data of the brain,3 whereas functional neuro-imaging technologies, sometimes coupled with electrical recordings of brain activity, measure brain functions and determine the relationship between “regional brain activity and specific tasks, stimuli, cognition, behaviors, and neural processes” (Council et al. 2008, 251).4 Among their potential applications, neuro-imaging methods could be used to determine the correlation between neuro-anatomical abnormalities and psychiatric conditions like psychopathy. Recent cognitive neuroscience studies using neuro-imaging techniques have demonstrated that each of these psycho-physiological and cognitive abnormalities has neurobiological correlates in three key areas of the brain: (1) the prefrontal cortex (PFC), especially the ventromedial
prefrontal cortex/orbitofrontal cortex (VMPFC/OFC); (2) the amygdala; and (3) the striatum. For instance, data suggest that damage to the orbital frontal cortex can be associated with cognitive impairments, while damage to the anterior cingulate has been associated with emotional unconcern, hostility, irresponsibility, and disagreeableness (Ling and Raine 2018; Yang et al. 2009; Raine and Yang 2006; Pridmore et al. 2005; Kiehl 2006). Other studies revealed that humans with abnormalities in the amygdala have difficulty processing various affective stimuli, including an inability to recognize anger and fear in the faces of others (Kiehl 2006).

Structural magnetic resonance imaging (sMRI) measures brain volume based upon gray and white matter densities and provides both adequate spatial resolution and replicable data sets. A limitation of sMRI is that its lack of temporal resolution impairs viable assessment of brain function. Functional magnetic resonance imaging (fMRI) measures blood flow in specific brain regions through the Blood-Oxygen-Level-Dependent (BOLD) response, and while offering excellent spatial resolution, it too affords only moderate temporal resolution (at best). In contrast, electroencephalographic recordings/event-related potentials (EEG/ERP) assess the electrical activity of the brain through extra-cranial electrodes and provide superior temporal resolution (with poor spatial resolution). Still, these techniques have revealed disrupted functioning of particular brain regions in individuals with psychopathic tendencies, and such findings challenge previous conceptualizations of the disorder (R. J. R. Blair 2008, 2010; Birbaumer et al. 2005; Yang et al. 2009). Taken together, such results suggest that identifiable neuroanatomic features may be of value in assessing, detecting, and perhaps predicting aberrant cognitions, emotions, and behaviors that are representative of psychopathy (Ling and Raine 2018). A word of caution is needed, though.

In a recent meta-analysis examining 155 experiments, Poeppl et al. concluded, citing Koenigs et al. (2011), that “[g]iven [the] heterogeneity of neuroimaging results, it has been deemed ‘premature to interpret certain findings as support for any particular theoretical viewpoint’ regarding affected neural circuits…it still remains an open question whether psychopathy is based on a robust organic substrate or merely reflects a variant of bad character traits” (Poeppl et al. 2019). Evidence indicates that structural and functional
findings do not allow specific conclusions other than (1) that further research is needed on which brain regions associated with psychopathy may be impaired and (2) that the reasons why brain impairments would cause psychopathy remain far from clear (Raine and Yang 2006).

6.3.2 Neurogenetic Studies

Neurogenetic studies have suggested evidence of a genetic contribution to psychopathy (C. Harenski et al. 2010; Gunter et al. 2010; Tikkanen et al. 2011; Tamatea 2015). The challenge has been to isolate one gene or a cluster of genes responsible for psychopathy and antisociality (Tamatea 2015). Twin studies using the Psychopathic Personality Inventory revealed moderate-to-high heritability of psychopathic traits (Blonigen et al. 2003, 2005). A twin study by Viding et al. (2005) investigating the heritability of callous-unemotional traits, irresponsibility, impulsiveness, and antisocial behavior (e.g., assault, vandalism, and shoplifting) suggested an influence of genetic factors on these behavioral characteristics, and genetic studies of psychopathy found “an association between a functional polymorphism in the monoamine oxidase A (MAO-A) gene and conduct disorder—a precursor of psychopathy” (C. Harenski et al. 2010, 143; see also Caspi et al. 2002; Hollerbach et al. 2018; Viding et al. 2005; Kim-Cohen et al. 2006; Tamatea 2015). Other studies have examined correlations between MAO-A and psychopathic traits such as aggression and impulsivity, and a neuroimaging-genetic study found that males with low levels of MAO-A (gene and enzyme) had lower amygdalar volume and increased ventromedial prefrontal volume compared to males with higher levels of MAO-A (Meyer-Lindenberg et al. 2006; Buckholtz and Meyer-Lindenberg 2008). These studies suggest the contribution of particular genetic variations to both anatomical features and psychopathy, but this claim should be tempered by the fact that the MAOA gene contributes only “a small amount of variance in risk” and therefore “is not a ‘violence gene’ per se” (Buckholtz and Meyer-Lindenberg 2008, 127). Caution is thus warranted against any form of definitive genetic reductionism, that is, positing a spurious correlation between genes and behavior. Given the polymorphic and/or pleiotropic
nature of neuropsychiatric disorders, the presence of specific genes is not sufficient to establish the primary cause of behavioral and/or psychological traits (FitzGerald and Wurzman 2010).

6.4 Treatment of Psychopathy

The alleged lack of effective treatment constitutes a significant challenge for society and, particularly, for prison populations. Some argue that psychopharmacological interventions are ineffective and that subjects who undergo behavioral therapy exhibit a prohibitive rate of recidivism (Harris and Rice 2006). This claim has been challenged by others, in particular by Rasmus Rosenberg Larsen. He argues that what he calls “the untreatability view” of psychopathy is erroneous due to the lack of sufficient scientific data (Larsen 2019). He expresses strong reservations about the current shift from the treatment of psychopathic patients to the management of their symptoms. He is also concerned that the “narrative about untreatability and adverse treatment affects” reflects unethical psychiatric practices such as the treatment program at the Oak Ridge Social Therapy Unit in Ontario, Canada, where patients were mistreated, if not tortured, according to the lawsuit pertaining to this case (S. Fine 2017).5 The abusive “treatment” program might simply have increased violent behavior not as a result of the treatment procedures themselves, but due to prolonged confinement, mistreatment of psychopathic patients, and the prescription of dubious substances. The only evidence demonstrating adverse effects of treating psychopaths, or the inability to treat psychopathy, is, according to Larsen, a 1992 study by Rice and colleagues (Rice et al. 1992) based on a suspect research methodology (small sample, no diversity in demographics, and therefore lack of generalizability). Despite the inability to replicate the findings of the 1992 study, Rice and colleagues published a review article in 2006, reiterating their claim that the treatment of psychopaths was ineffective and resulted in adverse effects:

We believe that the reason for these findings is that psychopaths are fundamentally different from other offenders and that there is nothing ‘wrong’
with them in the manner of a deficit or impairment that therapy can ‘fix.’ Instead, they exhibit an evolutionarily viable life strategy that involves lying, cheating, and manipulating others. (Harris and Rice 2006, 568)

These claims have been debunked by strong evidence suggesting that the clinical pessimism conveyed by Rice and colleagues lacks a scientific basis (Salekin 2002; D’Silva et al. 2004; Salekin et al. 2010). Positive treatment outcomes for psychopathy have been reported in the literature (Polaschek and Skeem 2018). Repetitive transcranial magnetic stimulation (rTMS) has been shown to modulate two capacities impaired in psychopaths: emotional processing and moral judgment (Ling and Raine 2018; Baeken et al. 2011; Tassy et al. 2012). Theta burst stimulation and transcranial direct current stimulation have been found to “reduce risk-taking, impulsivity, aggression, and antisocial inclinations in healthy participants by targeting PFC [prefrontal cortex] regions” (Ling and Raine 2018, 304; Cho et al. 2010; Choy et al. 2018; Dambacher et al. 2015; Riva et al. 2015). Brain-Computer Interface (BCI) technologies coupled with neuro-imaging technologies have been or are being developed for the neurohabilitation of psychopathy and other disorders of cognition (Rota et al. 2009), emotion (Caria et al. 2007), and behavior, including psychopathic traits and deviant sexual behavior such as pedophilia (Sitaram et al. 2007, 2009; Jotterand and Giordano 2015; Birbaumer et al. 2005; deCharms et al. 2005; Renaud et al. 2011; Vaadia and Birbaumer 2009; Birbaumer and Cohen 2007).6 A study on criminal psychopathy has demonstrated a deficit in metabolic activity (measured via the BOLD [Blood-Oxygen-Level-Dependent] effect) in the fear circuit (Birbaumer et al. 2005). Some researchers have hypothesized that psychopathic subjects could learn to reactivate brain regions implicated in fear conditioning using neurotechnologies such as real-time functional Magnetic Resonance Imaging (rtfMRI)-Brain-Computer Interface (BCI) technology aimed at altering the neural capacities and behaviors of people suffering from psychiatric disorders with moral pathologies (Birbaumer and Cohen 2007; Sitaram et al. 2007).

Selective Serotonin Reuptake Inhibitors (SSRIs) have also been proposed to treat psychiatric conditions with moral pathologies (Persson and Savulescu 2012, 120–21). SSRIs are antidepressants prescribed for the
treatment of major depressive disorder, anxiety, and obsessive-compulsive disorder (OCD). Serotonin plays a role in regulating various bodily functions such as eating, sleeping, digestion, vision, cardiovascular function, and sexual activity. But the influence of serotonin is thought not to be limited to bodily functions; it allegedly extends to moral judgment and behavior as well (Lucki 1998; Crockett et al. 2010; Crockett 2016). Some studies indicate that the administration of the SSRI citalopram increases willingness to cooperate and fairness in healthy subjects (Tse and Bond 2002). Furthermore, serotonin, in humans and primates, is implicated positively in prosocial behaviors such as cooperation, grooming, and affiliation, and negatively in antisocial behaviors such as social isolation and aggression, which are considered likely precursors to human morality (Crockett 2016; Higley and Linnoila 1997; Higley et al. 1996; Knutson et al. 1998). Various studies have focused on domains such as harm and care, empathy, fairness, and reciprocity, and empirical data indicate an association between serotonin function and prosocial or antisocial behavior, as noted by Molly J. Crockett:

our observations are compatible with the hypothesis that serotonin regulates social preferences, where enhancing (versus impairing) serotonin function leads individuals to value the outcomes of others more positively (versus negatively…). This hypothesis unifies a range of empirical data describing positive associations between serotonin function and prosocial behaviour on the one hand, and negative associations between serotonin function and antisocial behaviour on the other hand. (Crockett 2016, 241)

But Crockett is keen to point out that further research is needed to test this hypothesis, combining “more precise computational models of social preferences with pharmacological manipulations and neuroimaging” (Crockett 2016, 241). Harris Wiseman takes a more critical stance on the issue. In his view, there are good reasons to think that oxytocin is not a good means to alter moral behavior (Wiseman 2016). He outlines a series of specific problems: (1) the efficacy of oxytocin is rather low in the persons who would most likely benefit from such an intervention (people with psychopathic traits); (2) the capacity of oxytocin to regulate trust and trustworthiness is, to a large extent, dependent upon
developmental factors such as upbringing and nurturing; (3) oxytocin decreases inclusiveness, which could lead to out-group aggression; and (4) oxytocin levels increase in people engaged in morally reprehensible behavior, which renders the ideal of oxytocin as the “moral molecule” questionable (Wiseman 2016). Furthermore, these claims about the efficacy of oxytocin have also been challenged by new research. A study published in 2019 in Nature Communications reveals that expression patterns of oxytocin genes in the brain are not limited to social behavior but are also associated with appetite, reward, and anticipation, which calls into question some of the effects desired in psychopaths (Quintana et al. 2019). In addition, claims about the effects of intranasal oxytocin on human social behavior appear to have been exaggerated. Walum et al. investigated the evidence available in the scientific literature regarding the influence of intranasal oxytocin on human social behavior. Their analysis considered statistical features of the relevant studies, including statistical power, prestudy odds, and bias, and concluded:

taken together, we think that it is fair to say that the field of behavioral IN-OT [intranasal oxytocin] studies in humans is prone to several types of bias. If we consider both the potentially low PPV [positive predictive value] and the influence of different types of bias… it is possible that most published positive findings within the field actually are false-positives and thus do not represent true effects. (Walum et al. 2016, 255)

Authors like Persson and Savulescu (“[i]n any case it is clear that modifications of the brain by drugs like SSRIs have moral consequences” (Persson and Savulescu 2012, 121)) or David DeGrazia (“selective serotonin reuptake inhibitors as a means to being less inclined to assault people” (DeGrazia 2014, 361; Crockett 2014)7) have wrongly appropriated Crockett’s and similar work to argue that SSRIs can be a means of treating antisocial and aggressive behavior. Such misappropriations of scientific information fuel the already noisy debate over moral bioenhancement and hype the claims of its proponents. For now, suffice it to say that we should consider such assertions with great skepticism. According to Polaschek and Skeem, current findings on the treatment of adult psychopaths are encouraging, regardless of the method used, but
not compelling enough (Polaschek and Skeem 2018). So, while the evidence regarding the treatability of psychopathy remains thin, it might be necessary to strategize how to provide a counter-narrative that reverses the current focus on the management of psychopathy, an approach that assumes the condition cannot be treated. Polaschek and Skeem note that finding a treatment for psychopathy is intrinsically important for moral reasons but also instrumentally significant for two reasons: (1) to change the perception that “psychopathic individuals … are intractable threats who must be indefinitely detained” and (2) to help the criminal and juvenile justice systems facilitate access to rehabilitation programs. This triad of improved treatment interventions, changed perceptions, and better access to rehabilitation programs, combined with current knowledge of the condition, is a source of optimism concerning our ability to change psychopathic individuals for the better (Polaschek and Skeem 2018).

As we move forward in the development of means to treat psychopathy, the question raised at the outset is one of mind-brain attribution: Does brain activity, whether spontaneous or trained, represent mental depiction or volition? And does the modification of brain activity signify that the mental state of affairs has been (favorably) transformed? In other words, we could imagine psychopathic individuals faking particular rectified behaviors to give investigators what they want to see and hear, without necessarily undergoing long-term behavioral change or correcting a pathological process. These questions presuppose a conceptual understanding of what brain imaging reveals, as well as of what it does not represent.
In light of (a) an increasing trend toward employing neuroscientific and neurotechnological approaches such as neuroimaging and neurogenetics to define, assess, and determine a variety of cognitive, emotional, and behavioral characteristics, (b) the extant strengths and limitations of these approaches, and (c) calls to use such approaches in ways that can prevent potentially violent manifestations of particular psychiatric disorders such as psychopathy, I posit that it is important to clarify if and how such approaches might validly and reliably identify, and perhaps predict, psychopathic traits, and what criteria must be developed, addressed, and satisfied in order to ethically and legally substantiate using neurotechnologies in these ways.

6.5 Feasibility, Usefulness, and Limitations of Neurotechnologies

Addressing the conceptual, ethical, and practical issues raised by the actual capability of neuroimaging (either alone or in combination with neurogenetic testing) to define and diagnose psychopathy requires identifying, analyzing, and suggesting ways to compensate for gaps in the description of, prediction of, and interventions (prevention, rehabilitation, and mitigation) against psychopathy. While neuroimaging and neurogenetic approaches offer viable descriptive inference, these techniques have a number of limitations that affect both their validity and their value as predictive tools: sMRI and fMRI have relatively poor temporal resolution; EEG/ERP does not provide spatial resolution (Ling and Raine 2018); and the use of genetics could lead to inappropriate correlations between genetic markers and behavior. Beyond such technical limitations, there are a number of other challenges that are ethically and legally problematic. First, neuro-imaging studies have been performed on relatively few psychopaths because of the difficulty of recruiting individuals who “truly” present with this disorder (i.e., score 30 or higher on the PCL-R). Second, psychopaths often manifest co-morbidities such as substance abuse and dependence, which can alter the function of certain brain regions (e.g., the orbitofrontal cortex) implicated as important to making a neuroanatomically based diagnosis of psychopathy. In addition, developmental and longitudinal neuroimaging to determine possible neural abnormalities in psychopathy is lacking (C. Harenski et al. 2010; Thibault et al. 2018).
At this point it is not clear whether neurobiological variations are the cause of psychopathy, the effect, or, perhaps, simply epiphenomena, although Poeppl and colleagues argue that there is enough evidence that aberrant brain activity is not just an epiphenomenon but is directly related to the psychopathology of the disorder (Poeppl et al. 2019).8 Consideration of these approaches to define and/or predict psychopathy must address the question of how to elucidate and establish thresholds for both diagnosis and any subsequent intervention(s). While it is likely that abnormalities in brain structure and function are
associated with psychiatric disease in general, and with psychopathy in particular, the actual substrates and mechanisms involved remain tentative, at best. Still, there is some historical impetus to demonstrate a connection between neurobiological variables and “bad or criminal behavior.” The Italian psychiatrist Cesare Lombroso (1835–1909), widely considered the founder of modern criminology, attempted to explain criminal behavior in terms of biological characteristics. Lombroso used evolutionary principles in vogue at the time to develop a systematized typology of physical features (e.g., thick skull, large jaw, large ears, hard “shifty” eyes, etc.) that he believed were indicative of biological predispositions to criminal activity (Carrà and Barale 2004). Understandably, this work has since been discredited in light of more modern scientific principles, yet the underlying practice of using currently accepted scientific knowledge and methods of biological characterization to define and predict potentially deviant or criminal predispositions and actions persists, albeit in a somewhat more sophisticated form. Thus, the use of neuroimaging and neurogenetic technologies in psychopathy should reject a “Lombrosian” approach that explains “bad or criminal behavior” in terms of neuro-anatomical characteristics and/or molecular biology, in order to avoid therapeutic nihilism. Central to this issue is the question of whether, and to what extent, genetic influences and/or the development of, or insult to, neuroanatomical structures and functions contribute to and can be assessed as definitive and predictive of psychopathy.
As psychologist James Blair remarks, there is always the risk that “…a brain scan diagnosis of psychopathy legitimizes the preventive incarceration of a ‘high-risk’ individual, and in which a static neuro-structural deficit may lead to a therapeutically nihilistic approach to such an individual on the grounds that he is beyond rehabilitation” (Blair 2003, 564). Blair’s point becomes even more pertinent upon consideration of the current limitations of neuroimaging and neurogenetic approaches and the lack of effective treatment for psychopathy. Yet events such as the Columbine and Sandy Hook Elementary School shootings have prompted renewed public interest in the use of neuroscientific and psychiatric measures to predict and prevent violent psychopathic behavior(s). Key to this debate is if and how neurotechnologies
can, and should, be used for social control and to protect the public, and what measure of analysis and scrutiny is necessary and sufficient to avoid interventional nihilism. The answer to this question depends, I will argue in the next chapter, on the level of technological intrusion we, as a society, are willing to accept. More importantly, what role should technology play in addressing socio-political challenges such as criminality?

Notes

1. I am indebted to Millon et al. (Millon et al. 2003) and Hervé (Hervé 2007) for the historical overview of this section.

2. Hervé points out that “although psychopathic individuals have been described throughout the ages, psychopathy, as it was defined in the 20th century …, only began evolving into a clinical concept in the late 1700s to early 1800s. … Before that time, mental health problems were viewed as diseases of the mind or intellect … and, consequently, any discussion of psychopathy, which itself is best characterized as a disease of affect, was negated. … By the late 18th century, however, psychiatric descriptions began to include problems in regulating affect and feeling as well as the mind. … These descriptions, in combination with the emerging view that antisocial actions could result from illness rather than just vice or evil forces, laid the foundation for the development of the psychopathic construct as a real clinical entity” (Hervé 2007, 32).

3. They include techniques such as structural Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scanning.

4. These technologies employ various complementary approaches that include (1) multichannel electro-encephalography (EEG); (2) magneto-encephalography (MEG); (3) positron emission tomography (PET); (4) functional magnetic resonance imaging (fMRI); (5) functional near-infrared spectroscopy (fNIRS); (6) functional transcranial Doppler sonography (fTCDS); and (7) magnetic resonance spectroscopy (MRS) (Council et al. 2008, 51–52). While some consider magnetic resonance imaging the “gold standard for anatomical neuroimaging” (Council et al. 2008, 75), I will not address the question of which of these neuroimaging technologies provides the most reliable and accurate data.

5. As Larsen explains, “the details of the lawsuit confirmed widespread denigrating treatment procedures, such as chaining nude patients together for
up to two weeks, keeping patients locked up in windowless rooms, feeding patients liquid food through tubes in the wall, experimenting with hallucinogens and delirium-producing drugs, and a complete disrespect and rejections of patient rights” (Larsen 2019, 253).

6. It should be noted that various neurotechnologies have demonstrated efficacy in addressing disorders such as Parkinson’s disease, major depression, schizophrenia, anxiety, alcoholism, eating disorders, and pain modulation (Tachibana 2018).

7. Crockett refutes DeGrazia’s appropriation of her work as follows: “De Grazia cites several purported examples of ‘non-traditional means of moral enhancement’, including one of my own studies. According to De Grazia, we showed that ‘selective serotonin reuptake inhibitors (can be used) as a means to being less inclined to assault people’. In fact, our findings are a bit more subtle and nuanced than implied in the target article, as is often the case in neuroscientific studies of complex human behaviour. In our study, we tested the effects of the selective serotonin reuptake inhibitor (SSRI) citalopram on moral judgments about hypothetical scenarios, and on behaviour in an economic game. In the hypothetical scenarios, we found that citalopram made people less likely to judge it morally acceptable to harm one person in order to save many others. In the economic game, citalopram made people less likely to reduce the payoffs of other people who behaved unfairly toward them. We interpreted these results as evidence that serotonin enhances the aversiveness of harming others—either imagined harms (in the case of the hypothetical scenarios) or economic harms (in the case of the economic game).
While our findings are consistent with the idea that SSRIs could reduce people’s inclination to assault others, to my knowledge this has not yet been demonstrated in the laboratory in healthy volunteers (and indeed would be quite difficult to implement, practically and ethically speaking)” (Crockett 2014).

8. As Poeppl et al. conclude, “our analysis robustly pinpoint[s] aberrant brain activity related to psychopathy in prefrontal, insular, and limbic regions. These regions may serve as targets for pharmacological interventions or brain stimulation techniques. Their alterations in activity may be based on structural brain changes … the results show that aberrant brain activity may not just be an epiphenomenon of psychopathy but directly related to the psychopathology of this disorder” (Poeppl et al. 2019, 8–9).

References

Baeken, C., P. Van Schuerbeek, R. De Raedt, J. De Mey, M.A. Vanderhasselt, A. Bossuyt, and R. Luypaert. 2011. The Effect of One Left-Sided Dorsolateral Prefrontal Sham-Controlled HF-rTMS Session on Approach and Withdrawal Related Emotional Neuronal Processes. Clinical Neurophysiology: Official Journal of the International Federation of Clinical Neurophysiology 122 (11): 2217–2226. https://doi.org/10.1016/j.clinph.2011.04.009.

Birbaumer, Niels, and Leonardo G. Cohen. 2007. Brain-Computer Interfaces: Communication and Restoration of Movement in Paralysis. The Journal of Physiology 579 (Pt 3): 621–636. https://doi.org/10.1113/jphysiol.2006.125633.

Birbaumer, Niels, Ralf Veit, Martin Lotze, Michael Erb, Christiane Hermann, Wolfgang Grodd, and Herta Flor. 2005. Deficient Fear Conditioning in Psychopathy: A Functional Magnetic Resonance Imaging Study. Archives of General Psychiatry 62 (7): 799–805. https://doi.org/10.1001/archpsyc.62.7.799.

Blair, R. J. R. 2003. Neuroimaging Psychopathy: Lessons from Lombroso. British Journal of Psychiatry 182: 5–7.

———. 2008. The Cognitive Neuroscience of Psychopathy and Implications for Judgments of Responsibility. Neuroethics 1 (3): 149–157. https://doi.org/10.1007/s12152-008-9016-6.

———. 2010. Neuroimaging of Psychopathy and Antisocial Behavior: A Targeted Review. Current Psychiatry Reports 12 (1): 76–82. https://doi.org/10.1007/s11920-009-0086-x.

Blair, James, Derek Mitchell, and Karina Blair. 2005. The Psychopath: Emotion and the Brain. 1st ed. Malden, MA: Wiley-Blackwell.

Blonigen, Daniel M., Scott R. Carlson, Robert F. Krueger, and Christopher J. Patrick. 2003. A Twin Study of Self-Reported Psychopathic Personality Traits. Personality and Individual Differences 35 (1): 179–197. https://doi.org/10.1016/S0191-8869(02)00184-8.

Blonigen, Daniel M., Brian M. Hicks, Robert F. Krueger, Christopher J. Patrick, and William G. Iacono. 2005.
Psychopathic Personality Traits: Heritability and Genetic Overlap with Internalizing and Externalizing Psychopathology. Psychological Medicine 35 (5): 637–648. https://doi.org/10.1017/S0033291704004180.

van den Bosch, L.M.C., M.J.N. Rijckmans, S. Decoene, and A.L. Chapman. 2018. Treatment of Antisocial Personality Disorder: Development of a
Practice Focused Framework. International Journal of Law and Psychiatry 58 (May): 72–78. https://doi.org/10.1016/j.ijlp.2018.03.002.

Brazil, I.A., J.D.M. van Dongen, J.H.R. Maes, R.B. Mars, and A.R. Baskin-Sommers. 2018. Classification and Treatment of Antisocial Individuals: From Behavior to Biocognition. Neuroscience & Biobehavioral Reviews 91 (Aug.): 259–277. https://doi.org/10.1016/j.neubiorev.2016.10.010.

Buckholtz, Joshua W., and Andreas Meyer-Lindenberg. 2008. MAOA and the Neurogenetic Architecture of Human Aggression. Trends in Neurosciences 31 (3): 120–129. https://doi.org/10.1016/j.tins.2007.12.006.

Caria, Andrea, Ralf Veit, Ranganatha Sitaram, Martin Lotze, Nikolaus Weiskopf, Wolfgang Grodd, and Niels Birbaumer. 2007. Regulation of Anterior Insular Cortex Activity Using Real-Time fMRI. NeuroImage 35 (3): 1238–1246. https://doi.org/10.1016/j.neuroimage.2007.01.018.

Carrà, Giuseppe, and Francesco Barale. 2004. Cesare Lombroso, M.D., 1835–1909. American Journal of Psychiatry 161 (4): 624. https://doi.org/10.1176/ajp.161.4.624.

Caspi, Avshalom, Joseph McClay, Terrie E. Moffitt, Jonathan Mill, Judy Martin, Ian W. Craig, Alan Taylor, and Richie Poulton. 2002. Role of Genotype in the Cycle of Violence in Maltreated Children. Science (New York, N.Y.) 297 (5582): 851–854. https://doi.org/10.1126/science.1072290.

Cho, Sang Soo, Ji Hyun Ko, Giovanna Pellecchia, Thilo Van Eimeren, Roberto Cilia, and Antonio P. Strafella. 2010. Continuous Theta Burst Stimulation of Right Dorsolateral Prefrontal Cortex Induces Changes in Impulsivity Level. Brain Stimulation 3 (3): 170–176. https://doi.org/10.1016/j.brs.2009.10.002.

Choy, Olivia, Adrian Raine, and Roy H. Hamilton. 2018. Stimulation of the Prefrontal Cortex Reduces Intentions to Commit Aggression: A Randomized, Double-Blind, Placebo-Controlled, Stratified, Parallel-Group Trial. The Journal of Neuroscience: The Official Journal of the Society for Neuroscience 38 (29): 6505–6512. https://doi.org/10.1523/JNEUROSCI.3317-17.2018.

Cleckley, Hervey M. 1941. The Mask of Sanity. 5th ed. St. Louis, MO: Mosby.

Compton, Wilson M., Yonette F. Thomas, Frederick S. Stinson, and Bridget F. Grant. 2007. Prevalence, Correlates, Disability, and Comorbidity of DSM-IV Drug Abuse and Dependence in the United States: Results from the National Epidemiologic Survey on Alcohol and Related Conditions. Archives of General Psychiatry 64 (5): 566–576. https://doi.org/10.1001/archpsyc.64.5.566.

6  Neurotechnologies and Psychopathy 


National Research Council, Committee on Military and Intelligence Methodology for Emergent Neurophysiological and Cognitive/Neural Science Research in the Next Two Decades. 2008. Emerging Cognitive Neuroscience and Related Technologies. Washington, DC: National Academies Press.
Crockett, Molly J. 2014. Moral Bioenhancement: A Neuroscientific Perspective. Journal of Medical Ethics 40 (6): 370–371. https://doi.org/10.1136/medethics-2012-101096.
———. 2016. Morphing Morals: Neurochemical Modulation of Moral Judgment and Behavior. In Moral Brains: The Neuroscience of Morality, ed. S. Matthew Liao, 237–245. New York: Oxford University Press.
Crockett, Molly J., Luke Clark, Marc D. Hauser, and Trevor W. Robbins. 2010. Serotonin Selectively Influences Moral Judgment and Behavior through Effects on Harm Aversion. Proceedings of the National Academy of Sciences of the United States of America 107 (40): 17433–17438. https://doi.org/10.1073/pnas.1009396107.
Crocq, Marc-Antoine. 2013. Milestones in the History of Personality Disorders. Dialogues in Clinical Neuroscience 15 (2): 147–153.
Cummings, Michael A. 2015. The Neurobiology of Psychopathy: Recent Developments and New Directions in Research and Treatment. CNS Spectrums 20 (3): 200–206. https://doi.org/10.1017/S1092852914000741.
D’Silva, Karen, Conor Duggan, and Lucy McCarthy. 2004. Does Treatment Really Make Psychopaths Worse? A Review of the Evidence. Journal of Personality Disorders 18 (2): 163–177. https://doi.org/10.1521/pedi.18.2.163.32775.
Dambacher, Franziska, Teresa Schuhmann, Jill Lobbestael, Arnoud Arntz, Suzanne Brugman, and Alexander T. Sack. 2015. Reducing Proactive Aggression through Non-Invasive Brain Stimulation. Social Cognitive and Affective Neuroscience 10 (10): 1303–1309. https://doi.org/10.1093/scan/nsv018.
deCharms, R. Christopher, Fumiko Maeda, Gary H. Glover, David Ludlow, John M. Pauly, Deepak Soneji, John D.E. Gabrieli, and Sean C. Mackey. 2005. Control over Brain Activation and Pain Learned by Using Real-Time Functional MRI. Proceedings of the National Academy of Sciences of the United States of America 102 (51): 18626–18631. https://doi.org/10.1073/pnas.0505210102.


DeGrazia, David. 2014. Moral Enhancement, Freedom, and What We (Should) Value in Moral Behaviour. Journal of Medical Ethics 40 (6): 361–368. https://doi.org/10.1136/medethics-2012-101157.
Fine, Sean. 2017. Doctors Tortured Patients at Ontario Mental-Health Centre, Judge Rules. The Globe and Mail, June 7, 2017. https://www.theglobeandmail.com/news/national/doctors-at-ontario-mental-health-facility-tortured-patients-court-finds/article35246519/.
Fine, Cordelia, and Jeanette Kennett. 2004. Mental Impairment, Moral Understanding and Criminal Responsibility: Psychopathy and the Purposes of Punishment. International Journal of Law and Psychiatry 27 (5): 425–443. https://doi.org/10.1016/j.ijlp.2004.06.005.
FitzGerald, K., and R. Wurzman. 2010. Neurogenetics and Ethics. In Scientific and Philosophical Perspectives in Neuroethics, ed. James J. Giordano and Bert Gordijn. Cambridge: Cambridge University Press.
Gibbon, Simon, Conor Duggan, Jutta Stoffers, Nick Huband, Birgit A. Völlm, Michael Ferriter, and Klaus Lieb. 2010. Psychological Interventions for Antisocial Personality Disorder. Cochrane Database of Systematic Reviews 6. https://doi.org/10.1002/14651858.CD007668.pub2.
Gunter, Tracy D., Michael G. Vaughn, and Robert A. Philibert. 2010. Behavioral Genetics in Antisocial Spectrum Disorders and Psychopathy: A Review of the Recent Literature. Behavioral Sciences & the Law 28 (2): 148–173. https://doi.org/10.1002/bsl.923.
Hare, Robert D. 1980. The Psychopathy Checklist. Multi-Health Systems.
———. 1991. Manual for the Hare Psychopathy Checklist-Revised. Multi-Health Systems.
———. 1999. Without Conscience: The Disturbing World of the Psychopaths Among Us. New York, NY: Guilford Press.
———. 2003. Manual for the Hare Psychopathy Checklist-Revised. 2nd ed. Multi-Health Systems.
Hare, Robert D., and Craig S. Neumann. 2009. Psychopathy: Assessment and Forensic Implications. The Canadian Journal of Psychiatry 54 (12): 791–802. https://doi.org/10.1177/070674370905401202.
Harenski, C., Robert D. Hare, and Kent A. Kiehl. 2010. Neuroimaging, Genetics, and Psychopathy: Implications for the Legal System. In Responsibility and Psychopathy: Interfacing Law, Psychiatry and Philosophy, ed. Luca Malatesti and John McMillan. New York: Oxford University Press.


Harris, Grant T., and Marnie E. Rice. 2006. Treatment of Psychopathy: A Review of Empirical Findings. In Handbook of Psychopathy, 555–572. New York, NY: The Guilford Press.
Hemphill, James F., Robert D. Hare, and Stephen Wong. 1998. Psychopathy and Recidivism: A Review. Legal and Criminological Psychology 3 (1): 139–170. https://doi.org/10.1111/j.2044-8333.1998.tb00355.x.
Henderson, D.K. 1947. Psychopathic States. 2nd ed. New York: W.W. Norton & Company.
Hervé, Hugues. 2007. Psychopathy Across the Ages: A History of the Hare Psychopath. In The Psychopath: Theory, Research, and Practice, 31–55. Mahwah, NJ: Lawrence Erlbaum Associates. https://doi.org/10.4324/9781315085470-2.
Higley, J.D., and M. Linnoila. 1997. Low Central Nervous System Serotonergic Activity Is Traitlike and Correlates with Impulsive Behavior: A Nonhuman Primate Model Investigating Genetic and Environmental Influences on Neurotransmission. Annals of the New York Academy of Sciences 836 (Dec.): 39–56.
Higley, J.D., S.T. King, M.F. Hasert, M. Champoux, S.J. Suomi, and M. Linnoila. 1996. Stability of Interindividual Differences in Serotonin Function and Its Relationship to Severe Aggression and Competent Social Behavior in Rhesus Macaque Females. Neuropsychopharmacology 14 (1): 67–76. https://doi.org/10.1016/S0893-133X(96)80060-1.
Hollerbach, Pia, Ada Johansson, Daniel Ventus, Patrick Jern, Craig S. Neumann, Lars Westberg, Pekka Santtila, Elmar Habermeyer, and Andreas Mokros. 2018. Main and Interaction Effects of Childhood Trauma and the MAOA uVNTR Polymorphism on Psychopathy. Psychoneuroendocrinology 95: 106–112. https://doi.org/10.1016/j.psyneuen.2018.05.022.
Jotterand, Fabrice, and James Giordano. 2015. Real-Time Functional Magnetic Resonance Imaging–Brain-Computer Interfacing in the Assessment and Treatment of Psychopathy: Potential and Challenges. In Handbook of Neuroethics, ed. Jens Clausen and Neil Levy, 763–781. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-4707-4_43.
Kiehl, Kent A. 2006. A Cognitive Neuroscience Perspective on Psychopathy: Evidence for Paralimbic System Dysfunction. Psychiatry Research 142 (2–3): 107–128. https://doi.org/10.1016/j.psychres.2005.09.013.
Kiehl, Kent A., and Morris B. Hoffman. 2011. The Criminal Psychopath: History, Neuroscience, Treatment, and Economics. Jurimetrics 51: 355–397.


Kim-Cohen, J., A. Caspi, A. Taylor, B. Williams, R. Newcombe, I.W. Craig, and T.E. Moffitt. 2006. MAOA, Maltreatment, and Gene–Environment Interaction Predicting Children’s Mental Health: New Evidence and a Meta-Analysis. Molecular Psychiatry 11 (10): 903–913. https://doi.org/10.1038/sj.mp.4001851.
King, L.J. 1999. A Brief History of Psychiatry: Millennia Past and Present. Annals of Clinical Psychiatry 11 (1): 3–12.
Knutson, B., O.M. Wolkowitz, S.W. Cole, T. Chan, E.A. Moore, R.C. Johnson, J. Terpstra, R.A. Turner, and V.I. Reus. 1998. Selective Alteration of Personality and Social Behavior by Serotonergic Intervention. The American Journal of Psychiatry 155 (3): 373–379. https://doi.org/10.1176/ajp.155.3.373.
Koch, Julius Ludwig. 1891. Die psychopathischen Minderwertigkeiten. Kessinger Publishing, LLC.
Koenigs, M., A. Baskin-Sommers, J. Zeier, and J.P. Newman. 2011. Investigating the Neural Correlates of Psychopathy: A Critical Review. Molecular Psychiatry 16 (8): 792–799. https://doi.org/10.1038/mp.2010.124.
Kraepelin, Emil. 1887. Psychiatrie: Ein Lehrbuch für Studirende und Aerzte. 2nd ed. Leipzig: Abel.
Larsen, Rasmus Rosenberg. 2019. Psychopathy Treatment and the Stigma of Yesterday’s Research. Kennedy Institute of Ethics Journal 29 (3): 243–272. https://doi.org/10.1353/ken.2019.0024.
Ling, Shichun, and Adrian Raine. 2018. The Neuroscience of Psychopathy and Forensic Implications. Psychology, Crime & Law 24 (3): 296–312. https://doi.org/10.1080/1068316X.2017.1419243.
Lucki, I. 1998. The Spectrum of Behaviors Influenced by Serotonin. Biological Psychiatry 44 (3): 151–162.
Messina, Nena, David Farabee, and Richard Rawson. 2003. Treatment Responsivity of Cocaine-Dependent Patients with Antisocial Personality Disorder to Cognitive-Behavioral and Contingency Management Interventions. Journal of Consulting and Clinical Psychology 71 (2): 320–329. https://doi.org/10.1037/0022-006X.71.2.320.
Meyer-Lindenberg, Andreas, Joshua W. Buckholtz, Bhaskar Kolachana, Ahmad R. Hariri, Lukas Pezawas, Giuseppe Blasi, Ashley Wabnitz, et al. 2006. Neural Mechanisms of Genetic Risk for Impulsivity and Violence in Humans. Proceedings of the National Academy of Sciences of the United States of America 103 (16): 6269–6274. https://doi.org/10.1073/pnas.0511311103.


Millon, Theodore, Erik Simonsen, and Morten Birket-Smith. 2003. Historical Conceptions of Psychopathy in the United States and Europe. In Psychopathy: Antisocial, Criminal, and Violent Behavior, ed. Theodore Millon, Erik Simonsen, Morten Birket-Smith, and Roger D. Davis, 3–31. New York: The Guilford Press.
Neumann, Craig S., and Robert D. Hare. 2008. Psychopathic Traits in a Large Community Sample: Links to Violence, Alcohol Use, and Intelligence. Journal of Consulting and Clinical Psychology 76 (5): 893–899. https://doi.org/10.1037/0022-006X.76.5.893.
Patrick, Christopher J. 2018. Psychopathy as Masked Pathology. In Handbook of Psychopathy, ed. Christopher J. Patrick, 2nd ed., 605–617. New York, NY: Guilford Publications.
Persson, Ingmar, and Julian Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. 1st ed. Oxford: Oxford University Press.
Pinel, Philippe. 1962. A Treatise on Insanity. Trans. D. Davis. New York: Hafner.
Poeppl, Timm B., Maximilian Donges, Andreas Mokros, Rainer Rupprecht, Peter T. Fox, Angela R. Laird, Danilo Bzdok, Berthold Langguth, and Simon B. Eickhoff. 2019. A View Behind the Mask of Sanity: Meta-Analysis of Aberrant Brain Activity in Psychopaths. Molecular Psychiatry 24 (3): 463–470. https://doi.org/10.1038/s41380-018-0122-5.
Polaschek, Devon L.L., and Jennifer L. Skeem. 2018. Treatment of Adults and Juveniles with Psychopathy. In Handbook of Psychopathy, 2nd ed., 710–731. New York, NY: The Guilford Press.
Porter, Roy. 1999. The Greatest Benefit to Mankind: A Medical History of Humanity. 1st ed. New York: W. W. Norton & Company.
Prichard, James Cowles. 1835. A Treatise on Insanity and Other Disorders Affecting the Mind. London: Sherwood, Gilbert & Piper.
Pridmore, Saxby, Amber Chambers, and Milford McArthur. 2005. Neuroimaging in Psychopathy. Australian and New Zealand Journal of Psychiatry 39 (10): 856–865. https://doi.org/10.1111/j.1440-1614.2005.01679.x.
Quintana, Daniel S., Jaroslav Rokicki, Dennis van der Meer, Dag Alnæs, Tobias Kaufmann, Aldo Córdova-Palomera, Ingrid Dieset, Ole A. Andreassen, and Lars T. Westlye. 2019. Oxytocin Pathway Gene Networks in the Human Brain. Nature Communications 10 (1): 668. https://doi.org/10.1038/s41467-019-08503-8.
Raine, Adrian, and Yaling Yang. 2006. The Neuroanatomical Bases of Psychopathy: A Review of Brain Imaging Findings. In Handbook of Psychopathy, 278–295. New York, NY: The Guilford Press.
Renaud, Patrice, Christian Joyal, Serge Stoleru, Mathieu Goyette, Nikolaus Weiskopf, and Niels Birbaumer. 2011. Real-Time Functional Magnetic Imaging-Brain-Computer Interface and Virtual Reality: Promising Tools for the Treatment of Pedophilia. Progress in Brain Research 192: 263–272. https://doi.org/10.1016/B978-0-444-53355-5.00014-2.
Rice, Marnie E., Grant T. Harris, and Catherine A. Cormier. 1992. An Evaluation of a Maximum Security Therapeutic Community for Psychopaths and Other Mentally Disordered Offenders. Law and Human Behavior 16 (4): 399–412. https://doi.org/10.1007/BF02352266.
Riva, Paolo, Leonor J. Romero Lauro, C. Nathan DeWall, David S. Chester, and Brad J. Bushman. 2015. Reducing Aggressive Responses to Social Exclusion Using Transcranial Direct Current Stimulation. Social Cognitive and Affective Neuroscience 10 (3): 352–356. https://doi.org/10.1093/scan/nsu053.
Rota, Giuseppina, Ranganatha Sitaram, Ralf Veit, Michael Erb, Nikolaus Weiskopf, Grzegorz Dogil, and Niels Birbaumer. 2009. Self-Regulation of Regional Cortical Activity Using Real-Time FMRI: The Right Inferior Frontal Gyrus and Linguistic Processing. Human Brain Mapping 30 (5): 1605–1614. https://doi.org/10.1002/hbm.20621.
Rush, Benjamin. 1812. Medical Inquiries and Observations upon the Diseases of the Mind. Philadelphia: Kimber & Richardson.
Salekin, Randall T. 2002. Psychopathy and Therapeutic Pessimism: Clinical Lore or Clinical Reality? Clinical Psychology Review 22 (1): 79–112. https://doi.org/10.1016/s0272-7358(01)00083-6.
Salekin, Randall T., Courtney Worley, and Ross D. Grimes. 2010. Treatment of Psychopathy: A Review and Brief Introduction to the Mental Model Approach for Psychopathy. Behavioral Sciences & the Law 28 (2): 235–266. https://doi.org/10.1002/bsl.928.
Sass, Henning, and Alan R. Felthous. 2014. The Heterogeneous Construct of Psychopathy. In Being Amoral: Psychopathy and Moral Incapacity, ed. Thomas Schramme, 41–68. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/9780262027915.001.0001.


Schneider, Kurt. 1923. Die psychopathischen Persönlichkeiten. Vienna: Franz Deuticke.
———. 1958. Psychopathic Personalities. Trans. Marian W. Hamilton. 9th ed. London: Cassell.
Sitaram, Ranganatha, Andrea Caria, Ralf Veit, Tilman Gaber, Giuseppina Rota, Andrea Kuebler, and Niels Birbaumer. 2007. FMRI Brain-Computer Interface: A Tool for Neuroscientific Research and Treatment. Computational Intelligence and Neuroscience 2007. https://doi.org/10.1155/2007/25487.
Sitaram, Ranganatha, Andrea Caria, and Niels Birbaumer. 2009. Hemodynamic Brain–Computer Interfaces for Communication and Rehabilitation. Neural Networks 22 (9): 1320–1328. https://doi.org/10.1016/j.neunet.2009.05.009.
Stuerup, Georg K. 1951. Krogede Skoebner. Copenhagen: Munksgaard.
Tachibana, Koji. 2018. Neurofeedback-Based Moral Enhancement and the Notion of Morality. Annals of the University of Bucharest—Philosophy Series 66 (2): 25–41.
Tamatea, Armon J. 2015. ‘Biologizing’ Psychopathy: Ethical, Legal, and Research Implications at the Interface of Epigenetics and Chronic Antisocial Conduct. Behavioral Sciences & the Law 33 (5): 629–643. https://doi.org/10.1002/bsl.2201.
Tassy, Sébastien, Olivier Oullier, Yann Duclos, Olivier Coulon, Julien Mancini, Christine Deruelle, Sharam Attarian, Olivier Felician, and Bruno Wicker. 2012. Disrupting the Right Prefrontal Cortex Alters Moral Judgement. Social Cognitive and Affective Neuroscience 7 (3): 282–288. https://doi.org/10.1093/scan/nsr008.
Thibault, Robert T., Amanda MacPherson, Michael Lifshitz, Raquel R. Roth, and Amir Raz. 2018. Neurofeedback with FMRI: A Critical Systematic Review. NeuroImage 172: 786–807. https://doi.org/10.1016/j.neuroimage.2017.12.071.
Tikkanen, Roope, Laura Auvinen-Lintunen, Francesca Ducci, Rickard L. Sjöberg, David Goldman, Jari Tiihonen, Ilkka Ojansuu, and Matti Virkkunen. 2011. Psychopathy, PCL-R, and MAOA Genotype as Predictors of Violent Reconvictions. Psychiatry Research 185 (3): 382–386. https://doi.org/10.1016/j.psychres.2010.08.026.
Tse, Wai S., and Alyson J. Bond. 2002. Serotonergic Intervention Affects Both Social Dominance and Affiliative Behaviour. Psychopharmacology 161 (3): 324–330. https://doi.org/10.1007/s00213-002-1049-7.


Vaadia, Eilon, and Niels Birbaumer. 2009. Grand Challenges of Brain Computer Interfaces in the Years to Come. Frontiers in Neuroscience 3 (2): 151–154. https://doi.org/10.3389/neuro.01.015.2009.
Viding, Essi, R. James R. Blair, Terrie E. Moffitt, and Robert Plomin. 2005. Evidence for Substantial Genetic Risk for Psychopathy in 7-Year-Olds. Journal of Child Psychology and Psychiatry 46 (6): 592–597. https://doi.org/10.1111/j.1469-7610.2004.00393.x.
Walum, Hasse, Irwin D. Waldman, and Larry J. Young. 2016. Statistical and Methodological Considerations for the Interpretation of Intranasal Oxytocin Studies. Biological Psychiatry 79 (3): 251–257. https://doi.org/10.1016/j.biopsych.2015.06.016.
Werlinder, Henry. 1978. Psychopathy: A History of the Concepts. Uppsala; Stockholm: Coronet Books.
Widiger, T.A. 2006. Psychopathy and DSM-IV Psychopathology. In Handbook of Psychopathy, ed. C.J. Patrick. New York: The Guilford Press.
Wiseman, Harris. 2016. The Myth of the Moral Brain: The Limits of Moral Enhancement. 1st ed. Cambridge, MA: The MIT Press.
Yang, Y., A. Raine, P. Colletti, A.W. Toga, and K.L. Narr. 2009. Abnormal Temporal and Prefrontal Cortical Gray Matter Thinning in Psychopaths. Molecular Psychiatry 14 (6): 561–562, 555. https://doi.org/10.1038/mp.2009.12.

7 Punishment, Responsibility, and Brain Interventions

Neurotechnological innovations have made it difficult to determine a normative concept of human nature and human behavior, because technology has invaded what is traditionally considered human nature, that is, the biological functions of the human body, including the brain, and now the capabilities of the mind. As a result, the distinction between what is human (nature) qua biological and human (nature) qua artificial or technological has eroded, and our definition of a human being, human capacities, and human appearance has become more and more dependent on technology and less and less on biological attributes. As Kurt Bayertz notes, as technological interfaces with the human body become increasingly available, “human nature will become technologically contingent” (Bayertz 2003, 132). The technologization of human beings is not without its conundrums, particularly how to draw the line between legitimate and illegitimate technological interventions in the body. I address this issue in this chapter, with particular attention to the justification of brain interventions as a potential form of punishment in the sentencing of criminal psychopaths. To this end, I examine which theory of justice informs decisions in the criminal justice system and what construct of responsibility offers an adequate framework to determine the nature and the level of responsibility incumbent upon criminal psychopaths.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_7

7.1 Retribution Versus Rehabilitation

Whether rehabilitation or retribution should be at the forefront of how society attends to criminal offenders hinges on which approach to justice guides the criminal justice system. The rehabilitationist view holds that the aim of sentencing is to reform the criminal behavior of offenders, changing their attitudes and inclinations to make them law-abiding citizens, whereas the retributive approach holds that the goal is to impose deserved punishment proportionate to the offense (Ryberg 2018). Most judicial systems in the Anglo-American world adopted retributivist policies between the mid-1970s and the mid-2010s, followed by a reemergence of rehabilitationist approaches for reasons I will outline shortly (Ryberg 2004). Among the theoretical variations of retributivism, negative retributivism asserts that punishment is deserved if it is proportionate to the offense but that, from a justice standpoint, it is acceptable to allow a less severe punishment. The second approach, limiting retributivism, suggests that “there is, for each crime, a punishment range within the limits of which all punishments are regarded as deserved” (Ryberg 2018, 180–81). Like negative retributivism, limiting retributivism holds that a reduction of punishment is acceptable from a justice perspective as long as punishment stays within a reasonable range. According to Ryberg, neither negative nor limiting retributivism provides guidance as to the level of punishment that should be administered to individuals, and therefore other ethical considerations must be included to address this challenge. For instance, it is not clear what a reasonable punishment entails or on what basis a sentence may be reduced. In addition, the notion of desert and its applicability requires a normative framework to determine where a specific punishment lands within the range of all punishments.
For this reason, negative and limiting retributivism must be regarded as “mixed theories rather than genuine retributive theories” (Ryberg 2018, 181). In light of the limitations of these two frameworks, positive retributivism has been the dominant framework adopted in the aforementioned period of retributivist policies. According to this approach, punishment must be proportionate to the gravity of the offense, and it is morally impermissible to deviate from the standard of proportionality in terms of either a reduction or an increase in punitive measures. This approach meets the criterion of justice but provides little leeway for a reduced sentence, regardless of the behavior of offenders while incarcerated. Retributivism has been promoted by philosophers and politicians alike. They agree that punishment is justified on the basis of (1) desert; (2) equivalence between the criminal offense and blameworthiness; and (3) proportionality, that is, legal punishment reflects a proportionate suffering imposed by the state on a blameworthy actor (Matravers 2018, 71). Other factors have also played an important role in the adoption of a retributivist framework, particularly the ineffectiveness of rehabilitation strategies. Criminologists have argued that rehabilitation programs do not lower recidivism rates, and penal theorists have raised justice concerns about indeterminate sentencing, since similar criminal cases would be treated differently—unless a positive retributivist position is adopted. Another criticism mounted against rehabilitative approaches is the imposition of long sentences until treatment measures adequately cure the offender or facilitate reintegration into society. Hence the overall objection to rehabilitationism has been its potential violation of the basic tenets of retributive justice and their emphasis on the connection between “the seriousness of the crime and the severity of the punishment” (Ryberg 2018, 178).
As a consequence of the rejection of rehabilitative strategies and the acceptance of punitive approaches, the judicial systems of the United States and the UK implemented harsh retributive policies, resulting in an explosion of the carceral population in both countries (Matravers 2018; Garland 2001). Mass incarceration, however, provided the conditions for the resurgence of rehabilitationism, mainly due to the increasing dissatisfaction among the prosecutors, judges, magistrates, and politicians involved in the criminal justice system, and the failure of punitive policies to manage the prison population. In addition, the financial cost of mass incarceration has become a main concern, as social institutions increasingly request financial accountability for the exorbitant expenditure on incarceration. In England and Wales, the cost of incarceration alone was estimated at £3.4 billion per year (2016), not including other costs incurred by inmates, their families, and dependents (Matravers 2018). In comparison, according to the Bureau of Justice Statistics, direct expenditure in the United States on the corrections and criminal justice system was estimated at $295.6 billion in 2016, of which $88.5 billion was dedicated to operating the nation’s prisons, jails, and parole and probation systems (O’Neill Hayes 2020). Considering the increasing dissatisfaction with retributive approaches, rehabilitationist strategies have been envisioned, including brain interventions. Discussions have taken place in the criminal justice context about neurointerventions as a form of punishment and as a potential treatment option leading to rehabilitation (Baccarini and Malatesti 2017; Crutchfield 2019). The impetus to intervene in the brain is not novel. The history of psychiatry is replete with such interventions, exemplified by the practice of lobotomy and early electroconvulsive therapy to treat various psychiatric conditions. Current practices include chemical castration (a form of treatment consisting of regular injections of a drug to decrease the level of testosterone in individuals suffering from deviant sexual disorders such as pedophilia), deep brain stimulation, and neurosurgery. Of significance is how these interventions are implemented: either as mandated by the court for specific types of offense or as a precondition for early release. Both scenarios raise noteworthy issues about the nature of neurointerventions and whether neurointerventions could prevent further offenses and enhance the good of society (i.e., the dangerousness argument).

7.2 Dangerousness and Prevention

As the use of neurotechnologies is likely to expand in the near future, it is essential to determine to what extent society’s moral obligation to protect its citizens justifies intervening in the brains of individuals suffering from mental disorders with moral pathologies who have committed an offense (psychopaths), or of people with psychopathic character traits who have not committed a criminal act. Such a strategy to prevent and reduce crime in society would require the implementation of neurotechnological means for prognostic uses in order to take early preventive measures and establish, potentially by coercion, the diagnosis of a mental condition (e.g., a Moral Deficiency Disorder, see Carter 2017) in offenders, as well as the use of corresponding preventive interventions in non-offenders. In the case of offenders, the court would have to mandate neurointerventions to alter behavior as a form of punishment and as a means of treatment. There are two potential options for using neurotechnologies to address crime. The non-offender paradigm promotes the use of neurotechnology as a diagnostic and prognostic tool in light of presumed dangerousness, to prevent potential future offenses and, based on the diagnosis, to implement coercive measures to “treat” the condition.

Presumed dangerousness => prevention (means) => coercive measure/neurointervention (as a form of undeserved punishment)

Such measures, however, would be the equivalent of a form of undeserved punishment, since there is no offense on the part of the individual undergoing the intervention. The justification of brain interventions would rest on an alleged future misdemeanor. The alternative has the same elements: dangerousness, prevention, punishment, and neurointervention, but in a different order and with different qualifiers. In the offender paradigm, the justification of brain interventions is grounded in the understanding that the dangerousness of the offender has been determined (established dangerousness), and therefore brain interventions are justifiable as a form of deserved punishment that should prevent future offenses.

Established dangerousness => neurointerventions as a form of deserved punishment and treatment => prevention

In light of these two paradigms, the crucial question is whether dangerousness (presumed or established) justifies preventive measures (coercive or deserved). To investigate this important matter further, Stephen J. Morse provides a framework worth exploring briefly. He does not specifically focus on the use of neurotechnologies as preventive measures, but his analysis remains highly relevant nonetheless. He considers the connection between the common good and the infringement on the liberty of individuals deemed dangerous to society. In particular, he examines the implications of pure preventive detention, that is, the case of a convicted felon for whom there are reasons to suspect that he is likely to commit another offense. Would it be justifiable to deprive such a person of liberty based on potential dangerousness? Morse answers affirmatively, but only if the following criteria are met: (1) the imposition of pure preventive detention should occur only when there is strong evidence of serious harm, that is, “the agent must be dangerous because he or she is suffering from a disease (especially a mental disorder) or because the agent is a criminal”; (2) future danger can be predicted accurately enough to warrant a preemptive intervention; (3) the preventive measures are “maximally humane and minimally intrusive under the circumstances”; and (4) the implementation of the preventive action follows adequate due process (Morse 1999, 297). The two aforementioned paradigms (the non-offender paradigm and the offender paradigm) and their key concepts (dangerousness, prevention, punishment, and neurointervention) can be neatly juxtaposed with Morse’s four criteria: dangerousness refers to serious harm; prevention relates to how technology could predict behavior in order to prevent future offenses; brain interventions as a form of undeserved (preventive) or deserved punishment would demand due process; and neurointervention should limit intrusion and protect the humanity of the agent. Let us assume that brain interventions are safe and efficacious. Is there a moral, legal, and clinical ground to promote neurointerventions as legitimate forms of punishment for psychopathic individuals who have committed criminal offenses?

7.3 Capacity and Responsibility Key to how one responds to this challenge is whether psychopaths have the capacity to make moral decisions and, subsequently, their level of moral and legal responsibility. If the psychopath is competent there is no justification of an intervention without consent, which also means he/she is morally and legally responsible. If the psychopath is deemed

7  Punishment, Responsibility, and Brain Interventions 


incompetent, there might be a clinical justification for a brain intervention, assuming such an intervention will enhance the well-being of that person (see the clinical ideal in Chap. 2). Hence, we need to consider whether psychopaths are criminals deserving punishment (that is, they are fully competent and responsible, or at least responsible enough to deserve punishment, potentially including compulsory treatment) or patients needing treatment in the form of brain interventions, that is, compulsory treatment (because they lack the capacity to be responsible for their choices). In the latter case, these patients would have a mental disorder posing a risk to themselves and/or others, which provides a justification for treatment forced upon them due to their impaired decision-making capacity and inability to manage their condition. In the former case, the intention of the intervention is mostly to protect society. For this reason, the relationship between decision-making capacity and responsibility must be further explored.

7.3.1 Incapacitation and Moral Agency

Psychopaths are usually portrayed in the media, television series, novels, or movies as irredeemable human beings who committed atrocious crimes for which they deserve punishment. As moral agents, they failed to respect the most basic moral prerequisites of human interaction and consequently, it is argued, they are morally and legally responsible for their behaviors and actions. Some researchers, however, question whether psychopaths have the capacities to be fully moral agents on the basis of their lack of empathy and guilt and their disregard for the value of human life. In their view, psychopathy is considered a disease of the mind (Nadelhoffer and Sinnott-Armstrong 2013) or an evolutionary adaptation, “an adaptive life strategy” (G. T. Harris et al. 2001; Krupp et al. 2013; Hare and Neumann 2009), which provides a justification to cast doubt on the moral responsibility of psychopaths (C. Fine and Kennett 2004; Litton 2008, 2013; Kennett 2010). More specifically, psychopaths cannot understand the most basic moral concepts and standards that regulate relationships between people, and therefore they lack the capacity to engage in moral dialogue the same way “normal people” would.


F. Jotterand

Psychopaths are morally incompetent, have deficient moral concepts, and are deficient moral choosers (Litton 2013; Pillsbury 2013). Paul Litton notes that “[there] is evidence [that] supports the proposition that psychopaths have a significantly impaired capacity to assess the relative strength of different human concerns, thereby undermining their capacity to reason competently about moral considerations” (Litton 2013, 279; Duff 2010). This point is corroborated by Heidi L. Maibom, who argues that psychopaths have not only emotional deficits but also an inability to form coherent plans of action, and hence lack practical reason. She points out that they are not necessarily mad, but that their impaired decision-making capacity undermines their practical reason (Maibom 2005). On these accounts, the impairment affects rational capacities, which in turn alters the ability of psychopaths to control or manifest moral emotions (e.g., lack of empathy, shame, or guilt). In addition to compromised rational capacities, there are a few other abnormal features characteristic of psychopaths. For instance, they do not fit a normal understanding of human agency. Their weakened capacity to hold themselves accountable to evaluative principles that discriminate between appropriate and objectionable desires is indicative of an abnormal sense of human agency. “Psychopaths’ weakened capacity to value and embrace personal standards,” Litton writes, “is connected to other signs of impaired rationality” (Litton 2013, 283). Furthermore, as the result of a lack of moral understanding, psychopathy affects moral motivation, or the ability to respond to moral reasons (Pillsbury 2013). To be morally responsible, a person must have the capacity to respond to moral reasons. Hence, rationality is a precondition for moral responsibility, and since psychopaths cannot respond to moral reasons (impaired rationality) they are not morally responsible.
This approach refers to reason-responsiveness, which “holds that to be morally responsible, an individual must be able to respond to moral reasons. It sees individual rationality as the core of moral responsibility…an individual who cannot respond to moral reasons is not morally responsible” (Pillsbury 2013, 301). On this account, such limitations in rational capacities and moral motivation constitute a mitigating factor in assessing moral responsibility. Other researchers, like Cordelia Fine and Jeanette Kennett, likewise argue that psychopaths should not be held responsible for their actions to
the same degree as non-psychopathic individuals. Because of cognitive dysfunction due to aberrant or arrested early childhood development, psychopathic individuals lack moral understanding and therefore cannot be held responsible (Fine and Kennett 2004). Moral development, on their account, requires a basic understanding of morality (notions of right and wrong, good and evil, etc.) in addition to fear, as “a facilitator of the development of moral conscience,” and guilt, “thought to be an important precursor of moral internalization” (C. Fine and Kennett 2004, 428–29). As evidence of abnormal moral development in early childhood, Fine and Kennett refer to the inability of psychopaths to make the moral/conventional distinction (Blair 1995). Children as young as 39 months understand the difference between conventional transgressions (pertaining to custom or habits—for instance, a child going to school dressed as a clown, which is acceptable if the teacher permits it) and moral transgressions (pertaining to notions of right and wrong, good and evil—for instance, hitting a schoolmate, which is inherently wrong even if the teacher permits it) (Smetana and Braeges 1990). The deficits of psychopaths in all these areas, together with abnormal somatic markers, represent, according to Fine and Kennett, a clear indication that psychopaths lack the capacities of healthy moral agents. “Empirical evidence,” they write, “strongly suggests that psychopathic offenders lack … the basis of moral understanding: they cannot meet the conditions of moral agency and so are not the kinds of beings to whom we should attribute moral responsibility” (C. Fine and Kennett 2004, 432–33). In short, psychopaths cannot understand moral concepts, fall short of meeting the standards of moral agency, and hence do not meet the requirements of moral responsibility.

7.3.2 Capacity and Moral Reasons

A contrasting view has been put forward by other researchers. They argue that there isn’t enough evidence to demonstrate that psychopaths are so incapacitated that they cannot recognize and respond to moral reasons. While some authors describe psychopathy as a form of deficiency in rational capacities, others, like Jana Schaich Borg and Walter P. Sinnott-Armstrong, suggest that psychopaths might not display specific deficits in
moral cognition despite variations in emotion, empathy, and moral action (Borg and Sinnott-Armstrong 2013, 124). Thus, psychopaths might have impaired rational capacities, but they are sufficiently rational to meet the standards of moral agency, which includes the intellectual capacity for mental processes involving language, judgments, and actions. Antony Duff makes an interesting observation about the type of rationality pertaining to psychopathic individuals. In his estimation, these individuals are not lacking rational capacities but, within the realm of reason, they feel at home in dealing with non-normative beliefs and in the type of “short-term practical reasoning” associated with the gratification of impulses and desires, as opposed to well-reasoned decisions leading to good actions. In other terms, some areas of (practical) reason are not available to psychopathic individuals, such as the capacity to connect desires or impulses to normative judgments and to act accordingly. This incapacitation does not demonstrate that emotions and normative judgments are completely distorted or altered but that “that whole dimension of practical rationality is absent” (Duff 2010, 209).1 Duff’s contention is further corroborated by empirical evidence indicating that psychopathy does not obliterate the rational capacity to make moral judgments or remove the aptitude to differentiate right from wrong, albeit with some differences between psychopathic and non-psychopathic individuals (Cima et al. 2010; Young et al. 2012; Decety and Yoder 2016). Psychopathic individuals act rationally and have the ability to understand moral as well as legal content. Consider, by contrast, the case of Andrea Yates, who suffered from postpartum depression and psychosis. She acted on a delusional conception of reality that led her to kill her five young children because she did not want them “to perish in the fires of hell” (Christian and Teachy 2002).
Yates’ understanding of her role as a mother and of her children’s circumstances was based upon a false, delusional belief. Ted Bundy, on the other hand, was not delusional and acted “rationally.” He understood that his actions were morally reprehensible but was motivationally impaired to act upon his knowledge: he understood the illegal nature of murder, yet did not refrain from murdering his victims. He was morally responsible for his actions despite being an impaired moral agent.


For this reason, some researchers contend that the moral faculties of psychopaths are not impaired and that they do know right from wrong. Psychopaths simply don’t care about moral knowledge or the consequences of inappropriate conduct (or cannot make the connection between normative judgment and desires/impulses) (Cima et al. 2010). In their study, Cima et al. tested the hypothesis that psychopaths have normal moral understanding but abnormal regulation of morally appropriate behavior. They examined two theses: (1) the general thesis that adequate emotional processing is essential for moral understanding; and (2) the more specific thesis that the impaired emotional processing of psychopaths explains their abnormal moral psychology, which encompasses a lack of attention to others and violent acts. The study involved the comparison of three groups: healthy controls (n = 35), non-psychopathic offenders (n = 23), and psychopathic offenders (n = 14). Each subject was assessed using (1) the Psychopathy Checklist-Revised (PCL-R) (testing interpersonal and affective characteristics of psychopathy) and (2) the Trier Social Stress Test (a physiological test of stress reactivity involving measures of cortisol). Based on both tests, the results supported the conclusion that the psychopathic population showed “relatively flat emotional responses,” which concurs with the findings of other studies (R. J. R. Blair 2010; Yang et al. 2009; Birbaumer et al. 2005; K. A. Kiehl et al. 2001). But more importantly, the study demonstrated that there was no difference in moral judgments. Psychopaths make the same moral distinctions as healthy subjects in moral dilemmas, which confirms Borg and Sinnott-Armstrong’s statement that psychopaths do not exhibit specific deficits in moral cognition. However, the lack of (moral) emotional response in psychopaths infringes on their ability to apply moral distinctions to their actions.
The above analysis hinges on a longstanding distinction in psychopathology between the faculties of cognition (intellectual functioning, information processing) and conation (motivational or “willing” functions). Concretely, it means that psychopathic individuals can cognitively appraise the instruction of moral norms, laws, and socially approved conventions, but (often, and variably) lack the conative wherewithal to act in accordance with their cognitive appraisal. If we accept the premise that rationality is a necessary condition for moral agency and, on
empirical grounds, that psychopaths act rationally, then we can assert that psychopaths are moral agents, morally responsible for their actions despite their diminished conative ability. In addition, if we accept that the law of any judicial system reflects particular moral values and norms and that crimes represent the breaking of these “moral laws,” then psychopaths are likewise criminally responsible for their behavior and actions.

7.3.3 Threshold of Capacities

If we accept the premise that psychopathy does not annihilate the capacity for responsibility, but only mitigates it, it is reasonable to adopt an approach based on a threshold of capacities to determine the level of criminal responsibility (Jefferson and Sifferd 2018). To address the impact of the disorder on mental capacities (rationality) and determine whether an individual meets the requirements to engage in moral dialogue, legal scholars refer to “capacity responsibility,” that is, the requirement that an agent demonstrate possession of the mental capacities necessary for moral responsibility and legal liability. These include “understanding, reasoning, and control of conduct: the ability to understand what conduct legal and moral rules require, to deliberate and reach decisions concerning these requirements; and to conform to decisions when made” (Hart 1968, 227, cited in Jefferson and Sifferd 2018). Meeting these thresholds of capacity indicates that an individual has the capacity to be responsible, morally and legally, for his or her actions. The failure to meet these standards of capacity would allow an individual to plead insanity and therefore be considered not responsible for their actions and undeserving of punishment. If a moral agent is rational, or has the capacity to make moral choices, but acts in a wrongful manner, that individual deserves punishment. Hence the subsequent question is whether a psychopath who failed to act morally due to some other mental incapacitation (addictions or compulsive conduct disorders such as kleptomania or pyromania) can be excused from moral and legal responsibility.


7.4 Moral and Legal Responsibility

7.4.1 Moral Responsibility

For an individual to be morally responsible, an action must be voluntary and must be causally related to the agency of that individual—outside the constraints of a pathology that (fully) affects the cognitive, affective, and conative dimensions of agency. Moral responsibility pertains to a biopsychological function of the person that includes both a systematic evaluation of the norms guiding standards of right conduct (ethics) and a reflection on how one engages in the world as a moral agent toward others within particular social contexts (morality). The guiding assumption throughout my analysis is that morality is inherent to human experience, with a corresponding supportive (albeit only vaguely specified to date) neural architecture in the brain that provides a capacity for moral decisions and actions. Moral judgments are shaped through learning by moral education, historical inquiries, cultural and religious practices, belonging to particular communities, and environmental contexts, all dependent upon a functioning neural architecture that provides a capability for moral judgments. Thus, the brain provides a cognitive architecture for ordinary learning to take place. According to this account, assessments of moral responsibility depend upon the distinction between moral content (particular moral frameworks, ideas, and morally laden actions) and the capacity to apprehend moral contents and act morally. Just as ordinary learning and actions depend upon a healthy and undamaged neural architecture, we might expect an individual affected by brain abnormalities to have impairments in ordinary learning and actions. We should keep in mind that the ability to learn and the capacity to comprehend moral content do not entail an either/or distinction. The issue is not whether psychopaths have moral capacity or not, but the degree to which they can make moral judgments.
Moral responsibility lies with the moral capacities of an individual, not with normative moral contents. That being said, capacity does not exclusively determine moral responsibility. Individuals are equally responsible, from a moral standpoint, for the moral content they choose to embrace or
promote (e.g., the endorsement of discriminatory ideologies that promote hate and social disruption). However, on this account, psychopathy does not affect the ability to understand moral contents. The disorder affects the moral capacity of psychopathic individuals due to the lack of moral emotions. This impairment in moral emotion limits the psychopath’s ability to respond to moral motivations, while leaving intact the cognitive appraisals that may also motivate behavior.

7.4.2 Criminal Responsibility

Criminal responsibility requires an individual to have the capacity to understand and respond to moral reasons. The lack of such capacity, or partial incapacitation, should constitute a mitigating factor to reduce sentencing (Litton 2013; Brink and Nelkin 2013). Moral and legal responsibility standards, however, do not necessarily coincide. As noted by legal scholar Stephen J. Morse, criminal law in the United States contains “instances of strict liability in which punishment, often potentially severe, is imposed without any proof of moral fault.” In most instances, he writes further, “the doctrines that excuse or mitigate criminal responsibility—lack of rational capacity and coercion—closely track the variables commonly thought to create moral excuse or mitigation” (Morse 2008, 208). We can thus assume, in the context of my analysis, that in most cases criminal law punishes or blames individuals who commit offenses for which they are morally responsible. In contrast to moral responsibility, criminal responsibility is framed by the notion of crime. Crimes are defined by law and reflect a particular subset of normative moral contents that are historically, culturally, and politically defined. Because crime represents a social ontology of meaning and moral contents, criminal responsibility depends upon the person’s understanding of the socio-moral contents reflected in laws. Just as a person can act immorally without committing a crime (e.g., staying out late without parental permission), so crime may represent a subset of objectionable moral contents, although it should be noted that in some instances acts are considered crimes without necessarily breaking any moral law (e.g., hiding and protecting a Jewish family in Nazi Germany).


Similarly, criminal responsibility turns upon the person’s capacities to understand and act in accordance with the law, and therefore focuses on the character of the criminal act committed, not the character of the person (Pillsbury 2013). Accordingly, criminal responsibility depends upon the person’s understanding and actions regarding the law of interest. Earlier in my analysis of the distinction between moral and criminal responsibility, I noted that the phenomenon we call morality depends upon one’s capacity to understand moral content. I also pointed out that criminal responsibility depends upon a subset of moral contents codified in laws and reflecting the social, historical, and political context of a country, a state, a county, and so on. To determine whether an action meets the requirements of criminal conduct demands that the agent recognize the lawfulness or unlawfulness of the action under consideration. This means that the standard for the determination of the criminal responsibility of the psychopath would rest on whether the psychopath understood, at the time of the offense, that the action under consideration was unlawful. If this is the case, the “cognitive test” would be fulfilled. However, if that individual, at the time of the offense, believed his action was lawful due to delusional psychotic beliefs (the case of Yates), then the cognitive criterion would not be met. In most US legal jurisdictions, this is how criminal responsibility is appraised, based on the doctrine that mens rea (guilty mind) is required for criminal responsibility; evidence that mens rea is absent is an indication for excusing unlawful conduct, potentially leading to an insanity defense.
American law establishes criminal liability according to the principle expressed in the Latin phrase actus non facit reum nisi mens sit rea (“the act does not make a person guilty unless the mind be also guilty”), which comprises two elements: mens rea, a guilty mind, and actus reus, a guilty act. A crime involves a physical act (or an omission when the context requires it) and a mental state that shows intentionality or volition (Morse 2008).


7.4.3 Insanity Defense

I noted that at least one criterion of criminal responsibility is met by psychopaths: they understand and recognize the unlawful act as unlawful. The question for further consideration is whether criminally psychopathic individuals can fail on other criteria, namely, whether the conative disorder we have accepted in psychopathy is sufficient to excuse the psychopath as failing to meet the mens rea (guilty mind) criterion. Does the inability to engage fully in moral reasoning constitute justifiable grounds to legally exculpate a psychopath? In the United States, there are different tests the courts use to establish whether a defendant can successfully plead an insanity defense. The M’Naghten rule (1843), still the standard for insanity in most American jurisdictions, excuses the defendant if, at the time of committing the offense, he did not know, due to a mental incapacitation, the nature and quality of the act or its wrongfulness. Another test used in insanity defense cases is the Model Penal Code test. Under this test, a person is not legally liable if, at the time of the criminal offense(s), he was suffering from a mental disease that affected him to the point of lacking substantial capacity to grasp the nature of the wrongful action or to conform to the requirements of the law. Finally, it is worth mentioning the “Irresistible Impulse” test, which focuses on the volitional dimensions of insanity and is more appropriate for manias and paraphilias. These three tests reveal different conceptualizations of insanity, based either on cognitive competence, volitional aptitude, or a combination of both, but it should be noted that the American criminal justice system does not exculpate psychopaths from their wrongdoings. Psychopathy does not constitute a basis for an insanity defense regardless of how severely affected an individual might be.
Currently, the law in the United States does not recognize insanity as a reason to exculpate individuals suffering from psychopathy. Under US law, psychopaths are considered sane because they usually know the norms of conduct expected by society (cognition), and their behavior demonstrates that their volitional capacities are not impaired enough to exculpate them (Jefferson and Sifferd 2018; Litton 2013). In addition, the absence of delusion provides a
sufficient account of rationality to establish the moral and criminal responsibility of psychopaths. As Stephen J. Morse remarks, “unless psychopaths suffer from some other abnormal condition, there is no reason to believe that in general they do not act or cannot form the mental states the law requires when they commit crimes. In short, it will seldom be difficult to establish the psychopath’s prima facie criminal liability” (Morse 2008, 206). This signifies that the scope of criminal law is limited to the nature of certain acts leading to legal blameworthiness, under the assumption that the behavior of psychopaths is grounded in a rationality robust enough to enable the defiance of social norms and values. In and of itself, this type of rationality is sufficient to make a psychopathic individual criminally responsible (Pillsbury 2013). One final point concerns the connection between medical diagnoses and legal pronouncements. In the medical context, the aim of diagnosis is to determine the best course of action for the treatment of the patient, whereas legal experts attempt to identify whether an individual has the capacity to understand the law and act in conformity with it. Hence, it is important to recognize that there is not necessarily a connection between mental disorders and the capacity to commit a crime. The legal category of “disordered or diminished mental processes” should not be conflated with the medical profession’s categorization of mental disorders; many psychiatric conditions are irrelevant to the determination of an individual’s capacity to commit a crime. Conversely, criminal culpability does not necessarily imply that a person suffers from a mental disorder (Jefferson and Sifferd 2018). In addition, legal pronouncements in conjunction with psychiatric diagnoses can be used as instruments to mitigate the moral and legal responsibility of psychopaths.
In legal practice, however, medical evidence has also been applied to sentence psychopaths more harshly. For instance, scoring high on the Hare Psychopathy Checklist can constitute an aggravating factor supporting tougher sentencing, and the diagnosis of psychopathy has been used to justify imposing the death penalty instead of a life sentence (Godman and Jefferson 2017; Edens et al. 2001). The use of psychopathy assessments in the criminal justice system for the determination of sentencing has increased, and therefore there is the potential to label psychopaths as both bad and mad. A recent meta-analysis examined how, in 22 studies,
“psychopathic labeling” influenced the perception of dangerousness, treatment amenability, and legal sentence or sanction (Berryessa and Wohlstetter 2019). Fortunately, the findings indicate that the psychopathic label does not affect the perceptions of judges and probation officers pertaining to punishment. “It is possible,” Berryessa and Wohlstetter write, “that such labeling may not result in unjustly punitive outcomes for offenders due to the psychopathic label in criminal justice contexts related to punishment” (Berryessa and Wohlstetter 2019, 13). Among lay people, however, there seems to be a labeling effect regarding the punishment of psychopaths. These conclusions are to a certain extent reassuring regarding criminal justice, but future studies are warranted to determine whether lay people and actors in the justice system will change their perception of the label “psychopathy” and its implications for sentencing, especially if new modalities of intervention and treatment (neurotechnologies) are embraced by society. As I have outlined in this chapter, psychopaths or individuals with psychopathic traits cannot be deemed incompetent (absent other mental conditions), and therefore do not meet the criteria of a psychopathic patient for whom neurointerventions are warranted. So, in principle, I argue that the use of neurotechnologies to interfere with brain capacities and the content of the mind is unjustified. The intrusion into this personal domain constitutes an unjustified invasion that conveys disrespect and a violation of human dignity, which some scholars equate with torture (Shaw 2018, 335). For this reason, coercive brain interventions, as punishment and as a means of obtaining behavioral change in psychopaths, are inherently objectionable from an ethical standpoint.

Note

1. Duff summarizes the behavior of the psychopath as follows: “He can … operate effectively in a range of contexts: he has desires, and can act to satisfy them; he can to some degree (if he cares to) imitate the discourse of those around him. However, he cannot participate in either the activities or the discourses (the forms of life, one could say) that are informed by
such emotions and values: he cannot but be an outsider, whose point of view remains ‘external’” (Duff 2010, 209).

References

Baccarini, Elvio, and Luca Malatesti. 2017. The Moral Bioenhancement of Psychopaths. Journal of Medical Ethics 43 (10): 697–701. https://doi.org/10.1136/medethics-2016-103537.
Bayertz, Kurt. 2003. Human Nature: How Normative Might It Be? The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine 28 (2): 131–150. https://doi.org/10.1076/jmep.28.2.131.14210.
Berryessa, Colleen M., and Barclay Wohlstetter. 2019. The Psychopathic ‘Label’ and Effects on Punishment Outcomes: A Meta-Analysis. Law and Human Behavior 43 (1): 9–25. https://doi.org/10.1037/lhb0000317.
Birbaumer, Niels, Ralf Veit, Martin Lotze, Michael Erb, Christiane Hermann, Wolfgang Grodd, and Herta Flor. 2005. Deficient Fear Conditioning in Psychopathy: A Functional Magnetic Resonance Imaging Study. Archives of General Psychiatry 62 (7): 799–805. https://doi.org/10.1001/archpsyc.62.7.799.
Blair, R. J. R. 1995. A Cognitive Development Approach to Morality: Investigating the Psychopath. Cognition 57 (1): 1–29.
———. 2010. Neuroimaging of Psychopathy and Antisocial Behavior: A Targeted Review. Current Psychiatry Reports 12 (1): 76–82. https://doi.org/10.1007/s11920-009-0086-x.
Borg, Jana Schaich, and Walter P. Sinnott-Armstrong. 2013. Do Psychopaths Make Moral Judgments? In Handbook on Psychopathy and Law, Oxford Series in Neuroscience, Law, and Philosophy, 107–128. New York, NY: Oxford University Press.
Brink, David O., and Dana K. Nelkin. 2013. Fairness and the Architecture of Responsibility. In Oxford Studies in Agency and Responsibility, Volume 1. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199694853.001.0001/acprof-9780199694853-chapter-13.
Carter, Sarah. 2017. Could Moral Enhancement Interventions Be Medically Indicated? Health Care Analysis 25 (4): 338–353. https://doi.org/10.1007/s10728-016-0320-8.
Christian, Carol, and Lisa Teachy. 2002. Yates Believed Children Doomed. Houston Chronicle, March 6.
Cima, Maaike, Franca Tonnaer, and Marc D. Hauser. 2010. Psychopaths Know Right from Wrong but Don’t Care. Social Cognitive and Affective Neuroscience 5 (1): 59–67. https://doi.org/10.1093/scan/nsp051.
Crutchfield, Parker. 2019. Compulsory Moral Bioenhancement Should Be Covert. Bioethics 33 (1): 112–121. https://doi.org/10.1111/bioe.12496.
Decety, Jean, and Keith J. Yoder. 2016. Empathy and Motivation for Justice: Cognitive Empathy and Concern, but Not Emotional Empathy, Predict Sensitivity to Injustice for Others. Social Neuroscience 11 (1): 1–14. https://doi.org/10.1080/17470919.2015.1029593.
Duff, Antony. 2010. Psychopathy and Answerability. In Responsibility and Psychopathy, ed. Luca Malatesti and John McMillan, 199–212. Oxford: Oxford University Press. https://oxfordmedicine.com/view/10.1093/med/9780199551637.001.0001/med-9780199551637-chapter-011.
Edens, John F., John Petrila, and Jacqueline K. Buffington-Vollum. 2001. Psychopathy and the Death Penalty: Can the Psychopathy Checklist-Revised Identify Offenders Who Represent ‘A Continuing Threat to Society’? The Journal of Psychiatry & Law 29 (4): 433–481. https://doi.org/10.1177/009318530102900403.
Fine, Cordelia, and Jeanette Kennett. 2004. Mental Impairment, Moral Understanding and Criminal Responsibility: Psychopathy and the Purposes of Punishment. International Journal of Law and Psychiatry 27 (5): 425–443. https://doi.org/10.1016/j.ijlp.2004.06.005.
Garland, David. 2001. The Culture of Control: Crime and Social Order in Contemporary Society. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199258024.001.0001/acprof-9780199258024.
Godman, Marion, and Anneli Jefferson. 2017. On Blaming and Punishing Psychopaths. Criminal Law and Philosophy 11 (1): 127–142. https://doi.org/10.1007/s11572-014-9340-3.
Hare, Robert D., and Craig S. Neumann. 2009. Psychopathy: Assessment and Forensic Implications. The Canadian Journal of Psychiatry 54 (12): 791–802. https://doi.org/10.1177/070674370905401202. Harris, Grant T., Tracey A. Skilling, and Marnie E. Rice. 2001. The Construct of Psychopathy. Crime and Justice 28: 197–264. Hart, H.L.A. 1968. Punishment and Responsibility: Essays in the Philosophy of Law. Oxford: Clarendon Press.

7  Punishment, Responsibility, and Brain Interventions 


Jefferson, Anneli, and Katrina Sifferd. 2018. Are Psychopaths Legally Insane? European Journal of Analytic Philosophy 14 (1): 79–96. https://doi.org/10.31820/ejap.14.1.5.
Kennett, Jeanette. 2010. Reasons, Emotion, and Moral Judgement in the Psychopath. In Responsibility and Psychopathy: Interfacing Law, Psychiatry and Philosophy, ed. Luca Malatesti and John McMillan, 243–259. Oxford: Oxford University Press. https://oxfordmedicine.com/view/10.1093/med/9780199551637.001.0001/med-9780199551637-chapter-014.
Kiehl, K.A., A.M. Smith, R.D. Hare, A. Mendrek, B.B. Forster, J. Brink, and P.F. Liddle. 2001. Limbic Abnormalities in Affective Processing by Criminal Psychopaths as Revealed by Functional Magnetic Resonance Imaging. Biological Psychiatry 50 (9): 677–684.
Krupp, Daniel Brian, Lindsay A. Sewall, Martin L. Lalumière, Craig Sheriff, and Grant T. Harris. 2013. Psychopathy, Adaptation, and Disorder. Frontiers in Psychology 4: 139. https://doi.org/10.3389/fpsyg.2013.00139.
Litton, Paul. 2008. Responsibility Status of the Psychopath: On Moral Reasoning and Rational Self-Governance. SSRN Scholarly Paper ID 1310886. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=1310886.
———. 2013. Criminal Responsibility and Psychopathy: Do Psychopaths Have a Right to Excuse? In Handbook on Psychopathy and Law, ed. Kent A. Kiehl and Walter Sinnott-Armstrong, 275–296. Oxford; New York: Oxford University Press. https://papers.ssrn.com/abstract=2296172.
Maibom, Heidi L. 2005. Moral Unreason: The Case of Psychopathy. Mind & Language 20 (2): 237–257. https://doi.org/10.1111/j.0268-1064.2005.00284.x.
Matravers, Matt. 2018. The Importance of Context in Thinking About Crime-Preventing Neurointerventions. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 71–93. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198758617.001.0001/oso-9780198758617-chapter-4.
Morse, Stephen. 1999. Neither Desert nor Disease. Legal Theory 5: 265–309.
———. 2008. Psychopathy and Criminal Responsibility. Neuroethics 1 (3): 205–212. https://doi.org/10.1007/s12152-008-9021-9.
Nadelhoffer, Thomas, and Walter P. Sinnott-Armstrong. 2013. Is Psychopathy a Mental Disease? In Neuroscience and Legal Responsibility, ed. Nicole A. Vincent, 229–255. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199925605.001.0001/acprof-9780199925605-chapter-10.
O'Neill Hayes, Tara. 2020. The Economic Costs of the U.S. Criminal Justice System. AAF, July 16. https://www.americanactionforum.org/research/the-economic-costs-of-the-u-s-criminal-justice-system/.
Pillsbury, Samuel H. 2013. Why Psychopaths Are Responsible. In Handbook on Psychopathy and Law, ed. Kent A. Kiehl and Walter P. Sinnott-Armstrong, 297–318. New York: Oxford University Press. https://papers.ssrn.com/abstract=2313758.
Ryberg, Jesper. 2004. The Ethics of Proportionate Punishment: A Critical Investigation. Library of Ethics and Applied Philosophy. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-1-4020-2554-9.
———. 2018. Neuroscientific Treatment of Criminals and Penal Theory. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 177–195. Oxford: Oxford University Press. https://forskning.ruc.dk/en/publications/neuroscientific-treatment-of-criminals-and-penal-theory.
Shaw, Elizabeth. 2018. Against the Mandatory Use of Neurointerventions in Criminal Sentencing. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 321–337. Oxford: Oxford University Press. https://abdn.pure.elsevier.com/en/publications/against-the-mandatory-use-of-neurointerventions-in-criminal-sente.
Smetana, Judith G., and Judith L. Braeges. 1990. The Development of Toddlers' Moral and Conventional Judgments. Merrill-Palmer Quarterly 36 (3): 329–346.
Yang, Y., A. Raine, P. Colletti, A.W. Toga, and K.L. Narr. 2009. Abnormal Temporal and Prefrontal Cortical Gray Matter Thinning in Psychopaths. Molecular Psychiatry 14 (6): 561–62, 555. https://doi.org/10.1038/mp.2009.12.
Young, Liane, Michael Koenigs, Michael Kruepke, and Joseph P. Newman. 2012. Psychopathy Increases Perceived Moral Permissibility of Accidents. Journal of Abnormal Psychology 121 (3): 659–667. https://doi.org/10.1037/a0027489.

8 Identity Integrity in Psychiatry

In this concluding chapter, I want to elaborate further on why neurointerventions are a problematic means of addressing criminality and psychiatric disorders with moral pathologies or bad behavior. I will proceed in two steps. First, I will show how the push to implement neurotechnologies is a symptom of an anthropological identity crisis and argue that Western culture has lost key markers for protecting the integrity of human identity (in its ontological dimension). Second, I will contend that there is a need to protect the brain and the mind, and I will suggest the concept of Identity Integrity as a framework for protecting our humanity against the potential assault of technology on mental privacy and integrity.

8.1 Technology and the Current Anthropological Identity Crisis

The prioritization of technologism in medicine over its humanistic dimensions has been a source of concern for many decades, even as clinicians' acceptance of emerging technologies has grown in step with their integration into clinical practice. We often hear

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0_8

statements from clinicians complaining that their practice is dehumanizing or that patients are objectified in order to improve healthcare delivery through technological innovation. The potentially damaging impact of technology on the medical profession and patients is not a new phenomenon; it was already noted by the German philosopher Martin Heidegger. In the 1960s, he was invited by the psychiatrist Medard Boss to give a series of lectures to physicians in the town of Zollikon, near Zurich, on the use of his philosophical approach in psychotherapy. He warned his audience that physicians must resist the temptation of "abandoning the field of medicine to scientific technicians" (Heidegger 2006, cited in Svenaeus 2013, 7), which could inadvertently impose on patients "a framework of technology" and result in their objectification (Svenaeus 2013, 6). Hans-Georg Gadamer, addressing this same issue in The Enigma of Health, speaks of the "loss of personhood" which "happens within medical science when the individual patient is objectified in terms of a mere multiplicity of data" (Gadamer 1996, 81). The problem of technology, or the "framework of technology," is not so much what type of technology is implemented in the clinical context but how it undermines the notions of embodiment, or the lived body, as well as personhood in the human encounter between patient and physician. To maintain the anthropological integrity of medicine, Heidegger points out, philosophical inquiry is a vital endeavor for comprehending the various facets of how physicians relate to patients and engage with medical realities regarding (1) the nature of medical knowledge (epistemology); (2) the nature of the doctor-patient relationship (phenomenology); (3) the nature of the good in medicine (ethics); and (4) the nature of health and disease (ontology).
In particular, he warns of the dangers and limitations of the medical-technico-scientific framework prevalent in medicine and underscores the important contribution of philosophical investigation. Heidegger articulated his view prior to the advent of bioethics, so it is difficult to assess retroactively what he would have thought of bioethics as a field of investigation that attempts, to a certain extent, to address questions about the nature of the doctor-patient relationship, the nature of the good in medicine, and the obligations of the medical profession toward society. However, as many scholars have pointed out,
bioethics has, to a certain extent, failed because legal and economic concerns (biopolitics) have been prioritized and bioethical reflection has been sidelined (Jotterand 2005; Thomasma 1997; H.T. Engelhardt 2013; Snead 2020). The transition from the ethical (the good) to the political (the right) has proven inadequate to provide a context for the development of the moral identity of the medical profession, which increasingly faces the challenge of reflecting on the philosophical underpinnings of technological advances in clinical practice, including philosophical anthropology, in addition to the other questions of epistemology, phenomenology, ethics, and ontology. On the question of anthropology and health care, O. Carter Snead has recognized the failure of the field of American public bioethics because it operates with a "flawed anthropological point of departure" (Snead 2020, 64). In his latest book, What It Means to Be Human: The Case for the Body in Public Bioethics (2020), he argues that the current theoretical, as well as legal and political, resources for addressing the complex issues arising in health care (his analysis focuses on topics with anthropological implications such as abortion, assisted reproduction, and death and dying) are limiting and simplistic because the ethical framework converges on autonomy and self-determination (Snead 2020). Snead is skeptical about the ability of American public bioethics to offer a robust vision of human flourishing and identity. Its anthropology does not consider, in a coherent manner, the challenges raised by a human condition characterized by finitude, vulnerability, and dependence. A further issue is the lack of integration of anthropological reflection in medicine, but the challenge runs much deeper than one might think: we observe a balkanization of anthropology within the academy.
Writing in the late 1920s, the German philosopher Max Scheler lamented not only the lack of intellectual dialogue between the disciplines generating anthropological knowledge but also the way the "study of man" itself had become problematic. In Man's Place in Nature, Scheler notes that

in no other period of human knowledge has man ever become more problematic to himself than in our days. We have a scientific, a philosophical, and a theological anthropology that know nothing of each other. Therefore, we no longer possess any clear consistent idea of man. The ever growing
multiplicity of the particular sciences that are engaged in the study of man has much more confused and obscured than elucidated our concept of man. (Scheler 1961, 128)

Scheler's insights are still very much relevant today, if not more so. The loss of personhood in medicine mentioned previously exposes the lack of consideration of the anthropological implications arising from the interactions of humans with devices (brain-computer interfaces) or with embodied AIs (human-machine relationships). The advent of these technologies is reconfiguring how human beings understand themselves ontologically, but also how they understand themselves within their broader natural environment, where Homo sapiens has become both orderer and orderable through the powers of technology. This dual role as agent and subject of the ordering has resulted in an intensifying anthropological identity crisis (Jotterand 2019a; Jotterand and Bosco 2020) that has been scrutinized by the French political analyst Jérôme Fourquet in L'Archipel français: Naissance d'une nation multiple et divisée (2019). He describes the social metamorphosis France has undergone in the last few decades. The country used to have a strongly unified cultural, religious, and political identity that has increasingly eroded, resulting in a major identity transformation with anthropological implications that Fourquet designates as a basculement civilisationnel et anthropologique majeur (a major civilizational and anthropological shift). This shift is characterized by new ways of relating to one's body, the removal of a hierarchy between humans and animals, and the acceptance of practices such as cremation, tattooing, veganism, and IVF. This state of affairs reflects, Fourquet argues, a radical change in how human beings, especially the younger generations, relate to their bodies and a rejection of a Judeo-Christian anthropology.1 These concerns are also raised by feminist thinkers such as Laetitia Pouliquen with regard to the identity of women and how emerging technologies could alter their bodies, their role, and their behavior in society (Pouliquen 2016).
In her book Femme 2.0, she laments that technology threatens le corps féminin (the feminine body) through various increasingly accepted practices and asks provocatively whether la femme should step aside in the face of technology to allow the emergence of la Femme 2.0 (Woman 2.0), which would constitute
a radical departure from the anthropology common to all civilizations since the feminine origins (les féminines origines) of humanity. In her final analysis, she does not argue for a ban on all technologies and practices that could potentially imperil women's role and identity in society but for careful consideration, by all stakeholders, of their anthropological implications and impact on women's identity.2 Within the context of this emerging anthropological revolution, it is worth noting how Persson and Savulescu are willing to make a radical break with past anthropological understandings on questions such as the psychological differences between women and men. On their account, "it seems that in principle we could make men in general more moral by biomedical methods through making them more like the men who are more like women in respect of sympathy and aggression, but without the tendency to social forms of aggression" (Persson and Savulescu 2012, 111–12, 2016). This example illustrates how some proponents of moral bioenhancement are willing to take radical biotechnological steps toward addressing behavioral problems in society, even endorsing the use of state measures to implement covert programs under the guise of a (doubtful) promotion of population health (Crutchfield 2019). An anthropological shift of this magnitude can only take place in a highly technologized context where the boundaries of human biology can be challenged at will and the human body reduced to an object of technological manipulation. To understand how technology is reshaping our conception of what it means to be human, I will examine Heidegger's concept of enframing. I will outline the main tenets of Heidegger's critique of modern technology to demonstrate how modern techno-science, with its focus on utility and marketability, challenges the humanistic dimensions of clinical care and undermines our humanity because it does not serve human ends but aims at increasing political and economic power.

8.1.1 Technology and the Real

Heidegger, in his essay "The Question Concerning Technology," is interested in investigating the essence of technology, which he distinguishes
from technology proper, such as the manufacturing and use of machines, equipment, or tools built by human beings to serve particular needs and ends. For Heidegger, technology admits at least two definitions, an anthropological one and an instrumental one. The anthropological definition holds that technology is simply a human activity, whereas the instrumental definition holds that technology is a means to an end. But Heidegger is interested in neither. He rejects a "merely instrumental" and "merely anthropological" definition of technology because the business of modern technology is to reveal "the actual as standing-reserve," or what he calls enframing, though not in the metaphysical sense, at least not in "The Question Concerning Technology," where he asserts that the definition of technology "may not be rounded out by being referred back to some metaphysical or religious explanation that undergirds it" (Heidegger 1954, 326). Elsewhere, however, and because there can be no technology without science, however primitive it might be, Heidegger ventures into metaphysical considerations. In his essay "Science and Reflection," he makes the provocative statement that "science is the theory of the real" (Heidegger 1977, 157, emphasis mine). Heidegger not only explains how science has achieved the status of "worldview" in modern times; he stresses that it is the worldview. As he notes, "whoever today dares, questioningly, reflectingly, and, in this way already as actively involved, to respond to the profundity of the world shock that we experience every hour, must…pay heed to the fact that our present-day world is completely dominated by the desire to know of modern science…" (Heidegger 1977, 157). Heidegger argues implicitly that science constitutes a metaphysical framework since it provides the elements for understanding the nature, constitution, and structure of reality. This point is corroborated by Alfred I. Tauber.
He makes the observation that "Heidegger purposefully assigned the scientific worldview a firm hold on what constitutes reality…When he observed, 'science is a theory of the real,' he qualified the epistemological standing of science by expanding the definition of science as also encompassing metaphysical concerns" (Tauber 2009, 34–35). When Heidegger states in "The Question Concerning Technology" that "technology is a way of revealing" and not a mere means to an end or a human activity, he in fact affirms that technology (or techno-science in our current context) constitutes a
mode of ordering and seeing reality in the metaphysical sense (Heidegger 1954, 318). This understanding of technology is better captured with reference to what he calls enframing (Gestell in German):

Enframing means the gathering together of the setting-upon that sets upon man, i.e., challenges him forth, to reveal the actual, in the mode of ordering, as standing-reserve. Enframing means the way of revealing that holds sway in the essence of modern technology and that is itself nothing technological. (Heidegger 1954, 325)

The above definition summarizes the nature of his contention in various ways. First, as mentioned previously, technology as a way of revealing constitutes a mode of ordering reality, a metaphysical worldview. Second, Heidegger's concern with technology is not simply about the danger of various technologies in their capacity to cause harm or destruction (e.g., the atomic bomb, autonomous robots) but rather about how technology undermines the very essence of humanity in two ways. The first is through the experience of enframing as a destining of revealing. In Heidegger's description of enframing, destining is paramount. Destining is the open space where people can listen to what the revealing is divulging. However, there is no end to the revealing, for individuals are continuously exposed to the new possibilities offered by modern technology, which unceasingly orders their reality, hence undermining their agency as human beings since the telos of human existence becomes a constantly moving target. As Heidegger puts it, "the destining of revealing holds complete sway over man" (Heidegger 1954, 330). The second way technology undermines the essence of humanity is in how human beings are reduced to "standing-reserve" through the process of revealing. Standing-reserve is the way modern technology reveals the world: the world, or nature, including human beings, their bodies, and their brains, is a place of infinite resources. Technology allows an infinite number of manipulations and enables us to reduce the body to an object of physical science (Zitzelsberger 2004) in which the subject is taken out of the object, leading to the objectification of human beings. Hence the destining of revealing threatens human beings in their very essence. Technology places human beings in the position of orderers of the world,
which includes themselves among the objects of manipulation and control. Heidegger contends that this state of affairs is detrimental to human beings because the essence of technology has replaced the essence of human beings. Although technological development was supposed to improve the human condition and provide a better understanding of who we are as human beings, in a perverse way technology has sidestepped human considerations. "In truth," Heidegger writes, "…precisely nowhere does man today any longer encounter himself, i.e., his essence. Man stands so decisively in subservience … and fails in every way to hear in what respect he ek-sists, in terms of his essence, in a realm where he is addressed, so that he can never encounter only himself" (Heidegger 1954, 332). This insight is the very core of the anthropological identity crisis Western culture is facing. Technology is not simply wounding our humanity but afflicting our essence as human beings. As we have tried to master nature and our own essence, we have in fact undermined our own ontological status in nature, where human beings as subjects are now reduced to objects, challenging even notions of embodiment. The body is viewed as a biological object of manipulation as opposed to a way of being, perceiving, and interpreting in the world that is constitutive of one's identity. Snead observes that the anthropological vision projected in our contemporary culture sees the person as an agent of self-determination seeking to pursue future plans of his liking, whereas the body (or the natural world) is reduced to a mere vehicle for fulfilling one's projects. Commenting on American public bioethics and its vision of human identity, he notes that "the anthropology of American public bioethics begins with the premise that the fundamental unit of human reality is the individual person, considered as separate and distinct from … [any] web of social relations.
… Persons are identified and defined by the exercise of their will" (Snead 2020, 69–70). Such an anthropology prioritizes the cognitive, in terms of wants and desires (the mind), over any notion of embodiment with its limitations and frailties. The dualism at play, which differentiates the mind from the body, is the necessary condition for justifying the enhancement of our body, brain, and mind. Enhancement is acceptable and permissible, if not encouraged, since the body, the brain, and the mind have no ontological status. They are, to use Heidegger's
terminology, simply standing-reserve, waiting to be exploited toward ends and goals beyond human considerations (i.e., a posthuman future). To sum up, my analysis indicates that the question concerning the essence of technology is not about technology itself but rather about what Hubert Dreyfus categorizes as the "technological way of being" (Dreyfus 2003, 54). So, while we should certainly be concerned about powerful technologies that can have harmful and destructive consequences for human beings and their environment, or escape human control (e.g., artificial intelligence, autonomous intelligent robots, weapons of mass destruction), there might be an even more profound source of distress and danger: the leveling effect of technology on our understanding of being, as well as the restrictions it imposes on our way of thinking (Dreyfus 2003, 55). This point can be illustrated with reference to anthropo-technological devices (such as bionic legs, bionic arms, or any kind of nonbiological entity attached to or incorporated into the human body) and human dignity. The biological presupposes biodiversity and uniqueness, and consequently irreplaceability, whereas the technological has a leveling effect, that is, it leads to uniformity. The nature of technology leans toward uniformity and standardization (Jotterand 2010).

8.2 Homo Sapiens Interacting with Machines

To illustrate how technology is increasingly intruding into the human body's sphere, this section briefly explores some of the ways human beings engage and interact with technology. At the outset, it should be noted that humans have always interacted with technology. What is unique in the twenty-first-century context, in light of current technological sophistication, is the nature, the level of emotional and psychological attachment, and the degree of intimacy of the interaction between humans and technology (e.g., humanoid robots and intelligent devices), as well as how these technologies are becoming integral to human identity and its embodied experience. As a society, we continue to wrestle with fundamental questions such as whether human nature constitutes a relevant concept for defining what it means to be human, the boundaries of human dignity, and the essence of human flourishing. There are also
concerns about the future of the human species—whether it will be technologically determined—and the meaning of human nature or what it means to be human. More importantly, reflection must focus on how technology is reshaping the very notions of human agency and responsibility (critical in the context of the use of neurotechnologies to alter and control the behaviors of individuals suffering from psychiatric disorders with moral pathologies), and on how human beings relate to sophisticated human-like entities in practices that used to be the exclusive prerogative of human relationships. The reach of technology's influence on human existence has expanded to the point where the imperatives of pragmatism and the logical outcome of utilitarianism are promoted, thereby relegating concerns about human identity and human ontology to the margins of ethical reflection. Two cases related to sex robots exemplify this trend. The first is Nancy S. Jecker's promotion of sex robots in the context of older adults with disabilities. She writes:

When older people cannot reciprocate sexually, their capability for affiliating diminishes. In such instances, sex robots can be a lifeline to human intimacy, with fewer side effects and risks than alternatives, such as medication or invasive procedures. Sex robots also play an important role as friends and companions for socially isolated people. More than any other age group, older adults are at risk of having no one. While it might initially seem far-fetched to suggest that sex robots could offer a viable substitute for humans, people already perceive and treat robots not just as machines but also as companions and partners. Unlike sex devices that function merely to enhance sexual pleasure, people bond to sex robots and feel close to them. Sex robots create the possibility not just of sexual pleasure but also of sexual relationships and interpersonal intimacy. (Jecker 2020)

It might be the case that sex robots have fewer side effects and risks, but it is far from clear what long-term psychological effects such interactions might have on people in general, and on older adults with disabilities in particular. The promotion of sex robots as "friends and companions for socially isolated people" degrades human relationships to simplistic and likely unfulfilling exchanges. Friendship and companionship require a level of complex interaction to develop a sense of psychological safety,
acceptance, intimacy, and emotional attachment beyond mere aesthetic attraction. We should also doubt that sex robots "could offer a viable substitute for humans" because "people bond to sex robots and feel close to them," hence developing "interpersonal intimacy." The human mind, the nature of human relationships, and the psychological dimensions of human sexuality are unlikely to be replicated in robots. Ontologically, robots will never be able to experience an emotion such as empathy in its full dimension since robots, by definition, are not biological entities and hence cannot empathize with the realities of the human condition. Another example is the use of child sex robots as a deterrent to child molestation. The roboticist Ronald Arkin claimed in 2014 that, just as methadone is used to treat people addicted to heroin, child sex robots could likewise be used to treat individuals with pedophilic predilections (Danaher 2019). Not surprisingly, Arkin's proposal has encountered opposition, for good reasons. In the United States, Congressman Dan Donovan of New York's 11th District successfully passed the CREEPER Act,3 now the so-called Curbing Realistic Exploitative Electronic Pedophilic Robots Act 2.0 (CREEPER Act 2.0).
The bill "prohibit[s] the importation or transportation of child sex dolls, and for other purposes" and their possession, on the basis that

there is a correlation between possession of the obscene dolls, and robots, and possession of and participation in child pornography … the robots can have settings that simulate rape … the dolls and robots not only lead to rape, but they make rape easier by teaching the rapist about how to overcome resistance and subdue the victim … [f]or users and children exposed to their use, the dolls and robots normalize submissiveness and normalize sex between adults and minors … the dolls and robots are intrinsically related to abuse of minors, and they cause the exploitation, objectification, abuse, and rape of minors. (Buchanan 2020)

While politicians in the United States and in other countries like the UK (Danaher 2019) have been adamant about putting the right legislation in place, other countries like Japan have allowed the sale and purchase of lifelike child sex dolls. The company Trottla, founded by Shin Takagi in Tokyo, has been selling these dolls for more than a decade. According
to a 2016 interview conducted by Roc Morin, "Trottla has shipped anatomically-correct imitations of girls as young as five to clients around the world" (Morin 2016). Whether these dolls can effectively treat pedophilia is beyond the scope of my analysis, but it is paramount to consider the feedback Takagi recounts receiving from his customers. Citing Morin again, Takagi states that

most of his clients are men living alone … The system of marriage is no longer working. … While most people buy dolls for sexual reasons, that soon changes for many of them. They start to brush the doll's hair or change her clothing. Female clients buy the dolls to remind them of their past, or to reimagine an unfortunate childhood. Many of them begin to think of the dolls as their daughters. That's why I never allow myself to be photographed. I want to prevent them from seeing me as the father of the dolls. (Morin 2016)

These two examples demonstrate a radical shift in how we relate to each other as human beings, and also how, due to pragmatic and existential factors, individuals are willing or forced to seek other sources of relational and emotional comfort. The rise of sex robots is a symptom of a deeper problem in human relationships and societal cohesion. While some of these technologies might afford short-term solutions, it is essential to develop a long-term vision of human relationships and the structure of human societies. The development of ethical frameworks, laws, and policies for the responsible development and implementation of emerging neurotechnologies in social and medical contexts is an essential step toward harnessing their benefits. Before we can confidently move forward, however, the fundamental question remains of what anthropological framework, values, and ethical norms are guiding not only those who develop these technologies but also their recipients and, most importantly, what moral and anthropological provisions shape how we ought to relate to these machines. Without questioning the anthropological framework directing the current development of technology and without articulating the nature and ends of our relationship to these entities, we run the risk of leading humanity into an existential and moral abyss. We are actors, spectators, and recipients of the technological

8  Identity Integrity in Psychiatry 

205

spectacle that is unfolding before our eyes. We might be amazed by the wonders of human ingenuity and the level of sophistication of current technologies, but it is imperative to firmly ground our reflections on an anthropological framework that will help direct and support ethical and legal decisions.

8.3 Identity Integrity

Against this backdrop, I want to return to the use of neurotechnology and brain interventions, as some have argued that neurointerventions can be part of sentencing, or that such interventions could represent an alternative to retributivism (punishment) in order to promote rehabilitationism (treatment) (Baccarini and Malatesti 2017; Greely 2008; Douglas 2008). I will argue that any mandated or coercive brain intervention should be rejected on the basis that it violates identity integrity, even in the case of individuals uttering deviant or criminal thoughts. This principle is grounded in the principle of cognitive liberty, which the Center for Cognitive Liberty & Ethics (CCLE) defines as "the right of each individual to think independently and autonomously, to use the full spectrum of his or her mind, and to engage in multiple modes of thought." As I will explain shortly, cognitive liberty is too limited in scope because it focuses on the mind as opposed to the mind and the brain. Identity integrity, by contrast, aims at the protection, preservation, and restoration (in the clinical context) of psychological continuity (the mind) while acknowledging the embodied identity of individuals (the brain), as opposed to an identity based only on psychological continuity. It is also committed to the view that the body represents an essential repository for making sense of our psychological states, since we are "embodied repositories of integrated psychological states" (Radden 1996, 11). In addition to embodiment, personal identity involves intentionality (how mental states situate the lived body's location in space and time); local rationality (which depicts human beings as rational agents); respect for autonomy (as foundational for human existence and flourishing); identity (the character traits or personality of an individual); and a formal notion of self (one's ability to construct a sense of self) (Perry 2009). Additional complementary notions,
drawn from psychiatrist John Sadler, include agency (the capacity to act in the world as an agent, physically and mentally); identity (oneself as distinct from others); trajectory (a sense of purpose); history (how the self grasps temporal dimensions to provide a sense of past, present, and future); and perspective (a unique viewpoint and experience of the outside world) (Sadler 2007). These attributes of the self highlight the uniqueness of individuals in their physical and mental dimensions. Elsewhere, I have referred to the argument from uniqueness to support human dignity as an intrinsic quality of human beings with regard to their unique biological identity (genetic makeup), agency, moral identity, mental capacities, and construct of self (Jotterand 2018). My analysis is based on Holmes Rolston's work (Rolston 2008). He provides four categories that describe the process of individuation:

1. ideational uniqueness, which refers to the mental capacities of humans to process and interpret "thoughts, ideas, and symbolic abstractions," which provide meaning and orientation in life;
2. idiographic uniqueness, which is the capacity of humans to act in the world as agents and thus create their biographical narrative;
3. existential uniqueness, which is the ability to experience a "phenomenological 'I'" with regard to agency and responsibility; and
4. ethical uniqueness, which implies the capacity to reflect on the nature of right and wrong and to act accordingly.

These four elements constitute the basis of personal identity in its unique dimensions, as attributed to a distinctive individual. Two features that characterize human dignity need further clarification. At the foundational level, human beings have an inalienable moral status in the world regardless of developmental stage, demeanor, or bodily and mental capacity (ontological status). At a more practical level, an individual can lose his or her dignity as the result of bad behavior (e.g., psychopaths), but this does not mean that we can treat the person as if he or she lacked dignity in the ontological sense. Hence, "a person's dignity resides in his or her biologically and socially constructed psychosomatic self with an idiographic proper-named identity" (Rolston 2008, 129). On this account, dignity does not apply to animals or things, since it requires a
process of individuation leading to personal identity that only human beings are able to accomplish. In addition, what we usually call human nature, including its biological dimension, is integral to the integrity of personhood and constitutes a "normative boundary" crucial to avoiding an implicit dualism of mind and body; it is worth preserving and protecting (Weiss 2014; Jotterand 2018). The protection of the brain and the mind becomes even more significant in light of recent advances in neuroscience and neurotechnology. Indeed, there is increasing concern about the potential use of neurotechnologies to manipulate or alter brain functions and human behavior (Jotterand 2011, 2014, 2016, 2017; Jotterand and Levin 2017; Jotterand and Bosco 2020; Shaw 2018; Sirgiovanni and Garasic 2020; Hübner and White 2016). In recent years, we have observed a rapid development of neurotechnologies, which some see as a welcome development. But the growing sophistication of neurointerventions has called into question whether existing regulations and ethical frameworks at the national level (e.g., the United States Bill of Rights, especially the 4th, 5th, and 8th Amendments) and at the international level (the Universal Declaration of Human Rights adopted by the United Nations General Assembly; the European Convention for the Protection of Human Rights and Fundamental Freedoms) provide the protection needed against invasive brain-computer interfaces (BCIs) aimed at the alteration of brain functions and mental states. Consider the following application of BCIs. The technology was used to record the brain data of a mouse eating. The same data was subsequently employed to reactivate and stimulate the region of the brain where the data had been collected, which caused the mouse to eat again against its will. The same procedure has been used to implant artificial memories and images into the brain of a mouse, creating hallucinations and false memories of fear indistinguishable from reality (Yuste et al. 2021; Goering et al. 2021). Building on this premise, namely the idea that our thoughts, memories, perceptions, imaginations, decisions, and emotions are the result of neural activity in the brain, some researchers note that "for the first time in history, we are facing the real possibility of human thoughts being decoded or manipulated using technology" (Yuste et al. 2021, 155). The prospect of implementing this technology in humans has raised concerns
among scientists, philosophers, ethicists, policy makers, and politicians, who have argued for the necessity of creating neurorights based on existing human rights (Goering et al. 2021; Goering and Yuste 2016; Ienca and Andorno 2017; Yuste et al. 2017; Yuste et al. 2021). These neurorights include (a) the right to personal identity, (b) the right to free will, (c) the right to mental privacy, (d) the right to equal access to mental augmentation, and (e) the right to protection from algorithmic bias (Neurorights Initiative).4 In the context of my analysis of moral bioenhancement techniques and their use to alter or manipulate the behavior of individuals suffering from psychiatric disorders with moral pathologies, I will consider specifically the right to personal identity, as it constitutes the basis on which the dimensions outlined in the other neurorights depend. Without a sense of personal identity, discussions of free will and mental privacy, for instance, make little sense, since both are manifestations of the identity of a person. The right to personal identity states that "boundaries must be developed to prohibit technology from disrupting the sense of self. When Neurotechnology connects individuals with digital networks, it could blur the line between a person's consciousness and external technological inputs." The key words here are "disrupting the sense of self," which can be caused by various factors such as disease, traumatic events, therapeutic interventions, psychopharmacology, and, most importantly, interventions in the brain through neurotechnologies to alter or manipulate behavior. In addition to the causes of disruption, we ought also to consider the potential intentions of the different actors seeking the disruption of the sense of self. For instance, an individual might want to intentionally alter his sense of self or personal identity through the implementation of neurotechnologies available on the market (consumer neurotechnologies). In other instances, brain interventions in the clinical setting might be warranted to mitigate pathological signs and symptoms associated with neurological disorders such as Alzheimer's disease or Parkinson's disease (Jotterand and Giordano 2011; Jotterand et al. 2019). Finally, there are cases that involve unsolicited and covert brain interventions by third parties such as the government, the military, or private companies, or interventions mandated by the court and the state as a form of punishment or treatment, or as a preventive measure. The last type of possible intervention
(mandated by the court or the state) is the most problematic and deserves critical appraisal. The punishment, whether deserved or undeserved, of a person assessed on potential risks of offense clearly violates the prohibition against using any technology that would disrupt the sense of self, as stated by the right to personal identity. The United States Bill of Rights likewise contains language that forbids particular types of punishment and the overreach of the government and other entities into the private sphere of citizens: the 8th Amendment of the US Constitution prohibits "cruel and unusual punishments"; the 4th Amendment secures the right of the people "against unreasonable searches and seizures"; and the 5th Amendment grants the right to remain silent, that is, the government cannot compel an individual to provide incriminating information. In Europe, the European Convention for the Protection of Human Rights and Fundamental Freedoms prohibits "inhuman or degrading treatment or punishment." Often, the justification of rehabilitation is couched in terms of "treating criminals," a communication stratagem conveying that these interventions necessarily address a pathology and that the recipient's mental and physical health will therefore improve. The assumption is that the psychopath is inherently a patient, but these "treatments" are never just treatment. They are constitutive of a broader scheme of sentencing that includes neurointerventions. So, before any claim can be made about the adequacy of procedures in the brain, we need to distinguish sentencing qua punishment from sentencing qua treatment and determine whether either type is acceptable and whether they are connected. On the one hand, neuroscientific treatments can take the form of a mandatory scheme where treatment is offered instead of prison. In this case, punishment and treatment are conflated, raising questions about (1) justice: does treatment constitute a form of punishment, and if so, does it meet the standards of justice? and (2) equivalence: since neurointerventions for the alteration and control of behavior are still investigational, is it in the best interest of an offender to undergo an experimental procedure with potentially unknown implications for one's personality? On the other hand, treatment can be offered in exchange for a reduced sentence. In this case, we need to evaluate how penal reductions represent, potentially, a form of coercion that vulnerable individuals might
not be able to resist. It is well established that certain forms of punishment should never be implemented if they are degrading, humiliating, cruel, or inhumane, or if they undermine the dignity of a person. Graham Zellick captures the essence of degrading punishment, which is

inescapably humiliating and debasing beyond the normal limits of punishments, such that it reduces the essential humanity and dignity of the victim, leaving him with a feeling not simply that he has suffered discomfort or inconvenience or worse as a result of wrongdoing but that he has been reduced in status and subjected to an indignity incompatible with the status of man. (Zellick 1978, 669, cited in Ryberg 2018)

The form of treatment offered is likely to produce some type of humiliation and debasement, since these interventions are meant to change the personality and behavior of the recipient. In essence, it undermines the ability of an individual to fully exhibit agency and autonomy, and it also objectifies the person, reducing him to a lesser status. So, from a pragmatic and utilitarian standpoint, the promotion and implementation of neurointerventions in criminal sentencing might sound reasonable, if not legally required. From an ethical and philosophical perspective, on my account, they should be rejected in principle because they undermine the humanity of the recipient with respect to bodily and mental integrity. They violate human dignity. To press this point further, it is worth examining the Basic Ethical Principles in Bioethics and Biolaw (1995–1998), published by the European Commission. In that document, dignity conveys two distinct but interrelated dimensions: autonomy and intrinsic value. The report makes it clear that the loss of autonomy does not constitute a loss of dignity:

Dignity should not be reduced to autonomy. It says more. Although originally a virtue of outstanding persons and a virtue of self-control in healthy life—qualities which can be lost, for instance by lack of responsibility or in extreme illness—it has been universalized as a quality of the person as such. … [I]t expresses the outstanding position of the human individual in the universe as being capable of both autonomy in rational action and involvement in a good life for and with the other in just institutions. (European Commission 1999)

In the above statement, a lack of responsibility due, for instance, to a psychiatric disorder does not undermine dignity and does not constitute a reasonable justification for coercive brain interventions. Bypassing the autonomy of a psychopath on the basis of an alleged lack of responsibility, or more precisely diminished responsibility, and intervening in the brain to alter and control bad behavior or prevent future offenses communicates a lack of respect for the humanity of that person. Hence, it is important to affirm the equality of human beings as stated by Article 11 of the Universal Declaration on Bioethics and Human Rights (UDBHR): "no individual or group should be discriminated against or stigmatized on any grounds, in violation of human dignity, human rights and fundamental freedoms." There is a danger in labeling psychopaths (and, for that matter, other individuals with behavioral problems) as irredeemable human beings in need of some kind of moral bioenhancement, because doing so treats them as second-class human beings whose humanity can be violated through mandatory neurointerventions regardless of whether they are morally and legally responsible. Protecting the humanity and personal identity of the psychopath, however, cannot be limited to psychological dimensions, that is, the connectedness between consciousness (the ability to connect the past and the present), memory (facts about the past and the present), intentions and desires, and other psychological features (character traits, etc.). These mental properties are constitutive of personal identity and therefore must be protected to preserve the mental integrity of one's sense of self. But this is only one dimension of personal identity. Mental integrity focuses on the datafied brain, or detectable neural activity, but omits that the mind likewise requires protection. The term mental refers to the physical space where neural activity in the brain takes place and to the capacity to respond to external reality (capacity), whereas the mind refers to the ability to reason according to a set of beliefs, values, and so on (content). Both the brain and the mind must be protected, as both are constitutive of one's sense of self. For the reasons I have outlined in this work, I believe that the notion of identity integrity represents a fuller and more adequate description of the human condition. Identity integrity aims at the preservation (and restoration, in the clinical context) of psychological continuity and acknowledges the
embodied identity of individuals, as opposed to an identity based only on psychological continuity. On the one hand, individuals have mental capacities and character traits that enable them to engage with the external world, including communicating with other individuals and interacting with objects, to create meaning in their lives in relation to others, as "we are never more (and sometimes less) than the co-authors of our own narratives" (MacIntyre 1984, 213). On the other hand, our embodied reality cannot escape the biological dimensions of our existence. Notions of space and time are experienced, interpreted, and internalized through the various sensory capacities of our bodies. It is through the body that we inhabit the world, engage with it, and make sense of it as we interpret our senses of touch, sight, smell, taste, and hearing (Jotterand 2019b). The corporeal and psychological dimensions of personal identity need to be protected, especially when dealing with vulnerable individuals suffering from mental conditions with moral pathologies. What Taylor calls qualified horizons, grounded on moral frameworks, are essential to human agency and, by extension, personal identity. Stepping outside these boundaries, or the coercive disruption of one's sense of agency, would damage human personhood (Taylor 1989). This point becomes even more salient in light of the prospect of neurotechnologies able to decode or manipulate human thoughts. If neurotechnology provides a means to connect individuals with digital networks and could result (at this point, to my knowledge, a highly speculative prospect) in the creation of hallucinations and false memories of fear that are indistinguishable from reality, it is not far-fetched to envision neurointerventions to alter and manipulate human behavior. The endorsement of moral bioenhancement, or of any form of brain intervention to address crime or societal problems, simply denigrates the humanity of people (albeit psychopathic ones) by undermining their ability to exercise their autonomy toward good behavior. Four decades ago, the principles of the so-called biological revolution were promoted in the hope of resolving many ailments associated with the human condition. The enterprise failed to deliver what it promised, as Anne Harrington notes in her recent book Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness. She writes,

for make no mistake: today one is hard-pressed to find anyone knowledgeable who believes that the so-called biological revolution of the 1980s made good on most or even any of its therapeutic and scientific promises. Criticism of the enterprise has escalated sharply in recent decades. It is now increasingly clear to the general public that it overreached, overpromised, overdiagnosed, overmedicated, and compromised its principles. (Harrington 2019, xiv)

From a troubled search to a troubled past is not a big leap: psychiatry has a troubled history with the use of neurotechnologies. The field, as well as all other stakeholders, should carefully appraise any willingness to embrace novel neurotechnologies to address psychiatric conditions with moral pathologies, or to promote them to enhance, alter, and manipulate the moral stamina of the masses. A better understanding of how morality occurs in the brain, and of the brain regions associated with moral behavior, might emerge in the future and shed light on the nature of psychiatric conditions like psychopathy. However, if the account of morality I have articulated in this book is accurate, it is unlikely that the moral bioenhancement agenda will ever be needed.

Notes

1. Fourquet explains the shift in French culture, and we could say Western culture, in these terms: "…the rise of phenomena as distinct as cremation, tattooing, animalism, and veganism should not be analyzed as mere fashions, but as symptoms of a major civilizational and anthropological shift. Through these new practices, whole swaths of the Judeo-Christian frame of reference, whether concerning the relationship to the body or the hierarchy between humans and animals, appear undermined and obsolete. While the oldest generations still remain faithful to this age-old matrix, the majority of the younger generations adhere to this new post-Christian vision of the world" (Fourquet 2019, 15).
2. Pouliquen, in light of how emerging technologies could transform the bodies of women and their role in society, asks: "[W]hat will this Woman 2.0 be, and do we desire this vision of woman? How will technologies transform feminine behaviors? … [M]ust the woman's body efface itself before technology? From the sexless woman to the 'chipped' and 'robotized' woman, and then to pure artificial intelligence, does Woman 2.0 mark the end of woman? Must we resist the lure of technology if there is danger for woman? Will the reign of the material and the corporeal inevitably lead to abuses and to attacks on the dignity of woman? … Will Woman 2.0 be in total rupture with the anthropology common to all civilizations since the feminine origins of humanity? … [L]et us ask that an honest debate be held at the international level, in order to bring political leaders and technology investors to understand the anthropological stakes, including that of affirming feminine identity" (Pouliquen 2016, 23–24; 174).
3. CREEPER stands for "Curbing Realistic Exploitative Electronic Pedophilic Robots."
4. Ienca and Andorno list the following neurorights: (1) the right to cognitive liberty; (2) the right to mental privacy; (3) the right to mental integrity; and (4) the right to psychological continuity (Ienca and Andorno 2017).

References

Baccarini, Elvio, and Luca Malatesti. 2017. The Moral Bioenhancement of Psychopaths. Journal of Medical Ethics 43 (10): 697–701. https://doi.org/10.1136/medethics-2016-103537.
Buchanan, Vern. 2020. Text—H.R.8236—116th Congress (2019–2020): CREEPER Act 2.0. September 14. https://www.congress.gov/bill/116th-congress/house-bill/8236/text.
Crutchfield, Parker. 2019. Compulsory Moral Bioenhancement Should Be Covert. Bioethics 33 (1): 112–121. https://doi.org/10.1111/bioe.12496.
Danaher, John. 2019. Regulating Child Sex Robots: Restriction or Experimentation? Medical Law Review 27 (4): 553–575. https://doi.org/10.1093/medlaw/fwz002.
Douglas, Thomas. 2008. Moral Enhancement. Journal of Applied Philosophy 25 (3): 228–245. https://doi.org/10.1111/j.1468-5930.2008.00412.x.
Dreyfus, Hubert. 2003. Heidegger on Gaining a Free Relation to Technology. In Readings in the Philosophy of Technology, ed. David M. Kaplan, 53–62. Lanham: Rowman & Littlefield Publishers.
Engelhardt, H. Tristram. 2013. Bioethics as a Liberal Roman Catholic Heresy: Critical Reflections on the Founding of Bioethics. In The Development of Bioethics in the United States, Philosophy and Medicine, ed. Jeremy R. Garrett, Fabrice Jotterand, and D. Christopher Ralston, 55–76. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-4011-2_5.
European Commission. 1999. Basic Ethical Principles in Bioethics and Biolaw 1995–1998.
Fourquet, Jérôme. 2019. L'archipel français: Naissance d'une nation multiple et divisée. Paris: Editions du Seuil.
Gadamer, Hans-Georg. 1996. The Enigma of Health: The Art of Healing in a Scientific Age. Translated by Jason Gaiger and Nicholas Walker. 1st ed. Stanford: Stanford University Press.
Goering, Sara, and Rafael Yuste. 2016. On the Necessity of Ethical Guidelines for Novel Neurotechnologies. Cell 167 (4): 882–885. https://doi.org/10.1016/j.cell.2016.10.029.
Goering, Sara, Eran Klein, Laura Specker Sullivan, Anna Wexler, Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, et al. 2021. Recommendations for Responsible Development and Application of Neurotechnologies. Neuroethics, April. https://doi.org/10.1007/s12152-021-09468-6.
Greely, Henry T. 2008. Neuroscience and Criminal Justice: Not Responsibility but Treatment. Kansas Law Review 56 (5): 1103–1138. https://doi.org/10.17161/1808.20016.
Harrington, Anne. 2019. Mind Fixers: Psychiatry's Troubled Search for the Biology of Mental Illness. 1st ed. New York: W. W. Norton & Company.
Heidegger, Martin. 1954. The Question Concerning Technology. In Basic Writings, ed. David Farrell Krell, 311–341. New York: Harper & Row.
———. 1977. The Question Concerning Technology, and Other Essays. New York: Garland Pub.
———. 2006. Zollikoner Seminare: Protokolle—Zwiegespräche—Briefe. Edited by Medard Boss. 3rd ed. Frankfurt am Main: Verlag Vittorio Klostermann.
Hübner, Dietmar, and Lucie White. 2016. Neurosurgery for Psychopaths? An Ethical Analysis. AJOB Neuroscience 7 (3): 140–149. https://doi.org/10.1080/21507740.2016.1218376.
Ienca, Marcello, and Roberto Andorno. 2017. Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sciences, Society and Policy 13 (1): 5. https://doi.org/10.1186/s40504-017-0050-1.
Jecker, Nancy S. 2020. Nothing to Be Ashamed of: Sex Robots for Older Adults with Disabilities. Journal of Medical Ethics, November. https://doi.org/10.1136/medethics-2020-106645.
Jotterand, Fabrice. 2005. The Hippocratic Oath and Contemporary Medicine: Dialectic between Past Ideals and Present Reality? The Journal of Medicine and Philosophy 30 (1): 107–128. https://doi.org/10.1080/03605310590907084.
———. 2010. Human Dignity and Transhumanism: Do Anthro-Technological Devices Have Moral Status? The American Journal of Bioethics: AJOB 10 (7): 45–52. https://doi.org/10.1080/15265161003728795.
———. 2011. 'Virtue Engineering' and Moral Agency: Will Post-Humans Still Need the Virtues? AJOB Neuroscience 2 (4): 3–9. https://doi.org/10.1080/21507740.2011.611124.
———. 2014. Psychopathy, Neurotechnologies, and Neuroethics. Theoretical Medicine and Bioethics 35 (1): 1–6. https://doi.org/10.1007/s11017-014-9280-x.
———. 2016. Moral Enhancement, Neuroessentialism, and Moral Content. In Cognitive Enhancement: Ethical and Policy Implications in International Perspectives, ed. Fabrice Jotterand and Veljko Dubljević. Oxford; New York: Oxford University Press.
———. 2017. Cognitive Enhancement of Today May Be the Normal of Tomorrow. In Neuroethics: Anticipating the Future, ed. Judy Illes. Oxford University Press.
———. 2018. The Boundaries of Legal Personhood. In Human, Transhuman, Posthuman: Emerging Technologies and the Boundaries of Homo Sapiens, ed. M. Bess and P.D. Walsh. Farmington Hills: Macmillan Reference USA.
———. 2019a. How AI Could Exacerbate Our Incumbent Anthropological Identity Crisis. Bioethica Forum 12 (1/2): 58–59.
———. 2019b. Personal Identity, Neuroprosthetics, and Alzheimer's Disease. In Intelligent Assistive Technologies for Dementia, ed. Fabrice Jotterand, Marcello Ienca, Tenzin Wangmo, and Bernice Elger, 188–202. Oxford; New York: Oxford University Press.
Jotterand, Fabrice, and Clara Bosco. 2020. Keeping the 'Human in the Loop' in the Age of Artificial Intelligence. Science and Engineering Ethics, July. https://doi.org/10.1007/s11948-020-00241-1.
Jotterand, Fabrice, and James Giordano. 2011. Transcranial Magnetic Stimulation, Deep Brain Stimulation and Personal Identity: Ethical Questions, and Neuroethical Approaches for Medical Practice. International Review of Psychiatry 23 (5): 476–485. https://doi.org/10.3109/09540261.2011.616189.
Jotterand, Fabrice, and Susan B. Levin. 2017. Moral Deficits, Moral Motivation and the Feasibility of Moral Bioenhancement. Topoi, April: 1–9. https://doi.org/10.1007/s11245-017-9472-x.
Jotterand, Fabrice, Marcello Ienca, Tenzin Wangmo, and Bernice S. Elger, eds. 2019. Intelligent Assistive Technologies for Dementia: Clinical, Ethical, Social, and Regulatory Implications. Oxford; New York: Oxford University Press.
MacIntyre, Alasdair. 1984. After Virtue: A Study in Moral Theory. 2nd ed. Notre Dame, IN: University of Notre Dame Press.
Morin, Roc. 2016. Can Child Dolls Keep Pedophiles from Offending? The Atlantic, January 11. https://www.theatlantic.com/health/archive/2016/01/can-child-dolls-keep-pedophiles-from-offending/423324/.
Perry, John. 2009. Diminished and Fractured Selves. In Personal Identity and Fractured Selves, ed. D.J.H. Mathews, H. Bok, and P.V. Rabins, 129–162. Baltimore: Johns Hopkins University Press.
Persson, Ingmar, and Julian Savulescu. 2012. Unfit for the Future: The Need for Moral Enhancement. 1st ed. Oxford University Press.
———. 2016. Moral Bioenhancement, Freedom and Reason. Neuroethics 9 (3): 263–268. https://doi.org/10.1007/s12152-016-9268-5.
Pouliquen, Laetitia. 2016. Femme 2.0. Le Coudray-Macouard: Saint-Léger Editions.
Radden, Jennifer. 1996. Divided Minds and Successive Selves: Ethical Issues in Disorders of Identity and Personality. Cambridge, MA: The MIT Press.
Rolston, Holmes, III. 2008. Human Uniqueness and Human Dignity: Persons in Nature and the Nature of Persons. In Human Dignity and Bioethics: Essays Commissioned by the President's Council on Bioethics, ed. Edmund D. Pellegrino, Adam Schulman, and Thomas W. Merrill, 129–153. Washington, DC: President's Council on Bioethics.
Ryberg, Jesper. 2018. Neuroscientific Treatment of Criminals and Penal Theory. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 177–195. Oxford: Oxford University Press.
Sadler, John Z. 2007. The Psychiatric Significance of the Personal Self. Psychiatry 70 (2): 113–129. https://doi.org/10.1521/psyc.2007.70.2.113.
Scheler, M. 1961. Man's Place in Nature. Boston: Beacon Press.
Shaw, Elizabeth. 2018. Against the Mandatory Use of Neurointerventions in Criminal Sentencing. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 321–337. Oxford: Oxford University Press.
Sirgiovanni, Elisabetta, and Mirko Daniel Garasic. 2020. Commentary: The Moral Bioenhancement of Psychopaths. Frontiers in Psychology 10 (January). https://doi.org/10.3389/fpsyg.2019.02880.
Snead, O. Carter. 2020. What It Means to Be Human: The Case for the Body in Public Bioethics. Cambridge, MA: Harvard University Press.
Svenaeus, Fredrik. 2013. The Relevance of Heidegger's Philosophy of Technology for Biomedical Ethics. Theoretical Medicine and Bioethics 34 (1): 1–15. https://doi.org/10.1007/s11017-012-9240-2.
Tauber, Alfred I. 2009. Science and the Quest for Meaning. Waco, TX: Baylor University Press.
Taylor, Charles. 1989. Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.
Thomasma, David C. 1997. Antifoundationalism and the Possibility of a Moral Philosophy of Medicine. Theoretical Medicine 18 (1): 127–143. https://doi.org/10.1023/A:1005726024062.
Weiss, Martin G. 2014. Posthuman Dignity. In The Cambridge Handbook of Human Dignity: Interdisciplinary Perspectives, ed. Dietmar Mieth, Jens Braarvig, Marcus Düwell, and Roger Brownsword, 319–331. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511979033.038.
Yuste, Rafael, Sara Goering, Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, et al. 2017. Four Ethical Priorities for Neurotechnologies and AI. Nature 551 (7679): 159. https://doi.org/10.1038/551159a.
Yuste, Rafael, Jared Genser, and Stephanie Herrmann. 2021. It's Time for Neuro-Rights, 7.
Zellick, Graham. 1978. Corporal Punishment in the Isle of Man. International & Comparative Law Quarterly 27 (3): 665–671. https://doi.org/10.1093/iclqaj/27.3.665.
Zitzelsberger, Hilde M. 2004. Concerning Technology: Thinking with Heidegger. Nursing Philosophy 5 (3): 242–250. https://doi.org/10.1111/j.1466-769X.2004.00183.x.

9 Epilogue: Final Thoughts for the Path to Future Philosophical Explorations

Emerging neurotechnologies are on the verge of redrawing the boundaries of human existence in its embodied and psycho-social dimensions, and of reshaping our understanding of personal identity and moral life. If we are to harvest their potential benefits, it is crucial that philosophical and ethical inquiry engage in a constructive dialogue with the scientific community and the discipline of neuroscience. In this book, I have considered the philosophical and ethical implications of brain interventions designed to alter, manipulate, and enhance moral behavior, both as a form of social progress and, in the clinical context, as a means to address psychiatric conditions with moral pathologies such as psychopathy. From its inception, I have been critical of the moral bioenhancement agenda (indeed, I think it is simply a bad idea!), as it provides the basis for a type of social engineering detrimental to the building of healthy communities and to character formation in individuals. Chapter 2 outlines the severe limitations of moral bioenhancement as a means to moral formation and to the development of moral communities, limitations that stem from a misconceptualization of moral agency and from the reduction of morality to a biological phenomenon, as examined in Chap. 4. But as I progressed in the writing of this book, I always kept the door open for potential clinical implementations of neurotechnologies aimed at recalibrating the behavior of individuals suffering from psychiatric conditions with moral pathologies. After all, if the use of neurotechnologies could address behavior leading to criminal acts, why not move forward with the implementation of measures to that end? In Chap. 3, I suggested a framework for a responsible implementation of neurotechnologies in the clinical context and under strict conditions. When we step outside the clinical world, however, neurointerventions are difficult to justify unless, as a society, we are willing to bypass fundamental principles of human decency, respect, and dignity. Imposing neurointerventions on individuals who do not suffer from a psychiatric condition and who are not an imminent threat to society, on grounds of potential dangerousness and prevention, sends the wrong message about how we treat one another as members of society and about the role of government in ordering human relationships. But at the core of my investigation lies a more fundamental question that I only began to investigate in Chap. 8: what does it mean to be human in a technocratic age, and what ought our relationship to technology to be, whether with regard to how we relate to devices, how we integrate them into our bodies, or even how we develop emotional attachments to entities that mimic human features and demeanors? This book is a steppingstone toward a subsequent book project I intend to start in the coming months. As my analysis in Chap. 8 indicates, there is an urgent need to examine issues at the intersection of philosophical anthropology, emerging technologies, and medicine, with a particular focus on which anthropological framework(s) should guide the education of future physicians and clinical practice in view of the increasing integration of powerful technology in medicine.

My concern is grounded in the realization that an ethics without an appreciation of the significance of our embodied reality might end up undermining our own humanity. Proponents of moral bioenhancement are concerned about the survival of the human species and, to this end, suggest the implementation of biotechnological measures. Human beings might indeed survive with the help of technology, but will they flourish if technology invalidates key features of our humanity? I hope this book and my subsequent work will be an invitation to reflect further on this challenging question.

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022. F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement. https://doi.org/10.1007/978-981-16-9693-0_9


234 References

University Press. https://www.amazon.com/Science-­Good-­Foundations-­ Foundational-­Questions/dp/0300196288. Ienca, Marcello, and Roberto Andorno. 2017. Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sciences, Society and Policy 13 (1): 5. https://doi.org/10.1186/s40504-­017-­0050-­1. Immordino-Yang, Mary Helen, and Vanessa Singh. 2013. Hippocampal Contributions to the Processing of Social Emotions. Human Brain Mapping 34 (4): 945–955. https://doi.org/10.1002/hbm.21485. Jackson, Philip L., Andrew N. Meltzoff, and Jean Decety. 2005. How Do We Perceive the Pain of Others? A Window into the Neural Processes Involved in Empathy. NeuroImage 24 (3): 771–779. https://doi.org/10.1016/j. neuroimage.2004.09.006. James, Hughes. 2006. Becoming a Better Person: Modifying Cognition and Emotion to Enhance Virtue. Lecture at TransVision 06. Jecker, Nancy S. 2020. Nothing to Be Ashamed of: Sex Robots for Older Adults with Disabilities. Journal of Medical Ethics, November. https://doi. org/10.1136/medethics-­2020-­106645. Jefferson, Anneli, and Katrina Sifferd. 2018. Are Psychopaths Legally Insane? European Journal of Analytic Philosophy 14 (1): 79–96. https://doi. org/10.31820/ejap.14.1.5. Jotterand, Fabrice. 2005. The Hippocratic Oath and Contemporary Medicine: Dialectic between Past Ideals and Present Reality? The Journal of Medicine and Philosophy 30 (1): 107–128. https://doi.org/10.1080/03605310590907084. ———. 2010. Human Dignity and Transhumanism: Do Anthro-Technological Devices Have Moral Status? The American Journal of Bioethics: AJOB 10 (7): 45–52. https://doi.org/10.1080/15265161003728795. ———. 2011. ‘Virtue Engineering’ and Moral Agency: Will Post-Humans Still Need the Virtues? AJOB Neuroscience 2 (4): 3–9. https://doi.org/10.108 0/21507740.2011.611124. ———. 2014. Psychopathy, Neurotechnologies, and Neuroethics. Theoretical Medicine and Bioethics 35 (1): 1–6. https://doi.org/10.1007/ s11017-­014-­9280-­x. ———. 2016. 
Moral Enhancement, Neuroessentialism, and Moral Content. In Cognitive Enhancement: Ethical and Policy Implications in International Perspectives, ed. Fabrice Jotterand and Veljko Dubljević. Oxford; New York: Oxford University Press.

 References 

235

———. 2017. Cognitive Enhancement of Today May Be the Normal of Tomorrow. In Neuroethics: Anticipating the Future, ed. Judy Illes. Oxford: Oxford University Press.
———. 2018. The Boundaries of Legal Personhood. In Human, Transhuman, Posthuman: Emerging Technologies and the Boundaries of Homo Sapiens, ed. M. Bess and P.D. Walsh. Farmington Hills: Macmillan Reference USA.
———. 2019a. How AI Could Exacerbate Our Incumbent Anthropological Identity Crisis. Bioethica Forum 12 (1/2): 58–59.
———. 2019b. Personal Identity, Neuroprosthetics, and Alzheimer’s Disease. In Intelligent Assistive Technologies for Dementia, ed. Fabrice Jotterand, Marcello Ienca, Tenzin Wangmo, and Bernice Elger, 188–202. Oxford; New York: Oxford University Press. https://oxfordmedicine.com/view/10.1093/med/9780190459802.001.0001/med-9780190459802-chapter-11.
Jotterand, Fabrice, and Clara Bosco. 2020. Keeping the ‘Human in the Loop’ in the Age of Artificial Intelligence. Science and Engineering Ethics, July. https://doi.org/10.1007/s11948-020-00241-1.
Jotterand, Fabrice, and James Giordano. 2011. Transcranial Magnetic Stimulation, Deep Brain Stimulation and Personal Identity: Ethical Questions, and Neuroethical Approaches for Medical Practice. International Review of Psychiatry 23 (5): 476–485. https://doi.org/10.3109/09540261.2011.616189.
———. 2015. Real-Time Functional Magnetic Resonance Imaging–Brain-Computer Interfacing in the Assessment and Treatment of Psychopathy: Potential and Challenges. In Handbook of Neuroethics, ed. Jens Clausen and Neil Levy, 763–781. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-4707-4_43.
Jotterand, Fabrice, and Marcello Ienca. 2017. The Biopolitics of Neuroethics. In Debates About Neuroethics, Advances in Neuroethics, 247–261. Cham: Springer. https://doi.org/10.1007/978-3-319-54651-3_17.
Jotterand, Fabrice, and Susan B. Levin. 2017. Moral Deficits, Moral Motivation and the Feasibility of Moral Bioenhancement. Topoi, April: 1–9. https://doi.org/10.1007/s11245-017-9472-x.
Jotterand, Fabrice, and Tenzin Wangmo. 2014. The Principle of Equivalence Reconsidered: Assessing the Relevance of the Principle of Equivalence in Prison Medicine. The American Journal of Bioethics 14 (7): 4–12. https://doi.org/10.1080/15265161.2014.919365.
Jotterand, Fabrice, Jennifer L. McCurdy, and Bernice Elger. 2015. Cognitive Enhancers and Mental Impairment: Emerging Ethical Issues. In Rosenberg’s Molecular and Genetic Basis of Neurological and Psychiatric Disease, 5th ed., ed. Roger N. Rosenberg and Juan M. Pascual, 119–126. Boston: Academic Press. https://doi.org/10.1016/B978-0-12-410529-4.00011-5.
Jotterand, Fabrice, Marcello Ienca, Tenzin Wangmo, and Bernice S. Elger, eds. 2019. Intelligent Assistive Technologies for Dementia: Clinical, Ethical, Social, and Regulatory Implications. Oxford; New York: Oxford University Press.
Kandel, Eric R. 2018. The Disordered Mind: What Unusual Brains Tell Us About Ourselves. New York: Farrar, Straus and Giroux.
Kass, Leon R. 1985. Toward a More Natural Science. New York: Free Press.
Kennett, Jeanette. 2010. Reasons, Emotion, and Moral Judgement in the Psychopath. In Responsibility and Psychopathy: Interfacing Law, Psychiatry and Philosophy, ed. Luca Malatesti and John McMillan, 243–259. Oxford: Oxford University Press. https://oxfordmedicine.com/view/10.1093/med/9780199551637.001.0001/med-9780199551637-chapter-014.
Kettner, Mattias. 2007. Deliberative Democracy: From Rational Discourse to Public Debate. In The Information Society: Innovation, Legitimacy, Ethics and Democracy: In Honor of Professor Jacques Berleur S.J., IFIP International Federation for Information Processing, ed. Philippe Goujon, Sylvain Lavelle, Penny Duquenoy, Kai Kimppa, and Véronique Laurent, 57–66. Boston, MA: Springer US. https://doi.org/10.1007/978-0-387-72381-5_7.
Kiehl, Kent A. 2006. A Cognitive Neuroscience Perspective on Psychopathy: Evidence for Paralimbic System Dysfunction. Psychiatry Research 142 (2–3): 107–128. https://doi.org/10.1016/j.psychres.2005.09.013.
Kiehl, Kent A., and Morris B. Hoffman. 2011. The Criminal Psychopath: History, Neuroscience, Treatment, and Economics. Jurimetrics 51: 355–397.
Kiehl, K.A., A.M. Smith, R.D. Hare, A. Mendrek, B.B. Forster, J. Brink, and P.F. Liddle. 2001. Limbic Abnormalities in Affective Processing by Criminal Psychopaths as Revealed by Functional Magnetic Resonance Imaging. Biological Psychiatry 50 (9): 677–684.
Kim-Cohen, J., A. Caspi, A. Taylor, B. Williams, R. Newcombe, I.W. Craig, and T.E. Moffitt. 2006. MAOA, Maltreatment, and Gene–Environment Interaction Predicting Children’s Mental Health: New Evidence and a Meta-Analysis. Molecular Psychiatry 11 (10): 903–913. https://doi.org/10.1038/sj.mp.4001851.
King, L.J. 1999. A Brief History of Psychiatry: Millennia Past and Present. Annals of Clinical Psychiatry 11 (1): 3–12.
Kitcher, Philip. 2011. The Ethical Project. Cambridge, MA: Harvard University Press.
Knutson, B., O.M. Wolkowitz, S.W. Cole, T. Chan, E.A. Moore, R.C. Johnson, J. Terpstra, R.A. Turner, and V.I. Reus. 1998. Selective Alteration of Personality and Social Behavior by Serotonergic Intervention. The American Journal of Psychiatry 155 (3): 373–379. https://doi.org/10.1176/ajp.155.3.373.
Koch, Julius Ludwig. 1891. Die psychopathischen Minderwertigkeiten. Kessinger Publishing, LLC.
Koenigs, M., A. Baskin-Sommers, J. Zeier, and J.P. Newman. 2011. Investigating the Neural Correlates of Psychopathy: A Critical Review. Molecular Psychiatry 16 (8): 792–799. https://doi.org/10.1038/mp.2010.124.
Kraepelin, Emil. 1887. Psychiatrie: Ein Lehrbuch für Studirende und Aerzte. 2nd ed. Leipzig: Abel.
Kristjánsson, Kristján. 2015. Phronesis as an Ideal in Professional Medical Ethics: Some Preliminary Positionings and Problematics. Theoretical Medicine and Bioethics 36 (5): 299–320. https://doi.org/10.1007/s11017-015-9338-4.
Krupp, Daniel Brian, Lindsay A. Sewall, Martin L. Lalumière, Craig Sheriff, and Grant T. Harris. 2013. Psychopathy, Adaptation, and Disorder. Frontiers in Psychology 4: 139. https://doi.org/10.3389/fpsyg.2013.00139.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books.
Labarrière, P.-J. 1996. Postmodernité et déclin des absolus. In La théologie en postmodernité, ed. Pierre Gisel and Patrick Evrard. Genève: Labor et Fides.
Larsen, Rasmus Rosenberg. 2019. Psychopathy Treatment and the Stigma of Yesterday’s Research. Kennedy Institute of Ethics Journal 29 (3): 243–272. https://doi.org/10.1353/ken.2019.0024.
Lasky, Melvin J. 1976. Utopia and Revolution. Chicago, IL: The University of Chicago Press.
Levin, Yuval. 2020. A Time to Build: From Family and Community to Congress and the Campus, How Recommitting to Our Institutions Can Revive the American Dream. New York: Basic Books.
Ling, Shichun, and Adrian Raine. 2018. The Neuroscience of Psychopathy and Forensic Implications. Psychology, Crime & Law 24 (3): 296–312. https://doi.org/10.1080/1068316X.2017.1419243.
Litton, Paul. 2008. Responsibility Status of the Psychopath: On Moral Reasoning and Rational Self-Governance. SSRN Scholarly Paper ID 1310886. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=1310886.
———. 2013. Criminal Responsibility and Psychopathy: Do Psychopaths Have a Right to Excuse? In Handbook on Psychopathy and Law, ed. Kent A. Kiehl and Walter Sinnott-Armstrong, 275–296. Oxford; New York: Oxford University Press. https://papers.ssrn.com/abstract=2296172.
Lo, Bernard. 2000. Resolving Ethical Dilemmas: A Guide for Clinicians. 2nd ed. Philadelphia: Lippincott Williams & Wilkins.
Lories, D. 1998. Le sens commun et le jugement du phronimos: Aristote et les stoïciens. Louvain-la-Neuve: Peeters Publishers.
Lucki, I. 1998. The Spectrum of Behaviors Influenced by Serotonin. Biological Psychiatry 44 (3): 151–162.
Lyotard, Jean-François. 1984. The Postmodern Condition: A Report on Knowledge. Minneapolis: University of Minnesota Press.
Lyotard, Jean-François, and Fredric Jameson. 1984. The Postmodern Condition: A Report on Knowledge. Translated by Geoff Bennington and Brian Massumi. 1st ed. Minneapolis: University of Minnesota Press.
Machiavelli, Niccolò. 1517. Discourses on Livy. Translated by Henry Neville. London: Tommaso Davies.
MacIntyre, Alasdair. 1984a. After Virtue: A Study in Moral Theory. 2nd ed. Notre Dame, IN: University of Notre Dame Press.
———. 1984b. Does Applied Ethics Rest on a Mistake? The Monist 67 (4): 498–513.
———. 1988. Whose Justice? Which Rationality? 1st ed. Notre Dame, IN: University of Notre Dame Press.
———. 1998. Plain Persons and Moral Philosophy: Rules, Virtues and Goods. In The MacIntyre Reader, ed. K. Knight, 136–154. Cambridge: Polity Press.
———. 2001. Dependent Rational Animals: Why Human Beings Need the Virtues. Revised ed. Chicago: Open Court.
Maibom, Heidi L. 2005. Moral Unreason: The Case of Psychopathy. Mind & Language 20 (2): 237–257. https://doi.org/10.1111/j.0268-1064.2005.00284.x.
Matravers, Matt. 2018. The Importance of Context in Thinking About Crime-Preventing Neurointerventions. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 71–93. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198758617.001.0001/oso-9780198758617-chapter-4.
McCullough, Jack. 2019. The Psychopathic CEO. Forbes. https://www.forbes.com/sites/jackmccullough/2019/12/09/the-psychopathic-ceo/.
Mele, A.R. 1999. Motivation. In The Cambridge Dictionary of Philosophy, ed. R. Audi, 591–592. Cambridge: Cambridge University Press.
Mendez, Mario F. 2006. What Frontotemporal Dementia Reveals about the Neurobiological Basis of Morality. Medical Hypotheses 67 (2): 411–418. https://doi.org/10.1016/j.mehy.2006.01.048.
Merkel, Reinhard, G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. 2007. Intervening in the Brain: Changing Psyche and Society. Ethics of Science and Technology Assessment. Berlin; Heidelberg: Springer-Verlag. www.springer.com/gp/book/9783540464761.
Messina, Nena, David Farabee, and Richard Rawson. 2003. Treatment Responsivity of Cocaine-Dependent Patients with Antisocial Personality Disorder to Cognitive-Behavioral and Contingency Management Interventions. Journal of Consulting and Clinical Psychology 71 (2): 320–329. https://doi.org/10.1037/0022-006X.71.2.320.
Meyer-Lindenberg, Andreas, Joshua W. Buckholtz, Bhaskar Kolachana, Ahmad R. Hariri, Lukas Pezawas, Giuseppe Blasi, Ashley Wabnitz, et al. 2006. Neural Mechanisms of Genetic Risk for Impulsivity and Violence in Humans. Proceedings of the National Academy of Sciences of the United States of America 103 (16): 6269–6274. https://doi.org/10.1073/pnas.0511311103.
Millon, Theodore, Erik Simonsen, and Morten Birket-Smith. 2003. Historical Conceptions of Psychopathy in the United States and Europe. In Psychopathy: Antisocial, Criminal, and Violent Behavior, ed. Theodore Millon, Erik Simonsen, Morten Birket-Smith, and Roger D. Davis, 3–31. New York, NY: The Guilford Press.
Milton, John. 1650. The Tenure of Kings and Magistrates. London: Matthew Simmons.
Mitchell, Joshua. 2020. American Awakening: Identity Politics and Other Afflictions of Our Time. New York: Encounter Books.
Moll, J., P.J. Eslinger, and R. Oliveira-Souza. 2001. Frontopolar and Anterior Temporal Cortex Activation in a Moral Judgment Task: Preliminary Functional MRI Results in Normal Subjects. Arquivos de Neuro-Psiquiatria 59 (3-B): 657–664.
Moll, Jorge, Ricardo de Oliveira-Souza, Paul J. Eslinger, Ivanei E. Bramati, Janaína Mourão-Miranda, Pedro Angelo Andreiuolo, and Luiz Pessoa. 2002a. The Neural Correlates of Moral Sensitivity: A Functional Magnetic Resonance Imaging Investigation of Basic and Moral Emotions. The Journal of Neuroscience 22 (7): 2730–2736.
Moll, Jorge, Ricardo de Oliveira-Souza, Paul J. Eslinger, Ivanei E. Bramati, Janaína Mourão-Miranda, Pedro Angelo Andreiuolo, and Luiz Pessoa. 2002b. The Neural Correlates of Moral Sensitivity: A Functional Magnetic Resonance Imaging Investigation of Basic and Moral Emotions. Journal of Neuroscience 22 (7): 2730–2736. https://doi.org/10.1523/JNEUROSCI.22-07-02730.2002.
Moll, Jorge, Ricardo de Oliveira-Souza, and Paul Eslinger. 2003. Morals and the Human Brain: A Working Model. Neuroreport 14 (3): 299–305.
Moll, Jorge, Roland Zahn, Ricardo de Oliveira-Souza, Frank Krueger, and Jordan Grafman. 2005. The Neural Basis of Human Moral Cognition. Nature Reviews Neuroscience 6 (10): 799. https://doi.org/10.1038/nrn1768.
Moll, Jorge, Frank Krueger, Roland Zahn, Matteo Pardini, Ricardo de Oliveira-Souza, and Jordan Grafman. 2006. Human Fronto-Mesolimbic Networks Guide Decisions about Charitable Donation. Proceedings of the National Academy of Sciences of the United States of America 103 (42): 15623–15628. https://doi.org/10.1073/pnas.0604475103.
Moll, Jorge, Ricardo de Oliveira-Souza, Roland Zahn, and Jordan Grafman. 2008. The Cognitive Neuroscience of Moral Emotions. In Moral Psychology, Volume 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, ed. Walter Sinnott-Armstrong. Cambridge, MA: The MIT Press. https://mitpress.mit.edu/books/moral-psychology-volume-3.
More, Max. 2010. The Overhuman in the Transhuman. Journal of Evolution and Technology 21 (1): 1–4.
Morin, Roc. 2016. Can Child Dolls Keep Pedophiles from Offending? The Atlantic, January 11. https://www.theatlantic.com/health/archive/2016/01/can-child-dolls-keep-pedophiles-from-offending/423324/.
Morse, Stephen. 1999. Neither Desert nor Disease. Legal Theory 5: 265–309.
———. 2008. Psychopathy and Criminal Responsibility. Neuroethics 1 (3): 205–212. https://doi.org/10.1007/s12152-008-9021-9.
Murphy, Nancey. 1990. Scientific Realism and Postmodern Philosophy. The British Journal for the Philosophy of Science 41 (3): 291–303. https://doi.org/10.1093/bjps/41.3.291.
Nadelhoffer, Thomas, and Walter P. Sinnott-Armstrong. 2013. Is Psychopathy a Mental Disease? In Neuroscience and Legal Responsibility, ed. Nicole A. Vincent, 229–255. Oxford: Oxford University Press. https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780199925605.001.0001/acprof-9780199925605-chapter-10.
Nave, Gideon, Colin Camerer, and Michael McCullough. 2015. Does Oxytocin Increase Trust in Humans? A Critical Review of Research. Perspectives on Psychological Science 10 (6): 772–789. https://doi.org/10.1177/1745691615600138.
Neumann, Craig S., and Robert D. Hare. 2008. Psychopathic Traits in a Large Community Sample: Links to Violence, Alcohol Use, and Intelligence. Journal of Consulting and Clinical Psychology 76 (5): 893–899. https://doi.org/10.1037/0022-006X.76.5.893.
Nietzsche, Friedrich. 2013. The Anti-Christ. SoHo Books.
Nowack, Kenneth, and Paul J. Zak. 2017. The Neuroscience in Building High Performance Trust Cultures. Chief Learning Officer—CLO Media (blog), February 9. https://www.clomedia.com/2017/02/09/neuroscience-building-trust-cultures/.
Nowotny, Helga, Peter B. Scott, and Michael T. Gibbons. 2001. Re-Thinking Science: Knowledge and the Public in an Age of Uncertainty. 1st ed. London: Polity.
O’Doherty, J., M.L. Kringelbach, E.T. Rolls, J. Hornak, and C. Andrews. 2001. Abstract Reward and Punishment Representations in the Human Orbitofrontal Cortex. Nature Neuroscience 4 (1): 95. https://doi.org/10.1038/82959.
O’Neill Hayes, Tara. 2020. The Economic Costs of the U.S. Criminal Justice System. AAF, July 16. https://www.americanactionforum.org/research/the-economic-costs-of-the-u-s-criminal-justice-system/.
Parusnikova, Zuzana. 1992. Is a Postmodern Philosophy of Science Possible? Studies in History and Philosophy of Science Part A 23 (1): 21–37. https://doi.org/10.1016/0039-3681(92)90025-2.
Pascual, Leo, Paulo Rodrigues, and David Gallardo-Pujol. 2013. How Does Morality Work in the Brain? A Functional and Structural Perspective of Moral Behavior. Frontiers in Integrative Neuroscience 7 (Sept.). https://doi.org/10.3389/fnint.2013.00065.
Patrick, Christopher J. 2018. Psychopathy as Masked Pathology. In Handbook of Psychopathy, 2nd ed., ed. Christopher J. Patrick, 605–617. New York, NY: Guilford Publications.
Pellegrino, Edmund D., and David C. Thomasma. 1993. Virtues in Medical Practice. 1st ed. New York: Oxford University Press.
Perry, John. 2009. Diminished and Fractured Selves. In Personal Identity and Fractured Selves, ed. D.J.H. Mathews, H. Bok, and P.V. Rabins, 129–162. Baltimore: Johns Hopkins University Press. http://muse.jhu.edu/chapter/69185.
Persson, Ingmar, and Julian Savulescu. 2008. The Perils of Cognitive Enhancement and the Urgent Imperative to Enhance the Moral Character of Humanity. Journal of Applied Philosophy 25 (3): 162–177. https://doi.org/10.1111/j.1468-5930.2008.00410.x.
———. 2012. Unfit for the Future: The Need for Moral Enhancement. 1st ed. Oxford: Oxford University Press.
———. 2013. Getting Moral Enhancement Right: The Desirability of Moral Bioenhancement. Bioethics 27 (3): 124–131. https://doi.org/10.1111/j.1467-8519.2011.01907.x.
———. 2016. Moral Bioenhancement, Freedom and Reason. Neuroethics 9 (3): 263–268. https://doi.org/10.1007/s12152-016-9268-5.
Pillsbury, Samuel H. 2013. Why Psychopaths Are Responsible. In Handbook on Psychopathy and Law, ed. Kent A. Kiehl and Walter P. Sinnott-Armstrong, 297–318. New York: Oxford University Press. https://papers.ssrn.com/abstract=2313758.
Pinel, Philippe. 1962. A Treatise on Insanity. Translated by D. Davis. New York: Hafner.
Pinker, Steven. 2011. The Better Angels of Our Nature. New York: Viking/Penguin Group.
———. 2018. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Illustrated ed. New York, NY: Viking.
Pluckrose, Helen, and James Lindsay. 2020. Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity―and Why This Harms Everybody. Durham: Pitchstone Publishing.
Poeppl, Timm B., Maximilian Donges, Andreas Mokros, Rainer Rupprecht, Peter T. Fox, Angela R. Laird, Danilo Bzdok, Berthold Langguth, and Simon B. Eickhoff. 2019. A View Behind the Mask of Sanity: Meta-Analysis of Aberrant Brain Activity in Psychopaths. Molecular Psychiatry 24 (3): 463–470. https://doi.org/10.1038/s41380-018-0122-5.
Polaschek, Devon L.L., and Jennifer L. Skeem. 2018. Treatment of Adults and Juveniles with Psychopathy. In Handbook of Psychopathy, 2nd ed., 710–731. New York, NY: The Guilford Press.
Porter, Roy. 1999. The Greatest Benefit to Mankind: A Medical History of Humanity. 1st ed. New York: W. W. Norton & Company.
Pouliquen, Laetitia. 2016. Femme 2.0. Le Coudray-Macouard: Saint-Léger Editions.
Prehn, Kristin, Isabell Wartenburger, Katja Mériau, Christina Scheibe, Oliver R. Goodenough, Arno Villringer, Elke van der Meer, and Hauke R. Heekeren. 2008. Individual Differences in Moral Judgment Competence Influence Neural Correlates of Socio-Normative Judgments. Social Cognitive and Affective Neuroscience 3 (1): 33–46. https://doi.org/10.1093/scan/nsm037.
Prichard, James Cowles. 1835. A Treatise on Insanity and Other Disorders Affecting the Mind. London: Sherwood, Gilbert & Piper.
Pridmore, Saxby, Amber Chambers, and Milford McArthur. 2005. Neuroimaging in Psychopathy. Australian and New Zealand Journal of Psychiatry 39 (10): 856–865. https://doi.org/10.1111/j.1440-1614.2005.01679.x.
Prinz, Jesse. 2016. Sentimentalism and the Moral Brain. In Moral Brains, ed. S. Matthew Liao, 45–73. Oxford: Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199357666.001.0001/acprof-9780199357666-chapter-2.
Quintana, Daniel S., Jaroslav Rokicki, Dennis van der Meer, Dag Alnæs, Tobias Kaufmann, Aldo Córdova-Palomera, Ingrid Dieset, Ole A. Andreassen, and Lars T. Westlye. 2019. Oxytocin Pathway Gene Networks in the Human Brain. Nature Communications 10 (1): 668. https://doi.org/10.1038/s41467-019-08503-8.
Radden, Jennifer. 1996. Divided Minds and Successive Selves: Ethical Issues in Disorders of Identity and Personality. Cambridge, MA: The MIT Press. https://mitpress.mit.edu/books/divided-minds-and-successive-selves.
Rafter, Nicole. 2008. The Criminal Brain: Understanding Biological Theories of Crime. New York: NYU Press.
Raine, Adrian, and Yaling Yang. 2006. The Neuroanatomical Bases of Psychopathy: A Review of Brain Imaging Findings. In Handbook of Psychopathy, 278–295. New York, NY: The Guilford Press.
Rawls, John. 1993. Political Liberalism. New York: Columbia University Press.
Redish, A. David. 2013. The Mind within the Brain: How We Make Decisions and How Those Decisions Go Wrong. Oxford: Oxford University Press.
Renaud, Patrice, Christian Joyal, Serge Stoleru, Mathieu Goyette, Nikolaus Weiskopf, and Niels Birbaumer. 2011. Real-Time Functional Magnetic Imaging-Brain-Computer Interface and Virtual Reality: Promising Tools for the Treatment of Pedophilia. Progress in Brain Research 192: 263–272. https://doi.org/10.1016/B978-0-444-53355-5.00014-2.
Rice, Marnie E., Grant T. Harris, and Catherine A. Cormier. 1992. An Evaluation of a Maximum Security Therapeutic Community for Psychopaths and Other Mentally Disordered Offenders. Law and Human Behavior 16 (4): 399–412. https://doi.org/10.1007/BF02352266.
Riva, Paolo, Leonor J. Romero Lauro, C. Nathan DeWall, David S. Chester, and Brad J. Bushman. 2015. Reducing Aggressive Responses to Social Exclusion Using Transcranial Direct Current Stimulation. Social Cognitive and Affective Neuroscience 10 (3): 352–356. https://doi.org/10.1093/scan/nsu053.
Riva, Paolo, Alessandro Gabbiadini, Leonor J. Romero Lauro, Luca Andrighetto, Chiara Volpato, and Brad J. Bushman. 2017. Neuromodulation Can Reduce Aggressive Behavior Elicited by Violent Video Games. Cognitive, Affective & Behavioral Neuroscience 17 (2): 452–459. https://doi.org/10.3758/s13415-016-0490-8.
Rolston, Holmes, III. 2008. Human Uniqueness and Human Dignity: Persons in Nature and the Nature of Persons. In Human Dignity and Bioethics: Essays Commissioned by the President’s Council on Bioethics, ed. Edmund D. Pellegrino, Adam Schulman, and Thomas W. Merrill, 129–153. Washington, DC: President’s Council on Bioethics.
Roskies, Adina. 2002. Neuroethics for the New Millennium. Neuron 35 (1): 21–23. https://doi.org/10.1016/S0896-6273(02)00763-8.
Rota, Giuseppina, Ranganatha Sitaram, Ralf Veit, Michael Erb, Nikolaus Weiskopf, Grzegorz Dogil, and Niels Birbaumer. 2009. Self-Regulation of Regional Cortical Activity Using Real-Time fMRI: The Right Inferior Frontal Gyrus and Linguistic Processing. Human Brain Mapping 30 (5): 1605–1614. https://doi.org/10.1002/hbm.20621.
Rouse, Joseph. 1991. The Politics of Postmodern Philosophy of Science. Philosophy of Science 58 (4): 607–627. https://doi.org/10.1086/289643.
Rush, Benjamin. 1812. Medical Inquiries and Observations upon the Diseases of the Mind. Philadelphia: Kimber & Richardson.
Russell, Bertrand. 1953. The Impact of Science on Society. 1st ed. New York: Simon & Schuster.
Ryberg, Jesper. 2004. The Ethics of Proportionate Punishment: A Critical Investigation. Library of Ethics and Applied Philosophy. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-1-4020-2554-9.
———. 2018. Neuroscientific Treatment of Criminals and Penal Theory. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 177–195. Oxford: Oxford University Press. https://forskning.ruc.dk/en/publications/neuroscientific-treatment-of-criminals-and-penal-theory.
Sacks, Jonathan. 2020. Morality: Restoring the Common Good in Divided Times. New York: Basic Books.
Sadler, John Z. 2007. The Psychiatric Significance of the Personal Self. Psychiatry 70 (2): 113–129. https://doi.org/10.1521/psyc.2007.70.2.113.
Salekin, Randall T. 2002. Psychopathy and Therapeutic Pessimism: Clinical Lore or Clinical Reality? Clinical Psychology Review 22 (1): 79–112. https://doi.org/10.1016/s0272-7358(01)00083-6.
Salekin, Randall T., Courtney Worley, and Ross D. Grimes. 2010. Treatment of Psychopathy: A Review and Brief Introduction to the Mental Model Approach for Psychopathy. Behavioral Sciences & the Law 28 (2): 235–266. https://doi.org/10.1002/bsl.928.
Salomon, Jean-Jacques. 1999. Survivre à la science. Paris: Albin Michel.
Sandel, Michael J. 2020. The Tyranny of Merit: What’s Become of the Common Good? New York: Farrar, Straus and Giroux.
Sass, Henning, and Alan R. Felthous. 2014. The Heterogeneous Construct of Psychopathy. In Being Amoral: Psychopathy and Moral Incapacity, ed. Thomas Schramme, 41–68. The MIT Press. https://doi.org/10.7551/mitpress/9780262027915.001.0001.
Savulescu, Julian, and Ingmar Persson. 2012. Moral Enhancement, Freedom and the God Machine. The Monist 95 (3): 399–421.
Schaich Borg, Jana, Catherine Hynes, John Van Horn, Scott Grafton, and Walter Sinnott-Armstrong. 2006. Consequences, Action, and Intention as Factors in Moral Judgments: An fMRI Investigation. Journal of Cognitive Neuroscience 18 (5): 803–817. https://doi.org/10.1162/jocn.2006.18.5.803.
Scheler, M. 1961. Man’s Place in Nature. Boston: Beacon Press.
Schneewind, Jerome B. 1997. The Invention of Autonomy: A History of Modern Moral Philosophy. Cambridge; New York, NY: Cambridge University Press.
Schneider, Kurt. 1923. Die psychopathischen Persönlichkeiten. Vienna: Franz Deuticke.
———. 1958. Psychopathic Personalities. Translated by Marian W. Hamilton. 9th ed. London: Cassell.
Sestieri, Carlo, Maurizio Corbetta, Gian Luca Romani, and Gordon L. Shulman. 2011. Episodic Memory Retrieval, Parietal Cortex, and the Default Mode Network: Functional and Topographic Analyses. The Journal of Neuroscience 31 (12): 4407. https://doi.org/10.1523/JNEUROSCI.3335-10.2011.
Shaw, Elizabeth. 2018. Against the Mandatory Use of Neurointerventions in Criminal Sentencing. In Treatment for Crime: Philosophical Essays on Neurointerventions in Criminal Justice, ed. David Birks and Thomas Douglas, 321–337. Oxford: Oxford University Press.
Shen, Helen. 2015. Neuroscience: The Hard Science of Oxytocin. Nature News 522 (7557): 410. https://doi.org/10.1038/522410a.
Shenhav, Amitai, and Joshua D. Greene. 2010. Moral Judgments Recruit Domain-General Valuation Mechanisms to Integrate Representations of Probability and Magnitude. Neuron 67 (4): 667–677. https://doi.org/10.1016/j.neuron.2010.07.020.
Sinnott-Armstrong, W. 2008. Framing Moral Intuitions. In Moral Psychology, Volume 2, The Cognitive Science of Morality: Intuition and Diversity, ed. W. Sinnott-Armstrong. Cambridge, MA: The MIT Press. https://mitpress.mit.edu/books/moral-psychology-volume-2.
Sirgiovanni, Elisabetta, and Mirko Daniel Garasic. 2020. Commentary: The Moral Bioenhancement of Psychopaths. Frontiers in Psychology 10 (Jan.). https://doi.org/10.3389/fpsyg.2019.02880.
Sitaram, Ranganatha, Andrea Caria, Ralf Veit, Tilman Gaber, Giuseppina Rota, Andrea Kuebler, and Niels Birbaumer. 2007. fMRI Brain–Computer Interface: A Tool for Neuroscientific Research and Treatment. Computational Intelligence and Neuroscience 2007. https://doi.org/10.1155/2007/25487.
Sitaram, Ranganatha, Andrea Caria, and Niels Birbaumer. 2009. Hemodynamic Brain–Computer Interfaces for Communication and Rehabilitation. Neural Networks 22 (9): 1320–1328. https://doi.org/10.1016/j.neunet.2009.05.009.
Smetana, Judith G., and Judith L. Braeges. 1990. The Development of Toddlers' Moral and Conventional Judgments. Merrill-Palmer Quarterly 36 (3): 329–346.


Snead, O. Carter. 2020. What It Means to Be Human: The Case for the Body in Public Bioethics. Cambridge, MA: Harvard University Press.
Stuerup, Georg K. 1951. Krogede Skæbner. Copenhagen: Munksgaard.
Svenaeus, Fredrik. 2013. The Relevance of Heidegger's Philosophy of Technology for Biomedical Ethics. Theoretical Medicine and Bioethics 34 (1): 1–15. https://doi.org/10.1007/s11017-012-9240-2.
Tachibana, Koji. 2018. Neurofeedback-Based Moral Enhancement and the Notion of Morality. Annals of the University of Bucharest—Philosophy Series 66 (2): 25–41.
Tamatea, Armon J. 2015. 'Biologizing' Psychopathy: Ethical, Legal, and Research Implications at the Interface of Epigenetics and Chronic Antisocial Conduct. Behavioral Sciences & the Law 33 (5): 629–643. https://doi.org/10.1002/bsl.2201.
Tassy, Sébastien, Olivier Oullier, Yann Duclos, Olivier Coulon, Julien Mancini, Christine Deruelle, Sharam Attarian, Olivier Felician, and Bruno Wicker. 2012. Disrupting the Right Prefrontal Cortex Alters Moral Judgement. Social Cognitive and Affective Neuroscience 7 (3): 282–288. https://doi.org/10.1093/scan/nsr008.
Tauber, Alfred I. 2009. Science and the Quest for Meaning. Waco, TX: Baylor University Press.
Taylor, Charles. 1989. Sources of the Self: The Making of the Modern Identity. Cambridge, MA: Harvard University Press.
Thibault, Robert T., Amanda MacPherson, Michael Lifshitz, Raquel R. Roth, and Amir Raz. 2018. Neurofeedback with fMRI: A Critical Systematic Review. NeuroImage 172: 786–807. https://doi.org/10.1016/j.neuroimage.2017.12.071.
Thomasma, David C. 1997. Antifoundationalism and the Possibility of a Moral Philosophy of Medicine. Theoretical Medicine 18 (1): 127–143. https://doi.org/10.1023/A:1005726024062.
Tikkanen, Roope, Laura Auvinen-Lintunen, Francesca Ducci, Rickard L. Sjöberg, David Goldman, Jari Tiihonen, Ilkka Ojansuu, and Matti Virkkunen. 2011. Psychopathy, PCL-R, and MAOA Genotype as Predictors of Violent Reconvictions. Psychiatry Research 185 (3): 382–386. https://doi.org/10.1016/j.psychres.2010.08.026.
de Tocqueville, Alexis. 2002. Democracy in America. Translated by Harvey C. Mansfield and Delba Winthrop. 1st ed. Chicago, IL: University of Chicago Press.


Trueman, Carl R. 2020. The Rise and Triumph of the Modern Self: Cultural Amnesia, Expressive Individualism, and the Road to Sexual Revolution. Wheaton, IL: Crossway.
Tse, Wai S., and Alyson J. Bond. 2002. Serotonergic Intervention Affects Both Social Dominance and Affiliative Behaviour. Psychopharmacology 161 (3): 324–330. https://doi.org/10.1007/s00213-002-1049-7.
Tsetsenis, Theodoros, Xiao-Hong Ma, Luisa Lo Iacono, Sheryl G. Beck, and Cornelius Gross. 2007. Suppression of Conditioning to Ambiguous Cues by Pharmacogenetic Inhibition of the Dentate Gyrus. Nature Neuroscience 10 (7): 896–902. https://doi.org/10.1038/nn1919.
Vaadia, Eilon, and Niels Birbaumer. 2009. Grand Challenges of Brain Computer Interfaces in the Years to Come. Frontiers in Neuroscience 3 (2): 151–154. https://doi.org/10.3389/neuro.01.015.2009.
Viding, Essi, R. James R. Blair, Terrie E. Moffitt, and Robert Plomin. 2005. Evidence for Substantial Genetic Risk for Psychopathy in 7-Year-Olds. Journal of Child Psychology and Psychiatry, and Allied Disciplines 46 (6): 592–597. https://doi.org/10.1111/j.1469-7610.2004.00393.x.
Völlm, Birgit A., Alexander N.W. Taylor, Paul Richardson, Rhiannon Corcoran, John Stirling, Shane McKie, John F.W. Deakin, and Rebecca Elliott. 2006. Neuronal Correlates of Theory of Mind and Empathy: A Functional Magnetic Resonance Imaging Study in a Nonverbal Task. NeuroImage 29 (1): 90–98. https://doi.org/10.1016/j.neuroimage.2005.07.022.
de Waal, Frans. 2013. The Bonobo and the Atheist: In Search of Humanism Among the Primates. New York: W. W. Norton & Company.
Walum, Hasse, Irwin D. Waldman, and Larry J. Young. 2016. Statistical and Methodological Considerations for the Interpretation of Intranasal Oxytocin Studies. Biological Psychiatry 79 (3): 251–257. https://doi.org/10.1016/j.biopsych.2015.06.016.
Wartofsky, Marx W. 1992. Technology, Power, and Truth: Political and Epistemological Reflections on the Fourth Revolution. In Democracy in a Technological Society, Philosophy and Technology, ed. Langdon Winner, 15–34. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-017-1219-4_2.
Weiss, Martin G. 2014. Posthuman Dignity. In The Cambridge Handbook of Human Dignity: Interdisciplinary Perspectives, ed. Dietmar Mieth, Jens Braarvig, Marcus Düwell, and Roger Brownsword, 319–331. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511979033.038.


Werlinder, Henry. 1978. Psychopathy: A History of the Concepts. Uppsala; Stockholm: Coronet Books Inc.
Wicker, Bruno, Christian Keysers, Jane Plailly, Jean Pierre Royet, Vittorio Gallese, and Giacomo Rizzolatti. 2003. Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust. Neuron 40 (3): 655–664.
Widiger, T.A. 2006. Psychopathy and DSM-IV Psychopathology. In Handbook of Psychopathy, ed. C.J. Patrick. New York: The Guilford Press.
Wiseman, Harris. 2016. The Myth of the Moral Brain: The Limits of Moral Enhancement. 1st ed. Cambridge, MA: The MIT Press.
Woodward, J. 2016. Emotion versus Cognition in Moral Decision-Making. In Moral Brains: The Neuroscience of Morality, ed. S.M. Liao, 87–116. Oxford: Oxford University Press. http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199357666.001.0001/acprof-9780199357666.
Wyschogrod, Edith. 1990. Saints and Postmodernism: Revisioning Moral Philosophy. 1st ed. Chicago: University of Chicago Press.
Yang, Y., A. Raine, P. Colletti, A.W. Toga, and K.L. Narr. 2009. Abnormal Temporal and Prefrontal Cortical Gray Matter Thinning in Psychopaths. Molecular Psychiatry 14 (6): 561–562, 555. https://doi.org/10.1038/mp.2009.12.
Yong, Ed. 2015. The Weak Science Behind the Wrongly Named Moral Molecule. The Atlantic, November 13. https://www.theatlantic.com/science/archive/2015/11/the-weak-science-of-the-wrongly-named-moral-molecule/415581/.
Young, Liane, and James Dungan. 2012. Where in the Brain Is Morality? Everywhere and Maybe Nowhere. Social Neuroscience 7 (1): 1–10. https://doi.org/10.1080/17470919.2011.569146.
Young, Liane, and Michael Koenigs. 2007. Investigating Emotion in Moral Cognition: A Review of Evidence from Functional Neuroimaging and Neuropsychology. British Medical Bulletin 84 (1): 69–79. https://doi.org/10.1093/bmb/ldm031.
Young, Liane, and Rebecca Saxe. 2008. The Neural Basis of Belief Encoding and Integration in Moral Judgment. NeuroImage 40 (4): 1912–1920. https://doi.org/10.1016/j.neuroimage.2008.01.057.
Young, Liane, Michael Koenigs, Michael Kruepke, and Joseph P. Newman. 2012. Psychopathy Increases Perceived Moral Permissibility of Accidents. Journal of Abnormal Psychology 121 (3): 659–667. https://doi.org/10.1037/a0027489.
Yuste, Rafael, Sara Goering, Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, et al. 2017. Four Ethical Priorities for Neurotechnologies and AI. Nature News 551 (7679): 159. https://doi.org/10.1038/551159a.
Yuste, Rafael, Jared Genser, and Stephanie Herrmann. 2021. It's Time for Neuro-Rights, 7.
Zak, Paul J. 2001. The Moral Molecule: The New Science of What Makes Us Good or Evil. London: Corgi Books.
———. 2012. The Moral Molecule: The Source of Love and Prosperity. New York: Dutton.
Zak, Paul J., Robert Kurzban, and William T. Matzner. 2005. Oxytocin Is Associated with Human Trustworthiness. Hormones and Behavior 48 (5): 522–527. https://doi.org/10.1016/j.yhbeh.2005.07.009.
Zak, Paul J., Angela A. Stanton, and Sheila Ahmadi. 2007. Oxytocin Increases Generosity in Humans. PLoS One 2 (11): e1128. https://doi.org/10.1371/journal.pone.0001128.
Zak, Paul J., Robert Kurzban, Sheila Ahmadi, Ronald S. Swerdloff, Jang Park, Levan Efremidze, Karen Redwine, Karla Morgan, and William Matzner. 2009. Testosterone Administration Decreases Generosity in the Ultimatum Game. PLoS One 4 (12). https://doi.org/10.1371/journal.pone.0008330.
Zellick, Graham. 1978. Corporal Punishment in the Isle of Man. International & Comparative Law Quarterly 27 (3): 665–671. https://doi.org/10.1093/iclqaj/27.3.665.
Zetterbaum, Marvin. 1967. Tocqueville and the Problem of Democracy. 1st ed. Stanford, CA: Stanford University Press.
Ziman, John. 1996a. Is Science Losing Its Objectivity? Nature 382: 751–754.
———. 1996b. 'Post-Academic Science': Constructing Knowledge with Networks and Norms. Science & Technology Studies, January. https://sciencetechnologystudies.journal.fi/article/view/55095.
———. 2002. Real Science: What It Is, and What It Means. Cambridge: Cambridge University Press.
Zitzelsberger, Hilde M. 2004. Concerning Technology: Thinking with Heidegger. Nursing Philosophy 5 (3): 242–250. https://doi.org/10.1111/j.1466-769X.2004.00183.x.

Index

A

Academic science, 9, 107, 108
Actus non facit reum nisi mens sit rea, 185
Actus reus (guilty act), 185
Smith, Adam, 121
Agar, Nicolas, 36, 38, 41, 45
Altruism, 21, 24, 84, 93–95, 126
Amygdala, 2, 55, 58, 63, 150
Anatomic-processing schema, 56
Angular gyrus, 58
Anterior cingulate cortex (ACC), 58, 63
Anthropocentric ideal, 36, 38, 41, 45
Anthropological identity crisis, 193–201
Anthropological integrity of medicine, 194
Anthropological revolution, 197
Anthropology, 69, 128, 195–197, 200, 220
Antisociability, 151
Antisocial behaviors, 22, 29n6, 140, 143, 145, 148, 151, 154
Aristotle, 18, 66–74, 76, 140
Arkin, Ronald, 203
Artificial Intelligence, 16, 112, 128, 201
Augmented justice, 95
Autonomous self, 74–76

B

Bacon, Francis, 13, 37
Baudrillard, Jean, 77
Bayertz, Kurt, 171

 Note: Page numbers followed by ‘n’ refer to notes.


© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 F. Jotterand, The Unfit Brain and the Limits of Moral Bioenhancement, https://doi.org/10.1007/978-981-16-9693-0



Biopolitics, 195
Biotechnologies, 8, 16, 20, 24, 35, 46, 47, 87, 129, 135n4
Birnbaum, Karl, 143, 144
Blair, James, 145, 158
Blood-Oxygen Level Dependent (BOLD) response, 150, 153
Bouleusis, 70
Brain Computer Interface (BCI), 3, 5, 8, 128, 153, 196, 207
Brain interventions, 8, 10, 48, 49, 51, 52, 171–188, 205, 208, 211, 212, 219
Brain structure, 1–3, 9, 25, 57, 61, 64, 65, 157
Bundy, Theodore Robert "Ted," 1–3, 180

C

Capacity, 4, 7, 8, 13, 14, 17, 21, 23, 34, 35, 37, 38, 41, 43–48, 50–52, 62, 64, 65, 67–69, 74, 77, 79, 85–88, 90, 91, 93, 96n3, 131, 132, 143, 144, 153, 154, 171, 176–188, 206, 211, 212
Chadwick, Ruth, 36–38
Character, 7, 16, 17, 20, 21, 66–68, 72, 73, 79, 80, 83, 85, 86, 97n11, 132, 134n3, 142, 144, 185, 219
Character traits, 24, 47, 85, 93, 150, 174, 205, 211, 212
Child sex robots, 203
Cilliers, Paul, 129, 130
Cima, Maaike, 180, 181
Cleckley, Hervey, 145–147
Clinical ideal, 8, 33–52, 177
Coercion, 10, 175, 209
Coercive measures, 175
Cognitive and emotional integration theory, 62
Cognitive control and conflict theory, 62
Cognitive dysfunction, 179
Cognitive intelligence, 93
Cognitive liberty, 205, 214n4
Commercialization of knowledge, 113, 114
Common good, 28n3, 107–133, 176
Consumer neurotechnologies, 113, 208
Criminal behavior, 6, 133, 143, 158, 172
Criminal brain, 4
Criminal mind, 4
Criminal psychopaths, 2, 10, 148, 153, 171, 172
Criminal recidivism, 148
Criminal responsibility, 182, 184–187
Criminology, 6, 158
Critical theory, 108
Crutchfield, Parker, 25, 26, 174, 197
"Curbing Realistic Exploitative Electronic Pedophilic Robots Act 2.0" (CREEPER Act 2.0), 203

D

Dangerousness, 174–176, 188, 220
Darwin, Charles, 14
de Condorcet, Marquis, 13, 14, 37
de Tocqueville, Alexis, 116


Deconstruction, 108, 109
Deep Brain Stimulation (DBS), 3, 6, 59, 140, 174
Deliberative democracy, 9, 115–133
Democracy, 20, 23, 94, 95, 109, 116, 117, 119, 123–126, 131
Descartes, René, 13, 37
Desert, 172, 173
Desire, 4, 16, 17, 21, 34, 64, 65, 69–74, 80, 88, 91, 94, 134n3, 178, 180, 181, 188n1, 198, 200, 211
Deviant sexual demeanor, 153
Disease, 35, 41–43, 46, 59, 141, 144, 158, 159n2, 160n6, 176, 177, 186, 194, 208
Disease of the moral faculty, 141
Disrupted self, 80–83, 86
Donovan, Dan, 203
Douglas, Thomas, 7, 16, 17, 24, 26, 34, 47, 88, 205
Dualism, 23, 200, 207
Duff, Antony, 178, 180, 188–189n1

E

Economic incentive, 112
Embodied identity, 205, 212
Embodiment, 194, 200, 205
Emotional intelligence, 93
Emotions, 6, 8, 9, 17, 34, 55–58, 61–65, 76, 83–86, 88, 91–93, 146, 148–150, 153, 178, 180, 184, 189n1, 203, 207
Emotivism, 74, 78
Emotivist self, 74, 79, 80


Empathy, 3, 17, 24, 47, 58, 84, 87, 148, 149, 154, 177, 178, 180, 203
Enframing, 197–199
Enhancement, 8, 13–16, 21, 23–25, 33–51, 83, 88, 93, 160n7, 200
Enhancement-disabled, 44–45
Enhancement-nondisabled, 44–45
Enhancement technologies, 15, 35, 40, 50
Enlightenment, 13, 28n5, 77, 108, 122, 134–135n3
Episteme, 69, 96n4
Equality, 18, 116, 118, 211
Ethical naturalism, 59, 60, 94
Ethical uniqueness, 206
Euboulia, 70
Eudaimonia, 67, 73
Eusunesis, 70
Existential uniqueness, 206
Expressive power, 75

F

Fear conditioning, 8, 58, 149, 153
Fine, Cordelia, 148, 177–179
Flanagan, Owen, 59, 60, 90
Folie raisonnante, 141
Fourquet, Jérôme, 196, 213n1
Fractured culture, 119
Freedom, 19, 27n2, 59, 75, 80, 116–118, 121–123, 131, 132, 211
Free-will, 5, 208
French Revolution, 19, 116
Frontal lobe, 4, 57
Functional neuro-imaging technologies, 149

G

Gadamer, Hans Georg, 194
Gage, Phineas, 3, 4, 59
Genetic engineering, 22
Gnome, 70, 73, 96n6
The good, 5, 13, 21, 61, 64, 68, 72–74, 78, 80, 81, 83–86, 91, 92, 94, 118, 123, 174, 194, 195
Good deliberation, 69–73, 85
Good life, 5, 64, 72, 73, 80, 84, 86, 91, 94, 210
Gutmann, Amy, 9, 109, 124–127, 130–133

H

Habermas, Juergen, 77, 80, 121
Hare, Robert, 147, 148, 177
Harrington, Anne, 212, 213
Harris, John, 7, 15, 17, 21, 22, 33, 34, 47
Having character, 85
Hayek, Friedrich, 121
Health, 2, 4, 15, 24–26, 34–36, 42, 43, 46, 95, 120, 139, 146, 194, 195, 197, 209, 219
Healthy democracy, 117
Hegel, Georg Wilhelm Friedrich, 19, 28n4
Heidegger, Martin, 194, 197–200
Henderson, David, 145
Hippocampus, 58
Human capacity, 13, 38, 41, 43, 45, 46, 50, 51, 171
Human character, 7, 16–17, 83
Human dignity, 75, 188, 201, 206, 210, 211
Human enhancement, 13–16, 35–38
Human evolution, 15
Human flourishing, 43, 64, 86, 90–92, 94, 195, 201
Human identity, 10, 81, 193, 200–202
Human moral psychology, 7, 55, 64, 89, 91, 92
Human nature, 14, 28n5, 42, 45, 56, 59, 68, 76, 128, 171, 201, 202, 207
Human psychology, 21, 65, 92
Human species, 14–18, 20, 38, 40, 42, 45, 89, 93, 202, 220
Hume, David, 61, 76
Hunter, James D., 26–27n1, 89–91, 109

I

Ideational uniqueness, 206
Identity, 5, 10, 55, 56, 59, 77, 80–85, 87, 94, 110, 118–120, 132, 193–201, 205–209, 211, 212, 219
  integrity, 10, 23, 193–213
  politics, 82, 119, 126, 131, 132
Idiographic uniqueness, 206
Incapacitation, 177–180, 182, 184, 186
Individualism, 78, 83, 116, 118, 130
Individuation, 206, 207
Industrial Revolution, 18, 111
Insanity defense, 185–188
Institutions, 7, 17, 117, 118, 123, 125, 127, 130–133, 173, 210
Insular cortex, 58
Integrative nondichotomist (IND) position, 63
Intellectual blindness, 85, 86


Intellectual capabilities, 88
Intellectual virtues, 69
"Irresistible Impulse" Test, 186

J

Jecker, Nancy S., 202
Justice, 10, 18, 47, 93–95, 119, 123, 156, 171–174, 186–188, 209

K

Kant, Immanuel, 61, 62, 74–76
Kennett, Jeanette, 148, 177–179
Koch, Julius Ludwig August, 142, 143
Kraepelin, Emil, 143
Kurzweil, Ray, 14, 15

L

Legal responsibility, 5, 10, 176, 183–188
Levin, Yuval, 109, 117, 131, 132
Liberal democracies, 20, 95, 109, 119
Limbic lobe, 58
Limiting retributivism, 172
Lobotomy, 3, 59, 174
Lombroso, Cesare, 158
Loss of community, 131

M

M'Naghten rule, 186
Machiavelli, Niccolo, 19
MacIntyre, Alasdair, 76, 78, 79, 83–86, 97n11, 109, 120, 126, 134–135n3, 212
Maibom, Heidi L., 178
Managed consensus, 122
Mania without delirium, 141, 142
Marketability, 9, 113, 114, 197
Marx, Karl, 19, 134n2
Wartofsky, Marx W., 111, 135n4
Medial frontal gyrus, 55
Mens rea (guilty mind), 185, 186
Mental impairment, 45, 69
Mental incapacitation, 182, 186
Mental integrity, 210, 211, 214n4
Mental privacy, 193, 208, 214n4
Metanarrative, 134n1, 139
Mill, John Stuart, 114
Millon, Theodore, 140–144
Milton, John, 19, 28n3
Mitchell, Joshua, 82, 109, 119, 120
Model Penal Code, 186
Modernism, 77, 134n2
Modern self, 74–86
Crockett, Molly J., 154
Monoamine oxidase A (MAO-A) gene, 151
Moral agency, 7, 9, 35, 64–66, 68, 70, 72–74, 79–81, 84–86, 91, 92, 177–181, 219
Moral agent, 7, 8, 25, 56, 71, 74–76, 79, 81–84, 86, 92, 94, 118, 134n3, 177, 179, 180, 182, 183
Moral bioenhancement, 4, 6–9, 13–26, 33–52, 74, 84–88, 91–95, 139–140, 155, 197, 208, 211–213, 219, 220
Moral capacities/moral capacity, 7, 8, 14, 17, 23, 47, 64–66, 74, 84, 86, 91, 96n3, 183, 184
Moral cognition, 5, 6, 180, 181
Moral communities, 118
Moral content, 64–66, 74, 84, 91, 92, 96n3, 183–185
Moral/conventional distinction, 179
Moral doctrines, 87, 89, 90, 123
Moral education, 5, 6, 18, 21, 87, 183
Moral insanity, 1, 140–147
Moral intuitions, 58, 60, 61, 81
Morality, 4, 5, 8, 9, 20, 22–24, 26–27n1, 33, 52, 55–95, 109, 114, 116, 118, 122, 123, 130, 134n3, 139, 154, 179, 183, 185, 213, 219
Moral judgments, 3, 6, 58, 60–65, 78, 81, 84, 86, 90, 92, 93, 98n12, 153, 154, 160n7, 180, 181, 183
Moral molecule, 22, 23, 29n6, 155
Moral motivation, 8, 65, 88–93, 178, 184
Moral neutrality, 9, 66, 83–84
Moral pathologies, 5, 8, 9, 34, 35, 41, 44, 47–52, 139, 153, 174, 193, 202, 208, 212, 213, 219, 220
Moral progress, 7, 22, 23, 93, 95
Moral psychology, 7, 18, 55, 60, 61, 64, 74, 75, 89–92, 96n1, 181
Moral reasons, 86, 130, 156, 178–182, 184
Moral responsibility, 177–179, 182–184
Moral self, 8, 55, 61–66, 78, 83, 86, 97n10
Moral sensitivity, 14
Morse, Stephen J., 175, 176, 184, 185, 187
Motivation, 21, 33, 65, 82, 88, 92–94

N

Narratives, 77–79, 83–85, 110, 134n1, 152, 156, 206, 212
Nedelisky, Paul, 26–27n1, 89–91, 96n1
Negative retributivism, 172
Neuroanatomical abnormalities, 1–3
Neurobiology, 55–95
Neurobiology of morality, 5, 55–61
Neuroessentialism, 55
Neuroethics, 5, 95
Neurogenetics studies, 151
Neurohabilitation, 153
Neurointerventions, 47, 140, 174–176, 188, 193, 205, 207, 209–212, 220
Neuroprosthetics, 5
Neurorights, 208, 214n4
Neurostimulation, 3, 5
Neurosurgery, 174
Neurotechnologies, 1, 5, 7–10, 25, 34–36, 46–48, 86, 112, 113, 115, 128, 129, 133, 139–159, 160n6, 174, 175, 188, 193, 202, 204, 205, 207, 208, 212, 213, 219, 220
Neutral morality, 78
Nietzsche, Friedrich, 15
Non-offender paradigm, 176
Norm of utility, 112, 114

O

Oak Ridge Social Therapy Unit, 152
Objective ideal, 36, 38, 41, 45
Obsessive-compulsive disorder (OCD), 154
Offender paradigm, 175, 176
Orbital frontal cortex (OFC), 2, 55, 58, 63, 150, 157
Origins of morality, 60, 90
Oxytocin, 22–24, 29n6, 29n7, 30n8, 154, 155

P

Parietal lobe, 58
Pathological behavioral dysfunction, 57
Pathologizing, 6, 26, 87–95
Pedophilia, 2, 153, 174, 204
Performativity, 114, 115
Personal identity, 5, 55, 59, 77, 205–209, 211, 212, 219
Personhood, 75, 82, 194, 196, 207, 212
Persson, Ingmar, 7, 8, 16–18, 20–25, 33, 34, 47, 87–89, 92–95, 153, 155, 197
Philosophical anthropology, 128, 195, 220
Philosophical religious and moral doctrines, 123
Phronesis, 66–74, 85, 96n4
Phronimos, 72–74, 96n6
Pinel, Philippe, 141, 142
Pinker, Steven, 23, 94, 95, 108, 109
Plato, 67
Pluralism, 82, 107, 108, 115, 119, 122, 128, 131
Politicization of technology, 111, 112
Porter, Roy, 142
Positive retributivism, 172, 173
Post-academic science, 9, 107–111, 135n5
Posterior cingulate cortex (PCC), 58
Postmodern context, 77
Postmodern ideology, 108
Postmodernism, 77, 96n7, 108, 109, 134n2
Postmodernity, 77, 78, 97n8, 108–115, 133n1, 134n2
Pouliquen, Laetitia, 196, 213–214n2
Practical rationality, 76, 135n3, 180
Practice, 14, 45, 46, 67, 79, 82–84, 110, 113–115, 117–120, 122, 124–126, 129, 152, 174, 183, 187, 193–196, 202, 220
Prefrontal cortex (PFC), 59, 149, 153
Presumed dangerousness, 175
Prevention, 25, 157, 174–176, 220
Prichard, James Cowles, 142
Proairisis, 72
Problem of democracy, 116
Procedural Integrated Model (PIM), 128
Process of rational persuasion, 121
Psychiatric disorders, 5, 8, 34, 35, 41, 44, 45, 47–52, 139, 153, 156, 193, 202, 208, 211
Psychological continuity, 205, 211, 212, 214n4
Psychologized expressive individual, 79
Psychopath, 1–3, 37, 146, 147, 176, 182, 185, 186, 188n1, 209, 211
Psychopathic brain, 8
Psychopathic criminal, 144
Psychopathic inferiority, 142
Psychopathic personalities, 6, 143, 144, 151
Psychopathic traits, 10, 35, 36, 48, 51, 52, 140, 144, 149, 151, 153, 154, 156, 188
Psychopaths, 2, 3, 10, 139, 141, 143–146, 148, 152, 153, 155, 157, 171, 172, 174, 176–183, 186–188, 206, 211
Psychopathy, 1, 5, 9, 10, 35, 47, 48, 139–158, 159n2, 160n8, 177–182, 184, 186–188, 213, 219
Psychopathy Checklist (PCL), 147–149, 187
Psychotropic drugs, 3
Public good, 82, 83, 109, 118
Public reasons, 123, 124, 126
Punishment, 10, 58, 149, 171–177, 182, 184, 188, 205, 208–210

Q

Quality of life, 23, 36, 37, 45–51, 95

R

Rational capacity, 52, 178–180, 184
Rationalist dichotomy (RD) position, 62
Rawls, John, 123, 124
Real time functional Magnetic Resonance Brain-Computer Interface (rtfMRI–BCI), 6, 140
Real time functional Magnetic Resonance Imaging (rtfMRI), 8, 153
Reason, 7, 14, 17, 29, 35, 37, 45, 48, 52, 55, 61–66, 68–71, 73, 75–77, 85–87, 92, 94, 95, 115, 118, 120, 123–126, 128, 130, 132, 139, 151, 152, 154, 156, 172, 176–182, 184, 186–188, 203, 204, 211
Reasoning, 9, 14, 15, 17, 26, 62, 65, 69, 70, 72, 73, 78, 84–86, 94, 95, 124–127, 133, 134n3, 141, 142, 145, 180, 182, 186
Recidivism, 148, 152, 173
Reciprocity, 130, 132, 154
Rehabilitation, 2, 9, 156–158, 172–174, 209
Rehabilitationist approaches, 172
Responsibility, 4, 5, 10, 35, 49, 95, 111, 115, 118, 146, 148, 171–188, 202, 206, 210, 211
Responsible citizenship, 118
Retribution, 172–174
Revolution, 18–20, 28n4, 75, 111, 112, 135n4, 197, 212, 213
Rolston, Holmes, 206
Roskies, Adina, 5
Rush, Benjamin, 141
Russell, Bertrand, 87

S

Sacks, Jonathan, 130, 131
Sadler, John, 96n3, 206
Savulescu, Julian, 7, 8, 16–18, 20–25, 33, 34, 47, 87–89, 92–95, 153, 155, 197
Schaich Borg, Jan, 58, 179
Scheler, Max, 195, 196
Schneider, Kurt, 144, 145
Science, 9, 13, 14, 18, 19, 23, 25, 28n4, 29n5, 46, 56, 59, 64, 77, 89–91, 94, 107–115, 120–122, 124, 127–130, 132, 133, 135n5, 135–136n6, 194, 196, 198, 199
Science of morality, 23, 33, 60, 87–91
Scientific progress, 14
Scientific realism, 110
Scientific relativism, 110, 111
Scientific Revolution, 18
Selective Serotonin Reuptake Inhibitors (SSRI), 6, 140, 153–155, 160n7
Self, 2, 8, 55, 61–68, 73–87, 96n2, 97n10, 131, 205, 206, 208, 209, 211, 212
Selfie man, 119
Sentencing, 10, 171–173, 184, 187, 188, 205, 209, 210
Septum, 58
Serotonin, 24, 154, 160n7
Sex robots, 202–204
Shakespeare, 141
Sinnott-Armstrong, Walter P., 61, 177, 179–181
Snead, O. Carter, 120, 195, 200
Social accountability, 114
Social institutions, 7, 17, 131, 173
Social intuitionist theory, 62
Solomon, Jean-Jacques, 111
Sophia, 69, 96n4, 96n5
Species-typical levels of species-typical functions, 42, 43
Standard of proportionality, 173
Standing-reserve, 198, 199
Striatum, 63, 150
Structural neuro-imaging techniques, 149
Stuerup, Georg, 139, 140
Substantive principles, 127, 130, 133
Sunesis, 70, 73, 74
Superior temporal sulcus (STS), 55, 58
Survival, 17, 18, 20, 23, 41, 42, 60, 93–95, 220

T

Takagi, Shin, 203, 204
Taylor, Charles, 75, 80–83, 97n9, 97n10, 212
Techne, 69, 96n4
Technology, 8, 15–17, 19, 24–26, 35–38, 40, 41, 48, 50, 94, 109–115, 120–122, 127–130, 132, 133, 149–151, 153, 158, 159, 159n4, 171, 176, 193–202, 204, 205, 207–209, 213, 213–214n2, 220
Techno-science, 107–133, 136n6, 197, 198
Temporal gyrus, 58
Temporal lobe, 3, 58
Temporo-parietal junction (TPJ), 58
Thalamus, 55, 58
Theophrastus, 140
Theory of evolution, 14
Therapy, 35–41, 44, 46, 152, 153, 174
Theta burst stimulation, transcranial, 153
Thompson, Dennis F., 9, 109, 124–127, 130–133
Thresholds of capacity, 182
Traditional self, 79
Tradition-independent morality, 78


Traditions, 77–80, 82, 83, 110, 129
Transcranial Magnetic Stimulation (TMS), 3, 153
Transdisciplinary, 9
Trueman, Carl R., 79, 83, 87
Truth, 59, 91, 109, 111, 112, 114, 115, 131, 134n1

U

Übermensch, 15
Ultimate Harm, 25, 94

V

Ventromedial prefrontal cortex (VMPFC), 57, 63, 149
Virtues, 9, 29n5, 66–74, 83, 84, 86, 97n11, 117, 118, 125, 126, 131, 133, 210
Virtues of citizenship, 132
Virtuous person, 70–73
Virtuous self, 67, 68
Volition, 8, 64, 143, 156, 185

W

Weakness of the will, 8, 92
Western culture, 10, 15, 83, 109, 193, 200, 213n1
Wisdom, 69, 71, 72, 85, 93
Wiseman, Harris, 154, 155

Y

Yates, Andrea, 180, 185

Z

Zak, Paul, 22–24, 29n6, 29n7
Zellick, Graham, 210