
BIOLUST, BRAIN DEATH, AND THE BATTLE OVER ORGAN TRANSPLANTS

Also Available from Bloomsbury:
Defining Shugendo, Edited by Andrea Castiglioni, Fabio Rambelli, and Carina Roth
Dynamism and the Ageing of a Japanese "New" Religion, Erica Baffelli and Ian Reader
Spirituality and Alternativity in Contemporary Japan, Ioannis Gaitanidis

BIOLUST, BRAIN DEATH, AND THE BATTLE OVER ORGAN TRANSPLANTS

America’s Biotech Juggernaut and Its Japanese Critics

William R. LaFleur
Edited by Edward R. Drott

BLOOMSBURY ACADEMIC
Bloomsbury Publishing Plc
50 Bedford Square, London, WC1B 3DP, UK
1385 Broadway, New York, NY 10018, USA
29 Earlsfort Terrace, Dublin 2, Ireland

BLOOMSBURY, BLOOMSBURY ACADEMIC and the Diana logo are trademarks of Bloomsbury Publishing Plc

First published in Great Britain 2023

Copyright © The Estate of William R. LaFleur, Edward R. Drott, and contributors 2023

The Estate of William R. LaFleur has asserted its right under the Copyright, Designs and Patents Act, 1988, to be identified as Proprietor of this work. Edward R. Drott has asserted his right under the Copyright, Designs and Patents Act, 1988, to be identified as Editor of this work.

Cover design: Tjasa Krivec
Cover image © DrAfter123/iStock

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

Bloomsbury Publishing Plc does not have any control over, or responsibility for, any third-party websites referred to or in this book. All internet addresses given in this book were correct at the time of going to press. The author and publisher regret any inconvenience caused if addresses have changed or sites have ceased to exist, but can accept no responsibility for any such changes.

A catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2022940068

ISBN: HB: 978-1-3502-5499-2
ePDF: 978-1-3502-5500-5
eBook: 978-1-3502-5501-2

Typeset by Integra Software Services Pvt. Ltd.

To find out more about our authors and books visit www.bloomsbury.com and sign up for our newsletters

CONTENTS

List of Contributors vii
Foreword viii
Introduction 1
Chapter 1 Surgical Masks 23
Chapter 2 Organ Panic 37
Chapter 3 Sweating Corpses 45
Chapter 4 Heart to Heart 59
Chapter 5 Fear as Discovery's Instrument 73
Chapter 6 Sectioning Human Nature 85
Chapter 7 Campaign for Miracles 101
Chapter 8 Waste Management—As Philosophy 115
Chapter 9 Sidelining a Skeptic 129
Chapter 10 Closeted Medical Bombs 145
Chapter 11 Immortality and Desire 161
Conclusion 173
Appendix 190
Notes 205
References 222
Index 236

CONTRIBUTORS

William R. LaFleur was E. Dale Saunders Professor of Japanese Studies at the University of Pennsylvania. A prolific scholar, he left behind an influential and eclectic body of work. His early research centered on the intersection of Buddhism and literature in medieval Japan. In his later work, he took up questions related to religion and bioethics. His notable publications include The Karma of Words: Buddhism and the Literary Arts in Medieval Japan (1983), Liquid Life (1992), Dark Medicine (2007), and his essay "Body" in Critical Terms for Religious Studies (1998).

Edward R. Drott is Associate Professor of Japanese Religions at Sophia University in Tokyo. His research has examined the intersections of religious and medical knowledge in premodern Japan, and the ways in which religious ideas and practices have helped shape perceptions of old age in early, medieval, and contemporary Japan. His publications include Buddhism and the Transformation of Old Age in Medieval Japan (2016), "Aging Bodies, Minds and Selves: Representations of Senile Dementia in Japanese Film" (2018), and "Gods, Buddhas and Organs: Buddhist Physicians and Theories of Longevity in Early Medieval Japan" (2010).

Amy Borovoy is Professor at Princeton University in the Department of East Asian Studies in the field of cultural anthropology. Her work focuses on biomedicine, public health, and mental health care in Japan, and Japan's family-oriented social welfare system. Her first book, The Too-Good Wife: Alcohol, Codependence, and the Politics of Nurturance in Postwar Japan (2005), explored alcoholism, recovery, and the gendered division of labor in Japan. She is currently working on a manuscript, provisionally titled A Living Laboratory: Japan in American Social Thought, concerning postwar Japan studies in the United States, revisiting canonical texts as they explored Japan's modernity.

Susumu Shimazono is Professor Emeritus of the University of Tokyo and, currently, Professor in the Graduate School of Applied Religious Studies of Sophia University. He has taught at the University of Chicago, L'École des Hautes Études en Sciences Sociales, Eberhard Karls Universität Tübingen, Cairo University, and at Ca' Foscari University of Venice. He has published widely on various topics in modern Japanese religion. His research has also examined ethical questions surrounding the use of various forms of medical technology. He is the author of many works, and among them is From Salvation to Spirituality: Popular Religious Movements in Modern Japan (2004).

FOREWORD

Edward R. Drott

In 2010, William LaFleur died unexpectedly at the age of seventy-three. A prolific scholar in the field of Japanese studies, he left behind a wide-ranging, influential body of work. While his early research centered on Buddhism and literature in medieval Japan, his later work took up questions related to religion and bioethics.1

In the year following his death, his friend and colleague Linda Chance organized a symposium at the University of Pennsylvania to honor his life and his work. It was there that a group of Bill's colleagues, family members, and former students (myself included) met to decide what to do with his unpublished writings. The most significant of these, and apparently the closest to completion, was the present volume—Biolust—a work that examines Japanese debates over the concept of brain death and engages with bioethical questions that grow ever more salient as the technologies we develop to extend life continue to advance. It was decided that I was best suited to oversee the publication of this book since my research on religion and the body was most closely aligned with its subject matter. I was deeply honored, and frankly quite daunted, that this task had fallen to me—at the time a freshly minted assistant professor, not so many years out of graduate school. There was also, however, a consensus that I should secure my own academic career and tenure before devoting time to this project. Although I was grateful for this, it also meant that several years had passed before I was finally able to closely examine the remains of Biolust to see what, if anything, might come of them.

My first foray did not produce promising results. I had in my possession a copy of the digital folders and files that had been salvaged from Bill's computer, as well as a canvas tote-bag full of file folders. These turned out to contain rough drafts of individual chapters or parts of chapters, along with notes, photocopied articles, and newspaper clippings. The digital folders, subfolders, and sub-subfolders—though meticulously indexed by Ken Winterbottom, one of Linda Chance's student assistants—also failed to reveal a full, intact manuscript. Instead, the digital files consisted mostly of jottings, short fragments of chapters, notes for presentations, and various "leftovers." Inquiries with publishers revealed that, although Bill had been in touch with them, none had a copy of the manuscript. Adding to the complication was the fact that Bill had apparently completed two separate versions of this work. Having finished a draft in 2002, he decided, for reasons still not entirely clear, to return to the drawing board. This second, 2010, draft was purportedly quite near completion at the time of his death. Digging through the digital files, however, I found no folder or set of files matching its description.


Finally, after a long series of false starts, I was able to locate large portions of the 2002 manuscript (in an obsolete file format) that were quite close to what appeared to be a finalized form, as well as a hardcopy of the 2010 draft. Reading these chapters for the first time, I remember feeling excited, intrigued, and, at times, startled by the audacity of some of their arguments. I became convinced that this was a work that needed to be published, not just to honor Bill's memory and his family's wishes but to give these highly original and provocative ideas the audience they deserved. Familial and other academic duties, however, soon intervened and the project was, I am ashamed to say, once again relegated to the back burner.

In the intervening years, I continued to field questions from scholars curious about the status of the work, and in the winter of 2020, I found myself with time to begin re-reading the manuscript and comparing the two drafts. In terms of tone, the 2002 draft was more personal and essay-like. The 2010 draft retained LaFleur's first-person voice, but was slightly more "academic" in character. The 2002 draft focused more on Japan, the second draft more on developments in the West. Neither draft presumed deep knowledge of Japan on the part of the reader; both were, like much of Bill's work, written for a wider audience. Bill's writing had always been comparative and sought to highlight the significance and relevance of his discoveries to readers unfamiliar with Japan. This often meant pointing out ways in which Japanese perspectives or practices could serve as a corrective to what he perceived to be the intellectual or ethical blind spots of American society or, at times, American academia.2

So, which draft to publish? Ordinarily, one would opt for the latest version, assuming it to be the most fully developed. But the 2010 draft was clearly less polished than the 2002 draft. As fate would have it, the solution was suggested by a relatively late version of the table of contents found among the files. It included chapter titles from both drafts and seemed to indicate that Bill was planning, ultimately, to combine the two. It was this model I decided to adopt in arranging the current manuscript. But this also meant more editorial decisions were incumbent upon me—to be detailed below. Although I could have forgone many of these editorial dilemmas by publishing everything "as is," I have opted instead to present the most polished portions of the two drafts, pruning out the more fragmentary bits, and to the best of my ability weaving what remained into a unified whole.

Throughout this saga, I was in regular touch with a small group of scholars, consisting of Bill's former students, colleagues, and friends—Linda Chance, Richard Gardner, John Harding, and Jacqueline Stone. This "Biolust brain trust," as I dubbed them, provided me with invaluable advice whenever I found myself at a loss. Bill's wife Mariko and his daughter Kiyomi also provided vital support. Later, two more scholars were added to this circle—Professors Amy Borovoy and Susumu Shimazono—who have generously agreed to provide an introduction and a conclusion to this book. I am deeply indebted to all of these parties. Without them, this project surely would never have come to fruition.

***


In my editing, I have been guided by a few basic principles: to produce a manuscript that presents Bill's arguments in their most cogent and compelling form, to include as much as possible of what remains of the manuscript, and to be as faithful as possible to what seems to have been his vision for the work. To satisfy these imperatives, I have arranged the chapters close to what appears to have been Bill's final version of the table of contents, but have opted, where possible, to include full chapters from the more polished draft, whichever it may be. For readers curious about versions of chapters not used in the current work, I have supplied synopses of the unused materials as an appendix. There were also, I'm afraid, a few instances, described more fully in the appendix, that necessitated the "transplantation" (forgive me, Bill) of certain sections from the body of one draft to another. I have, nonetheless, endeavored to keep such surgical interventions as minimally invasive as possible.

In terms of line-editing I have done my best to employ a light touch. I have limited myself in most instances to correcting typos, tracking down bibliographic details, deleting the occasional interlinear note Bill had left for himself, or at times, relegating to endnotes passages that seem patently unfinished, fragmentary, or out of place. In rare instances, I have included bracketed interpolations to provide additional context. Where relevant, I have described changes or deletions in endnotes.

One way in which, sadly, I was not able to fulfill Bill's original intentions was my decision to abandon the effort to include images. His second draft, in particular, included various illustrations—ranging from pictures of the philosopher Hans Jonas to a still from the Kurosawa film Rashōmon. In some cases, these images provided substantive support for Bill's arguments.3 For the most part, however, they served as embellishment and were not essential. Having already deprived readers of this book for too long, I decided to forego the time-consuming hunt for the origins of these images in a possibly futile effort to obtain permission to include them here. Copyright concerns also, unfortunately, necessitated the removal of much of Bill's evocative epigraphy—a distinctive feature of his published work. Epigraphs that are in the public domain have been retained, and where possible others have been included in footnotes.

Overview and Chapter Summaries

In 1997, after a lengthy public debate, the Japanese government enacted a law to allow people to choose to donate their organs in the event that they were declared brain dead. In the decades since, however, rates of donation have still fallen far below those seen in other countries. Biolust seeks to show that, contrary to what many in the West assumed, Japanese resistance to the practice of cadaveric organ transplantation was not based on the vestiges of some irrational (possibly religious) superstition but on a thorough, rational, public debate on the issue. And it was a debate, LaFleur holds, which was in fact more open and scientifically well informed than that which had occurred in the United States.


LaFleur argues forcefully that people in the English-speaking world should listen to Japanese critics of the concept of brain death, for in the end, he finds their arguments to be more convincing than those offered by the pro-transplantation camp. Furthermore, they hold a warning about what is to come—the dangers of what LaFleur refers to as a steadily approaching "biotech juggernaut." Importantly, LaFleur does not limit himself to merely reporting on these bioethical debates, but joins them. In the process, he stakes out a highly original position that fails to fall neatly onto either side of the ideological divide easily recognizable in contemporary US culture wars. LaFleur argues that discussion of organ transplantation in North America has been one-sided and seeks to give voice to perspectives that he believes have been marginalized. The present volume, with the inclusion of chapters by Professors Borovoy and Shimazono, as well as my own endnotes, strives to include a more diverse and balanced set of views. Nonetheless, as readers will quickly observe, Biolust is not a neutral, dispassionate study. Nor does it pretend to be.

In the Introduction, Amy Borovoy seeks to bring this work into conversation with recent scholarship. She situates Biolust in the context of contemporary scholarship on, and contemporary practices of, transplantation in Japan and the United States. Drawing on the work of ethicists, neurologists, historians, and medical anthropologists, she shows that brain death remains a fraught concept, but that scholars and medics have become, if anything, more sensitive in the decade since LaFleur's passing to the ambiguities and ethical gray areas it presents. Although most in Japan were and remain highly skeptical of the concept of brain death, there have also been those who wished to allow cadaveric organ transplantation. Since LaFleur's purpose was to highlight arguments against transplantation, which he saw underrepresented in Western bioethical debates, he spends little time detailing pro-transplant positions. Borovoy remedies this by including the arguments of Japanese proponents of cadaveric organ transplantation. She also provides a fuller picture of the clinical procedures and ethical protocols involved in the diagnosis of brain death and organ transplantation in North America.

Chapter 1, "Surgical Masks," describes the origins of this study, LaFleur's initial skepticism of Japanese resistance to organ transplantation, and his gradual realization that the positions articulated by Japanese bioethicists were based not on some stereotypical, fuzzy "Eastern" spirituality but a better understanding of the science, and better reasoning. The title of the chapter, "Surgical Masks," is based on LaFleur's observation that, contrary to the canard that Japan is a society of masks, Japanese debates have been much more open (for those who can read the language). Instead, LaFleur claims, it is North American supporters of cadaveric organ transplantation who have presented the public with an overly vague and simplistic image of brain death.

Chapter 2, "Organ Panic," details some of the overlooked negative social impacts of organ transplantation. LaFleur notes that given the continuing lack of viable organs for transplant, those needing organs often come to feel that their lives are at risk not due to some underlying medical condition but due to the fact that some unknown individual has, for whatever reason, failed to become an organ donor.


Chapter 3, "Sweating Corpses," deals with the unsettling new category of human being our technology has brought into existence: one simultaneously living and dead.4 LaFleur deals with Japanese responses to the ways in which brain-dead patients continue to show signs of life, including well-documented cases of "postmortem pregnancy."

Chapter 4, "Heart to Heart," centers on the multiple meanings associated with the Japanese term kokoro—the metaphorical "heart" understood as the seat of mind, emotions, and moral intuition—and explores the role of emotion in ethical decision-making. LaFleur reports on potential Japanese organ recipients who confess feeling morally conflicted about the procedure, and provides religious and cultural context to help clarify these perspectives. He traces the history of efforts to identify the organ responsible for human consciousness, which culminated in the scientific consensus that the brain is the source of our mental life. LaFleur points to the intriguing research of Professor Kōshirō Tamaki, who highlights the continuity between the brain, the spinal cord, and by extension the rest of the body. The brain (and, LaFleur implies, the mind) inhabits a curious kind of a house: one with a roof in the form of the cranium, but no discernible floor.

Chapter 5, "Fear as Discovery's Instrument," inspired by Hans Jonas's proposal that fear can be "heuristic," seeks to examine and interpret, rather than dismiss, the deep unease felt in both Japan and the United States around the subject of organ transplantation. Part of this fear stems from the fact that the concept of brain death requires us to abandon the long-standing tradition that death could be verified by three universally observable signs: cessation of respiration, cessation of heartbeat, and dilation of the pupils. Ethicists in Japan have most often favored stricter, more conservative criteria for determining death. LaFleur notes that those who understand the science best are forced to admit that brain death is a more ambiguous condition than often portrayed. Many Western bioethicists who admit that brain death is a troubled category, however, reject instituting stricter criteria because that would make it impossible to obtain vital organs. For LaFleur, this exposes an uncomfortable truth: it is not the case that our definition of death has become more precise based on advances in technology, allowing organ transplantation as a side benefit. Rather, he claims, following Hans Jonas, the definition of death was changed in order to create a supply of organs.

In Chapter 6, "Sectioning Human Nature," LaFleur discusses how practitioners of biomedicine are often blind to the metaphysical assumptions still at work underneath their profession's veneer of rationality and empiricism. In short, according to LaFleur, many doctors, scientists, and bioethicists implicitly rely on a crude, scientifically debunked model of consciousness, which Daniel Dennett has labeled "the Cartesian theater" (1991). LaFleur notes the pervasiveness of the Cartesian notion that the essence of humanity is consciousness, mind, or rationality. From this it follows that the non-thinking body need not be regarded as human. According to LaFleur, part of the reason the Cartesian model of human nature has become the default is that the humanities have, by and large, abandoned the question of what it means to be human.


By not entertaining discussions of human nature, humanists have left it to the medical profession and our Cartesian intuitions to define. To contrast with this indifference, LaFleur introduces Japanese intellectuals who have raised the alarm that our very humanity might be at stake in the current trajectory of biomedicine. He concludes by proposing an approach, following Wittgenstein, for thinking about human nature without falling into the traps of reductivism or essentialism.

Chapter 7, "Campaign for Miracles," traces the process by which American Christians and Jews were "converted" to the idea that organ transplantation was consistent with Judeo-Christian values—a process less straightforward than many might expect. LaFleur hopes to move beyond the crude caricatures of Western/Christian culture often put forward by conservative Japanese critics of organ transplantation—those Margaret Lock labels "traditionalists." It is not the case, LaFleur shows, that Judaism and Christianity have always held a Platonic view of the body as a prison of the soul. Pointing to the long-standing Christian doctrine of the resurrection of the body, and Christ's treatment as a healer of physical bodies, he notes that the body was given a more positive valence by the church in earlier centuries. Although it might seem common sense today that organ transplantation can be interpreted as a quintessentially Christian act of selfless giving, LaFleur historicizes our current understanding of Christian charity. He also points to Japanese critics of Western bioethics who have attacked the very notion of universal love as an unrealistic ideal. "This is not to deny the possibility of altruism but," LaFleur writes, "rather, to express doubt about the wisdom of constructing an ethic that would implicitly denigrate or downgrade existing structures of inter-personal bonding."

Chapter 8, "Waste Management—As Philosophy," presents a critique of the strong utilitarian undercurrent to much of European, North American, and Australian bioethics, which, in LaFleur's view, has become little more than a public relations arm for technological medicine. In Japan, discomfort with utilitarianism is widespread and cuts across the political as well as the traditionalist/nontraditionalist spectrum. LaFleur's analysis includes sources ranging from literary works, to works of philosophy (in particular, those of the Kyoto School), to the writings of numerous bioethicists. LaFleur concludes with a warning that once "cost-benefit analysis" becomes the ultimate means of judging what should or should not be allowed, we eventually find ourselves running to extremes—including a proposal that brain-dead individuals, redefined as "neomorts," be kept on life support so that they can be used as experimental guinea pigs or factories for blood, antibodies, and other biological products. Why, LaFleur asks, do we blanch at such proposals, and not at the practice of "harvesting" organs from neomorts and then letting them die?

Chapter 9, "Sidelining a Skeptic," begins with LaFleur's observation that, while he had long been familiar with the work of philosopher Hans Jonas, he had not been aware of his writings on the ethics of organ transplantation until reading about them in several Japanese sources. Why, he wondered, had English-language scholarship completely ignored this aspect of Jonas's work? And why had Japanese thinkers been so eager to take it up? LaFleur provides some historical background that helps explain why Jonas's position on brain death was sidelined in the West.


Jonas himself realized he was promoting ideas that were out of step with the emerging bioethical consensus, titling his essay opposing the redefinition of death "Against the Stream."

Chapter 10, "Closeted Medical Bombs," further contextualizes Japanese and American attitudes toward medical technology by comparing ethically problematic medical research undertaken by both nations during and in the immediate aftermath of the Second World War, and the collective responses to those actions. LaFleur begins by noting that societal attitudes toward bioethics are related not just to culture but also to a nation's historical experience. This, LaFleur notes, has nothing to do with Orientalist stereotypes of Asia as "irrational," "nonscientific," or "traditional." At one stage in its history, Japan was quite optimistic about scientific medicine. In fact, the world was impressed by Japanese battlefield medicine in 1905. But the same trust in scientific medicine and technology in aid of military victory led, in the 1930s and 1940s, to the depredations of Unit 731, the military unit that carried out horrific experiments on prisoners of war and civilians during the Second World War. The rationale in both cases was the prevention of casualties. In the case of Unit 731, lives would be taken with the understanding that the scientific data so gathered would save others. LaFleur chastens Americans who might feel a sense of moral superiority. As historians have shown, the United States orchestrated the cover-up of Unit 731's activities in order to gain access to the data—another expediency justified, this time, by the Cold War. Americans also gathered medical data from atomic bomb victims by ethically bankrupt means, instituting a "no treatment" policy in order to trace the course of radiation sickness. All of this was undertaken with the rationale that collecting this data would eventually save lives, and was thus "too valuable to waste." The chapter ends with a partial endorsement of Susan Sontag's call for excising metaphors from medical discussions. LaFleur sees war metaphors as especially pernicious. Declaring war on a given disease or condition puts researchers and the public on a footing that will allow ethical norms to be sacrificed in the name of "winning" the battle and "saving lives."

Chapter 11, "Immortality and Desire," provides an exploration of the intellectual historical underpinnings of deep-seated attitudes in Japan and the West toward the promises and dangers of biotechnology. Intellectual history, as he has shown in earlier chapters, is not destiny but can play a role in shaping attitudes and provide conceptual repertoires for people of the present day seeking to piece together a moral stance when confronted with new circumstances. As the title indicates, his main targets here are the contrasting religio-cultural views of the promise of immortality found in Christianity, Enlightenment philosophy, and Mahayana Buddhism. Buddhism famously encourages an acknowledgment and acceptance of impermanence and the fleeting nature of human existence. According to LaFleur, Japanese Buddhists fault the way new biotechnology "throws emotional gasoline on the old embers of a longing for bodily immortality."


Western intellectual history, on the other hand, has not always treated immortality as an improper object of desire, or even as something inherently "unnatural." He notes that in Genesis the original human state is one of health and immortality. It is only through the advent of sin that our lives come to be cut short. Interestingly, LaFleur shows that Enlightenment thinkers, even while challenging and overturning many religious dogmas, unwittingly became heirs of Christian attitudes toward immortality—the view that death was some kind of error that could be corrected, this time, however, through science.

Since LaFleur left us no conclusion, Professor Susumu Shimazono has graciously offered to pen one. In this chapter, he reconsiders LaFleur's arguments in the context of a broader set of Japanese bioethical discussions and developments. In addition to providing an overview of the current status of organ transplantation in Japan, he analyzes some of the characteristics of Japanese bioethical thought identified by LaFleur and considers Japanese bioethical thought in a global context, asking how the positions staked out in Japan compare to those articulated in other countries. In the final section of this chapter, Shimazono draws out and highlights some of the bioethical concerns identified in Biolust that can shed light on other areas of debate.

***

Bill had no interest in establishing a "LaFleur school" or cultivating disciples among his students. Thus, it would not surprise or trouble him to learn that while there is much here with which I enthusiastically agree, there are also many areas in which our views differ significantly. Nonetheless, I appreciate this work perhaps especially for the ways in which it has challenged me to reflect on and reevaluate my own perspectives. Similarly, I would ask those who have strong views on these issues to read on with an open mind. Even those who disagree with the book's conclusions will, I am sure, find much here of value.

While the book strongly favors one side of the brain-death debate, one of its main goals is actually to bring attention to the complexity of the issue. Granted, LaFleur amplifies the voices of those who (at times using sensationalist language) argue that brain-dead individuals are unequivocally still alive. However, in spite of his strident rhetoric, he never fully or explicitly commits to this extreme position.5 His call, instead, is for a recognition of and honesty about the fact that technology has created a disturbingly wide gray zone between life and death. What he rejects are efforts to ignore these ambiguities, and claim that bright lines can be drawn.6 Medical anthropologists, most notably, Margaret Lock (2002) and Lesley Sharp (2006), have also directed attention to this gray zone.7 While both authors are supportive of cadaveric organ transplantation, they are critical of the category of brain death.8 Brain death, in their view, is a matter not of biological fact but of social consensus. It counts as death only insofar as those involved define human life in terms of the presence or permanent absence of a functioning mind—by no means a universal, culture-free criterion.9


Biolust argues that, insofar as there are cultures or communities who might define our humanity more broadly—and might perceive the body of a loved one, even one that will never regain consciousness and must be supported by a ventilator, as still somehow the person they had known and loved—their moral intuitions should not be dismissed or belittled.10

As this book's subtitle suggests, the questions LaFleur raises can be fruitfully extrapolated to other bioethical scenarios. This was clearly his intention, perhaps his main intention. He sought to treat organ transplantation as a test case for ethical reasoning on biotechnological innovations yet unimagined. Biotech genies, after all, do not tend to go back into their bottles once released. Thus it is all the more important to fully consider the implications of a given technology before it is implemented. This book argues convincingly that when we engage in these discussions, our moral imagination can be enriched by opening ourselves to a more diverse range of voices—especially those that would ordinarily escape the notice of the English-speaking world.

This work has other, broader, ramifications as well. It makes unique contributions to a by now substantial literature critiquing our culture's "knee-jerk Cartesianism" and affirming the body as more than a mere appendage to, or possession of, the self or mind.11 It asks deep questions about the nature of consciousness, calling into question our tendency to view it as something that operates in a binary fashion like a light switch, either all the way on or all the way off, absolutely present or absolutely absent. Here, too, if we look carefully, we find areas of gray. It warns against the tendency in many quarters to consider biotechnological innovations only in the abstract—as if all will have access to them equally. LaFleur asks us to consider not just who may benefit from these technologies but also at whose expense those benefits are likely to come. We would do well to reflect not only on the wonders that may be possible for the lucky few, he tells us, but on the damages these technologies might visit on the vulnerable and powerless.

Finally, in a broader critique of the cultural impact of the promotion of biotechnological "miracles," LaFleur seeks to show that a compulsive grasping at dreams of immortality cannot but distort our perception of and hinder our appreciation of the life that we do have. Though he left no conclusion for this book, I tend to think that the quotation from Dōgen that ends its final extant chapter serves as an eloquent resting place and summation of LaFleur's argument in its broadest terms. To paraphrase, it warns us not to let an obsessive hatred or fear of death interfere with the important business of living. To admit that life and death are necessarily intertwined, LaFleur argues, is not to be fatalistic or nihilistic; it is to be realistic.

In time, new technologies will surely render cadaveric organ transplantation obsolete. But the value of this work will, I believe, persist, thanks in part to these broader contributions. On the subject of organ transplantation itself, I am sure Bill never expected this book to convert every reader to his point of view. But if it inspires us to reflect once more—or perhaps for the first time—on the questions he raises here, I am sure he would have considered it a success.

INTRODUCTION

Amy Borovoy

William LaFleur began his career as a scholar of medieval Buddhism and the literary arts in Japan. Yet within that field his work stood out due, in part, to its comparative bent and its tendency to use data from Japan to question Western assumptions or social norms. Especially in his later writings, he sought to speak to contemporary social issues and to ask ethical questions about the nature of the good society. As a cultural anthropologist, I have used LaFleur's work in my courses at Princeton University on Japanese society and culture and medical anthropology and bioethics. LaFleur's arguments are often meant to provoke American readers to reflect on the assumptions which shape American conversations.

For example, LaFleur's Liquid Life: Abortion and Buddhism in Japan (1992) begins with a reflection on the memorialization of aborted fetuses and stillborn children on the grounds of a Buddhist temple, where children play on playgrounds nearby. In some instances, rites (mizuko kuyō) are performed to honor and protect the fetus or stillborn child. Such rites recognize the fetus as lifelike, and, yet, terminating the pregnancy is not considered immoral because the fetus is not yet fully human. This is not a matter of biology. Rather, according to Japanese Buddhist notions of the cycle of life and death, human life itself is understood not simply as biological life but also as social experience: social relationships and the process of coming of age give life meaning and "denseness." This distinction between biological life and social life provides ease to the parents in returning the fetus to the world of the unborn. Although his treatment of mizuko kuyō was criticized for not highlighting the potentially exploitative aspects of these rites, LaFleur's hope was to show the reader that the very terms which defined American debates about abortion, questions of individual or fetal rights and autonomy, were not universal (Hardacre 1997; see also LaFleur 1990).1

LaFleur's Biolust is also meant to provoke. It is a dystopic and at times angry critique of the medical practice of procuring organs from recently deceased donors, now a mainstream medical treatment in many countries.

*I would like to thank Sharon Kaufman and Michael Solomon for their comments and suggestions and Michiru Lowe for research assistance and discussion of the materials.


LaFleur is worried about the criterion of neurological death ("brain death"), which makes this treatment possible, and his intent is to rouse his audience from any sense of passive acceptance or easy normalization of these practices. In the normalization of organ transplantation, LaFleur sees a pioneering moment in the latter part of the twentieth century, in which the notion of neurological death made the practice of deceased donation possible, and thus gave rise to an era of transplantation medicine which allowed healthy organs to be removed from bodies, creating the possibility of treating incurable or terminal ailments, including chronic kidney disease, cystic fibrosis, cirrhosis of the liver, and heart disease (Fox and Swazey 1992; Lock 2002; Starzl 1992). Around the time transplantation medicine was taking off, other pioneering therapies were being developed, including in vitro fertilization and research in gene therapy—all technologies motivated by the desire to "save lives" while simultaneously bringing forth the specter of turning the human body or its parts into commodities to be dispersed. LaFleur was still working on his manuscript at the time of his untimely death in 2010.

In Japan, the declaration of death based on neurological criteria was not legally recognized until 1997, when an Organ Transplant Law was passed; the law was revised in 2010, precipitating a growing number of transplants from deceased donors. Nonetheless, this number remains relatively small compared to other wealthy countries. Pointing to Japan's more conservative approach to neurological death and organ transplantation, LaFleur asks us to reflect on how Americans so readily accepted this radical treatment—a treatment which necessitated a new definition of death itself and which prescribed new ideals of altruism and social obligation to strangers—ideals which were not entirely consistent with traditional Christian or Jewish beliefs. "Society would have to be trained to see this as good," LaFleur writes.

His project in Biolust is to hold up for scrutiny this package we have accepted, a practice represented in the context of the mass media, hospital transplantation programs, and government-contracted procurement organizations as one-dimensionally positive, a means of saving lives and giving hope, and as a form of "progress," "altruism," and "enhancement." He calls on us to question this practice as an "unalloyed good" and to look at difficult trade-offs and compromises—to look beyond the terminology of "life-saving technology" and "modern miracles" and to see what is sometimes hidden or unpleasant, that which sets us on edge—what some have called "the yuck factor," "taboo," or "moral fear" (Chapter 4). What LaFleur worries about is that the "usefulness" of a procedure such as deceased-donor organ transplantation has become enough to prove that it is not only moral but necessary. Indeed, much of the American public sees organ transplantation as a normal, ethical, and urgent form of medical treatment.

And drawing on Japan as a "useful foil," LaFleur hopes to shine a light on the ideological work that needed to be done in order to accept the idea of deceased donation—removing organs from a body declared dead, while the heart still beats, supported by a mechanical ventilator. This is not a journey into "Eastern mysticism," LaFleur insists.


Japan is a highly educated and technologized nation with advanced medicine. The thoughtful debates about these issues among philosophers, ethicists, and scholars of religion are worthy of our attention as a fellow technologized society. In fact, LaFleur proposes to look at Japan not only with "curiosity" about different ways of doing things but as an example of a more clear-headed approach to thinking about the problem.

LaFleur's contributions are many, including an exploration of how the logic of Jeremy Bentham's and John Stuart Mill's utilitarianism worked its way into contemporary ethical and theological discourse in the United States, both Christian and Jewish, paving the way for the acceptance of donating organs to a stranger as an expression of love and altruism. He investigates how those who opposed the dominant logic of utilitarianism were sidelined in the United States. And he offers insight into the field of bioethics in Japan, an area that has not yet embedded itself in hospital practice, and which remains skeptical of utilitarian or consequentialist logic. A key concern in the book is the creeping of utilitarian logic into bioethical and theological reasoning—and the way in which this utilitarianism may threaten the dignity and integrity of human life.

In Chapter 7, "Campaign for Miracles," LaFleur addresses the question of how organ removal came to be regarded as compatible with Christian ideals concerning the sacredness of the body after death, "the positive value attached to the dead" (LaFleur, quoting Ariès), and the centrality of early Christian doctrine of resurrection. LaFleur is fascinated by the ways in which churches and synagogues have lent their support to these emergent technologies and how the redefinition of love as agapé in popular theological writings made the idea of cadaveric donation embraceable.2

LaFleur begins the book by launching a vehement critique of the medical notion of neurological death. Following Hans Jonas, he regards it as an open question whether or not bodies which are continuing to breathe while supported by the mechanical ventilator—despite the medical declaration of irreversible cessation of brain function (cerebral and brain stem)—should be regarded as alive. He pays close attention to the critics of the notion of neurological death: D. Alan Shewmon, a US pediatric neurologist who has systematically studied bodies of those declared dead by cessation of neurological activity; Hans Jonas, a German-born American philosopher—influential among Japanese bioethicists—who was early to voice alarm about deceased organ donation; and a range of Japanese public intellectuals, philosophers, and scientists, who have expressed skepticism about this new category. Jonas believed that, although circulation and heartbeat are supported by the ventilator and would fail without it, "death" could not be declared until the ventilator was removed and the heart stopped. If one accepts this logic, then it is the surgical removal of the organs which "kills" the patient. LaFleur worries, too, that the bio-technological capacity for transplantation has led people to assume that they have the "right" to another's organs, or to wait in anticipation of another's tragic accident. He worries about donor recruitment discourses which imply that an individual's death may be "caused" by an insufficient supply of human donor organs.


He worries that this "lust" for progress in the form of demand for human tissues has corrupted our humanity and prevents us from making peace with our own mortality. And he worries that these steps have paved the way for the commodification of the body and the buying and selling of body parts.

To help us see the utilitarian logic we have accepted, LaFleur engages with a range of scholarship in philosophy of science, religious studies, and bioethics. I am particularly grateful for the research he introduces of Japanese philosophers and scientists whose work is not well known in the United States.3 I am also grateful to Edward Drott for his commitment to editing the manuscript and bringing it forward for students and other scholars to engage with. Professor Susumu Shimazono's concluding essay is also an enormous contribution, updating the reader on philosophical bioethics in Japan as they are developing around such issues as stem cell therapy, gene editing, and the new reproductive technologies.

I should also say that I find LaFleur's strong rhetoric at times overblown. His initial chapters capture the reader's attention with observations which suggest that bodies declared dead due to cessation of brain function may be still alive. This question carries tremendous ethical importance. At the same time, the question of whether whole brain death biologically means the end of life is one that in the end may be less productive than some of LaFleur's other explorations, which examine questions around the dignity of the human and shifting notions of altruism which make organ donation compelling.

It feels that in the years since LaFleur drafted the manuscript, the idea that there is a clear answer to the question of "what is death?" has become less plausible. Death is a biological process, but it is complex, involving multiple bodily processes; many would argue that it is more naturally seen as a process, rather than a moment. Bodies die gradually, some parts stopping first, while others continue on a slow path to eventual cessation. By the mid-twentieth century, the intervention of machine technology, such as the mechanical ventilator, had created further ambiguity, as the flow of oxygen allows the heart to keep beating on its own electrical signal. The "three signs" that marked the state of death before the advent of such technology, pupil dilation, cessation of breathing, and cessation of heartbeat, could now be warded off. On a ventilator, the heart continues to beat, and circulation continues to supply oxygen and nutrients to support cell function. Some physicians, in particular neurologists, have called attention to bodily functions which nonetheless persist in this condition. Meanwhile neurologists continue to make discoveries about the brain's complexity and seek even greater degrees of nuance about levels of neural activity.4 Today it seems less likely than ever that a consensus around a specific moment of complete biological death can be achieved (Kaufman 2000, 2020; Lock 2002; Singer 2018; Solomon 2018; Truog, Pope and Jones 2018; Veatch 2009).

Although it may sound strange to say that such a high-stakes question remains unanswered, the extensive clinical ethical protocols which guide organ transplant attempt to protect the rights and autonomy of the donor and the dignity of the end of life in the absence of a firm conclusion. These protocols attempt to ensure that death is not declared until it is certain that a person will never regain consciousness and both the brain stem and cerebrum have ceased to function.


They attempt to prevent conflict of interest between an organ donor and recipient. And they protect the patient by ensuring that the discussion of donation does not impact end-of-life care.

Nevertheless, LaFleur's questions are pressing and important. It is a striking and, for some, troubling fact that in contrast with "traditional" cardiac death, where the body grows cold and the pupils dilate with cessation of circulation, in the case of an organ donor, the ventilator is never turned off and the heart does not cease to beat until the procurement surgery is complete. The patient is pronounced dead when the determination has been made (after two independent examinations) that upper and lower brain activity have ceased. At that moment, the patient, still warm and with beating heart (supported by the oxygen provided by the ventilator), is wheeled to the operating room for the removal of the organs. It is this fact that spurred LaFleur's alarm, provoking his references to fears of "vivisection" or "neocannibalism."

These larger concerns remain important, though I worry that his rhetoric may distract from what is really at issue: the fine lines and potentially slippery slopes (which medical ethical protocols are continually revised to protect against and prevent) involved in removing organs from recently deceased bodies. It is worth emphasizing that deceased donor transplantation is now a mainstream form of care, and ethical protocols and guidelines in hospitals attempt to protect the donor and provide rigorous end-of-life care. The idea of neurological death has become a medically and legally accepted aspect of health care provision in all societies with advanced biomedicine (though in East Asian societies, notably Japan, Korea, and China, there remains skepticism and these societies perform fewer transplants). In the United States, all fifty states recognize the cessation of whole brain function as human death.5 Nonetheless, there remain many ethical questions surrounding the procedure, from appropriate ways of soliciting consent or advertising transplant programs to potential recipients in ways that are sensitive to donors' survivors, to the ethics of medical interventions on donor bodies which benefit the recipient. Controversy and gray areas remain. But I hope that a few remarks which situate LaFleur's ideas in a broader context will allow readers to see more precisely what is at stake in the practice, a practice which LaFleur rightly suggests implicates nothing less than the sanctity of human life, respect for bodily integrity, and the sanctity of human death as both a rite of passage and a social event.

In what follows I draw on scholarly works in medical anthropology, sociology, and bioethics on the question of neurological death and the ethical predicaments associated with organ transplantation. While I recently have been conducting research on living donor transplant in Japan and teach in the areas of medicine and bioethics, I rely heavily on the work of others. An experience that has also informed my views has been serving as a community member on the bioethics committee of a major urban hospital in central New Jersey for the past year and a half. This has given me insight into the daily ethical questions and predicaments raised on the hospital floor by the practice of deceased donor transplantation, and the protocols put in place to address them. At the end of this chapter, I offer a few thoughts about organ transplantation in Japan today.


Issues Surrounding the Declaration of Death Based on Cessation of Whole Brain Function

The notion of cessation of brain function as "death" was, as anthropologist Margaret Lock put it, a "redefinition" of death. "Brain death," or, in medical terms, declaration of death due to irreversible cessation of whole brain function (I will use here the medical terminology), was the product of consensus, a definition accepted by most physicians as a valid and ethical notion of death, even as they recognized that the idea contained flaws. It did not mean the death of the brain in every aspect.

This new definition of death was first put forward in a 1968 report, "A Definition of Irreversible Coma: Report of the Ad Hoc Committee of Harvard Medical School to Examine the Definition of Brain Death" (issued August 5, 1968). The report stated that its investigatory purpose was to redefine something that had once been considered "irreversible coma" as a "new criterion for death." What had once been regarded as an irreversible cessation of brain function, a state which kept patients in the hospital supported by machines, was now regarded as "death."

A primary motivation for the committee to take this step was to create clarity in pronouncing death at a moment in which these boundaries had become unclear. The reason for this was the growing use of the mechanical ventilator, a technology originally developed in the early part of the twentieth century to support patients undergoing surgery, or to keep patients alive during unstable periods until they could recover from disease (Kaufman 2020: 3). Now, however, it was creating confusion, supporting an ambiguous status, in which patients' lives could be sustained "in a gray zone" potentially for years or even decades, despite massive brain damage and permanent cessation of both brain stem and cerebral function (Kaufman 2020: 4). If the ventilator continued to provide oxygen to the heart and lungs, cardiac electrical activity could persist, and the heart could continue to beat for some time. Once, the cessation of the function of the brain stem would have led to the failure of the lungs, depriving the heart of oxygen and leading to the circulatory death customarily recognized as death—cessation of the heart, coldness, and pulselessness. Now, the ventilator had the capacity to delay the cessation of the heart. Although the cerebral cortex and brain stem had ceased to function in an integrated way, the patient with permanent cessation of brain function appeared flushed and restful, "as if asleep," as Japanese intellectual Kunio Yanagida observed in a moving description of his son, declared dead after the tragedy of suicide, and his own attempt to come to terms with this diagnosis (Yanagida 1999/2017: 53; see also 112–14, 141).

Although brain death was a newly conceived medical determination, it is important to distinguish this death-like state from other nonconscious states, including coma or persistent (or permanent) vegetative state (PVS). A person in a coma may be able to breathe on their own. (Though some patients in coma are placed on ventilators.) Though they lack awareness and wakefulness, their brain stem remains intact. A patient in a persistent vegetative state may have awoken from a coma but has still not regained awareness.


They may sometimes respond to external stimuli, open their eyes or track objects, or evidence sleep–wake cycles. Although the percentage of patients who recover from this state is small and the prognosis worsens as time goes on, depending on the cause of coma, they may recover, and they are not considered dead. In contrast, a person diagnosed with irreversible cessation of brain function shows no responsiveness to any external stimuli, has no brain stem or cranial nerve reflexes (though they may still exhibit local reflexes mediated by the spine), and is incapable of spontaneous breathing.6 Organs are not procured from patients in a persistent vegetative state, and terminating life support for these patients (e.g., stopping nutrition and hydration) requires consent from family members (or others deemed competent to make health care decisions) and the primary health care team. In the case of brain death, in contrast, the body cannot breathe and the heart cannot continue to beat without the support of a ventilator.

The definition of brain death specifically does not designate "loss of personhood" as a criterion as this could also occur in other conditions, including dementia, certain congenital deficits, and other vegetative states (Solomon 2018: 7). Criteria for declaring neurological death include the absence of response to pain and the absence of specific brain stem reflexes—the basic nerve reflexes associated with brain stem activity: gag reflex, cough reflex, pupillary reflex in response to light, corneal reflex (blinking in response to touch), vestibulo-ocular reflex (in which cold or warm water is inserted into the ear, resulting in the movement of the eyes in the opposite direction), and eye motion in response to head turning ("doll's eye" reflex). In addition, an apnea test is performed to determine the inability to resume spontaneous breathing when the ventilator is removed. Brain death is the total absence of all frontal cortex activity and all brain functions including the brainstem (American Academy of Neurology, https://www.aan.com/Guidelines/braininjurtyandbraindeath) and different from severe but incomplete brain damage. The authors of the Harvard Committee report were unequivocal that such a patient has "not the remotest possibility" of recovering consciousness (1968: 87). And there are no known cases of individuals regaining normal functions from this state.

This newly created category of death has made it possible to procure organs from a cadaveric body in which the heart continues to beat (as long as the ventilator machine is providing oxygen). The body is pronounced dead before the ventilator is removed. The Harvard Committee emphasized that pronouncing the patient dead by neurological criteria before removing the ventilator was important for various reasons (among these, legal ones). The first reason it cited was the large number of patients being kept alive on ventilator technology and the burden this placed on families and hospitals. But the second reason was quite explicit: defining this new death would make organ transplantation legal and ethical. The committee stated that "obsolete criteria for the definition of death," i.e., cessation of circulatory function, which could now be delayed with the support of the ventilator, could lead to "controversy in obtaining organs for transplantation" (1968: 85). The declaration of death while the patient remains on the ventilator made possible the procurement of organs which are perfused with blood and oxygen.

8

Biolust

oxygen. Most human tissue dies or is damaged quickly once it loses this blood and oxygen, and the declaration of death due to neurological criteria means that vital organs remain viable, particularly the heart and liver (kidneys removed after the heart has stopped beating are still viable) and can be removed if the patient has expressed the intention to donate. Despite the seemingly rapid public acceptance of the notion of neurological death, its acceptance by the American Medical Association, and the relatively rapid codification of the new notion into law in fifty states beginning in the 1970s, the question of what truly defines death (in a biological sense) and the defense of cessation of whole brain function as a criterion for pronouncing death continue to be revised. Cessation of whole brain function has been contested as human death in a biological sense. After the Harvard Ad Hoc Committee report was published, the US President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research (1981) took up the problem. It issued a report defending the new death on the grounds that cessation of whole brain function would lead to irreversible loss of the capacity for “somatic integration”—the capacity for the organism to function as a whole. The brain was responsible for integrative bodily function, the commission argued, including respiration, circulation, and excretion (1981 President’s Commission; Veatch 2005: 354). Without this higher command center, the organism would shut down. Indeed, it seems that when the Harvard Ad Hoc Committee issued its report, it was assumed that after the cessation of all brain function, the body would indeed suffer “somatic collapse,” including the disintegration of the brain itself (Singer 2018; Veatch 2005: 356). Later attempts to clarify the definition argued that the permanent cessation of brain function could be equated with the “cessation of functioning of the organism as a whole” since the nervous system is central to physiologic functioning (Bernat, Culver, and Gert 1981). However, in subsequent years this was shown not to be the case. The data most frequently pointed to has been collected by Dr. D. Alan Shewmon, a professor of pediatric neurology at UCLA, who, in 1998, published a review of cases diagnosed as dead due to irreversible cessation of whole brain function. He found evidence that the bodies of these “deceased” patients continued to function in many ways, with many cases surviving for weeks, months, and, in at least one case, years. The patients’ somatic functions persisted so long as the ventilator was attached and the patients were cared for in other ways (Shewmon 1998: 1538–1542; see Singer’s discussion 2018: 156). Shewmon’s data have shown that some somatic functions persist after the cessation of brain function, though these findings are contested and remain to be replicated. Other evidence suggests the presence of electroencephalographic activity that is not artefactual, and the capacity to heal wounds (Shewmon 2018a: S23; Shewmon 1998; Veatch 2005: 354). Shewmon’s data included documentation of a patient, “TK,” who was diagnosed as neurologically dead at age four but continued to survive until the age of eighteen and a half. These cases showed that, with the aid of a ventilator and nursing care, the organism “continues to function” after the criteria for irreversible cessation of neurological function have been met (Singer 2018: 157).7

A tension that persists in defining neurological death and allowing the removal of vital organs from the body is that the body defies the desire to technologically pinpoint a “moment” of death.8 An interesting critique of the very attempt to pinpoint a universally agreed upon moment of death has been made by moral philosopher Peter Singer. Singer’s critique focuses on the ambiguity and contestation around biological notions of death. Highlighting the lack of consensus, Singer argues that behind the shifting definitions is a functional, utilitarian motivation to deliver viable organs to patients in need. Singer points out that the original Harvard Ad Hoc Committee Report made no claim to have discovered a scientifically “better” or more valid set of criteria for determining death. Rather, it emphasized the burden on families and hospitals (and patients themselves) of keeping patients on ventilators who would never recover. It furthermore explicitly pointed to the desire to further the cause of transplantation. Singer argues that the new definition of death cannot be supported by consistent, universal, or timeless biological criteria. He goes on to state that the definition of “brain death” as death has, “quite literally, made it possible for Christians to get away with what would, under the earlier traditional definition of death, have been murder—and without abandoning their support for the sanctity of all human life” (2018: 163, see also 158–9). Singer’s point here is not to challenge the acceptability of procuring organs from bodies declared dead based on neurological criteria. In a sense, it is a more radical argument: that the need to redefine death in order to legitimize the removal of organs from bodies which will never recover is itself problematic; it has raised questions that cannot be resolved for those who insist that ethical organ donation hinges on an unambiguous definition of death. Indeed it has made these thinkers vulnerable to accusations of hypocrisy. A similar point has been made by historian of science Yoshihiko Komatsu, whose history and critique of the brain death concept influenced LaFleur. Komatsu argues that death is no longer “objective.” Owing to the possibilities of technology, the rules defining death will continue to change infinitely (iryō gijutsu to no sōkan de, shi no kitei wa enen to kawariuru no de aru) (Komatsu 1996/2007: 35). Singer’s critique makes an interesting contrast with LaFleur’s, however. Both recognize the uncomfortable ambiguities around the definition of neurological death. But Singer is nonetheless comfortable with the practice of deceased donation. He advocates that irreversible loss of consciousness should be our criterion for allowing the removal of organs rather than searching for a somewhat arbitrary and ultimately elusive new definition of the moment of death itself (2018: 163).9 Yet despite these ongoing conversations, organ removal and transfer continues on the hospital floor in growing numbers. The consensus around the definition of neurological death is held together by a combination of factors. One is strict medical protocols which attempt to protect the integrity of end-of-life care and prevent conflicts of interests from arising: for example, the principle of “de-coupling,” whereby surgeons or physicians involved in care for the recipient cannot participate
in care for the donor or the pronouncement of death (Shemie et al. 2017); rules stating that the family of the deceased cannot be approached regarding consent to donate until the patient has been declared dead; and the “dead donor rule,” which means that patients must be pronounced dead—either by neurological or cardiac criteria—before organs are removed.10 In the determination of brain death, the dead donor rule signifies that the organ removal itself does not cause the donor’s death. Death is allowed to occur but is not caused by discontinuing the ventilator. The relatively conservative definition of neurological death is another example. In the United States, death can only be pronounced when the whole brain has ceased to function. (Though in other countries, such as Canada and the UK, the definition is more lenient.) Those who support the practice of deceased donation rely on high levels of transparency and operate under carefully prescribed guidelines (Dutchen 2017; Truog and Miller 2008; Truog and Robinson 2003; see also Sade 2011). Even though biological “death” is not easy to pinpoint, it remains necessary to have socially agreed-on rules for when death is pronounced. Many health care practitioners recognize this explicitly. Michael Solomon, a physician and former chair of the Robert Wood Johnson Barnabus Health Committee on Bioethics, has argued that in our nomenclature describing “death due to irreversible cessation of whole brain function,” we should include the term “declaration” of death, acknowledging that death is a moment we define and declare, rather than an uncontested biological or physiological fact. He writes, “Let us emphasize that we are referring not to Death, the biologic phenomenon, but to declaration of death, the necessary legal, social, religious, and cultural determination” (2018: 8).11 While the Harvard committee’s proposed criteria have been modified and vary subtly in different states and countries, death by neurological criteria has remained an accepted basis for medical practice for several decades.12 Simply put, neurological death is defined by the loss of cerebrum and brain stem function—a condition from which no one has been proven to recover; it is a permanent nonconscious, nonresponsive state in which all continuing biological functions remain dependent on the ventilator. I should add, too, that there are multiple humanitarian arguments for terminating life support for such bodies. Even independent of the question of organ donation, the standard of care in American hospitals now is to withdraw life support from a patient declared dead from irreversible loss of all brain function— whether or not they will become organ donors.13 A body maintained in this state suffers muscle atrophy, bed sores, and hardening and shortening of the muscles, joints, and tendons (contractures). Such beings have an increased likelihood of suffering from multiple complications, including organ failure, pneumonia, and sepsis, and they may require surgery or other invasive medical interventions. It is widely accepted that providing invasive, nonbeneficial medical care to these patients can cause moral distress to family members and to health care providers (Solomon 2018).14 In the past fifty years, removing physiologic or life support from patients with severe neurological dysfunction (even in cases where they are not declared dead due to cessation of brain function) has become more common (Truog, Pope, and Jones 2018: 335–6).

LaFleur's call to arms in his early chapters ("Sweating Corpses" and "Fear as Discovery's Instrument") vividly reminds us of the remaining ambiguities and gray areas concerning neurological death, ambiguities which are not likely to be resolved soon. However, I find his invocation of terms such as "vivisection" (citing Jonas, who was writing in 1969, one year after the publication of the Harvard Ad Hoc Committee report) and "neo-cannibalism" (citing Tsuneishi) sensational and overdrawn. His tone here is strident and polemical. The analysis is not a balanced, empirical attempt to present data from all sides, and, it seems, it was not intended to be one. Instead, he hopes to issue a wake-up call to a public who may not be aware of these fine lines. Though medical anthropologists, bioethicists, and physicians who have written about the topic of neurological death tend to treat it delicately, they too caution their audiences about its potential threat to humanist values and about the slippery slope that leads from removing organs from bodies with beating hearts to degrading or commodifying the body, or to depriving the living of committed end-of-life care. LaFleur's concern for the sanctity of human life, the sanctity of the dying process, and the desire to avoid the commodification of the body speak to broader concerns in the fields of bioethics, religious studies, and medical anthropology—and these concerns remain relevant despite the willingness on the part of many to accept the consensus that the irreversible cessation of whole brain function means that the donor is dead.

Medical Anthropological Perspectives on Efforts to Ensure the Dignity of the Dying and Resist the Commodification of the Body

I'd like to summarize here a few points which have been made by medical anthropologists who share some of LaFleur's concerns about the fine lines between ethical and potentially harmful medical practices. How do physicians and family members refer to the deceased organ donor? The United Network for Organ Sharing (UNOS) is very clear, referring on its website to the "body" after death: "After brain death, the donor's body is supported by artificial means, such as a ventilator." Yet journalists, laypeople, and even some health care providers continually refer to the ventilator as "life" support. One physician, Savitri Fedson, a transplant cardiologist and professor at the Center for Medical Ethics and Health Policy at Baylor College of Medicine, recommends that such support be referred to as "physiologic support" rather than "life support" in order to avoid confusion for family members and others about the status of their loved one.

The body of the deceased donor is in some ways in a liminal state. In the case of bodies declared dead by neurological criteria, the body is also a container of vital organs which are now being sustained for the sake of another individual. Such bodies might require medical interventions to maintain blood pressure, prevent clotting, or control sepsis in order to sustain the viability of the organs (Sharp 2001: 117).15 These therapeutic interventions, ordinarily treatments
administered to return a patient to health, are performed with the interests of the organ recipient in mind. In keeping with LaFleur’s observations, the language of utilitarianism and its logic is central to promoting organ donation to the American public and is everywhere present on the websites of hospital transplantation programs and nonprofit organizations which coordinate donation under contract with the US government. The slogans “give the gift of life” or “donate life” send the message that one life can be used to save another. (A common metric seen on these sites is “One donor can save eight lives”—and can help many more through donation of corneas, bone marrow, and other tissues (organdonor.gov).16) An animated video on the website of the Organ Procurement and Transplantation Network (OPTN) states that becoming a donor “can turn a time of loss into a time of hope” (https:// www.organdonor.gov/).17 Organ procurement organizations (organizations responsible for recovering organs from a specified area through coordination with the donor’s family) often use the image of a tree to suggest that donated organs create new life. Some images show leafy trees with trunks entwined around one another—or the roots of a tree in the shape of a heart. Lesley Sharp has shown that such images used by local organ procurement organizations can make the donor’s surviving kin uncomfortable, because they eclipse the death and sacrifice of the donor, suggesting a symbiotic relationship of death giving birth to life. Sharp reports that some kin express abhorrence for the imagery of trees and the seeming “recycling” of life, which omits the loss or tragedy the family experienced (Sharp 2001: 120–22).18 One can firmly believe in the good that can come from transplantation and admire the impulse to donate one’s organs after death, while still feeling uncomfortable about the way this sacrifice is often represented too casually or too one-dimensionally as relentlessly positive. Closely tied to the issue of utilitarianism is the question of the body as a “social” entity and not simply a biological one. LaFleur himself was deeply concerned with these questions. In Liquid Life, he noted, “Throughout human history, religion and philosophy—for the most part at least—have strongly resisted the assumption that a given ‘life’ is neatly coextensive with that complex of chemicals we call the body” (1992: 30). LaFleur argued that according to Japanese Buddhist thought, a human being crystalizes or “densifies” through its participation in society, fully becoming a “child” once it has participated in coming-of-age rituals (including the shichi-go-san or seven-five-three festivals celebrated on “Children’s Day”) (1992: 33, 35). Similarly, the elderly depart gradually from the world in a process LaFleur describes as “ontological thinning” (1992: 33). The process begins with the sixtieth birthday (kanreki). After death, Buddhist memorial services are held on the funeral day, the seventh day, and the forty-ninth day. It is not until this forty-ninth day that the spirit is thought to leave the earthly world. (According to LaFleur, the sequence of rites after this is the 100th day, 1st year, 3rd year, 7th year, and 13th year, 17th year, 23rd year, and 33rd year, suggesting that rituals grow further and further apart as the deceased becomes more distant and more “rarefied” (1992: 32).)

While clear criteria for declaring death are necessary for medical-ethical practices, and for comfort among the survivors, the memorialization of the deceased often reflects a more gradual passing of the body and soul. Margaret Lock’s remarkable book Twice Dead: Organ Transplants and the Reinvention of Death (2002) also focuses on Japan as a case study, taking the cultural and medical hesitation around “brain death” in Japan as a space for exploring the complexities associated with the practice of deceased donation. Lock includes an analysis of social and cultural beliefs surrounding death, mortuary practices, and cultural ideas about the body, soul, and transcendence. In many societies, death rites are bound by rituals of separation followed by a liminal period, an ambiguous state carefully orchestrated by society (Lock 2002: 193–4). Anthropological studies of the ancestors in Japan (Benedict 1946; Plath 1964; Smith 1974) have discussed the presence of the ancestors in this-worldly affairs of the family. The ancestors actively memorialized in Japanese homes are often those remembered as living individuals (Lock 218, 221; Plath 1964; Smith 1974). Lock’s research showed that many continue to participate in Buddhist obon memorialization ceremonies and visit the graves of deceased relatives, and more than half reported that family obligations require the deceased family members be treated in accordance with Buddhist ritual (224). In this family-centered world, the legacy of the individual life is carried forward by the family. The gradual departure of the soul from this world is difficult to reconcile with the declaration of death while the heart still beats and the procurement surgery.19 The individual’s embeddedness in the family line may make it difficult to make autonomous decisions concerning the fate of one’s body—or to give to strangers. Yoshihiko Komatsu, a historian of science and bioethics whose work greatly influenced LaFleur, wrote that “death reverberates” (shi wa kyōmei suru) (1996/2007). Death reverberates because it is more than an event which happens to an individual or a “physiological state.” Komatsu writes that death is not “hermetically contained within the individual [. . . .] [E]ven though death certainly affects the dying individual in a special way, each death also radiates out, or reverberates, beyond that individual, penetrating the lives and feelings of the bereaved, triggering a range of emotion from joy to sorrow [. . . .]” (1996/2007: 182). Lock reminds us that death and its resolution are symbolic for the community, and memorialization of the dead promotes continuities between the life of the individual and the collective (2002: 194). Lock also reflects on the care for the body after death and the notion of its gradual departure. She writes about the rites performed by medical students for the cadavers dissected in anatomy class. In the hospital, after death the patient’s body is taken to a reian shitsu, an undecorated “room for the repose of recently departed souls” (2002: 213). Members of the patient’s medical care team and the family participate in a ceremony. After the ceremony, the body is removed to a funeral home or to the family’s home for a wake (o tsuya) where there is usually an open casket. After cremation, the urn containing the ashes may be kept at home before being interred.

Lock’s careful work here on the anthropology of the body and kinship helps us imagine how pinpointing a “moment” of death and removing organs from the body (in which the heart still beats) might offend deeply held assumptions in Japan. The work helps us see more of the social and historical context for some of the concerns raised by LaFleur.

Deceased and Living Donor Transplantation in Present-Day Japan

LaFleur regrettably passed away in 2010, as he was continuing to develop the second draft of his manuscript. Although scientific knowledge develops quickly and medical practices change, the situation in Japan has changed only slowly, and LaFleur's questions remain relevant. Public ambivalence and confusion concerning death by neurological criteria remain in Japan, as does ambivalence about organ transplantation itself. Transplantation is still an unusual specialization within surgery and medicine in Japan. And conservative approaches to sustaining life on mechanical ventilators or with feeding tubes continue to dominate mainstream medical practice (Long 2005), though these practices are coming under growing scrutiny as Japanese society rapidly ages (see, for example, Ishitobi 2010).

Indeed, attitudes and practices were changing gradually at the time LaFleur was writing his manuscript. The Organ Transplant Law was passed in Japan in 1997, well after the Japan Medical Association stated that it accepted death by neurological criteria as human death in 1988, and well after the report was submitted to the government in 1992 by the Ad Hoc Committee on Brain Death and Organ Transplantation (Rinji nōshi oyobi zōki ishoku chōsakai). The committee acted cautiously, and there was a minority dissenting opinion included in the report (which Susumu Shimazono discusses in his concluding chapter); but this committee, composed mainly of nonphysicians, put forward a strong consensus for abiding by international norms and guidelines concerning declaration of death by neurological criteria and organ transplant.20 The committee's stated purpose was to determine whether "brain death" (nōshi) constituted human death (hito no shi). The committee attempted to separate the question of "brain death" from organ transplantation, asserting that it was important to consider the two issues separately (Tachibana 1992: 244–5).21 The committee met for two years (from March 1990 to the issuance of the report in February of 1992), convening thirty-three times, undertaking three hospital visits and three international visits, and conducting two popular opinion surveys (Tachibana 1992: 244). The report is written in a way which seeks consensus, acknowledging that death is both a biological occurrence and something understood through philosophical, religious, and social prisms (Tachibana 1992: 246). It reiterates the findings of the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research in 1981, defines death as the loss of functional integration and organic unity (kinōteki sōgōtai to shite no kotai), and argues that this irreversible loss
of unity and consciousness is death, even if the ventilator remains in place and the body does not show the traditional “three signs” of death (shi no san chōkō): cessation of heart beat, cessation of breathing, and pupil dilation. The report also acknowledges that cellular growth and metabolic function might continue in such a body (Tachibana 1992: 247–8). It asserts that “brain death” as human death is in accordance with international norms and that opinion polls increasingly showed that Japanese people agreed with this assessment (Tachibana 1992: 252). The report also proposes putting a structure in place to enable and guide organ transfer—including donor cards—banning the sale of organs, and highlights the need for specific ethical guidelines (Tachibana 1992: 259, 264). Section IV of the report, which contains the dissent, objects to neurological death as an “invisible death” and argues that all cadavers should be removed from the ventilator and should be allowed to die the same way (Tachibana 1992: 273). The dissenters included Tadahiro Mitsuishi, a lawyer; Takeshi Umehara, a philosopher; Hideo Hara, a lawyer for the Tendai-shū Buddhist sect; and Shōhei Yonemoto, a scientist and researcher at Mitsubishi Kasei. The dissenters follow Hans Jonas and criticize this new death as motivated by the desire to transplant organs. They are also critical of the majority members’ concession to allow patients and doctors to not use the language of “brain death” if they choose (if the patient is not an organ donor), creating the possibility of coexisting definitions of death, one for organ donors and another for those who are not donors. They criticize the ambiguous language used in the report with respect to the brain-dead body, referred to as the “living body of the brain dead person” (nōshi shita mono no shintai). They criticize the influence of Western-centrism (seiyō shugi) and plead for Japan to retain its true cultural traditions, against the onslaught of rationalism (risei shugi) and scientism (Tachibana 1992: 276; see also Mitsuishi 2014: 263–4; Tanaka 2017). Yet by the time the law passed in 1997 there was strong support for legal change from some quarters, including some physicians and patients. As organ transplant had become a more mainstream practice in wealthy, industrial, and postindustrial nations, and as surgical outcomes were improved with the use of the immunosuppressant cyclosporine, Japanese citizens seeking organs were traveling to other countries to receive transplants, including the United States, Pakistan, Philippines, India, Germany, and China. Many people today recount seeing leaflets requesting donations to support a family member’s transplant surgery abroad, often a heart transplant for a child (Ambagtsheer et al. 2016). Increasing population age was also a factor, as the incidence of chronic kidney disease rose and dialysis patients increased. Neighboring Asian nations, including Singapore, Taiwan, Thailand, and South Korea, were also developing transplant programs in the 1980s and 1990s. Atsushi Sugitani, a transplant surgeon who participates in the Japan Association of Cerebral Resuscitation and Brain Death (Nihon nōshi/ nōsosei gakkai), reflected later, “Can it really be considered a Japanese virtue to [reject brain death] while traveling to other countries to purchase the organs of foreigners?” (Sugitani 2020: 75).

A member of the committee who spoke at the proceedings of the National Diet cited the development of transplant programs in neighboring Asian nations and made a strongly worded plea:

In our country, on the other hand, where the question of whether brain death is human death and the question of whether cadaveric transplant is acceptable are still matters of debate, the many patients who cannot be treated otherwise find themselves waiting in the shadow of death, terrified, waiting for the day when transplant becomes a possibility, in a state of abject sorrow, each day an eternity (ichinichi senban no omoi de) [. . .] as they await death.22

The government revised the 1997 law in 2010, making it possible for surviving families to give permission for organ donation in the absence of a legal declaration of intention to donate. (It also made it possible for children fifteen years old and younger to become deceased donors.) The Declaration of Istanbul on Organ Trafficking and Transplant Tourism in 2008 also put pressure on Japan to move ahead with transplantation programs and regulations (Yokota 2010). It called for governments to pursue “self-sufficiency” in providing organs for transplantation to citizens, citing the inequities and human rights violations associated with transplant tourism.23 Nonetheless, hesitancy around transplantation has continued even after the passage of the 1997 law. After the revision of the law in 2010 a larger increase in the number of deceased donors was seen, but this was in proportion to a very small base. The number of deceased donors in 2019, the most recent year for which data is available, was ninety-seven, up from sixty-six in 2018.24 However, the number remains relatively small, even in comparison with Asian nations and certainly in contrast with the United States, where the number of deceased donors was 11,870 in 2019.25 In Japan, where all or almost all usable organs are removed from bodies of deceased donors, in 2019, 84 hearts were transplanted, 176 kidneys, 88 livers, and 79 lungs, relatively small numbers indeed. Why is that? Clearly, many of LaFleur’s and Lock’s insights remain relevant. The acceptance of declaration of death by neurological criteria was neither “natural” nor “inevitable” in all contexts; it was a leap precipitated by technological innovation which required social, cultural, and moral shifts to accommodate its demands. The reluctance of Japanese ethicists, some physicians, and others to embrace it makes us realize the work that American health care providers, clinical ethicists, social workers, patients, and families did in the United States to reshape understandings not only of death itself but also of the nature of social obligation to others, strangers or kin, of “the gift,” and of our comfort level with handling these fine lines in the context of hospital care. The work of anthropologists Lock, Kaufman, and Sharp is particularly helpful in visualizing this work, whether it concerns the memory work done to honor donors or the private ways in which living donors come to accept the idea of donating their healthy tissue for a loved one (Kaufman, Russ, and Shim 2006).

This is one of the major contributions of this book, too, I believe: it shines a light on the conceptual work done in the context of popular religion, moral philosophy, and popular culture to make the gift acceptable. With respect to reasons why Japanese philosophers, bioethicists, and health care providers have resisted the designation of brain death as human death, LaFleur has focused on the fear of commodification of the body and squeamishness around the practice of organ transplant itself. Yet there are other reasons, too, why neurological death is rarely diagnosed in Japanese hospitals. One reason is the immersion of the self in family, an ideal that strongly shapes clinical decision-making in Japan. This includes the tendency to disclose diagnoses of terminal conditions to family members rather than to patients, and the need for family members' consent to proceed with medical interventions or the withdrawal of life support (Long 2005: 89–93, 99–102, 125–33). Neither the removal of organs nor the declaration of brain death can take place without the family's consent.

The model of family decision-making has also shaped the ethics of deceased donor transplantation in Japan. The 1997 law stipulates that an individual's expressed desire to become an organ donor (as indicated in writing on a donor card or insurance card) is only valid if surviving family members do not object (Kimura 1998: 57). In some instances, registering to donate (such as at the Japan Kidney Bank) requires a family member's consent at the time of registration (Kimura 1998: 56). The imperative to care for a family member until death also makes terminating life support difficult (Lock 2002; Long 2005; Terunuma and Mathis 2021). The ethic of filial piety, the mandate to care for a family member as long as necessary, as well as the affordability of Japanese hospital care (through universal insurance coverage) may encourage families to simply wait for cardiovascular death (Terunuma and Mathis 2021: 3).

More broadly, attitudes toward end-of-life care in Japan have generally favored supportive care. Withdrawing life-sustaining treatments can be punishable by law, and such incidents can generate extensive media coverage, making physicians more likely to be conservative in their decision-making, even in instances when a patient has been unconscious for a long time, is connected to a ventilator and feeding tube, and is unlikely to return to responsiveness (Aita et al. 2007). A study of twenty-seven physicians in the Tokyo area showed that most expressed skepticism about withholding artificial nutrition and hydration for severely stroke-impaired patients with poor prognoses. They felt less compelled to provide ventilation, but testified that withdrawing ventilation once provided could be considered a criminal act (Aita et al. 2007: 267). Some physicians spoke of the possibility of withdrawing physiological support gradually, to create a "natural ending" (268). (Lock also observed this pattern in her research.) Physicians spoke of the "strong emotional relationships" that some comatose persons had with the living (despite being unresponsive) and with those who visited (268). A general lack of popular awareness and education around advance care planning also makes it more likely for a patient to await cardiac death on physiologic support (Report on the Awareness Survey on the End-of-Life Care, March 2018).

These beliefs and practices reveal a conservative approach to ending life and a strong sense of connection between the dying and the living. Cultural and religious beliefs create a specific understanding of death as gradual and as an act connected to living family members and society—factors which militate against the acceptance of defining death as a "moment" and not a process (waiting for the heart to stop beating and the body to become cold) and against the acceptance of removing organs from a body which has not yet become cold. Terunuma and Mathis argue that while in the United States organ donation is characterized as a "moral issue" of giving or altruism, in Japan it is understood as a "family issue," unable to be divorced from family authority or obligations. A major difference between Japan and the United States is that in Japan the diagnosis of death by neurological criteria is used only in instances of intended organ donation. It has never been normalized. This has perhaps created a self-reinforcing cycle, where the declaration of death by neurological criteria remains unusual, and therefore deceased donation also remains unusual.

***

To understand the landscape of organ donation and the resistance to declaring neurological death in Japan, I believe it is necessary to appreciate not only the moral arguments made by philosophers and ethicists but also the broader political economy of health care and social welfare in Japan—the material conditions which affect treatment options and corresponding moral deliberations. In 2016, I received a National Endowment for the Humanities fellowship to begin the study of living donor transplantation in Japan. In the absence of deceased donors, living donor transplant programs have grown, as an aging population experiences increased incidence of kidney and liver disease. As I interviewed kidney donors, recipients, transplant surgeons, nephrologists, and endocrinologists and nutritionists treating diabetic patients (diabetes is the largest cause of kidney failure in Japan), I learned about the intensive, systematic, and thorough treatment that diabetic patients receive through the health care system in Japan, which, of course, provides care to all citizens. Diabetic patients receive inpatient education, counseling from trained rehabilitation nurses (ryōyō shidō) in nutrition, counseling from public health nurses at the workplace (if employed by a large company), and the physician's own stern, paternalistic counseling (see Armstrong-Hough's excellent discussion about medical paternalism, 2018; Borovoy 2015, 2017). The number of kidney dialysis patients has grown dramatically over the years, placing a growing burden on the nation's health care budget. And yet dialysis is conducted with the highest-quality equipment and medicalized in ways that result in long five- and ten-year survival rates—the longest in the world. Although LaFleur and Lock highlight cultural and moral concerns with accepting the idea of neurological death, one wonders, too, whether the virtues of social medicine and the provision of skilled care which everyone can access have dampened the demand for deceased donor transplants, or at least made it feel less urgent to take that leap. In contrast, perhaps Americans have reached for more technologically advanced, pioneering, and often lucrative medicine, while providing less adequate basic care—a point alluded to by LaFleur in his second chapter.

At the same time, some have argued that with proper infrastructure and incentives, transplantation will take root in Japan, too. The expenses and infrastructure required of a hospital which performs procurement surgery on donors are considerable. Other nations, notably South Korea, have seen growing transplant programs when the government offered incentives to such hospitals, such as allocating one of two kidneys to the hospital which received the donor or providing financial incentives to start transplant programs (Murakami and Nakagawa 2018: 34–5).

We should remember that LaFleur focused his research on the 1980s and 1990s, a time of concern and conservatism around the practice of deceased-donor transplant. But until the late 1960s, Japanese surgeons were keeping pace with the rest of the world and were capable of performing pioneering techniques in transplantation. The first successful kidney transplantation took place in 1954 in Boston. Japan's first temporary kidney transplant (from a living donor) occurred at Niigata University in 1956. University of Tokyo surgeons transplanted the kidney of a deceased donor in 1964, although the recipient died twelve days later—just after the first kidneys, lungs, and livers were recovered from deceased donors elsewhere. World-famous surgeon Kōmei Nakayama performed the first liver transplant in Japan in 1964. (The recipient died after twelve days.) He was featured in Life magazine.26 Jurō Wada performed the first heart transplantation in Japan at the Sapporo Medical University in August of 1968, not even a year after the first heart transplantation in the world was performed (in 1967 in Cape Town, by Christiaan Barnard, though the patient died eighteen days later).27 Wada's surgery was regarded among surgeons around the world as another step forward scientifically, since the recipient survived for eighty-three days (Ogawa 2014: 246). It was the scandal around Wada's attempt (discussed in the Appendix) that led the practice to be temporarily stopped in Japan. (Lock mentions that some bioethicists suggest that were it not for Dr. Wada's missteps, Japan might have developed a healthy transplant program.) As Japan's population ages and clinical ethical protocols around transplant are crystallized and mainstreamed, it is easy to imagine that, with continued modifications of the health care system and education of the populace, deceased donor transplant will gradually increase in Japan.

There are many who support a more robust organ transplant program in Japan, including the Japanese Association for Emergency Medicine, which, in its position paper, argues that the declaration of brain death is not a social or ethical issue but rather a medical matter. Economic and financial disincentives for transplantation may be a further factor (Yokota 2010: 567). A public opinion survey of 1,022 people showed that of three choices—active support for transplantation (nozomashii koto de ari, ōi ni susumeru beki koto da to omou), hesitant but somewhat supportive (amari nattoku dekinai ga, mā yoi to omou), and opposed (nozomashii koto dewa nai)—the largest category in Japan was somewhat supportive. Nonetheless, the number actively opposed was small. According to Minemura, Yamaoka, and Yoshino (2010: 308), most people in Japan know little about the facts of brain death and organ transplant and there is a widely recognized need to provide better education.

***

Transplantation is a high-cost and radically invasive surgical solution to the problem of failing bodies. But this problem can be addressed in other ways—ways which are more consonant with contemporary Japanese values and mainstream medical practice. As an example, we can point again to the growth in living donation programs in Japan. In the case of kidney transplantation, two-thirds of recipients are middle-aged males, many of whom suffer from kidney damage as the result of type II diabetes. At some hospitals, though still a minority, doctors can offer living donation as one possible avenue for treating chronic kidney disease. Living donor kidney transplants account for 88 percent of kidney transplants in Japan. (In the United States the number in 2018 was 40 percent.) Such a trend is in some ways more compatible with the family welfare model that Japan is familiar with. That is because Japan Society for Transplantation guidelines state that donors must be kin members; living kidneys cannot be given or received among strangers. Receiving a kidney from a family member may feel more comfortable to many Japanese people than giving or receiving this "gift" to or from a stranger. This exchange raises its own interesting ethical dilemmas, even as it wards off the demand for organs from cadavers. The largest fraction of donations occurs from parent to child; however, the fastest-growing category of donors is spouses. Almost two-thirds of living donor kidney recipients are male. Almost two-thirds of living donors are women. Such patterns have been noted elsewhere. Even in the United States kidney donors are disproportionately women. Megan Crowley-Matoka has analyzed the language of this sacrifice and gift in the Catholic context of contemporary Mexico (Crowley-Matoka 2016: 41–55).28 When loved ones face death or a relatively young person faces a lifetime on dialysis without a donated kidney, and when the average waiting time for a deceased donor kidney is over twelve years, what pressures might this place on living kin, particularly female kin? What are the fine lines here between voluntary donation and coercive pressure? (See Kaufman, Russ, and Shim's study of middle-aged "children" donating to elderly parents, 2006.) While the procedure of living kidney donation has become easier due to laparoscopy, those who pioneered transplant surgery explicitly imagined a future in which organs would be procured from deceased donors—not one in which healthy humans donated living tissue to one another. Japan is paying a high price for its conservative moral scrupulousness around the question of life and death.

This heavy reliance on living donor transplant was recently showcased as Japan became the site of the world's first living donor lung transplant for the treatment of a Covid-19 patient. A woman received lung segments from her son and husband after organ failure due to complications from Covid-19. The surgery at Kyoto University Hospital took eleven hours to perform, and as of May 2021 she was expected to remain in the hospital for two months (Asahi shinbun, May 29, 2021). A bioethicist interviewed in the article, Jirō Nudeshima, who has written on
matters of neurological death, issued a cautionary warning that living donor lung transplants should be regarded as a last recourse and have not been performed on Covid patients elsewhere in the international community. William LaFleur’s Biolust powerfully sounds an alarm about pioneering technologies that raise profound questions about human dignity and the sanctity of life. LaFleur’s work provokes us to be mindful of how society accommodates these technologies and how, in doing so, society itself changes. He cautions against the complacent acceptance of these technologies in the interest of convenience, simple life extension, or profit. He also shows us that the reception of these new technologies will reflect the cultural, historical, and religious particularities of different societies.

Chapter 1 SURGICAL MASKS

*Chapter 1 of the first draft (2002). Presented in full, except for one section "Societies and Their Masks," summarized in the Appendix.

Beyond Curiosity

This book had its origins in curiosity. As it moved along, however, that curiosity turned into something else—namely, a sense of concern that increasingly deepened. Initially I was attracted to how looking closely at Japanese doubts about the morality of transplanting organs out of brain-dead persons into needy recipients—a matter debated there with an intensity unknown in America—might throw light on fascinating aspects of cultural difference between that society and our own. I imagined myself being satisfied if this book were to pique curiosity in its readers, getting them to think that, due to cultural difference, there is more than one way to view questions about the morality of doing what are called "cadaveric" transplants.

When ten years ago I first began to talk with Japanese people concerning their negative feelings about transplants from presumed cadavers, I myself was clearly of the opinion that such procedures were an obviously "good" thing within advanced societies such as our own. I assumed too that the qualms of the Japanese sprang solely from the persistence of some archaic religious sentiments, more likely conditioned by fear than reason. But, since I am a student of Japanese society, these interested me. Inquiry into them might, I assumed, become another piece in my own mental portrait of what makes that society still rather different from our own. I also assumed that what interested me would soon become something of a fossil, even if one worth preserving in the form of a book of mine. That is, it was my sense that "enlightened" societies would, even if at differing rates, come eventually to embrace the cadaveric transplant. Japanese resistance, especially that articulated by representatives of its religions, would eventually collapse.

When at that time I looked through the materials provided in America by organizations that promote the donation of organs—the United Network for Organ Sharing (UNOS) for instance—these indicated that virtually all the world's religions had embraced the practice of extracting reusable organs from brain-dead bodies and transplanting them into persons who could live through their receipt. That is, they seemed uniformly to see this practice as patently good, and a way in which modern technology can implement the human desire to be altruistic. Japanese Shinto, however, was singled out—at one point with the religion of the "gypsies" as its sole accomplice on the website—as being a holdout, a religion refusing to go along with this worldwide approval.

How long, I wondered, could . . . or should this resistance in Japan go on? Could Shinto—or, more correctly, the variety of voices expressing resistance in Japan—afford being isolated as resistant to the trend seen as an aspect of enlightened ethics?1 And would these voices not simply be overwhelmed in time by pressures within Japan to retain the national image of being among the world's leaders in technology, including research in medicine and the production of high-tech therapies and equipment?

Along the way to this book, what had been mere curiosity became transformed into something quite different. Large matters, I was forced to see, were at stake. And, in addition, I realized that questions about organ transplantation could not be limited to the ethics of this procedure alone. On the contrary, transplantation had to be recognized as the window through which to observe and evaluate an entire trajectory of newly emerged and ever emerging medical technologies.

One turning point came when, having begun to burrow my way into the vast literature on these things in Japanese, I saw that many of the writers expressing resistance to these technologies were themselves physicians. Moreover, more often than not, they were medics whose credentials included some years doing graduate research in American medical institutions and a deep familiarity with both the medical and bioethical literature on these topics. These persons were not writing out of ignorance. And they were not so much trying to carve out for their fellow Japanese a kind of cultural niche within which they could protect cultural particularity. On the contrary, they were raising profound questions about the motives, goals, and moral worth of an entire panoply of medical practices now being promoted, especially from American sources, as exactly what is needed by all humans. For reasons that become clear, I hope, in the chapters that follow, I became convinced that many of these Japanese writers—whether medics, philosophers, religionists, or ordinary citizens—had collectively become sensitive to something that might be extremely risky, even downright dangerous, in the trajectory taken by much of our forefront biomedical research.

The more time and energy I spent looking into this matter, the more I saw that the real topic of importance here was something far broader than that of the ethics of transplantation. That with which the Japanese were apparently struggling as a modernized and informed society is, ultimately, not an issue peculiar to themselves and their own culture. At bottom it concerns myself as an American and my own fellow Americans as much as it does the people of Japan. It is, ultimately, the question of what we are subtly but surely doing to ourselves in and through the trajectory of our current and much touted medical technologies. The risks are real. My sources were cognizant that they were looking not just at a benign trajectory but at something that had taken on the character of a juggernaut. And they were asking themselves how it might be either stopped or, at the least,
slowed down sufficiently so that its course and probable consequences might be evaluated with far greater care. The second thing redirecting my study away from being one dealing with an ethnological curiosity “out there” was my gradual recognition that a fear vis-à-vis the direction taken by our own society in this matter is far more pervasive here in America than is commonly or publicly recognized. That is, although—for reasons described later in this book—Americans are now routinely conditioned to declare that the cadaveric organ transplant saves lives and, thus, must be an unquestionable good, they are far more likely to express open distaste for those kinds of medical “miracles” that are in many ways the conceptual and technological children of the organ transplant—things such as the use of fetal stem cells for research, the manipulation of genes, and laboratory manufacture of replacement body parts. This led me to suspect that the Japanese resistance to transplanting organs out of supposed cadavers may mean that as a society they registered deep concern and resistance at an earlier point in what constitutes a trajectory within which we both ought to see ourselves. This was something I could not ignore. We and they may be the co-inhabitants of virtually the same dwelling, one now possibly afire, and the major difference between the Japanese and ourselves is that they, because of a different set of cultural and religious sensitivities, appear to have smelled the smoke before we have. If so, then this topic had become one I could no longer approach with the diffidence of someone whose curiosity alone had been piqued.

Topical Description

During the 1990s the small library of books in Japanese I was assembling and reading became increasingly diverse and inclusive of sophisticated argumentation. Some of the ones I had picked up early during this period had a chauvinistic edge; they focused on organ transplants only and on how these constituted a typically "Western" procedure but one deeply contrary to all Japanese cultural and religious values. Moreover, they courted the macabre. The books I read early on had titles which, if translated, would be rendered Murder in the Surgical Suite or Death by Surgery—The Real Story of Organ Transplants! Although interesting as cultural artifacts, the intellectual and scientific substance of such books struck me as a bit thin. They, of course, did signal the extent to which the public debate over such things in Japan was spirited and why it was that the Japanese public had become extremely cautious about the legalization of "brain death."

As my library grew, I added titles that were less sensational. But greater restraint in the rhetoric was more than made up for by the fact that the spectrum from which the opposition came was widened in terms of both its ideological commitment and its professional expertise. Fascinating to me was the amount of agreement among what I had long taken to be very different camps. For some decades it had been possible to depict much of Japanese public opinion as dividing along semipolitical lines—with "liberals" generally more open to the adoption of things designated as "progressive" coming from Europe or America and, on the other
side, “conservatives” arguing for retention of traditional social and religious values in the face of such intrusion. Now, however, on the issue of brain death and organ transplantation a resistance came from both camps. One of the most vocal critics was Takeshi Umehara, a conservative thinker whose writings were often critical of most things American. Yet detailed and data-based criticism came also from someone such as Takashi Tachibana, an investigator of scrupulous standards and one usually associated with the progressives of Japan. Moreover, I discovered that many of the intellectuals among Japan’s Christians, usually progressive in matters having to do with the expansion of “Western”-style medicine, were reluctant to adopt what they had heard or read in the American context about how organ transplantation puts into practice the principle of Christian love, the agapé of the “good Samaritan.” Thus, I had to recognize that, when it came to this issue, the fault line between conservative and progressive in politics and the one between Japanese traditional religions and Christianity were anything but clear. And the range of professional credentials among writers in my library was at the same time expanding. Books questioning both the transplant and a larger range of procedures were being written by persons with solid credentials in science. They saw the language justifying such transplantation as slippery and its procedures as having run rashly over one of the principles of good science, namely, the need to exercise prudence. Whether written by scientists or philosophers, such books held that, unless there were absolutely solid evidence for making the death of the brain equal to the death of the human person, any extraction of inner organs from a person so defined would be unjustified.2 Contrary to everything I as an American had long assumed to be safe and secure in this domain, the Japanese I was reading saw the issue as far from settled. And for them, to be squishy in either the science or conceptualization of “brain death” was to throw prudence to the winds. Prudence, they held, was both an ethical principle and a norm for good science. And I was finding myself increasingly forced to agree. Then in 1998 I discovered something that has had a large impact upon the argument of this book. I was then spending an academic year in Kyoto both teaching and following what was going on in the transplant debate. A law had managed to get passed by the Japanese Diet making cadaveric transplants legal at last, and then, after much hesitation and with continuing great caution, the first such legal transplantation was carried out between February 28 and March 1, 1999. Even in late 1998, however, the matter was massive in the news. I was trying to keep up with the public media and the spate of new books coming out on these matters. It was during that time in Japan that I discovered something of great importance. In my reading in Japanese bioethics, I learned that cadaveric organ transplantation had been objected to already decades before by none other than the philosopher Hans Jonas. He too judged the notion of brain death to be seriously flawed. To me this was a startling bit of news. This was because his essay on this matter had somehow escaped my attention before. And thus it was in a book in Japanese that I first read the trenchant paragraphs of an essay by him originally entitled “Against the Stream: Comments on the Definition and Redefinition of Death,” written in 1974.3


I already had known of Jonas, a student of Martin Heidegger but one who, as a Jew in fascist Germany, not only escaped for his life but later in the United States had written extensively on the philosophy of biology and matters of developing technology. I knew that in my library back home in America I had not only a copy of his important early work on Gnosticism but also a copy of one of his later works, The Imperative of Responsibility: In Search of an Ethics for the Technological Age. But while in Japan I realized that there might be deep connections between Jonas’s objection to taking organs from persons only putatively dead and his more general perspective on technology. I also became curious about why the Japanese found so much importance in the work of Jonas—importance extending well beyond the fact that he and many of them disapproved of the cadaveric transplant. And as a corollary to this inquiry, I became interested in why Americans in general, with a couple of important exceptions, had largely ignored and peripheralized Jonas and what he had to say about ethics and medical technology. My thoughts on these matters take shape in a later chapter of this book.

Why is the question of cadaveric transplantation—that is, the transfer of organs from “brain-dead” donors—important? On the first and most basic level it is so because there is a scientific and philosophical question as to whether the brain dead are truly dead. If they are not dead, then to remove their organs with a surgical scalpel could very well be exactly what causes their death. Lack of clarity on this issue could imply what was stated in some of the Japanese books with lurid titles, namely, some level of “killing” in the surgical suite. And, as I was later to discover, the ethics of this has more recently become a major concern of some Europeans, including some Catholics, who have publicly challenged the morality of the cadaveric transplant.

I too think this is important. Nevertheless, I think it necessary to go beyond the question of brain death. In this book I wish to keep in focus the role that cadaveric organ transplantation has historically played in our own society as being the paradigm case of what has been taken to be “clear success” and “unambiguous moral value” in an entire range of problematic bioethical techniques and technologies. That is, as I see things, the cadaveric transplant as a mode of technology has had a far-reaching significance and role in the justification to the public of an entire range of medical technologies; one after another they have been attached to it. Transplantation has done a lot more than save individual lives. It has also, at least until now, given “life”—in the sense of a nearly carte blanche validity—to an entire shelf of new technologies, beginning with in vitro fertilization but extending now into the future with projected proposals for human cloning. Its coattails have been extraordinarily long. Already in 1976 Robert M. Veatch caught this point and stated the matter with precision, writing:

When Dr. Christiaan Barnard, in December 1967, cut out Louis Washkansky’s heart with the faith that he could put a new one in its place, the biological revolution had its symbol. Here was the first decisive campaign of an all-out war on the mortal flaws of the human body. [. . . .] The vision of the surgeon purposefully cutting the heart from the human breast has symbolized the new era of the biological revolution as much as Sputnik has that of the physical sciences. (Veatch 1976: 250)

This statement, if anything, is more clearly true today. And the appeal to transplantation as the paradigm case, the supposedly unambiguous ethical success story, gets made whenever new technologies come on to the scene. Thus, a colleague of mine writes:

Gene therapy bears strong resemblance to transplantation. Technologically, it is virtually identical, with many of the same problems and only a few fundamentally new ones, like the vector. Morally, it poses less significant issues than are already present for the transplant team. (McGee 1997: 43)

That is, if the transplant is unexceptional, then gene therapy is a fortiori so. No one, surely not I, can gainsay the fact that individuals, many in fact, have been given longer life through the receipt of donated organs. Yet it is time to look at the role that organ transplantation has had beyond the “saving” of individual lives. It has been the most salient “miracle” in medical technology of the latter half of the twentieth century and even today is used as the clear instance of what contemporary medicine can do to bring benefit to individuals in need of dramatic modes of therapy. In our private and public consciousness it is the most conspicuous example of a heralded medical procedure that can bring life to persons who, without it, would be dead. And since the American general public has become, at least on the surface of things, convinced that it may accept the transplant as unproblematic, the attractive patina of unalloyed benefit, it is hoped, ought to be recognized as having extended out to the newer technologies as well.

But what if the miracle is, in fact, largely a ruse? This was the crucial question posed by Jonas. It is one among the many being asked within the Japanese debate about the trajectory of our current biotechnology, one largely American-led. And to recognize this as a trajectory is very important. If the cadaveric organ transplant is both chronologically and symbolically the quintessential “miracle” of modern medical technology, then whether or not it can stand critical scrutiny is no minor matter.

And this, in turn, raises a question about the aura of altruism within which these kinds of medical procedures have been cast. No one can doubt that many individual life spans have been extended through the receipt of others’ organs. Nor can it be doubted that in the future organs might be cloned that surely could replace, with vast benefit to their owners, ones that are both defective and life-endangering. Are these developments not ones that will benefit an untold number of our fellow humans? And would their pursuit, both in terms of research and application, not be precisely a way in which we can as societies give expressions of love and help to persons among us who are in one way or another ill, disabled, or incapacitated? In this sequence of existing and foreseeable benefits, is there not the collective articulation of our best human impulse, that is, the altruistic one?


If there is anything in this book that is likely to come as a surprise, perhaps even as a shock, to its readers, it will be the proposition that altruism so construed very likely is altruism misplaced and even, at least on some level, deceptive. I anticipate such a reaction because in many ways it was initially my own. That is, it has taken me some time to come to see that the things being argued for by both Jonas and many Japanese are ones which, while running against all our first intuitions in this area, are items designed to push us into a recognition that the altruism about which we most ought to be concerned should be directed differently. Later portions of this book will give the details of this argument. Here, however, it makes sense to express its core. It is that our usual notions of altruism and the good being done—concepts through which we justify each new bio-development as it comes along—are today inadvertently preventing us from recognizing that our greatest responsibility, one that argues for far greater caution than we now show, is to guarantee that the “humanity” we pass down to generations still unborn is the same humanity—that is, literally, belonging to the same species—that we have ourselves known for the whole of human history up to this point.

My intention here is to isolate and illustrate the significance of the most important insight of Jonas in this regard—namely, that we today, partially out of ignorance and partially also out of faith invested in fundamentally flawed philosophies, are in the process of altering the nature of human nature. I suggest here that, apparently in contrast to our counterparts in Japan, we in contemporary America have rendered ourselves intellectually incapable of thinking hard about the concept of “human nature,” one that both as scientists and as humanists we have thrown into an ideational dumpster. We have rather glibly and self-assuredly taken the whole notion of human nature to be useless or meaningless at best and quite likely even ideologically tainted. Ironically, however, within our practices, perhaps especially in our willingness to take the organs of the “brain dead” and in recent proposals that we do the same from persons in coma, lies a certain and scarcely examined definition of human nature. It holds that in essence we are our brains or possibly only our brains when they are providing us with consciousness. Thus the question we disdain to address as humanists and social scientists is one being answered behind our backs by the biological sciences. And the implications of that implicit answer, especially if in error, are deep and wide. In this book I explore the exorbitant price we may be about to pay for our intellectual laziness and terminological skittishness with respect to this term. Recognizing that in raising this issue I will, in the mode of Jonas, be writing “against the stream,” I see no alternative. The consequences of not doing so are, I think, profound.

Enhancing Skepticism

If the points made by Hans Jonas are so central to what this book is all about, why, one could ask, is there a need to bother with so much attention to Japan? What is the rationale for so much attention to matters and perspectives seemingly so removed, not only geographically but even culturally, from our own? Although my hope is that a fuller answer to this appropriate question will be provided
by the details of what unfolds within the following chapters, a brief response here would seem to be in place. It is that, although I once understood the role of Japan specialists such as myself to be that of intellectual “gadflies” pointing out through research that the perspectives offered by “Western” civilization are not necessarily better than those of other cultures, I find myself no longer satisfied with merely giving reminders concerning the existence of “other” patterns and calling upon my fellow Americans to “appreciate” that their own ways of thought and action are, after all, contingent and relative. That is, in some matters—and bioethics will here be my laboratory for this—I no longer think it enough simply to call for a few moments of attention to the non-West, as if I were somewhat like a museum docent inviting a Renaissance-absorbed group of visitors to “appreciate” also the rooms of Chinese art in some far-off wing of the building. Merely to request “appreciation” would be, I now fear, to leave things at the level of curiosity rather than at that of concern that has within itself an element of real fear for the future. I think it is highly likely that some of the things we are doing and now thinking of doing within the trajectory of our medical practices are not just culturally relative but, rather, seriously fraught with problems and real dangers. And these, I will argue, have been detected and described with greater clarity by some of the Japanese who have observed them closely and written about their own concerns. We should “visit” this part of the intellectual world to see what is being said there about a crisis coming to ourselves.

Yet there is something I envy in what is done by the museum docent. They can depend upon the fact that for a long time we in the West have gotten ourselves accustomed to expecting to find value, perhaps even an equivalent value, in non-Western artworks and styles. In the sciences and even more pointedly in the domain of medical ethics, the situation is vastly different. In these we automatically assume that anything worth saying will arise out of the perspective of the West. And already for some centuries it has been from Europe and America that the latest medical knowledge, often carried by “medical missionaries,” has been brought to other parts of the world. This missionizing tradition, although less overt and no longer under the aegis of churches, remains strong. The presumption remains that the best knowledge, the newest techniques, and what needs saying in bioethics will travel outward from the places of their origin in Europe or America. Clear examples both of this as a practice and of the complications it involves will show up in chapters that follow. Some American physicians, I will show, demonstrate an almost evangelical passion in their commitment to bringing the good news of regularized transplantation procedures to Japanese society.

In this book I have no intention of presenting the data or arguments for anything that could by any stretch of the imagination be classified as “Eastern science.” I leave that task to others both better equipped and more eager than I to do so. In fact, this book will, perhaps to the surprise of some, stick stubbornly to what we think of as the norms and procedures of the sciences with which we are most familiar. At least for the purpose of my central argument here, I can accomplish what needs doing without appeal to what my potential critics might
see and then reject as appeals to the “paranormal,” the mystical, and the domain of things based on unfounded “beliefs.” What the reader finds here is not intended to be an invitation into the realm of “new” or “Eastern” science. The reader need not worry that they will be asked to approach matters in a more “mystical” vein or with a diminished concern to differentiate facts from speculation. In fact, what I write here will insist more strongly than usual that the things at issue are best resolved by use of the standards we commonly associate with evidence. As noted earlier, one of the things that impressed me about the public debate in Japanese was that a sizable portion of it was carried on by people in the sciences. And even most of those who are professional students of religion were committed to the common criteria of evidence. No, it is for greater, not less, skepticism that this book will argue. And this is because the materials I have read in Japanese and present here will for the most part be ones arising out of an enhanced, not a bracketed, skepticism. When they are critical of their American counterparts, the writers in Japanese will more often than not view us as making too many hasty assumptions and jumping to unwarranted conclusions. I find this especially valuable when some of the writers represented here seem to have seen through what not infrequently is an inflated hype with which we push our technologies both among ourselves and to others. Hype to which persons have not already become inured is hype sometimes more easily detected as such. Therefore, I offer most of what follows as an exercise not in lesser skepticism but in a deepening of that mode of inquiry, one we can and should associate with the best of science.

Revelations in Havana

During the 1990s the definition of “brain death” was coming undone. And it was doing so now also under the scrutiny of scientists and philosophers in the West. Although this too will be scrutinized in more detail below, it here deserves noting that the progressive collapse of the “brain-death” concept—expressed in new admissions that it is primarily a “useful fiction”—validated both the objections to it offered already in 1970 by Hans Jonas and the bulk of the doubts that had for decades already been publicly articulated in Japan. For me this collapse and its implications came forward as certain when I took part in an international symposium held in Havana in January of 2000.

The meeting I attended was the third in a series on the topic of “Coma and Death” and organized by Dr. Calixto Machado, a gregarious and efficient promoter of Cuba’s role in international medicine and research. Perhaps the fact that this conference was held in Cuba was part of the reason why so many of the world’s most eminent neurologists and transplant surgeons took part—including even the now-deceased Dr. Christiaan Barnard, who had performed the world’s first heart transplant in 1967. The level of interest was high. And it was so because most in attendance were aware that the notion of brain death had become fragile. Transplant surgeons
were clearly anxious that the conceptual underpinning for much of their work was under threat. The presence of a number of Europeans, some of them philosophers and theologians, quickly introduced questions about the universality of American concepts. For instance, a readiness on our side of the Atlantic to refer to certain kinds of patients as in a “vegetative state” was objected to because of how it, by definition, dehumanizes persons who, although in an incurable state and in need of palliative care, are not subhuman. The issue was deemed especially serious because some transplantation promoters in the United States, in order to address the problem of an insufficient number of donors, had begun to suggest that organs be excised not only from the “brain dead” but also from persons in “irreversible coma” because, by virtue of lost consciousness and reason, such bodies may as well, it was thought, be reclassified as other than human. And to take organs from “vegetables” could not, these Americans implied, be judged as homicide. Many of the Europeans present were skeptical, noting that they refused even to use the term “vegetative state” because of how it prejudiced the matter. [Although often conflated in the public imagination, the conditions of brain death and persistent vegetative state (PVS) are not equivalent. Brain death typically signifies an irreversible loss of function of the whole brain, including the brain stem, which is responsible for respiration. PVS is a broader category, referring to the loss of the “higher” brain. PVS patients can breathe independently, and some have been shown to retain sentience (see Lock 2002: 349–50). The suggestion that PVS patients should also be considered for organ harvesting is, thus, a radical one.]

But this discussion was a prelude to what virtually everyone anticipated to be the symposium’s critical juncture—a paper, with videotaped documentation, presented by Dr. D. Alan Shewmon, a professor of pediatric neurology at UCLA. Although questions about the “brain-death” concept had been raised during the 1990s by both Europeans (Seifert 1993) and Americans (Arnold and Youngner 1993), Shewmon as a scientist rather than philosopher or bioethicist would clearly have to be listened to very carefully by his fellow scientists at the symposium. They already knew, through existing publications (Shewmon 1998, for example), that he claimed to have empirical evidence that the brain-death criterion was inadequate. If so, the rationale for cadaveric transplants would be in trouble. Here I would like to document the impact made by his presentation in Havana and its confirmation to me that the Japanese debates were anything but scientifically eccentric. Moreover, the debate surrounding what Shewmon presented itself disclosed that Americans pushing advanced technologies would not be averse to engaging in the “cover-up” of problematic data. The revelations in Havana were, at the least, twofold.

The central disclosure at Dr. Machado’s international symposium was deemed sufficiently important that it was later reported on in The New Yorker (Greenberg 2001). Dr. Shewmon showed the clinical videotape of a boy, technically “brain dead” for years, whose hand responded visibly to stimulation. Even more significantly, this boy, “brain dead” for more than a decade, has in fact undergone sexual maturation during that period of time—that is, without being in possession of the very brain commonly described as the organ that coordinates and makes
such things possible. Shewmon’s paper opened up the possibility that the “lowly” spinal cord was, in fact, doing things that had been, at least by the brain-death enthusiasts, till then denied. His videos and paper effectively demolished the assumption that the brain was all that matters, that in its absence central functions could not be coordinated, and that, once deactivated, organ harvesting may as well go forward since no “person” remained in the body in question.4

To my mind the second key revelation of this conference came immediately after Shewmon presented his paper. One of America’s most famous transplantation surgeons and someone clearly unnerved by the implications of Shewmon’s research rose to respond. In what seemed, at least at first sight, to be only a desperate, irrelevant remark, he attacked Shewmon for being “anti-Darwinian.” But at the same time he admitted he could find nothing, at least on scientific grounds, to counter what Shewmon had presented. But what he then proceeded to request was the shocker. He issued a plea to Dr. Shewmon that he keep his data and his doubts strictly to himself and his scientific peers. The general public, he said, need not and, in fact, should not be made aware of the kinds of things that had come forward in this new research. I requested the floor to express my own sense that something was seriously wrong here. Is it not, I asked, one of the clear criteria of science worthy of the name that it be open, not concealed? What, I wondered aloud, were the reasons for wanting to keep such information, crucial to the public’s grasp of the condition of possible transplantation donors, such a closely and professionally guarded secret? To these questions no answers were given—at least in that public forum. Later, however, various individuals stated to me their own sense that it is personal investment, with both professional and financial aspects, that makes members of the transplantation community eager to hang on to the notion of brain death even after its collapse and, as much as possible, keep the damaging evidence away from the public’s eye.

This fit something that had been festering like a seed of apprehensiveness within myself—namely, the evidence from here and there that the North American public was, for a number of rather questionable reasons, being allowed to remain “uninformed” about a number of things in this domain. The Japanese books and magazine essays I had been reading were far more detailed, candid, and honest about the conceptual and practical problems. By contrast, the American public, I realized in Havana, was being told over and over again that the only problem with such transplantation was that the number of people needing and wanting organs far exceeded the number of donors. This gap, we here were being repeatedly told, was the problem in this domain.

The seriousness of this goes far beyond transplants. This is because, as noted above by Veatch, the excision and reuse of organs from persons putatively dead became and continues to serve as the symbol for the entire biological revolution. And, in the opinion of some, the most amazing parts of it remain still to be seen. In one of Japan’s major texts on bioethics, Tsuyoshi Awaya states that “organ transplantation was the starting point of revolutionary changes in the structure of the human species” (Awaya 1998: 97).


Awaya’s suggestion that transplantation launched the program whereby the human species will bit by bit be changed may, I later realized, illuminate what might have been implied in Havana by the surgeon’s charge, seemingly irrelevant and inept at the time, that Dr. Shewmon and his research were “anti-Darwin.” This is to say that being “anti-Darwin” in our time may, at least in the minds of some persons, have come to mean not necessarily a stance, like that of Christian fundamentalists, opposed to the core concepts of the evolution of species but, rather, one of resistance to the clearly post-Darwinian notion that humans, having come this far, now may and should consciously alter their species. In this way what had been the description of the origin of species gets twisted into a totally new, normative program for utopian change, all supposedly under the adequate control of ourselves and our emerging technologies. Being “anti-Darwin” now has nothing to do with the advocacy of “creationism” in any of its multiple forms. It is, rather, now the paint that can be brushed on anyone who is openly skeptical about the heady utopianism of those who would have us redesign our humanity and regard all of what we have thus far been as our “childhood,” a stage in our collective growth we must happily leave behind.5

When Bill Joy, a cofounder of Sun Microsystems, suggested that there might be wisdom in abandoning the plan to engineer ourselves genetically (Joy 2000), a book by one of the evangelists of such engineering, Gregory Stock, dumped Joy into the class of those who believed in an Earth-centered universe and those who resisted Darwin (Stock 2002: 174–5). Stock holds that the rate at which we are changing our species is accelerating. He is probably right. But he seems to suggest that, since we are now moving at an ever-faster rate, we may as well accept the inevitability of these changes. He hints that the fact of such movement becomes the reason why any attempt to stop it must be futile. He seems oblivious to the fact that the fatalism not very far from the surface in his own perspective subverts the very thing he wishes to hold to as the essence of the humanity that has gotten this far—namely, our capacity to freely will our future. For surely such freedom should imply the right and the capacity to slow down or even stop this movement.

Japanese scholars, who in their thinking about religion never seem to have seen The Origin of Species as causing difficulties for what they deemed important, are far more wary of what they see as utopian dreaming and pseudo-scientific hype in that line of American biotechnology that got its start, now retrospectively a shaky one, with the devising of the cadaveric transplant.6 If such transplants put humankind into a clear trajectory of development and application, they also started a process which now has clearly become something of a juggernaut. One thing I derive from Japanese writers on this subject—and it is in concert with the advice of Hans Jonas—is that one of the interestingly human things about human beings is that they have a capacity not only to begin doing things that may be beneficial but also, because it too will prove beneficial, to cease doing things that harm or may lead to disaster. Juggernauts are difficult, but not impossible, to stop.


Naoki Morishita, coming to the fore as one of Japan’s most perceptive bioethicists, uses the phrase “applying the brakes” (burēki wo kakeru) when discussing this trajectory (Morishita 1999: 425ff). In some sense, the protracted Japanese debate over the morality of cadaveric transplants was itself such an attempt to slow things down—so as to get to know with certainty what is going on before proceeding forward in a half-blind state. The assumption, one now well verified by the research of someone such as D. Alan Shewmon, is that, if the cadaveric transplant—that is, the flagship and much-touted exemplar for all that was to follow—is deeply flawed, then the whole trajectory may need to be brought into question. Besides, good science may demand it.


Chapter 2 ORGAN PANIC

Gallows Humor

It is undeniable that the success of organ transplantation on one level has been extraordinary. After a period of years during which too many organ recipients died too quickly due to rejection by the body’s immune system, ways of “tricking” the immune system, especially cyclosporine, approved during the early 1980s, were located and put in place so that many recipients became capable of extended life for years and sometimes even decades. What once had been so rare as to be called a modern “miracle” became much more commonplace.

But there is a problem. In fact, there are multiple problems, and the one of which we have been made well aware is the ever-widening gap in numbers between donated organs and persons needing and wanting them. UNOS has long tried to show that this gap might be closed if only the public were made aware of how easy donation can be, and its campaign sometimes uses catchy slogans and hyperbole-skewed “facts.” However, the gap does not close. On the contrary, it widens. And today there are persons and organizations in America openly insisting that a practice that till now has been both illegal and ethically rejected—namely, the buying and selling of organs and tissue—might as well be made legal and morally routine. That way, they say, the gap will close.

Already in 1992 Renée Fox and Judith Swazey were among those recognizing that the organ crunch was leading to such calls for the commodification of body parts. They also found such a practice morally questionable (Fox and Swazey 1992: 64–72). It was probably inevitable that, in a society as market- and profit-oriented as that of the United States, promoters of transplants who are also somewhat panicked by the low numbers on the supply side would jump to a market solution. Although surely a gross oxymoron, paid “altruism” increasingly becomes both an advocated “solution” and a subject of careful scrutiny (for instance, Andrews and Nelkin 2001 and Goodwin 2006).1

*Originally the first chapter of the second draft (2010). Its first three sections, “New Guilt,” “The Fault Line,” and “Heart of the Matter,” recycled material that appears in the present volume in Chapters 1, 4, and 5. These sections have been omitted here and are described in the Appendix.


Although Japanese who discuss both transplantation and the wider range of new biotechnologies are fully aware of what is both alluring and morally hazardous in such a move to marketed body parts, many of them push the discussion further. This is where, I suggest, they introduce a consideration that could prove to be an important contribution to our own discussions of biotechnology. That on which they focus has, to my knowledge, been overlooked in our own discussions and debates. Perhaps it has, in fact, been intentionally avoided.

To uncover this it can be helpful to recognize that most Americans, whether or not they are donors, are aware that the gap between available organs and the persons desiring them is not closing. This is so in spite of the fact that ongoing public touting of the unprecedented achievements of our own biotechnology has led many people to assume that they somehow now have a right to the organs of others—either because their original “owners” are no longer alive or because, as in the case for instance of a kidney, the present “owner” should be able to spare one because the remaining kidney in the pair will probably be able by itself to sustain the present “owner’s” life.2 In America, to the shock of Japanese reading about it in their own media, some couples have even gone so far as to conceive a child with the express intention that the child so produced will, by virtue of having bone marrow matchable with that of an older sibling who very much needs such a transplant, eventually feel obliged to donate it. The assumption here is that the person in need has some level of a right to another’s organ or tissue (LaFleur 2003).

This, however, leads to taking something else for granted. Once people in a given society have begun to assume that access to organs is some kind of natural “right” that they possess, it seems clear that there occurs a subtle shift in what might be taken as the cause of death. More and more, in the minds of someone desperately ill and persons close to them, it will seem that death will occur not so much because an organ will fail but, rather, because some individual out there in society has failed to provide what could have been a life-prolonging body part. Ironically, in America the ongoing public campaign for more donors itself reinforces this changed perception about the cause of death. The not-so-subtle message conveyed is that people are dying because other people have selfishly failed to sign on as donors. There is the wait and then the acquisition failure. And it becomes easy for all concerned (perhaps the attending physicians especially) to blame the death on persons “out there” who refused to become donors.3

Unexpected consequences can be disturbing. In one case a potential recipient even went to court to try to compel his unwilling cousin to be a donor (Meisel and Roth 1978: 5–6). And the expectation that access is a right carries over into other areas of new biotechnology, the debate about stem-cell research, for instance. One need not be opposed to the use of embryonic stem cells to find something troubling in the charge that anyone reluctant to give carte blanche approval to this form of potential therapy will be, in effect, killing persons who might possibly be benefited by it. This, however, is exactly what Michelle Puczynski, who suffers from diabetes, has said in her criticism of certain legislators: “If they don’t do this they are taking lives away from people and they are pretty much taking away my life too” (Fox 1999).


Moreover, the process of waiting for the brain death of someone else so that that person’s heart might be “harvested” and transferred into one’s own body can itself be debilitating. Persons in hospitals waiting for compatible organs get to know one another and, given their deep anxieties, the word to one that an organ has been found for them cannot fail to ignite resentment in the minds of others still waiting in the queue. James L. Levenson and Mary Ellen Olbrisch, psychiatrists, report that some of those waiting engage in “gallows humor.” “It is not uncommon for patients to talk about fantasies of standing on the roof of the hospital with a rifle nor for hospital staff to be asked about whether or not they have had any opportunities to run down pedestrians on their way to work” (Levenson and Olbrisch 1987: 400). This may be initially shocking but it is also, I think, understandable. Persons in this condition are like Tantalus: what they most desperately desire and can even see (at least in the “mind’s eye”) is for them still just out of reach. For them the oft-cited gap between organs needed and organs available is no mere statistic but, by comparison, something that in the most literal sense has become a life-and-death matter. Not surprisingly, it deeply obsesses them.

Another statistic is that many, perhaps even the majority, of the persons waiting in this particular queue will in fact never get the life-extending organ they so desperately desire. For many an organ that matches what their own bodies physiologically need will not be found. In these cases the failure will be one of fit. For others, although there may be matching organs out there, they themselves are not in the right place—either geographically or in the disposition of the transplant coordinators—to receive them. And for many others there may be an available organ and even what seemed to be a successful surgical transplant but then they may experience either a rejection for some reason or a life prolongation that, in fact, is no more than a matter of days or even weeks.

Is anything of value lost when things take this turn? This is where, I think, paying attention to a Japanese viewpoint really matters. Many Japanese, because they are exceedingly close observers of American ways, hold that there is, in fact, a real loss here. And it is not just that a hoped-for extension of life never became a reality. The loss they have in mind takes place in a domain that American experts on transplantation, even on the ethics of it, either do not see or do not dare to touch. It is, these Japanese claim, the loss of something very valuable—namely, the psychological and spiritual equanimity that ought, if possible, to be present during the last phase of dying and is of extraordinary importance right then. Concretely, the last days, weeks, or even months prior to a person’s death should not be the ones in which the mind and spirit are thrust into a state of ugly competition with one’s fellow humans. It ought not to be a time of raw panic.

The point here is not that physical life itself has no value or that it is somehow “unnatural” to want to go on living. But what makes the dying of persons plunged into such panic far more tragic than it need be is the resentment that may be experienced by persons who have been led to believe that they are dying not because of organ failure but because persons “out there”—either known or unknown—have refused or simply neglected to be one of the donors that
otherwise might have helped them stay alive. The dark side of our acquisition of some new biotechnologies is what these, if not monitored in terms of their total impact, do to the inner life of many users sliding into a deep disappointment with the outcome of their use. Not getting a desperately desired organ or not getting access to what might prove to be a therapy via the use of embryonic stem cells appears to produce uncommon levels of resentment.4

One public commentator in Japan stated it bluntly: “There is something ignoble in the mind of a person who conceives of others, people still alive, as persons whose death will make possible, by organ transplantation, a prolongation of their own life” (Hiro 1992). And Yoshiteru Takatsuki, a philosopher at Tokai University whose expertise is on the traditions of the West, explains the problem of being a recipient:

By simply being in the position of wanting the vital organs of another person and of actually waiting to receive them, the prospective recipient’s state of mind gets linked to a desire that that other person become brain-dead. The feelings that are bound to arise in such situations cannot do other than strike us as being those we associate with guilt. At least one reason why in Japan it has been so difficult to get acceptance of brain-death and of organ transplants has been that such procedures require the death of another. And within the special ethical sensibility of us Japanese this gives rise to a sense of guilt. (Takatsuki 1999: 197)

Forest Preservation

A question of importance is whether this kind of concern about the recipient’s mind-state merely expresses something rather idiosyncratic, the anxiety of a nation that sometimes takes pride in being uniquely able to dissent from the thinking and cultural patterns of the rest of the world. Some early Japanese advocates of rejecting transplantation—Takeshi Umehara most conspicuously—did, in fact, place emphasis on Japanese society as unique and, therefore, having the right to diverge from the perspective of “the West” on this issue. And, unfortunately, some early European and American commentators on Japan’s resistance to transplantation made much of Japanese cultural particularity either as genuine or as just an ideological smokescreen that had within it no larger ethical import. Over time, however, Japanese experts on these matters have come to insist that there is an authentic moral issue here. And it is one of which the Japanese have been made aware only because they have not, at least in contrast to their American counterparts, worked hard at repressing efforts to bring it to the surface in discussions of biomedicine and bioethics. And, gratefully, the more recent writing on this topic by Margaret Lock, an authority on this issue, moves away from earlier reductions of the Japanese perspective to mere ideology; Lock now openly recognizes that there are significant and cross-culturally relevant issues that have been at least implicitly raised by the preferences shown in Japan (Lock 2002).5


These issues should focus not on the behavior of individuals but on how our respective societies will monitor both the explicit biotechnologies they adopt and how those technologies will, less conspicuously but no less powerfully, shape our values and, through these, the kind of peoples we become. These things happen not only in but to a given society. To do this circumspectly we must see how such questions are handled in different societies. And Japanese views, if not dismissed merely as a belated version of the medical curiosity or as some kind of head-in-the-sand intransigence vis-à-vis the views we Americans hold, can provide an especially valuable alternative aperture for looking at these matters.

When Takatsuki and others in Japan reflect on the mindset of the potential organ recipient, the point is not to censure those recipients as individuals but, rather, to expose the far larger forces that have brought about a compassion-worthy condition in such persons. Their anger, resentment, and turbulence of mind are things that any one of us would show if pushed into their position. They deserve not our censure but our compassion—plus our readiness to ask whether theirs is really a necessary plight. I can in no way fault such persons. I recognize that my identity with them is deep. And that is because if pushed as they have been pushed, I fear I would be no different.

Being propelled by the peculiar ethos generating what I have called “organ panic” is, however, something different from the more basic desire and will to live. Surely, except for individuals who’ve become suicidal, no one wants to die. Most of us, if in a life-threatening situation, will fight to stay alive. The desire for life and for a continuation of life is the deepest and most automatic psychophysical drive we have. And the readiness to fight for continued life extends as well to persons closely related to us—by blood or long affection.

That said, I hold that there is a difference between, on the one hand, the usual kind of drive to preserve our lives that is programmed into us and has been present in us for as long as we have existed and, on the other, what I describe here as an obsession with continued physical life so consuming that the mind begins to crave access to another’s bodily organs. Although the move from one of these to the other is on a continuum and we cannot locate a precise point of differentiation, the difference is one we can recognize. And here I think it not inappropriate to tag the latter as not just desire but, in fact, something at the level of lust. It is concretely a lust that wants access to another person’s body or, more anatomically, specific organs in that body. And, just as we today can identify sexual lust that is projected out into cyber-space, so what might be called the lust for organs or, by extension, an even wider biolust, can do what it does wholly within the mind of someone longing for an organ and extended outward within a field of potential, even if anonymous, providers. But that mind will suffer a real loss—especially if it develops into an intensive state during that period of time when dying and an acceptance of death would otherwise be in place. It replaces and blocks off the possibility of realized equanimity. And this will, at least for persons who otherwise would be making their own mental and spiritual peace with their own mortality, lead to a personal and private loss. If biolust shows up and takes over the mind at this point in
time, there will be no opportunity for any experience of death as what religious folk have sometimes called a “blessing.” But, in fact, getting to place this kind of value on dying comes out of an insight surely not limited to either religion or a philosophical fatalism. Americans today have been corralled into focusing almost exclusively on the individual “miracles” like transplants and into accepting without criticism an organ panic now regarded as normal. More broadly, they have supposedly inured themselves to a deep and society-wide obsession with getting access to whatever is promised in each and every new biomedical breakthrough.

And in this domain we Americans, I hope to show, have become the losers—in a variety of ways. While not a few of our physicians, bioethicists, and legal experts go on repeating the old mantra about Japan being too much a society of necessary social consensus to allow individuals there to have access to the latest in biotechnology, they have repeatedly failed to recognize in what a facile way they themselves simply assume that the American ways of thinking and acting in this area are deservedly the world’s norm. Such minds are refusing to travel. And, as such, they are staying closed off to evidence out there that threatens to disconfirm their claims and the intellectual and institutional comfort zone such incuriosity provides. Among the things that drew me into the inquiry that evolved into this book was observing that many persons here who think of themselves as scientists holding to the strict criteria of evidence behave in some areas as de facto evidence suppressors. I aspire to show the presence and harmful role, one violating the norms of science, that this plays in our society.6

One hears it far less today but it was only a few years ago—in fact, when I began this study—that medicine in America was being described here, without qualification, as simply “the best in the world” and the standard against which others should measure their own progress. Now, of course, America’s health system is more commonly described even here as “broken” and in need of radical changes. The implied conclusion meant to be drawn from a decades-long self-delusion was that, since we must obviously be doing things well, every new biotech project is fully deserving of whatever might be needed to help it go forward.

What makes a close examination of Japanese medicine and medical ethics especially worth our while is that its statistics are so interesting—and challenging. Japan has a life expectancy that is the world’s highest for an entire nation and an infant mortality rate far below our own—these things are the case in a society where medical coverage is universal and at the same time, on a per capita basis, costs only half of our own. All the Japanese get better care than do we while eating up, proportionately, half the GNP consumed by our own. In the face of this the fallback claim here is that our hospitals, or, at least, our university research hospitals, have in place the equipment and technologies that can provide the very latest in therapies for the most uncommon and most challenging of illnesses. This is at least a claim that might be true. What, however, is untrue is that our showpiece biotechnologies are ones to which anyone here can get access. Even
more importantly, the existence of these technologies does not itself demonstrate that our society as a society provides adequate care for its people. We can trot out for public view those individuals who have been rescued from certain death by some cutting-edge technology. But this is like saying we keep some individual trees tall and green even though many around are undernourished and visibly scrubby. The old cliché about individual trees being seen but the larger forest being missed is applicable here. Our nation’s habituated individualism, probably today having been capitalized more extremely than at any point in our history, has left a deep impact on how we construe our medical systems and how we channel our approach to bioethics. And, as Bellah and his colleagues demonstrated already more than two decades ago, the impact on our life as a people was, even then, anything but benign (Bellah et al. 1985). If true then, it seems to be even more deeply and malignly so today. It is, therefore, no mere coincidence that something once judged to be ethically reprehensible, namely, the placing of price tags on organs and human tissues, can today be openly advocated without raising the level of moral alarm here that it would if so promoted in Japan. Where the infusion of new guilt seems no longer to be getting what’s needed, it is hoped that the incentive of cash might do the job.


Chapter 3 SWEATING CORPSES

Vex not his ghost: O, let him pass! he hates him that would upon the rack of this rough world stretch him out longer.

—Shakespeare, King Lear V: III

The Japanese people, it was long maintained by some Americans, ought not go on resisting the obvious benefits of transplanting organs. This is so important that they might, in fact, have to be shamed into a change of policy—if not of mind—on this matter. That is, since it was obvious during the 1990s that the Japanese advocates—mostly physicians but not anything near 100 percent even of them—for this medical procedure were far from successful in convincing the nation that this would be the more moral way to go, it was thought that what in Japan is called gaiatsu, pressure from overseas, might have to be applied. That is, the Japanese people or, at least, the vocal opponents of the brain-death definition of death would have to be internationally embarrassed into acknowledging that their views were soaked in ancient superstition and irrational fears about not wanting to disturb the bodies of the dead or nearly dead. It was, these critics charged, the drag effect of outmoded religious beliefs that were keeping the Japanese people from accessing the technologies that could help some of them live much longer. The pressure from outside would help to lift a disabling taboo.

*Chapter 2 of the second draft (2010). I have replaced the last paragraph of the section “Nurses for Corpses” with the corresponding text from the first draft (2002).

An Appeal to Barbara Bush

Early in 1992, the last full year of his one-term presidency, George Herbert Walker Bush made an official visit to East Asia, one that included a stop in Japan. During the visit he became publicly ill and collapsed during an official meal—an unfortunate event that captured almost all the news at that time. Related, at least potentially, to that visit (but not to Bush’s illness) was something I found
in a Japanese publication, something apparently not available in English. It is the text of a December 4, 1991, letter, addressed to Barbara Bush, the American president’s wife. It was written and sent to her by Dr. Thomas Starzl, a physician at the University of Pittsburgh and one of America’s most illustrious transplant surgeons. Starzl had performed the first successful liver transplant and was the first to use cyclosporine to suppress the body’s immune system’s efforts to reject a newly transplanted organ.1 These feats and others won Starzl wide acclaim and numerous awards, including the National Medal of Science, presented to him later by President George W. Bush on February 13, 2006.

Long before this award and back in 1991, Starzl was still deeply engaged in an effort to convince the Japanese public to accept the “harvesting” of organs and their transplantation as morally acceptable. So on December 4, 1991, a month before the senior President Bush’s visit to Japan, Starzl’s letter to Mrs. Barbara Bush began by noting that “organ transplantation has been slow to develop in [Japan]” and that he and his colleagues in Japan wanted to request that the presidential couple participate in a public event in Japan that “would be widely reported in the media and would help to expedite transplantation acceptance and development [there]” (TRIO Japan 1993: 249).

Starzl’s attempt to enlist politicians at the highest level in a campaign to change the public viewpoint of a foreign people, however, was in many ways the culmination of something that had been a concern of his for more than two decades. In his 1992 autobiography he had discussed the disappointment he had experienced two years earlier when he had revisited Japan:

After Kauai we flew to Japan, hotbed of industry and high technology where social and legal inertia had frozen the conditions of organ donation into the mold that the Western world had shattered two decades before. Brain death was [still] not accepted, and public debate over what defined this condition remained at the same level as when I visited Japan for the first time in 1968. (Starzl 1992: 311)

Thomas Starzl’s impulse to “help out” in overcoming resistance in another society to technologies taken as “normal” in his own culture was, especially then, not uncommon. It has been characteristic of those persons and nations with “advanced” medical technologies to assume they have an obligation to promote these technologies—and the ethical viewpoint they express—so that they will reach global acceptance. An effort to transplant the transplant has surely been no exception. To my knowledge the hoped-for meeting in Japan never took place—perhaps because the American president took ill while there and hastily returned to Washington. Possibly, however, in spite of Starzl’s appeal to Mrs. Bush, this particular photo-op never made it onto the itinerary. Perhaps it was even deemed too intrusive a move by an American president into a matter about which many Japanese had very different and passionate feelings. If, in fact, there was a decision to be nonintrusive, we would have reason to view that president’s party as having been more sensitive to significant cultural

3. Sweating Corpses

47

differences than was Thomas Starzl. As a surgeon Starzl was no doubt top-tier. But complex problems require complex kinds of expertise and Starzl’s skills with a scalpel did not also give him one in reading Japanese. He spoke, then, out of ignorance when stating in 1992 that the “public debate [on these things] remained at the same level as when [he] had visited Japan for the first time in 1968.” Illiterate in this area, he could have had no direct access to the vast and sophisticated literature—much of it scientific—in Japanese about matters he was now declaring as ones about which the people of Japan were uniformed. True, he had had conversations with persons there who shared his own viewpoint. But, in fact, public opinion polls in 1992 were consistently indicating that the majority of Japanese people were still skeptical about brain death and organ transplants and some of the most articulately skeptical were themselves doctors. Moreover, since 1968 the people of Japan had probably become more knowledgeable about these things than any other people on earth. One thing that Starzl would have discovered if only he had been able to access the Japanese debates is that persons engaged in the debate in that country were fully cognizant of real and stubborn problems—not just ethical but even scientific ones—in the concept of brain death. The Japanese were scrutinizing and debating the details of brain death at a time when in America this was supposed to be something that had been long ago studied and confirmed to everyone’s satisfaction. In the United States people were expected to accept that it had been empirically proven that the person declared to be brain dead is really dead and, thus, the harvesting of organs may go forward. In the early 1990s, at least among Americans doing transplants and donating organs this assumption was thought to be rock solid. By the end of that same decade, however, it had begun to crumble. In 1997 Robert D. Truog, a Harvard professor of anesthesiology and bioethicist, would write that the brain-death notion was incoherent and ought to be abandoned (Truog 1997). During the decade between then and now the number of Americans, including experts, becoming skeptical about the viability of the brain death concept has increased. It was just prior to this development here that Starzl chided the Japanese for having a public debate that remained “at the same level” as it had been in 1968. But where he was ready to see stagnation is where there had, in fact, been a lot of discussion in Japan. And factored into that discussion and influencing persons skeptical about brain death were some of the same data and analyses that would shortly afterward convince even some Americans to see the whole concept as flawed. The evidence indicates that those Japanese studying the issue with care had caught on to the problems well before their counterparts in North America. *** So, which nation was really ahead of the other? It’s a nuanced answer that is needed. Starzl, it is clear, assumed that the society implementing the newest technology (at least in the area of his own expertise) proves that it is the more advanced. But the Japanese he regarded as behind the

times were following one of the core rules of the very best science, namely, Bacon's crucial dictum that true science will test, even prioritize, the negative evidence, the data suggesting that an existing theory or practice, no matter how widely believed, may be dead wrong. In this case Japanese experts were looking hard at all the details of the theory of brain death and finding anomalies suggesting the notion is flawed. And they were doing this well before Truog and others reopened this whole issue in the United States.

­The Eyes Have It Americans who are not medical professionals but consider themselves at least reasonably informed about medical matters—how to recognize the symptoms of stroke, what are the optional therapies for cancer, etc.—usually know next to nothing about the actual condition of being brain dead.2 In discussing this with them I find that even people who have signed on as organ donors remain profoundly unaware of the details of the body’s physical state that would make it possible and legal for their own organs to be removed and reused. And this rather shocking level of ignorance is reason enough to wonder whether they prefer not to know these details or have been intentionally kept in the dark. Perhaps it is both. However, it is not at all difficult to understand why transplant surgeons and donation promoters would prefer to keep people “blissfully” ignorant.3 There are multiple reasons why so many Japanese think so differently from their American counterparts about brain death and organ transplantation. But it is surely important to note at the outset that one fundamental difference is that the Japanese people as a people are far better informed about this than are their counterparts in the United States.4 This difference in information base was initially obscured because the earliest Western attempts to account for Japanese resistance to transplants tended to focus on an ideological manipulation of Japan’s population, a turning of them toward a chauvinistic resistance of the West. A misreading of the Japanese debates as nothing but nationalistic flimflam was surely precipitated by the international coverage given to someone recognized also in Japan as embarrassingly chauvinist, Takeshi Umehara. His statements, both in public and in a 1990 article in a major publication, had described his own resistance to the notion of brain death as fundamentally a stance against the West (Umehara 1990). This reduction of the Japanese debates into no more than a reaction by disgruntled nationalists had the unfortunate result of lulling America and Europe into missing the fact that serious scientific, intellectual, and societal issues were and continue to be involved. Fortunately, the publication in 2002 of a book by Margaret Lock, a distinguished Canadian anthropologist who has used data from Japan to write perceptively about many cross-cultural issues in medicine, established the fact that the issues raised most forcefully in Japan are, indeed, serious ones (Lock 2002). The present book will push that point about the seriousness of these issues and their implications even further. First, however, I wish to underscore that

nothing in America is comparable to the extent to which the Japanese general public has gotten informed about what the brain-dead body actually looks like, about how in many of its needs and functions it is disturbingly similar to the body of any other person in the ICU, and about the day-in-day-out care the brain-dead person's body will require from nurses attending it there. The Japanese have not adopted anything like the "out-of-sight, out-of-mind" approach to brain-dead people that has come to be the rule in the United States. And this, ironically, is because a nation that routinely "harvests" the brain dead for organs also disposes of them quickly; whatever remains after key organs are excised will be sent directly to whoever will manage the funeral and subsequent burial or cremation. In Japan, by contrast, persons who had become brain dead were not candidates for "harvesting" and, therefore, were kept in ICUs for a much longer time. This provided an opportunity for an intrepid researcher to study and write about their physical condition. Michi Nakajima, who trained in law and has worked as a medical correspondent, published in 1985 the book that brought wide public visibility to the actual condition of the brain-dead body. Her important book about the brain dead was titled Mienai shi, meaning "Invisible Death." Based on what she had herself seen during five months of observing what happens in various ICUs in Japan, she came down hard in favor of letting the eyes of family members, trained on the body rather than on the EEG, be the basis for an empirical judgment (Nakajima 1985/1990). She insisted that, even when family members are ushered into the ICU for a few moments to view their loved one, the conditions are such that what happens is a poor excuse for what real and sustained "seeing" would be. In what follows, she synthesized what she had often herself witnessed once there had been a diagnosis of brain death:

Members of the family, having donned white or blue sterilized garb and paper surgical masks, will line up for just a few moments next to monitoring machinery and a respirator at the bedside of their kin. On one side will stand the physician trying to explain brain death in simple language and on the other the family members whose eyes are fixed on the machinery and whose heads will be vacantly nodding assent. Although they have not really seen whether or not their loved one is really dead, they have had the physician's testimony to that effect. All the effort has gone into getting each of them, both in heart and mind, to give their assent. And with that, matters are expected to be settled. (Nakajima 1985/1990: 19–20)

This book had a wide impact. It made visible the invisibility of the brain dead. Yoshihiko Komatsu, writing a decade later in what would become probably the most highly respected and important book on this topic in Japan, noted that Nakajima's work had depicted a "slipshod" side to the way in which brain death is determined and a "ghastly" aspect to the manner in which, if legalized in Japan, organs would be extracted (Komatsu 1996: 4).

Nakajima is a skilled writer and her book's impact was no doubt due to how what she wrote carried readers into the actual scene of the ICU. She gave them eyes—and a reason to be skeptical about the adequacy of a "message" from the EEG. She noted that most people have been led to believe that, in contrast to the condition of the body of a person in PVS (persistent vegetative state), the brain-dead body has lost the "color" of a living person and has become a mere "thing" without signs of life.5

But this simply is untrue. I'm writing this because of what I found during five months of observing up close what takes place in the ICUs of various hospitals. And day in and day out I saw persons declared to be brain dead and saw that reality is very different from what is commonly believed. [. . . .] Once a respirator is attached and the flow of intravenous nourishment has been set up, what sustains life has been reestablished—so much so that the face retains its color and the body will not get emaciated and waste away. (Nakajima 1985/1990: 17–18)

Although this deserves more discussion below, it is important to note here that Nakajima wrote as someone concerned about empirical evidence—literally, what the eyes might be telling us. This has always been the oldest criterion of true science. Some of her critics in Japan tried to suggest that her approach was too "emotional," but there is reason to think that her gender may have contributed to comments that would have been both sexist and wrong. Nakajima should, I would argue, be seen as having been a catalyst for what in Japan became a refusal to believe that, because the EEG is technological, it is necessarily more scientific and therefore ought by itself to trump contrary evidence provided by our eyes! Americans are disturbingly ignorant about the condition called brain death. They do not realize that it is largely a product of language—in part because they as a people have been trained, not unlike the family members allowed only briefly into the ICU, to focus on the machine, not the body itself. They are led to assume that the EEG and only the EEG matters. And over the years television has lent a hand to this concealment. ER and its multiple TV progeny have shown it to us hundreds of times: the wavering or undulating line that suddenly goes flat! We watchers of TV are expected to know exactly what that means! The message of the flatline on the machine is supposedly simple and straightforward: the person to whom the lines of the electroencephalogram (or electrocardiogram) are connected is dead, unquestionably dead. We have been repeatedly but subtly persuaded into making this snap conclusion. For a long time, I was among the believers. I was part of the American general public trained to accept the evidence of the flatline. I long assumed there would be no need, whether I were awaiting another person's death in a hospital or merely watching its simulation on TV, to take another especially close and sustained look at any body just declared to be "brain dead." This was the case until I read a book on the topic in Japanese, Nōshi no hito, by Masahiro Morioka, one of Japan's leading bioethicists (Morioka 1991). This book, whose title in English might be

"The Brain Dead Human Person," punctured my naiveté; even its title shows the author's resistance to reducing the brain-dead person to a "body"—as if the person there, while showing so many signs of being still alive, is already a virtual corpse.6

Nurses for Corpses

In Japan the books by Nakajima and Morioka resonated especially deeply with those nurses who, as part of their responsibilities, care for brain-dead persons on a daily basis. More readily than their American counterparts they wrote openly about the disparity they were seeing. The brain-dead person, although immobile, will not in other ways resemble a corpse. It will have a face as colored and flesh as "alive-looking" as it had been before the accident or other event had brought this person into the ICU. It will not have gone gray or "ashen"—for the simple reason that blood, oxygenated through the assistance of the respirator, is still circulating through it. The thermometer taking its temperature will show no significant dip, no drop down into the coldness we associate with "the dead." From time to time, beads of sweat will materialize on the forehead and upper lip and need to be wiped away by the nurse. The kidneys too will be functioning as usual, meaning that urine will need to be drawn away with a catheter. If kept for any length of time, this body will also need to be turned because it, unlike that of a true cadaver, will otherwise develop bed sores. And since the mouth will still be warm, it will continue to provide a hospitable habitat for the growth of living things. This means that the teeth and gums, like those of any other living person but totally unlike those of a corpse, will need to be brushed from time to time by the nurse. In Germany, Alexandra Manzei, a nurse who later became a bioethicist, felt compelled by her own experience as a nurse to expose the incongruities in this and wrote a pointed critique of the brain-death notion (Manzei 1997). Nurses in the United States, expected by their profession to accept the concept of brain death and to promote donation, appear to be more cautious about expressing their sense of the conceptual, existential, and emotional dilemma they experience by seeing and tending persons in the bizarre state of the brain dead. One exception is Janice Brencick, a nursing instructor who describes a private musing about what a nurse might observe:

The sounds of the respirator interrupted her thoughts. Click-ha, click-ha, click-ha. The patient was still breathing. Then, bleep, bleep, bleep. The cardiac monitor proclaimed that his heart was still beating. The bewildered family heard the sounds too. Sounds of life. They gazed at their son, scrutinizing him as he lay there, then turned to look up at the nurse. The family spoke no words. Their expressions alone were enough to give away their secret thoughts. In silent language they asked what the nurse could not answer. Their son was still breathing. His heart was still beating. How could he lie there so peacefully, as though in some dreamless sleep, and yet still be called dead? Dead people don't breathe, do they? Their hearts don't keep on beating. They're supposed to look cold, stiff, pale. (Brencick and Webster 2000: 13)

­This nurse, ordered to make sure to keep “nourished” the organs taken from this “corpse,” wonders just “how many different body parts would have to remain alive before we could say that the person himself is still alive?” Here standpoint—that is, in the most literal sense of where one happens to be standing—makes a vast difference. Those persons, the nurses primarily, who will stand by the bedside in attendance for a protracted period of time and physically touch the body, are most apt to see what lies before them as “patient” and alive. And after their organs have been excised, it is those organs that are to be treated as “alive.” By contrast, the family, if allowed to see their brain-dead relative, will have much machinery in their line of vision and will receive the official description from a uniformed authority figure, the physician, who will direct the family to see the matter from their perspective. Their standpoint will not be that of the nurse. To be or not-to-be dead all depends upon where you are standing, how long you remain there, and what details of the person or body you are permitted to see.

A Pregnant Machine One difference between Japan’s lengthy and detailed discussion of these things and America’s far more rapid official decision that brain death equals true death is that more data was entered into the considerations of the Japanese. And one body of data, not available until the 1990s, consisted of medical cases of pregnant women being declared brain dead even while attempts were made to keep these women “alive” so that they might, if possible, go full term and have their babies removed by caesarian section.7 This bizarre phenomenon received a lot of attention in Japan. A cartoon appearing there was captioned by a question I translate as: “Can a Woman Be Pregnant and Brain Dead?”8 The most salient case of this occurred in Germany. On October 5, 1992, Marion Ploch, thirteen weeks pregnant, struck a tree with her automobile and suffered a fractured skull. Transferred by helicopter to the university hospital in Erlangen, she was placed in the ICU. Her parents refused the request of the doctors to transplant the organs of their now “brain-dead” daughter. It was then eventually concluded that, if Ms. Ploch could be sustained on life-support systems for the remaining time of her pregnancy—nearly six months in her case—a baby might be delivered alive and well. Its mother would then be allowed to die naturally. (Its father was unknown and remained so.) For approximately six weeks—that is, until a spontaneous abortion occurred on November 16—what was transpiring in Erlangen became a subject of national debate within Germany. A central aspect of the debate concerned the issue of whether in such a case the norm of “preserving a life”—even a potentially viable one such as that of this fetus—should override what otherwise would have been the natural demise of a fetus whose fate was naturally bound up with that of its mother. A public poll taken in Germany at the time suggested that those surveyed held that, by more than a four-to-one margin, a dead woman should not be caused, at least in such a fashion, to give birth to a child. She should be allowed to die.

And beyond this, not a few Germans viewed the doctors in the Erlangen hospital as far too ready and eager to carry on a shameless experiment, one using a human subject. Is our technology now so good that we can keep alive and viable a fetus for as long as six months in the womb of a woman who herself is not, at least by what is supposed to be a reliable definition, alive? The implicit question, at least as a problem for society, no doubt has its own fascination. But many Germans saw what was transpiring in Erlangen as far too similar to Nazi-style experimentation. They found this intolerable. And feminists could not but notice and express dismay at how the dignity of the "dead" woman was something that hardly seemed to matter when the possibility of achieving some kind of medical breakthrough presented itself to the physicians involved (Anstötz 1993). One researcher holds that this case was the catalyst for what in Germany, at least since 1992, has been an increasingly vocal opposition to what had been assumed to be the public's acceptance of the transplantation technologies (Schöne-Seifert 1999). The Erlangen case, even though it elicited the most public discussion to date, was not isolated. Instances of women who were concurrently pregnant and "brain dead" have surfaced in various countries. The phenomenon has at times been referred to as the "postmortem pregnancy." We can detect within some of the related discussions of physicians and bioethicists a concern that such phenomena could bring into open question the whole matter of brain death as equivalent to human death. And this, in turn, could lead to doubts about the morality of the transplant. After all, if a woman could, by being kept on a ventilator, be retained in her pregnant state so as to be able to give birth long after she had been declared "dead"—or, at least "dead" enough to have been considered eligible to be an organ donor if there had not been the fetus to consider—then the entire issue becomes very cloudy and uncertain. Is a woman alive enough to bring a fetus to term and give birth not also alive enough to make it clearly immoral to extract her organs and cause her to become unambiguously dead by means of that surgery? If this phenomenon itself has its grotesque dimension, the same could probably be said about some of the proposals for overcoming it. David Lamb, in a book originating within philosophy seminars given in England, saw the difficulties in this as easily surmountable. According to him the problem lay, in effect, in the mistaken view that the pregnant woman's reproductive organs were part of a living being. Lamb's solution was to redefine the womb. Rather than see it as a living part of a woman, it had to be viewed instead as a variety of machinery.

Foetal survival following brain-stem death of the mother is nothing more than foetal survival in a ventilated cadaver. It signifies no more "life" in the ex-patient than embryonic survival in a test-tube would suggest that test-tubes or associated laboratory equipment were endowed with vital properties. (Lamb 1996: 101)

The movement to establish equivalences here is swift but also facile. Ms. Ploch is viewed as a virtual test tube—but one that happens to come in the form of skin,

muscle, circulating blood, and amniotic fluid that is fully adequate to do what it needs to do to sustain a developing fetus. (If in our epoch babies can be conceived in vitro, thus implicitly wrapping test tubes in the language of the flesh and the womb, should it be surprising that Lamb can imagine the converse of this, namely, language that mechanizes the uterus?) What I note here, however, is that in the view of Lamb the pregnant woman here is a cadaver already. And her uterus needs to be at least imagined as a detachable and detached organ. That is, our minds here should be able, he holds, to perform an act of imagined surgery. The uterus of such a woman is no different from any other organ packed in a transfer container and being shipped off for transplant. Lamb writes: In this respect there is nothing different in this case from any other use of cadaver organs to sustain life. Why should the uterus have greater significance than the heart or the kidneys of a cadaver? Given the possibilities of more extensive means of salvaging corpses, one should remember that what is being managed in these cases is not patients but organs (the remnants of patients). (Lamb 1996: 101–2)

As pointed out already in a critique of Lamb’s earlier work, his argument is deeply flawed (Gervais 1986: 144ff). He writes as one oblivious to the kinds of issues raised by the easy and self-serving objectification of a woman’s body and the reduction of it to no more than mechanized functions. Moreover, his logic is faulty. Writing as if the issue under consideration were not one of the morality of using the shaky concept of “brain death” itself to legitimate transplants for putative cadavers, he concludes that instances of brain-dead mothers pose no problem—because using them as incubators is no different from using the excised organ of a brain-dead person. His logic is circular, an attempt to prove that there is no problem with the brain-death definition via an appeal to the brain-death definition—itself the problem. But while these objections are serious, what might concern us most here is the complete disconnect between this mode of ethical “problem-solving” and the ICU-based experience and direct perceptions of the medical personnel, especially the on-duty nurses noted above. According to Lamb what the nurse sees is an illusion. The woman for whom the nurse may be caring for days and even weeks on end is already an “ex-patient.” They may be breathing but that does not matter. The body is a de facto corpse. Cases of women who, though dead by definition, would continue to be pregnant were very troublesome to some members of Japan’s Ad Hoc Committee on Brain Death and Organ Transplantation, which issued its “final” report on January 22, 1992. This committee, established by the Diet, came out with a split decision; its majority wanted brain death legalized as real death but a very vocal minority opposed this. Because the issuance of a minority opinion was so unusual in consensus-preferring Japan, the minority view, in fact, received an unusual amount of attention.

The majority report, however, was also scrutinized. It recognized that cases of brain-death pregnancy were troublesome—not just to the public but to the whole case for brain death. During its deliberations, one ploy suggested was that an "exception" be made for such cases. Others thought that even mentioning such cases could sabotage the whole case for allowing those who recognize brain death to rely on it without being seen as causing someone's death. Takashi Tachibana offered an astute critique of the committee. Tachibana was an investigative reporter and public intellectual who had earned a high reputation through his role in exposing the nefarious crimes of Prime Minister Kakuei Tanaka, who was then arrested in 1976 for involvement in what was called the "Lockheed Scandal." In Tachibana's critique of the Ad Hoc Committee on Brain Death and Organ Transplantation, he reported on an argument that had been presented within it for keeping silent about cases like the one at Erlangen. The following had been said:

If we put into our final report something that makes an exception of the pregnant woman, there will be unending doubts about how a person can be both pregnant and technically dead in the first place. I think that even to include mention of this will only stir up a snake asleep in the grass. Our committee is obliged to bring this whole discussion to a conclusion and, therefore, it would have just the opposite effect if, at this point, we were to allude to this problem. We would just be buying trouble for ourselves if we would invite public suspicion in this way. (Tachibana 1992: 132, original emphasis)

Knowledge of how eager some were to avoid making the public aware of this problem only intensified Tachibana's suspicions. He balked in print at the idea that the government committee studying brain death should make no mention of the anomaly of brain-dead but pregnant women. In the third of three books he wrote on brain death and organ transplant questions, he charged the government's committee with disrespecting the intelligence and participation of the Japanese public. Of his own readership, a wide one, he asked that they notice that language about there being no need to "stir up a sleeping snake" was being used to describe them—the very public that in a democracy deserves to be fully informed (Tachibana 1992: 132–3). Tachibana's point was that these matters go to the heart of questions about what kind of society it is that will gloss over such forms of public deception. And this is, of course, hardly a question unique to Japan. When it comes to the provision of adequate and true information about brain death, such glossing-over would appear to have been even more true of society in the United States.

Brain Dead in a Procrustean Bed

Studying how Japan has handled these matters and reporting on them in some public contexts back here in the United States has made me keenly aware of how readily any questioning of new forms of biotechnology as represented by brain-death-based transplants invites what I might call the "Starzl response." This is the

charge that anything other than full acceptance must be due to some "time warp" caused by being still trapped in long-discredited religious and "metaphysical" positions and being "obviously" hostile to true science.9 Here the proponents of anything new in biotechnology make the snap judgment that opposition will be "faith-based" and unscientific. My hunch is that this way of reducing the complexity here to a religion-versus-science thing may be an unfortunate byproduct of America's idiosyncratic and highly politicized "culture wars."10 Japan's debates, I found, were relatively free from such caricatures. This is not to say that diverse appeals to diverse religious perspectives do not show up in Japan. These can be, as I hope to show, interestingly different from those that surface in America's discussion. But one thing seems clear: depending on old clichés to insist on a clear battleline between science and religion will ignore the multiple places where such lines are weak enough to cave in and disappear. Many Japanese who would never describe themselves as theists or even as detectably "spiritual" will often be very rigorous in demanding scientific proof from biotech advocates. These skeptics will happily track down the unfounded beliefs, leaps beyond logic, and positional backtracking in the arguments advanced by persons and groups assuming that the latest biotechnology will be based solely on science and reason. And anyone who decides to do the same will find flaws aplenty. For instance, in 1996 David Lamb, intent on arguing that the concept of brain death was air-tight and would never be used to legitimate anything beyond organ transplants, issued something like a philosopher's promise to anyone worried about "category slip" or ethical erosion. He wrote:

When terms like "brain death" and "vegetative state" are used as if they were synonymous (in proposals for euthanasia or termination of treatment) there is not only factual error but serious risk of ethical abuse. Patients in a vegetative state are not dead. No culture in the world would consider them fit for burial, organ removal, experimentation, etc. (Lamb 1996: 6, emphasis added)

A lot of faith was being placed both in the air-tightness of the brain-death notion and in every society's ability to see that it would be going over a moral boundary to consider taking organs from persons in a PVS (persistent vegetative state). Yet, within only slightly more than a decade of Lamb insisting that "no culture in the world" would consider persons in PVS to be candidates for harvesting, the "unthinkable" would not only be thought but even launched in public as one way of coping with the fact that most of the air had already seeped from the supposedly secure definition. For, as observed earlier, in 1997 Robert Truog not only noted that the brain-death definition had suffered a demise but went on to suggest that for that very reason it might now be time, because available organs were in ever shorter supply, to start taking them from persons in PVS! (Truog 1997). The proposal here is somewhat analogous to a driver, having just recognized that

their vehicle's brakes are broken, using that new knowledge to justify further acceleration by stepping down harder on the gas pedal. Now that the brain-death notion has collapsed, we may as well take steps—since organs are needed—to redefine death yet again and this time as inclusive of persons whose physical state is, if anything, even more ambiguous and problematic. Although it still has not gotten into mass-circulation media—with the likely result, in that case, that his information might shock the general public and possibly reduce the number of donors—Truog's honesty within the confines of professional journals was commendable. He pointed to studies showing that "one third of physicians and nurses do not believe that brain-dead persons are actually dead, but feel comfortable with the process of organ procurement because the patients are permanently unconscious and/or imminently dying" (Truog 1997). There is, however, in much of what goes on here a deeply troubling disconnect between professional and public honesty. For four decades, tremendous effort had been put into convincing the public—that is, that vast pool of potential donors being eyed by both recipients and their physicians—that the brain-dead person is really dead. Persons still curious about this would be told that physicians and legislators at least in America had been especially scrupulous in insisting that more than just the cerebral cortex—that is, where our consciousness is located—must be irreparably damaged. Americans were told that Americans had taken the "conservative" approach by defining death as "whole brain death," that is, inclusive of the lowly brain stem because it regulates so many of our bodily functions and, as Lamb would insist, is also the integrator of all functions and thus makes it possible to have the personal integration necessary to be a human person. This was the logic of the public argument that brain-death-based transplants need not be worrisome to anyone. The following chapters will explore why this definition of death had so precipitously imploded, as Truog admitted in 1997. Others, usually quite privately, began to refer to the brain-death notion as a "convenient fiction"—"convenient" because the public belief in its veracity facilitated their donations but a "fiction" in the sense that it simply is not true. Truog suggested moving on—and getting access to even more organs, those of persons with a still intact brain stem but a persistently nonfunctioning cerebral cortex. Terri Schiavo would be someone in that category—not so much dead by definition as dead by the irreversibility of her condition. To all appearances her human personhood was gone. Keeping her alive, it was now implied, would be a waste of good, recycle-worthy organs on an entity that retains the body of a human being but not the requisite mind. Loads of assumptions, based now on hard but problematic Cartesian views about body and mind, are wrapped up in Truog's supposedly merely practical suggestion about how to get more organs. He has hardly gotten free of the impulse to rationalize a medical procedure by coining a new definition. Being human gets redefined away from having "integrated functions" to being in possession of consciousness. (On this one we all, at least temporarily, lose our humanity whenever we sleep!) What is remarkable is how, having decided that a sometimes useful definition of death is flawed, a new one of human-ness gets trotted out. It is

like discovering shakiness in the dock on which one sits and deciding, on that basis, to jump into a boat with no bottom. Usefulness becomes all. Lamb in 1996 predicted that "no culture would consider [brain-dead persons] fit for experimentation." But by 1999 Renata Pasqualini and Wadih Arap at the M.D. Anderson Cancer Center in Houston, finding that taking multiple biopsies from conscious patients was too invasive, petitioned to be permitted to conduct this type of experiment on brain-dead or nearly dead patients—that is, persons intentionally not organ "harvested" but kept in this condition so that they might be used for still other purposes (Couzin 2002: 1210). The late Yonezō Nakagawa, a professor of medicine and writer on these topics, wrote that there is a name for this kind of arbitrary category manipulation. Citing a Greek myth, he refers to it as fitting the patient or even the concept into a "Procrustean bed" (Nakagawa 1996a: 180–4). Procrustes, whose name means "the stretcher"—that is, a person who stretches other things—was a bandit who kept and used an iron bed in his stronghold. Passers-by would be forced to lie down in it. If they were too long, he would cut off the excess and, if too short, stretch them out until they too fit exactly.11 Nakagawa's analogy seems apt. Defining and legalizing brain death involved a certain amount of data manipulation and logic-chopping to make it fit the need in the late 1960s to legitimate the removal of organs from a body, one that without this all-too-convenient concept might have reasonably been thought killed by the surgeon's scalpel. But later, when the initial legal hurdle had been scaled and, in fact, it was the simple need to get access to more usable organs or even to use a wider category of putatively "dead" persons for experiments that became paramount, the brain-death criterion began to look scrapworthy. Ever newer language about what exactly constitutes death would be coined. The assumption is that the actual appearance of the person on the bed—alive in many ways—should not matter. Eyes cannot be trusted. If look and language are out of sync, let the language win. The stretching out of concepts and ethics becomes of paramount importance.

Chapter 4
HEART TO HEART*

*Originally Chapter 3 of the second draft (2010).

The point is, not how long you live, but how nobly you live.

—Seneca the Younger

Ms. Sugita's Dilemma

In 1999 a woman I will here call "Ms. Sugita" explained a dilemma she had faced. It had been posed by a gap between moral values in her own society and pressures to embrace one she saw as embodied in a form of medical technology assumed by many Americans to be an unambiguous "good." I saw her interviewed in Japanese on television during the fall of 1998. I was in Kyoto and casually watching various programs one evening when suddenly she was being interviewed. I will relate what I recall of what she said. It is important to know that at that time in Japan there still had been no legal transplant of organs from a person declared to be brain dead, although a law making such transplants at least legal had been passed. (There had been one actual heart transplant decades earlier in 1968 but it had been ethically compromised in multiple ways and had not been legal.) The woman's name was not revealed but I will refer to her here as "Ms. Sugita" simply for the sake of convenience. She had been married for many years to a man who was sitting mutely next to her. Here is the gist of the notes I took while listening to her. It's in her own voice.

My husband here has a very weak heart and we have known for a while that within a year or so he will die because of the weakness. Some time ago our cardiologist here made arrangements with an American colleague on the west coast of the USA. If it were to happen that a good heart matching my husband's bad one were to become available over there we would be contacted and immediately fly to America to have the heart transplanted into my husband's body. But then the waiting began. My husband, who no longer could work, and I spent our days at home waiting for the phone call that might tell us that a heart had become available and we could go abroad to get it. We hardly dared leave our house lest we miss that call. We waited day in and day out.

The waiting dragged on. And the more we waited the more we found ourselves hoping that some unknown American somewhere would have the kind of terrible accident that would make them brain dead. This occupied our thoughts and, I must admit, we even prayed that something like this might happen. When the phone didn't ring our anxiety grew—as did our sense of desperation. But then we began to have guilty feelings about this and my husband and I consoled one another for a while with the thought that we would, after all, not be actually causing someone else to die. We would merely be persons who, through modern technology, could be benefited by someone becoming brain dead and in this way providing us with an organ we could use. Over the months of waiting, however, we got so we could not bear it. It was not so much the simple fact of waiting as it was our awareness of how our own inner selves had changed. Day in and day out we were sitting there waiting and actually wanting someone else to die. Sure, we would not be actual murderers. But our hearts—I mean, the "heart" that is different from the physical organ—had gotten sullied, dirtied with this relentlessly obsessive desire. Our desire was focused on how we might profit from another's tragedy. When we dared to look at what we were becoming we found what I can only call in each of us the "heart of the murderer." We really wanted someone dead! And we wanted it more than anything else in the world! This got so bad that we could no longer live with ourselves. My husband and I got to the point where we said to one another that there are things in life worse than physical death, even death at an age younger than average. We said: Better to die sooner with a defective heart-as-pump than to live longer with a morally dirtied heart-as-conscience! Keeping the latter pure and intact is far more important to us. So we talked it over, my husband and I. And we decided not to go through with the plan. We told our cardiologist to cancel the arrangement he had worked out with his colleague in America. And since then, even though saddened by knowing our time together will not be long, we both feel much, much better. We feel cleaner inside. We think we made the right decision.

The interviewer thanked them and the program ended. Speaking in Japanese, Ms. Sugita could make a fortunate linguistic distinction that we can’t easily accomplish in English; Japanese differentiates the heart as muscle and pump (心臓 shinzō) from the heart as seat of sensitivity and morality (心 kokoro). The Sugitas wanted to preserve the kokoro even if that would mean foregoing the chance for a new shinzō. The moral heart, they decided, took priority over the hydraulic one. Their decision was final and they said they felt at peace after having made it. I found this fascinating. Since viewing and thinking about it I have conducted a small experiment. On various occasions while lecturing on this subject, I have provided a synopsis of the story of the Sugitas to groups of people both in Japan

and in the United States. And the reactions in both contexts were different—in very interesting ways. Audiences in Japan immediately grasped what ethical values were involved when the Sugitas chose an earlier physical death rather than living longer but with their moral centers sullied by desires of this type and intensity. All the Japanese discussing this episode with me stated that the Sugitas had made the right decision, an admirable one. They had understood and expressed true morality. By comparison when I have discussed this same episode with American audiences, I have usually received a much more mixed response. Some persons have reacted with open puzzlement: “The Japanese are really different, aren’t they?!” Others tacked onto such surprise a statement about the need to respect cultural variation: “They are surely different. But the Japanese have a right to be different. There is no need for us all to see things the same way.” Less charitable responses came, however, from persons who found something downright shocking in the decision reached by the Sugitas. “Huh? What a waste! Perfectly good organs that are of no use to the original owner ought to be used by others. That couple had no part in the death of any donor! Donation and transplants are good things—one of the great things given us by modern medicine. Why don’t the Japanese get the point?” And on two occasions it was suggested to me that the Japanese clearly do not understand those “Western” values that, giving emphasis to altruistic love, find beauty of donating organs. Importantly, however, a handful of the Americans to whom I presented the Sugitas’ dilemma expressed admiration for them in how they made their tough decision. It was not one, they admitted, to which they themselves would have automatically come but, on consideration, it was understandable—and noble. These persons also said that, having until then always focused on the good deed of the donor, they had never before considered how complicated and different things might be from the other, the recipient, side. And it’s there that the real moral problem might exist. It is this small group of American responders that most interests me. They showed a capacity to understand the qualms of the Sugitas. Moreover, the fact that even a handful of Americans register admiration for the Sugitas as having taken a moral stance is, I think, telling. It says, I think, that even here it is possible for individuals to enter at least imaginatively into a social and intellectual milieu wherein values other than those that are most strong and vocal within contemporary America dominate the moral map. But more specifically for even some Americans to admit that the Sugitas raise a legitimate concern suggests that they too recognize what will be expanded upon—namely, the problem of inordinate desire. In other words, what Fox and Swazey, when focused on the mindset of the organ recipient, perceptively called the “tyranny of the gift” is also a problem of morality and conscience. The Sugitas, it needs to be remembered, chose to end what had become an obsession with getting another person’s organs and did so in order that they might return to having an untarnished conscience. And in Japanese the term for such a clear conscience is ryōshin, literally a “good heart.”

Moral Fear It is important, I think, to notice what Ms. Sugita did not claim to be the reason for her worries. She made no mention of worrying that, once having another person’s heart inserted inside her husband, he might undergo some kind of personality or psychological change—suddenly craving foods he had never liked or now loathing things that, prior to the transplant, had delighted him. In other words, her anxiety was not the one that sometimes surfaces in the United States—namely, that items of “personality” may inhere in physical organs such as hearts and therefore accompany the surgical transfer, thus changing these aspects of the recipient. In Japan as in the United States this kind of worry on the part of organ recipients is sometimes expressed, although in my own reading I have seen less reference to it there than here.1 The variety of fear that concerned the Sugitas had nothing to do with traits or predilections that might inhere in physical organs. They were concerned about what I think needs to be called moral fear. The Sugitas worried that something central to both of them was getting morally dirtied by the whole business of wanting someone else to die so they could get that person’s organ. What was moving into jeopardy was “central” in the sense of being essential and at their core—but not imagined by them as having a specific locus or connected to some specific organ of the body. Their word for it would be the one used by every Japanese to refer to what each would be most eager to keep uncontaminated—the kokoro. Some aspects of our own society’s approach to morality may be clarified by looking briefly at the role of the kokoro in Japanese life. Nothing in the West corresponds precisely to it. Most often explicitly analogized to a mirror, a person’s kokoro also combines a surface—which may be dirtied by the “dust” of selfish intentions and acts—with a detectable “depth” wherein they can see and again connect with that far better and pure self one has the capacity to be. Is and ought are both there. However, the deliberate act of looking into the kokoro will automatically draw one past the dirtied surface and into where the strong goodness, pure intentions, and positive emotions of one’s true self are ready to be activated.2 For us in our own society, too, being “bothered by conscience” often registers as bodily expressed emotions: a lowering of the eyes, a constriction of the voice, and so on. Most Japanese have long assumed the truth of something only recently recognized by some philosophers here—namely, that our emotions have cognitive content. We come to know certain things through how we feel; there is, we have had to admit, not a little rationality in our emotions (Solomon 1976; de Sousa 1991). Our human emotions have always, most Japanese assumed, been perfectly at home in the kokoro and they sometimes, by showing up when we would rather they not do so, call us to examine whether the self we are is the self we want to be. Gabrielle Taylor, a philosopher, calls these the “emotions of self-assessment” (Taylor 1985: 14–16). We see all this playing out dramatically in what happens in the lives of the waiting Sugitas. Each makes an assessment of the condition of their kokoro, concludes that current intentions—literally, the desire that some anonymous

person die so that they receive the "windfall" organ—had gotten deeply tarnished, finds this intolerable, and decides to reverse the decision so as to restore the kokoro to purity. If doing so requires that the person die earlier than they would prefer, so be it. Longevity is a value but not if purity will have to be sacrificed to get it. In this system of values the fear of dying sooner gives ground to what I call moral fear, a fear of having one's integrity perish. To be sure, life and more of it are to be treasured—but not elevated to where physical life, as an ultimate value, trumps all else. The Stoic thinkers of Greece and Rome would have understood this perfectly—and agreed. Takie Lebra, an American anthropologist whose insights into Japanese society have had no equal, traced the connection between societal values and ethics there. She astutely describes the role of inner purity in Japan.

[In Japanese society] the pure self is identified, morally, as sincere, selfless, altruistic, while the impure self is identified with calculation and the pursuit of self-interest. Lacking a dogma to serve as the ultimate value standard, the Japanese make moral judgments in accordance with the presence or absence of such pure, sincere motives. (Lebra 1976: 162)

The language here is one of impurity, not one of sin. Conscience has a large role but does so without needing reference to God or even the gods. (This, incidentally, suggests what’s without basis in the contention by some theists that a transcendent deity and a theology of sin are requisite for moral depth and the development of the conscience in an individual or a society.) The Sugitas’ concern for purity surely does not let them off easily. Although they, at least to my memory, made no reference to religion during their TV interview, persons in Japan who do wish to connect religious values to organ transplantation will tend to come to conclusions very different from those made by religion-sensitive persons in North America, especially if the latter ignore the mental and moral state of organ-waiting recipients. In his Transplantation Ethics, Robert Veatch gives an overview of the perspectives on this procedure offered by various religions. And there he quotes Masao Fujii as having stated in print that “Buddhism affirms the idea of giving one’s internal organs to others, but the idea that a recipient would receive an organ with the desire to prolong his own life is not supported” (Veatch 2000: 11). Fujii’s straightforward sentence encapsulates what in this book I refer to as the problem of biolust diagnosed by many Japanese. In a word, it was biolust and the damaging impact it had been making on their own monitors of moral integrity, their kokoro, that made the Sugitas cancel their doctor’s plan for how they might get an organ. A word of caution is needed here. For reasons made clear below, it would be a mistake to get too physiological when referring to the monitoring of one’s kokoro as an act of “introspection,” literally “looking inside.” The physical object most often analogized to the kokoro is a mirror and it is not a mere coincidence that mirrors, in addition to being “stand-ins” for the gods, kami, in the shrines of Shinto, are

also that which the visitor to such a shrine confronts face-on on entering one. This type of simple mirror simultaneously serves as a non-anthropomorphic focus of worship and a catalyst for the human standing before it to engage in inspection of their own kokoro. Purification is expected to be in process. Historians are not agreed on whether or not the origins of Japan's use of this motif are in China but there can be no doubt that the Buddhism originally at home in India was adjusted within China, Korea, and Japan to fit each society's values.3 And in them all the mirror is a strong and pervasive symbol. And what is consistently pointed to is the expectation that a truly human life will involve an ongoing examination of how closely it conforms to what is normative. Confucianists, Daoists, Buddhists, and Shintoists could each articulate the specifics in somewhat divergent ways. But what they had in common was the assumption that both religious practice and humanistic philosophy provide impetus and contexts for persons to see if individual lives are matching the norm—one sometimes referred to as transcendent but more often envisioned as expressed in the harmonious communal life of virtue-practicing humans. To be sure, community or social values would be strongly reflected in the norms by which the Sugitas were measuring their own intentions. They were not trying to show themselves as "individualists" when deciding to go against the organ-retrieval plan laid out by their physician. The "self" to which each of them had decided to be true was not an entity separated off from other humans. In an astute analysis of how the relations between culture and thought may differ in different contexts, Thomas Kasulis, a philosopher who knows Japan well, presents America and Japan as two contrasting prototypes in a recent book. In one such as our own, he writes, "to 'find myself' means to discover who I am independent of external factors. This may lead to a strong sense of autonomy. [. . . .] I am in control, at least theoretically, of how (and often to what) I relate." In a society such as Japan's, by comparison, "to 'find myself' means that I see how I am interconnected with, and interdependent with, many other entities. My self-discovery discovers the interdependence defining my life" (Kasulis 2002: 61). Kasulis's discussion can be very useful—as I hope to show later in discussing taboos.

Housing Problems There is a long history in the West of attempts to locate a single organ that is responsible for human emotion and cognition—first in the heart and more recently in the brain. In some ways the resistance of some Japanese thinkers to the concept of brain death arises out of their suspicion that, even though persons in the West no longer try to locate the human being’s mental, emotional, and moral life in some center within the chest, the search for what might be its true home has not been entirely abandoned—although now for medical rather than religious reasons. Even today the West has seemed loath to abandon this particular housing quest. No sooner did Dr. Barnard cite his surgery as proving that the heart is only a pump—that is, free of any nonmaterial heart element—than there appeared a

concerted effort in the West to stipulate exactly where within the body the core constituents of human life will, in truth, be found. In 1967 Barnard gave notice of a vacancy, and within a year, in 1968, Dr. Beecher and his committee at Harvard declared that, since it is the condition of the brain that in fact tells us all we need to know, that vacancy had already been filled. But that only began a fairly confusing process of trying to pin down exactly the right address: Whole brain? Brain stem? Cerebral cortex? In some ways the search still goes on. There can be no doubt of the vastly improved empirical results—in terms of diagnoses and therapies—that result from being able to have a sharply localized focus and, in addition, from working with the hypothesis that a vast number of medical problems have their basic "housing" somewhere within the brain. The last few decades' findings from research on the brain, including a good deal of it done in Japan, have been so impressive and continuously promising that Takeshi Yōrō, a professor of pathology at the University of Tokyo, already in 1989 described our entire epoch as one shaped by a "brain-only theory"—as if it explains everything now (Yōrō 1989). Japanese researchers skeptical of the brain-death diagnosis of death do not dispute this trend. What intrigues them, however, is what gets overlooked or systematically excluded from research in the West. The conceptual "housing" of various human capacities (or incapacities) within specific organs or parts of organs tends, they claim, to block out evidence against so exclusive a positioning. When whole systems are taken into account, bodily functions—and especially the meanings we fasten on to them—are not simply housed within specific organs—including that organ called the brain. Tomio Tada, a retired professor of immunology at the University of Tokyo and widely admired both as a scientist and as a playwright, has among his books one that could be titled "On Interpreting Immunity" (Tada 1993a). In this influential book Tada built upon his own sophisticated knowledge of the body's immune system to insist that this system, found throughout the body, is in fact superior to the brain in terms of what, right within the activities of daily life, continuously differentiates my "self" from what is not my self. Organ rejection occurs when that system identifies what is alien to it. Of course, we may provisionally "trick" the immune system through a substance such as cyclosporine. But this does not alter the fact that this is precisely the system that does the active differentiation. Although the "self" imagined by my brain will itself be undergoing some change all the time, the immune system will spot those germs and viruses that will preferably be kept at bay and out of the changing entity that "I" am. Tada sees no reason why the conditions of the brain are routinely given preferential treatment as having "meaning"—that is, as telling us whether or not a human being is authentically "alive" or whether viable "personhood" is or is not present. Why the brain and not the immune system? There is no intrinsic reason for the fact that we interpret and privilege the one and ignore the other. Is the immune system simply too "lowly" to have caught our attention in this way? Perhaps it is that we have only gotten accustomed to piling a certain amount of human romance onto the brain. Given his description of what he sees as a "super system" in that it gives

us our immunity, it is not surprising that Tada's work has raised scientific doubts about the brain-death criterion for doing transplants.4 Another skeptic is the late Kōshirō Tamaki. Although less widely known among Japan's public, this University of Tokyo scholar's research also challenged the assumption that it is in the brain that my person and my life-worthiness are housed. The "laboratory" used by Tamaki was one with which the majority of persons in Europe and America are unfamiliar, although some recent works in the West have begun to test its usefulness. Although Tamaki was a specialist in Indian philosophy, his approach to the Sanskrit texts he knew well was empirical. That is, since the texts that most interested him were ones describing what occurs during various modes of meditation, he set about testing their validity by replicating what they described. His graduate students knew, for instance, that their teacher would at times seclude himself for a week or more to engage in intensive meditation, periods during which he would test the accuracy and utility of things described in Indian texts composed approximately two millennia ago. Tamaki assumed something that even now in the West is gradually being recognized—namely, that, because the early Buddhists had participated in the Indian yogic tradition, their "meditations" were also mind-body experiments. Here I can scarcely do justice to the array of things Tamaki wrote about. So I will select something that should be noncontroversial among persons who today practice these forms of meditation and I will note the implication he saw for the ethics of transplantation. During the 1990s his writings reflected things he had begun to conclude on the basis of these experiments—and how they might apply to what some even in Japan were claiming about how what is in the brain will give us all we need to know to answer the biggest questions about life and death. Tamaki familiarized himself with up-to-date research on physiology. In a 1993 book he wrote that what most interested him was found within the brain stem.

Within the lowest part of the brain we find that the medulla oblongata and the spinal cord are connected and comprise the "medulla oblongata-spinal cord." What happens here relates to the regulation of our posture, our protective immunity, and the functioning of our internal organs—i.e. what our autonomic nervous reflexes do to control our respiratory, pulmonary, and circulatory functions. It can be said that in this location the autonomic nervous system maintains the stability of our body's internal environment, its homeostasis, and by providing these reflexes it proves to be absolutely basic and indispensable for being alive. (Tamaki 1993: 212)

Tamaki insisted on linking, even as a hyphenated word, the terms "medulla oblongata" and "spinal cord." He did so even though many anatomical charts position the former within the brain and the latter outside it, thereby privileging the brain as an isolable center of control. Recent science, importantly, is on Tamaki's side.

One source notes that the medulla oblongata “blends imperceptibly into the spinal cord” (Marieb 2004: 450) without any clear structural change and that the former is sometimes referred to merely as the “swollen tip” of the spinal cord. This means that, unlike the cranial bones that clearly “define” the top side of the brain, there is no comparable dividing line at its bottom. Tamaki’s main point, however, lies in the “fit” between the physiology described above and the experience of persons engaged in the traditional meditation practices of Indian origin. He uses a word, samādhi, known better to his readers in Asia, to refer to that state in which, through meditation in the Zen and other styles, both body and mind are sensed by the meditator as unified, at ease, and concentrated to a unique degree. In a 1996 publication Tamaki wrote the following: What, from what we know about what’s basic to meditative samādhi, is related to this description of how the all-important medulla oblongata-spinal cord provides maintenance for a proper bodily posture and the control of one’s breathing? The three basics for a meditator’s samādhi are: control of the body’s posture; control of one’s breathing; and control of one’s mind. If one gets into the right posture and gets the breathing regular and smooth, the mind as well will then spontaneously become settled and stabilized. (Tamaki 1996: 54)

The interesting thing here is that, although the kind of meditation to which Tamaki refers had always been regarded as both a mind and body process, his focus on what is structurally and functionally indivisible about the medulla oblongata-spinal cord goes far to explain why this is physiologically so and not merely so because ancient texts said it is so. Of course, what this shows also gave Tamaki reason to be skeptical of a medical practice based on the very shaky belief that the brain is a discrete organ, neatly separated off from the "lowly" spinal cord, and housing all that needs to be known to rationalize certain biotech procedures. It's an odd house, one with a roof but no discernible floor.

Analyzing Yuck

If in America even today there remains a considerable gap between, on the one hand, society's professed acceptance of organ donation as a supremely good deed and, on the other, a concealed reluctance to donate on the part of half of the population, this no doubt arises because—at least to date—even an ongoing public praise of donation has not been able to dislodge from the population what is sometimes called organ transplantation's "yuck factor."5 Many people still find something repugnant in rushing one person into a diagnosis of brain death, cutting open the body, transporting some of that person's organs from place to place in what too much resembles a picnic food container, and installing them in another person's now surgically opened body.6

To what extent repugnance vis-à-vis some new procedure should be allowed to signal the possible presence of genuine moral fear or, alternatively, be seen as merely a socially inscribed habit that can and even should be removed by retraining is central to the discussion of this book. In North America the acceptance of new biotechnologies has for many decades relied on multiple strategies, some involving the open re-education of society but also, as detected by our Japanese critics, a fair amount of unjustified hype and the suppression of uncomfortable evidence. Among many Americans who identify themselves as progressives, there is a disposition in favor of what could be called old-fashioned behaviorism as what works well to overcome the nasty yuck factor. In many ways the implicit model for defusing yuck comes from what, as a professional group, physicians in training will need to undergo. Fox provided the classic account of this in her “The Anatomy: Its Place in the Attitude-Learning of Second-Year Medical Students.” Such students “are inclined to react with some emotion to their first sight of the cadaver, to the first incisions they make on the cadaver, and to dissecting those parts of the cadaver, such as the face and the hands, that tend to convey a sense of its ‘humanness’. Nevertheless, most students soon develop a comparatively ‘objective . . . loose . . . workaday attitude’ toward these things” (Fox 1988: 61). Promoters of new biotech procedures anticipate that, although in a much less accelerated and hands-on fashion, society as a whole will have to undergo its own “training” with respect to visceral repugnance. It will take time. And society will have to be fed a steady public diet of strictly positive terminology: life, saving lives, giving life, modern miracle, wonder of birth, enhancement, altruism, progress, hope, and so on. The other prong of this campaign will consist of referring to yuck factors as merely emotive, unreasonable, and holdovers from an unenlightened epoch when people were in the grip of fears and irrational taboos. Such are the assumptions of an old-fashioned behaviorism that has gone doctrinaire. In it, emotion and reason are opposites. It strives to make us think that a well-directed retraining program will wean us from emotive, primitive, and retrainable responses to things. This assumption is “old fashioned” because contemporary research has increasingly shown that our emotions have cognitive content. Emotions are not just expressive. They are among the things by means of which we learn about our world and how we ourselves fit or fail to fit into that found world. Mary Midgley, an English moral philosopher sometimes called “the scourge of scientific pretense,” has commented directly on why the yuck factor may not be banned in such a facile way. She suggests what is itself unreasonable in the behaviorist’s flippant way of isolating reason from feeling and goes on to make the following application: In the case of bio-engineering, I think this approach has been especially unfortunate. People often have the impression that reason quite simply favors the new developments although feeling is against them. This stereotype paralyzes them because they cannot see how to arbitrate between these very different

litigants. [. . . .] We have to articulate the ideas behind emotional objections and to note the emotional element in claims that are supposed to be purely rational. (Midgley 2005: 270)

This, I think, defines the real task. Certainly some revulsions are based on misinformation or raw prejudice and their owners need to get over them. I can recall a time when Americans would not conceive of eating raw fish with pleasure. Expressing visceral repugnance vis-à-vis interracial dating or marriage was once fairly common in much of the world—Japan as well as America. Today this often goes all but unnoticed. So too with reactions to the open expression of a gay identity; emergence from the “closet,” once sure to elicit jeers almost everywhere in our society, has—in part by becoming more common—forced this particular kind of “gut reaction” itself to become unacceptable in many places. Aware of these cases, we may be tempted to assume that all yuck factors are irrational, have nothing to do with genuine ethics, and will eventually go away. Within human history, however, the arising of a new sense of repulsiveness has led not only to close ethical scrutiny but also society’s rejection of a given practice. Public executions, even cases of drawing and quartering, were all the rage even in “enlightened” eighteenth-century Europe until individuals found them morally repulsive, events they could no longer watch without a physically felt reaction in the chest and gut. At a certain point in our own history, the sale of Blacks as if they were cattle was viscerally repulsive to a Quaker such as John Woolman (1720–72) and led eventually to emancipation and progress toward equality of rights and treatment.7 Midgley’s insistence is that yuck factors deserve to be scrutinized and analyzed, not dismissed out-of-hand as “mere” emotion that will be devoid of meaning. Our repugnance, even when felt in the gut, may be sending us a message, a message about morality, that we should not miss.8

Taboo Dismissal

With headlines as large as those used at the outbreak or conclusion of wars, Japan's national newspapers on February 28, 1999, announced that for the first time in Japan organs had been legally transplanted from someone deemed brain dead. Within hours the news of this event in Japan reached other nations. The March 1 edition of the New York Times carried a brief article about it on its first page. What I find especially interesting are the words used as the heading for this item in the Times and its first two paragraphs. They are worth reproducing here.

Death Taboo Weakening, Japan Sees 1st Transplant
By Sheryl WuDunn

TOKYO, Feb. 28—For the first time in Japan, doctors performed a legal heart transplant today, striking at a taboo on taking organs from patients whom Japanese traditionally do not consider dead. As 100 reporters watched on closed circuit television, doctors removed an ailing heart from a man in his 40's and stitched in a healthy one.

The operation was an ethical and emotional milestone in a country that has long believed that death comes only after the heart stops beating and the body turns cold. (New York Times, March 1, 1999)

Details of the prior day’s surgery were accurate. The way in which what was reported was framed, however, is very problematic. Specifically, taking Japan’s extensive debates on and resistance to this procedure as no more than a taboo and the circumvention of that taboo as constituting “an ethical and emotional milestone” is grossly reductive. There is absolutely no hint in this that Japan had had decades of extensive and intelligent public discussion of brain death, of its scientifically problematic criteria, and of transplantation’s role as a “wedge issue” for the social embrace or refusal of a whole range of new biotechnologies. The word “taboo” clearly refers to all that had been worthless, below the requirements of a modern and enlightened rationality. Taboo is one of those words we use to suggest how others, whether as individuals or a society, are circumventing the need to think something through. The word appears to have been a mispronunciation of terms originally picked up from islanders during a stopover by Captain James Cook in what we now call Hawai’i probably in 1778. The term taboo quickly became fashionable back in England as a way of referring to some thing or some activity forbidden in a “lesser” society— but on no basis smacking of clear rationality. And so it has come down to us. Others have examined this matter more closely. In lectures given at Oxford before his death in 1952, Franz Steiner, a Czech Jew whose family had been murdered by the Nazis, demonstrated just how loaded with cultural prejudice the term taboo had been from the outset. In passages in the journals left by Captain Cook, Steiner detected “that superior and slightly irritated indulgence which some people have for others who cannot think clearly begins to creep into accounts of taboo” (Steiner 1967: 25–6). Later discussions of the term almost invariably distinguished persons and societies bound by primitive taboos from ones advanced through social and cultural evolution. The weight of such assumptions has been hard to throw off and, I suggest, the item in the Times retains them. That is why when a society such as that of the Japanese is said to have finally jettisoned a taboo it is also said to have passed “a milestone.” It becomes more like our own. Concerning the taboo, Alasdair MacIntyre invites us to engage in a thought experiment. Reverse, he suggests, the perspective of the observer and the observed. Imagine, that is, how an islander in Hawai’i in the late eighteenth century would have viewed the ethics of Captain Cook or, even more interestingly, those of the Victorians who, especially when composing the ninth edition of the Encyclopedia Britannica (1875–89), were determined to show a very wide evolutionary gap between cultures steeped in taboos and their own. MacIntyre suggests that our imagined islander will not likely view Victorian society as more advanced than his own, at least not in its

morality. However pronounced, the islanders’ word “taboo” was, MacIntyre claims, meant to function as does our word “ought” (MacIntyre 1990: 180–91 and Stout 1988: 200–6).9 The New York Times deployed the term “taboo” to imply that the matter was already settled. But this is because, I claim, the social promotion of new biotechnologies depends not only on flooding the public discourse with lots of positive terms referring to it but at the same time keeping negative and troubling evidence out of the public’s view. MacIntyre invited us to conduct a thought experiment and to envision “islanders” whose taboos might, in fact, sometimes be language about an “ought.” Perhaps the fear they express is a moral fear. Besides, such hypothetical islanders need really not be imagined. Real ones exist—on the archipelago of Japan. And when they say that something ought not to be done it might be to our own profit to examine and analyze the reasons for saying so. Most of them have been articulated in published forms.

Chapter 5
FEAR AS DISCOVERY'S INSTRUMENT

Fear and Rationality

Something interesting turns up when ordinary Japanese are asked about their attitude concerning transplantation. Having had multiple opportunities to ask a fair number of people in Japan how they would personally respond to being part of a cadaveric organ transplantation, I have been surprised at the candor with which negative feelings were readily expressed. Among many with whom I spoke, especially during a year there from 1998 to 1999, there was little hesitation to say outright that the prospect of being even somewhat alive during a surgery to remove organs—with total death as a result—was a fearful one. Being vivisected would, they said, be a horrible experience. And just to imagine what it would be like to be even minimally conscious during such a process makes the whole thing repulsive. And—on a point that will be examined further below—persons I spoke with told me that their fear would extend also to the experience of being the person on the other side of the transaction. That is, to be the recipient of organs from the body of someone who might have been even marginally alive during the removal of their organs would sully the gift. Who would want to be alive when it would be at the cost of having another person cut up in such a macabre fashion? It is, then, fear vis-à-vis the whole transplant transaction that gets readily expressed by many Japanese when asked. And this is, I found, often accompanied by an articulated sense of curiosity as to why so many Americans, at least according to what they have been told, are comparatively relaxed about undergoing surgery of this type. Why would anyone be so "nonchalant" (heiki in Japanese) about undergoing something quite that gruesome? At the time when I was asking Japanese about this matter, I had just read Laurie G. Futterman's statement that for Americans "organ and tissue transplantation has progressed from a bizarre notion to an established lifesaving reality" and my sense was that, by contrast, for the Japanese this remains very much "a bizarre notion" and that what Futterman refers to as "progress" was not ready to be so readily greeted in those terms (Futterman 1998: 168).

*Chapter 4 of the first draft (2002). The original draft concluded with a fragment reflecting on Tomio Tada's Noh play "Well of Ignorance" (1993b, 1994), discussed in Chapter 4, n. 4.

The present chapter is one in which I will suggest that we too should not too quickly dismiss what is bizarre in these things—or even assume that an attitude of fear with respect to them is unworthy, ignoble, or something we should conquer or repress. I will, that is, argue that there is something deeply instructive in such fears and that, in addition, these are fears not directed solely toward the prospect of personal vivisection but, in spite of not being so easily articulated, fears that are being registered vis-à-vis an entire trajectory of biotech experiments and applications. These fears are, I hold, an especially significant outcropping of what Leon Kass has rightly referred to as “the wisdom of repugnance” and the bioethicist Arthur Caplan calls “the yuck factor” (Kass 1997).1 I will, in addition, try to show that there is something blatantly unscientific in the manner in which much of the American medical community has tried to remove from view that of which such fears, ones the American general public itself surely shares, is itself a form of evidence. I begin, however, by noting something that appears with great frequency in Japanese writings on this matter—namely, that there is something itself “bizarre” about nowadays believing that the “proof ” of a person’s death comes by seeing the famous (but problematic) “flat line” on the EEG or ECG monitor and by, as a consequence, dismissing as important what had since “time immemorial” been used to diagnose death, namely, the usual “three indicators.”2 In Japanese the “three indicators” (san chōkō), to which there is repeated reference, are cessation of palpable heartbeat, the absence of perceptible breathing, and observed dilation of the pupils of the eyes. Much Japanese writing on this topic finds something both peculiar and suspicious in the alacrity with which many, perhaps especially in America, have claimed that these three signs, used by human beings in a wide variety of cultures and apparently from a time before the keeping of historical records, are totally unreliable. The fact of the matter is that, although not recorded mechanically, the traditional three indicators were empirical in the very best sense—that is, they relied entirely upon evidence registered on the senses of an observer. Persons wishing to test for symptoms of a death either having taken place or in process would put fingers to the chest and pulse points of a dying person, put the ear proximate to the nose and mouth to see if breath was emitted, and look closely at the pupils to observe their widening and unresponsiveness to any light directed at them. It is not that human beings, especially in such matters, began to see the superior value of what we call data registered on the senses only when in modern times the word “empiricism” was coined and a large portion of the scientific community went on to judge such sense data as the only allowable form of proof. Even from fairly archaic periods in human history persons were not deemed to be “dead” without showing some bodily signs of being so. And what the Japanese call the “three indicators” appear to have had fairly widespread use. Bodies which did not show them were not, except in cases of capital punishment, buried or burned. And it is not the case that they ruled out the possibility of misreading the signs or even of there being some revival of life even after the signs—or at least one or two of them—had seemed to

be present. The whole notion of the “wake” is, as suggested by the anthropologist Namihira and others, at least in part an extension of the impulse to gain empirical evidence (Namihira 1994: 34). That is, although we ordinarily think of such activities as part of the religious funeral process, the fact is that the presumed corpse is monitored for a certain period of time. This serves as a double-check on the “three indicators” in terms of their continuity through time, thus providing reasonable evidence that death has taken place in fact and a disposal of the body is warranted.3 Why, then, in recent decades did some persons and communities decide to overrule these traditional, observation-based criteria and opt instead for decisions based on flat-line readings on technical equipment? There are, I suggest, two central reasons. The first is that the move from the “three indicators” to the mechanical flat-line constitutes a clear instance of what Gernot Böhme, a German expert on the history and philosophy of technology, has astutely described as a modern process involving “the gradual exclusion of personal experience from relevancy for ‘objective’ knowledge at all.” He writes: This abstraction from the personal context meant that natural science strove as far as possible to rid itself of the human senses as a medium of experience. In its essence, modern natural science is not based on sense-perception but on measurement—that is, instrumental experience. This process of replacing the human senses by instruments is a very characteristic one. Most cases of early modern science begin with instruments that support or extend the human senses. But after a certain period, there is a turn toward instruments to define the phenomenon or effect to be observed, so that the question then arises whether the human senses are appropriate to measure or estimate the phenomenon under consideration. The development of temperature measurements, e.g., the development of the thermometer, is a case in point. (Böhme 1992: 32)

The development of the EEG and the ECG provides equally fine examples of this process. The very existence of the new technology comes to disqualify the older use of the human senses to estimate the phenomenon. And what is especially fascinating—and troubling—about substituting the new indicator for the older is the assumption that only one index may and should replace what had been three indices.4 Moreover, if it is the case that death is a process rather than an exact point in time, it would seem far better to judge its presence on the basis of multiple indices rather than a single one. And this brings us directly to the second reason why we have, perhaps especially in North America, been led to believe it necessary to make this switch in criteria—namely, the fact that, as noted already decades ago by Jonas, the Harvard redefinition of death was and remains driven by utilitarian concerns.5 Kyoto University's Hisatake Katō, a leading philosopher and expert on bioethics in Japan, gets to this matter directly:

Into the unacknowledged presupposition that "death" means one thing and one thing only is also packed the assumption that, as long as the notion of brain death is taken to be the basis for determining death, then transplantation of organs is permissible. However, the truth of such an assumption is anything but self-evident. Such a hidden assumption covers and conceals the usefulness that is part of defining death in this fashion. Getting recognition of this new concept of "brain death" as what will now provide the definition of death is what allows people to assume that all kinds of legalized dispositions [of the corpse] are now in full agreement with what had been the understanding of death from ancient times. This, however, prioritizes usefulness as what is foremost in the conceptualizing of life and death and this is a change that turns the traditional understandings upside down. (Katō 1986: 64–5)

That is, the whittling down of three signs to a meager one and the forging of a new definition to fit—like people cut so as to fit into Procrustes’s bed—was, in the final analysis, driven by the eagerness to get organs legally. But there is more. As science it would be counterintuitive to replace what had been at least three windows for empirical observation with only one! Moreover, as suggested by Böhme, we may in such instances be allowing ourselves to be beguiled by an instrument that makes a mechanical measurement. But what seems to have made this waiving of the standards of higher objectivity tolerable to many is the fact that doing so makes it more socially allowable to believe that death is a point in time and that, therefore, it will be from a certified “corpse” that reusable organs will be excised. That this is an instance of jettisoning more empirical procedures and putting in their place a set of unexamined assumptions and unverifiable beliefs would seem to be the case. Especially when pushed by utilitarian considerations, science as we have known it seems to lose its own way. Perhaps, then, the Japanese public expressing fear vis-à-vis the transplant is not expressing something irrational but, on the contrary, is articulating in this way a concern quite natural to people who, having in the past had what all considered to be justly empirical ways of determining death, are now deprived of these and, as a facile substitute, are offered a system involving technologies in which they are asked to invest their trust. Family and friends, those persons ordinarily most close to the dying person, have in these modern contexts been twice-removed from the kind of observations important to them. Not only has an instrument of measurement replaced sense perception—that is, the gap between the record on the EEG and that on the eyes of the attending nurse—but the family and friends are told that a “death” has occurred, while not allowed to see the “corpse” with their own eyes. Mere hearsay has replaced what could easily be seen to contradict it. Michi Nakajima has caught the essence of the problem here with a characterization of it as an “unseen death” (mienai shi) (Nakajima 1990). Precisely because any close look at the body of the brain-dead person would, in fact, disconfirm the diagnosis of death, it has become important to those administering things that the family and friends not see the physical body involved. Indignation

at the prospect of being so excluded can lead to a strident rhetoric describing this as “murder committed in a secret chamber” (Amagasa 1992). Nevertheless, the problem posed by this is very real. As cultural anthropologists have shown, a good portion of the authority commanded by modern medicine depends on “mystique”; the role of the white coat is not that different from that of the robes of a priest (Iryō jinruigaku kenkyūkai 1992: 82). The creation of mystique may be “rational” in terms of the sociology and psychology of professionalization, but it circumvents and may even contradict the rationality identified with science. And it is my suggestion that, when physicians are reporting a brain death, the mystique of machinery and of white coats attempts to supplant the much more inherently scientific rationality that had always been part of empirical observations of bodies on the way to becoming corpses. That is to say that, in terms of adherence to the criteria of science it is the family and friends, skeptical about the information being parlayed to them and voicing doubts about it, who show the greater rationality. Their wish that they might be able to see the “evidence”—and thus to test whether the still ruddy and breathing entity is a corpse or still a person—is totally rational. The impulse to go back to the classic “three indicators” is decidedly not to revert to belief but, on the contrary, a refusal to be credulous. This, I suggest, is a domain in which, in spite of attempts to convince the public of the contrary, it is a contortion of the situation to depict such expressions on the part of family and friends as mere “sentiment” or the residual playing out of “emotional” attachment to what is now, it is claimed, surely a corpse.

A Noncompliant Public

Although long-standing cultural differences between Japan and America can account for much of the difference in how these two societies today respond to the technology of the transplant, we may not, I think, ignore the evidence of differences in how this technology has been promoted, mostly during the past few decades, in America and, by contrast, questioned in Japan. Although later chapters will discuss the details of how Americans were conditioned to assume that the organ transplant expresses the quintessence not only of modern rationality but also of their Judeo-Christian tradition, I here wish to consider some implications of the evidence that even today the majority of Americans are not organ donors.6 My view is that Japanese writers who claim that an eagerness to see the "three indicators" is fairly universal are correct. Just as the "wake" seems to be a practice across the span of time and cultures, so too the eagerness to have empirical evidence of death is universal or nearly so. This means that among Americans too we can expect to see persons and communities concerned that such evidence has, in fact, been fudged—a concern that takes expression as fear vis-à-vis the transplant. And, as suggested already, some forms of fear must be understood as evidence of rationality rather than its opposite.7 Most Japanese I queried concerning transplants responded, as already noted, with candid expressions that the prospect unnerved and frightened them. While

in Japan I decided to see how this might differ from the reactions of some of my fellow Americans. So I asked some of those I met there about their feelings—most specifically, by way of a question as to whether or not they had, while back in the United States, signed on as organ donors. Even though my survey was not conducted with the controls and sampling requirements that would be in place if done by a sociologist, I suspect that my findings would not be greatly different from those of one so conducted in the United States. One important aspect of my queries, however, was that my respondents were questioned orally and in the presence of myself and sometimes others as well. This was intentional and important. What I found was that, in almost every case when an American stated that he or she had not signed on as a donor, this fact came out reluctantly and accompanied by some form of excuse. In spite of the fact that many of them had recently been resident in America, where the promotion of this is extensive and where becoming a donor is a simple procedure and part of obtaining or renewing a license to drive, such persons typically explained their nondonation in terms such as: "I have not yet gotten around to it" or "I didn't realize becoming a donor was so easy." Then in the continued conversation I would point out that even in America nondonors are the statistical majority. This both surprised and somewhat consoled the otherwise embarrassed nondonor. A further question asked whether fear of the transplant might not in fact be part of the decision not to donate. It did not wholly surprise me that, once an ambience was present allowing for the expression of such fear, most nondonors admitted to its role in their viewpoint. Encountering so many cases of this, I began to question the accuracy of the statement by Futterman, quoted above, that for Americans "organ and tissue transplantation has progressed from a bizarre notion to an established lifesaving reality" (Futterman 1998: 168). I began to see such a statement—and it is not atypical among those promoting the transplant in America—as a prescriptive one and decidedly not the descriptive one it purports to be. In other words, for many Americans the transplant remains a bizarre notion. And it remains so in the face of an ongoing campaign, one that has garnered religious and philosophical resources to itself, to convince Americans that the transplant is "an established lifesaving reality." This is where I suggest the need for being skeptical. Prescription is not description. Advocacy of something as an "ought" or even a desideratum within society may not be put forward as if it depicts a state of affairs that already exists. In fact, the evidence is to the contrary. The very need for such an ongoing campaign is itself proof that for many Americans—perhaps even the majority of them if the rate of donor signing is an index—the transplant is still a bizarre notion. And to be skeptical here, as in many instances, is to favor more scientific rigor. My hypothesis is that the reason why individual Americans show embarrassment in "confessing" to being nondonors is that an ethos has been created that makes nondonation into the equivalent of an anti-social act. As looked at in detail in a later chapter, the organ transplant is promoted as the quintessential altruistic act and nondonation has come to be equated with selfishness. In America, the United Network for Organ Sharing (UNOS) and other organizations bend every effort to

convince what is a still resistant and noncompliant public that the signing of donor cards is both procedurally simple and ethically an unqualified good. "The only cost is a little love" is the rather disingenuous bumper sticker seen on the New Jersey turnpike. "Don't take your organs to heaven; heaven knows we need them here," declares another on an auto in Philadelphia. Yet all of this takes the resistance of the public as a problem, not as a datum. In keeping with what I write below about the proposal by Jonas that fear can be heuristic, I here wish to suggest what might be gained by a switch toward seeing such fear as something that both exists and needs to be interpreted rather than dismissed out of hand. This is, I suggest, a procedural turnabout. Instead of thinking of the general public's expressed and unexpressed fears as merely a residual obstacle to be overcome, we need to recognize them as constituting a body of data which in its own recalcitrant manner is addressing us. To date there has been a deep-seated and institutionalized refusal to take these fears as such a datum. Instead they have been dealt with solely as an irrational instinct that can, if sufficiently countered by reason, moral urging, and an ethos defining transplantation as a "medical miracle," be moved out of the picture.8 This leads me to conclude that, although Americans are much more ready than their Japanese counterparts to donate organs, the difference is far from absolute and the phenomenon of fear vis-à-vis procedures of this sort appears to be fairly widespread in both cultures. The difference is that in Japan there has till now been an ambience permitting the public expression of such fear, whereas in the United States, since the moral high ground was captured early on by the advocates of transplantation, the expression of fear has been for the most part treated as shameful and evidence of some kind of "selfishness" on the part of the person involved.

Organ Scramble

Perhaps it could be argued that it should not much matter that the notion of brain death is conceptually weak and technologically hard to specify. Maybe such things are not germane. An individual's fears of being vivisected, while perhaps a natural part of the psychology of such a situation, should not be allowed to interfere with their moral obligation to become an organ donor. The altruistic objectives deserve, even through blatant promotion, to be made so clearly present to individuals that lingering fears will be put away. And if this is true for individuals it should also be the case for whole societies. What is most needed is education for donation.9 This would suggest, therefore, that if the Japanese people tend to give rather ready expression to such fears, what they most need is to be educated into a higher sense of ethical responsibility—to become less fearful and more ready to be of help to their fellow human in need. Perhaps they need to grasp more deeply the importance of the ethic of the Good Samaritan. The position I favor is clearly different. The problem here is decidedly not one merely of insufficient "education" on this question. In fact, I will here try

to show that, although fear of the transplant usually takes concrete shape as a worry about being personally vivisected, much more is involved. The focus of this discussion will be on why the question of brain death and organ transplants is not a question of brain death and organ transplants alone. That is, I will argue that there is good reason for listening to Japanese who argue that the implications of this practice are probably as profound as anything that human beings will of necessity confront in the coming decades and perhaps even during much of the twenty-first century. The place to begin may be with a contrast—again one between Americans and Japanese. Many Americans, including most bioethicists among them, have come to view the problem of organ transplantation in a fairly limited way—namely, as that of how to increase the supply of available organs for persons in need of them. By contrast many Japanese, especially the bioethicists among them, see the problem of the transplant as equivalent to the opening of a Pandora’s Box of problems—technical, philosophical, religious, and ethical. Another way of putting this is to note that, since those in America who promote the transplant assume it to be already proven as a “good,” the remaining problem is that of how to increase the quantity of available organs. Efforts at moral suasion are part of this project—and the reason why churches and synagogues in America are enlisted in this campaign. Yet the statistical gap, instead of closing, widens. The demand far outstrips the supply. Guilt infusion becomes a method for changing this. The campaign, as well as public media in concert with it, suggests that individuals are dying because their fellow citizens have refused to donate. But one of the salient features of all of this in America has been the proliferation of techniques and ethically questionable moves in order to make up the difference. How to increase the number of organs has become the driving force here. In the course of his analysis of this trajectory in the United States—a trajectory outlined in detail in one of the most important American publications on the transplant—Naoki Morishita, an ethicist at Hamamatsu Medical University, has summarized it well: Although [in America] it had been hoped that traffic accidents might provide a sufficient means for coping with the shortfall problem, there turned out to be a statistical decline in the number of young people who can be declared “brain dead.” In spite of this, the attitude seems to be that, precisely because of this shortage, such organs are even more deeply desired and even more unusual measures can then be employed to get them. Does this not strike us as upsidedown reasoning? Yet in America such twisted logic goes virtually uncontested. And then all kinds of measures are taken to fill in for the lack of organs. It starts with taking organs from persons who may still in fact be alive. Next comes the use of parts of an anencephalic infant. From there it is the xeno-transplant. And this extends even to the practice of conceiving and giving birth to new children just so that their organs, through transplantation, may save the lives of existing ones. Anything and everything will get to be brought forward for use once things start going in this direction. Matters which had initially been thought of as purely voluntary acts of goodness turn soon enough into ones conceived of in

terms of “obligation” and from there degenerate all the way into commercialized procedures. What you get, then, is the buying and selling of body parts. (Morishita 1999: 428)10

­Morishita is correct. In the effort to procure organs, the idea that donors might be somehow paid has begun to receive serious attention. As perhaps an opening wedge in moving public sentiment to accept this kind of thing, Pennsylvanians are now offered a $200 reward, admittedly still a paltry sum, for being organ donors. But there have been gap-filling measures beyond those in Morishita’s rapid overview. The methods of extension are numerous—and at each point in the expansion process societal and philosophical assumptions have to be made and hurdles have to be jumped. One method, adopted already in fact in some European nations, is to eliminate the need for a person to have explicitly declared willingness to have organs taken if found to be brain dead. That is, all members or, at least, all adult members of a given society are presumed to be donors unless they have, as individuals, explicitly stated that they do not wish to be so used. This, the so-called “presumed consent” strategy, has been argued for as one that Americans too should adopt—so that, it is claimed, a deeper sense of community might be expressed. Another possibility, one that uses children to provide organs for other children, sees the “harvesting” of the organs of anencephalic infants as morally justifiable. The Council on Ethical and Judicial Affairs of the American Medical Association has, in fact, proposed precisely this—because a neonate who is born with large portions of the brain missing is one who “will never experience any degree of consciousness.” That is, although unconscious activities—including the functioning of the heart, lungs, and digestive tract—are present, those same organs, it is argued, would benefit another child much more. Therefore, although the excision of organs would necessarily bring an end to the life of such child and because there exists a legal requirement—the “dead donor rule”—that donors be at least brain dead, the AMA committee argued that the good from reuse of such organs outweighed any reason that might prohibit the needed surgery (Caplan and Coelho 1998: 80–91). This proposal, to be sure, is not without its critics—most notably an incisive rejection, based both on technical and philosophical grounds, offered by a team of experts in 1989 (Shewmon et al. 1989).11

The Heuristics of Fear

The argument I am presenting here—that we seriously examine the source of our discomfort with these procedures—calls for more, not less, rigor. And to the extent that we may identify the rigorous demand for proof with one of the principal methods of science, I have been suggesting that in some of these matters, scientists need to be more scientific—at times far more scientific. The rise of what we call "modern science" often occurred in contexts in which the masses, untrained in science, expressed not only reservations about scientific

procedures but also fear concerning the application of what scientists took to be clear discoveries. This has, even today, left us with the impression that the general public has a tendency to be fearful and that its fears are irrational. Because this has so often been the case, we easily go on to assume that it will always be the case: when there is an instance of science coming up with some “breakthrough” concerning which the public is apprehensive, it will with the passage of time and provision of evidence be made clear that the former was right and the latter wrong. Yet, as the above discussion has shown, what can easily be disparaged or dismissed as “irrational fear” can, not infrequently, turn out later to have been the demand, a highly rational demand, for requisite proof. And when people’s lives or fortunes are being placed in jeopardy by what is new but still unproven, we should not be surprised that their demand for proof takes the form of resistance accompanied by emotion. It is never irrational to express fear if and when there is something there that ought to be feared. In such cases the irruption within ourselves of fear deserves to be valued as a “gift,” one that reaches us from those parts of our sensory and rational selves about to grasp certain situations better than can our consciousness (de Becker 1997). Important historical research—“science” with the requirement of proof carried out in the laboratory of texts—has shown, for instance, that individuals in nineteenth-century Europe who feared being used—without anything like informed consent—as cadavers for anatomy studies, or even possibly murdered for such use, were not without justification, especially if they were homeless or indigent (Richardson 1987). Similarly, fascinating historical research by Martin S. Pernick has shown that the horror of being mis-diagnosed as dead became pervasive in late-eighteenth-century Europe, in part because various kinds of new experiments demonstrated a variety of death-like states, and concludes that we may presently be coming to the end of a rather exceptional sequence of decades— ones during which the American public had inordinate faith in the physician’s ability to pinpoint death (Pernick 1988: 17–4).12 Ruth Richardson rightly notes: At the heart of resistance to organ donation lies fear. [. . . .] But much of the fear may not, in fact, be irrational. It may be based upon informed deductions from knowledge—both public and private—of the powerlessness of the “brain-dead” and of the attitudes, behavior, powers, and priorities of the doctors and of the institutions in which they work. (Richardson 1996: 90)

This is to say that fear can at times be not only something that forefronts an issue needing public attention in terms of policy but also what can only be called an instrument of discovery. That is, although it can at times be undeserved, fear can in certain contexts be a modality through which the deepest scientific requirement—namely, that of proof—exerts itself. Hans Jonas, writing “against the stream” on this as in other matters, provides an incisive analysis of how fear can be “heuristic.” Although an analysis of exactly this matter is of tremendous importance and will be the subject of a later chapter,

here we can, at least, point out what Jonas meant by “the heuristics of fear.” His argument was based both on what he saw as the demands of moral philosophy and on his own close analysis of the workings of technology. In his The Imperative of Responsibility: In Search of an Ethics for the Technological Age, Jonas insisted that fear plays a constructive role in our lives because, through becoming aware of what we want to avoid—that is, the object of our fears—we often first come to realize the importance and value to us of what we already have, prize, and do not wish to lose through inattention. He wrote: Because this is the way we are made: the perception of the malum is infinitely easier to us than the perception of the bonum; it is more direct, more compelling, less given to differences of opinion or taste, and, most of all, obtruding itself without our looking for it. [. . . .] We know much sooner what we do not want than what we want. Therefore, moral philosophy must consult our fears prior to our wishes to learn what we really cherish. (Jonas 1984: 27)

In the same work he described what seems to have become almost a “law” of technological development—that of which, I suggest, a concrete example has been provided above in my account of the dynamics that have pushed along the expansion of the pool of organ donors. Experience has taught us that developments set in motion by technological acts with short-term aims tend to make themselves independent, that is, to gather their own compulsive dynamics, an automotive momentum, by which they become not only, as pointed out, irreversible but also forward-pushing and thus overtake the wishes and plans of the initiators. The motion once begun takes the law of action out of our hands [so that] we are free at the first step but slaves at the second and all further ones. (Jonas 1984: 32)

And in another passage, he brilliantly brings together the demands of moral philosophy and his keen insight into the overwhelming advantage given to technology in the contemporary West—in order to make a case for the importance of prudence. [My position has no] bias against technology as such, let alone against science. The “technological imperative” is nowhere questioned, as it is indeed unquestionable in its anthropological primacy and integral to the human condition. But it needs no advocates in the Western world of the twentieth century; intoxication has taken its place. As things are with us, the technological drive takes care of itself— no less through the pressure of its self-created necessities than through the lure of its promises, the short-term rewards of each step, and not least through its feedback-coupling with the progress of science. There are times when the drive needs moral encouragement, when hope and daring rather than fear and caution

should lead. In the headlong rush, the perils of excess become uppermost. This necessitates an ethical emphasis which, we hope, is as temporary as the condition it has to counteract. But there is also a timeless precedence of "thou shalt not" over "thou shalt" in ethics. Warning off from evil has always, for reasons dealt with before, been more urgent and peremptory than the positive "thou shalt" with its disputable concepts of moral perfection. (Jonas 1984: 203–4)

This is both to put substantive content into the principle of heuristic fear and to articulate why it is especially relevant in our present historical situation. If Jonas here provides the philosophical rationale, it has been, I have suggested, the Japanese public which has, with a candor not matched in the United States, expressed such fear vis-à-vis the cadaveric transplant—arguably the most important step of a society into high-tech medicine.

Chapter 6
SECTIONING HUMAN NATURE

Dutch Learning and Chinese Medicine

There is a fascinating episode within Japan's history that can serve as both "background" and as an important component in our effort to understand aspects of Japanese thinking about biotechnology. It centers on an event in 1771 and something that is widely regarded by historians as one of the most dramatic encapsulations of a perceived difference between the West's medical science and the Chinese mode of science that had had a dominant place in Japan up to that point. Our best knowledge of the event comes from the pen of the man at its center, a samurai who was also the personal physician to a local lord, or daimyō. His name was Genpaku Sugita (1733–1817) and he wrote up the account of his experience in an extant book he entitled Rangaku kotohajime, which translated literally means "The Beginnings of Dutch Learning." This was, it needs to be noted, a time in Japanese history when most contact with the outside world and especially with Europe had been cut off. Earlier on, especially during the sixteenth century and during a period of considerable internal turmoil in Japan, Japan had been fairly open to resident Europeans. During this earlier time many, perhaps most, Europeans in Japan had been from Spain and Portugal and were missionaries of the Catholic Church. This led to a number of difficulties, not the least of which was a suspicion in Japan that the domination by

*Chapter  5 of the first draft (2002). As epigraphs, the original manuscript included quotations from the Jōshūroku (a Chan/Zen Buddhist text), and Ludwig Wittgenstein. The first quotation depicts an exchange between master Jōshū and an unnamed interlocutor: “Someone asked: What is the original essence of the self? Jōshū said: I do not use a butcher knife.”

(Hoffmann, trans. 1978)

The quotation from Wittgenstein anticipates LaFleur’s discussion of the philosopher later in the chapter: “In order to find the real artichoke we divested it of its leaves.”

(Pitcher, ed. 1968)

foreign powers might be in the offing. By 1639, therefore, edicts had come down from the Shogunal rulers that excluded most Europeans from the archipelago and made Christianity a forbidden religion. The single exception, at least for Europeans, applied to Dutch merchants, who were permitted to reside on a small, man-made island, Dejima, in Nagasaki Harbor. Between 1641 and 1856—that is, most of the period of “national seclusion” right up to the point when the American Commodore Perry “reopened” Japan with his gunboats—the Dutch based on this island were the only Europeans officially permitted to remain in Japan. The books brought in by these persons—that is, books in either Dutch or German—were the ones which fell into the hands of Japanese curious about what was going on in Europe. This meant that what was European became identified with what was Dutch. And thus the arts and sciences of Europe—that is, Europe’s “learning”—became, by a kind of metonymy, thought of and designated as “Dutch Learning.” And since so much of this was in the areas of science and medicine, domains of learning that had been exploding with new developments in Europe, the term “Dutch Learning” (rangaku) became virtually a synonym for scientific and medical knowledge newly arrived through these books and persons in Japan (Goodman 1986). Sugita, at least by his own account, had gotten possession of a book of anatomy while in Edo, the city that would later become Tokyo. Titled Ontleedkundige Tafelen, it was the translation into Dutch of a German work, Anatomische Tabellen, by J. A. Kulmus. Sugita recounts that his curiosity had already been stimulated by what he knew of others in Japan who saw such books, wrote: Of course, not a word of them could we read, but the structures of internal organs and the skeletal frames illustrated in them appeared very different from those we had seen in [Chinese] books or had heard of in the past. We concluded that these must have been drawn from the real things. And somehow a strong desire to possess them arose in me. (Sugita, trans. by Matsumoto 1969: 24)

The discrepancy between the portrayals in the Chinese books—with the internal organs arranged symmetrically so as to express the principle of “harmony”—and that in the “Dutch” books was considerable. Sugita goes on to tell how he obtained permission to witness the dissection of an executed female criminal. The actual cutting was done by a ninety-year-old member of the eta, or “outcaste” class, but this man could identify the different organs. What was pointed out to Sugita and others perfectly matched what could be seen in the European books. He goes on: After the dissection was over, we were tempted to examine the forms of the bones too, and picked up some of the sun bleached bones scattered around the ground. We found that they were nothing like those described in the old [Chinese and Japanese] books, but were exactly as represented in the Dutch book. We were

completely amazed. [. . . .] We felt ashamed of ourselves for having come this far in our lives without being aware of our own ignorance. (Sugita 1969: 31)

With that Sugita decided he had better learn the Dutch language—so that he could read the words of his book as well and go on to master more of the science and medicine of the West. Sugita was convinced that the Dutch were right and the East Asian tradition was wrong. The matter was open and shut.1 For him the “beginnings of Dutch Learning” in Japan was also the beginning of his learning of the Dutch language.2 Sugita wrote with a sense of the epochal nature of what was going on and he put the core metonymy right into the naming of his own written account when he titled it Rangaku kotohajime, or “Beginnings of Dutch Learning.” Although his recognition of the discrepancies between Chinese and European medicine had, in fact, been anticipated by others in Japan, Sugita’s rhetorical skills were considerable. He so dramatized the transition underway that Marius Jansen rightly notes: “As one reads Sugita—his memoires, his diary, and his dialogues—it is clear that [. . .] one is entering a new era. The dominance of China is at an end” (Jansen 1980: 34). It may have lost its dominance, but Chinese medicine surely did not disappear— not in the eighteenth century and still not today in the twenty-first. Although the theory and practice of Western medicine entered so deeply into Japanese life that for most people it is what is meant when a term such as “medicine” is used, an alternative—namely, a Chinese style of practice that has undergone changes in Japan—remains almost universally available within Japan and not just in rural settings but in the midst of urban ones as well. Its different therapies go under a variety of names but that part of it which administers the herbs and other substances of the traditional pharmacology is referred to specifically as “Chinese medicine” (kanpōyaku). The relationship between these two types of medicine is complex, but expertly delineated by Margaret M. Lock (Lock 1980).3 During the late 1950s in Japan there was a “Chinese medicine boom” in Japan—in part as a way of resisting what was seen as the much greater chance of experiencing “side effects” when using (Western) biomedicine. This trend continued in a way that contrasts biomedicine as “reductive, aggressive, and hyperscientific” and Chinese medicine as “holistic, prevention-oriented, and expressive of an orthodox tradition” (Iryō jinruigaku kenkyūkai 1992: 221). The continued popularity of Chinese-style medicine among the general population of Japan is because “it works” for a wide variety of maladies.4 The foregoing becomes especially relevant to the larger concerns of Japanese bioethics as reflected in the debate over transplants, however, when we explore the intellectual ramifications of that “double-ness” in the existing health care and therapy systems of Japan. My contention would be that the coexistence of two systems, each of which is “alternative” to the other, can provide those who can access both with a range and variety not known to persons and societies having only a single system at their disposal. But this does not always mean a cozy complementarity. And in many ways the protracted debate in Japan over brain death and the cadaveric transplant
has been the most explicit context in which aspects and values implicit in the two systems were in fairly open conflict. And to witness that we need to see how and why during the 1980s and 1990s an intellectual challenge to some of the things so readily grasped by Sugita in 1771 took shape in Japan.

Umehara versus Descartes

Sugita was seeking the answer to a specific intellectual question—namely, does Dutch or Chinese medicine provide a more accurate depiction of the human viscera? Yet in order to find out what the viscera look like—as opposed, for instance, to a question about what results they produce—he relied on a technique that had by his century become established in Europe as the fast track to increased knowledge of the body: dissection. This technique assumes that real knowledge is available only by looking inside the human body and, as a corollary, that the outside may either refuse to disclose information or even dissemble and deceive. And because getting from the outside to the inside had become so important in Western medical research, cutting into and even eviscerating the body became the prized method. Knowledge came via the autopsy if the subject were dead and via the surgical cut if alive. In other words, actions which in ordinary life would be judged “violent” or even “murderous” were, as long as medicine was the context, part of information-gathering and of healing. The blade or sword was then a scalpel. One way of conceiving Chinese medicine’s difference here is in terms of its far greater trust in the capacity of the well-trained physician to make an accurate diagnosis solely on the basis of a truly careful observation of the surface areas of a patient’s body. Much can be told, this system assumes, from a detailed and somewhat lengthy sensing of the patient’s pulse. And, in a method that often seems strange to Europeans and Americans, the Chinese physician with fine training will base much of their diagnosis on a detailed observation of the tongue. This is to say that in Chinese medicine a great deal about what we often call the “inside” of the distressed patient can be learned if one really knows how to gather and then interpret the data expressed on the “outside.” Although they know how to use thermometers, blood-pressure gauges, stethoscopes, and some other modern indicators, physicians in the Western tradition have, as a general rule, considerably less training or skill in reading the body’s surface. They do not know how, for instance, to recognize all the things that can be told by a skilled observation and analysis of the state of the tongue. And, as a result of their ignorance, they conclude, wrongly, that the “outside” is either relatively mute or deceptive. They think that the outside consigns one to guesswork; real knowledge will come only by going “inside.” To this must be added what we know of a Confucian disposition against any cutting into or dismemberment of the human body, especially that of a close relative. In a book on Confucianism which tries to connect it to the transplant debate, Nobuyuki Kaji writes:

When they are polled people here will sometimes say they would not mind having their body used for a transplant but when push comes to shove they refuse. Why? It is because the majority of Japanese continue to have a rather Confucian view of life and death. To them this gives a sense that they have received their own bodies from their parents as a gift to be cherished. So it becomes important not to wound or harm it. As a result it is distasteful to think of having your body cut open so that organs can be extracted. (Kaji 1994: 219)

These things, I suggest, were part of the intellectual and medical presuppositions of Japanese culture when the Dutch books began to arrive and when Sugita dramatized his own decision to pursue medicine in a more Western mode. What took place in Japan during the 1980s and 1990s and came to articulation in a movement to reject the transplant of internal organs was largely, I suggest, an attempt to insist that these older patterns retain meaning and viability in the present. At the same time, it implied a rejection of the level of medical aggressiveness seen as part of “Western” medicine, an aggressiveness tagged by Lynn Payer as particularly “American” in a comparative study of medicine and culture: [Historically for Americans] disease could be conquered, but only by aggressively ferreting it out diagnostically and just as aggressively treating it, preferably by taking something out rather than adding something to increase the resistance. [. . . .] As Oliver Wendell Holmes put it: “. . . how could such a people be content with anything but ‘heroic’ practice?” (Payer 1988: 127)

Even today this means a difference in surgery rates. In their important work on the Japanese medical system, Campbell and Ikegami write: [T]here is far less surgery of all kinds [in Japan] than in the United States—on average about one-third the amount [. . . .] [T]he fees for surgery are low enough to make operations rather unprofitable, but a cultural distaste for invasive procedures is also a factor. Indeed, the long tradition of Chinese medicine in Japan has led to a general preference for conservative rather than aggressive treatments. (Campbell and Ikegami 1998: 13)5

If this is so in general, then it is a fortiori the case with cadaveric transplants. To not a few Japanese, then, the transplant seemed to be the epitome of a Western— and especially American—way of thinking about medical practice. It became, especially for thinkers wishing to revalue the East Asian tradition, the procedure which had to be questioned and resisted. It has obviously not harmed the anti-transplant cause to have in its corner physician-scholars who teach at prestigious universities and have specific expertise in anatomy and dissection—that is, the very trajectory of inquiry Sugita wished to see become the standard in Japan. One such is Takeshi Yōrō, a professor of anatomy
at the University of Tokyo and a prolific writer on this topic. In a 1992 book, he and a colleague, Iwane Saitō, noted that the hegemony of Chinese medicine in Japan was lost after the Meiji Restoration in 1868, after which Western medicine became very strong and also introduced Western ways of thinking about death and dying. However, the daily life of the Japanese people never became fully Westernized and the strongly rooted indigenous culture remained. And concerning how to decide on questions of death, it was on this traditional and very Japanese way of thinking that the general consciousness was based. It is the gap between these two that is a major ingredient bringing about what today we call the “problem of brain death.” (Yōrō and Saitō 1992: 93)

These writers, even as “Western”-style anatomists, view the traditional viewpoint as one that should not be jettisoned too quickly. This line of argument—that something very important to Japan is at stake—came to be identified especially with Takeshi Umehara (1925–2019), a philosopher-author with a large public persona and readership. His role in Japan’s transplant debate has been the object of both praise and criticism, with the latter often coming from persons critical of both his broad generalizations and the flagrant mistakes in his data and opinions. His personal relationship with highly placed politicians, especially Yasuhiro Nakasone during the 1980s, and his ability to attract attention through his books and lectures allowed him to articulate the public’s opposition. His appointment as a standing member of the government’s Ad Hoc Committee on Brain Death and Organ Transplantation was probably one of the reasons why that committee’s interim report, issued in June 1991, appeared with both a majority and a minority viewpoint—with Umehara and three others writing against the adoption of a brain-death definition of death.6 Umehara afterward announced at a press conference that although Japan is famous for its capacity to reach consensus on contested issues, this issue was so important that the minority felt compelled to resist. Why so? What were Umehara’s objections? He stated them most clearly in an essay that appeared in 1990 in the widely read journal Bungeishunjū. One of his major contentions there was that his contemporaries were mistaken to assume that, just because the transplant had been tagged as an index to a given society’s “progress,” it really was so. Umehara expressed his skepticism vis-à-vis such claims as follows: Physicians who do transplants say that this technology is in the forefront, the most up-to-date that we have. We do not really know, however, whether it is the most advanced or not. It could turn out to be a technology which is all the rage but only for a time and then fizzles out abruptly, like a show of fireworks. During my student days much ado was made over the fact that there had been found a way of treating tuberculosis through surgery. This was touted as the most advanced form of medicine and a number of my friends, in fact, underwent
such operations. A year later, however, streptomycin was discovered and it became possible to treat tuberculosis pharmaceutically—resulting in my friends’ recovery within a year. They had, however, ongoing effects and pain resulting from an unnecessary surgical procedure that had been acclaimed as “advanced” medicine. (Umehara 1990: 351)

In many ways Umehara on this point rightly targeted the way in which much of the rhetoric in favor of brain death was riding uncritically on the notion that what can be done ought to be done because it constitutes progress. But that at which Umehara took special aim—and did so with his own studies of Western philosophy as credentials—was the Cartesianism he detected at the basis of the Western intellectual trajectory leading to the transplant. He wrote: One could say that surgery is the stellar centerpiece of modern medicine and that, given this, such medicine moved inevitably in the direction in which Descartes had pointed it—that is, to the place where surgery makes for an exchange of organs between persons. Therefore, the notion of brain-death and the practice of organ transfer are given their warranty by Descartes’s philosophy and its idea of the human being as characterized by a fundamental duality. (Umehara 1990: 355)

Although the suggestion that the transplant was a direct application of a body-mind dualism given strong emphasis in the thought of René Descartes (1596–1650) was not original with Umehara, he can be credited with having given it wide publicity in Japan. He attacked such a dualistic notion of human existence as philosophically flawed: The phrase “cogito ergo sum” is, I think, a bit of nonsense that resulted from an arrogant humanity’s overestimation of its own minuscule rationality. Certainly I do not derive my existence because “I” am engaged in thinking. Whatever existence this “I” has is the result of the long [evolutionary] development of life. (Umehara 1990: 357)

The dualism in this approach equates what is doing the reasoning—that is, the brain—with the human person but this leaves the body as something else altogether, a mere thing. According to Descartes every existent thing that is not mind is material substance. According to this viewpoint everything which does not have the capacity to reason—such as animals and plants—is just matter. So too is the human corpse. The conclusion to which Descartes’s thinking pushes is that everything that is in front of the thinking self is nothing but a material part of nature, devoid of life, and to be mastered by human science and technology. (Umehara 1990: 357)

And once the body is just another “thing” among the material objects in the world, others can easily begin to think of it as something they can use or abuse as they will. The route to the notion of brain death and to the assumption that the organs of the brain-dead person may as well be reused is one that can be traced back to Descartes. Implicit in Umehara’s argument is that cultures and societies can be more powerfully influenced by philosophers than we, especially in the recent West, are likely to admit. And he was perfectly candid about his interest in bringing to the attention of his society the fact that medicine and its technologies are pre-loaded with philosophical and even religious assumptions. And to a large extent he was ready to declare that to redefine death was to make for changes much deeper than simply on the level of available technology. It was to accept a certain notion of what we are as humans. And this was a notion from a particular time and culture—that of the modernizing West. And to adopt it would be, he was warning, an act with serious consequences. Japan had reached a point in time, he felt, when further “Westernizing” was unwarranted and on some level would wound its traditional and Asian components.7 Skilled at taking a rhetorical stance, Umehara titled one of his books Nihongaku kotohajime, or “The Beginnings of Japanese Learning.” Putting a twist on the title of Sugita’s eighteenth-century book celebrating the West as having better medicine, Umehara was suggesting the need to move, if not in the opposite direction, at least in one that would reappreciate indigenous traditions.8 Appropriating “Dutch learning” may have been a good idea in the eighteenth and nineteenth centuries but, he implied, to reappropriate “Japan’s learning” makes more sense in the present. And the need to be skeptical about Descartes appears there too: In terms of his methodology in the sciences I have learned things from René Descartes, but in terms of his view of humanity and the world I find myself in diametric opposition to him. Descartes held that rationality is the essence of the human being and he looked at both humans and the world in a way that made rationality the center. (Umehara 1985: 178–9)

Not without its own problems, Umehara’s position at least demonstrates how one thinker was able to see the brain-death notion and the transplants based on it as tied to one specific intellectual and cultural trajectory. And that is to say that its universality is far from obvious or necessary.

A Catastrophic Split

There have been questions raised about whether or not an exposé of hidden Cartesianism in the rationale for brain death and transplants originated in Japan with Umehara.9 This notwithstanding, his ability to show to the general public that there was at least an interesting and important philosophical issue and that
it is closely connected to public policy options in medical matters should not be faulted. The contrast with the American public’s almost total indifference to the philosophies embedded in medical practices is quite striking.10 This matter of the nexus with Descartes became important in the writings of others in Japan on this question as well.11 I suggest the need to look past what may have been some strident chauvinism and the overly simple East-West contrasts that at times have shown up in Umehara’s rhetoric. There is a valid, and in fact a critically important, issue here concerning the deeply rooted Cartesianism in the trajectory of modern biotechnology. It is not that philosophers in the West have not recognized problems in the way Descartes conceived a division between the body and the mind; as long ago as 1970, Spicker was able to put together a small anthology of philosophers—Jonas among them—whose writings all shared a rejection of Cartesian dualism (Spicker 1970). But what the Japanese thinkers have done has been to articulate both with close attention and with some sense of urgency the nexus between that dualism and the pattern followed by our most advanced medical technologies. It can be instructive to read or re-read a portion of the Discourse on Method with that in mind—for instance, the French thinker’s famous cogito ergo sum statement and what he did with it in the manner of a “thought experiment.” Since this truth, I think, therefore I am, was so firm and assured that all the most extravagant suppositions of the sceptics were unable to shake it, I judged that I could safely accept it as the first principle of the philosophy I was seeking. I then examined closely what I was, and saw that I could imagine that I had no body, and that there was no world nor any place that I occupied, but that I could not imagine for a moment that I did not exist [. . . .] [T]herefore I concluded that I was a substance whose whole essence or nature was only to think, and which, to exist, has no need of space nor of any material thing. Thus it follows that this ego, this soul, by which I am, is entirely distinct from the body and is easier to know than the latter, and that even if the body were not, the soul would not cease to be all that it now is. (Descartes, trans. Lawrence LaFleur 1960: 21)

One of the things that today strikes us as somewhat surprising as we look again at this statement is the fact that it took so long to see how truly strange, how bizarre, this portrait of human existence by Descartes really is. That there has been even in Europe and America for some time now a growing discomfort with the Cartesian concept of our humanity is clear. Yet it has only been rather recently that that disquiet has become a full-blown critique and, as well, a level of criticism willing to suggest that, unless we note how wrongheaded it really is, it will go on having deleterious consequences, some of which are felt within our society and its practices. This is to say that the matter is not “purely academic,” nor is it without practical applications. The issue has been flagged as very serious in a critically important work by Lakoff and Johnson, who note that the wide acceptance of the metaphors of Descartes has had “catastrophic significance” and go on to make the startling claim that:

[T]he influence [of Descartes’s metaphors] is not limited merely to philosophy. They have also made their way into other academic disciplines, into our educational system, and into popular culture as well, as in the pervasiveness of the computer metaphor for the mind. These beliefs, in the popular imagination, have led to the dissociation of reason from emotion and thus to the downplaying of emotional and aesthetic life in our culture. (Lakoff and Johnson 1999: 400–1)

This is a claim with which I agree. To it I would add, as the argument of this book has already suggested, the data concerning the trajectory of our biomedicine. I see this especially exemplified in the flawed rationale for the cadaveric transplant as well as in the subsequent efforts both to cover up these flaws and to extend even farther the impact of the Cartesian catastrophe. In many ways the growing critique of Cartesianism that we find in the West is not substantively different from what it is in Japan. Both are focusing on the inadequacy and mistakes in the body-mind dualism at the heart of Descartes’s formulation. The Japanese analysis, however, seems to have been spurred on to a large extent by a perception, one communicated even to the nonacademic public without too much difficulty, that Cartesian assumptions can lead to socially and psychologically bizarre forms of medical practice. That is, a sensitivity to the social and practical ramifications of this philosophy came earlier and ran deeper there. The criticism of Descartes in Europe and America, by contrast, began among philosophers detecting flaws in his formulations, then moved to showing how these problems carried over into the assumptions and procedures of analytic philosophy, and only from there—as in Lakoff and Johnson—has it begun to note that they have also led to significant problems within the public domain. In his Consciousness Explained, the most incisive discussion of this matter to date, Daniel Dennett insists it is in the interest of being more, not less, consistently scientific that even unacknowledged and subtle forms of dualism need to be rejected. He finds the “fundamentally antiscientific stance of dualism [to be] its most disqualifying feature,” and this is a prime reason why he wishes to “adopt the apparently dogmatic rule that dualism is to be avoided at all costs”—in spite of the fact that his own approach “seems at first to be strongly at odds with common wisdom” (Dennett 1991: 37). Dennett coined a term, “the Cartesian theatre,” to tag what he saw as a deeply flawed but also deeply embedded metaphor for how consciousness operates (Dennett 1991: esp. 107ff.). It has been easy for us to see the error of the French philosopher’s notion that the pineal gland is the locus of the soul, but far more difficult to get rid of the metaphor of our minds as theaters where ideas pass before something else inside us that has the role of viewer and are in that sense “seen” as they come and then go in the “stream of consciousness.” In a many-faceted attack Dennett dismantles this metaphor as simplistic—even though it has a hold not only on the popular mind but also on that of many scientists. Continuing their series of studies of the role of pervasive but unrecognized metaphors in our mental and public life while at the same time embracing the significance of Dennett’s work, Lakoff and Johnson in their Philosophy in the Flesh
further draw out the consequences—profound ones indeed—of getting rid of all our latent Cartesian dualism and taking seriously the indivisible unity of body and mind (Lakoff and Johnson 1999). And the list of important, in many ways revolutionary, new works can be extended—especially if to it is added Damasio’s acclaimed Descartes’ Error: Emotion, Reason, and the Human Brain, a work in which a neurologist demonstrates not only the misunderstandings but also the therapeutic errors that can result from the fundamental dualism that Descartes fastened into the presuppositions of both the thinking and the practices of the modern West. This is Descartes’ error: the abyssal separation between body and mind, between the sizable, dimensioned, mechanically operated, infinitely divisible body stuff, on the one hand, and the unsizable, undimensioned, un-pushpullable, nondivisible mind stuff; the suggestion that reasoning and moral judgment, and the suffering that comes from physical pain or emotional upheaval might exist separately from the body. (Damasio 1994: 249–50; emphasis mine)

And in a conclusion to a study that comes very close to the central concern of this one as well, Damasio asks: “Why bother with this particular error of Descartes?” and answers: The idea of a disembodied mind also seems to have shaped the peculiar way in which Western medicine approaches the study and treatment of diseases. [. . . .] The Cartesian split pervades both research and practice. (Damasio 1994: 251)

In a postscript he specifies a number of these. These paragraphs by Damasio in particular seem to echo almost exactly earlier judgments by Japanese philosophers and physicians. The reason why, however, I used italics to bring emphasis to one particular phrase—“infinitely divisible body stuff”—in the quotation above is that, although Damasio does not go into questions of brain death, the recycling of body parts, and the ethical dilemmas these pose, it would be, at least in my view, these that present the clearest instance of Cartesianism working its mischief not just in our philosophy but in our medical practice.

The Question of Human Nature

Sometimes it takes a while for a significant difference to come to consciousness. At first I did not notice it. But as I was reading through the discussions of these problems by Japanese bioethicists and physician-writers, I gradually realized the presence there of something I scarcely saw in the writings of their American counterparts—that is, a recurrent raising of the question as to whether our very humanity might not be at stake in the trajectory of things we are doing in the area
of biotechnology. And, although at first I suspected it might be a kind of rhetorical “overkill” on the part of writers opposed to brain death and the transplant, I eventually understood the sense in which the point was valid and important. In a 1988 book whose title could be rendered as Brain Death Theory: Between Being Human and Inhuman, Koyata Washida, a philosopher and ethicist at Sapporo University, writes: The problem of brain death is not simply one having to do with medical technology. Rather each new development in medical technology forces us to address the really difficult one: “What is human?” [. . . .] And the question about brain death is central, not just one among a number of other problems. And its solution requires wisdom, even more than mere intelligence. [. . . .] That solution lies not in the process of evolution or in some future time but rather in scrutinizing our past and our traditions with exceeding carefulness. But the supremely important thing is for us all as individuals to put preeminent in our thought not just how we might hang on to our own individual existences but rather what might be the fate of our shared humanity. Medical therapies which served to save valuable individual lives are now pushing us to give thought to how they may be altering the human species as a whole. (Washida 1988: 218–19)

Washida follows up this statement with a graphic presentation of the dilemma of someone who, although agreeing that the larger entity, “humankind,” is at risk in the trajectory of our biomedicine, is faced with the prospect of getting an extension of personal life span for either themselves or a beloved fellow human. This makes the problem so poignant. Much of the wider suggestion that it may, in fact, be our humanity that is being put at risk by these kinds of technology is articulated as suspicion of what might be the unintended consequences of all the various attempts to “improve” ourselves. In fact, given what we already know about our inability to either foresee or negate some of the side effects and structural consequences of our emerging technologies, the attempt to point out the high likelihood of such arising from our most vaunted projects strikes many Japanese writers as precisely what we should be expecting. Atsuhiro Shibatani, a specialist in biology but someone who has written also about discrimination problems, thinks we need to question whether we have not become so infatuated with our own capacity through biology to “improve the breed” of animals that we have begun, without being conscious of the fact, to try to improve our own species (Shibatani [in conversation with Morioka] 1999: 18–21). This is to say that, in contrast to what appears to be a reluctance among Americans to envision in what subtle ways we have placed our own kind within the pool of animate beings whose characteristics and structure we might be willing to alter—and always with rhetoric about “improvement” or the obligation to provide measures that might “heal”—these thinkers in Japan ask that we face directly the question of human nature and whether it really is something we wish to change. The list of those taking this question seriously there is long: Hisatake
Katō (1993: 77–80), Tsuyoshi Awaya (1998: 71–97), Michio Imai (1999: 101–13), Seishi Ishii (1995), and Yoshinori Ikebe (1991: 45–70), to name a few. This, I opine, is a question that Americans—with very rare exceptions—not only do not know how to address but have been, for a variety of reasons, studiously evading. Although it may be, as Washida suggests, the question of importance in these matters, it is also the matter we are refusing to recognize as such. My contention is that, with the rare exceptions of Hans Jonas and a handful of other thinkers, the questions “What is human nature?” and “Is it in danger of being lost through inattention?” are questions the intellectual ambience of contemporary America—and perhaps much of the contemporary West—has seemed eager to exclude. Although worthy of a book in itself, the question of why we are so skittish about asking this question deserves some attention here. I suggest the reasons are multiple. One is that our civilization operates on the assumption—a totally false one—that the answer has already been provided. But the “answer” in this case is, ironically, precisely the same thing that makes the problem itself so aggravated. What I mean is that the operable answer is the Cartesian one—namely, our human kind has an essence and that essence is our rationality. And it is nowhere more obvious that we act as a society as if this answer were adequate than in our eagerness to regularize cadaveric transplants—and to do so even if the notion of brain death, the original rationale for this practice, lies in shambles intellectually. Descartes’s cogito ergo sum (I think, therefore I am) and the nearly equivalent definition of our humanity by his near contemporary Blaise Pascal (1623–62)—namely, that “man is a reed, but a thinking reed”—can be directly, bodily enacted within the surgical suite by taking up its corollary. That is, the non-thinking body that is likely never to think again—whether by being defined as “brain dead” or as in a “vegetative state”—is by virtue of this definition not the body of a human. It is something else now. And, therefore, in spite of all the ongoing biological functions witnessed by the attendant nurses, it becomes ethical to excise its “parts.” Most of the time such Cartesianism is implicit but occasionally, especially when needed to push the frontiers of questionable bioengineering even farther, it becomes overt. One such instance of the latter case is in Remaking Eden by Lee M. Silver, a Princeton biologist, who shores up his highly suspect proposals for genetic engineering with the notion that in the case of humans there exists “life in a special sense” and it involves consciousness (Silver 1997: 24–6). Among us in North America the apparent “certainty” of the medical scientist about human nature being still adequately defined by Descartes is in sharp contrast to the inability and unwillingness on the part of the vast majority of humanists to even ask any serious questions about human nature. That is, the essentialism about our kind that is part and parcel of the operations of this kind of medicine is anathema to humanists and their discussions. “Human nature” is a term which, when dropped, will cause the humanist to turn up their nose as if something intellectually odiferous had wafted into the room. There are specific and historical reasons for this and some of them are reasonable and worthy. For instance, there remains a sense that David Hume (1711–76) made
a valid and continuously important point in Book III of his Treatise of Human Nature when he showed how easy it was, especially when using this term, to dress up something prescriptive in the language of mere description. What is put forward as an “is” is intended by the speaker or writer as an “ought.” Although Hume merely indicated the need to be clear about this distinction and never claimed that all reference to “human nature” would be so pre-loaded, many in the modern academy took things a step further and, under the aegis of what Paul Ricoeur has called “the hermeneutics of suspicion,” decided that a term such as “human nature” will invariably be ideologically driven and ought, therefore, to be banished from those things about which we dare speak. This leaves humanists insisting they must not refer to “human nature” or anything that might resemble the concept.12 Yet it has not been proven that, if a word can be easily abused, it and any attempt to use it with caution may as well be abandoned altogether.
Wittgenstein, I believe, provided what is still the best way out of this kind of essentialism—and it was through his homey but ultimately satisfying notion of “family resemblances.” That he had come to distrust all essentialisms and
reductionisms is clear—and perhaps nowhere more neatly so than in the wry statement of his that serves as an epigraph for this chapter. [See note on page 85.] His proposal, presented without fanfare and disarmingly simple, was that we make a mistake when we “look for something in common to all the entities which we commonly subsume under a general term.” The error lies in looking for some single, distinguishing thing inside the general term—like an “ingredient” in the same way that, for instance, “alcohol is of beer and wine” (Bambrough 1968: 187). Wittgenstein, instead, asked that we take advantage of what we already know about families and how within them various members will tend to “look alike” without there ever being a single distinguishing feature. “Some of them have the same nose, others have the same eyebrows and others again the same way of walking; and these likenesses overlap.”14 The point here is that we rightly use the term to encompass a variety of instances without having either the ability or the need to locate the feature that they all share in common. And our sense that the family is a family is not disrupted by the fact that “the X family nose” is found in only half its members or the “X family eyebrows” are not common to all. The “strength” of the term “X family” lies in the laminations, multiple overlappings—none of which stretches across the board. Or again: [I]n spinning a thread we twist fibre on fibre. And the strength of the thread does not reside in the fact that some one fibre runs through its whole length, but in the overlapping of many fibres. (Wittgenstein 1958/1992: 32e, no. 67)

Wittgenstein suggests that in a similar way we can refer to a number of items as beautiful without assuming we could isolate out “pure beauty, unadulterated by anything that is beautiful” (Wittgenstein 1958/1980: 17). One suggestion of the present book is that the term “human nature” serves us in our common language in much the same way and that humanists need to recover from the excessive concern about ideological abuse so that they can again monitor what may, in fact, be much larger abuses by technology of a human nature which very much deserves and needs protection.15 The store is too full of valuables to go on unwatched. If our skittishness keeps us allergic to any and all uses of the term “human nature,” we will ourselves be partially responsible for what might turn out to be one of the “unintended consequences” of the current biotechnological trajectory of our civilization—that is, a destruction or deformation of the very human nature that we have, without being essentialist, long felt to be of unparalleled value. And this returns us to a central concern of Hans Jonas—his insistence that we interpret our fears, not simply dismiss them as unworthy. The heuristics of fear have probably never been so important. And if there exists a growing fear among humans—a fear which, for a variety of reasons, can be more easily expressed by Japanese than by Americans—that our human nature itself is at risk, then we fail to interpret it only at our own peril. Terms like “human nature” and the “human condition” turn up with surprising frequency in the writings of
Jonas, and it seems clear that the possibility of tragic, irreparable damage to human nature was one of his major concerns.16 It appears clearly in the following: Previously it would have been senseless to talk about such things. What is in jeopardy raises its voice. That which had always been the most elementary of the givens—that there are men, that there is life, that there is a world for both—suddenly stands forth, as if lit up by lightning, in its stark peril through human deed. In this very light the new responsibility appears. Born of danger, its first urging is necessarily an ethics of preservation and prevention, not of progress and perfection. (Jonas 1984: 37)

Chapter 7 CAMPAIGN FOR MIRACLES

*Chapter 7 from the 2002 draft. Portions of this chapter were originally published in LaFleur (2001) and (2002a). Reproduced with permission of Oneworld Publications through PLSclear, and Wiley.

Angels and Flying Hearts

During late 1998 and the early half of 1999 there was an effort in Japan to introduce the concept of the organ donor card to the general public. A new law had made it legal to declare death on the basis of brain death, and the nation was being readied for what would, at the end of February 1999, prove to be the first transplant from a putative cadaver in thirty-one years. Many newspapers were running stories about where to obtain donor cards and how they might be filled out. Getting such cards was made especially convenient by having them available at “convenience” stores—known in Japan as konbini—and commercial venues that are, if anything, actually closer to being omnipresent in Japan than they are in America. The organ card itself was carefully constructed. Not only was its reverse side relatively specific about organs that an individual may or may not wish to have excised but its face side indicated that it had been approved by the Ministry of Health and Welfare, that it was to be carried on the person of the donor, and that the signer was publicly thanked. It bore four hearts—not hearts in the shape of the cardiac organ but in the stylized form associated in today’s Japan with positive emotions. Each of these was winged and flying. And they surrounded a central figure—the image of a smiling child with brownish-golden hair and hands extended downward in the body language suggesting the presentation or offering of something. Most striking about this figure, however, were its own wings. They unambiguously communicated a message that the donor is something of an angel. And what is striking about this particular angel is that its stylization was such that it was much more in line with the many angels that figure in the Christian art and literature of the West and not likely to be associated in the mind with the angel-like figures that show up, with much less frequency, in Buddhist art. The graphic nexus between organ donation and something “angelic” in the person willing to be a donor was not, I hope to show here, something that came
into being in Japan by chance. Nor did mere chance bring about the iconographic details suggestive of “Christian” lore. On the contrary, the forging of a tight relationship between this form of modern medical technology and Christianity itself constitutes a fascinating story, one of which Americans are generally unaware. What is interesting here is that the interweaving of this kind of medical practice with the network of what is assumed—with a somewhat problematic justification—to be “traditional” Christian values has become so deep in America that most Americans are unaware of the following: that it is anything but “natural” and has been achieved, in fact, as the product of an extended campaign to create the connection, that it has been inculcated as a religio-ethical value concerning which “good” persons could harbor no doubts, and that its extension as a medical practice throughout the world is one way in which values associated with what is still “Christian” and “morally upright” about America may be communicated. While not without its counterparts in Europe, the history of the creation of linkages between organ transplants as the ideational ground-clearer of ultra-modern medical technologies and the Christian values and institutions of America is an important episode in the history of religion in twentieth-century America. Although I will here make reference to Japanese materials for their comparative value, the principal focus is on the till-now largely ignored question of how it happened that in North America the doing of cadaveric transplants, so contentious an issue in Japan, not only received a relatively swift sanction from most religious organizations but even today is a procedure often promoted through church and synagogue homilies and active campaigns. Moreover, I here offer my own hypotheses—new ones I believe—concerning how and why this ready acceptance came into being. Central among these is that the Christian embrace of the new transplant technology is best seen as contingent rather than necessary and that, looked at historically, it took place at what was, at least from the perspective of this new technology’s promoters, a specific and perhaps even unique “window of opportunity” in time. An instructive aspect of this is that synagogues too in America have largely embraced the transplant, as evidenced by the fact that the weekend of November 13–15, 1998, became one during which, for instance, “churches and synagogues across the United States encouraged their faithful to sign donor cards.”1 This was, of course, a response to encouragement from organ transplant organizations eager to correct what was seen as a serious lack of donors in America. The story of this shift is not exactly the same for Christians and for Jews, and among the latter there remains even today a fair amount of theological and emotional resistance to cadaveric transplantation. One theological problem faced by both, however, was that of reconciling the removal of organs with concerns about the need for bodily integrity at the time of bodily resurrection. Although he himself supports organ donation by Jews, Elliot N. Dorff explores in some detail how the resurrection is cited as a factor in at least the explanations offered by many Jews—many of them otherwise totally secular—for why they resist any cutting of the cadaver. He calls attention to a discrepancy:

The fact that so many Jews object to autopsies and to organ donation on the grounds of their incompatibility with a belief in resurrection means [. . .] that a far higher percentage of Jews believe in life after death than are willing to admit that they do. (Dorff 1998: 238)2

Although comparative data might suggest also that for Jews there may be much more going on here than can be explained by Dorff’s reference to a discrepancy between “the popular belief that impedes donation and the rabbinic disgust with this belief” (Dorff 1998: 235), my point here is simply that, for both Jews and Christians, traditional ideas of bodily resurrection have in our times had to be reckoned with—and perhaps even significantly reinterpreted—so as to make acceptable the excision of a cadaver’s organs. It may also be that, in spite of the efforts by some bioethicists such as Dorff to articulate a distinctly Jewish perspective, there is at least some carryover effect, one that reaches into the lives of American Jews, of the campaign to see organ donation as expressive of what is best in “Judeo-Christian” American experience. Those ethicists, such as Rabbi J. David Bleich, who have objected to the cadaveric transplant, appear not to have had their views become part of the generalized American discussion of these matters.3 I, however, leave that as something to be explored and estimated by the historians of Judaism in modern America.

Making Medicine the Miracle

One of the complaints leveled against Takeshi Umehara, the Japanese philosopher largely responsible for articulating the anti-transplant minority position within the Ad Hoc Committee, is that in his writings he has too easily identified all of the West with Christianity and all of Christianity with a platonizing view of the body as a kind of “prison house” of the soul. There is, I think, some truth in that charge: Umehara’s writings have not paid adequate attention to the nuances and the significance of debates on these matters within European and American history. It is, I hold, anything but clear that early or medieval Christians, even when mouthing statements about the body as something from which one hoped to be released, would have condoned—or even understood—why their counterparts in contemporary America often seem so eager to see cadaveric organ donation as expressive of quintessential Christian values. To get to the present required centuries of change and revalorization, in which a certain level of implicit secularization played a part. It is instructive to recall that in many early Christian texts, some of which became part of the New Testament, the human body was given a high value not only by a central teaching concerning its physical resurrection but also by accounts of Jesus as someone capable of performing miracles that repaired what was maimed and restored health to human bodies. Therefore, it is probably most accurate to say that within the history of Christianity in Europe there was an ongoing question about the status and value of the body. Surely one pole within this tension were
those who disparaged it—whether by advocating literal eunuchism both as a way of following Matt. 19:12c and as a way of “solving” the problem of sex, or by going along with the dualism of Augustine of Hippo in projecting what, according to Peter Brown, was a seeing of things as “part and parcel of the great aboriginal catastrophe of physical existence” (Brown 1988: 406). Others within Christianity, however, could not easily overcome a sense that something somehow “sacred” about the body dictated that it, perhaps even when having become a fresh corpse, ought not to be rudely violated or dismembered. This impulse to respect the newly dead and to fear an intentional trampling on their bodily integrity had, no doubt, Jewish as well as Greek roots. The former are well known and the latter is, I suggest, best recalled by noting that, according to Alasdair MacIntyre, “[d]eath in Homer is an unmixed evil; the ultimate evil is death followed by desecration of the body. The latter is an evil suffered by the kin and the household of the dead man as well as by the corpse” (MacIntyre 1990: 120). It is far beyond my purpose or expertise to recount here the history of Christian thinking about the body.4 The point I make here is simply that we cannot assume that a sense of obligation to donate one’s organs would have been seen by premodern Christians as a simple “given,” something obviously true. The matter would, if anything, have been contested. And a certain amount of contestation concerning Christian ways of treating dead bodies was to be found in medieval Europe. One conclusion to be drawn from the fascinating research on death and burial practices that has been carried out by a historian such as Philippe Ariès is that the sheer diversity of such practices and the changing valorizations within European history make it impossible to identify anything that could qualify as the Christian perspective on such matters. There were no constants. Although mass graves had been common even for Christians until the eleventh century, there occurred then, according to Ariès, “a return to the individuality of the grave and its corollary, the positive value attached to the dead body” (Ariès 1981: 208). In the later Middle Ages there were many instances of the flesh being cut away from the bones and of bones and flesh being buried at separate sites. And this occasioned a papal ban on such practices. In a historical note with special relevance to the present study, Anne Marie Moulin detects a certain irony when she notes that Pope Boniface VIII in 1299 “forbade the cutting up of remains—evisceration, in short, all the practices that are now necessary for the transplantation of organs” (Moulin 1995: 79). Nevertheless, a trend of the twentieth century can be seen in multiple efforts to render perfectly acceptable certain treatments of the corpse that earlier would by many have been deemed religiously objectionable. Cremation for Catholics is a salient example. Consistent with what had been a stance since at least the time of Charlemagne, as late as 1886 the Catholic Church explicitly forbade its adherents to undergo cremation. Yet in 1963 the Second Vatican Council, partially in response to the fact that Catholics in Tokyo were caught between this ecclesiastical prohibition and a municipal law that forbade anything other than cremation within that city, removed the interdiction for Catholics—while insisting
that ashes should not be scattered on the sea or earth or in the air. In parts of East Asia, this change undoubtedly began to alter what had been seen as one of the most concrete, ritualized indices of core difference between Christians and Buddhists.5 It was, however, the decades of the 1950s and 1960s that were, I argue, crucial for making the changes under review here. Not only then were there official moves to declare that resurrection doctrines did not disallow organ removal but it was then that new, more technical, ways of measuring a body’s “vital signs” appear to have convinced some religious authorities to cede over to medicine whole territories that up to that point had been considered religion’s own. This was shown when in 1958 Pope Pius XII in an encyclical, The Prolongation of Life, stated that any pronouncement determining the point of death was not a matter for the Church but for the physician (Lamb 1996: 52). It is, I surmise, probable that Japan’s Buddhists would, if asked, have balked at making a comparable concession. To them, we may assume, to relinquish the right to say things about dying and death would be somehow equivalent—in a cultural way not without its economic entailment—to “giving away the store.”6

***

A promotional publication of the United Network for Organ Sharing (UNOS), supported by a grant from the Upjohn pharmaceutical company, shows two hearts side-by-side and bears the title It’s a Miracle: Facts about Organ and Tissue Donation. Yet the use of the term “miracle” in such contexts deserves attention. It is used, we must assume, only because much time and intellectual space have intervened between the present and those contexts in earlier days of Christianity when miracles were understood to be distinctly supernatural events, not ones that might just as easily be rendered completely intelligible and understood in detail by naturalistic means. How, then, we might ask, did Western Christendom get to the point at which the general population of a country such as the United States sees routinized references to medical miracles and finds nothing either religiously strange or scientifically bizarre in such usage? My suggestion here is that this could not have happened if it had not been for three processes—a move, first, within the Protestant Reformation to hold that the “age of miracles” was already long past; second, a readiness in liberal Protestantism to flatten the distinctions between ordinary and miraculous events; and, third, a gradual but eventually successful coopting of the “aura of miracle” by the practices of modern medicine, especially in procedures that demonstrate some new therapeutic “breakthrough.” Each of these steps deserves brief attention here. Keith Thomas noted that, although the Bible indicated that God could at will “make the sun stand still” and that Jesus healed people in miraculous ways, “by the sixteenth century the general opinion in England was that the age of miracles had ceased” (Thomas 1971: 80). This became a claim largely of Protestants, who held that “miracles were the swaddling-bands of the early Church, necessary for
the initial conversion of unbelievers, but redundant once the faith had securely established itself ” (Thomas 1971: 124–5). This did not mean that Catholics of the time adopted such a view or that within Protestant groups—the early Quakers being an example—strong beliefs in contemporary miracles could not be held. Yet the Protestant position opened the way for an identification between a Christianity gotten “mature” and the removal of any need for such a faith to go on verifying itself by ongoing evidence of divine interruptions in natural law. “Miracle” was on the way to being redefined as a way of seeing the natural rather than an irruption of the supernatural. Although, as explicated by Proudfoot, the “grammar of the concept” dictates that “miracles are events that are deemed to elude our ordinary explanatory schemes,” the influential Protestant theologian Friedrich Schleiermacher (1768–1834) disagreed, declaring that “‘miracle is simply the religious name for an event’ and that ‘the more religious you are, the more you see miracle everywhere’” (Proudfoot 1985: 141). The most spectacular move in this process, however, was probably the one through which medical methods described as distinctly “modern” got to be valorized as the “modern miracle.” This was largely possible because, since it was Europe that exhibited the most “advanced” forms of science and technology in the modern era, these things themselves could be taken as the “proofs” not only of the superiority of Europe and America as a civilization but also of the principal religion assumed to undergird it. Especially in the context of the colonialisms of the late nineteenth and early twentieth centuries, the regnant view of the world divided it into regions of religious and scientific “darkness,” and the enlightened, Christian West. No one embodied this better than the medical missionary, the person who penetrated “dark” continents and cultures. Their successes in healing illness and in preventing disease, although accomplished through wholly naturalistic means, were valorized as Christ-like acts. This was thought to be demonstrated by the dispensing of pharmaceuticals and the amputation of gangrenous limbs in Africa, India, and China. Perhaps the most significant exemplar of this was Albert Schweitzer (1875–1965), a world-renowned figure who, before becoming a doctor in Africa, had in his biblical research concluded that Jesus had had mistaken views and was self-deceived in his apocalyptic vision. Schweitzer rejected these and redescribed Jesus as principally an “ethical ruler.” And such redescription allowed Schweitzer to try to re-embody that same ethical concern by serving as medical missionary for decades in Africa. Awarded the Nobel Peace Prize in 1952 and hailed as a “modern saint,” he represented the high point in the West’s confidence that historic Christianity could be successfully repristinated in a modern medicine dispensed with moral concern (LaFleur 1998: 36–54). What is interesting is that, although these developments were part of a Christianity that was being both modernized and in some senses secularized, a willingness to refer to medical miracles is in today’s America one that cuts across the older division between Catholics and Protestants and the now sometimes stronger one between “conservative” Christians and more “liberal” ones. We have learned to accept as natural the fact that each and every new development
in emergent medical technology will be greeted with phrases declaring that what is new is yet another "miracle" of medicine. And although, for instance, conservative Catholic theology holds that divine interruptions in the fabric of natural law still happen—and happen most significantly through the deeds of persons on the way to official sainthood—the "miracles" in which most American Catholics will take refuge for healing in the course of their lifetimes will, by a vast majority, be those located in wholly naturalistic hospitals and clinics. And, although a demonstrable impact of the mind on the body can probably explain much of what are greeted as "miracles" in the contexts called "faith healing," these too cannot be compared in number to the healings that take place under the auspices of persons with ordinary medical and nursing degrees.

From Agapé to Organs

In view of the fact that even within Christianity there was some concern about corpse desecration, we might expect that the excision and transfer of inner organs, especially ones deemed "vital," might provoke a resistance based on beliefs. And it is the case that some Christian individuals have expressed a concern, one perhaps reflective of the deeper fears of the transplanted discussed earlier, that the Christian doctrine of the bodily resurrection suggests the eventual need for all parts of the body and that to dismember a Christian might throw their potential for being resurrected into jeopardy. Not a few Jews will also object both to autopsies and to cadaveric transplantation on the grounds that such is incompatible with belief in bodily resurrection (Dorff 1998: 238). It is not yet clear that twentieth-century efforts by Christian clergy and many Jewish rabbis will be fully successful in convincing their respective constituencies that organ removal for transplant need pose no real problem in contexts of future bodily resurrection. Although, for instance, in 1988 the Southern Baptist Convention, in order to address this problem, stated that "complete resurrection of the body does not depend on bodily wholeness at death," ordinary adherents may perhaps need to be forgiven for harboring the view that a truly physical resurrection might be at least facilitated by keeping the physical parts (or what is left of them) as contiguous as possible. Yet for most American Christian denominations organ donation and the cadaveric transplant were not just things to be tolerated. They were, on the contrary, given an extraordinarily warm embrace. The technology, of course, was welcomed in the same way as its immediate antecedents—namely, with language about being miracles of the modern sort. But to that was added the all-important fact that the transplant involved a higher level of interpersonal (or, at least, intercorporeal) relations than had been the case in most medicine, except for blood transfusions, up to that point in time. I believe that central to understanding this high valorization given the transplant is a grasp of the degree to which it depended upon a unique historical convergence. That is, what was an unusual time of opportunity for a new medical
technology to gain the immediate blessing of most of the American religious community also happened to be a somewhat exceptional time in the history of modern theology—namely, one in which the concept of agapé was being much bandied about and many in the Christian community were eager to show that theological concepts were not just mental constructs but could be made concrete in inter-human relationships and social praxis. The result of this was that the predeath donation of one’s own cadaveric organs was seen as an especially exemplary instance of Christian agapé in action. Although agapé was a Greek term of significance in the New Testament, it seems clear that until the twentieth century it had not been singled out to designate and tag the kind of love deemed specific to Christianity. Although it is quite likely that with Kierkegaard’s Works of Love of 1847 the quest to locate a specific and unique mode of Christian love took off in earnest, the term agapé as the term to designate that specificity gained prominence only with Anders Nygren’s Agape and Eros, a work of 1930 but not available and widely known in America until its appearance in English translation in 1957. In an excellent overview of these matters published in 1972 Gene Outka signaled at the outset the formative importance of Nygren’s study, one which “first distinguished what he took to be two radically different kinds of love. [Nygren] so effectively posed issues about love that they have had a prominence in theology and ethics they have never had before” (Outka 1972: 1). It is far from my purpose to enter here into the complex theological and ethical debates about agapé.7 What does interest me is what I see as the profound cultural significance of the specific time frame—that is, from the late 1950s until the early 1970s—during which discussions both of agapeic love and of how it might be societally implemented first played a large role in American intellectual and religious life. My point is that talk about agapé was very much “in the air” in and around the year 1967 when Dr. Christiaan Barnard performed the world’s first heart transfer out of the body of a putative cadaver in South Africa. Impressive is the alacrity and intensity with which explicit connections were at that time drawn between one of that epoch’s most salient theological discussions and its newest, most awesome medical technology. In a word, the transplant seemed to have been made for agapé and agapé for the transplant. The reasons for this are, no doubt, multiple. In a recent essay, Courtney S. Campbell explicitly asks why, at least among American fundamentalists (at that time being reconfigured as “evangelicals”), there was no raising of serious questions about the 1968 “Report of the Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death.” This report, of course, was the document which provided the (still rather) deeply problematic equation between death and “brain death,” thus giving scientific legitimacy to the removal of inner organs of persons defined thereby as “dead.”8 In answer to the question Campbell raises about the whereabouts of fundamentalists on this issue, his own thesis is that 1968 was simply too early a date for the sensitivities of these Christians to be alert and publicly watch-dogging a public policy issue such as this. He writes: “The time frame is very important. One cannot speak of a politically mobilized and socially active fundamentalist
movement until after the Roe v. Wade decision legalizing abortion in 1973, some five years after the report of the Harvard committee" (Campbell 1999: 199). This attitude toward new developments in medicine as ethically unproblematic appears to have continued even after American evangelicals became politically active and mobilized. One part of the explanation for this may lie in the tendency of the evangelical movement to focus its criticisms somewhat narrowly—even though intensely. From this movement's beginning until the present, it has been legalized abortion that has served as its well-known bête noire. In matters of science it has been the presence of Darwinism in public education and, more recently, the prospect of human cloning that have been the objects of criticism. On virtually all other issues of science and medicine, by comparison, evangelicals have not issued concerns that have registered significantly in the public domain. In fact, as David F. Noble shows, in most matters of advanced science the evangelical form of American Christianity has been not only receptive but unusually ready to supply both support and advocacy (Noble 1999: 194–200). It would appear, then, that questions having to do with "brain death" and what might be ethically problematic about cadaveric transplants were ones that fell outside the ambit of the evangelicals' attention. The contrast here with its problematization within communities of American Jews, the orthodox most especially, can be instructive. It was also the case that evangelicals seem to have been in no way inclined to doubt that the donation of organs was morally and religiously right and worthy of praise. To match the "miracle" of modern medicine with individual acts of self-giving donation would clearly have been, they assumed, to express Christian love. For America's more liberal Christians, however, the address to questions about the ethics of the transplant followed, I wish to show, a more intellectually ambitious and interesting trajectory. It is among them that an affirmation of the transplant as a quintessential social expression of agapé gained its fullest rationalization. Once again the matter of "time frame" is crucial. The person of central importance in this process was Joseph Fletcher (1905–91), the author in 1966 of Situation Ethics: The New Morality, a widely read reinterpretation of Christian ethics, and—very importantly—someone widely recognized today as one of the founders of the subfield of bioethics. It was Fletcher who in print made the explicit link between agapé and organs. It was also Fletcher who became the best-known public advocate for all types of new biotechnology—as shown in the range of his writings and culminating in his 1988 book, The Ethics of Genetic Control: Ending Reproductive Roulette. But it was also Fletcher, I suggest, whose overall intellectual career itself gave expression to the greatest conceptual problem for the relationship between Christianity and the ethics of this technological trajectory. The nub of this problem was the antinomy between one project which strove to isolate and prize what was unique in Christianity and another which so emphasized the infusion of secular thought into Christianity that its distinctiveness would be virtually liquidated. In the earlier part of his career—that is, that part of it which had a profound impact on the Christian embrace of new medical technologies—Fletcher seems not to have
recognized that he was moving simultaneously in two incompatible directions. One part of him was raising high the unique importance of agapé as the essence of what is of value in Christianity. However, another part, especially as spurred on by the interests shown already in his Morals and Medicine of 1954, wanted a Christianity so deeply relevant to contemporary social issues that it should and would happily "update" its tradition by massive transfusions from secular sources. One project was Kierkegaardian but the other, as will be seen below, was utilitarian to the core. And it seems likely that Fletcher's gradual awareness that these were incompatible and that he would opt to be a utilitarian rather than a Kierkegaardian—or, in fact, even a Christian—was what shaped the changes in his professional career and public stance. He who had had his strongest impact upon American Christianity during the days when he had been teaching at the Episcopal Theological School in Cambridge, Massachusetts, eventually made a break with Christianity and with religious perspectives more broadly, a move defended publicly on the Dick Cavett television show in the early 1980s. It is, however, the earlier Fletcher, the one interested in the linkage between agapé and medicine, who had a profound impact upon the embrace of cadaveric transplants by American Christianity. Again what I call the temporal "window of opportunity" is very significant here. Fletcher's Situation Ethics, his most important work and one widely read and discussed in America, was published in 1966. And Christiaan Barnard's performance of what was called "the miracle at Cape Town" was an event of December 1967. During 1968 Fletcher became the most conspicuous Christian public proponent of such transplants and his "Our Shameful Waste of Human Tissue: An Ethical Problem for the Living and the Dead" was published in 1969 (Fletcher 1969: 1–30). The trajectory of how Fletcher moved from Kierkegaard to the transplant is in many ways the most fascinating part of this story. Although his Situation Ethics was the subject of extensive controversy among theologians and ethicists, there was very little objection to that part of the book that discussed agapé—perhaps because much of what Fletcher said there seemed to merely re-express what had become the "common sense" within much of American Protestantism. Latching on strongly to the Kierkegaardian emphasis on volition—to the virtual exclusion of emotion—as what is central to love in Christianity, Fletcher wrote: "Agapé's desire is to satisfy the neighbour's need, not one's own, but the main thing about it is that agapé love precedes all desire, of any kind. It is not at all an emotional norm or motive. It is volitional, conative" (Fletcher 1966: 104). And at the same place explicitly acknowledging his own debt to the Danish philosopher in this matter, he wrote: "According to Søren Kierkegaard, to say that love is a feeling or anything of that kind is an unchristian view of love." It seems clear that at this point in time Fletcher was interested in isolating and prizing what was unique and uniquely Christian about agapé. And the fact that this formulation relegated emotion—and, by implication, its expression in interpersonal affective ties—to what was at best without value and at worst an impediment to agapé was not without massive importance for how transplants to anonymous recipients would be valorized as "Christian."
Yet it is also important to note precisely how Fletcher saw this ideal articulated in modern professional life. Having pursued the Kierkegaardian trajectory so as to disallow any attention to "lovability" in the object of real love, Fletcher explicitly used the physician and nurse as exemplars of agapé translated into the routine of daily work. Ignoring the fact that these medical professionals are also constrained both by law and the code of medicine to practice as they do, Fletcher had no difficulty seeing continuity between the crucifixion and the hospital:

Where were there ever more unlovable men than those who stood around the cross of Jesus, yet he said: "Forgive them"? Paul gave this its cosmic statement: "While we were yet sinners Christ died for us" (Rom. 5:8). Nonreciprocity and nondesert apply even to affection-love: Reuel Howe explains why "my child, your child, needs love most when he is most unlovable." Good medical care prescribes "t.l.c." (tender loving care) every hour on the hour, whether doctors or nurses like the patient or not. (Fletcher 1966: 109)

This selection of medical practitioners—however routinized their practices may in fact be—as the models of such intentional love suggested Fletcher's growing readiness to give his unequivocal blessing to procedures and developments in the medical field. As one of the first to be recognized as a bioethicist in America, Fletcher showed a distinct proclivity for cheerleading rather than for close inspection and wariness vis-à-vis medical practices. Already in Situation Ethics we can detect the direction—specifically a move which, we may assume, would likely have been anathema to Kierkegaard. That is, in making Christian ethics "situational," he did so largely by stuffing it with the perspective and values of utilitarianism. Few moves in modern ethical discourse, I believe, have had such a profound impact on bioethics in general and on the valorization of cadaveric transplants in particular. Through it he radically redescribed the concept of Christian agapé so that, as long as the inconsistencies went unnoticed, it could come to serve as the religious rationale for removing the organs of a person described as "brain dead." Again it may be instructive to note that this articulation of a marriage between agapé and utilitarianism had already been put into place by Fletcher and inserted into the public domain just a year before the first cadaveric transplant. Much of what had always been appealing in utilitarianism had been expressed in its preoccupation with avoiding waste. And this reference to "waste" became crucial both for transplantation's initial rationale and for subsequent decades of rhetoric aimed at a general public being repeatedly told that organs not reused would be organs foolishly squandered. Fletcher gravitated easily to the notion that the organs of the deceased would be wasted if not recycled. Strategic re-utilization and the avoidance of waste had become core values for him—so much so that he made an "updating" of Christianity via the utilitarianism of Jeremy Bentham and John Stuart Mill an explicit part of what he meant by making Christian ethics
situational. And this involved putting out a religious welcome mat for acts of calculation. In Situation Ethics, he had written:

Justice is Christian love using its head, calculating its duties, obligations, opportunities, resources. [. . . .] Justice is love coping with situations where distribution is called for. On this basis it becomes plain that as the love ethic searches seriously for a social policy it must form a coalition with utilitarianism. It takes over from Bentham and Mill the strategic principle of "the greatest good of the greatest number." (Fletcher 1966: 95)9

In what seemed easy to Fletcher but now looks, retrospectively, like it might actually have been an ominous leitmotif for the kind of torturous calculations that have become part and parcel of organ transplants during more recent decades, he wrote of "distribution" as the remaining core problem. Once Christianity could be convinced to "use its head," Fletcher saw it as necessarily bringing about a marriage between quantitative analyses and love—but a love now narrowed down so as to be made up entirely of the volitional, and decidedly not the emotional, element. Love-acts of pure will and not tainted by emotion were, this formulation asserts, to be put into praxis by calculations aimed at benefiting the maximum number at the minimum cost. Agapé was to be linked in eternal union with computational analysis. And skilled at constructing neologisms, Fletcher for this purpose coined the term "agapeic calculus"—that is, what he defined as achieving "the greatest amount of neighbor welfare for the largest number of neighbors possible" (Fletcher 1966: 95). This is how the utilitarian abhorrence for waste seems to have resonated especially within Anglo-American Protestantism. It resulted in a fairly widespread interest, one both new and strikingly modern, in exploring how the human body after death might still prove useful to the human community. That is, anything remaining of a traditional concern for the bodily integrity of the corpse was assumed to have no detectable ethical import. Consequently, the traditional treatment of the intact corpse was seen as constituting a flagrant example of a waste of time and resources. Therefore, once it came to be assumed that the expression of true agapé required something more noble and decidedly "Christian" than what was present in the traditional dispositions of the corpse, many of the leaders of American Christianity were primed not only to accept one or another version of a Fletcherian "agapeic calculus" but also to praise and promote cadaveric donation.

Japan and the Agapé Boom

Already in 1958 an essay by Osamu Itō in Shisō, Japan's premier intellectual journal of the time, explicitly brought up the relationship between agapé, so prominent then in Western discussions, and Japanese culture. Itō suggested that it would be a mistake to assume—as some in Japan apparently had been assuming—that what Christians meant by "love," something theoretically directed to anyone without
distinction, was roughly equivalent to terms found in the works of Confucianism or Buddhism (Itō 1958). But if some thinkers were suggesting that the "gap" between Japan and the West was one which ought to be filled by a deeper Christianization of the East Asian archipelago, other Japanese, especially when thinking concretely about the ethics of organ transplants, held that the traditional Japanese position is the more reasonable, that agapé is an unrealizable ideal, and that Japan's religious and cultural difference from Christian societies is one worth retaining. A discussion of comparative notions of "love" enters, for instance, into a book by Makoto Ogiwara, the title of which might be translated as Why Is It That the Japanese Reject "Brain Death" and Organ Transplants? Although he generalizes a bit too broadly to all of Christianity, Ogiwara is, I suggest, basically right concerning the concept of love in Christianity that held sway in America at that point in time—the late 1960s through the 1980s—when organ transplants were deemed an adequate, even exemplary, expression of such love. In a book which argues against the notion that a "higher" concept of universalizable love should sweep away all cultural objections to cadaveric transplants, Ogiwara wrote:

When we Japanese hear the word "love" we link it to matters of the heart, to feelings, and to emotion. The notion of "love for the neighbor" in Christianity, however, does not put the same degree of emphasis on the emotional element and in its stead prioritizes love as expressed in acts of volition. Of course the emotional element is also important, but that is not the whole story. In Christianity the question becomes: Is not the real evidence of love's presence shown in actual deeds? Is it not rather meaningless to be only saying with the mouth that love is present? Love so conceived, I suggest, is not love based on sentiment or the emotions. It has nothing to do with the kind of natural emotion that springs up when we say about another person that we like or love them. No, rather, this is a kind of love that is an act of the will. Therefore in some sense love as conceived in Christianity is one which is produced by humans [in contrast to love that would arise naturally and spontaneously]. It is love that is un-natural. When Jesus demands "Love your enemies and pray for those who persecute you" he is requiring something that is not emotionally possible. (Ogiwara 1992: 151–2)

Ogiwara gives articulation to something I have found to be common in Japanese discussions of these matters, namely, an affirmation of the Confucian principle of parent-child relations as the best paradigm of love because it is also the one that is most realistic.10 Along with this comes a skepticism about the emotional likelihood of being able to prioritize a willed “love” for an unknown and anonymous person (the “neighbor” of the agapé concept) over the existing power of bonds to persons to whom one is already related. This is not to deny the possibility of altruism but, rather, to express doubt about the wisdom of constructing an ethic that would implicitly denigrate or downgrade existing structures of interpersonal bonding, especially those of close
familial relations.11 It is, in a word, to reaffirm a Confucian preference and to insist that the Kierkegaardian concept of love is not only unnatural but also, from this perspective, unethical. That is, there is not only doubt that we can emotionally exclude all sense of "personhood" from how we respond to the still-present corpse but the additional problem that it is close interpersonal ties, especially those of near kin, which make exercises in premature mental distancing seem deeply problematic, impious, and even wrong. Such redefinitions may look good as high-wire acts of the mind. But they run counter to our natural emotions and, in truth, our emotions are not to be dismissed or denigrated in the making of moral judgments.12 Parents told that their child is now suddenly "brain dead" due to an accident will not only "naturally" but also rightly reject the suggestion that the child be "harvested." To many Japanese, then, the cutting into the body and removal of organs of a freshly "dead" member of the family will, even if for an altruistic purpose such as the transplant, seem not only highly "unnatural" but also an act that transgresses some of the best-known norms and values of what is meant by "love" in Japanese society.

Chapter 8
WASTE MANAGEMENT—AS PHILOSOPHY

Everyone knows the value of what's useful;
Does anyone know the value of what's useless?

—Zhuangzi

The Useful Dead

Rashomon is known in the West as the title of a classic film directed by Akira Kurosawa and released in 1950. Originally, however, Rashomon—or more accurately "Rashōmon"—was simply the name given to a famous gate in the city of Kyoto, one large enough to be itself a building, one with enclosed rooms atop its aperture, below which persons and vehicles could pass. Before Kurosawa made this gate the setting for what transpires in his film, "Rashōmon" had been the title of a short story published in 1915 by Ryūnosuke Akutagawa (1892–1927). It is important to realize that the plot of Akutagawa's story differs entirely from that of Kurosawa's film and, in fact, embraces a theme with striking relevance to this book. Akutagawa was not only a brilliant writer but one who on occasion would put core philosophical questions discussed publicly in his own time into the details of his captivating, often grotesque, tales. His stories skillfully projected very modern questions back into medieval settings. First, however, it may help to outline the basics of this story, not the film, titled "Rashōmon." The Rashōmon gate is described as being in a horribly dilapidated state in the late twelfth century. It had been ravaged by earthquakes, fires, and warfare. Akutagawa wrote:

Taking advantage of the devastation, foxes and other wild animals made their dens in the ruins of the gate, and thieves and robbers found a home there too. Eventually it became customary to bring unclaimed corpses to this gate and abandon them. After dark it was so ghostly that no one dared approach. (Akutagawa 1959: 32)

*This chapter comprises mainly Chapter 6 of draft two (2010); I have seen fit to include the section "The Contemporary Critics" from the eighth chapter of draft one (2002).
One man, the servant of a samurai, does, however, dare to enter the gate and climbs up to its second-story room. Starving, he is contemplating becoming a thief himself. Then in the ghastly darkness he eventually sees some scattered, decomposing corpses. And among them he spies an old crone hunched down to pluck (harvest?) long hairs from the heads of the dead. Overcome with repugnance and a penetrating sense of being in the presence of great evil, he grabs the old woman and upbraids her for her act. She pleads with him, saying:

Indeed, making wigs out of the hair of the dead may seem a great evil to you, but these that are here deserve no better. This woman, whose beautiful black hair I was pulling, used to sell cut and dried snake flesh at the guard barracks, saying it was dried fish. If she hadn't died of the plague, she would be selling it now. The guards liked to buy from her, and used to say her fish was tasty. What she did couldn't be wrong, because if she hadn't, she would have starved to death. There was no other choice. If she knew I had to do this in order to live, she probably wouldn't care. (Akutagawa 1959: 38)

The man, having heard this, decides to overrule his own sense of repugnance. He adopts the woman's own rationale for taking from the weak and defenseless—in order to live—stating: "Then it's right if I rob you. I'd starve if I didn't." He tears her clothes from her body in order to sell them. And he steps out into the night, one which is "only darkness . . . unknowing and unknown." Akutagawa was writing, of course, in an era before it became possible to harvest, recycle, and reuse the internal organs of persons judged to be brain dead. Yet, the particular mode of philosophy which in the West would later be employed to rationalize this new medical technology and to dismiss the relevance of any "yuck factor" connected to it was, in fact, very much a part of what was being discussed within the intellectual world of which Akutagawa was a part when writing this tale in 1915. This new philosophy, recently imported from the West, was utilitarianism, known in Japan since 1877 through the writings of its British advocates. Many people in our own society today do not expect that what's called "philosophy" will exert a weighty influence on our daily lives. But many Japanese during the last decades of the nineteenth century and the early ones of the twentieth saw things very differently. Philosophy was avidly studied because it was assumed that within the options presented by different philosophers lay the choices a modernizing society had to make about itself and its future. Philosophy was expected to clarify the issues. No study could be more important. Akutagawa was writing during such a time. It was also a time when there was intense interest in what, in the West's philosophies, had made America and the nations of Europe technologically advanced—as proven by America's ability in 1853, with the "Black Ships" commanded by Matthew Perry, to intimidate Japan into negotiations and into opening itself up to the West.
One question that remained a lively one in Japan during the subsequent decades was whether there might be in the West a philosophy that gave it such a clear advantage in certain technologies. Utilitarianism by John Stuart Mill, published in England in 1861, had already, by 1877, been translated into Japanese by Amane Nishi, who had lived in Europe and was a skilled interpreter of European philosophy. It was read avidly. As their name indicates, the utilitarians gave priority to a thing or an idea's utility or usefulness. Usefulness should be, they held, the criterion of value. And, as we will see, for another utilitarian, Jeremy Bentham, the value placed on the maximum usefulness of things was extended even to the bodies of the dead. Loading values in this way, certain persons in Japan realized, called for a fundamental change not only in human thinking in general but also in how ethically responsible persons and communities would, if convinced by the utilitarians' arguments, be expected to deal with corpses. A dead body burned or even left to rot would be a body wasted. Finding at least its hair useful for making a wig and maybe even getting a bit of cash that way would be justifiable by the new philosophy from the West. Akutagawa, seemingly skeptical of this new perspective making utility the primary value, portrays a man first finding himself overwhelmed by the "yuck factor" when confronted with that level of utility. But he too, the tale suggests, can shortly put all this aside and, rationalizing his deeds by his own desperate need to go on living, join himself to the ethos allowing the strong to take from the weak. If the old crone had plucked the hairs from corpses, he might as well rip the clothes from her body. Why waste what's right in front of one and can be of use? Only the fit will survive. And, when that is assumed, notes Akutagawa, the samurai's servant will step out into what is "only darkness . . . unknowing and unknown."
Bentham’s Bent In many ways, the dilemma in the mind of the man in Akutagawa’s gruesome tale prefigures the one—philosophical, religious, and cultural—that Japan has been struggling with in recent decades concerning whether or not the bodies of the dead are to be used. Behind this lies a question about what happens to the minds of individuals and the quality of a society when it has been decided that the bodies of the dead are simply “too good to waste.” Akutagawa and many of his contemporaries appear to have been keenly aware that going the route outlined by the British utilitarians would have wide and deep consequences. And, I suggest, one of the things that makes even contemporary Japanese thinking about the ethics of organ transplantation so interesting is that it continues a debate about the pros and cons of a utilitarian value system. More so than in America today, Japanese bioethicists devote attention to the underlying philosophy implicit in what we decide on bioethical issues.1 A recent book whose title in Japanese means “Bioethics and utilitarianism” begins by noting how pervasive is the impact of the suppositions of this particular philosophy on what we call bioethics. The authors of that book insist on bringing to the surface all the utilitarian values implicit in
how scholars, especially in England and America, discuss issues such as organ transplants, assisted pregnancy, the "new" eugenics, the use of embryonic stem cells, and so on (Iseda and Katagi 2006). My own view is that, although the major focus of this book is on transplantation, there is accuracy in the common view in Japan that transplant technology is where we can very readily see how a disposition in favor of raw usefulness and utilitarian views propelled most of bioethics in one direction almost from the outset. The field is all but soaked in utilitarian values. And these impact directly on public matters because practical utility was part of the very design of the perspective. Utilitarianism is known today by its most memorable maxim—"the greatest good for the greatest number"—and this as a principle in ethics surely has had implications for modern medical practice. Experts may want this motto of the utilitarians to be more finely nuanced but there can be no doubt that it was the rule used by Joseph Fletcher when in 1966 he wrote that the Christian gospel by itself is inadequate; he announced that, when in coalition with utilitarianism, it "takes over from Bentham and Mill the strategic principle of 'the greatest good for the greatest number'" (Fletcher 1966: 95). From there to an unqualified embrace of the new transplant technology was a short jump for Fletcher. He picked up on what especially Bentham had written in decrying waste. Soon thereafter he did this—even in the title of an important essay, "Our Shameful Waste of Human Tissue: An Ethical Problem for the Living and the Dead" (1969). Fletcher attacked what he called the "body taboo," insisted that it is "selfish" to take one's organs into the grave, cited numbers of persons dying for lack of donated organs, denounced "next of kin" for throwing down roadblocks to the harvest of organs, predicted that the need for human organs would prove only temporary until those of animals might be technically transplantable, and in the essay's finale asked clergy, teachers, health officials, and physicians to join him in his campaign to find good uses for dead bodies. How did the argument get to this point? A philosophy prioritizing usefulness will be one in which anything smacking of waste will be deplored. So it can be appropriate here to take a quick look at how adamant Jeremy Bentham (1748–1832) had been about avoiding not just waste in general but specifically the waste of dead human bodies. Bentham, I maintain, laid down the conceptual framework for Fletcher's much later campaign to turn Dr. Barnard's technical breakthrough into a norm for society and public policy. From what we now know, Bentham's initial concern was wholly admirable. He lived at a time when rapidly developing medical research and the training of physicians made acute the need for cadavers. Richardson and Hurwitz (1987) describe the social impact of this. Teaching hospitals wanted corpses for dissection but the rich folk in that society wanted no part of that. This led to a pattern in which corpses of the indigent were not only often stolen and sold but some persons, historians point out, were even murdered for the prices their cadavers might fetch. Hospitals paid for cadavers and asked no questions about their origin. Bentham saw injustice in this and, commendably, was disturbed by the fact that
the bodies of the poor were being collected and dissected so that physicians might, through what they had learned, more readily correct the illnesses of the rich, who could afford their services. His noble response was to direct that, upon his death, his own corpse be made useful. He put his body where his mouth was. He altered his will and testament so that his body might be, first, available for use by being dissected in an anatomy lesson and, thereafter, be properly prepared so that it could be put on ongoing public display. The latter was his way of demonstrating publicly that there was no need to bury the dead. Bentham referred to his mummified corpse as an "auto-icon"; it still exists today on exhibition in University College, London, although now with its head replaced by an artist's attempt to copy the now decomposed original. Bentham's absolute insistence that corpses not be wasted eventually took a rather peculiar bent. Hoping to generate a wider public trend, he saw each and every corpse as having a potential, one never realized earlier, for becoming an objet d'art. Cadavers, rightly and efficiently reused, he argued, could make the efforts of sculptors expendable. Instead of statues in parks and public places, Bentham wanted the placement there of the mummified dead—so as to avoid the waste by burial of such bodies and, presumably, also the waste of time and energies expended by artists. Richardson and Hurwitz note:

Bentham's quirky vision of the uses of human taxidermy included the erection of temples of fame and infamy in which auto-icons would take the place of carved statuary or waxwork: "so that every man be his own statue."

They continue:

Lacking religious belief, Bentham viewed the human carcass as matter created by death. As an eighteenth century rationalist, he found little difficulty in addressing the problem of how this matter might be best disposed of with a view to maximising the "Felicity of Mankind." (Richardson and Hurwitz 1987: 196)

Here Bentham was prefiguring not only the position of Fletcher but also what many thinkers in America—including clergy—would later conclude. Christians would allow the utilitarian perspective to be smuggled in under rhetoric about agapé and many Jewish rabbis would do the same by saying that, although their tradition ordinarily forbade any mutilation of the dead, the principle of pikuach nefesh ("to save a life") should be given priority, thus making the excision of organs not only licit but even more moral than letting them go to "waste." By the end of the 1970s in America, many churches and synagogues held annual organ donation drives. Bentham would not be mentioned. But his idea, mediated by Fletcher, that the corpse should not be wasted was, through the miracles of advanced medicine, being given a whole new venue for expression.
­Nishida’s Objections Utilitarianism in Japan had a different career. The peak of its popularity was probably very soon after it had been introduced there—and that happened early. As noted, Utilitarianism, the work by John Stuart Mill widely regarded as the key treatise of the new perspective, was translated into Japanese already in 1877, although its original in English had appeared as essays only sixteen years earlier in 1861. Its translator, Amane Nishi (1829–97), had as a youth spent time in Leiden, the university town in the Netherlands where studies of Asia were already a tradition due to the fact that the Dutch had been the only Europeans allowed some entry into Japan during its long period of “national seclusion” (1639–1854). Thomas Havens, the biographer of Nishi in English, notes that he had become “a thoroughgoing Utilitarian by the 1870s” (Havens 1970: 219). Nishi, skilled at inventing new terms in Japanese to render Western concepts, also introduced philosophical aesthetics to Japan. But, according to Michael Marra, aesthetics posed a problem to Nishi when trying to present “the ‘usefulness’ of a science that was born as a form of knowledge free from pragmatic concerns of utility” (Marra 2001: 5). Fascination with utilitarianism on the part of Japan’s thinkers continued for some decades. It declined sharply, however, with the publication in 1911 of the work even today regarded as the most significant and groundbreaking treatise in Japanese in the twentieth century. This book was Zen no kenkyū (An Inquiry into the Good) by Kitarō Nishida (1870–1945). It is difficult for us today to grasp the impact on a whole generation provided by the publication of one book in philosophy. But Nishida’s book had that kind of impact. Since Japan’s “opening” to the West in the nineteenth century, it had been Western thinkers such as Mill who received almost all the attention, and it seems that within Japan there were deep doubts that any “Eastern” thinker could ever match them in sophistication. But Nishida’s book turned that tide. To many of Japan’s young intelligentsia, it proved that the West was not always right (Kōsaka 1964: 33–4). Nishida clearly was fully conversant with the thinking of the West but he was far from ready to jettison that of Japan and what Japan had learned from other Asian thinkers of the past. Central to Nishida’s book was his criticism of utilitarianism. He viewed Bentham rather than Mill as the more consistent of the utilitarians, probably because Bentham made it most clear that, although the rhetoric of utilitarians stressed creating the maximum amount of good for the largest possible population, for them “good” meant, ultimately, pleasure in one or another of its forms. Nishida faulted it for being essentially individualist hedonism in search of a social dimension; he could not agree to what he saw as utilitarianism’s reduction of the complexity of human life and values. Of course, among today’s philosophers and ethicists—still primarily but not exclusively in the Anglophone societies—who espouse some form of utilitarianism there has come to be, over time, a considerably variety. Many of them today would prefer to be called consequentialists, not utilitarians. It would not be inaccurate to note that, if philosophers and medical ethicists in English, American, and
Australian societies are predisposed to be in favor of such views, their Japanese counterparts have for the most part even up to the present time tended to be predisposed against the viewpoint of utilitarians/consequentialists. In fact, a recent book in Japan on this matter states on its first page: "Utilitarianism as an ethics is certainly not highly regarded here in Japan. And this low opinion of it is especially prevalent in matters having to do with bioethics" (Iseda and Katagi 2006: i). The authors strive in that volume to have their readers grasp why utilitarianism has such a strong grip on Anglophone bioethics from transplants to stem-cell use. Nishida bore down on the fact that, since the criterion of utility still makes it necessary for us to judge among a spectrum of potentially useful goods, those persons and communities faced with the need to do this will be forced into making calculations. Terms such as "the greatest good" and "the greatest number" invite us to weigh quantities. Which product or procedure will, if used, produce as a consequence the greatest benefit for the greatest number of people? Quantities matter. Therefore, so too does the effort necessary to calculate what "maximal" might mean and how to achieve it. Nishida recognized how this posed a problem:

[In Bentham's view] all pleasure is identical, differing only in quantity, not quality. (Therefore the pleasure of the game of roulette and the pleasure of sublime poetry are identical.) The value of conduct lies not, as the intuitive theorists contend, in the conduct itself, but in the results of the conduct—that is, conduct that gives rise to great pleasure is good conduct. (Nishida 1911/1990: 117)2

The Contemporary Critics

This skepticism about utilitarianism carries over, I claim, into the work of contemporary Japanese writers on medical ethics—although Japan, at least in comparison with North America, has a much smaller cadre of persons we would think of as bioethicists. In Japan, the professionalized bioethicist who has been incorporated into the decision-making process of the hospital is much rarer than in America. This does not mean, however, that the field of inquiry as such scarcely exists. In fact, many of those who write about issues of baioeshikusu or seimei rinri are in departments of either philosophy or ethics—thus suggesting that the interest in core philosophical questions may be deeper—or are practicing physicians who write about ethics. The number of the latter is quite large. Many of them have spent years studying or practicing medicine in Europe but especially in North America. That persons with such training would spend as much effort as they do on discussions of philosophy is interesting. And, perhaps more importantly, that so many of them have returned to Japan from North America with the impression that American medicine is pervasively informed by utilitarianism and that its results are far from benign is, at least in my view, one of the most significant aspects of this matter. Moreover, although utilitarianism as a philosophy is by no
means an American invention, the fact that a large number of the "textbook" cases in bioethics—cases frequently discussed in utilitarian terms—arose in the United States is also a factor in why bioethics in America seems at least implicitly, even if not explicitly, the object of Japanese criticisms. Hiroshi Mizutani, for instance, is a neurosurgeon who served as the head of various medical institutions in Japan but also spent three years in advanced study in New York doing research on brain tumors and hydrocephalia. In two of his publications, Nōshiron (1986) and Nōshi to seimei (1988), he discusses utilitarianism at a length and to a depth which will not likely be found in American writing. And, not unexpectedly, he finds implicit use of utilitarian criteria in American medical practice, especially when it comes to matters having to do with advanced technologies. Within what is, in fact, his extensive treatment of this, I select out only one paragraph that catches my attention in his 1986 book:

Brain death is recognized in certain countries as death and in others as not—a fact which itself serves as proof that bioethics is a relative matter. We are now able to transfer so vital an organ as the heart, thus saving one life by removing the heart—that is, causing the "heart-death"—of another person. And in order to explain the killing of one person in order to save the life of another, society finds it necessary to have a rationale, something by which it can recognize such an act [as ethical]. Therefore, even though "brain death" may not be exactly equivalent to death, it is by taking a stance on utilitarianism's concept of usefulness and by considering the comparative life prospects of, on one hand, a brain-dead patient who may already be dead, and, on the other, a patient in need of a heart transplant, that it is made to seem obvious that the life of the latter, especially if they are young, must be saved. (Mizutani 1986: 185)

Mizutani moves from this directly into a discussion of how risky it is to start measuring the quality of life and then deciding that some lives are of such low quality that they are expendable. He claims that the now commonplace term "quality" entered such writings in the 1920s and that soon thereafter such comparisons were used by the Nazis to justify the extermination of "unqualified" lives. Although it is in another context that Mizutani brings up the much closer-to-home horror of the Japanese Medical Unit 731's experiments in China during the Pacific War, it is worth noting here that there has been growing attention within Japanese writing on ethics to the relevance to bioethics of this nefarious episode in Japan's own twentieth-century history (Mizutani 1988: 132). Through the efforts of researchers such as Saburō Ienaga earlier and Kei'ichi Tsuneishi more recently, the horrors of the experiments in China, rationalized at the time by the conditions of war and the making of comparisons between lives of great and little value, have been made clearer to the Japanese public than had formerly been the case.3 And a readiness to relate this to the presence in wartime Japan of a version of utilitarianism is becoming clear. An essay by Tsuneishi specifically suggesting parallels between the rationalizations of the 731 Unit and the rationalized trajectory
of contemporary medical technology figures largely in a 1998 volume of essays on bioethics (Tsuneishi 1998: 188–216). Tsuneishi holds that certain contemporary practices may rightly be seen as "neo cannibalism" and objects especially to the commodification of body parts. His essay coheres well with an essay in the same volume by Tsuyoshi Awaya, one which outlines a range of areas where either research or its application becomes highly questionable. Among the objectionable rationalizations for this today Awaya cites utilitarianism as the first, followed by the materialization of the body and the principle that all choice is to be left to the individual. "Put simply," he writes, "utilitarianism holds that, since the biological sciences and their application through technology will bring greater happiness to human beings, they are the good. And that implies that they should be promoted" (Awaya 1998: 87). The trend toward the commodification of body parts, consonant with utilitarianism, shows up in the readiness within medical communities to refer to bodies as being "resources" for therapy (Awaya 1998: 94). And in America, where cadaveric organ transplants are common but face a chronic "resource" problem (an organ shortage), he notes the presence of "strongly rooted views in favor of the buying and selling of organs in spite of current laws against this" (Awaya 1998: 93). That he is correct on this can be seen both in existing and proposed American legislation for financial rewards to organ donors and in the fact that some bioethicists—Arthur Caplan (1998) for instance—feel it necessary to write in opposition to this growing trend.4 Tsuneishi is not alone in likening the utilitarian trend toward the commodification of body parts to "neo cannibalism." Yoshihiko Komatsu (1996: 49) also suggests that what we have here may be practices that bear comparison to cannibalism, a theme developed in greater detail in the writing on this topic by Koyata Washida, a philosopher who makes use of the studies of cannibalism in French by Jacques Attali. Washida writes:

Although human beings, in order to maintain and extend their lives, devise various ways to keep themselves alive and healthy, they themselves put a limit on this. The taboo [against eating one another] is one they are loath to violate. This notwithstanding, humans have now gotten to where they excuse themselves for using various body parts when it is a matter of doing so for the life of another living person. This kind of biological utilitarianism does not involve the ingestion and eating of body parts of another human being. Yet for the sake of giving happiness through the body [to that recipient person] the various parts of the corpse of a dead fellow human are allowed to be taken inside. (Washida 1988: 208)5

Using a Calculator

Calculation gives the appearance of making moral decisions easier, more efficient, and more accurate. But Nishida and much ethical thinking in Japan even up to the present have been wary of it. Of course it would be naïve to think
that in weighing ethical choices our minds can simply exclude all projections about likely or unlikely consequences of what we decide to do. Doing that is probably impossible. It is, rather, that relying so heavily on calculation becomes a trap. Quality, whether it's the quality of life or the quality of dying, cannot be satisfactorily quantified. To assume otherwise is to latch onto a piece of bait that hides a very nasty hook. In saying "no thanks" to this, Nishida would position both himself and several generations of his followers in Japan as being in resistance to a major trend in the thought and practice of Europe and America in the nineteenth and twentieth centuries. Although the trend had begun earlier, during the nineteenth century and especially during the twentieth quantification became pervasive. Concerning the present Ian Hacking writes: "Probability and statistics crowd in upon us. The statistics of our pleasures and vices are relentlessly tabulated. [. . . .] This obsession [. . .] descends directly from the forgotten annals of nineteenth century information and control. This imperialism of probabilities could occur only as the world itself became numerical" (Hacking 1990: 4–5).6 Among contemporary philosophers who have challenged the adequacy of utilitarianism, none surpasses the late Bernard Williams. Because of its relevance to the specific topics of this book, what interests me especially in the views of Williams is his critique of what he somewhat derisively called "Government House utilitarianism." By this he suggested that the ethical calculations of the utilitarians were found to be very useful by the governors and lieutenant governors whose bureaucracies were the mechanism through which England ruled its widely flung colonies during the epoch of the British Empire. Holding that there were "important colonialist connections of utilitarianism," Williams found something deeply troubling in it. He focused on the writings of Henry Sidgwick (1838–1900), a thinker in the tradition of Bentham and Mill. In his The Methods of Ethics of 1874, Sidgwick had cautioned against too much disclosure to the public:

Thus, on Utilitarian principles, it may be right to do and privately recommend, under certain circumstances, what it would not be right to advocate openly [. . .]; it may conceivably be right to do, if it can be done with comparative secrecy, what it would be wrong to do in the face of the world. (Sidgwick, quoted in Williams 1985: 109)

Sidgwick had gone on to say that this way of doing ethics in secrecy “should itself be kept secret.” Understandably, Williams pointed out that “Government House utilitarianism is indifferent to the values of social transparency [. . . .]” (Williams 1985: 109). I am not pursuing an irrelevant historical sidetrack here. How some forms of utilitarianism tend to court the muddying of public transparency is surely apropos. The point made by Williams is directly related to what was discussed earlier concerning how many professionals in the transplant enterprise, even when aware that science has toppled their bases, go on doing what they do and assume that the public need not know that deep problems exist.7
In North America today, there is a disconnect between a growing number of philosophers who now see the utilitarian ethic as irredeemably flawed and the majority of medical practitioners and bioethicists who cling to the idea that demonstrating the usefulness of a procedure is virtually enough to prove that its use is not only moral but mandated. Mark Johnson represents those philosophers who have articulated the problem with assuming that one could do ethics via a kind of "moral arithmetic." He writes:

This halo of scientific objectivity and rigor (coupled with its adherence to a version of the principle of universal moral personality) has sustained utilitarianism's appeal in our common understanding long after it has ceased to be a defensible position. It is simply enough, by way of criticism, to call the bluff of those who continue to assume, without argument or example, that such calculations are possible, much less useful, in actual situations. (Johnson 1993: 122)

Maximal Neomort Use

Martin Pernick has noted that, since today a "cost-benefit" analysis is so routinely used in our society in making decisions across a wide spectrum—from pharmaceutical use to plans for war—we will have difficulty even imagining that time in the past before the utilitarian perspective and a "numerical frame of mind" had become so much a part of how we think (Pernick 1985: 108). Within a society already so predisposed, I suggest, the utilitarian cost-benefit analysis, having become the focal point of ethical decision, will force existing "yuck-factor" qualms down into insignificance and invisibility, especially when compared to projected benefits that might result from the use and reuse of human corpses and body parts. After 1968 in America, the routine "harvesting" of organs was so regularly touted as eminently moral that repugnance had to be dismissed as due to irrational fear. Sniffing this in the air, Willard Gaylin, an eminent psychiatrist and then president of what would become the Hastings Center, in 1974 published a very provocative essay, "Harvesting the Dead: The Potential for Recycling Human Bodies," in Harper's. Gaylin pointed out what many in the general public seemed to be ignoring in the report of Harvard's Ad Hoc Committee in 1968. He wrote: "For if it grants the right to pull the plug, it also implicitly grants the privilege not to pull the plug [. . . .]" (Gaylin 1974: 26). Noting that problems of inefficiency and waste had to be addressed, Gaylin recognized that what had begun to be referred to as a "neomort" (that is, a brain-dead body) could, if not immediately harvested for transplantation, provide a far wider spectrum of uses if maintained far longer on its respirator. That is, compared to other uses, even transplantation is something of a waste! A good calculation aims at maximum utility. Gaylin proceeded to provide a stunning array of possibilities for better use of the neomort:


—— Training: Uneasy interns could do their first-time routine procedures as well as more “exotic” procedures on “patients” whom they need not worry about harming;
—— Testing: Drugs and new procedures could be tried on neomorts without concern about the legal consequences of adverse or unintended effects, including that of “death”;
—— Experimentation: Since going from animals to humans in trying new procedures involves great risk, the neomort could serve as an “experimental bridge”;
—— Banking: An equivalent of the “blood bank” could be made to store a range of body parts that could be catalogued and used when needed;
—— Harvesting: Since it would continue to replenish blood produced but drained off from it, the neomort could be an ongoing supplier of blood and other valuable products;
—— Manufacturing: By being injected with toxins, the neomort could be something like a machine that makes anti-toxins.

Gaylin concluded his essay by noting the problems with all of this. This is the reason why, in spite of his essay’s matter-of-fact style, his real intent is seen as ironic. He wrote:

Cost-benefit analysis is always least satisfactory when the costs must be measured in one realm and the benefits in another. The analysis is particularly skewed when the benefits are specific, material, apparent, and immediate, and the price to be paid is general, spiritual, abstract, and of the future. (Gaylin 1974: 30)

What fascinated and often stunned most readers of Gaylin’s essay was not the cautionary paragraphs of its final page but what the preceding pages provided as a graphic and detailed scenario of multiple uses to be found for the neomort’s body. Here was, it seemed, a display of potential utilizations so great that, ironically, even the use of a brain-dead body for immediately extractable organs and the placing of these into the body of a recipient would, by contrast, be itself a kind of waste. The seemingly sanguine way in which Gaylin laid this out in 1974 has over time led some of its appalled readers to assume that the article must have been written as a twentieth-century counterpart to the 1729 essay, “A Modest Proposal,” in which Jonathan Swift proffered that the problem of hunger in Ireland might be “solved” if the Irish would eat their own children. Japanese who read “Harvesting the Dead” had their own reasons to be horrified by the essay’s implications. As detailed in Chapter 10, something like an extreme form of an anything-goes-for-the-sake-of-utility rationale had come into play during the Second World War when a medical unit of Japan’s military took advantage of their nation’s occupation of Manchuria to conduct horrendous experiments on members of the captive population there. If a body’s utility emerges as the
dominant criterion, then it becomes just one more step to conclude that a very extensive usefulness might even justify the use of the neomort for experiments even if these will result in the death of the experimentee. The calculation of truly maximum benefit cannot leave even vivisection off the table.8 Japanese reactions to Gaylin’s scenario are, thus, especially important. Masahiro Morioka’s important 1991 book on brain death cited Gaylin’s essay, laid out even more putative uses for the neomort, and then cautioned: “All of us who live in ‘advanced’ nations are in societies shored up by technology. But we have now arrived at a point where we can no longer go on believing all the talk about technological advances as sure to bring unconditional happiness” (Morioka 1991: 123).

There is no reason, once corpses or near corpses are seen as useful, that their utility need be limited to medical research per se. The saving of lives more generally can be used to rationalize their use. Perhaps the most fascinating—even if repulsive to some—reuse of human corpses has been provided by some automobile manufacturers. Naturally concerned to produce cars along with a sales-pitch about maximum safety and evidence that a given model will do well in protecting their human occupants even in very bad crashes, auto manufacturers test their own vehicles under crash conditions. Traditionally they put inside them dummies that served as human surrogates and could be subjected to horrendous collision tests. But dummies are not really human bodies. They are bloodless and cannot show what would be real damage to human flesh. Therefore, a further application of the utility principle seems to have meant to some manufacturers that tests would be far more accurate and lead to far safer automobiles if the flesh-and-bloodless dummy were replaced in the crash-test car by a real corpse. This apparently was already being done during the 1970s by the Battelle Memorial Institute, although between 1977 and 1981 that institute reacted to public revulsion by placing a moratorium on the practice. Of course, a premium had been placed on keeping the public ignorant of this research; its manager admitted that “publicizing it [would not be] in the best interests of anyone” (New York Times, Oct 11, 1981). As should be expected in the case of such a secrecy-obsessed project, rumors about it have proliferated. As recently as 2005 it was reported that the Medical School of Austria’s University of Graz had supplied corpses for such studies (USA Today, Mar 22, 2005). This list could be extended. In Japan, the manufacturers of cars, given their economic importance there but also the Japanese public’s high sensitivity to the reuse of dead bodies, seem to have been especially cautious to avoid either this practice or any hint of it. Toyota developed a computer-simulated “virtual” dummy that is as close to a human body as possible. It seems that when it comes to putting our dead into test cars and intentionally having them be the subjects of devastating physical damage, both the yuck factor and serious ethical objection kick in. However, when this is considered objectively, we may not easily be able to say how and why this differs from using
long-maintained neomorts in laboratories. For both procedures, it can be argued that they save lives. And, pushing this a step further back in the history of our rationalizations, it is far from clear why the range of things brought up in Gaylin’s “harvesting” article differs from taking even one brain-dead person’s body and extracting its organs for immediate transplant. Once it is assumed that any reuse is justifiable as long as it can be said that “lives will be saved,” the prospects for such use are virtually infinite.

Chapter 9 SIDELINING A SKEPTIC

The force of the negative instance is greater.

—Francis Bacon, Aphorism 46, Novum Organum

*Chapter 7, draft two (2010).

The name was vaguely familiar to me. In 1999, while in Kyoto studying Japanese views on various questions in medical ethics, I noticed frequent references in their writings to a philosopher of the West, Hans Jonas. At the time what I knew about Jonas (1903–93) was that as a young Jewish philosopher he had escaped Hitler’s Germany and eventually moved to America. I had earlier read one book by him, his first, titled The Gnostic Religion. What I did not know is that Jonas had later made a large career turn and focused most of his attention on the philosophy of biology, the problems posed by our new bio-technologies, and how we should deal with those technologies in an ethical way. This is why in 1999 I became curious to know exactly why not a few Japanese were so interested in Jonas. I discovered that there were primarily two reasons for this. One, to which I turn later in this chapter, was because Jonas was seen as a thinker deeply concerned with our need to leave to future generations not only a livable world but also an intact version of what it means to be human—that is, not a version of our species “enhanced” by biotechnology as has been proposed by persons saying we ought to take evolution under our own control. But the second and more immediate aspect of the Japanese attention to the work of Jonas was his stated objection to assuming that what’s called brain death is adequate to legitimate the transplantation of organs. I first read parts of Jonas’s argument about this as translated into Japanese and incorporated into Baioeshikusu to wa nanika, a work on bioethics by one of Japan’s leading ethicists (Katō 1986: 61–6). I will shortly indicate what Jonas had written, its relevance to this book, and why it was so appealing to so many Japanese. First, however, I should note a few things about Jonas’s personal and intellectual biography.

Hans Jonas was born in 1903 in the German city of Mönchengladbach. After attending a Jewish high school in Berlin, in 1921 he began to study philosophy and
art history at the University of Freiburg and became a member of a Zionist student organization. Having begun some studies with Martin Heidegger (1889–1976), already then widely acclaimed as probably the most significant and innovative philosopher of the era, in 1924 he moved to Marburg to continue with Heidegger. Other Jewish students of Heidegger during that period were Hannah Arendt, Herbert Marcuse, and Karl Löwith. Having also studied with Rudolf Bultmann (1884–1976), a famous scholar of the New Testament, Jonas in 1928 wrote his PhD dissertation on Gnosticism, one which became a seminal work on that topic even though the unearthing of Coptic Gnostic texts in 1945 led to a perspective on Gnosticism very different from that taken earlier by Jonas (see essays in Tirosh-Samuelson and Wiese 2008).

But in 1933 Hitler seized power and the persecution of Jews in Germany became intense. In May of that year, Heidegger, having been appointed rector of the University of Freiburg, championed Hitler and National Socialism in a public lecture. Jonas, appalled by his teacher’s stance and recognizing how things were becoming mortally precarious for Germany’s Jews, moved to London, lived for a while also in Palestine, and in 1937 met Lore Weiner, who in 1943 became his wife. He eventually became part of the “Jewish brigade” of the British army in its war with Germany. In July 1945, as part of the victorious Allied armies, Jonas returned to his birthplace in Germany and learned that his mother had been murdered in Auschwitz. He subsequently moved with his wife, Lore, to Palestine and worked part-time as a lecturer at the Hebrew University in Jerusalem. Between 1949 and 1955, he had positions at McGill University and Carleton College in Canada and went from there to accept a position as a professor of philosophy at the New School for Social Research in New York, where he taught until retirement in 1976. And this in many ways facilitated such a change in the specific focus of his work that it would almost amount to a new and second scholarly career. Although it was while serving with the British army in the early 1940s that Jonas showed, via letters to his wife, a growing interest in questions of biology (Wolters 2001: 88), the years he spent teaching and writing in New York gave him the chance to turn serious attention to biology, biotechnology, and medical ethics. In 1966, Harper and Row published the first edition of his The Phenomenon of Life: Toward a Philosophical Biology, a work he later regarded as his most important in philosophy.

Harvard’s Red Herring

Jonas was at this point poised to become involved with the practical dimension of medical and research ethics. And he did so—quite boldly—by touching what we might call the “third rail” of medical research at that time, namely, the growing consensus that someone brain dead should be deemed dead enough so that a moral and legal green light could be given to the removal of organs for transplantation. In November of 1967—that is only a month before “the miracle of Cape Town”—Jonas took part in an American Academy of Arts and Sciences
conference in Boston and presented the lecture titled “Philosophical Reflections upon Experiments with Human Subjects.” The deeply cautionary content of that lecture surely reflected, at least implicitly, what had become well known by that time concerning the horrible experiments carried out by the Third Reich. Jonas now began to articulate opposition to the growing efforts to rationalize the “harvesting” of organs. During the following summer, on August 5, 1968, the “Harvard Report” on defining death was made public in the Journal of the American Medical Association, and many in America’s community of physicians were prepared to accept it. Jonas’s objections were known and discussed by some; they were published in Daedalus in the spring of the following year. He had praise for the personal integrity of Dr. Henry K. Beecher, the chair of the committee that had authored the report, but found something ominously suspicious in the report’s claim that it had somehow suddenly become necessary to provide a “new definition of death.” As the first part of its case for this pressing need, the Report had claimed that “improvements in resuscitative and supportive measures” produce an individual “whose heart continues to beat but whose brain is irreversibly damaged” and this, in turn, puts an unnecessary burden on families and on needed hospital beds (Kuhse and Singer 1999: 287). Jonas saw right through this. No new definition was, in fact, needed to stop futile treatment in such cases. Jonas noted that even the Catholic Church, which he cited as “for once eminently reasonable,” recognized that extraordinary means to maintain life in such cases are not obligatory. He implied that the Harvard rationale was little more than a red herring put in place to divert attention from the fact that the true aim of the report lay in its second part—namely, to define brain death as death so as to circumvent controversy in obtaining organs for transplantation.1

Jonas did not mince words. The new definition was designed to allow a transplantation physician “to advance the moment of declaring [a patient] dead”:

Permission not to turn off the respirator, but, on the contrary, to keep it on and thereby maintain the body in a state of what would have been “life” by the older definition (but is only a “simulacrum” of life by the new)—so as to get his organs and tissues under the ideal conditions of what would previously have been “vivisection.” (Jonas 1969: 243–4)

Jonas had publicly exposed precisely the problem that had been noticed earlier—and privately!—by a Harvard dean when vetting an earlier draft of Beecher’s report. Beecher had been warned: “[This] suggests that you wish to redefine death in order to make viable organs more readily available to persons requiring transplants” (Rothman 1991: 163). In the Memoirs that he wrote late in life, Jonas reports on the strong resistance to his resistance. We can surmise that, although he was not the only public figure to express reservations about the adequacy of the brain-death concept, Jonas’s argument was far more severe. Persons promoting transplantation wanted it to have public acceptance as an established medical therapy; Jonas, by contrast, classified it under the rubric of “experimenting with human subjects.” Moreover, it would not have gone unnoticed that he had depicted it as being essentially the same as what previously would have been called “vivisection.” These, especially for some physicians, were devastating comparisons. Jonas recalls:

Shortly after the publication of my paper, a group of doctors from San Francisco approached me, asking me to give my philosophical blessing after all for the definition of death as brain death, so important for the whole enterprise of organ transplantation [. . . .] [They] viewed [it] as major medical progress. (Jonas 2008: 199)

Jonas did not deny that these physicians were concerned to retain clear consciences and that the one who initiated exchanges with him was a “fellow emigré from the Hitler period, Otto Guttentag.” Invited by them to spend a week at the University of California Medical Center in San Francisco, Jonas heard their arguments for doing transplants and, guided by one of them, touched with his own hand the fluid that flowed as the result of a kidney transplant—although, in fact, the transplants from living donors had little to do with that to which he had publicly objected. Although Jonas was impressed with the professional dedication of this group, he did not change his mind. “There could be no doubt that what they were doing was magnificent, but my essential objection remained intact—the interests of the unconscious patient whom the doctors declared dead needed to be protected” (Jonas 2008: 200). The whole reason for Beecher’s eagerness to keep the respirator or ventilator on rather than off would be that, without it in play, natural processes would mean that the shutting down would travel from organ to organ: a brain gone dead would rapidly, even if not instantly, lead to a nonfunctioning heart and lungs as well. Bodies just work that way. And, as noted in an earlier chapter concerning what Japanese called the “three indicators,” humans throughout history, aware of this and opting for certainty, would customarily wait for multiple indicators—that is, a maximal, not a minimal, show of death signs.2 Jonas recognized that the transplant advocates wanted a different kind of society, one that perhaps for the first time in the history of science would allow a jump backward from maximal to minimal proof. In order to believe—and it surely was a matter of belief, not proof—that organs such as a still-beating heart may morally and legally be cut out from a dying body, society would, it was implied, have to pretend that it knows that the nonfunctioning of only one organ, the brain, is by itself what differentiates life from death. The problem is that, in Jonas’s view, this is to “arrogate a knowledge which we cannot possibly have.” He continued: Since we do not know the exact borderline between life and death, nothing less than the maximum definition of death will do—brain death plus heart death plus any other indicator that may be pertinent—before final violence is allowed to be done. (Jonas 1969: 245; original emphasis)


Minimum had been shoved into the place where good science had always required maximum. Therefore, Jonas wrote, organ removal constitutes “a final intervention of the most destructive kind.” An unethical practice had been built on bad science. Bluntly stated, he held that here it is the surgery that kills.

Against the Stream

At that time Jonas remained on the faculty of the New School for Social Research in New York, an institution whose graduate division, founded in 1933, was sometimes called the “University in Exile” because so many of its faculty, Hannah Arendt for instance, had fled the Nazi regime. But in the late 1960s new technologies were bringing into existence a field of study that would be called “bioethics,” and Jonas, who had been intensely researching the philosophy of biology, was invited to be a fellow at the Hastings Center, established in 1969 in Garrison, New York. Under the direction of Daniel Callahan, its purpose was to engage in research and discussion of “fundamental and emerging questions in medicine, health care, and biotechnology.” At the same time Jonas was working on what would eventually become his most important work, a book that came out in German in 1979 but was not published in English until 1984 as The Imperative of Responsibility: In Search of an Ethics for the Technological Age.

Jonas did not give up on his objections to the ethics of transplants based on the notion of brain death. Others had qualms but he had strong arguments, ones he was putting together into an essay written in 1970 and published in 1974. Its title, “Against the Stream: Comments on the Definition and Redefinition of Death,” reveals that Jonas himself realized that public opinion in the United States had been moving in the direction against which he had leveled his warnings. Indeed, the decade of the 1970s was hardly receptive to trenchant criticisms of a new procedure that was being showcased as itself a triumph of Western, especially American, biomedicine and sure to be the forerunner of what, many held, would and should be an uninterrupted sequence of dramatic biomedical breakthroughs. Laws were being passed that stipulated that transplants take place only after death of the “whole brain.” This term referred to everything down to where, as noted in Chapter 4, the medulla oblongata portion of the brain stem becomes, without clear differentiation, the spinal cord. Although the research of Shewmon would later show that this “whole” is not whole enough to account for what he had been observing, that decades-long American insistence on the so-called “whole brain” criterion, especially when put into legislation, reinforced in the public mind the impression that “maximum” was being met and covered. It was probably hoped that objections such as those voiced by Jonas were being met and defused. If transplants posed a problem, it seemed then to be technical, not ethical. For much of the 1970s, the rejection rate of transplanted hearts was high—so high, in fact, that it led to either a temporary moratorium or a quasi-moratorium on cardiac transplants in some contexts (Fox and Swazey 1974: 312). This began to change
dramatically with the discovery in 1972 that cyclosporine could be effectively used as an immunosuppressant; this enabled organ recipients to have far longer postsurgery lives. Fox and Swazey note:

The “advent” of cyclosporine precipitated a “transplant boom” [. . . .] [W]hat was euphorically called a “golden age of transplants” was tempered only by the few clinical moratoria that were eventually called on certain experimental forays and by the gradual and reluctant recognition of the side effects and limitations of cyclosporine. (Fox and Swazey 1992: xvi)

In addition, as noted earlier, the unabashed touting of usefulness by utilitarian writers such as Joseph Fletcher was having a powerful impact—presenting the transplant as the quintessential expression of Christian love in our modern world. This stuck even after Fletcher himself gave up being a Christian. Paul Ramsey, a conservative Christian ethicist at Princeton (and Fletcher’s bête noire), though articulate about the way in which the whole era seemed to be lusting after “eternal life” through biomedicine, did not openly fault the transplant boom. Probably also in Ramsey’s case the argument for going from Christian agapé to organs seemed irresistible, especially in that epoch.3 And among American Jewish scholars doing work on medical ethics, in spite of some initial protests against transplantation by orthodox Jews in Israel, the overwhelming opinion was that transplants, by saving lives, were not only licit but even mandated by certain ways of interpreting the Torah.

It would not, I think, be a mistake to say that at least in North America Jonas’s “Against the Stream” critique has until very recently been a spectacularly unnoticed essay. Wiese notes: “At that time, in the late 1960s and early 1970s, Jonas was a lonely voice when it came to the discourse on bioethics within Judaism as well as in general society” (Wiese 2008: 437). Even works that, especially in recent years, have resurrected his name as having importance and relevance to Jewish studies and even to bioethics will tend to make no mention of this essay. Jonas will be recognized as having issued early general warnings about research on human subjects; that he, getting specific, held that brain-death-based transplants are unethical tends to go unmentioned. This mode of recycling and reusing bodies—bodies which Jonas recognized were legally but not empirically dead—probably to his mind smacked too much of the methods and rationale of the Nazis to pass without a critique.

***

What was the gist of the argument in “Against the Stream”? It is clear that, having presented his criticisms of brain death in some public contexts, he had already found that the resistance to them was strong, perhaps vehement. He was going against what had become close to settled opinion in North America. His critics said that his science was flawed and that he had been trying to “counter precise
scientific fact with vague philosophical considerations.” But Jonas would have none of that. He knew good science when he saw it. With wit he sallied back:

[M]ine is an argument—a precise argument, I believe—about vagueness, viz., the vagueness of a condition. Aristotle observed that it is the mark of a well-educated man not to insist on greater precision in knowledge than the subject admits [. . . .] I am challenging the undue precision of a definition [of death] and of its practical application to an imprecise field. (Jonas 1974/1980: 134)

That is, to simply declare that death has occurred because the brain no longer functions while other organs are kept going is to impute precision—that of determining death—where none exists. And it is with precision that this domain, wherein precision is impossible, needs to be marked off as such. Imagining precision within it is an error.

The specialists’ talk about spontaneity, however, was the most problematic in the view of Jonas. His critics were saying that, since something initially extraneous to the human body—a respirator, for instance—is required to maintain its requisite functions, the body so assisted is not spontaneously or independently alive and, therefore, may be regarded and treated as dead. This is the condition of all brain-dead bodies being maintained and prepared for the removal of their organs. Without that external help from equipment, the shutdown would naturally travel from organ to organ until there would be no doubt of death. Jonas recognized that the promoters of transplantation wanted to treat a hypothetical as if it were an actual. They were deploying a “what-if” state of affairs—what if you were to remove the ventilator?—to state that the body when ventilated is dead. In order to show that the promoters had arbitrarily selected one organ as the only one that really matters, Jonas answered them with his own “what-if” thought experiment. What if, he asked, we were to have technology capable of assisting and augmenting the brain of the brain-dead body? “The case is wholly hypothetical, but I doubt that a doctor would feel at liberty to pronounce the patient dead on the ground of the nonspontaneity at the cerebral source, when it can be made to function by an auxiliary device” (Jonas 1974/1980: 136). The point that others had tried to make about spontaneity fell flat. Jonas’s point would now, even if not already then, be made by the concrete cases of persons who clearly are alive even though they would not be so if deprived of technological supports. People who need pacemakers are not living “spontaneously” in the way that Jonas’s critics were requiring. Others needing periodic dialysis would die without it. And we know of persons whose sports injuries forced them into being cervical quadriplegics, clearly alive although never without their ventilators. Christopher Reeve, after a devastating injury and for the rest of his life, was living proof that on this point Jonas had been right all along. Stephen Hawking also demonstrated that persons whose continued capacity to live depends on mechanical assists are not ones we today dare think
of as dead. What’s different about the brain-dead persons is that they are so deeply comatose that they would not be able to object if and when their organs were to be surgically extracted and the termination of their lives made certain via such surgery.

Trans-Atlantic

Jonas’s ability to establish his case was in jeopardy on two fronts. Surely the medical establishment was resistant. But, in addition, as a philosopher in America during the 1970s, he was, to put the matter bluntly, a philosopher of the wrong kind. Although the New School in New York had welcomed émigrés from Europe such as Hannah Arendt and himself, the time when Jonas came on the scene here was an era when the study of philosophy in America had turned against continental thought for supposedly having nothing significant to say to philosophers thinking and writing in English. Anglo-American philosophy, perhaps in part because a person had to be Anglophone to follow its conversations, was riding high. In those days if you were not a philosopher that the English and Americans could call “analytic,” you would likely be excluded from respect. Jonas later recounted how strong was the animus against himself and other Europeans during that time. In 1991, he told an audience in Germany that early in the 1970s a dean at the New School had privately told him that someone self-described as an “analytic philosopher” had passed judgment on Arendt and Jonas, saying: “[T]his is not philosophy. Philosophy, you should know, is a special science these days with a well-defined, very formal subject matter and a well-defined method. And you do not notice anything like that in the work of Arendt and Jonas” (quoted in Wolters 2001: 86–7). McCumber, in his book Time in the Ditch, holds that the fear-mongering political climate of the McCarthy era in America was a factor in how analytic philosophy, as a mode of thought then opting to “play it safe” politically, created a situation in which “the continentals, as they were beginning to be called, were like an oppressed minority asking only for equal rights and a chance to be heard” (McCumber 2001: 50). But even apart from what McCumber suggests, analytic philosophy in that era showed a knee-jerk resistance to pursuing topics that its practitioners could dismiss as “metaphysical” and likely to produce airy speculation rather than clear results. The fledgling subfield gradually coming to be called “bioethics” was deeply impacted by this. “Metaphysical” was often a code word for “religious” or even “theological” and, as such, reason enough to be kept out of discussions aimed at being “purely” secular. Callahan tells how, as director of the Hastings Center, by the mid-1970s he had trouble getting the new batch of philosophers to tolerate the participation of thinkers tainted by theology (Callahan 1999: 61). Ruth Macklin, who had entered this new field as a recent PhD in analytic philosophy, recollected to Fox and Swazey in a 1999 interview that Hans Jonas was “from the Continental School—from the non-analytic school” (Fox and Swazey 2008: 42).


The assumption then clearly was that there would be nothing of value or even “truly philosophical” coming from someone such as Jonas. Sidelining the “continentals” would nearly silence a dissident voice. The analytic philosophers’ eagerness to be as scientific as possible often led them into a ready embrace of whatever new came along in medical science. And this was especially so if new procedures could show that what was offered was newly useful. Analytic philosophers would purify the language and step aside to let the pre-existing utilitarian bias in Anglo-American thought shape the values. The sheer usefulness of anything, even an organ, was becoming paramount. Jonas saw this happening. In “Against the Stream” he targeted thinkers who would allow nothing to “interfere with the relentless expanding of the realm of sheer thinghood and unrestricted utility” (Jonas 1974/1980: 140). He recognized too that the whole stream was being pushed along by the forceful current that Americans refer to as the “bottom line.” Certainly the new transplant technologies would extend lives. But they would also keep the public in awe of a whole sequence of medical miracles—thus helping to justify each new expensive technology that would come along. Hospital budgets, research programs, and professional salaries would expand. Enterprise was American. Although it did not alter his objections, Jonas was realistic about what was blinding his contemporaries. In the final paragraph of his “Against the Stream” essay, he wrote: The beginning has been made; fiction cedes to enterprise, and the end is nowhere in sight. All that my essay can still do now—with little hope that it or its like will succeed—is to help ensure that “society,” this most nebulous of entities, goes through that door with its eyes open and not shut. (Jonas 1974/1980: 140)

The “new” Germany, trying to grasp the implications of turning decisively away from the ethos of the Third Reich, was, at least compared to the Americans, far more ready to listen to the ethical perspective of Jonas—including what he was saying about the need to monitor the uses of our technologies. Two of his books were published in Germany in the early 1970s. His Das Prinzip Verantwortung even became something of a bestseller when it appeared in 1979. He was given an honorary doctorate from the University of Marburg in 1976. Europeans, far ahead of their American counterparts in recognizing the potential for ecological catastrophe, saw the singular importance of this work by Jonas, one that would be published by the University of Chicago Press in 1984 as The Imperative of Responsibility: In Search of an Ethics for the Technological Age. In Germany, Jonas, now awarded prizes and honorary degrees, was being read widely as an intellectual whose thinking was so relevant that it could not be flippantly dismissed as “too metaphysical.” And Jonas’s depiction of the only “god” we might be able to respect “after Auschwitz”—one that had been literally powerless to intervene— is, Europeans seemed more able to see, one that makes the “theology” of Jonas scarcely of the type to which adamant secularists need object.


How very different the situation has been in America is indicated by a 2007 review in Forward [formerly The Jewish Daily Forward] of a book about Jonas. The review was provocatively titled “One of the Most Relevant Thinkers You’ve Never Heard Of ” (Kaufmann 2007). And Lawrence Troster, the Jewish chaplain of Bard College, writes: “And while among European environmentalists [Jonas’s] writing on environmental ethics is highly regarded, among North American environmentalists his work is not well known” (Troster 2008: 374). The fact that Jonas is a contemporary thinker with wide relevance may be beginning to catch on even here.

Trans-Pacific

The appreciation of Jonas in Japan may well be the strongest of any in the world, perhaps even exceeding that found in Germany. The books and essays about him in Japanese that I have been able to collect and collate are too numerous to mention and I am no doubt unaware of some in existence. The Japanese interest in Jonas shares with that of the Germans a firm sense that Jonas has unique relevance because of his articulation of the need for intergenerational responsibility, specifically our own generation’s duty to ensure that future ones receive, handed down from us, not only a livable planet but also a kind of humanhood biologically the same as our own.4 The interest in Jonas in Japan, however, has tended to differ sharply from that in the United States on one point—namely, their readiness to cite his critique of brain-death-based transplantation contrasted with Americans’ apparent preference to keep that barely mentioned, if at all. As noted earlier, it was in a work in Japanese by Hisatake Katō, a major ethicist, that I first came even to know that Jonas had written “Against the Stream.”5 Here, however, I turn to a book that early on in my own studies caught my attention.

Seishi Ishii, also a philosopher, had until his death been an ethicist at the Hyogo Prefectural School of Nursing. One of his books, published in 1995, focused on the principles of healing and included an extensive discussion of the importance of Jonas to medical practitioners. Ishii traced exactly how Jonas’s critique of brain death fit into his philosophical perspective on biology. This had been set forth in Jonas’s The Phenomenon of Life: Toward a Philosophical Biology, published in 1966 and, although regarded by the author as of paramount importance, given little attention in the Anglophone world. Ishii read and cited its German version, titled Organismus und Freiheit. He commented:

With respect to the principle of freedom our usual tendency is to view it as a special characteristic of the human mind and especially as expressed in what we as subjects show in our moral behavior and artistic creativity. Jonas, by contrast, detects the origins of freedom at a far earlier point—in the existence of any organic life-form. And because European thinkers were incapable of
recognizing that all organisms are in possession of the core principle of freedom, their philosophy got trapped in a serious error. (Ishii 1995: 60)

Ishii saw the precise reason why the philosopher had been unable to accept brain death. He recognized that Jonas was moving against not just an American stream but, more broadly, something much more widely assumed in Euro-American thought. According to Ishii, Jonas had uncovered something “European thinkers were incapable of seeing,” namely, that freedom is not exclusive to homo sapiens. Not only virtually every philosopher but ordinary people in the modern West, Jonas realized, simply assume that freedom is first and exclusively found in the special kind of consciousness that we humans have. We see it as rooted in our special kind of brain. But Jonas insisted that we look farther back in the evolutionary line and recognize that all forms of organic life have freedom—whereas inorganic ones do not. He wrote:

One expects to encounter the term [freedom] in the area of mind and will, and not before: but if the mind is prefigured in the organic from the beginning, then freedom is. And indeed our contention is that even metabolism, the basic level of all organic existence, exhibits it: that it in itself is the first form of freedom. (Jonas 1966: 3)

This is the central contention of The Phenomenon of Life. Jonas grants that humans are the most developed species in terms of freedom but insists that for this there is a continuum, one whose starting point is far lower, in preconscious organic life. Where there is metabolism there is freedom. This is not unrelated to bioethics. To recognize the entrenchment of what Jonas opposed, we need only recall how quickly Truog claimed that, if the existing notion of brain death had collapsed, the absence of consciousness in the “vegetative” state should qualify a body for organ removal. Recall, as well, those who charged Shewmon with going against the results of Darwinian evolution when demonstrating that “high” functions were found to be based lower than in the medulla oblongata. Paradoxically, it was both Jonas and the Japanese agreeing with him who were far more comfortable with the evolutionary processes depicted by Darwin and others.

The belief no longer credible to Jonas was what had been expounded by René Descartes (1596–1650), one that divides us into our mental life (res cogitans) and “the rest” (res extensa), that part to which our bodies belong. The Cartesian belief that humans are what they are because of their capacity to think was being confronted by a fundamental doubt. Descartes had had such a faith in how consciousness divides us at our core from all lower creatures that he had, now somewhat infamously, even insisted that nonhuman animals simply do not have minds and, therefore, do not have genuine feelings. Performing vivisections on them—something done publicly in the eighteenth century to suggest that their
yelps and whimperings did not prove that animals feel real pain—had been one practical conclusion drawn from Descartes’s sharp differentiation. Jonas, ready to spot false dualisms ever since having detected some of them in Gnostic texts, saw that one implication of Cartesianism was that “exterior reality, under the title of res extensa entirely detached from the interior reality of thought, henceforth constituted a self-contained field for the universal application of mathematical and mechanical analysis: the very idea of ‘object’ was transformed by the dualistic expurgation” (Jonas 1966: 35). Descartes had provided the rationale for assuming, later when it became technically possible, that a mind-less or brain-less (aka “brain dead”) but otherwise alive body was no longer that of someone really human but, rather, a thing that might be mined for its usable parts. Seishi Ishii clearly saw that Jonas had exposed a fundamental fault in modern European philosophy, a whole tradition of thought deeply indebted to Descartes (Ishii 1995: 73). But this, Ishii went on to claim, is why questions about the ethics of doing organ transplants are anything but esoteric or merely a matter of how to iron out some troublesome wrinkles within medical practice. “The problem raised by doing transplantation of this type goes to the core of what for each and every one of us it might mean to be human” (Ishii 1995: 123). Ishii held that if you read Jonas as he was meant to be read, you would see that American biotechnology had created more problems than it solved. Japan ought not to fall in line. “These are problems which, so it is claimed, have already been solved in America. Jonas’s arguments, however, bring up some fundamental matters that—perhaps especially by us in Japan—ought today to be scrutinized anew” (Ishii 1995: 127).

Organs Missing

Although developing later than elsewhere, even in North America there has been a gradual growth in attention to Jonas. Scholars engaged here in Jewish studies and giving attention to philosophers who were Jews have come to see that Jonas had been seriously neglected. One of the first was Lawrence Vogel, who wrote important introductions to his own editing of Jonas’s Mortality and Morality: A Search for the Good after Auschwitz, published in 1996, and for the republication of The Phenomenon of Life in 2001. Also in 2001 Richard J. Bernstein and the other editors of the Graduate Philosophy Journal, published by the university where Jonas had long taught, put together a special edition to re-appraise his work. Other books and essays, not mentioned here, have appeared. The most comprehensive one in English to date is the volume edited by Tirosh-Samuelson and Wiese (2008). The focus in most of these works has been either on Jonas as a Jewish thinker or on him as situated within the context of philosophy in Europe during the twentieth century. In spite of evidence that the philosophy of biology and questions related to medical research and bioethics were central to his concerns during the decades of his most mature writing, his views on these matters, with some exceptions, have tended to be neglected or even sidelined. In 1995, two years after his death,
the editors of the Hastings Center Report published a series of essays and tributes under the rubric of “The Legacy of Hans Jonas.” Its lead article was by Leon Kass, a physician teaching at the University of Chicago (Kass 1995). Jonas, who had already been sidelined as too “metaphysical” when analytic philosophers began to dominate American bioethics, could hardly be helped in the short run by being championed by Kass, who himself subsequently became a controversial public figure as the chair between 2001 and 2005 of the President’s Council on Bioethics appointed by George W. Bush. Inadvertently caught up on the bioethics front of America’s culture wars, even after his death, the stock of Jonas among liberal and even more centrist bioethicists here could hardly have been expected to rise. The treatment of Jonas in a widely read 2001 book by Richard Wolin on the four most eminent Jewish students of Heidegger probably did Jonas’s reputation for contemporary relevance no good. Wolin, unfairly in my view, charged Jonas with remaining far too Heideggerian in his own perspective and holding views tending to be both “antidemocratic” and “paternalistic” (LaFleur 2008; Wolin 2001). A charge of “paternalism” would be enough to damn him in the eyes of most American bioethicists, and I have myself encountered the opinion of persons summarily dismissing Jonas as a “Heideggerian.” Perhaps unknown to American bioethicists, who will find it appalling if a Jew remained intellectually beholden to someone who became a Nazi, are strong writings by Jonas in which he outlined his fundamental disagreements with his one-time mentor. In an essay on Jonas that I regard as very important, Becchi calls attention to Jonas’s focus on applied ethics and suggests that, even though Jonas himself might have wanted it otherwise, what is most relevant in his writings can be appropriated if “read independently of the metaphysical foundation” Jonas himself wanted (Becchi 2002: 158). My impression is that virtually all the Japanese who read Jonas for the significance of what he says about biotechnology and ethics read him in precisely this way—that is, as not having to be encumbered by the “metaphysics” that in America makes his defenders skittish and gives others a rationale to avoid reading him altogether. Here, however, is where Japanese readers of Jonas implicitly expose another item of detectable strangeness in how Jonas tends to be read by Americans. It’s the fact that there is among us virtually no mention of his opposition to organ transplants based on the notion of brain death. Within the more than 500 pages of the recent volume on him, there is only one reference to the fact that he dealt with the ethics of organ transplants and it does not tell the reader that Jonas wrote against this practice if based on the notion of brain death (Wiese in Tirosh-Samuelson and Wiese 2008: 437). To my knowledge, Leon Kass, the most public defender of Jonas in the American context, never mentions the “Against the Stream” essay. In their studies of transplantation in American society, Fox and Swazey, while known to value Jonas’s prudentialist position on many issues, do not explicitly mention his opposition to the practice. In a word, what Jonas had to say about organs and their transplantation has been missing from what in fact have been fairly extensive discussions by Americans about the ethics of this procedure. When recognized as having been omitted, it shapes up as a rather spectacular omission.


That’s been so until now. It may be changing. Now the notion of brain death has been coming unraveled by scientific evidence. And what looks very much like a rather desperate scramble to find alternative “definitions” of death by Truog and others has not fixed that problem and holds no promise of doing so. And some philosophers have begun to notice. In a recent study Lizza, for instance, argues that we should give up the search for “a unitary definition or criterion of death.” At the very outset of his own study he writes that his own thesis is not new: “Hans Jonas got it right very early in the debate over the definition of death when he wrote that ‘the decision to be made [on how to treat individuals who have lost all brain function . . .] is an axiological one and not already made by clinical fact’” (Lizza 2006: 4). A viewpoint long sidelined may be coming back, where it can be seen. Lizza also comments that the Japanese were right to be skeptical—and skeptical early on. I confess to having been curious as to whether or not Jonas, especially when aware that his views on these matters were not so much contested as simply dropped out of sight for decades, changed his criticism of brain death and his deep reservations about doing transplants. I took advantage of an opportunity to question his wife, Lore Jonas, on this and other matters relating to the views of her late husband. At the outset of our private conversation Lore Jonas drew my attention to an essay by Nirenberg that had appeared in print two months earlier. While reviewing a recent book on Jonas by Wiese as well as the published Memoirs of Jonas edited by Wiese, Nirenberg discussed the particular problems of defining Jonas as a “Jewish” thinker (Nirenberg 2008). The philosopher’s wife liked what he had written and stated, from her perspective, some comparisons: “Wolin,” she said, “makes Hans out to be a Heideggerean and, although I am very grateful to Wiese, he makes my husband out to be more of a ‘Jewish thinker’ than he really was. Nirenberg gets it right” (Conversation with L. Jonas, 13 January 2009). I eventually got around to my most pressing question: Did Hans Jonas, perhaps, change his mind and later in life abandon his view that the notion of brain death is wrongheaded? Or think it no longer so important? She replied that he definitely did not change his viewpoint. And she then retrieved from her own library a 1994 book in German titled Wann ist der Mensch tot? Organverpflanzung und Hirntodkriterium and allowed me to borrow it to show how consistent and concerned he had been on this to the very end of his life. The opening sentence of this book of essays refers to what in 1970 Jonas had written on this subject, the “Against the Stream” essay published in 1985 and titled “Gegen den Strom.” This book’s first item is the copy of a letter which Jonas less than three months before his own death in 1993 had written to the two editors of the new book in German; he wrote in support of their project of keeping the public aware that the brain death/organ transplantation issue had never been satisfactorily solved. In doing so he provided them with a copy of a letter he had written a month earlier in German to an old friend, Hans-Bernhard Wuermeling, a physician who had been part of the committee in the city of Erlangen at the time when the decision at a hospital there had been in favor of continuing “life support” to Marion Ploch,

the  pregnant but brain-dead woman discussed in this book’s third chapter. Wuermeling was serving on a committee whose express hope was that through technological intervention the woman, though technically dead, could be assisted to deliver a live child many months later. This, the infamous “Erlangen Case,” had, as noted, been widely debated in the German public media at the time. Jonas, as could be expected, deemed the committee’s decision to be in error and wrote out his opposition strongly and in detail (Hoff and Schmitten 1994). From this it is clear that Jonas not only did not alter his position but also continued to think of the issue as one of ongoing ethical importance, one that should not be put on the intellectual and social “back shelf ” and forgotten.


Chapter 10 CLOSETED MEDICAL BOMBS

Battlefield Research

Divergent viewpoints among societies concerning the propriety of certain medical research agendas are not based solely upon differences in the general intellectual, religious, and cultural traditions of those societies. They can also be powerfully influenced by crucial differences in the respective historical experiences each has had. Especially those parts of such experience which are fairly recent and retained within the memory of living persons are likely to influence what they assume they have learned as the significant “lessons” therefrom. Wars and the aftermath of wars will tend to act most powerfully within such memories and indirectly influence different societies’ divergent opinions about the role of medicine and medical technologies. Technology, both military and medical, that facilitates a nation’s victory will retrospectively seem itself to have been a “good” aspect of a “good war” and this, in turn, can easily feed that nation’s fairly optimistic view of military-medical technology in general. On the other hand, nations and peoples that lose wars, especially if their own most “advanced” technologies had in the end failed them and if they ended up being devastated by the “latest” developments in firepower of their opponents, can reasonably be expected to have a much more skeptical view vis-à-vis large claims that new technologies bring only “miracles.” Their own experience had been that, depending upon the use and outcome, such technologies can also bring devastation and death.

*Chapter 6, draft one (2002). Among the epigraphs included in the original manuscript was a quotation from Rihito Kimura (1987):

The idea of a “patient’s right to know,” so important in the conceptualization of bioethics, must be linked to a whole society’s fundamental right to be kept informed about what its own government is doing in political, military, and diplomatic affairs. This requires close linkage, with the government obliged to gain the “consent” of its people.

This is, of course, in keeping with one of Biolust’s central arguments—that for there to be truly informed consent on an individual level, more work needs to be done on the societal level to inform the public of precisely what is involved in each new biotechnological breakthrough. LaFleur sees Japan’s public debate on the issue of brain death as exemplary in this regard.


There is no East/West divide on this issue. There is no reason to assume that, because it was not traditionally a participant in “Western civilization,” Japan would have had a deeply engrained knee-jerk hostility to medical science.1 Nor, importantly, should we overlook the fact that differently experienced wars can be expected to register differing levels of optimism or pessimism about such technologies—and do so within the experience of one and the same nation over time. If looked at historically, we can see Japan at one stage in its modern history extraordinarily optimistic about there being only a positive connection between “advanced medicine” and Japan’s capacity as a modern nation to fight and win wars. By calling attention to the significance of a book published as long ago as 1906 by an American physician, a 1996 book by Hal Gold helps us grasp the roller-coaster aspect of Japan’s experience with war-related medicine during the twentieth century.

The story is best begun with the 1906 book by Louis Livingston Seaman, MD, LL.D., an army doctor who had officially retired at the time he wrote his book, The Real Triumph of Japan: The Conquest of the Silent Foe. It is, in effect, the appeal of someone we today might call a “whistle-blower.” Addressing himself in print to “the President, the Cabinet, the Army, the Congress and the People of the United States,” Seaman wanted all these parties to “take heed of what the Yellow Man of the Orient has done, and to realize the positive criminality in ever again going to war [as Americans] unprepared to combat preventable disease” (Seaman 1906: 26). He had been appalled at what he regarded as the high number of totally unnecessary deaths during some American military campaigns, those of the 1898 Spanish-American War in particular. What enraged him was the fact that, due to policies whereby medics on the field had advisory roles but no authority, their sound advice was more often than not ignored or even demeaned by the military brass. The result was that deaths of American soldiers from disease and infection vastly outnumbered deaths from bullet wounds. He cites a record he had personally received from the surgeon general giving statistics for the American Army in 1898. In the course of fighting in the Philippines, in Puerto Rico, and in Cuba, deaths from disease were fourteen times higher than those from battle casualties (Seaman 1906: 288–9).

This had prompted the American doctor to see if things were different in the way Japan had conducted its war with Russia in 1904–5. He had gotten official permission, ostensibly from Japanese imperial authorities, to be a foreign military observer both of hospitals in Tokyo and of conditions on the battlefields of Manchuria. What he observed there and what was given him as statistics proved, at least by his own account, to be data in stark contrast to those of contemporaneous American practice. Japan had reversed the proportions, with the result that in the Russo-Japanese War it lost only one soldier to disease for every four killed by artillery. What the Japanese had recognized and the Americans had stubbornly refused to see is that the medic on the field can often save more lives than can the regular officer.


[Japan] organized her Medical Department on broad, generous lines and gave its representatives the rank and power their great responsibilities merit, recognizing that they had to deal with a foe that [in the past had] killed eighty [percent] of the total mortality. (Seaman 1906: 7)

Most of this impressive saving of human lives, often resulting in a rather rapid return to battle on the part of the slightly wounded who had then been repaired on the field, was seen by Seaman as due to Japan’s far superior level of sanitation and its vigilance in the early detection of “the deadly germs of typhoid, dysentery, and cholera”—that which he called “the silent foe” in any military operation.2 His prose puts it pithily, even if perhaps a bit hyperbolically:

I have just returned from the headquarters of the Second Imperial Army on the Mongolian frontier, commanded by General Oku, where I found the busiest instrument in the campaign was not the Murata rifle, but the monocular microscope. My opportunities for observation were unexcelled [. . . .] (Seaman 1906: 14)

Seaman noted that sanitation was complemented by research; “every hospital throughout Japan, and every base and field hospital in Manchuria, has its bacteriological laboratory” (Seaman 1906: 11). He was, in addition, amazed that the medics he met, even though their salaries were unimpressive, carried on their own private research in addition to what they did in the larger laboratories. Seaman, interestingly, never uses his age’s common stereotypes of Japanese and other Asians except on those occasions, rare in his narrative, when he seems to want to be echoing the surprised voices of his countrymen, persons shown by a book such as Seaman’s own that in some areas Americans were less scientific, less rational, less intellectually curious, and less “advanced” than the Japanese.3

Why is Seaman’s book significant? We can, I suggest, draw this out on different levels. Hal Gold grasps one. At the outset of a book detailing what were, in fact, horrific and despicable practices of the Japanese medical military just a few decades later than the Russo-Japanese war, he notes the stark nature of the contrast:

By protecting its soldiers from disease in the Manchurian conflict thirty years earlier, Japan had earned international admiration by establishing itself as the world leader in military medicine. Now, the direction into which it had channeled its medical energies had changed, and its ethics began to twist and mutate, as well. The leaders of Japan’s military during the days of the Russo-Japanese War would undoubtedly have been appalled. (Gold 1996: 36)

That is, the world that admired and praised Japan in 1905 was a world that later became appalled when learning, initially since the 1960s but more fully in the 1990s, of what the Japanese medical military had done in China during the 1930s and early 1940s. Most egregious then were the operations of a special military unit, number 731, whose activities were kept secret to the general Japanese public during the Second World War and revealed only in the postwar period by intrepid researchers and historians who felt compelled to make the facts public. One of these was Saburō Ienaga (1913–2002), who in public lectures in 1965 provided the following information to a largely shocked Japanese public. In the resultant book, he wrote: The 731 Unit was a bacteriological warfare research unit located in the suburbs of Harbin, Manchuria, under the cover name of Epidemic Prevention and Potable Water Supply Unit. Lieutenant General Shirō Ishii, a medical doctor, was the commanding officer. The unit was top secret; even its existence was not known to the public. After the war its activities came to light in various ways.  [. . . .] The Unit 731 research included the plague, cholera, typhoid, frostbite, and gas gangrene. Their successes included a defoliation bacilli bomb that blighted an area of 50 square kilometers. A special project named Maruta [logs] used human beings for experiments. Several thousand persons were secretly transported from places in Manchuria and China and confined in a special unit prison. When needed for the experiment, the human guinea pigs were placed in laboratory rooms and injected with bacteria to test a germ’s potency. [. . . .] The Maruta prisoners were given food dosed with potassium cyanide: those who did not eat the food were machine-gunned. [. . . .] [At war’s end] personnel of 731 Unit were given highest priority evacuation back to Japan, before the rest of the Kwantung Army or other units. (Ienaga 1978: 188–9)

The ostensible focus of Unit 731 was merely an extension of what had proven so beneficial to the Japanese military operation in 1904–5 and that for which Japan had received international praise—namely, the construction of systems for water filtration and research on the prevention of epidemics that emerge in the midst of military campaigns. What was different in the 1930s, however, was not only the scale of the research being pursued “on the field” but the ratcheting up of the research program to include the use of thousands of non-Japanese as the “subjects” of live experiments, ones which in most cases ended in their deaths. The physical facilities were huge. Although almost totally destroyed by the Japanese military itself at the war’s end to prevent it from falling into the hands of the Soviet army, the complex at Pingfang used by Unit 731 had, at least according to its blueprint, seventy-six structures (Harris 1994: 34). Far fewer in number but also atrocious were the vivisections of eight downed American airmen by physicians at Kyushu Imperial University just prior to the end of the Second World War. Ienaga relates the gruesome details (Ienaga 1978: 189–90). The Japanese public was made widely aware of this in 1958 with the

publication of Umi to dokuyaku (English translation, The Sea and Poison, 1973), a novel by Shūsaku Endō which centers on these experiments and the subsequent guilt of the physicians involved. The old rationale of preventing casualties was being used. But by this point in time it had been turned into the stated justification for horrific experiments, ones that used uninformed human prisoners in torturous contexts, ones often resulting in death. Vivisections, envisioned universally to be an unparalleled way to observe human functions and pathologies, were, by the doctors of Unit 731, actually carried out—and in great numbers. As Gold notes, the Japanese were expecting to have to fight the Soviet army at some point and to have to do so in conditions of unbearable cold. At the time of the Manchurian Incident of 1931, Japan’s military doctors had already seen lots of the soldiers afflicted with frostbite. Therefore, getting a detailed and accurate knowledge of the process of frostbite could, they knew, lead directly to knowledge of how to prevent and treat it. We can assume that the medical officers of Unit 731 easily reasoned that through such research, however horrendous and objectionable under ordinary conditions, the lives of many Japanese soldiers could be saved. Moreover battles, perhaps the war as a whole, might thereby be won. Gold narrates the details, revealed after the war, of this research: People were taken from prison into below-freezing temperature. They were tied up, with their arms bared and soaked with water. Water was poured over the arms regularly; sometimes the ice that formed on them would be chipped away and water again poured over. The researcher would strike the limbs regularly with a club. When an arm made a sound like a wooden board’s being hit, this indicated that the limb was frozen through, and from there different methods of treatment were tested. (Gold 1996: 81–2)

Here the pursuit of knowledge had become totally paramount. Lives were being taken—with the rationale that thereby other lives would be saved. These operations in China also gathered precise data about the potential of bacteriological and chemical warfare. Anthrax, according to those who have researched this topic, had its world debut under the aegis of Dr. Ishii’s infamous program (Harris 1994: 68–9). Although the Americans too were exploring the possibility of using anthrax and other biological weapons at almost the same time, the Japanese in China moved more aggressively forward, testing their anthrax on Chinese prisoners, persons whom they saw fit to redescribe in the official reports as maruta, or “logs.”

Finding Cover for Unit 731

The release of poisonous sarin gas in the subways of Tokyo by the religious group Aum Shinrikyō caught the world’s attention in 1995. Unknown outside of Japan, however, was the fact that a near avalanche of acerbic limericks (senryū) referring
to this cult and its killings was contributed by Japanese individuals to the national newspapers immediately after the attack. And at least one of these drew a telling connection.4 It went:

Ah, Aum Shinrikyō—
The Unit 731
Of the 1990s.

The reference was arch. The way in which it came out suggests that in Japan today it is possible but still far from comfortable to make reference to Unit 731 and its terribly immoral record of war-related medical research. Persons outside of Japan, of course, are much more free and sometimes even eager to recite the details of these experiments, perhaps even as a way of suggesting the existence of some fundamental cleavage between a “West” that adheres to moral values and “other” societies (preeminently Japan?) which, they assume, do not. There is, however, a real limit to the amount of political or ideological mileage that can be gotten by “our side” in relating the facts about Unit 731. This has been especially so since the publication in 1994 of a book by Sheldon H. Harris, a book whose title, Factories of Death: Japanese Biological Warfare, 1932–45, and the American Cover-up, itself suggests the dilemma faced by any white/black contrast. The book’s last third details the degree to which American officials, under the new aegis of the Cold War after 1945, entered into deep complicity with Dr. Shirō Ishii, the head of 731 and someone who, if he had not been so “valuable,” would surely have been executed as one of Japan’s war criminals. Harris writes: Ishii [after the war] now understood how eager the Americans were to learn all they could from him about his work in Manchuria. He soon concluded that his knowledge, and that of his former associates, could be bartered with the Americans. The price for their precious information would be immunity from prosecution for crimes. (Harris 1994: 191–2)

Ishii and his colleagues thus evaded being brought before the Tokyo War Crimes Trials of 1946–8.5 The Cold War by then had intensified—with the result that Americans had become increasingly eager to cover up the extent to which their own medical and military knowledge was being much enhanced by access to the results of Unit 731’s work. Harris, who checked the records, writes: No one in 1948 was prepared to raise the issue of ethics, or morality, or traditional Western or Judeo-Christian human values [. . . .] The questions of ethics and morality as they affected scientists in Japan and in the United States never once entered into a single discussion that is recorded [. . . .] (Harris 1994: 221)

This cover-up was, at least for Americans, fairly securely in place—until the end of the Cold War and access to formerly classified documents. Disclosure of this history of complicity makes it now much more difficult to paint an intellectual portrait that would contrast, at least in this domain, the behavior of an ethically “upright” side and its opposite, the party of total depravation. A central question we are now forced to ask becomes: If it is reprehensible to use “hot war” conditions to rationalize the killing of humans for the sake of gaining valuable medical knowledge, is it not also reprehensible, even if at a lower level, to use “cold war” conditions to rationalize the gaining and use of that same acquired knowledge, especially when doing so involves entering into a relation of moral complicity between the knowledge-gatherers and the knowledge-receivers? This may be a question which the parties involved could not and dared not ask during the twentieth century. In the twenty-first century, however, I suggest, we have both a right and a need to ask a more probing set of questions—especially those which look hard at the generalized and universalized process whereby the rationale of saving lives becomes the rhetoric under which research agendas bit by bit move, first, into ethically questionable and, from there, into deeply compromised domains.

Gathering Genetic Information

It might have been a consolation to Americans if the “difficult” evidence concerning their own actions had been limited to complicity with Dr. Ishii in order to gain access to what he had learned. Unfortunately, that is not the case. Now also available to us are materials showing Americans themselves actively collecting data in a manner that is ethically suspect. Whether it was the receipt of tainted data or the active collection of the same, the ruling motive seems always to have been (and likely remains) the same: materials that contain knowledge that can be put to use are materials simply too valuable to waste.6

Here, something more needs to be said about what war can teach us about the pattern of our rationalizations. Hot wars and cold wars, real wars and potential wars—they all bring extraordinary pressures to bear upon medical communities and at the same time offer them extraordinary venues for research and possible personal renown. Perhaps this is nowhere more in evidence than in what we now know about the medical research aspect of the nuclear bombing of Hiroshima and Nagasaki in August 1945. To refer to a “medical research aspect” of these bombings, it needs saying clearly, is not to suggest that we have evidence showing such to be part of the intention of dropping those two bombs.7 But, it is clear that, at least after the fact, the usefulness of what had been done to those two cities in terms of their provision of medically significant data was readily grasped. The bombs destroyed but they also produced! And what they had produced was an extraordinarily rich cache of empirical data about the effect of radiation upon the human body. This meant that the bodies of persons still alive, Japan’s radiated
hibakusha, but also the body parts of persons killed by the bombs provided what was tantamount to a data-rich laboratory and one of uncommon value. In her thorough and troubling examination of the involvement of American scientists in the postwar gathering and use of atomic bomb data, M. Susan Lindee has shown how radiated Japanese were taken as invaluable data providers for American scientists—a sentiment perhaps let slip out by a director of the Atomic Bomb Casualty Commission who in 1957 referred to such Japanese as “the most important people living” (Lindee 1994: 5). Such people were a “windfall” benefit, at least for research, of the bombs. The collection of case histories of the hibakusha and the hauling off to Washington of at least 4,000 parts of human bodies were acts that some Japanese later cited as proof that America confiscated at least these items as “spoils of war.” Lindee concedes: “Indeed, if spoils are the material things that back up and make manifest military victory, then the body parts of the atomic bomb victims were, in some ways, just that” (Lindee 1998: 383). I think it not inappropriate to state that, unlike any other bomb made and used prior to 1945, the kind of weapon used on Hiroshima and Nagasaki was, in its effect (even if not in its design), what we may rightly refer to as a medical bomb. And it was quite probably the world’s first. Concerning this kind of weapon, Lindee reminds us: The bomb’s peculiar form of terror, ionizing radiation, was known to cause biological changes in both humans and experimental organisms. This made the long-term studies of the atomic bomb’s victims a high priority, for unlike victims of conventional bombing, their bodies’ responses to the effects of bombs might take decades to appear. [. . . .] The long-term studies of the atomic bomb survivors differed from [those of conventional bombs] in that they involved a fundamental biological problem with ramifications far beyond military strategy. The survivors had been exposed to a form of energy that was widely used in medical therapy and known to cause heritable mutations and other cellular changes in experimental organisms. (Lindee 1994: 10–11)

This desire for getting the data led to some outrageous policies. The American physicians actively collecting data in Hiroshima and Nagasaki were forced by their government at home to adopt a “no treatment” policy on the ground. This meant that American physicians, not without conflicts of conscience in some cases, were prohibited by policy from giving medical treatment to bomb victims in spite of their obvious suffering. It was hinted that any such treatment would itself sully the purity of the data (Lindee 1994: 117–42).

Japanese repugnance vis-à-vis this research forced the Americans to offer incentives. Lindee reports that, precisely because the Americans were so interested in the effects of radiation on the newborn, the Atomic Bomb Casualty Commission put monetary rewards before midwives in Hiroshima and Nagasaki so that they could receive a bonus for reporting on births of abnormal infants. Language in this situation too had to be rendered less direct. “The fact that many Japanese
mothers did not want their infants to be autopsied led the ABCC to begin calling the autopsy an ‘examination’” (Lindee 1994: 93).

A massive amount of collected data was then stored in Washington. It eventually became the property of the State Department, and then “by 1972 the body parts had become valuable to the United States not as pieces of nature but as commodities that could be used in negotiations with the government of Japan” (Lindee 1998: 403). By this point, of course, there had been a growing public awareness of this situation within Japan, and politicians at the highest level there were being publicly urged to repossess and repatriate the bodily remains of former citizens. The issue had become one which Japan’s socialists and communists were ready to exploit. Therefore, at some point during the Nixon administration the decision was made to return the data, perhaps now no longer as valuable as it once had been. During the spring of 1973, boxes containing 23,000 items, including 4,000 body parts preserved as “specimens,” arrived in Japan from the United States (Lindee 1998: 376).

Reckonings

Kei’ichi Tsuneishi (b. 1943) is probably the scholar who, more than any other, has researched and exposed the details of Japan’s wartime medical atrocities. His Kieta saikin sen-butai (The Bacteriological War Unit That Disappeared) has now become the standard work in this field (Tsuneishi 1993). And in 1998 he contributed an essay on “Medicine and War” to an important set of published lectures on bioethics. Having shown that at least one person partially responsible for HIV deaths in Japan due to the sale there of tainted blood was someone who decades before had been a never-prosecuted participant in the 731 crimes, Tsuneishi faulted the Japanese Ministry of Health and Welfare in particular but the medical community as a whole for even today not having come to grips with the wartime crimes and their long-range significance. In the view of Tsuneishi it today remains necessary for the Japanese medical community, of which Unit 731 was part in the past, to deal openly with this phase of its own history. He writes:

To do so would be for physicians themselves to take some very strong medicine. But unless they do so the community of medics will continue to lack the confidence of ordinary people. A loss of their confidence shows up when today even persons otherwise willing to be involved in procedures such as organ transplantation here refuse to do so. In a word, as long as the medical community is perceived from the outside as having undergone some kind of internal collapse, there will be suspicion of medical research projects and also of things said by physicians. (Tsuneishi 1998: 215)

Holding that Germany’s physicians have regained an important level of public confidence by having come to grips with the medical crimes of their Nazi-era

colleagues, Tsuneishi avers that sooner or later Japan’s medics will realize they have no alternative but to do the same. He appears to be right in this. This appropriately opens up toward at least two further questions. The first would be whether or not the American medical community ought to be more forthright about ways in which war or the mentality warfare occasioned—or may even still occasion—has affected the trajectories of its own research. Americans generally ignore or resist applying to themselves the import of an observation such as the following by one Japanese ethicist: The thesis that war is the parent that gives birth to scientific and technological advances appears especially true with respect to medicine. It can be said that developments in bacteriology, in experiments performed on the human body, and similar things were often direct results—irreversible ones at that—of warfare. (Washida 1988: 55, note 4)

Although the level of seriousness surely differs, the American medical community might also do well to recognize its own historic complicity in research that benefited by the conditions of war, by receiving and using data procured by making compacts with criminal doctors, and by operations in devastated Hiroshima and Nagasaki that focused so exclusively upon the collection of physical materials that nearby noncombatants, suffering and dying on the scene, were refused treatment lest the data be lost or sullied. This “no treatment” policy adopted in Hiroshima and Nagasaki is suspiciously like the practice of medical researchers in the now exposed Tuskegee study, in which African American men, suffering and soon to die from syphilis, were deliberately left untreated even though effective medication had become available. In Tuskegee as in Hiroshima and Nagasaki the desire for data overwhelmed the medical profession’s obligation to heal. That in both instances non-Caucasians died as a result is not without significance.8

And this leads to the second question: Is it not time for medical communities everywhere to recognize that the problems and dangers here are intrinsic to the nature of the increasingly worrisome relationship between the therapeutic goal of medicine and the research which historically had been its assistant but now too often functions as its master? If this is as central a question as I think it is, then the history of Japan’s twentieth-century experience—both the heights and depths—in this domain can be unusually instructive. And it becomes especially so if conceptually tied to—that is, not just contrasted with—the record of America’s own medical research during the same period.

A proposal to look at the significance of the points of similarity and continuity—even for heuristic purposes—will meet with no little resistance here. And that is in part because we are habituated to the making of easy contrasts between Japan and “the West” in these matters. Still lodged with surprising tenacity within the popular consciousness on our side of the sea is the notion that the religious and intellectual traditions of Europe and America constitute ours as a “guilt culture,” whereas
those of Japan made her into a “culture of shame.” This hypothesis, of course, was famously stated as the conclusion of her mid-war research by Ruth Benedict in her 1946 classic, The Chrysanthemum and the Sword: Patterns of Japanese Culture. Although others have been proffered, it would be difficult to find any Western attempt to explain Japan’s particularity that has had a staying power equal to that of Benedict’s pithy theory about Japan vis-à-vis the West. And, of course, implicit in it is the view that a shame culture is, at least within the evolution of societies, the one that is distinctly inferior or immature. Thus, for Japan, real growth—psychic, political, even religious—would, it is implied, necessitate developing the concepts, behaviors, and laws characteristic of the peoples of Europe and America.

I hold that the difference between Japan and “the West” is overdrawn by such a stark contrast. Awareness of guilt, not shame, can be detected in Japanese society in contexts one would not, at least if operating on Benedict’s theory, expect to find it.9 More basically, however, I think it is time to abandon our dependence upon some kind of grand theory to explain a fundamental difference between Japan and “the West” specifically and, perhaps, “the West” and some putative “East” more generally. And, I suggest, a benefit to be derived from dropping this way of creating divisions would be a resultant greater ease in admitting continuities in kind even if not in scale between what caused and facilitated the most heinous episodes in Japan’s medical research history and those within our own.

Besides, a close examination of materials shows that Japan’s reckoning with this part of its past has gone farther than we usually assume. I was forced to acknowledge this myself and along the following trajectory. When beginning my research in this area during the early 1990s, I read something that Rihito Kimura, at that time a bioethicist at Johns Hopkins University, had in 1987 written in English concerning the Japanese resistance to organ transplantation:

Japan’s lay public still has negative feelings towards a too hasty advance of science and growth of technology. Perhaps this is due to Japan’s first national experience in the world—having monumentally suffered from the extraordinarily “successful achievement” of science and technology in Hiroshima and Nagasaki in 1945. (Kimura 1987: 269)

Reading this, I had the sense that here Kimura had told only half the story. Why, I wondered, was there reference to Hiroshima but none to Unit 731? Missing from his account was any Japanese awareness that Japan’s own horrendous wartime medical experiments had burned into the collective consciousness the core of a wariness vis-à-vis certain kinds of experiments and technologies. Was the Japanese people’s wariness vis-à-vis research-linked medicine due not only to Hiroshima but also to what was now known by them about Unit 731 and from reading Endō’s novel about the vivisection of American pilots? When I first met Kimura—who had served the World Health Organization, had lectured in Southeast Asia during the 1970s, and is highly respected in many

international contexts—at his present base at Waseda University, he provided me with the Japanese text of a lecture he had given in October of 2001. Invited by the YMCA of Tokyo to elucidate the bioethical dilemmas posed by the possibility of human cloning, Kimura opened his exposition by reference to the then-recent reports of intentional anthrax contamination in the United States, and continued:

When I heard these news reports I was startled by my awareness that there was a connection between them and Japan. Just in hearing the word “anthrax” I was reminded of its use by Japan’s military medical Unit 731. The truth of the matter is that during World War II this unit, commanded by Dr. [Shirō] Ishii, while purportedly engaged in the prevention of epidemics and the construction of water filtration systems, was in fact engaged in research on biological weapons on the mainland of China. This unit released anthrax in totally dehumanizing experiments with the result that many Chinese, Russian, and Korean prisoners of the Japanese were killed by it. This completely inhumane kind of military medicine used human beings as guinea pigs, referring to them in the process as nothing other than “logs.” These were atrocious experiments on humans, and anyone in Japan engaged in the study of bioethics needs very much to keep this part of our history clearly in mind. (Kimura 2001: 6–7)

Then, by reading one of his books, I had to realize that already in 1987 Kimura had publicly shown a readiness to connect the 731 atrocities with today’s problematic research agenda (Kimura 1987: 267). And I have found that the list of Japanese ethicists willing to take up the subject of Unit 731 and connect it to the controversies surrounding organ transplantation can easily be extended.

Ken’ichirō Yamaguchi (b. 1949) is a brain surgeon who had his medical training in Nagasaki, where he had also grown up. In 1992 he coedited the journal of a mother who over a period of time observed the condition of her brain-dead child—but, in their view, as a still living person rather than just as a repository of potentially reusable organs. He had also written about the 731 Unit and wartime medical atrocities and in 1995 wrote a book connecting these two concerns. A translation of its title might be Trifling with Life—Contemporary Medicine. It is a book whose covers include photos of the ominous smokestacks of the Manchurian buildings where the horrendous experiments were carried on and where discarded bodies were burned. In it he recounted not only what had happened in China but also research programs in Japan that had, in the view of both Yamaguchi and others, clearly transgressed the line of ethical medicine. Central to his discussion is a deeply contested episode of organs removed from a brain-dead person in an Osaka research hospital in 1989, an instance in which questions long lingered about the real condition described as “brain death” and whether or not the patient may still have been saved through surgery. Yamaguchi saw strict parallels:

The readiness to go to any extreme to achieve one’s end—the development of medicine—in the Osaka case arises from exactly the same kind of thinking that was present among those involved in Unit 731 experiments. And another simple substitution also applies: what the military medics referred to as “logs”—human beings—are now being defined as the “brain dead.” And just as the Unit 731 personnel treated those “logs” as material things, so too patients who are declared “brain dead” become nothing more than materials to be used for research. (Yamaguchi 1995: 70)

In the course of a splendid overview and analysis of Japanese writings on brain death and transplants, Yoshihiko Komatsu notes that some works, in detailing a certain gruesome aspect in the transfer of bodily organs, have seen comparisons between it and the experiments of the Nazis and the Japanese Unit 731. He judges, however, that the comparisons have been to date too random—so that the connection gets weakened and lost in people’s thinking. Nevertheless, practices such as that of organ transplants based on the idea of brain death as well as the kinds of medical experiments carried out by Unit 731 and the Nazis were all, in one way or another, practices that came into being in the context of orthodox medical science and medical therapies. And would it not then be incumbent upon us to look hard at these historical realities? Much to be desired would be historical inquiry into these connections. (Komatsu 1996: 48–9)

One person who very clearly draws out a nexus between Unit 731 and the problem of “cadaveric” transplants is Tomoko Abe, a pediatrician who trained at the University of Tokyo and did epidemiological research abroad at the Mayo Clinic. Very active in the effort to raise public awareness about problems associated with brain death, she states the following in a very recent book of essays by professionals—the title of which, if translated, could be I Am Not an Organ Donor. It is to be noted that she writes this after the recent legalization of brain death and the performing of some cadaveric transplants in Japan in 1999. Openly shocked by the degree to which humans can inure themselves to thinking of the body parts of others as materials to be reused, she writes:

Things we know about the past caused us to feel not only appalled but also the necessity for deep examination of it—especially the manner in which German Nazis plundered the bodies of murdered Jews in order to turn the body parts into objects for everyday use (lamp-shades out of human head skin, for instance). Then too there was the way in which our own Japanese military in China referred to the bodies of Chinese as “logs” and then went about performing vivisections on Chinese prisoners of war, using them for the purpose of experimentation. In the past our awareness of these terrible things seems to have kept us from thinking it acceptable to take organs for transplantation or the reuse on a grand scale of body parts in any systematic way. (Abe 2000: 43)

Abe suggests, interestingly, that an awareness of the history of medical war crimes had been, in fact, quite widespread in Japan—even prior to the greater readiness today to discuss it openly.

Metaphors Marching On

At the outset of her second book on illness, Susan Sontag wrote: “Of course, one cannot think without metaphors. But that does not mean there aren’t some metaphors we might well abstain from or try to retire” (Sontag 1990: 93). Her point is well taken. And to retire the “medicine as warfare” motif might serve us all well—as emotionally difficult as that may be when we feel ourselves or persons we love to have been “invaded” by something foreign to the human body.

Japan, which after August 1945 had no reason to look back on what had happened as a “good” war, also saw no sense during the postwar period in describing the medical research projects as wars or campaigns. Such language appears to have crept into public use only more recently and may be one reason why Dr. Makoto Kondō, one of Japan’s pioneers in cancer research, gives one of his books a title which, if rendered into English, would be “Please, Patient, Don’t Fight Against Cancer.” It is no doubt significant that this same book, written by a physician, includes an extensive section in which the author deplores the ethos of Unit 731 and its more muted reappearances in the present (Kondō 1996: 141–63).

But, as noted, the rationale and rhetoric of “saving lives” have tremendous power. Coming out of a “good” war, they can extend far beyond the period of actual fighting and be the bearers of large and extensive agendas for medical research. Edward S. Golub states the matter precisely:

After the Second World War the promise of specific therapies became a dramatic reality with antibiotics and immunizations—exemplified in the mind of the public by penicillin and the Salk vaccine. In the prevailing wartime mentality, science began to grow and an all-out “attack” on disease was made. [. . . .] The military metaphor helped to ensure that money would flow to medical science, and with the launching of Sputnik, the general increase in spending on science as part of our “national security” meant that research in medical science was a ship that rose on the rising tide. (Golub 1994: 215)

That is, “following World War II, the equation of medical, hospital-based science with an army was reflected in ‘the war on cancer,’ ‘the fight against polio,’ and the March of Dimes. In clinical practice, the developing concept of heroic intervention made the physician, the surgeon in particular, a kind of warrior, the ruggedly individualistic risk taker [. . . .]” (Guillemin 1998: 65). The transplantation of organs from putative cadavers has, perhaps like no other surgical procedure, exemplified what, at least on the surface, seems downright heroic in modern medicine as a whole but in its American mode most especially.

Sublimated warfare is much of what it is all about.10 The situation is as stated in a paragraph, one justly cited with frequency, written by Renée C. Fox and Judith P. Swazey at the conclusion of their second major study of organ replacement in America. They had found deeply disturbing aspects in the entire ethos of the world of persons involved in the excision and reinstalment of bodily organs. They wrote: This ethos includes a classically American frontier outlook: heroic, pioneering, adventurous, optimistic, and determined. It also involves, however, a bellicose, “death is the enemy” perspective; a rescue-oriented and often zealous determination to maintain life at any cost; and a relentless, hubris-ridden refusal to accept limits. It is disturbing to witness, over and over, the travail and distress to which this outlook can subject patients. (Fox and Swazey 1992: 199)11

It should come as no surprise that, when translated into Japanese, a book inclusive of statements such as the above was hailed in the press as expressing concerns already articulated in Japan (LaFleur 2003). To not a few Japanese it seemed that some Americans too were, at last, beginning to grasp the problem.


Chapter 11
IMMORTALITY AND DESIRE

“Desires Are Innumerable”

That the anxiety about being turned into a morally compromised person by wanting another’s organs is much more readily expressed in Japan has, I suggest, something to do with the continuing role of Buddhism in the sensibility of many Japanese. Sachiya Hiro, someone writing in the popular media on the relevance of Buddhism to contemporary society, comes closest to turning the screws on this, stating the following in one of Japan’s principal dailies: “There is a meanness of spirit in the person who conceives of others, people still alive, as persons whose deaths will make it possible for one to prolong their own life” (Hiro 1992).

Deserving of notice is the fact that it has been Japan’s Buddhists who, in addition to representatives of some of that nation’s “new” religions, have been the earliest and the most vocal in questioning the ethics of the cadaveric transplant. They see the introduction of such transplants into society as part of a larger but wrong-headed agenda. The reasons for Japan’s Buddhists’ resistance are, no doubt, multiple. But there is one thing that stands out among these. It is that there is something morally risky in the hype that presents new technologies as “modern miracles,” then throws emotional gasoline on the old embers of a longing for bodily immortality, and finally goes on to justify the conditions in which humans are allowed to lust after the organs of their fellows in order to prolong their own individual lives.

Technologies in themselves may be merely physical phenomena. Yet they never occur except within social and psychological contexts—contexts and conditions they are bound to alter. These critics claim that “medicine” has become something different, perplexingly or even disturbingly so, from what it had traditionally been. Our new technologies encourage—sometimes even seem to constrain—us to adopt a new view of the human being. Since developments let us envision ourselves

*Chapter 9, draft one (2002). Echoing one of the chapter’s main themes—that death must be considered part of life—the original manuscript began with an epigraph quoting Renate Rubinstein (1989: 23): “The supreme example of a progressive disease is, of course, life itself, because you die of it.”

as assemblages of virtually infinitely replaceable and interchangeable parts— “spare parts” in the telling phrase coined by Fox and Swazey—we should not be surprised to find ourselves, then, as individuals desiring those “parts” of our fellow humans (or of our fellow simians and transgenic pigs) that we might take into our own bodies to give us the longer lives we so deeply want. Hans Jonas astutely observed that “the mind has made the human being into the most gluttonous of all creatures” (Jonas 1996: 53). He was anticipated by, among others, the poet Issa, someone whose adherence to Buddhist teachings about impermanence was put to a grueling test by the sequential deaths of persons he loved. Nevertheless Issa, in a phrase that articulated a central Buddhist concern, wrote: “We need no longer ape the busy spider by stretching the web of our desire across the earth [. . . .]” (Issa 1819/1972: 139). “Desires,” states a phrase known and sometimes ritually chanted by many of Japan’s Buddhists, “are innumerable.” This observation about human nature is, when chanted, immediately followed by a statement of intent: “We vow to extinguish them.” This fits what already in ancient Buddhism had been a sustained and searching analysis of our multiple desires coupled with a presentation of the value of reducing their number and, as much possible, the negative impact of human covetousness and greed upon social relationships. This has in some cases led to a critique of what in capitalist economies is an intentional exacerbation of desires—just so that the efforts to satisfy those same desires will increase consumption and keep economies “healthy.” And being critical of this is, in turn, often part of the ecological concerns of Asia’s Buddhists.1 With newly emergent medical technologies as part of his purview in a book on ethics for the twenty-first century, Hisatake Katō sees the tensions between technocrats and ethicists as, at least to some degree, a contemporary version of the ancient difference that some have drawn between magic and religion. That is, the purveyors of magic claimed to be able to know how to satisfy mundane desires by an esoteric knowledge of the cause-effect nexus—something now able to be done by technology’s more precise grasp of causality. Katō holds that religion, when distancing itself from magic in this mode, lays emphasis upon an interiorized control of desires—in contrast to their material satisfaction (Katō 1993: 185). In making this particular contrast between magic and religion, Katō relies upon a distinction widely accepted in Japan. Through use of it the particularity of Buddhism, or, at least, Buddhism in its earliest phases, is described. That Buddhists have much trouble putting this into practice, especially when today’s leading economic system describes the route to happiness as the exact opposite of this norm, is clear to them as well as to others outside Japan. Some of Japan’s thinkers see ideational and institutional connectives running among the following: the manner in which cadaveric transplants showed us how we might satisfy a personal desire for longer life through a “consumption” of another’s organs; the whole range of emergent technologies that dangle before us a future in which those with the requisite funds can virtually buy long life for themselves; the prospect through genetic engineering of giving birth to “designer” babies; and a disturbing similarity in the vaunting of consumer choice in the market and that

in the technologically advanced hospital. Happy enough to have become a fully capitalist society during that period in the twentieth century when the only—and obviously repugnant—alternative on the horizon was that of becoming a Marxist or Maoist state, Japan’s Buddhists are now beginning to wonder whether there might not, in fact, be valid reasons for them to engage in their own critique of a socioeconomic system that rhapsodizes about “choice” and turns the human body into disparate parts that can be marketed like everything else (Morishita 1999: 216ff). The prospect of physical immortality (or even something approximating it) is, of course, as common a desire as is likely to be found in human beings. And the trajectory of medical “miracles” that we have seen since the first cadaveric transplants has given us reason to think that we might, if the pace continues, soon be able to live in the world which Descartes, himself a futurist, envisioned in 1637. In that year he wrote: It is true that medicine which is now in vogue contains little of which the utility is remarkable; [. . . .] all that men know is almost nothing in comparison with what remains to be known—[namely how] to be free of an infinity of maladies both of body and mind, and even also possibly of the infirmities of age, if we had sufficient knowledge of their causes, and of all the remedies with which nature has provided us. (Descartes 1637/1997: 111)

Descartes went on to declare his own intention of dedicating his life to that path—although noting that he would not likely be able to see it realized before he himself would die. In his view it is “an infinity of maladies” from which we may eventually be freed and he predicted the possibility of satisfying the desire for immortality. Such a project will be realized, he opined, ultimately through medicine and medical research. And to Descartes the prospect of humans coming into possession of such immortality was an unqualified good. And it is that assessment, one of optimism unalloyed and unbothered, which has been the pervasive one in medicine of the modern era. It is medicine propelled by eschatological dreaming.

Longing Living through Chemistry

That level of optimism is much less often witnessed in Japan. The difference gets reflected very concretely in data showing that, in contrast to Americans, Japanese wish, if possible, to avoid being treated, especially surgically, in university hospitals. They regard the local one, not so likely to be engaged in risky research, as far more safe (Hosaka 1997). Americans, in sharp contrast, often feel that the university hospital, where research is the norm, will give them what is “the latest” and, therefore, also the best.

Some see this inability to be sanguine about the miracles of medical research as merely the result of unfortunate medical events and the notoriety these received
in Japan. On this view it has been largely a sequence of widely publicized “medical mistakes” and sloppy procedures in Japan—everything from the scandalously loose protocols of Dr. Jurō Wada’s first transplants in 1968 to the profit-driven importation of contaminated blood plasma in the mid-1990s—that have made both the general public and the community of bioethicists in Japan less than optimistic about emergent technological wonders in medicine. Persons prone to analyze the core problem as lying at this level envision a relatively easy repair, one consisting of tighter procedures, more professional oversight, and a lot more “informed consent.” When lax procedures alone are seen as the root problem, the solution is often articulated in terms of Japan’s need to be more American, not less! The American model of bioethics also then is pushed forward—both from persons within Japan and surely from our side of the sea as well—as what, at least when fully adopted, will give the Japanese general public as much optimism about medical technology as that of their American counterparts. This is inadequate. The core problem lies at a “deeper” level, one not likely to be even faintly recognizable to the American medic or bioethicist holding that Japan needs to become more deeply informed by the procedures and expectations cherished in the West. And by use of the phrase “deeper” I mean to suggest the real differences lie on that level of culture where they have been informed by religious and philosophical traditions different from those of Europe and America to at least a meaningful degree. While the tightening of protocols is surely a factor and needed, the root issue, I here suggest, is far deeper. Therefore, it makes sense to ask why so many Japanese, including so many bioethicists—that is, persons with sophisticated knowledge of comparative philosophies and religions—clearly balk at medical agendas packed with a simplistic optimism.2 And they are listened to with respect because, unlike the American general public, that of Japan is not conditioned to think that death is an ancient enemy to be conquered or that its alternative, bodily immortality, would constitute an unambiguous and unqualified good. To see different peoples conditioned differently is to take, among other factors, their separate histories seriously. When Descartes envisioned a future in which human beings might be made free “of an infinity of maladies both of body and mind, and even also possibly of the infirmities of age,” he was tapping into the dominant Western belief, articulated most memorably in the book of Genesis, that there had occurred at some point a tragic loss of human immortality. Turning myth into a statement easily read as describing an actual event in the earliest history of humankind, the author of the Epistle to the Romans had declared that “the wages of sin is death” (Rom. 6:23). It is no wonder then that Descartes, a devout Christian, envisioned a return to physical and corporeal immortality as a return to what human had once had by nature. And Milton rendered the cosmic drama memorably as a lost paradise that could be regained. Better even than life is life everlasting. But what, of course, reveals him to be a distinctively modern Christian—perhaps the model of such—is Descartes’s readiness to see a recapture of a lost immortality via the technologies of science and medicine. And, as persons outside the ambit of

the West sometimes seem to see more clearly than those within it, this reveals the role that a myth central to the experience of Europe and America has had in our expectations about medicine as having the capacity to give us back an immortality which is “naturally” ours. The myth resonates in the lives of modern Christians and Jews, even though as individuals they may disavow any attachment to the Genesis account of the loss of immortality and paradise. The perdurance of the ancient myth is that it first shaped the dominant ontology of the West and from there came to live on powerfully—but nearly invisibly—in the rhetoric we customarily use to describe the boon to humans coming from our newest medical technologies. Our engineered “miracles” give us back what we once had. Therefore, when Lee M. Silver, a Princeton molecular biologist, titles a book unabashedly promoting the benefits of genetic engineering and cloning “Remaking Eden,” that title, probably assumed to be nothing more than a metaphor by the author, recapitulates and gives new credence to the Western dream of regaining paradise (Silver 1997). And the “paradise” envisioned by our technologies as now perhaps within reach in the near future is a physical immortality via our most sophisticated technologies. A remake of Eden is what it is all about. One reason this all has very little play in Japan is that neither this myth nor the ontology it articulates is traditional there. This is not to say that dreams of physical immortality did not arise—and arise as spontaneously as they seem to have arisen among most humans. But the dominant viewpoint of the Buddhists has been that we have no reason to believe that human beings were immortal in the past or might possibly become so in the future. Myths of a lost paradise are not part of their repertoire and have been usually viewed with either skepticism or gentle toleration. History matters here. In fact, part of the reason why Buddhism gained the hold it came to have within the lives of many Japanese for more than a thousand years is that its position on the necessity and naturalness of death won out in what was, in fact, an intellectually significant struggle against an alternative view, one that posited the possibility of physical immortality. Although it seems likely that a cross-cultural human dream of avoiding death provided a climate for interest in this topic, this as an issue of debate entered forcefully into Japan’s religious and intellectual life at one point. This was because technologies originating in China included claims about elixirs and processes that could provide individuals with either immortality or, at least, life spans significantly longer than those deemed natural or usual. Historians have shown a fairly long period in Chinese history and a shorter one in Japan when there were expectations that certain natural substances, if isolated and properly processed, could, in fact, prolong life. That much of this is often categorized as “alchemical” and that its provision was often ritualized should not cause us to overlook the fact that the immortality elixirs at the heart of the Daoist technologies of the time were the equivalents of what we would now refer to as “chemicals.” That there was something always experimental about these activities seems clear and it is not too difficult to recognize parallels between the modern laboratory and

the rooms within which Chinese experts worked on a variety of natural substances to try to locate wherein might lay the access to immortality. “The aim of [Daoist] alchemy is not so much to make gold, but rather to refine raw ingredients found in nature and thus to discover the ultimate and fundamental element, the seminal essence of the world, which, once absorbed within our bodies, might confer immortality” (Schipper 1993: 175). Rumors of success were rampant, but confirmation of it was nonexistent. There were claims of “verification,” but this is only because the standards were interpreted imaginatively. The physical “immortality” thought to have been achieved by chemistry had “proofs” that could be witnessed, some claimed, in bodily transformations. But, at least in the case of fourth-century texts analyzed by Sivin, what was observed was very likely uncommon and special conditions in bodies which we would designate as corpses rather than immortals. Such bodies had become light in weight, temporarily undecaying, and free from the odors that usually arise from the body that has died. The Daoists claimed success, describing the transformed person as now immortal and proven to be so by the above indices. Sivin points out, however, that we should not be surprised that the rigid bodies in question differed from what is usually observed in corpses. What had been ingested prior to dying/becoming immortal had made for that difference. The elixirs were “largely based on arsenic and mercury compounds, which have excellent embalming properties” (Sivin 1968: 41). In such a context it seems clear that one person’s “immortality” was another person’s “death.” It is also likely that the mercuric sulfide ingested as “cinnabar,” although ultimately poisonous, had a neurological effect, so that symptoms we would identify as the effects of mercury poisoning—distorted vision, irregularity in hearing, frenetic bodily tremors, and so on prior to death—were given a positive spin and interpreted as indices to the emergent immortal’s union with a deity. Thus both on the way to becoming an “immortal” and in its observable condition after having gotten there, such a body differed from the norm. But, for reasons that are now not too difficult to understand, this trajectory of applied experimentation and imaginative interpretation eventually ran its course. It was abandoned when it was clearly seen that “immortality,” if of this sort, was gotten at far too great a price. Japanese historians have uncovered evidence showing a period during which the Daoist technologies had a following in the archipelago. Sekiyo Shimode, a leading authority on this, holds that in Japan there was far more experimentation with longevity substances than is commonly assumed. And, as in China, royal patronage was sought and Japan’s royal figures became actively involved. The sovereigns Junna (r. 823–833) and Ninmyō (r. 833–850) appear to have ingested cinnabar, and Uda (r. 887–897) ingested another immortality-promising substance of uncertain definition. “One must conclude,” writes Shimode, “that Daoist-based medicinal technologies found their way into the inner compartments of the [Japanese] imperial court” (Shimode 1972: 231). Nevertheless, the Japanese participation in this project came relatively late and it is likely that they in some way benefited from the fact that by the end of the ninth century the Chinese experimentation with ingested immortality substances

had largely run its course. The epoch of danger-laden experiments was coming to a close. But, importantly, the continental system that was making a deep impact upon Japan precisely at that time was Buddhism. And most Buddhists, laying emphasis upon the classical teaching of unavoidable impermanence, rejected the immortality quest on philosophical grounds.3 This, rejected by Japan’s Buddhists on the level of ideas, was matched by institutional hostility. That is, the rapidly expanding Buddhist monastic institution in Japan during these centuries actively blocked the development there of any Daoist priesthood or “church,” institutions common in China at the time. That there was an intense debate about the possibility of physical immortality is evident in Sangō shiiki, a treatise in which the monk Kūkai (774–835), a nonpareil founder figure in Japan, wrote up a paper debate between a Confucian, a Daoist, and a Buddhist. What he wrote seems likely to have been intended for royal readers and used forensic and rhetorical skills to guarantee that the Buddhist, representing Kūkai’s own views, wins the debate hands down. The Daoist is given a speech that touts what he can provide in terms of miracles and extended bodily life. But the Buddhist will have none of it, arguing that the rule of impermanence knows no exceptions: And Daoist “immortals” find that their self-indulgent lives pass as quickly as a clap of thunder. [. . . .] Our transitory lives are carried by the winds of time into the far-off skies just like leaves in autumn are scattered. [. . . .] And emperors hailed as staying alive for ten thousand years are also soon enough turned into the smoke rising skyward from a cremation. [. . . .] One can drink a thousand cups of potent elixirs or burn a hundred sticks of incense to bring the spirit back into a dead body, but the ebbing away of life cannot be stopped for even a moment. We all sink down into the springs that run under the earth. (Kūkai 1975: 60–4, trans. mine)

Buddhists such as Kūkai won out in the competition for access to power and prestige. But they also gained the position from which they could define Japanese intellectual life for a millennium and beyond. The Daoist immortality project had been described and discredited as a fraud. Buddhist teachings about a necessary and fully natural impermanence became what was to be seen as normative.4

Death Struggle

To retrace something of that history here is not gratuitous. The intent, rather, is to demonstrate that, at least in Japan, the kinds of questions raised today by technologies promising a vast extension of the human life span and something close to freedom from corporeal death are not entirely new. There is a hint of the "been there/done that" sense in some Japanese writing on these topics. Descartes was not the first to value immortality as worthy of pursuit and to assume that the right kind and amount of technical knowledge might permit it to be realized.

It may be objected, however, that the immortality quest of the Daoists and the kind of greatly increased life span that will become available to humans in the twenty-first century are distinctly different. The former, although trying to search out useful materials, was ultimately bad science, one that for a while tried to conceal its total failure through a fanciful reinterpretation of the resultant data. By contrast the scientific and medical project unfolding in our own time is, we are told, bound to succeed simply because what appear to be "failures" along the way are overcome by science itself, not by the ruse of a very stretched interpretation. And the case that would seem to illustrate this most dramatically is the discovery during the 1970s that cyclosporine can serve as an immunosuppressive drug. Prior to that discovery and application, the natural rejection of transplanted organs had been so rapid and common—resulting in death—that the whole project of transplantation had come under criticism as being a very questionable "success." It is, therefore, little wonder that cyclosporine, even though itself not free of serious side effects, was from the outset hailed in "upbeat, almost millennial 'wonder drug' kind of language" (Fox and Swazey 1992: 4). Both groups of pharmacological technicians compared here made use of positive language to conceal problems and to insist upon experimental success.5 Nevertheless, the Daoists had far more serious problems, had no new chemical element that could be brought in to deflect criticism, and ultimately were totally dependent upon a re-description that interpreted the physiological signs of death as really those of immortal life. This difference is not negligible. The Daoist project was in the end a physical failure, whereas it appears likely that, given their resourcefulness, our medical technologies may prove to be a qualified physical success. That is, we cannot discount the distinct possibility that the life spans of individuals will be significantly, even rather amazingly, extended. It is reasonable to expect that genetic manipulation which eliminates latent illness propensities, the cloning of organs for reuse, xenotransplantation, and a variety of technologies not yet developed will either singly or together give us some level of physical immortality, one the Daoists could lust after but never enjoy. Given this, it would be unwise for the Japanese critics of new medical technologies to depend upon assumptions that these will fail physically and technically. Kūkai could mock the Daoist project because of its lack of empirical success. He could also afford to assume that no "breakthrough" in their experiments would change things. Such is not the case today. Predictions of failure themselves run at a high risk of failing. This feature of the present situation has forced into existence a recognition that the real problem with the current immortality project is not that it is futile but that, even if technically successful, it may still be problematic on a religious and philosophical level. However, finding exactly how to indicate this level of the problem has been far from easy. And the likely Buddhist component in this generalized sense that something is basically wrong in the contemporary immortality quest comes to expression most frequently in the way in which many Japanese philosophers, ethicists, and bioethicists put distance between their own thinking about death and what they take to be the one most common in the West.

What they frequently find objectionable either on the surface or as a hidden presupposition in the bulk of Western writing about medical miracles is the view of human death as some kind of totally negative entity—that is, something against which we have every right to engage in total battle. An animus against death is omnipresent in the West’s campaigns to conquer all illnesses and liberate humankind from what always before had been the natural necessity, one simply assumed, of aging and dying. To recognize that we all as humans have a deep desire to cling to life and avoid death is not the same as to embrace a religious and philosophical viewpoint that invariably configures death as an entity somehow separate from ourselves, programmed so as to deprive us of Life, and something against which we have every right to do ceaseless battle. Since this is the pervasive view in our society, it finds its way easily into the images and attitudes of advanced medicine. Among the problematic attitudes of American transplant surgeons detected by Fox and Swazey in the course of their extensive research was “a bellicose, ‘death is the enemy’ perspective” and “a rescue-oriented and often zealous determination to maintain life at any cost” (Fox and Swazey 1992: 199). It is the presence of this view of death, at times explicit but more often implicit, in the vast bioethical literature produced by Americans that, perhaps more than anything else, seems to unsettle Japanese thinkers reading in it. And it is, I judge, this conception of death, unnatural and deeply problematic in their view, against which they often struggle in their own writing. It is what they sense to be “wrong” on a very deep level in the dominant approach taken in America. An attempt to differentiate a Buddhist perspective from that of the West is behind what was written specifically about medicine by Aiyoshi Kawahata. Quotations from a range of medieval and modern Japanese Buddhists form the basis for the statement that wraps up his discussion: Death is a generalized natural phenomenon and at the same time the necessary conclusion of organic functioning. Dying and being born are mutually dependent in the same way that light and darkness are. They are both aspects of the laws by which the cosmos operates. Buddhist terminology phrases this as the perpetual movement of all things within the Dharma. Within the vast temporal and spatial expanse of all things, dying and being born form a part. And they are an inseparable pair. Neither death nor life should be regarded as special, as a phenomenon that stands out from all the rest as irregular or unique. (Kawahata 1988: 445)

Here a writer depicts death as completely natural, a constitutive part of the way things simply are and always have been. And, as has been typical in the writings of modern Japanese, the view presented here as authentic Buddhism is fully consistent with the best of what modern science has shown about the nature of the cosmos and the natural role of both life and death within it.6 So in many ways the struggle of these writers is not against modern science per se. It is, rather, poised against the degree to which contemporary medicine in the West, modern and

scientific in its experimental methods, remains deeply beholden to a premodern theology in the rhetoric it uses to gain public support for its agendas. Inasmuch as they are informed by a Buddhism which has no trouble seeing death as natural, Japanese writers on medical matters balk at the degree to which writers in the West portray death in ways that implicitly suggest that nothing has changed, at least at the rhetorical level, since the time when medieval Christians viewed it as something inserted into our world as a satanic maneuver so as to bring misery to humankind. In other words, that to which these Buddhists take exception is not so much the science in our recent medical technologies as it is the theology hidden but banefully at work within them. Masao Abe, the Japanese philosopher who in the late twentieth century captured the attention of major thinkers in the West, has written: “In Buddhism life is not considered as having priority over death. Life and death are antagonistic processes, negating one another, yet inseparably connected with one another” (Abe 1985: 131).

Lust for Life

The forte of Masao Abe lay in his demonstration that, although Western philosophers were claiming to have jettisoned once and for all their roots in theology, their own facile passage over into the linguistic acrobatics of Anglo-American analytic philosophy left a monstrous and important problem unsolved. That is, they all too conveniently opted to step aside from all questions about whether or not the West's theo/philosophical tradition of privileging Being was accurate and ultimately defensible. This left Anglo-American bioethicists, trained often in the "analytic" mode, free to act and write as if there were nothing problematic in the general culture's pervasive rhetoric about carrying on a war to defeat death and all its minions in the bodies of humankind. Analytic philosophy opted to be not only a-political but also a-cultural.7 And by a disdain for the "old" and unsolvable questions of ontology, Anglo-American philosophers generally and the bioethicists they often trained were left free to see as unproblematic the basic theo/ontology, sharpened in fact by Cartesian dualism in the modern era, that caricatured death as the thief in the house of Being and prized each and every effort to banish it.8 Life, at least when juxtaposed to its assumed destroyer, became, in part because many philosophers chose to ignore the question, more sanctified than ever before in the general culture. And these philosophers, thinking they had passed far beyond theology's questions, had no tools with which to question the theological and ontological assumptions rampant in the medical technologies of their time. Abe saw things differently. He held that ignoring the problem of ontology merely shoved it into a place where it would now do its work less easily seen. It did not make it go away. So what is the core question? And why does it matter so much? In an essay, "Non-Being and Mu: The Metaphysical Nature of Negativity in the East and the West," Abe showed that both the philosophies and religions of the

West have tilted toward prioritizing Being over its opposite. Being is not just different from non-being; it is assumed to be infinitely better. Entities that truly are of value—God, the world, the self—must exist; for them to cease to exist would be loss, negativity, even evil. However, it is important, argues Abe, to see that "there is in reality no ontological ground on which being has priority over non-being; being need not be assumed to be superior to, or more ultimate than, non-being" (Abe 1985: 123). Abe sees Buddhist texts and ones from the Daoist tradition sharing the view that Being ought to have no priority. Although it may seem ironic in view of what was noted above about materials that posit existence and its retention as paramount values, texts usually identified as the core classics of Daoism are emphatic in their insistence that being and non-being, just like light and darkness or cold and hot, are natural complements. Neither has priority.9 Almost at its opening, the Dao de jing famously states: "Something and Nothing produce each other." Ames, by a text-rich demonstration of the philosophical importance of this and many other statements, points out how they differ from the tradition of the West. He has carried it forth into a discussion of death: "Like 'up and down' or 'left and right,' life and death are correlative categories which depend upon each other for explanation" (Ames 1998: 59). Our minds, however, may begin to balk at this point. This is because we—especially in the West—are conditioned to assume that, unless Life and Being are described as unqualifiedly positive and the highest of all our values, we will begin a downward descent into philosophical nihilism and psychological thanatophilia.10 Do we not need, we begin to surmise, the tilt toward Being and the prizing of everything vital just simply so that we do not "cheapen" life and begin to take the deaths of ourselves and our fellow humans as matters of little consequence? Is there not a danger in this of giving an automatic green light to euthanasia and to parents who would choose to end the lives of severely handicapped infants? Although he does not address these specific questions, Abe's position would be that getting ethics right is too important a matter to try to ground it on a fundamentally flawed and contrived set of judgments about positivity and negativity. He writes:

The Western understanding of human negativity as inferior to positivity is based on an attitude which, while apparently optimistic, is, in fact, idealistic. On the other hand, the Buddhist understanding of man's negativity as equal to positivity is supported by an attitude which, while appearing pessimistic, is, in fact, radically realistic with regard to human nature. (Abe 1985: 131)

This perspective, one which is in keeping with what was quoted above from Kawahata concerning the “positive” role of death within biological processes, suggests that an ethic based on an ontology rigged largely for the sake of the supposed psychological boost it may give is itself a flimsy and unreliable structure. We should not opt for philosophical and religious lop-sidedness because we want

it to provide us with an optimism we can then use to keep ourselves in moral check. To do so is equivalent to fabricating a lie simply because we need it to lean upon. But when "Being" and "Life" are given a priority in our minds that they do not have in our larger universe or in our biology, we are at risk of living on top of a structure we have made for our own convenience and because of our own failure to be realists. If we have, moreover, simply ratcheted "Life" up into this privileged position simply because we fear what we might do to ourselves without this self-deception, our supposed "optimism" is really a pessimism inventing a disguise for itself. This is where ontology and psychology would do well to cohere. And it is where we can profitably detect how the privileging of Being and Life in our own society gives ease of access to the "death-is-the-enemy" mentality and a rampant rhetoric about defeating whatever would deprive us of the life assumed to be our "natural" right. Immortality projects, even if not today operating through cinnabar-based chemistry, go forward through more complex technologies and ride easily on the provided rhetoric. Medical "miracles" give the impression of turning Descartes's dream into an almost tangible reality. Death, the hated, is in retreat and the lust for Life is everywhere.11 Dōgen, the thirteenth-century Zen master whose writings have become a valued resource for Japanese and others trying to think through these questions in new ways, recognized both the existence of such lust and its deleterious effect upon us as individuals and societies. In his understanding, there was no "nirvana" apart from the wholly natural processes through which we have life and have death. One is not "good" while the other is its opposite. And so in an essay the title of which could be rendered as "Life-and-Death," he counseled the cultivation of the following:

Just understand that life-and-death itself is nirvana, and you will neither hate one as being life-and-death, nor cherish the other as being nirvana. Only then can you be free of life-and-death.

CONCLUSION

Japanese Bioethics Then and Now—Reflections on William LaFleur's Biolust

Susumu Shimazono

In this chapter, I would like to reconsider William LaFleur's arguments concerning cadaveric organ transplantation in Japan in the context of a broader set of Japanese bioethical discussions and developments. It is my hope that by doing so we can better grasp the meaning that this book, and its consideration of the ideological foundations of bioethics, has in the present-day world. I begin by briefly explaining how I came to be acquainted with LaFleur and his work. I then provide an overview of the current status of organ transplantation in Japan. The rest of the chapter will analyze some of the characteristics of Japanese bioethical thought identified by LaFleur. The most significant of these is a proclivity for what might be called "prudentialist arguments," or arguments based on the principle of caution (shinchōron). I explore how this approach has played out in other Japanese bioethical debates—concerning not only end-of-life issues but also those surrounding life's beginnings. As we will see, arguments for caution are most salient in Japanese discussions of biotechnologies that might lead to the "sorting of life" (inochi no senbetsu)—practices verging on a "new eugenics," where, through genetic selection or manipulation, individuals will be able to design or choose their offspring, and in the process, weed out potentially weak or vulnerable members of the human population. From there, we consider Japanese bioethical thought in a global context—how do the positions staked out in Japan compare to those articulated in other countries? I then discuss the recent tendency in Japan on the part of the government and other powerful institutions to eschew the kind of vigorous public debate documented in this book, and how this has led to a gradual, if still cautious, acceptance of certain new technologies. In the final section of this chapter, I draw out and highlight three bioethical concerns identified in Biolust, which can shed light on other areas of debate in Japan. To conclude, I offer some final reflections on LaFleur's project and its role in moving us toward a more interdisciplinary, more international bioethics.

*Translated by Edward Drott

I am a scholar of religious studies who has mainly studied the history of religion and religious thought in Japan, but I have also been interested in the relationship between religion and medicine. Thus, working from a philosophical and humanistic standpoint, I came to be involved in discussions on bioethics held by the Japanese government from 1997 to 2004. In the process, I began to interact with William LaFleur—who was also, of course, a scholar of religion involved in bioethics. In the 2000s we had many opportunities to exchange opinions on various matters. In 2004, Professor LaFleur served as the main organizer of a meeting held from April 28 to May 1 at the University of Pennsylvania under the title “Going Too Far: Unethical Medical Research in Japan, Germany and the United States.” The result was published as Dark Medicine: Rationalizing Unethical Medical Research (LaFleur, Böhme and Shimazono 2007). Subsequently, German and Japanese versions have also been published.1 After that, we continued to interact.2 Through these exchanges, I learned that LaFleur was putting together a book comparing the responses in Japan and the United States to brain death and organ transplantation, with special attention to their attendant ethical problems. I was looking forward to its publication. However, in February 2010, I was confronted with the sudden news of Professor LaFleur’s death. I later learned that, fortunately, substantial portions of the book were almost complete. It took a long time to organize these materials, but I am very happy that Edward Drott’s efforts have led to this publication. Remarkably, when I was finally able to read the manuscript in the spring of 2021, I felt that the content was not out of date in the least. I was left with a renewed recognition that LaFleur’s work comprised a wide-ranging yet profound exploration of these problems, and thus remained highly relevant. Nonetheless, a considerable amount of time has passed since the 2000s, when LaFleur was writing. Therefore, in this chapter, in addition to reflecting on the period in which there was an active debate over brain death and organ transplantation in Japan—that is, the period from the 1970s to the 1990s—I will trace the major flow of discussion in Japan’s bioethics up to the present day—the early 2020s. In the process, I hope to show that the content of this book is still of great significance for understanding the characteristics of Japanese bioethics, and its place in a broader, global context.

Cadaveric Organ Transplantation in Japan: Its History and Current Status

Following the first heart transplant in 1967, and the advent in 1968 of the so-called "Harvard standard" defining "irreversible coma" as a new standard of death, the first law governing brain death was enacted in Finland in 1971. In 1979, the criteria for diagnosing brain death due to brainstem death were published in the UK, and in 1981, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research promulgated its "Guidelines for the Determination of Death," a unified set of rules concerning the judgment of death. As LaFleur notes, however, in the beginning, there was a series of cases in which the immune response of recipients impeded the proper function of the transplanted organs, limiting the effectiveness of the procedure. The number of transplants,

thus, remained low. It was only with the introduction of the immunosuppressant cyclosporine in the early 1980s that heart transplantation from brain-dead donors became widespread. By the mid-1990s, the number of procedures worldwide increased to 4,400 per year. Japan, of course, was comparatively late to the game. In 1997, Japan finally allowed the transplantation of organs from brain-dead donors with the enactment of the "Organ Transplant Law" (Zōki no ishoku ni kan suru hōritsu). From the start, however, few organ transplants were performed. When the numbers failed to increase in the decade that followed, the Organ Transplantation Law was amended in 2010 to allow organ transplants from brain-dead people, including children, with the consent of the family, but without the prior consent of the individual. Previously, the maximum number of cadaveric organ transplants had been thirteen per year. This increased to twenty-nine in 2010, forty-four in 2011, sixty-four in 2016, and ninety-seven in 2019 (Hayashi 2020). However, this remains a very small number compared to other countries. For example, in 2017, the number of liver transplants from brain-dead donors in the United States was 7,715; in Japan it was sixty-nine. So, even with the revision of the Organ Transplant Law, the number of procedures has remained extremely low. Why is this the case? A major factor is often said to be the circumstances surrounding the first heart transplant performed in Japan by Professor Jurō Wada at Sapporo Medical University in August 1968. Not only did the recipient die in just over two months, but there were also many questions about whether or not the donor was definitely in a state where death was unavoidable. But, as LaFleur shows, there were various other factors as well. Prior to the legalization of cadaveric organ transplantation, there had been a lively debate about how to determine brain death and the criteria for organ removal. In the 1980s through the 1990s, these questions were deliberated in various venues, but never sufficiently resolved. As a result, in 1989, the Ad Hoc Committee on Brain Death and Organ Transplantation (known as the Nōshi rinchō) was established, and met between 1990 and 1992. In 1992, they issued their report. While the report recognized brain death as death, it included as one of its four chapters a minority opinion, "A Position Opposing the Equation of 'Brain Death' and 'Death.'" Nonetheless, based on this report, the Organ Transplant Law was passed in 1997.3 As noted above, the Organ Transplant Law was amended in 2010, relaxing the criteria for consent. A major factor leading to the law's amendment was that between 2006 and 2007 it became clear that people had been traveling abroad for organ transplants. It was found that between 1984 and 2005, 522 people traveled to the United States, Australia, China, the Philippines, and other countries to receive heart, liver, and kidney transplants. There was also a suspicion that some might have become involved in the purchase of organs in Asia (Kagawa 2021). In spite of efforts on the part of the government to make it easier to obtain an organ transplant in Japan, however, the number of organ donors has remained low. Thus, even with the passage of the 2010 law, those unable to obtain organs in Japan might still be traveling abroad. In fact, in spite of an increase in organ transplants from brain-dead donors, the number of transplants from donors meeting the old criterion for death (cardiac arrest) has dropped. This means that, on the whole,

the total number of transplants has not actually increased significantly. On the other hand, living-donor kidney and liver transplants have been performed since as early as 1989, with the number of living-donor liver transplants exceeding 400 per year. All this is to say that as of 2021, transplants utilizing organs from brain-dead donors have, to some degree, become possible in Japan, but the number of procedures remains small. On the other hand, living-donor liver transplants and kidney transplants are frequently performed. Given this background, it seems safe to say that a positive acceptance of organ donations based on the equation of brain death with death has not spread among the Japanese people. Therefore, although cadaveric organ transplantation is being actively and vocally promoted by medical professionals and the government, as well as patients who hope to receive donated organs, there is little momentum toward acceptance. Instead, the skeptical view of cadaveric organ transplantation is still predominant. It is worth noting that Japan has taken a cautious approach to other end-of-life issues, as well, though the situation seems to be slowly changing. Other bioethical issues closely related to those raised by cadaveric organ transplantation include the suspension of end-of-life treatment and the creation of living wills. These topics lead, in turn, to questions of death with dignity and euthanasia. Although these issues have continued to be debated, in the 1990s and 2000s a general consensus on procedures for discontinuing excessive life-prolonging treatment—while at the same time not allowing euthanasia—was compiled by the Ministry of Health, Labor and Welfare in its "Guidelines for decision-making processes for terminal care" (2007). This is not to say that there have been no voices calling for the legalization of euthanasia and insisting on the right to the "self-determination of death." But as of 2021, there are no prospects of such views becoming adopted at an institutional level.

Arguments for Caution (Shinchōron) and Japanese Responses to Other Biotechnologies

In this study, LaFleur identifies a powerful strain of prudentialism (shinchōron) in Japanese bioethical thought—a tendency to err on the side of caution in matters concerning life and death. It is this inclination, LaFleur argues, that led many Japanese bioethicists to embrace the work of Hans Jonas, who also highlights the importance of restraint. As we shall see, these arguments have been especially prevalent in Japan regarding practices that have the potential to take a turn toward eugenics—practices in which humans seek to decide what type of person should live and which should not, based on our own preconceived image of what would constitute an ideal human existence. In other words, arguments for caution are most often invoked in cases that raise the specter of "unnatural selection," or the "sorting" of human life into categories of desirable and undesirable (inochi no senbetsu). LaFleur also argued that cadaveric organ transplantation would be just the beginning of a new wave of biotechnologies, and that it would open the way for

their proactive use. So, what has been Japan's approach to biotechnology more broadly? What bioethical discussions have these technologies engendered? First of all, on the whole, biotechnologies still in their early stage of development tend not to be promoted in Japan in cases in which controversy is anticipated. However, this does not mean that these technologies face strong, vocal, or socially conspicuous resistance. As an example of this, let us consider the history of Japanese responses to the use of prenatal testing technologies in the early detection and abortion of fetuses with disabilities. To many in the West, Japan's extreme caution regarding organ transplantation seems paradoxical when considered against the backdrop of its relatively liberal stance toward abortion—a topic LaFleur dealt with, of course, in his book Liquid Life (1992). However, in the discussion that follows, I hope to show that the principles of caution are also present in the Japanese abortion debate, but only in attitudes toward mass prenatal screening and practices that might be seen as trying to "improve" the human race.

Anxieties over the "New Eugenics" and the "Sorting of Life"

Abortion following the discovery, by amniocentesis, of chromosomal abnormalities associated with Down syndrome or spina bifida was legalized in the UK in 1967. The so-called "fetal clause," allowing abortion if there is a fetal disorder, then expanded to other countries, as well. In Japan, however, the Eugenic Protection Law of 1948 already included, as one of its provisions, the "prevention of the birth of 'defective' (furyō) offspring." For instance, if the mother, spouse, or a relative within four degrees of kinship had Hansen's disease or some hereditary disease, sterilization and abortion were permitted, with sterilization labeled "eugenic surgery." When the Eugenic Protection Examination Board judged that, in the public interest, eugenic surgery was necessary to prevent the perpetuation of hereditary diseases, it was even possible to force sterilization without consent. Furthermore, from the mid-1960s to the mid-1970s, a grassroots movement developed seeking to "avoid the birth of unfortunate children." In Europe and the United States, when arguing in favor of the legality of abortion, advocates usually invoked reproductive rights, in other words, a woman's right to choose whether or not to give birth. In Japan, however, the 1948 Eugenic Protection Law had already legalized the choice of abortion for economic reasons (the economic clause). Since abortion could be carried out on the basis of the economic clause, there was no further need to argue in favor of elective abortion. This meant that debates over abortion in Japan did not divide into clear-cut "pro-choice" and "anti-abortion" camps the way they did in the West. Absent any true threat to the legality of abortion in Japan, discussions of reproductive rights tended to fade into the background. Rather, it was the debate over eugenics that became pronounced. Beginning in the 1970s, people with disabilities began forcefully criticizing policies favoring abortion as a means of preventing birth defects. As a result, in the

mid-1970s, many local governments stopped actively conducting amniocentesis. This also had a major impact on the overall debate on abortion. Given the relative security in Japan of the right to abortion, discussion has instead focused on the concerns of the disabled. In response to these pressures, in 1996, the section of the law describing “eugenics” was deleted, and the “Eugenic Protection Law” was renamed the “Maternal Protection Law.” In keeping with its cautious approach to new biotechnologies and its distaste in recent decades for anything that might smack of eugenics, Japan was slow to implement the most up-to-date prenatal testing methods. In the United States and the UK, mass screening using maternal serum markers was introduced in the 1970s to promote prenatal diagnosis. On the other hand, many in Japan, including pediatricians, obstetricians, and gynecologists, held critical opinions of this. In 1997, the Ministry of Health and Welfare established a committee that declared that “it is not necessary for doctors to make a special effort to inform pregnant women of the results of these tests, nor should they recommend them.” It is important to note that quite a few Japanese feminists supported this decision, though one can imagine that such a policy would come under criticism in the United States or Europe as counter to a woman’s right to be informed about such matters.4 As a result, prenatal testing was performed at a low rate in Japan in the 2000s. The situation changed in the 2010s, however, with the introduction of the new Noninvasive Prenatal Testing Procedure (NIPT). NIPT was first marketed in the United States in 2011, and immediately spread to China and European countries. The Japanese Society of Obstetrics and Gynecology issued guidelines to allow this procedure in Japan, and it has been widely used from 2013. And so, although up to the end of the 2010s there had been a tendency to refrain from conducting prenatal testing in Japan, this is no longer the case. Up until the 2010s, Japan had also remained relatively cautious about preimplantation genetic diagnosis, which attempts to eliminate abnormal embryos by genetically testing fertilized eggs produced by in vitro fertilization. This practice was introduced in Europe and the United States in the 1990s with the aim of preventing miscarriages and the birth of children with abnormalities and disabilities. However, it was feared that this technology may lead in the future to “designer babies,” by allowing the comparison of genomes and the selection of children preferable to parents. Determining when such sorting should or should not be permitted would be a difficult matter. Starting in 1998, the Japanese Society of Obstetrics and Gynecology issued notices stating that this practice would only be tolerated when “serious hereditary diseases” were feared, and encouraged restraint on the part of obstetricians. “Serious hereditary diseases” were limited to cases where there was concern about the birth of a child with Duchenne muscular dystrophy. But these strictures have been weakening. Since there are no legal restrictions against the practice of preimplantation genetic diagnosis, there are an increasing number of cases where clinics recommend it at their own discretion, even using it for purposes such as sex selection. In 2017, the Japanese Society of Obstetrics

and Gynecology began to signal a more accepting stance toward these tests. In 2019, cases emerged in which requests for preimplantation genetic diagnosis were approved to prevent the birth of children with retinoblastoma, indicating that what constitutes a "serious" hereditary disease had been significantly downgraded.

Emerging Trends: Gradual Acceptance of Controversial Research, Medical Practices, and Technologies

Just as in the case of prenatal testing, we see initial caution giving way to slow acceptance in the case of research that intervenes in the early stages of human development—on fertilized eggs, ES cells, and cloned embryos. Such types of research were also viewed as unacceptable in Japan until the 2000s. According to the "Guidelines for Handling Specified Embryos," established by the government in 2001, the production of human-animal hybrid embryos (chimeras formed by inserting human stem cells into animal embryos) was to be limited to basic research aimed at producing organs that could be transplanted to humans. The production and culture of these chimeric embryos was allowed for a limited number of days. However, as a result of discussions started in 2012, the guidelines were revised in 2019 to permit the production of chimeric animals for the purpose not only of organ transplantation but also the elucidation of disease mechanisms and the development of new drugs. So, these regulations have also been greatly relaxed. There have been slow shifts on other fronts, as well. In Japan, artificial insemination between nonspouses through the donation of eggs had not been common. But in December 2020, the Japan Society of Obstetrics and Gynecology announced that it would begin considering the implementation of assisted reproductive technology involving eggs donated by third parties, which the abovementioned guidelines had not allowed. Although cryopreservation of eggs for nonmedical uses is not recommended by the Japanese Society of Obstetrics and Gynecology, it is not prohibited. In fact, the procedure can be seen to have gained tacit approval. Surrogate pregnancy is not yet widely tolerated in Japan, but other major biotechnologies related to fertility treatment have been rapidly adopted since the late 2010s (Kugu 2021). There are two other controversial practices for which we can actually foresee an accelerating trend toward acceptance, if still at a slower pace than in other parts of the world. The first is euthanasia, and the other is the expansion of the scope of application of genome editing technology. As of 2021, there is still a deep-rooted caution regarding these practices among scholars in the humanities and social sciences, as well as among medical professionals and journalists who are familiar with these issues. On the other hand, while strong skepticism remains toward biotechnology and research that could lead to eugenics and genetic sorting, the 2010s witnessed a slow evolution in public attitudes toward various forms of biotechnology and research. This is most evident, for instance, in the emergence of a cautious acceptance of research on induced pluripotent stem cells, or iPS cells.5

Japanese Bioethics in the Global Context

In the flow of bioethical discussion in Japan from the 1970s to the 2010s, the issue of cadaveric organ transplantation holds a unique position. It remains the most hotly debated subject of the last fifty years. This was especially true in the period spanning the 1970s to the mid-1990s, when the issue attracted the attention of a wide range of parties. In those years, the Japanese public engaged in in-depth scrutiny of end-of-life issues, and specifically the question of whether it was permissible to artificially accelerate death in cases where organs could be harvested. Japan's heated debate over organ transplantation contrasted with the active, but relatively more subdued, debates on other issues, specifically concerning the beginning of life, which have been the subject of great controversy in Europe and the United States. As noted above, in Japan, the allowance for abortion was stipulated by the 1948 Eugenic Protection Law (revised to the Maternal Protection Law in 1996), and although we cannot say there have been no efforts to overturn this law, no serious organized opposition has arisen. As a general tendency, Japan has occupied a unique position in the world due to its high tolerance for abortion and low tolerance for cadaveric organ transplantation. These trends became apparent between the 1970s and the 1990s. More generally speaking, at this stage, among economically advanced countries, Japan ranked among the most conservative in terms of the degree of acceptance of biotechnology. In other words, it was in a position close to countries of the European continent such as Germany, France, and Italy, and less aggressive than the United States and the UK. Compared, as well, to China, Singapore, and South Korea, Japan's cautious approach stood out. In recent decades, however, bioethicists around the world have had to grapple with a whole new array of technological advancements. Dolly the sheep, the world's first cloned animal, was born in 1996; human ES cells were created in 1998; Craig Venter independently completed a draft of the human genome sequence in 1999; the international Human Genome Project completed decoding the human genome in 2003; genome editing was developed in 2005. Looking at the ethical debates taking place against the backdrop of such rapid technological change, it appears that prudential arguments (shinchōron) are gradually losing ground both in Japan and in continental Europe. Clearly, with the remarkable development of biotechnology and the progress of new research worldwide, it has been difficult to keep them in check. Now, countries such as China have joined in the development of new biotechnologies, and it is difficult to imagine bioethical theories stressing caution—which have traditionally been based on religious perspectives—developing there. However, together with some countries on the European continent such as Germany, it can be said that a cautious bioethical stance is still deeply rooted in Japan. For instance, as we have observed, skeptical views are often expressed about euthanasia and the "sorting of life." In discussions of bioethics in the humanities and social sciences, there is still a great deal of concern about the excesses of biotechnology and attempts to clarify the theoretical basis for restraint. In this regard, the reaction

to the announcement in November 2018 that a Chinese scientist had produced genetically altered children is worth noting. On December 25, 2018, the Board of Directors of the Philosophical Association of Japan, the Board of Trustees of the Japanese Society for Ethics, and the Board of Directors of the Japanese Association for Religious Studies jointly issued a "Statement on the Birth of Genetically Altered Children." In the long history of these three academic societies (each of them having existed for more than fifty years, one for nearly a century), it is unprecedented for them to issue a joint statement. Since it provides insight into the current state of mainstream Japanese bioethics, I reproduce it here in full.

Statement on the Birth of Genetically Altered Children

At the end of November 2018, it was reported that Associate Professor Jiankui He of China's Southern University of Science and Technology oversaw the birth of twin girls using fertilized embryos whose genome had been edited to prevent infection with HIV (the so-called "AIDS virus"). A detailed report was made at the "International Summit on Human Genome Editing" (November 27–29, 2018, Hong Kong) held by scientists hailing from the United States, the United Kingdom, China and other countries. Genome editing is still a developing technology. If these children have in fact been born, there are serious medical and ethical concerns, including unexpected side effects and human rights issues. There has been criticism of using such methods when there are other methods to prevent HIV infection. Questions have also been raised concerning the potential for harm to these children and about how parental consent was obtained. These are serious ethical issues, and are likely sufficient to deem these clinical studies and medical practices unacceptable. However, there are even more serious ethical issues raised by this conduct. Genetic alterations can be irreversibly transmitted to offspring across generations and can thus be the beginning of changes to the human species at the genomic level. This is an extremely serious ethical issue that affects the future of humanity as a whole, not just medical researchers/scientists, patients with specific diseases, or related parties. What has become clear from this report is that it is relatively easy to edit the genome of germline cells (sperm, eggs, fertilized eggs) that are transmitted across generations. However, it is never acceptable for clinical research and medical practice with such extremely serious ethical consequences to be conducted without going through the process of consensus building. If a situation develops in which, for instance, parents are able to have designer babies, it will lead to the eugenic modification or breeding of human beings. What may be done in the future, such as for the treatment of a particular disease, must be limited to very narrow exceptions. Given these concerns, we need to seriously consider legal restrictions on human genome editing, especially on germline cells, as well as on the implantation of embryos developed (or grown) from gene-edited germline cells. We should also start examining the possibility of international regulation. If

left to the discretion of each country, there might be areas of the world in which these practices are not regulated. In this case it will be impossible, as a global society, to stop these practices, even if most other countries respect these ethical considerations. When contemplating such regulations, we must dig deep into their ethical grounds and clarify their rationale. It is necessary to think broadly with the public about the ethical issues of genome editing and intervention in life in its earliest stages, and to obtain social consensus on these matters. These are not just issues for scientists; they also demand the consideration of academics in various fields of the humanities and social sciences. We, the Board of Directors of the Philosophical Association of Japan, the Board of Trustees of the Japanese Society for Ethics, and the Board of Directors of the Japanese Association for Religious Studies, fully recognizing the importance of such questions, seek to contribute to the foundations of academic knowledge on these issues in order to form a social consensus on the ethics of genome editing and intervention in life’s early stages.

The Waning of Public Bioethical Debate in Japan

In the face of the rapid development and expanded application of biotechnology in Japan, bioethical arguments for caution, once so conspicuous in the debate over cadaveric organ transplantation, are becoming less prominent. From the 1970s to the 1990s, the government was willing to actively discuss these issues in a public context and reflect the consensus in public policy. Scholars and the media also held such discussions and were instrumental in raising the visibility of these issues in the public sphere. However, since the 2000s, the Japanese government seems disinclined to promote public debates over bioethics. The 2010 amendment to the Organ Transplant Law, which took place without public debate, was a prime example of this new attitude. Since the 2000s, there has been an increasing tendency to refrain from public discussions of the pros and cons of introducing new biotechnologies on the part of the Japanese government, medical personnel, and scientists who are conducting research in these fields. Debate is thus much less robust, at least when compared to the debate over the acceptance of brain death and organ transplantation. It seems that the government, bureaucrats, scientists, business leaders, and others who benefit from technological advancement see these discussions leading to an undesirable delay in the progress of research and its application. In the case of cadaveric organ transplantation, a highly detailed discussion in public forums deepened the understanding of both concerned parties and the public. However, as a result, the promotion of cadaveric organ transplantation was delayed considerably compared to other countries of the world. Even in the 2020s, this trend has continued. The government, the business community, and experts who want to keep up with the rest of the world in their research seem to have learned an important lesson from this.

In the early 2000s, public discussion was still seen as necessary and there was still a somewhat cautious approach on the part of these leaders. In the “Basic Concepts on the Handling of Human Embryos,” compiled by a government advisory body in 2004, the argument that we should exercise caution in the production of cloned embryos and the use of ES cells seems to have gained a degree of support (Shimazono 2006). But it could also be argued that this led to a delay in Japanese ES cell research. In addition, regarding issues such as egg donation and surrogacy, the government requested the Science Council of Japan to “discuss various issues related to assisted reproductive technology,” and in 2008, a “response” was submitted. However, the government has not taken up the issue at all since then, and the situation continues to be one in which no significant legal or institutional system of regulation has been established.6 In the 2010s, the pros and cons of research into and the use of ethically controversial biotechnologies were not examined in earnest. Without having made any clear determinations in a publicly visible way, little by little, measures have been taken by medical doctors and businesses to increase the acceptance of these technologies. This is a trend typically seen in Japan, but it seems that the tendency toward this kind of tacit acceptance is increasing worldwide. In the United States as well, the President’s Council on Bioethics, chaired by Leon Kass, was convened under President Bush in the early 2000s and received a great deal of worldwide attention, but there have been no noticeable government endeavors since then (Kass 2003).

Perspectives on LaFleur's Work and Its Ramifications

LaFleur argued that the practice of cadaveric organ transplantation anticipated and, in some sense, opened the door to an array of ever more radical biotechnological interventions. To grasp what was at stake, therefore, in the debate over brain death would be critical in order to clarify what was at stake in the debates to come. Furthermore, Japanese thinkers raised important questions that had not been fully considered in debates over organ transplantation or other forms of biotechnology in the United States. LaFleur's work thus represents a trenchant critique of US bioethics and its blind spots. On the many occasions in which I interacted with him, these points were driven home to me. And what are some of the problems highlighted by the Japanese debate over organ transplantation that might be relevant to other bioethical controversies? There are three themes in particular that LaFleur introduces in this book that I would like to expand upon and examine in light of broader Japanese bioethical discussions. The first is the problematic need to accelerate the declaration of death in order to obtain healthy organs, and how discomfort with this practice resonates with Japanese concerns over the "sorting of life." The second is LaFleur's observation that Japanese bioethics tends to be wary of the influence that powerful institutions can wield over medicine. The third point I would like to examine has to do with

the dangers, in cadaveric organ transplantation and in the application of other biotechnologies, of reducing the human body to a collection of alienable "parts."

Resisting Premature Declarations of Death

LaFleur notes that one problem with redefining death is that it seems to have been done solely for the purpose of obtaining viable organs. Once we advance to a style of utilitarian decision-making in which we judge it appropriate to truncate the life of a patient to achieve some other benefit, we might find ourselves on a slippery slope, gradually progressing to other new medical "treatments" that involve accelerating the arrival of death. We might, for example, consider allowing patients in a persistent vegetative state to die. From there we might move on to allow euthanasia for people with intractable diseases such as ALS. At the extreme end of this trajectory, we find abhorrent practices, like killing children with disabilities or elderly people with serious illnesses. LaFleur also touches on these questions, but they are not covered in great detail. As we have discussed, the voices of patients with intractable diseases and social movements in support of persons with disabilities have been highly respected in Japanese bioethics. Japanese bioethicists are sensitive to the ways that biotechnology and new systems of medical intervention may be used to hunt down those who are less likely to generate economic output due to disabilities and whose survival is considered a cost to society. In this way, we are cautious about introducing technologies and systems that target and seek to weed out the vulnerable. In connection with this, I would like to return to the debate over mass prenatal screening. According to a 2004 book by obstetrician and gynecologist Kōdō Satō, in 1996, only ten percent of Japanese fetuses with Down syndrome were aborted. In France and England, the number was closer to fifty percent. In addition, the rate of mass screening requiring pregnant women to undergo maternal serum marker testing at an early stage was, in the United States, 167 times that of Japan; in the UK, France, and Germany it was ten times that of Japan (Satō 1999: 53). These figures remained relatively stable throughout the 2000s. One of the reasons for this disparity is that many Japanese obstetricians and gynecologists listened to the voices of persons with disabilities and their allies; those arguing from a theory of caution (shinchōron) were in the majority. According to Satō, the decision to undergo mass screening is often more influenced by those who seek to encourage the expansion of testing, rather than by pregnant women actively seeking out such tests. Satō sees this as the influence of a "new eugenics." The eugenics practiced from the end of the nineteenth century to the Second World War was enforced from the top down, as the nation tried to control marriage and childbirth in order to "improve" the genetic quality of its people. In the postwar period, of course, Nazi eugenics in particular was seen as abhorrent and inspired strong resistance. However, since the end of the twentieth century, we can see the rise of a new eugenics that has spread, this time from the grassroots, originating with individual citizens who wish to give birth to "good quality children" and medical personnel willing to assist them in this.

Obstetricians and gynecologists tend to have a more negative image of Down syndrome than pediatricians do. Not surprisingly, therefore, they are often the ones who recommend mass screening tests. While obstetricians and gynecologists commonly discourage parents from giving birth to children with disabilities, unlike pediatricians, they rarely get to see the children with disabilities who grow up healthy and happy. Thus, they are likely to make arbitrary decisions about the happiness or unhappiness of people with disabilities while knowing little about their lives. The new eugenics is also bolstered by misleading narratives about the cost to society of people with disabilities. Some of those who advocate the benefits of serum marker testing say that finding and preventing the birth of a disabled person is cheaper than the cost of raising a child with disabilities. However, these cost-benefit calculations are not always correct. The social cost of persons with disabilities is insignificant considering the total national budget and the total amount the government dedicates to medical expenses. This entire discussion of costs gives the impression that people with disabilities are detrimental to society as a whole and encourages parents to hesitate before having children with disabilities. Manipulating information in this way creates an environment in which it is difficult for individuals to choose to have children with disabilities and creates a new style of eugenics under the guise of neoliberal self-determination. This situation does, however, benefit doctors and life scientists who want to spread knowledge about genetics, businesses who make money through screening, and politicians supported by related businesses. Satō thus argues that we should be cautious about prenatal testing.7 What I have been discussing here is mainly "beginning of life" issues, but bioethicists resisting hasty declarations of death have also made powerful arguments in debates over euthanasia and death with dignity. In this regard, the observations of patients with intractable diseases such as ALS are helpful. Quite a few, even while relying on a ventilator, are able to communicate and engage in a considerable degree of social activity. In collaboration with others, they have battled narratives that encourage them to hasten their deaths.

Skepticism of Medicine in the Service of Power

A second theme I would like to highlight and expand upon is the danger of subjugating medicine to the objectives of government, business, or other powerful interests. LaFleur observes that along with their divergent cultural and religious backgrounds, we can attribute some of the differences between American and Japanese bioethics to differences in historical experience. He refers to widespread awareness in Japan of the outrageous human experimentation undertaken by Japanese military Unit 731 and the efforts on the part of the US government to prevent these facts from being revealed. He also describes the ABCC (Atomic Bomb Casualty Commission, later the Radiation Effects Research Foundation), which adhered to the policy of "observing but not treating" survivors of the atomic bomb.


As LaFleur sees it, opinions on cadaveric organ transplantation have been influenced by the perceived willingness of wartime and postwar medics to disregard human life in favor of national objectives. This history has strengthened Japan’s sensitivity to “cutting-edge” medical research and aggressive medical care. That much is certain. In fact, we can point to other well-known cases that colored Japanese attitudes toward the nexus between medicine and power. As Japan began to focus on the development of nuclear power, the National Institute of Radiological Sciences, established in 1957, was criticized for its cold treatment of people exposed to radiation. Following the death of Aikichi Kuboyama, a crew member of a fishing boat exposed to fallout—the so-called “ashes of death”—from the US nuclear test at Bikini Atoll in 1954, both Japan and the United States took measures to suppress criticism of the test. Many of the crew members died early due to liver damage. One crew member, Matashichi Ōishi (1934–2021), criticized the fact that he was not informed of his liver damage despite having been examined at the National Institute of Radiological Sciences (Ōishi 2003: 100−1).

This is not just about the atomic bomb. Since the 1960s, the medical community’s slowness, and continuing reluctance even now, to recognize the etiology of Minamata disease—widespread methylmercury poisoning caused by industrial pollution—has led to criticism that it was partially responsible for that tragedy. Although LaFleur did not live to see it, following the Fukushima Daiichi Nuclear Power Plant disaster caused by the Great East Japan Earthquake on March 11, 2011, the medical responses led by the military and politicians only served to reinforce public distrust. Many residents across a wide area were concerned that they might have been exposed to radioactive materials in the first few weeks following the disaster. However, the government, medical researchers, and life scientists specializing in radiation exposure were extremely reluctant to confirm and publicize the diffusion of radioactive materials and the possibility of their uptake into the body. A systematic investigation was begun, but its purpose was to obtain epidemiological data, and there was no sense that its intention was to investigate health hazards and help protect citizens from them. This attitude was perceived as a continuation of the one on display in the US surveys of radiation exposure after the atomic bombing (Shimazono 2013/2021).

Caution Regarding the Instrumentalization and Commodification of the Body

Japanese proponents of a bioethics of caution have expressed a deep discomfort with practices that would reduce the human body to a collection of “body parts” and treat them like possessions, interchangeable components (buhin), or tools. To treat a body as a thing makes it available for the use of others. It encourages a process of price-seeking and raises the possibility of the commodification and commercialization of body parts. Once we start to see the body as nothing more than a collection of parts, even if we avoid creating a market for organs, we end up in a situation where there is moral pressure to donate body parts as acts of “love.” However, this is an unreasonable ethical proposition, insofar as we find ourselves paradoxically forced to perform a purportedly voluntary service out of “love.” In harvesting organs from brain-dead donors, the suspicion that we are treating the body like a thing is especially heightened because the procedure must be performed while the donor is, in fact, still alive—or, at least, still shows signs of life.

In addition to criticism of organ harvesting, this cautious attitude is manifested in Japan in a reluctance to donate eggs and/or engage in surrogacy, as well as in attitudes toward ES cells, human embryo research, and human-animal hybrid embryos (chimeras). In egg donation and surrogacy, women in financial distress take great risks to earn a living by offering their physical functions for others. Initially the practice emerged in the Philippines, India, and Thailand, but it was later also taken up in the United States. The practice is banned in Japan. And while the Japanese government gestured toward trying to lift the ban, it had made no progress as of the 2010s.

This is also connected to the cautious attitude toward human embryo research and utilization. For example, in 2004, the Bioethics Expert Committee under the Council for Science and Technology Policy, headed by the prime minister, compiled the “Basic Concepts on the Handling of Human Embryos.” In the section “Restrictions on the Acquisition of Unfertilized eggs, etc. and the Protection of Female Donors,” it addresses “using humans as tools or as a means to an end,” stating:

Using the current nuclear transfer technology, the production of human cloned embryos for use in research and other applications requires the collection of many more unfertilized eggs than fertilized eggs. For this reason, the acquisition and collection of unfertilized eggs for the production and utilization of cloned embryos has larger social and ethical effects than the collection of fertilized embryos. To avoid concerns about humans being used as tools, or as a means to an end, particular attention should be paid and the most stringent restrictions should be applied. Regarding the collection of unfertilized eggs from so-called unpaid volunteers, if this is approved, not only will the women who provide these eggs be physically invaded and mentally burdened, but there will also be growing concerns about the use of humans as tools or as a means to an end. In principle, it should not be permitted.8

Elsewhere in this document, we read that “in principle, the development and implementation of research objectives that would lead to the damage of human embryos are not permitted, but they may be provided in exceptional cases.” Thus, the production of fertilized embryos and cloned embryos is limited but allowed. However, five of the fifteen members of the governmental Bioethics Expert Committee issued a more restrictive “Joint Opinion on the Basic Concepts on the Handling of Human Embryos.” There, they write: “Lax handling of human embryos can lead to outcomes such as the instrumentalization, commodification and transformation of human bodies and human life into resources, all of which damage human dignity and the fundamental value of life.” They continue: “In principle, it is not permissible to produce fertilized human embryos for research,” and “in order to lift the ban on the production of human cloned embryos, there must be considerable scientific evidence for its necessity and timeliness, but the discussions so far have not provided sufficient evidence for this, nor have they arrived at public understanding on the matter” (Shimazono 2006: 278).

LaFleur focused on the debate over cadaveric organ transplantation in Japan from the 1970s to the 1990s, and on Japanese discomfort with the way that cadaveric organ transplantation treats body parts as things. Similarly, looking back on Japanese discussions of the use of human embryos from the late 1990s to the early 2000s, fears of “instrumentalization, commercialization and transformation of human bodies and human lives into resources” played a major role in ethical arguments for caution (shinchōron). Clearly there is a connection between the two discussions. Regarding the “end of life” in cadaveric organ transplantation and the “beginning of life” in human embryo research, the potential to treat life and body like things and use them according to human desires is considered a threat to human dignity.

In Conclusion

One of the most significant features of this book is LaFleur’s focus on and discussion of the philosophical foundations of bioethics. He puts a great deal of effort into investigating how early modern and modern Western thought has influenced modern bioethical debates over organ transplantation, and offers many important suggestions. He provides highly original considerations of René Descartes’s philosophy and its modern critics, utilitarianism, theologian Joseph Fletcher’s altruistic ethics, Hans Jonas’s “ethics of responsibility,” and more. At the same time, LaFleur refers to Japanese religious philosophy and intellectual history: Amane Nishi as an introducer of utilitarianism in Japan; Kitarō Nishida as the first Japanese critic of utilitarianism; the medieval Buddhist master Dōgen’s thinking about death; and Masao Abe, who, in modern times, carried out a critique of Western thought based on Buddhist ideas.

All of these analyses are enlightening, but what is a little confusing to me is LaFleur’s treatment of Takeshi Umehara’s argument, perhaps because, unlike the work of these other thinkers, it is less obviously indebted to Buddhism. In Chapter 6, LaFleur endorses Umehara’s claim that the concept of brain death is closely related to Cartesian philosophy. On the other hand, in Chapter 3, LaFleur criticizes Umehara for stoking nationalism through his praise of Japanese thought. However, I wonder whether Umehara’s criticism of brain death was so nationalistic. It seems that Umehara took the lead in gathering support for the minority report, opposing the view that equated death with brain death, which was appended to the official report allowing the redefinition of death and organ transplantation (1992). In the fourth section of the minority report, titled “The Ideological Position of the Minority Opinion,” it states:


What we need in Japan today is not simply to accept something unconditionally just because the developed nations of Western Europe are doing it, but to carefully consider these matters one by one according to our own cultural traditions. For true internationalization, it is necessary to have our own principles that can be clearly explained to other countries. (Umehara 2000: 470)

So, what are these principles of “our own”? The minority report lists four premises of brain death that its authors reject:

scientism, rationalism, a mechanistic theory of the human, and “West-centrism” (seiyō shugi). We cannot settle for such notions, and choose instead to value the human body as something wherein dwells a spirit as well as a unique individual life. Recognizing our commonality with all living things, we seek to consider humans as amounting to more than just our reason, rejecting the excesses of scientism, respecting humankind’s spiritual traditions, and adopting aspects of modern Western civilization only after critical reflection. (Umehara 2000: 470)9

The positions of LaFleur, who valued Buddhism, of Umehara and his colleagues, and of Hans Jonas stand as alternatives to a bioethics strongly influenced by utilitarianism, which is still predominant in the modern English-speaking world. At stake are the fundamental ideologies that support the practice of cadaveric organ transplantation, which has become a standard procedure in Western societies. To challenge them is to undermine the foundations of a strain of bioethics of which the United States has been in the vanguard since the 1960s. So what kind of new framework could replace the current global bioethical norm? This book poses just such a question to its readers. It stands as a highly original achievement, surely owing in part to the fact that LaFleur, as a scholar of religious studies, brings a fresh perspective to old problems that nonetheless continue to demand our attention—the questions of what it means to be alive, and what it means to die.

APPENDIX

This appendix provides an overview of alternative drafts of chapters from Biolust and describes materials that, due to their redundancy or their fragmentary nature, were excluded from the final published text. It consists of synopses of and quotations from chapters and sections that were, in the end, not utilized.1

CHAPTER ONE (2002): SURGICAL MASKS

Apart from the section summarized below, the rest of this chapter has been presented in full.

Societies and Their Masks

LaFleur notes with irony that Japan, a society at times stereotyped as prone to subterfuge and concealment, had a more frank and open debate about the ethics of organ transplantation than the purportedly more transparent societies of the West. The debates that took place in Japan were not concealed from foreign observers, provided they could read the language. According to LaFleur, it is the proponents of organ transplantation, no matter their nationality, that have shown more of a tendency toward reticence, euphemism, and concealment.2 Among these, LaFleur includes Dr. Jurō Wada, who conducted the first heart transplant in Japan under dubious circumstances in 1968.3 LaFleur describes interviewing Dr. Wada on November 6, 1996. According to LaFleur, Wada was unwilling to discuss the details of his first transplant, but he was more than willing to offer a critique of the inability of Japanese society to accept medical progress. Japan, according to Wada, was “a society still hopelessly ‘backward,’ ‘closed,’ and prone to give more credence to the religious mystagogue than to truth-telling scientists like himself.”

Ignoring the fact that it had been reporters, not religionists, who had unearthed the details of the “Wada affair” of 1968, he repeatedly portrayed the situation as a day-and-night contrast between a rational and openness-prizing society, my own in America, and a mystique and darkness-loving one, that of the Japan in which he was now living, under duress, most of the time.

CHAPTER ONE (2010): ORGAN PANIC

The first three sections of this chapter comprised material taken from various chapters from the 2002 draft. Its final two sections, “Gallows humor” and “Forest preservation,” are presented in full in Chapter 2 of this volume.


New Guilt

LaFleur opens this version of the manuscript with the anecdote, alluded to in the present Chapter 5, describing bumper stickers he encountered in the United States using vaguely religious rhetoric to guilt other drivers into becoming organ donors. He notes that earlier generations of American Christians and Jews would have found the theological rationale behind these slogans unconvincing and would have viewed the practice of organ transplantation as an abomination. He then briefly outlines the theological shifts, more fully developed in Chapter 7, that made organ transplantation acceptable.

LaFleur returns to the bumper stickers and their attempt to elicit feelings of guilt. Given the widespread consensus in the United States that organ donation is an unqualified good, why might they be necessary? As in Chapter 5 of the current volume, he relates his conversations with Americans, in which he notes their reluctance to admit that they have not (yet?) volunteered to become organ donors. Assuming themselves to be in the minority, they are largely unaware that they, in fact, comprise 50 percent of the population. In Japan, however, there are even fewer donors, but people are open about the fact. Having enjoyed “what among the world’s peoples is surely the longest and most probing public debate about the ethics of organ transplantation,” they feel relatively confident about their stance.

The Fault-Line

The chapter’s second section describes the impetus for this study, much along the same lines as Chapter 1 of the present volume. Why, LaFleur wondered, were the Japanese so concerned about this particular issue of organ transplantation—one that all sides of the American political spectrum, including religious liberals and conservatives, found themselves in ever-so-rare agreement over? The typical explanations, of course, involved clichés about Japanese backwardness and irrationality.

Heart of the Matter

Here LaFleur observes that while attitudes among the general public in the United States have not seen much change, at the time of writing in the first decade of the twenty-first century, philosophers and scientists had begun to raise serious questions about brain death. The general public was given a rare glimpse of the renewed controversy within the scientific community in a New Yorker article published in 2001, discussed in Chapter 1 of the present volume. He depicts questions about brain death arousing a sense of panic within professions tied to the practice of organ transplantation. Websites promoting the practice quietly changed references to “brain dead” donors to “dead” donors, hinting at doubts behind the scenes about the usefulness of the concept of brain death. To give a sense of what is at stake, LaFleur writes that in the first three months of 2008 alone, an estimated 5,222 organs were transplanted, with the average cost of a heart transplant coming in at $148,000 and a lung transplant at $235,000. Clearly, there are significant financial interests riding on maintaining the viability of the concept of brain death.4

CHAPTER TWO (2002): A CORPSE THAT SWEATS

The present volume includes the 2010 version of this chapter. What follows here is a summary of the 2002 version.

The Living Dead

This section introduces the theme of the chapter by noting that “our modern technologies have presented us with entities whose existence in earlier societies was feared but also usually dismissed as a logical impossibility—namely, human bodies that are simultaneously both dead and alive.”

A Pregnant Machine

Here, LaFleur makes more of the fact that since cases of so-called “postmortem pregnancy” first arose in the decades following the legalization of organ transplantation in the United States, the Japanese, in their deliberations over the legalization of cadaveric organ transplantation, “had to deal with a much more rich and complex amount of data.”

Jonas versus Harvard

The discussion here of Hans Jonas and his criticism of the justifications for the redefinition of death was expanded upon in the second draft, included here in full as Chapter 9.

CHAPTER THREE (2002): GLOBAL SEARCH

This chapter focused on the history of Japan’s first heart transplant. Its discussion of Wada remained rough and focused on how Wada’s time in the United States influenced his later intellectual trajectory. No version of the chapter appeared in LaFleur’s second draft, but it seems he was planning to include portions of it in a yet-undrafted chapter titled “Vivisection’s Afterlife,” possibly combining it with material on Unit 731 now found in Chapter 10.

The World of Doctor Wada

This section provides a detailed biography of Wada, taken from his 1992 autobiographical Nōshi to shinzō ishoku—are kara nijūgo nen. We learn that his father had studied abroad in the United States for eighteen years and that Wada grew up in a home with a “Christian atmosphere” (Wada 1992: 109). Wada himself would travel to the United States to study “with some of the world’s most eminent persons doing heart surgery at that time.” In 1957 he established a department of cardiology at Sapporo Medical University.

LaFleur then describes the “Wada incident,” in which Wada transferred the heart of a drowning victim, Yoshimasa Yamaguchi, to a cardiac patient, Nobuo Miyazaki. First met with accolades, the procedure later led to accusations of murder against Wada. The questions and the charges were multiple (Tachibana 1988: 265–83; Mizuno 1991: 18ff). And even though the Sapporo prosecutor eventually dropped the charges, three basic questions have never—at least in the minds of most who have explored the matter in depth—been adequately answered. They are: first, whether the donor, Yamaguchi, was really dead when declared to be so; second, whether the recipient, Miyazaki, was in fact in a medical condition so hopeless that he needed the transplant surgery that may have caused his death after eighty-three days; and, third, whether Japan’s first transplant surgery, given that the retained records were so scanty, was not in fact a very slipshod affair (Mizuno 1991: 18–19). Corresponding to these questions, the salient facts that continue to dog the Wada case are: first, that “brain-death” was declared even in the absence of the requisite equipment to gauge it; second, that Teruo Fujimoto, the Sapporo colleague who sent Miyazaki to Wada, held that a mitral valve operation would have been enough to keep him alive—thus showing a false diagnosis by Wada leading to the patient’s death; and, third, that Wada either concealed evidence or kept records in a very unprofessional fashion (Feldman 2000: 134–6).

Tachibana and the Brain as Organ

LaFleur introduces Takashi Tachibana, the journalist who—in addition to exposing the “Lockheed scandal” that brought down Prime Minister Kakuei Tanaka—has written extensive critiques of the concept of brain death. LaFleur describes how:

[Tachibana attacked] the various interim and final reports by government agencies, statements by professional medical organizations, and government position papers on brain death and transplantation that appeared between approximately 1974 and 1992. The most important of these was the final Report of the Ad Hoc Committee on Brain Death and Organ Transplantation, submitted to Prime Minister Miyazawa on January 22, 1992.

One aspect of his critique targeted the “Takeuchi Criteria,” a set of guidelines that were “emerging in Japan at the time as the government’s likely path to a legalization of cadaveric transplants.”


The three usual categories of lost brain function are thought to be: 1) that of the brainstem (nōkan)—i.e., that unpaired portion of the brain at its base; 2) that of the cerebrum (dainō), resulting in the condition known in North America as “persistent vegetative state”; and 3) that of the whole brain (zennō)—i.e., both of the foregoing as well as the cerebellum in addition.

The Takeuchi Criteria required the death of the whole brain. But this was not enough for Tachibana. Tachibana held that for brain death to truly qualify as death, what was required was the actual death of the brain as an organ (kishitsu), not merely the cessation of brain function (kinō). “In Tachibana’s view the latter, inadequate for these purposes, might be registered as the flat line on an EEG. But, by contrast, the brain that has ceased its existence as an organ is one whose cells have collapsed and disintegrated, resulting in the passage of blood from the skull and out through the nose.” Tachibana quotes Kōji Mitsui, a lecturer at the University of Tokyo:

Isn’t it the case that death is other than simply the loss of functions? For instance, imagine a case where somewhere along an arm the nerves that run down to the hand have all been severed. The hand then will have lost all its ability to function. But do we then say the hand is “dead”? Or think, alternatively, of an eye which for some reason can no longer see. That is, the eyeball will have lost its functionality. But we do not then say that eye has died. The loss of functions differs from death. Even the irreversible loss of functions is not equal to death. (Tachibana 1988: 87–8)

Absent the evidence of cranial blood, Tachibana is unwilling to grant brain death as proof of death. LaFleur observes that “[i]t is Tachibana’s ongoing demand for empirical proof of death that has been the hallmark of his writing and lecturing on this issue.” Interestingly, although calling for stricter criteria to determine brain death, Tachibana has no problem with the claim that the locus of the human personality is in the brain.

The Argument about Progress

LaFleur returns to his conversation with Wada in 1996: “Japan, he reiterated, is a country known for its technological progress and, since the cadaveric transplant is where modern progress shows up most clearly in our times, for Japan to disallow this is to cause embarrassment to the whole nation in the opinion of the world. It leaves the impression that Japan is still in thrall to pre-modern religious viewpoints.” Wada called attention to the fact (also mentioned in his book) that:

[It was] practitioners of Chinese medicine who had been the first to charge that there had been something wrong in the Wada transplant. The clear implication of this is that it would only be from the perspective of antiquated, deeply “Asian,” non-scientific, and progress-impeding kinds of thought that there could be any possible objection to what he did in 1968 and what those with his viewpoint would like to do now.

To LaFleur, it appeared that this was one of the most influential arguments put forward by the pro-transplant camp—that continued opposition to organ transplantation jeopardized Japan’s standing as an advanced nation:

Resident in Japan for a year then I was surprised at the number of people, both in the media and in personal contacts, who said that, although they personally would never wish to be part of any process involving [organ harvesting from a brain-dead donor], the procedure itself should probably be legal—for the simple reason that Japan should keep pace with “advanced” countries in this area. This was to say that for a nation to permit acts which once would have been considered repulsive may simply be part of the price of progress. Whether it is “progress” at too great a price remains to be seen.

CHAPTER FOUR (2010): SCIENCE IN HAVANA

For its epigraph, this chapter began with a well-known adage: “The first rule, if you’re trying to get out of a hole, is to stop digging.”

More than Just a Junket

This chapter focuses on the symposium LaFleur attended in the year 2000 on the topic of “Coma and Death” in Havana, Cuba, described in Chapter 1. This version provides details that he later published elsewhere (LaFleur 2001, 2002a). It also gives more background on the research findings of Dr. D. Alan Shewmon, whose presentation, in LaFleur’s telling, electrified the audience:

In a professional journal, Neurology, [Shewmon published] a scientific paper showing what is wrong with the claim made by a Presidential Commission in 1981 that the death of the brain will within several days surely result in the asystole of the heart—that is, its failure to continue to beat. If this happens so quickly, it was assumed by those doing transplants, it must be that the brain is so much the central integrator of all the body’s functions that brain death is tantamount to an imminent disintegration of the body as a system. Shewmon’s patients were children and adolescents, not adults. And they were not fitting the criterion of asystole “within several days.”

Shewmon found an inverse relationship between “the age of the brain-dead person and length of time before the heart would fail.” One of Shewmon’s brain-dead patients had survived 14.5 years at the time he was writing. He also found 175 cases of brain-dead individuals surviving a week or more (Shewmon 1998: 1542). He noted that the number would have been larger if those with healthy organs were not, ironically, the “best candidates for the removal of such robust organs.”

Such removal by surgery, in effect, systematically destroys the very data which otherwise would call into question the very notion being used to shore up the belief that such surgery is unproblematic and ethical. Japan, because it was not routinely extracting organs and thus was retaining and caring for more of the relatively more “healthy” brain-dead persons, was providing data on later asystole of the heart. Citing Japanese studies as corroborating his own data, Shewmon referred to the existing theory and transplantation practices as “incompatible with optimal scientific methodology.”

Shewmon’s Videos

Here LaFleur provides descriptions of the video evidence presented by Dr. Shewmon. One case was a young man who had been brain dead for fourteen years and had been sustained by a ventilator and feeding tube. MRIs revealed the majority of the subject’s brain to have atrophied and to be “largely made up of disconnected remnants of its former contents.” Stimulus to the subject’s head resulted in no response. But other stimuli, like ice water trickled down his shoulder or lifting his arm, resulted in movement that was deemed impossible in the “orthodox” view of brain death, “since within his skull there was nothing able to serve as the ‘integrator of bodily unity’ and functioning.”5 The body had also grown and “demonstrated the capacity to fight off infections and heal wounds by itself.”

Shewmon further argued that, although there are profound differences between the brain dead and those with high spinal cord injuries (SCI), physiologically the conditions share some similarities. Following Shewmon, LaFleur claims these similarities reveal something significant about the brain’s role in integrating physiological functions:

Much, perhaps virtually all, of what shows up as either integrated or not integrated in both diagnosed conditions is due not to the physiological condition of the brain, the supposed “integrator” according to the orthodox view, but is in fact contingent on capacities and operations of a part of the body that does not even show up on some diagrams of the brain, namely the spinal cord that, as discussed in the last chapter, falls below the imagined but nonexistent “floor” of the brain as a discrete unit.

Shewmon later published his findings, which LaFleur quotes:

The brain cannot be construed with physiological rigor as the body’s “central integrator,” in the sense of conferring unity top-down on what otherwise would be a mere collectivity of organs. Neither is any other organ ‘the central integrator.’ A living body possesses not an integrator but integration, a holistic property deriving from the mutual integration among all its parts. (Shewmon 2004: 38)

Picking Up Pieces

As described in Chapter 1, LaFleur observed that no one disputed the accuracy of Shewmon’s findings, or the implication that brain death was, in the words of one attendee, “a useful fiction.” According to LaFleur, other pro-transplant figures, acknowledging the collapse of the concept of brain death, have begun to argue that we should take advantage of the blurriness of the category and expand the definition of death to include anencephalic infants or patients in a persistent vegetative state. In other words, they recommended no longer insisting on the death of the whole brain, but erring instead on the side of collecting more organs. LaFleur likens this approach to those who find themselves in a “conceptual and empirical hole” and whose advice is to “go on digging!”

The Three Indicator Standard

According to LaFleur, Japanese critics anticipated this attempt to further loosen the criteria by which we define death, predicting that Americans would “likely switch to something even more fully Cartesian. Descartes’s famous Cogito ergo sum, by being made to imply [not just ‘we think therefore we are,’ but] ‘we are [only] if and when we think,’” could be used to justify “deleting” those who are not thinking from the “category of life-deserving human beings.”

This was followed by an expanded version of the discussion in Chapter 5 of the traditional criteria for death, the “three indicators” (Jp. san chōkō)—cessation of heartbeat, cessation of breathing, and dilation of the pupils. He describes the work of Naoki Morishita:

[Morishita] places emphasis on dying as a process rather than as taking place at one fixed point. [He] cites a sequence of graphic medieval Japanese paintings, The Scroll Depicting Nine Stages of Dying, to remind his readers that the art of centuries ago shows that people of the past observed with care the physiological stages through which a person, once alive, gradually decomposes and turns into a virtual nothing (Morishita 1999).

Importantly, Morishita does not appeal to something particularly “Japanese” in this; his point, rather, is that we moderns too easily overlook the degree to which earlier peoples also recognized the importance of close observation and using the evidence of our senses to make crucial decisions such as whether or not another human is dead or alive. We deceive ourselves if we assume that being empirical is a unique “modern” discovery.

Hands On! Hands Off!

LaFleur begins this section by attempting to unpack the somewhat confusing claim that Shewmon’s research was “anti-Darwinian.” The evocation of Darwin, according to LaFleur’s reading, stems from notions that, since the neocortex is a recent evolutionary development, it can be viewed as the seat of human consciousness and that which sets us apart from lower orders of animals. Once that is presumed, the death of the neocortex can be equated with the death of the human person. By this logic, anyone who fails to equate the death of the neocortex with death per se “must be an anti-Darwinian.” Given the context of US culture wars, LaFleur sees this as an attempt to draw a line where there isn’t one, and to paint doubters of brain death, no matter their credentials or evidence, as “anti-science.” Defenders of biotechnologies assume that “every challenge to their position must arise out of such a mindset.”6

Returning to the traditional criteria for death, LaFleur notes that not every scientific advance renders previous knowledge “unscientific.” Furthermore, there might be aspects of traditional medical practice that are worth retaining. As a case in point, he quotes Tomio Tada:

I am of the view that it is preferable for physicians to touch their patients physically. Recently they do not touch them but make what can only be called a paper-based diagnosis. From beginning to the end they check the data that has been entered into a computer, not bothering to observe the face of a patient or touch them. However, the human hand is a superb sensor, one that outstrips the capacities of a machine. (Tada 1999: 20)

CHAPTER FIVE (2010): CONVERTING THE CHRISTIANS

This chapter details the shifts in Western religious thought that led to the relatively rapid acceptance in the United States of the notion of brain death. It follows Chapter 7 closely and much of it reproduces sections of the article “From Agapé to Organs” (2001, 2002a). Here I will highlight the portions that do not appear elsewhere.

Christiaan and the Pope

The suite in which Christiaan Barnard performed the first heart transplant has been turned into a museum. When LaFleur visited it in 2004 he was interested to see a display featuring a picture of Barnard standing with Pope Paul VI, signaling the Roman Catholic Church’s tacit approval of the procedure. The Pope, as LaFleur puts it, had been “converted.” But “getting to that moment of convergence had not been so easy.” The rest of this section covers the same ground as “Making Medicine the Miracle” from Chapter 7 and the article “From Agapé to Organs.” LaFleur underscores the fact that, historically, we cannot find a consistent view of how to handle dead bodies in the Christian world. Treatment of corpses varied according to socioeconomic status and changed drastically in times of duress, such as famine, war, or epidemic.

Miracle and Yuck in Cape Town

Here LaFleur reiterates the point raised in Chapter 7 that American Christians’ embrace of the new technology of the transplant was contingent and took place during a unique window of opportunity. It was a time in which agapé—a term from the New Testament for pure, selfless, altruistic love—“was being put forward as the quintessence of Christianity.” He then returns to the use of the term “miracle” in describing advances in biomedicine, which, given the Protestant view that the “age of miracles” had passed, left open the door for the “modern miracles” of biomedicine to be used as proof of Christendom’s cultural superiority, and as a means to convert those of other cultures and religions through medical missionizing.

Finding Love in Japan

LaFleur observes that even after a law passed the Japanese Diet in July 2010 making it easier to perform transplants, resistance has remained. Some Westerners assume this has to do with the religious disposition of the Japanese. LaFleur recounts an anecdote in which he gave a lecture in 1999 at an American Midwestern university:

[A] medical professional who claimed to have visited Japan frequently objected to my somewhat complex explanation for Japan’s slow resistance to brain-death-based transplants. “It’s much more simple than that,” she declared. “The reason for not doing transplants there is simply that Japan is not yet a Christian society!”

The confluence, at least in the mind of this one doctor, of medical and religious missionizing could not be clearer. LaFleur then turns to the critiques raised by Itō, discussed in Chapter 7, of the “unnaturalness” of the “voluntary love” of agapé. While Itō promotes familial love as normative, Fletcher, in his view of agapé, saw family ties as obstacles—the kinds of emotions he wished to remove from consideration in the drive toward organ donation. LaFleur continues:


[A]lthough Japanese are usually uncomfortable asking or even suggesting the specific religious commitment or proclivity of one another, there is a detectable sense that a difference in religion may be involved in whether or not to endorse organ transplantation. Some contemporary Japanese, especially if they themselves are skeptical about transplantation, tend to see a subtle Christianizing agenda in the enthusiasm with which some in Japan, whether Christian or only deeply sympathetic to Christianity, promote this technology and speak of it as a way through which something still lacking in Japanese society might be provided.

Deeken and Doi

Despite the perception that organ transplantation accords with Christian values, LaFleur shows that Japanese Christians do not necessarily feel they need to convert to the cause. Here, as in his article for Zygon (2002a), LaFleur contrasts the views of Alfons Deeken, a Catholic priest who taught for decades in Japan at Sophia University, and Kenji Doi, a theologian associated with the United Church of Christ in Japan. LaFleur describes Doi as “part of a generation of younger, well-informed scholars of Christianity raised outside the ambit of Euro-American Christendom and willing to question the interpretations of Jesus and the early Church that have dominated the scholarship and church life of the West.” Although LaFleur introduces Doi in his article, here he gives his analysis of the parable of the Good Samaritan a more thorough treatment:

Doi quotes in full the well-known section of Luke’s Gospel 10:30-37 and scrutinizes its details. What to him stands out as especially important is the exact wording about the behavior of the Samaritan, someone whose ethnicity made him despised by the religious leaders of the Jews at that time; in contrast to them and their unconcern, he actually went up to the man in the road who had been beaten by thieves and left in a “half-dead” condition. Doi calls attention to verse 33: “But a Samaritan, as he journeyed, came to where [the beaten man] was; and when he saw him he had compassion.” At that point he treated the wounds and brought the injured man to an inn. Doi notes the precise wording of the conditional phrase: “when he saw [the beaten man], he had compassion,” and finds it significant that it is “the act of seeing” that gives rise to compassion and subsequent rescuing activities (Doi 2000: 250–1). The Greek original, idon esplagchnisthe, has, he notes, the compassion following the act of seeing.

When exploring how this text should or should not be connected to public policy, Doi finds that the way organ transplantation has been institutionalized not only does not allow for the “compassion based on seeing” sequence but actually goes out of its way to prevent this from taking place. Like many other Japanese commenting on this matter, Doi insists that to judge its ethics we must keep all the players in view. This means not only that the recipient as well as the donor are involved but also that a third entity, the person or persons who act as the transplant coordination network, is present and playing a key role. And, according to Doi, a large part of that actual role involves, under the euphemism of “coordination,” tactics for blocking contact. Because the body of someone declared “brain dead” will show empirically so many signs of being still alive, the sight of this body/person and the thought that organs will be excised will likely disturb the family and friends. It then becomes important to prevent the kind of seeing that is so important to the parable of the Good Samaritan. The potential recipient too will never see the donor in that state. The “persons in the middle” here do not so much mediate as serve to prevent actual seeing on-site and the feeling that might arise in such a context.

For Doi, in the case of organ transplantation, intermediation is not so much a means of connecting people as closing them off from each other. As he puts it, “When we look at how things have actually developed in America we see that altruism has been made into a duty, something simply expected of everyone. But can such a thing really qualify as love for one’s neighbor?” (Doi, quoted in Komatsu 2002: 285). LaFleur concludes: “Dr. Barnard got the pope to endorse his procedure with relative ease and speed. Japan’s Christians, perhaps more well informed than the pope about what is actually involved, seem much less ready for conversion . . . at least on this matter.”

CHAPTER EIGHT (2002): BODIES, WASTE, AND PHILOSOPHY

This chapter corresponds to Chapter 8 in the present volume and provides an analysis and critique of utilitarian bioethics. The first draft focused mainly on Japanese critics of utilitarianism. The second presented more background on Bentham and gave more attention to his Western critics. Overall, the second draft was more polished—and corrected some minor errors that were present in the original draft. But this draft also elided some of the most interesting analysis of Japanese bioethicists. For this reason, Chapter 8 of the present volume combines the best of both drafts. What follows here are synopses of materials that were not included.

The first draft included a section titled “Creeping Flesh,” providing an overview of the work of Hisatake Katō—“a professor at Kyoto University and quite possibly the senior bioethicist in Japan today”—and his 1986 book, Baioeshikkusu to wa nani ka [What Is Bioethics?]. Katō, a critic of utilitarianism, is one of the scholars who alerted LaFleur to the Japanese affinity for the work of Hans Jonas. This section, unfortunately rather rough, sketches out ideas and connections that are not fully developed. The fact that LaFleur appears to have abandoned this section in the second draft suggests that he judged some of these arguments not worth pursuing.

LaFleur describes Katō’s rejection of the reductive tendency of utilitarian bioethics, which eliminates important considerations. “For instance, Katō finds value in a position taken by the British philosopher Mary Warnock,” who argues that we tend to assign legal rights based on our instincts and feelings, specifically in response to violations that would “make the flesh creep.” Katō connects Warnock’s evocation of moral horror to Rudolf Otto’s theory that fear is at the heart of all religious experience, in which one is confronted with the mysterious, “numinous” presence of the divine. Katō connects this to “animistic” religion—or the belief (as in Japan’s traditions of kami worship) that the objects of the world are infused with spirits.

LaFleur then jumps to a discussion of how the term “animism” has become something of a third rail in Japanese studies, mainly because it was used by Takeshi Umehara to argue for Japanese cultural uniqueness and “a form of intellectual nationalism.” LaFleur touches on Umehara’s opposition to the legalization of organ transplantation, but sees Katō’s interest in animism as more worthwhile, and not “as some kind of index to something uniquely preserved by the Japanese.”

In many ways his own discussion of this topic is for the purpose of analyzing just what may have been lost by modern European philosophers when they sought to dismiss “the emotion of fear” as “primitive” and therefore “dispensable” for ethics. [. . . .] That is, the suggestion here is that the “fear” element, one we usually dismiss as residual “animism,” may be precisely what might now prove most important in the realm of bioethics; it could re-legitimate what we are being told by our own emotions of disgust vis-à-vis certain kinds of biotechnology. This is what Jonas meant by fear’s capacity to be “heuristic.” [. . . .] Jonas is right when he states we need to give priority in our consideration to the “Thou shalt not” that arises from negative examples of the past rather than to the “Thou shalt” that springs from merely utopian dreams of the future. This would mean, then, that bioethicists should give greater, not less, attention to the crushingly negative examples of the 20th century—Nazi experiments, those of Unit 731, and what is now increasingly seen as having been the “useful” medical “fallout” from Hiroshima and Nagasaki.

LaFleur then moves to a discussion of Jonas that anticipates Chapter 9. One passage, however, that was not included in Chapter 9 underscores Jonas’s own distaste for utilitarianism:

I would here call attention to the fact that Jonas, in one of the places where he was most critical of his one-time mentor, took issue with Heidegger’s notion that “things are primarily zuhanden, that is, usable [. . . .]” This is part of the reason why Jonas objected to existentialism. It, he held, perpetuated the old gnostic contempt for nature. And, in what could serve as an accurate characterization of some recent works in bioethics, Jonas wrote: “So radically has anthropomorphism been banned from the concept of nature that even man must cease to be conceived anthropomorphically if he is just an accident of nature. As the product of the indifferent, his being, too, must be indifferent” (Jonas 1958: 339).

This is followed by LaFleur’s impression that today’s utilitarian bioethics does little more than serve as a public relations campaign for biotechnology.


[T]here exists a high likelihood that the bioethics enterprise, precisely because it will bend over backwards to prove itself “useful,” will be enlisted to serve as the “soft,” “human,” and “ethically sensitive” face shown to the world by the advocates of each and every emergent technology. This will turn out to be bioethics as public relations. To that extent bioethicists, especially if they see themselves as preeminently pragmatists and utilitarians, will be found allaying the fears—perhaps the eminently heuristic fears—of “the public.”

The remainder of the section briefly touches on materials, such as the dilemma faced by the Sugitas, more fully developed elsewhere.

CHAPTER EIGHT (2010): CLOSETED MEDICAL BOMBS

This chapter is nearly identical to Chapter 10, though the order in which subjects were introduced was changed, and it is missing its final pages. It is possible that LaFleur intended to conclude the chapter by pointing out that we should not attempt to explain the differences in Japanese and American responses to the atrocities of wartime medicine by resorting to essentialist narratives about Japanese or American culture, since they derive instead from historical contingencies—a prominent theme in the first draft but relatively underplayed here.

Windfall Labs

In this draft, LaFleur starts with material detailing American efforts to glean knowledge about the effects of radiation by exploiting Japan’s atomic bombing victims, corresponding to the section “Gathering Genetic Information” in Chapter 10. Contrasting American and Japanese reflections on the Second World War, he points out that “Americans gave lots of credit in public to the virtues they saw as linked to their own national character but were also aware that certain technological advances, ones not yet in the hands of the Japanese, had given them victory.” He includes a brief history of scientific knowledge on radiation, noting that, although it was known that radiation caused genetic mutations, it was impossible to conduct experiments on humans to determine the exact effects on the human population. “This is why the dropping of the bomb was such a windfall for medical science.” In this version he also underscores the fact that US research on radiation was done against the backdrop of the cold war, and that “gaining as detailed knowledge as possible of the effects of such weapons would have seemed imperative.”

War-to-War Research

This section corresponds to “Battlefield Research” in the present volume, and its discussion of the work of Seaman is largely identical to that in Chapter 10. Turning to the history of Unit 731, he observes that “under certain conditions, medical personnel can be suborned into shelving their duty to be healers in order that their primary professional obligation not interfere with research agendas either sought by, or forced upon them.” LaFleur comments that when working on his 2008 edited volume Dark Medicine, he had been surprised by the frequency with which Japanese bioethicists referred to the atrocities “and to the need for appropriate fear vis-à-vis how current international competition in biotechnology might also co-opt physicians into an easy disregard for the norms of medicine and ethics” (LaFleur 2008: 1–12).

Unit 731

This section is largely the same as “Finding Cover for Unit 731,” though here he pushes back against the “unfounded rumor” in the West that the Japanese had “settled into some kind of national ‘amnesia’ about the facts of Unit 731.” When details of the activities of the unit became public in the 1990s, there was an undeniable amount of public discourse on that history. Here he ties together more closely his analysis of the research undertaken by Unit 731 with that undertaken by American physicians during the occupation.

[Tsuneishi (1982)] showed how the Japanese army’s invasion of Manchuria in 1931 was an “occupation” with windfall benefits, at least for persons wanting to carry out uninhibited research on human subjects. Funds were far more abundant than they would have been at home and civilian oversight, a loathed irritant in the eyes of the physician researchers, was non-existent. What was being done in Manchuria was not only far from the eyes of the Western world but even from those of Japanese at home.

Here he also places more emphasis on the fact that the horrors of Unit 731 could be rationalized as “collateral damage, things that could not be avoided in an effort to save lives on ‘our’ side.” In LaFleur’s words, such a rationalization involves “calculation”—the few are sacrificed so that more may live.

NOTES

Foreword

1. These include Liquid Life (1992), Dark Medicine (2007), and his essay “Body” in Critical Terms for Religious Studies (1998).
2. This, however, is perhaps his most pointed and polemical work to date.
3. To illustrate, for example, the degree to which those “medical missionaries” seeking to promote organ transplantation in Japan saw the practice tied to not just a Western but a specifically Christian ethos, Bill included reproductions of a Japanese organ donor card embellished with angels—imagery that evoked Christianity as opposed to Japan’s more dominant Buddhist or Shinto traditions.
4. Since the brain-dead body relies on a ventilator to sustain respiration, Margaret Lock (2002: 40–1) refers to it as a “machine-human hybrid,” “neither alive nor dead, both dead and alive.”
5. Nowhere, for instance, does he advocate keeping brain-dead bodies on life support indefinitely—which, as Borovoy notes in the Introduction, gives rise to its own set of ethical problems. Nor does he call for a moratorium on the practice of organ transplantation.
6. As Borovoy shows in her Introduction (p. 9), these gray zones have not been vanquished: “the body defies the desire to technologically pinpoint a ‘moment’ of death.”
7. I recommend that Biolust be read in conjunction with these works; all three complement each other in various ways.
8. Lock (2002: 376–7) argues that the category of brain death should be dropped and that irreversible loss of consciousness and spontaneous breathing should instead be declared the “death of a person,” with the caveat that “what constitutes the concept of ‘person’ is by no means universal [. . . .]” For Sharp’s views on brain death, see Sharp (2006: 16–17, 46–7, 85, 99). On her support of organ transfer, she writes: “[m]any people I have met through this research would be long dead were it not for transplantation; thus, on a more visceral level, a condemnation of organ transfer would mean that individuals whom I cherish would no longer walk this earth. Thoughts of, for instance, a moratorium on transplantation, or an active campaign to discourage procurement, disturb me deeply” (2006: 29).
9. Sharp (2006: 99), for instance, writes of brain death that “its medicolegal success and social acceptance ultimately insist that we be reduced to our mind-brains. Or, put another way, organ harvesting is driven by the assertion that when the mind-brain no longer functions, we cease to be ourselves, to be full-fledged human beings and, thus, truly alive.” See also Lock (2002: 116–20, 374) and Sharp (2006: 16–17, 46–7).
10. LaFleur’s concerns echo those of critical medical anthropologists, who, in the words of Charlotte Ikels, are concerned “about the potential exploitation of vulnerable populations—the brain-dead, the irreversibly comatose, or those in a permanent vegetative state—as analytic philosophers propose revising, by narrowing, the concept of personhood” (2013: 90). Regarding the breadth or narrowness of our definitions of personhood, my own work on representations of senile dementia in Japan (Drott 2018) has shown a tendency to eschew the kind of discourse common in North America that equates selfhood or humanity solely with the presence of a functioning mind.
11. As Richard Shusterman writes, we would do well to overcome the attitude that the body “merely belongs to the self rather than really constituting an essential expression of selfhood” (2008: 3–4).

Introduction

1 In an essay on these issues, I discuss the more family-oriented nature of the discourse around abortion in Japan, which is less preoccupied with the question of when life begins and more preoccupied with parenting and the creation of a good society (Borovoy 2011).
2 Specifically, he explores the secularization of American Christian thought and the influence of utilitarianism on Protestant theologian and bioethicist Joseph Fletcher. Fletcher's notion of "situational ethics" preached the logic of consequentialism—the notion that the morality of an action can be judged by its effects, rather than its means or underlying principles. And his notion of agapé preached the virtue of love as an act of obligation (the virtue of the "good Samaritan") rather than as an expression of emotion or desire (Chapter 7). For LaFleur, the publication of these ideas in the late 1960s coincided with early successes in the field of transplant and helps explain why the Harvard Ad Hoc Committee's report (a landmark publication which defined "brain death" for the first time) was released without comment, while the Roe v. Wade decision in 1973 just five years later drew so much ire.
3 This includes the work of professor of immunology Tomio Tada, in the 1970s, who, according to LaFleur's synopsis, challenged the notion that the nervous system should be regarded as the dominant organ system (which determines the existence of life or personhood), instead showing the immune system's sophisticated work in distinguishing "self" from "other"; the work of Kōshirō Tamaki in Indian philosophy on the meditative state, which led him to privilege the spinal cord and medulla oblongata as an element of human awareness; the work of Takeshi Yōrō on the loss of hegemony of Chinese medicine; the work of Atsuhiro Shibatani, a biologist concerned with theories of human distinctiveness; and the work of conservative philosopher and public intellectual Takeshi Umehara (1925–2019), a critic of the Cartesian dualism that some argue is central to the rationalization of the "brain death" idea.
4 Furthermore, there is a good deal of confusion among the public as to what "brain death" is, with many laypeople confusing the state of coma or persistent vegetative state with "brain death" (Sade 2011).
5 States passed their own laws; the Uniform Declaration of Death Act 1981 by the National Conference of Commissioners on Uniform State Laws, which sought to standardize this definition of death, was adopted by most US states in order "to provide a comprehensive and medically sound basis for determining death in all situations." The act was supported by the American Medical Association, the American Bar Association, and the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research under Jimmy Carter. The Uniform Declaration of Death Act deemed the new definition
of death necessary due to “modern advances in life-saving technology” (UDDA 1980 https://www.uniformlaws.org/HigherLogic/System/DownloadDocumentFile. ashx?DocumentFileKey=341343fa-1efe-706c-043a-9290fdcfd909). 6 The persistent vegetative state raises its own ethical dilemmas, explored powerfully in Sharon Kaufman’s “In the Shadow of “Death with Dignity”: Medicine and Cultural Quandaries of the Persistent Vegetative State.” Kaufman argues that the capacity of highly skilled nursing and nutritional support (or the support of a mechanical ventilator in cases of acute respiratory illness) creates a quandary for families and health care providers, as “death” and “life” “collapse into one amorphous category” and the family is forced to “choose” whether to actively “allow” the patient die (by withdrawing nutritional or other physiologic support) or to keep the patient alive by seemingly unnatural means (2000: 70, 72). 7 Truog, Pope, and Jones include a helpful discussion of these points in their, “The 50Year Legacy of the Harvard Report on Brain Death” (2018). Other questions also have been raised about the validity of the criteria for neurological death, since certain states including hypothermia, barbiturate overdose, or ischemic stroke result in cessation of adequate amounts of blood reaching parts of the brain. These events might induce states which mirror criteria for neurological death (cessation of cortical activity or brain stem dysfunction). Also, oculo-vestibular reflexes might be diminished due to pre-existing conditions or use of certain medications. These phenomena might confound the diagnosis, with tragic consequences. Nonetheless, the criteria for declaring death necessitate ruling out these confounding circumstances. 8 Shewmon later questioned whether the criteria for pronouncing brain death were indeed accurate, and suggested that the condition of “brain death” could be confounded with other, perhaps non-irreversible states of neurological malfunction, in which intracranial blood flow was insufficient for synaptic function, but just sufficient enough to keep brain cells alive—an ambiguous state that mirrored cessation of brain function but which Shewmon named a “minimally conscious state”—a state of neurological collapse that was nonetheless not death (Shewmon 2018b: 168–9). 9 Others have argued that the focus on somatic integration is itself a kind of “biological fundamentalism,” when our focus should rather be on irreversible loss of consciousness and moral standing (Veatch 2005: 366). Veatch points out that the criterion of loss of integrative function necessitates accounting for multiple exceptions—cellular activity that “does not count as ­performing integrative function”—a process he calls “selective discarding” (2005: 358). 10 This last rule ensures that, in the case of donation after cardiac death, the heart is monitored for two to five minutes after removal of the ventilator to make sure no spontaneous activity is measured (if the heart restarts in the absence of the ventilator, organ removal cannot occur). In the determination of brain death, the dead donor rule requires that the organ removal does not cause the donor’s death. Death is allowed to occur, but is not caused, by discontinuing the ventilator. 11 Solomon writes: “Because Life is a continuum, evolving over the course of 3.5 billion years, with individual cells replicating, and individual organisms reproducing, there is difficulty determining when one individual organism’s Life truly begins and ends. 
Some of our most heated debates involve attempts to clarify when an individual person's life begins and ends. If we parse anything sufficiently we find it is sometimes impossible to say where the dividing line is. We have often noted that it is easy to distinguish noon from midnight but it is not so clear at dusk whether it is day or night. This may apply to the most fundamental aspect of medicine, knowing when someone is living or dead" (2018: 6). Solomon cites James L. Bernat (2013), "On Noncongruence between Concept and Determination of Death."

12 In 1981 The President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research tried to help develop a universal definition that would be adopted by all states. The Uniform Declaration of Death Act legalized the declaration of death in all states to include this new death (UDDA 1980 https://www.uniformlaws.org/HigherLogic/System/DownloadDocumentFile. ashx?DocumentFileKey=341343fa-1efe-706c-043a-9290fdcfd909). 13 There is only one state in the United States, New Jersey, where family members may be granted an exemption from the declaration of death by irreversible cessation of whole brain function based on religious or ideological beliefs. ­14 Such care is sometimes referred to as medically “futile” care, though this terminology has fallen out of favor. Ethical dilemmas caused by the technical capacity for continuing respiratory support even in the presence of multiple organ system failure or loss of engagement with the environment (through the ventilator and extracorporeal membrane oxygenator (ECMO)) have become commonplace on hospital floors and the struggle to ensure respect for patient autonomy (a fundamental bioethical principle) constitutes the day-to-day business of primary care teams and professional ethical consultants on the hospital floor. 15 A frequent intervention is for diabetes insipidus—the loss of the ability to maintain salt and water balance by the kidneys due to absence of ADH (anti diuretic hormone) made by the brain. ADH must be replaced to maintain bodily integrity for those with whole brain loss. 16 The OPTN site also states that in the United States, twenty patients die a day due to a “lack” of donor organs. 17 A clearinghouse that is operated under contract with the US Department of Health and Human Services. 18 Contestation exists, too, on how to memorialize donors. Some hospitals have created donor gardens, anonymous sites memorializing the collective act of donation. In contrast, surviving kin have created quilts and virtual cemeteries which humanize and personalize donors with images and photographs, including dates of birth and of death (Sharp 2001: 124–6). 19 LaFleur’s argument concerning the fetus is that, because it has not yet entered the social world, in Japanese Buddhist thought it remains a liminal entity and it is not regarded as immoral to return to the fetus to the other world, although some temples communicate to the parents that the fetus is unhappy and thus must be placated with visitations and ceremonial rites (1992: 4–10; see also Hardacre 1997). 20 The committee was composed of fifteen members and five advisors. Thirteen of the fifteen committee members supported the consensual decision. The four dissenters included two committee members and two advisors. ­21 The committee report is published in its entirety in Takashi Tachibana, Nōshi rinchō hihan, (1992: 244–84). 22 Kōsei Iinkai Gijiroku No. 5 Date: March 18, 1997. The full quotation is: Ippō, wagakuni ni oite wa, nōshi wa hito no shi ka, nōshitai kara no zōkiishoku wa mitomerareru no ka ni tsuite, zōki ishokuigai de wa tasukarenai ooku no kanja wa,

semarikuru shi no kage ni obietsutsu, ishoku wo ukeru koto ga dekiru hi wo ichinichi senban no omoi de machiwabinagara munen no namida wo nonde shi wo matte orareru no ga genjō de arimasu.
23 Principle #11 stated that "Countries should strive to achieve self-sufficiency in organ donation and transplantation." The declaration distinguished between "travel for transplantation" and "transplant tourism," saying that travel for transplantation was acceptable so long as it did not rely on trafficking or prevent the destination country from providing for its own citizens.
24 Japan Society for Transplantation, Nihon Ishoku Gakkai, Deeta de miru zōki ishoku, http://www.asas.or.jp/jst/general/number/
25 United Network for Organ Sharing, https://unos.org/news/organ-donation-sets-record-in-2019/
26 The first successful liver transplant with a longer survival was in 1967.
27 Barnard performed a second heart transplant in 1968 when the recipient lived for six months.
28 She talks about organ donation as interpreted in terms of sacrifice, nurturance, and "housework" as well as the "brutal" economic calculations which make women's donation necessary to support male labor and family livelihoods (Crowley-Matoka 2016: 41–5).

Chapter 1

1 Actually, Shinto organizations have said next to nothing on this issue. By contrast, some of Japan's "new religions" have been quite vocal and active in opposition.
2 Editor's note: Medical anthropologists have rightly observed that while the condition of brain death can be demonstrated clinically using scientific measures, it cannot be scientifically "proven" to be death, since what counts as death also relies on social consensus and is embedded in historically received beliefs. Thus, it is difficult to imagine what kind of proof could be offered to satisfy the critics LaFleur mentions. See Lock (2002: 116–20, 374) and Sharp (2006: 16–17, 46–47).
3 Katō (1986) is surely one of the most important books in this field. See also Jonas (1974/1980: 132–40).
4 Editor's note: On loss of "somatic integration" as criterion for justifying brain death, see the Introduction, p. 8.
5 A term used in Stock (2002: 17).
6 In fact, not a few Japanese Buddhist thinkers of the late nineteenth and early twentieth centuries praised Darwin and prized Darwinism.

Chapter 2

1 Editor's note: Lesley Sharp notes that when she began her research in 1991, there was very little support for proposals to increase the supply of organs by allowing for their commoditization, but that as of 2006 discussions involving the American Medical Association and UNOS had increasingly moved in that direction. Though not promoting the sale and purchase of organs, suggestions included subsidizing funerals of organ donors, the provision of tax credits for their families, or charitable donations in the name of the deceased (2006: 18). See also (2007: 56). For a broader discussion of the commodification of organs as a fait accompli, see Lock (2002: 48).
2 Editor's note: Lock (2002: 372) concurs that "aggressive language so often associated with drives to procure more organs" amounts to "a language of entitlement [. . . .]" She argues that the "long and tortuous history of vivisection, cadaver dissection, and commodification of the human body in Europe" gives rise to a "tradition" that "facilitates the argument that organs that are not retrieved go to waste, and that sick people have a right to organs for transplant" (2002: 52–3). Sharp (2006: 17) writes that "[t]he intense desire to prolong life and cure disease has spawned [. . .] an ever-increasing national list of patients for whom transplantation is deemed a basic medical right." See also Ikels (2013: 91).
3 Editor's note: See note 2.
4 This is similar to what Fox and Swazey, as social scientists who have interviewed multiple organ recipients and their families, have discovered, called the "tyranny of the gift," and subjected to a searching analysis (Fox and Swazey 1974: 383 and Fox and Swazey 1992).
5 Editor's note: LaFleur, here, contrasts Lock's treatment of Japanese opposition to organ transplantation in her 2002 book with some of her earliest writing on the topic, which focused more on Japanese mistrust of the medical profession and the ways in which arguments against organ transplantation resonated with nationalist and traditionalist discourses. See, for example, Lock (1996).
6 Editor's note: LaFleur, here, likely refers to the reaction of scientists to Shewmon's presentation at the Second International Conference on Brain Death in Havana (1996), reported on in Greenberg (2001). Sharp (e.g., 2006: 12, 66, 81) and Lock also document a tendency among physicians and organ procurement organizations (OPOs) to provide overly simplistic accounts of brain death and less than candid accounts of what is involved in organ harvesting. According to Lock (2002: 248), "[o]ne doctor recoiled when I suggested that more public education in connection with brain death might be a good thing. In his opinion, education should take place at the bedside when needed, and not be open to distortion and greater confusion in a public forum, as may have been the case in Japan."

Chapter 3

1 Editor's note: Starzl's career is discussed in Lock (2002: 93).
2 Editor's note: Lock reaches the same conclusion regarding the American public's understanding of brain death (2002: 32).
3 Editor's note: Although not going as far as LaFleur to imply a "cover-up," Lock and Sharp both show that, while in private many professionals involved in the practice of organ transplantation frankly admit their ambivalence about the organ-harvesting procedure, or the slipperiness of the concept of brain death, they are loath to engage in a public discussion of such concerns. See, for instance, Sharp (2006: 72, 74, 82, 85) and also Chapter 2, note 6. See also Appendix, note 2.
4 Editor's note: Although not disputing this assertion, Lock is also careful to point out instances in which she feels the Japanese public has been subjected to misinformation about the situation in North America (e.g., 2002: 153–5).
5 Editor's note: On the differences between brain death and PVS, see Lock (2002: 349–50), and Borovoy's Introduction, pp. 6–7.


6

Editor’s note: Morioka articulates a definition of personhood that, unlike that employed by many Western bioethicists or assumed by the general public, is not premised on the presence of consciousness. For a discussion of various attempts to define personhood as it relates to brain death, see Lock (2002: 116–20). 7 Editor’s note: Lock (2002: 165–6) discusses some of these cases. 8 Editor’s note: The original manuscript included a cartoon from an uncredited source of a nurse holding a crying newborn, looking down at a brain-dead woman being sustained by a ventilator. The nurse and unconscious mother both shed tears. 9 Editor’s note: Sharp reports similar attitudes among transplant professionals, who tend to regard family members who harbor moral qualms about having organs harvested from their loved one as ignorant, backward, superstitious—regardless of their level of education or degree of religiosity (2006: 24, 82–6). 10 The way these culture wars are distorting the issues that we, in fact, should be debating has now been superbly described in Fox and Swazey (2008). [Editor’s note: This passage, included in the body of the original manuscript, seems more suitable as an endnote.] 11 Eventually, however, he met his match. Theseus, who by legend was a king of Athens, made Procrustes get into the bed he had made and forced him to “fit” by cutting off the notorious bandit’s own feet and head. [Editor’s note: This passage, included in the body of the original manuscript, seems more suitable as an endnote.]

Chapter 4

1 Editor's note: Sharp describes how organ recipients are encouraged by their surgeons to think of their new organs in purely mechanistic terms, as "pumps" in the case of hearts or "filters" in the case of kidneys, but the tendency to wonder about one's donor often persists. Ostensibly because it is "psychologically dangerous" (but also, perhaps, because it threatens the ideological edifice on which the enterprise of organ transfer is based), such ruminations are strongly discouraged. "Recipients also learn very quickly to speak only privately—or, at the very least, out of earshot of transplant professionals—about any sentiments they feel for their donors." If recipients describe feeling differently because they received a heart of a relative youth, for instance, or otherwise "assign meaning to their organ's origins [they] may well be shuttled down the hall for a psychiatric evaluation, only to emerge with the diagnosis of 'Frankenstein Syndrome' [. . . .]" (Sharp 2007: 21). See also Beidel (1987), Helman (1988; 1992), and Youngner (1990: 1015). Lock describes how more than half of her Japanese informants reported that they thought that "ki"—believed to be a form of vital energy not necessarily related to personal identity—from a donor would be present in donor organs (2002: 226).
2 Editor's note: For more on the various meanings of kokoro, see Traphagan (2002).
3 Editor's note: On the uses of the mirror metaphor in Indian and Chinese Buddhism, see Wayman (1974).
4 Editor's note: In an interview with Lock, Tada claimed to not be "personally" opposed to organ transplantation, but "striving for a 'sound ethics' and a thorough discussion of the complex facets of the brain-death problem [. . . .]" He did, however, pen a modern Noh drama, The Well of Ignorance (Mumyō no i), which is far from a resounding endorsement of the practice. Transporting the debate about organ
transplantation into a mythic past, the play tells of a rich man’s daughter who is saved by a doctor who provides her with the heart of a drowned fisherman. The daughter dies of guilt and the fisherman’s unsettled soul is unable to move on to the next world. The play ends with both spirits begging a Buddhist priest to help end their suffering, but without any further resolution (Lock 2002: 157–9). 5 Editor’s note: A term coined by Arthur Caplan and popularized by Leon Kass (1997). See also Rozin and Fallon (1987) on the psychology of disgust. 6 Editor’s note: Medical professionals will, no doubt, dispute the notion that anyone is “rushed into” a diagnosis of brain death. The work of Sharp and Lock, however, demonstrate that, at least at the time of their writing, varying degrees of pressure were brought to bear on families of potential organ donors to accept the diagnosis and proceed with transplantation, or even, in at least one exceptional case, to allow their child to become an organ recipient (Lock 2002: 233, 243, 310–14; Sharp 2006; 2007). 7 Another instance gets us even closer to questions about revulsion and biotechnology. In Europe, it was once acceptable to castrate young males who, through that procedure, would possess the equivalent of female voices well beyond the years of ordinary puberty. Such castrati were prized and praised within the courts and even within the Catholic Church for centuries. This quasi-medical procedure of enhancement—and it makes sense to call it that!—led also to a lack of testosterone in such males and, because their bone joints did not harden, rib joints grew unusually long, thus giving them not only exceptionally high voices but also extraordinary lung capacity and power. The result was a kind of singing voice, soprano or mezzo-soprano, that easily attracted crowds, patronage, and praise. “Angelic” was a term used.   But in the eighteenth century some persons found such a technological intervention in the ordinary maturation process to be repulsive. The adulation and income of the castrati was no longer able to justify what came to be seen as a ­disgusting level of physical and emotional torture. The yuck factor had surfaced and it was, when interpreted, seen as transmitting a message that was both cognitive and moral. Consciences were quickened. And the disgust intensified—so much so that within the nineteenth century the practice of “enhancing” young males through castration was outlawed. Giovanni Velluti (1780–1861) is thought to have been the last castrato. Society was no longer in awe of a technology earlier thought to be benign, morally neutral, and “enhancing.” In such cases the “emotion” of repulsion was communicating a message; what I have called “moral fear” had cognitive content. And it had ethical import. [Editor’s note: I have taken the liberty of repurposing the above, which originally appeared in the body of the text, as an endnote.] 8 Hans Jonas (1903–93), whose views on a number of related things receive attention later in this book, noted that, especially vis-à-vis new and newly touted biotechnologies, we need to exercise what he called “the heuristics of fear” (Jonas 1984: 26–7, 202–3). A “heuristic” is a tool for opening up something so as to discover what might lie within it. Jonas, like Midgley, was insisting that even our fears in the face of new technologies deserve, at least until proven otherwise, the benefit of respectful scrutiny. [Editor’s note: To avoid redundancy, I have rendered the preceding as an endnote.] 
9 We can take this further. MacIntyre sees a strong connection between the Victorians’ belief that they had put all taboos behind them and the absorption of this belief— nothing more than a belief—into the ethical views of Henry Sidgwick (1838–1900), the proponent of utilitarianism who was selected to write the important entry on ethics for the widely influential ninth edition of the Britannica. Sidgwick completed his essay during 1900, the year of his death. And, notes MacIntyre, the edition of the


Encyclopedia composed at that time would give considerable attention both to taboos as part of earlier, more primitive, societies and to the kind of higher, more rational, ethic expressed in English utilitarianism.   In a later chapter I wish to show how fundamentally the thought of the British utilitarians shaped the concept of morality at the base of biotech promotions in the Anglophone world—even today. [Editor’s note: The preceding originally appeared in the body of the text.]

Chapter 5

1 Editor's note: See Chapter 4, n. 5.
2 On the unreliability of the EEG, see Currie (1978: 182).
3 Tachibana (1992: 93–9) argues that the "three indicators" are not precise as "vital signs," but that they are not infallible has long been recognized—and largely compensated for by social practices that involve continued watchfulness over the assumed corpse.
4 Editor's note: There are, in fact, multiple indicators that can be used in determining brain death, though how many of them are used and in what order is not standardized in North America (Lock 2002: 237; Sharp 2006: 62). On techniques for determining brain death in Japan, see Sasaki (2021).
5 Editor's note: Jonas's critique of the utilitarian motives behind the Harvard determination is discussed in Chapter 9.
6 Editor's note: Lock (2002: 249) reported that of the thirty-two North American doctors she interviewed, only six had signed their organ donor cards. None, she notes, gave convincing reasons as to why not.
7 See de Becker (1997).
8 Editor's note: See Sharp (2006: 72–82) on efforts of organ procurement organization (OPO) representatives to educate families of brain-dead patients, and their reactions to those who refuse to accept brain death as death.
9 Some organ donation advocacy groups in America have recently suggested that such education, in order to offset negative feeling about this procedure, should begin very early, preferably at the elementary level of schooling.
10 Editor's note: As one reporter for The Globe and Mail put it, "there just aren't enough accidents" to address the shortage of organs (Lock 2002: 365).
11 Editor's note: The following passage originally appeared in the body of the text. I have rendered it here as an endnote since a version of it appears in Chapter 3 in the section "Brain Dead in a Procrustean Bed":
  In many ways, however, the most telling proposal within the trajectory of those for pool expansion has been that we should no longer be so squeamish about taking the organs of persons who, although not technically brain dead by the older criteria, are in an apparently "permanent vegetative state" [PVS] and may as well be deemed "dead" because they will never regain consciousness. The proposal here is somewhat analogous to a driver, having just recognized that their vehicle's brakes are broken, using that new knowledge to justify further acceleration by stepping down harder on the gas pedal. Now that the brain death notion has collapsed, we may as well take steps—since organs are needed—to redefine death yet once again and this time as inclusive of persons whose physical state is, if anything, even more ambiguous and problematic. Thus in a 1997 essay entitled "Is It Time to Abandon Brain Death?"


Robert D. Truog reviews the proofs for the conceptual messiness of this notion, rejects “the cardiorespiratory standard” as foolproof but bad because it would “make it virtually impossible to obtain vital organs,” and reaches out to suggest putting in place a policy of taking out the organs—and thereby terminating the life—of persons who have suffered the loss of all consciousness. And this, he predicts at the point of the bottom line, would mean that “the pool of potential donors may be substantially increased” (Truog 1997). That the brain-death notion is in deep trouble can now be admitted by even those who had vested interest in it—as long as that “trouble” can itself now be used to rationalize an ever more bold, potentially more troublesome step. This method might be called “resolution by enhanced recklessness.” And it is instructive to go back only a few years to note what was then being used to justify the now problematic brain-death notion—namely, as in the claim by David Lamb that: [In contrast to the brain dead,] patients in a vegetative state are not dead. No culture in the world would consider them fit for burial, organ removal, experimentation, etc. (Lamb 1996: 6) What Lamb predicts as virtually inconceivable in 1996 becomes for Truog in 1997 not only conceivable but something recommended for adoption as a new standard within a whole society. 12 I am grateful to Pernick for bringing his important research to my attention.

Chapter 6

1 In an observation that forces us to see that things were more complex than the simple "right/wrong" conclusions to which Sugita came, Nathan Sivin, the West's leading authority on Chinese science, writes: "[W]hat we learn about the Chinese conception is not anatomical but physiological and pathological [. . .] not what the viscera are but what they do in health and sickness" (Sivin 1987: 120–1).
2 See Kuriyama (1992: 21–43).
3 Editor's note: See also the work of Emiko Ohnuki-Tierney (1984).
4 Editor's note: The following sentence—dealing as it does with medicine as practiced in China—seems to be somewhat of a non-sequitur, so I have relegated it to the endnotes. Nevertheless, as indicated by Sivin, theory is not unimportant in Chinese medicine. "Some doctors with little education may have had little theory at their disposal, but those who knew it used it, and those who wrote on it did so with therapy in mind. That point is unmistakable in the literature, old and new" (Sivin 1987: xxii).
5 Editor's note: Here, LaFleur included a reference to Lock (1980: 91) where she provides some examples of traditional Japanese therapies, such as moxibustion, that are indeed "[d]ramatic, radical, and painful."
6 A copy of both reports is included as an appendix to Umehara (1992: 263–312).
7 Editor's note: For an account of some of Umehara's more inflammatory statements, see Lock (2002: 155).
8 This book does not—aside from its title—directly take on Sugita as a past intellectual and critique his position. Umehara's book commences with a dialogue between himself and Shunpei Ueyama, his mentor, about the need to give public recognition to the continuity and contemporary relevance of the tradition of Japanese thought and religion.
9 Morioka (1994: 156) attributes its first articulation by a Japanese to Shunpei Yonemoto in a book on bioethics published in 1985.
10 Editor's note: The works of Lock (2002) and Sharp (2006) highlight a similar "philosophical indifference" among many of the medical professionals they interviewed. For instance, in Lock's discussions with North American intensivists, no one questioned the principle that "absence of person is based on irreversible loss of brain function," with a few asserting that at that stage the "soul" or "spirit" has "departed" (2002: 249). "Personhood," "soul," and "spirit" are, of course, metaphysical categories that cannot be the subject of medical diagnosis.
11 Nakagawa (1996b: 78–92); Washida (1988); Yōrō (1989).
12 Editor's note: Here LaFleur included a reference to Williams (1985: 34–5), where Socrates, Plato, and Aristotle's views of human nature are discussed.
13 It is possible to argue that for both Jews and Christians the notion that God created humans in God's own image means the net is cast very widely—thus disallowing any exclusion of even the very deformed or persons with extreme dementia (Dorff 1998: 18–20; Ramsey 1970; Post 1995: 32–5). However, this has not, especially during the twentieth century, prohibited both Jews and Christians from trying to locate a more specific qualifying feature—not infrequently that of rationality or consciousness. As Dorff notes, although the Torah declares we are made in God's image, "exactly which feature of the human being reflects this divine image is a matter of debate within the tradition" (Dorff 1998: 19, emphasis mine). Is it perhaps the case that to begin to look for the feature is to put the Torah's declaration in jeopardy?
14 Wittgenstein (1958/1980: 17).
15 Editor's note: Here LaFleur left instructions for himself to provide sources expanding on this theme. Unfortunately, he did not name the sources he had in mind. As an alternative, I would direct the reader to his 1998 essay "Body," where he puts forth a similar proposal.
16 Jonas might have profited from the attempt made here to use Wittgenstein's insights about "family resemblance." It would appear, however, that he continued to see Wittgenstein in terms of the latter's early connections with the Vienna Circle and not, therefore, acceptable to Jonas.

Chapter 7

1 The Japan Times, November 15, 1998.
2 See also Dorff (1996: 168–93).
3 Editor's note: Although Bleich favors heavy restrictions on transplants, he is not completely opposed to the practice. See Dorff (1998: 226–7 and n. 10).
4 See Brown (1988 and 1981).
5 In 1967 during a visit to Seoul, I was shown a hillside burial site by a Korean Christian. With obvious pride he commented that such sites of interment would not be so easily found in Japan, where Buddhist cremation was still the common practice. He went on to cite this as evidence that Korea was becoming a Christian country. Two decades later, however, on another visit to Korea I learned that the extensive usage of prime land for burials had come under public criticism as ecologically unwise.
6 Editor's note: LaFleur alludes here to the fact that in Japan providing ritual mediation in the event of the death of a loved one has been and continues to be the main point of contact between Buddhist institutions and the laity, as well as their main source of revenue from donations. See Stone and Walter (2008).


7 To Nygren the older, largely Catholic, notion of love as a kind of eros directed to God had to be replaced with a more specifically New Testament kind of love. He held that agapé was very different and virtually an act of the will alone. But a Catholic scholar would later comment: "Such love has its place but Christian life would be impoverished if this love were its exclusive ideal" (Vacek and Collins 1994: 231). More recently Joseph Runzo shows how religion is impoverished if the explicitly erotic element is denied; see his "Eros and Meaning in Life and Religion" (2000: 186–201).
8 Editor's note: The report can be found in Kuhse and Singer (1999: 287–91).
9 Moreover: "Our situation ethic frankly joins forces with Mill; no rivalry here. We chose what is most 'useful' for the most people" (Fletcher 1966: 115).
10 Already in medieval Japan the paradigm of love was the affective one of the parent–child relationship. See LaFleur (2000: 337–48).
11 Ogiwara and others must be seen as responding to the quasi-theological American debate about the possibility of agapaic love.
12 Very recent work in advanced neurology strongly supports this view. See, for instance, Damasio (1994).

Chapter 8

1 Editor's note: Lock (2002: 116–20) presents the work of Western bioethicists attentive to the philosophical issues underlying discussions of brain death.
2 Of course, Joseph Fletcher was merely being a good Benthamite when he declared that Christians now need an "agapaic calculus" and, if they use that calculus, they will see that buried or cremated organs are wasted organs, whereas donated ones will give longer life—assumed to grant pleasure—to someone else. And in America it is very difficult today to challenge that assumption. Invoking the argument from utility jumps forward from procedure to procedure. Bentham's reach is a long one. And Fletcher is always somehow there in the shadows. Significantly, in a book titled Who's Afraid of Human Cloning?—which argues for giving parents full "reproductive liberty" to originate a child even by "nuclear somatic transfer" (i.e., full cloning)—Gregory Pence holds that we should use a technology if we have it. Not to use what we have would be wasteful. Pence acknowledges his own intellectual parentage and wraps up his argument with a teasing invitation: "Call me Joe Fletcher's clone" (Pence 1998: 175). [Editor's note: Although the preceding paragraph originally appeared after the quotation from Nishida, it seems misplaced, given that this section focuses on Japanese critics of utilitarianism.]
3 "The 731 Unit research included the plague, cholera, typhoid, frostbite, and gas gangrene. Their successes included a defoliation bacilli bomb that blighted an area of 50 square kilometers [. . . .] When needed for the experiment the human guinea pigs were placed in laboratory rooms and injected with bacteria to test a germ's potency [. . . .] The frostbite tests also used human beings to test exposure to freezing temperatures" (Ienaga 1978: 188). See also Harris (1994) and Tsuneishi (1982).
4 Editor's note: Sharp (2006: 24–5, 92–9) writes at length on the economic forces shaping the incentives of those involved in the procurement of organs, including the worrisome "corporate restructuring of organ transfer." Although she is careful to note that the professionals she worked with were far from callous:


  Of special concern to transplant and procurement professionals is the constant demand from superiors to increase the number of successful outcomes. A team of transplant surgeons can qualify for transplant unit status under hospital and UNOS guidelines only by performing a minimum number of surgeries per year; even greater numbers are required to merit additional funds and staffing necessary for experimental research. Throughout the nation, hospitals proclaim their success through the actual number of transplant surgeries they perform, and certain organs, such as hearts and livers, bear more cultural capital than do others, such as kidneys. Procurement specialists in turn are under great pressure to increase their number of successful donations. It is not enough, however, simply to identify donors or acquire consent; only the actual number of organs transplanted in the end matters. [....] Such demands inevitably facilitate the rapid dehumanization of patients and undermine staff morale. Under the worst circumstances, sloppy work or employee corruption may result. [....] As organ transplantation shifts to the corporate model, organ transfer is driven increasingly by market forces that discourage health professionals’ compassionate and courageous acts. This in turn affects the social value of the potential donor, who may be transformed even more rapidly from a dying patient to a body capable of supplying society with highly coveted, ­reusable parts. (Sharp 2006: 25) 5

6

7

8

Washida makes use, in this work, of Jacques Attali (1979). Although it is fairly common in Japanese writings to see similarities between transplantation and cannibalism, to my knowledge the only American writers willing to suggest such have been Leon Kass (1992) and, in special contexts, Renée Fox. Already in the seventeenth century Pascal’s famous “wager,” in fact, had put the calculation of likely consequences—heaven or hell in Pascal’s case—right smack into how, if we are wise, we decide whether or not God exists. See Schneewind (1977), Hacking (1972), and Stout (1981: 56). [Editor’s note: This aside seemed more suitable as an endnote.] This too is where Japan will tend to differ. As noted in the [third] chapter, close to the outset of Japanese questioning the morality of transplants Michi Nakajima as a nurse raised precisely this question about a wide truth gap between what some in the medical profession were doing and what was being told to the public. She had had no need to read Bernard Williams about Government House utilitarianism in order to recognize something was being concealed in the rhetoric about universal love being translated, via transplantation, into advanced forms of medicine. [Editor’s note: I have repurposed this paragraph as an endnote. On the discursive and “ideological disjunctions” between medical professionals, organ procurement organizations (OPOs) and donor families, see Sharp (2006: 14, 40, 77–8, 86, 96).] At least since the general public in Japan had come to be aware of these clandestine wartime experiments, there has been something of a passion to monitor medical research—basically to make sure that there be no repetition of comparable procedures or any facile rationalization of them. See LaFleur, Böhme, and Shimazono eds., (2007). [Editor’s note: Since it is somewhat redundant, I have relegated this observation to the endnotes.]


Chapter 9

1 Editor's note: Writing in 2006, Sharp claimed that this interpretation of the motivations behind the Harvard report was by then widely accepted among medical historians and ethicists (2006: 16).
2 Editor's note: See Chapter 5.
3 Editor's note: For his views on organ transplantation, see Ramsey (1970: 198–238).
4 In Japan, spotting this in Jonas has even led to public recognition via an op-ed on him in one of Japan's mass-circulation newspapers (Kanamori 2001). [Editor's note: this appeared in the body of the original, but seems more worthy of an endnote.]
5 Editor's note: Here LaFleur also refers to a work by Tetsuhiko Shinagawa comparing "the ethics of responsibility formulated by Jonas with the ethics of care articulated by Carol Gilligan as a psychologist (Shinagawa 2007)" but the sentence is garbled, and it is unclear what he wants to say about it, other than to list it as another example of a Japanese philosopher inspired by Jonas.

Chapter 10 1

Editor’s note: For an overview of Japanese attitudes toward scientific medicine, see Norbeck and Lock (1987). 2 That Japan already, well before the Russo-Japanese War, had a relatively high level of sanitation has now been amply demonstrated by Susan B. Hanley. Her excellent study of physical well-being in Japan during the period 1600–1867 includes evidence supportive of a claim that “sanitation in Japan through the mid-nineteenth century was as good as or better than that in the West” (Hanley 1997: 105). 3 His irony is intentional. We see this, for instance, when he scores what he sees as “criminal incompetency of [some Americans] in executive position, while our friends, the Japanese, just awakening from so-called barbarism, an Oriental [...] race, which we have hitherto patronised, has shown us that with proper foresight, system, and skill, men need not, in appalling numbers, rot and die horribly in the trenches from disease” (Seaman 1906: 12).   Seaman obviously intends in this way to rob the stereotypes of their usual power to conceal ignorance. His use of the phrase “barbarism” too is with irony. In his eleventh chapter, “The History of Medical Science in Japan,” he argues, using information surprisingly good for his time, that intellectual curiosity and a readiness to pursue scientific inquiry had long been the dominant pattern in Japanese history. 4 I am grateful to Professor Richard Gardner of Sophia University for bringing this to my attention. 5 “[A]n official of [Japan’s] Health and Welfare Ministry admitted before the Japanese Diet in 1982 that Ishii had been given a retirement pension” (Cook and Cook 1992: 158). 6 The subject of how perceptions of unnecessary “waste” will enter into the extension of programs for medical research today—and easily sully the ethics involved—will be the more explicit concern of a later chapter of this book. [Editor’s note: It is unclear whether LaFleur refers here to Chapter 8, or to a new chapter he had planned to write.] 7 Potentially troublesome statements are made, however, concerning Dr. Stafford Warren, who, while chief of radiology at Rochester’s Strong Memorial Hospital, had “led an experiment in which eleven Rochester patients were injected with forty times the amount of plutonium the average person could expect in a lifetime. The subjects were


not informed of their role in this experiment, which was being conducted to test human tolerance of the highly carcinogenic plutonium 239, one of the most toxic substances known, and one of the main ingredients in the atomic bomb” (Cartwright 1995: 136). In 1943, Dr. Warren was appointed chief medical advisor the Manhattan Project. 8 As might be expected, there has been interest within Japan in the exposure, after a long period of concealment, of the horrendous Tuskegee experiments (Hoshino 1998). Parallels are not difficult to find. [Editor’s note: This was originally in the body of the text, but seems intended as an endnote.] 9 See LaFleur (1992: 151–5). 10 Editor’s note: Writing in 2007, Sharp remarked that during more than a decade of research on organ transplantation, “the language and imagery of war are increasingly pervasive.” Organ procurement is likened to a battlefield. Once described as “stars,” organ donors have come to be referred to first as “heroes,” and then as “national heroes” (Sharp 2007: 109). 11 Editor’s note: In 2006, Sharp observed that the “death is the enemy” attitude had, if anything, gotten more intense since Fox and Swazey described it in 1992 (2006: 243).

Chapter 11 1 2

3 4 5

6 7 ­8 9

See the arguments presented in Swearer (1989). One of the most egregious problems in the supposed universalizing of AngloAmerican bioethics is that those who do it were trained at a time when budding bioethicists did analytic philosophy, a mode of philosophy that was so wrapped up in the notion that the Anglo-American modality had attained a culture-free highpoint that any suggestion of the need to pay attention to philosophies originating elsewhere (even including continental Europe) was dismissed in an a priori fashion. Editor’s note: For another perspective on the history of Buddhist engagement with immortality medicine in premodern Japan, see Drott (2015). Becoming the norm did not mean people ceased hoping for physical immortality or now showed total skepticism concerning reports of individuals who achieved it. Ironically, Kūkai himself became an object of such credulity (LaFleur 2002b). Indications that the cyclosporine does not entirely solve the problem surface in L’Intrus by the French philosopher Jean-Luc Nancy (2000). Nancy, himself a recipient of a transplanted heart, reflects on how he felt invaded by an alien being. A review of the book by Gernot Böhme (2002) shows that Nancy’s way of “solving” this problem was by appeal to the standard body/mind split fostered by Descartes. The degree to which there was on the part of Japanese Buddhist intellectuals a perception of agreement between Buddhist and modern scientific cosmologies and ontologies is amply documented in essays in Mineshima (1982). That analytic philosophy conveniently sidestepped an entire range of culturally significant issues is apparent in McCumber (2001). Although the term “ontology” has occasionally been used in bioethics as a rubric for discussions of what qualifies as personhood (e.g., in Gervais 1986), I here restrict its use to the basic question of existence and nonexistence. Although today’s students of China are less ready than those of an earlier generation to attribute all such differences to a clean disjunction between a “philosophical Daoism” and a “religious Daoism,” we can still note considerable differences between the values expressed in different texts. It would make no sense to try to force a coherence and consistency upon their diversity.


10 European missionaries entering Japan in the sixteenth century anticipated that the absence there of Christianity would have resulted in a devaluation of life and of the family. Finding that the real situation was, in fact, otherwise, they were confused. See LaFleur (1992: especially 40ff). 11 See, for instance, Hall (2000) and Olshansky and Carnes (2001).

Conclusion 1 2

3

4 5 6 7 8 9

For the Japanese edition, see LaFleur, Böhme, Shimazono (2008). We both also participated in a symposium on religious studies in Japan and the United States held at Sophia University in 2009. The results of this symposium were published, with LaFleur contributing an essay to the conference volume titled, “Looking back at the past and looking to the future—from Saigyō to bioethics.” See Gardner and Murakami (2015). However, as mentioned above, under this law, the practice of organ transplantation was subject to the consent of the individual and progressed quite slowly. There were only four cases in 1998, five cases in 1999, and the total number was eighty-six before the 2010 revision. Discussed, for instance, in Tateiwa (2000) and Morioka (2001, see esp. p. 190). A notable example is the Japanese government’s support of iPS research—exemplified by Professor Hiromitsu Nakauchi’s work on human-animal chimeras. In the 2010s, government-led public discussion declined even more. See also Shimazono (2008). See Shimazono (2006: 270). Umehara’s minority report also implicitly takes a stance in opposition to “modernism.” Although the report presents a somewhat caricatured version of the pro-transplantation position, it does present its own particular ideological position, and it is not one I would associate with Buddhism. Umehara instead argued that his position resonated with an animism purported to have predated the formation of civilized society—a perspective that views this world and the world of the dead as adjacent to one another. See Umehara (1991).

Appendix 1

2

Where possible, the present volume includes complete chapters as they appeared in one or the other draft. Chapters 1, 5, 6, 7, 10, and 11 are renumbered but reproduced largely unchanged from the first draft. I have also included, in full, two chapters from the second draft that had no counterpart in draft one: Chapters 2 and 9. For Chapters 3, 4, and 8, where both drafts included nearly identical chapters, I opted to include what I deemed to be the more finalized of the two. See Chapter 2, note 6. Sharp sees the public discussion of brain death, organ harvesting, and transplantation as overly “compartmentalized,” thus not giving parties involved a full picture of what is entailed by what she terms “organ transfer” (2007:10). She notes that OPO representative’s description of the donation process “fails to mention a number of procedures essential to procurement work” (2006: 66). She also discusses the disparate vocabularies employed among procurement


3 4

5

6


professionals, surgeons, donor kin, and recipients when discussing the brain dead and their organs (2007: 20–2). Lock (2002: 130–1) provides a detailed account of the “Wada incident,” with special attention to the emotional toll the case had on the families of both the donor and the recipient. Sharp acknowledges that this is a lucrative field and a burgeoning industry. Though no organs are bought and sold, “organ transplantation is among the most profitable medical specialties in this country” (2007: 17). However, she cautions that it would be a gross oversimplification to assume that the desire for transplantable organs is driven solely by “corporate greed” (2006: 99). In fact, it is not uncommon for brain-dead patients to display spontaneous movements due to residual spinal cord reflexes. Sharp notes that for this reason, at least when she was writing in 2006, brain-dead donors were routinely administered anesthesia during organ harvesting to relax the body (and provide reassurance to the surgical team in case they harbored any doubts about whether or not the neurologically dead still experience some form of pain) (2006: 67, see also 86–91). In spite of the entreaties of some doctors, LaFleur notes that word of the Havana symposium and Dr. Shewmon’s presentation eventually reached the public. He refers to an article in the New Yorker, titled “Is There Really Such a Thing as Brain Death?” (Greenberg 2001); an article by Michael Potts, titled “A Requiem for Whole Brain Death” (2001); the work of John Lizza (2006); and an issue of the Hastings Center Report.

­REFERENCES Abe, Akiyoshi (2021), “Treatment with Living Donor Transplant: A Tall Hurdle. (Seitai ishoku de chiryō: takai haadoru),” Asahi shinbun, May 29. Abe, Masao (1985), Zen and Western Thought, ed. William R. LaFleur, Honolulu: University of Hawai’i Press. Abe, Tomoko (2000), “Bunka toshite no shi no kaitai to ningen kaitai wo maneku ‘nōshi · zōki ishoku,’” in Makoto Kondō, et al. (eds.), Watashi wa zōki wo teikyō shinai, 26–56, Tōkyō: Yōsensha. Aita, Kaoruko, Hiroaki Miyata, Miyako Takahashi and Ichirō Kai (2007), “Japanese Physicians’ Practice of Withholding and Withdrawing Mechanical Ventilation and Artificial Nutrition and Hydration from Older Adults with Very Severe Stroke,” Archives of Gerontology and Geriatrics, 46: 263–72. Akutagawa, Ryūnosuke (1959), Rashomon, and Other Stories, New York: Bantam Books. Amagasa, Keisuke (1992), Nōshi wa misshitsu satsujin de aru, Tokyo: Nesuko. Ambagtsheer, F., J. de Jong, W. M. Bramer and W. Weimar (2016), “On Patients Who Purchase Organ Transplants Abroad,” American Journal of Transplantation, 16 (10): 2800–15. Ames, Roger T. (1998), “Death as Transformation in Classical Daoism,” in Jeff Malpas and Robert C. Solomon (eds.), Death and Philosophy, 57–70, London and New York: Routledge. Andrews, Lori and Dorothy Nelkin (2001), Body Bazaar: The Market for Human Tissue in the Biotechnology Age, New York: Crown Publishing Group. Anstötz, Christoph (1993), “Should a Brain-Dead Pregnant Woman Carry Her Child to Full Term? The Case of the ‘Erlanger Baby,’” Bioethics, 7 (4): 340–50. Ariés, Philippe (1981), The Hour of Our Death, trans. Helen Weaver, New York: Vintage Books. Armstrong-Hough, Mari (2018), Biomedicalization and the Practice of Culture: Globalization and Type 2 Diabetes, Chapel Hill: University of North Carolina Press. Arnold, Robert M. and Stuart J. Youngner (1993), “The Dead Donor Rule: Should We Stretch It, Bend It, or Abandon It?” Kennedy Institute of Ethics Journal, 3 (2): 263–78. Attali, Jacques (1979), L’ordre Cannibale: Vie et Mort de la Médecine, Paris: Editions Grasset & Fasquelle. Awaya, Tsuyoshi (1998), “Zōki ishoku to hito kakumei—bunmei toshite no iryō tekunorojī no yukue,” in Takao Saitō and Arifumi Kōyama (eds.), Seimeirinrigaku kōgi—igaku · iryō· ni nani ga towareteiru ka, 71–97, Tokyo: Nihon hyōronsha. Bambrough, Renford (1968), “Universals and Family Resemblances,” in George Pitcher (ed.), Wittgenstein: The Philosophical Investigations, 186–204, London: MacMillan. Becchi, Paolo (2002), “Technology, Medicine, and Ethics in Hans Jonas,” Graduate Faculty Philosophy Journal, 23 (2): 155–82.


Beck-Gernsheim, Elisabeth (1991/1995), The Social Implications of Bioengineering, trans. Laimdota Mazzarins, Atlantic Highlands: Humanities Press. ­Beidel, Deborah C. (1987), “Psychological Factors in Organ Transplantation,” Clinical Psychology Review, 7: 677–94. Bellah, Robert Neelly, Richard Madsen, William M. Sullivan, et al. (1985), Habits of the Heart: Individualism and Commitment American Life, Berkeley: University of California Press. Benedict, Ruth (1946/2005), The Chrysanthemum and the Sword: Patterns of Japanese Culture, Rutland, VT: Tuttle Publishing. Bernat, James L. (2013), “On Noncongruence between Concept and Determination of Death,” Hastings Center Report, 43 (6): 25–33. Bernat, James L., Charles M. Culver, and Bernard Gert (1981), “On the Definition and Criterion of Death,” Annals of Internal Medicine, 94 (3): 389–94. Bernstein, Richard J., et al. eds. (2001), Special Issue: “The Philosophy of Hans Jonas,” Graduate Faculty Philosophy Journal, 23 (1). Bleich, J. David (1981), Judaism and Healing, New York: Ktav. Böhme, Gernot (1992), Coping with Science, Boulder: Westview Press. Böhme, Gernot (2002), “Mein Herz: Probleme einer Ethik des Leibes am Beispiel der Transplantationsmedizin,” Zeitschrift für Didaktik der Philosophie und Ethik, 1. Borovoy, Amy (2011), “Beyond Choice: A New Framework for Abortion?” Dissent, Fall. Borovoy, Amy (2015), “Japanese and American Public Health Approaches to Preventing Population Weight Gain: A Role for Paternalism?” with Christina Roberto, Social Science & Medicine, 143: 62–70. Borovoy, Amy (2017a), “Between Biopolitical Governance and Care: Rethinking Health, Selfhood, and Social Welfare in East Asia,” with Li Zhang, Medical Anthropology, 36 (1): 1–5. Borovoy, Amy (2017b), “Japan’s Public Health Paradigm: Governmentality and the Containment of Harmful Behavior,” Medical Anthropology, 36 (1): 32–46. Brencick, Janice M. and Glenn A. Webster (2000), Philosophy of Nursing: A New Vision for Health Care, Albany: State University of New York Press. Brown, Peter Robert Lamont (1981), The Cult of the Saints, Chicago: University of Chicago Press. Brown, Peter Robert Lamont (1988), The Body and Society: Men, Women, and Sexual Renunciation in Early Christianity, New York: Columbia University Press. Callahan, Daniel (1999), “The Hastings Center and the Early Years of Bioethics,” Kennedy Institute of Ethics Journal, 9 (1): 53–71. Campbell, Courtney (1999), “Fundamentals of Life and Death: Christian Fundamentalism and Medical Science,” in Stuart J. Youngner, Robert M. Arnold and Renie Schapiro (eds.), The Definition of Death: Contemporary Controversies, 194–209, Baltimore and London: The Johns Hopkins University Press. Campbell, John Creighton and Naoki Ikegami (1998), The Art of Balance in Health Policy: Maintaining Japan’s Low-Cost, Egalitarian System, Cambridge: Cambridge University Press. Caplan, Arthur L. and Daniel H. Coelho, eds. (1998), The Ethics of Organ Transplants: The Current Debate, New York: Prometheus Books. Cartwright, Lisa (1995), Screening the Body: Tracing Medicine’s Visual Culture, Minneapolis: University of Minnesota Press.


Chiba, Norikazu and Asako Kamihigashi (2020), Rupo “inochi no senbetsu”—dare ga yowamono wo kirisuteru no ka? Tōkyō: Bungei shunjū. ­Cook, Haruko Taya and Theodore F. Cook (1992), Japan at War: An Oral History, New York: New Press. Cortezzi, Hugh (1985), Dr. Willis in Japan: British Medical Pioneer 1862–1877, London: The Athlone Press. Couzin, Jennifer (2002), “Study of Brain Dead Sparks Debate,” Science, 295 (5558): 1210–11. Crowley-Matoka, Megan (2016), Domesticating Organ Transplant: Familial Sacrifice and National Aspiration in Mexico, Durham: Duke University Press. Currie, Bethia S. (1978), “The Redefinition of Death,” in Stuart F. Spicker (ed.), Speicial issue: “Organism, Medicine, and Metaphysics: Essays in Honor of Hans Jonas on his 75th Birthday, May 10, 1978,” Philosophy and Medicine, 7: 177–197. Damasio, Antonio R. (1994), Descartes’ Error: Emotion, Reason, and the Human Brain, New York: Avon Books. de Becker, Gavin (1997), The Gift of Fear: Survival Signals That Protect Us from Violence, New York: Dell Publishing. “A Definition of Irreversible Coma. Report of the Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death” (1968), Journal of the American Medical Association, 205 (6): 337–40. Dennett, Daniel C. (1991), Consciousness Explained, New York: Little, Brown and Company. Descartes, René (1637/1997), “Discourse on the Method,” in Elisabeth S. Haldane and G. R. T. Ross (trans.), Descartes: Key Philosophical Writings, Hertfordshire: Wordsworth. Descartes, René (1960), Meditations on First Philosophy, trans. Lawrence LaFleur, Indianapolis: Bobbs-Merrill. de Sousa, Ronald (1987/1991), The Rationality of Emotion, Cambridge: The MIT Press. Doi, Kenji (2000), “Rinjinai no bimei no moto ni—nōshi ishoku ni taisuru Kirisutokyōteki shiten kara no mondai teiki,” Gendai shisō, 28 (10): 242–63. Dorff, Elliot N. (1996), “Choosing Life: Aspects of Judaism Affecting Organ Transplantation,” in Stuart J. Youngner, Renée C. Fox and Laurence J. O’Connell (eds.), Organ Transplantation: Meanings and Realities, 168–93, Madison: University of Wisconsin Press. Dorff, Elliot N. (1998), Matters of Life and Death: A Jewish Approach to Modern Medical Ethics, Philadelphia and Jerusalem: The Jewish Publication Society. Drott, Edward (2015), “‘To Tread on High Clouds’: Dreams of Eternal Youth in Early Japan,” Japanese Journal of Religious Studies, 42 (2): 275–317. Drott, Edward (2018),“Aging Bodies, Minds and Selves: Representations of Senile Dementia in Japanese Film,” Journal of Aging Studies, 47: 10–23. Dutchen, Stephanie (2017), “A Fine Line: Is It Time to Reconsider the Dead Donor Rule?” Harvard Medicine Magazine, January 6. Endō, Shūsaku (1958), Umi to dokuyaku, Tōkyō: Shinchō bunko. Endo, Shusaku (1973), The Sea and Poison, trans. Peter Owen, Tokyo: Charles E. Tuttle Company. Feldman, Eric A. (2000), The Ritual of Rights in Japan: Law, Society, and Health Policy, Cambridge: Cambridge University Press. Fletcher, Joseph (1954), Morals and Medicine, Princeton: Princeton University Press. Fletcher, Joseph (1966), Situation Ethics: The New Morality, Louisville: Westminster John Knox Press.


Fletcher, Joseph (1969), “Our Shameful Waste of Human Tissue: An Ethical Problem for the Living and the Dead,” in Donald R. Cutler (ed.), The Religious Situation: 1969, 223–52, Boston: Beacon Press.
Fletcher, Joseph (1988), The Ethics of Genetic Control: Ending Reproductive Roulette, New York: Prometheus Books.
Fox, Maggie (1999), “Patients Offer Selves as Living Cases for Research,” Reuters, July 12.
Fox, Renée C. (1988), Essays in Medical Sociology: Journeys into the Field, New Brunswick, NJ: Transaction Books.
Fox, Renée C. and Judith P. Swazey (1974), The Courage to Fail: A Social View of Organ Transplants and Dialysis, Chicago: University of Chicago Press.
Fox, Renée C. and Judith P. Swazey (1992), Spare Parts: Organ Replacement in American Society, New York: Oxford University Press.
Fox, Renée C. and Judith P. Swazey (2008), Observing Bioethics, New York: Oxford University Press.
Futterman, Laurie G. (1998), “Presumed Consent: The Solution to the Critical Organ Donor Shortage?” in Arthur L. Caplan and Daniel H. Coelho (eds.), The Ethics of Organ Transplants: The Current Debate, 161–72, New York: Prometheus Books.
Gardner, Richard and Tatsuo Murakami, eds. (2015), Between Religion and Religious Studies: A Prospect for a New Community, Tokyo: Sophia University Press.
Gaylin, Willard (1974), “Harvesting the Dead: The Potential for Recycling Human Bodies,” Harper’s, 249 (1492): 23–30.
Gervais, Karen Grandstrand (1986), Redefining Death, New Haven: Yale University Press.
Gold, Hal (1996), Unit 731: Testimony, Tokyo: Yenbooks.
Golub, Edward S. (1994), The Limits of Medicine: How Science Shapes Our Hope for the Cure, Chicago and London: The University of Chicago Press.
Goodman, Grant K. (1986), Japan: The Dutch Experience, London: The Athlone Press.
Goodwin, Michele (2006), Black Markets: The Supply and Demand of Body Parts, Cambridge: Cambridge University Press.
Greenberg, Gary (2001), “As Good as Dead: Is There Really Such a Thing as Brain Death?” The New Yorker, August 5.
Guillemin, Jeanne (1998), “Bioethics and the Coming of the Corporation to Medicine,” in Janardan Subedi and Raymond DeVries (eds.), Bioethics and Society: Constructing the Ethical Enterprise, 60–77, Upper Saddle River: Prentice Hall.
Hacking, Ian (1972), “The Logic of Pascal’s Wager,” American Philosophical Quarterly, 9 (2): 186–92.
Hacking, Ian (1990), The Taming of Chance, Cambridge: Cambridge University Press.
Hall, Stephen S. (2000), “The Recycled Generation,” The New York Times Magazine, January 20.
Hanley, Susan B. (1997), Everyday Things in Premodern Japan: The Hidden Legacy of Material Culture, Berkeley: University of California Press.
Hardacre, Helen (1997), Marketing the Menacing Fetus in Japan, Berkeley: University of California Press.
Harris, Sheldon H. (1994), Factories of Death: Japanese Biological Warfare, 1932–45, and the American Cover-Up, London: Routledge.
Havens, Thomas R. H. (1970), Nishi Amane and Modern Japanese Thought, Princeton: Princeton University Press.
Hayashi, Shōha (2020), “Waga kuni ni okeru zōki ishoku no genjō—JOT no shimei to yakuwari,” INTENSIVIST, 12 (3): 459–68.
Helman, Cecil (1988), “Dr. Frankenstein and the Industrial Body,” Anthropology Today, 4 (3): 14–16.


Helman, Cecil (1992), The Body of Frankenstein’s Monster: Essays in Myth and Medicine, New York: Norton.
Hiro, Sachiya (1992), Asahi shinbun, January 30.
Hoff, Johannes and Jürgen in der Schmitten (1994), “Kritik der ‘Hirntod’-Konzeption. Plädoyer für ein menschenwürdiges Todeskriterium,” in Johannes Hoff and Jürgen in der Schmitten (eds.), Wann ist der Mensch tot?: Organverpflanzung und “Hirntod”-Kriterium, 153–252, Reinbek bei Hamburg: Rowohlt.
Hosaka, Masayasu (1997), Daigaku igakubu no kiki, Tōkyō: Kōdansha.
Hoshino, Kazumasa (1998), “Minshuka no hōri iryō no baai—Tasukigī baidoku jintai jikken to kokujin higaisha e no daitōryō no shazai,” Toki no hōrei, 1570: 45–51.
Ienaga, Saburō (1978), The Pacific War, 1931–1945: A Critical Perspective on Japan’s Role in World War II, trans. Frank Baldwin, New York: Pantheon Books.
Ikebe, Yoshinori (1991), Igaku wo tetsugaku suru—igaku, kono mondai naru mono, Kyōto: Sekai shisōsha.
Ikels, Charlotte (2013), “The Anthropology of Organ Transplantation,” The Annual Review of Anthropology, 42: 89–102.
Imai, Michio (1999), Seimei rinrigaku nyūmon, Tōkyō: Sangyō tosho.
Iryō jinruigaku kenkyūkai (1992), Bunka genshō to shite no iryō—“i to jidai” wo yomitoku kīwādo shū, Tōkyō: Medica shuppan.
Iseda, Tetsuji and Noriaki Katagi (2006), Seimei rinrigaku to kōri shugi, Kyōto: Nakanishiya shuppan.
Ishii, Seishi (1995), Iyashi no genri—homo ku-ransu no tetsugaku, Kyōto: Jinbun shoin.
Ishitobi, Kōzō (2010), Heionshi no susume—kuchi kara taberarenakunattara dō shimasu ka? Tōkyō: Kōdansha.
Issa (1819/1972), The Year of My Life, trans. Nobuyuki Yuasa, Berkeley: University of California Press.
Itō, Osamu (1958), “Kindai Nihon ni okeru ‘ai’ no kyogi,” Shisō, 409 (7): 34–40 [998–1004].
Jansen, Marius B. (1980), Japan and Its World: Two Centuries of Change, Princeton: Princeton University Press.
Johnson, Mark (1993), The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason, Chicago: University of Chicago Press.
Jonas, Hans (1958), The Gnostic Religion: The Message of the Alien God and the Beginnings of Christianity, New York: Beacon Press.
Jonas, Hans (1966/2001), The Phenomenon of Life: Toward Philosophical Biology, Evanston: Northwestern University Press.
Jonas, Hans (1969), “Philosophical Reflections on Experimenting with Human Subjects,” Daedalus, 98 (2): 219–47.
Jonas, Hans (1974/1980), Philosophical Essays: From Ancient Creed to Technological Man, Chicago: University of Chicago Press.
Jonas, Hans (1974/2009), “Against the Stream: Comments on the Definition and Redefinition of Death,” in John P. Lizza (ed.), Defining the Beginning and End of Life: Readings on Personal Identity and Bioethics, 498–506, Baltimore: Johns Hopkins University Press.
Jonas, Hans (1984), The Imperative of Responsibility: In Search of an Ethics for the Technological Age, Chicago: University of Chicago Press.
Jonas, Hans (1996), Mortality and Morality: A Search for the Good after Auschwitz, Evanston: Northwestern University Press.
Jonas, Hans (2008), Memoirs, ed. Christian Wiese, trans. Krishna Winston, Waltham: Brandeis University Press; Hanover and London: University Press of New England.


Joy, Bill (2000), “Why the Future Doesn’t Need Us,” Wired, April 2000.
Kaji, Nobuyuki (1994), Chinmoku no shūkyō—Jukyō, Tōkyō: Chikuma shobō.
Kagawa, Chiaki (2021), Inochi wa dare no mono ka, Tōkyō: Discover 21.
Kanamori, Osamu (2001), “Yonasu—sedaikan no rinri to sekinin,” Mainichi shinbun, November 19.
Kass, Leon (1992), “Organs for Sale? Propriety, Property, and the Price of Progress,” The Public Interest, Spring 107: 65–86.
Kass, Leon, ed. (1995), Special Issue: “The Legacy of Hans Jonas,” Hastings Center Report, 25 (7).
Kass, Leon (1997), “The Wisdom of Repugnance,” New Republic, 216 (22): 17–26.
Kass, Leon (2003), Beyond Therapy: Biotechnology and the Pursuit of Happiness, Washington, DC: President’s Council on Bioethics.
Kasulis, Thomas (2002), Intimacy or Integrity: Philosophy and Cultural Difference, Honolulu: University of Hawai’i Press.
Katō, Hisatake (1986), Baioeshikusu to wa nanika, Tōkyō: Miraisha.
Katō, Hisatake (1993), Nijūisseiki no echika—ōyō rinrigaku no susume, Tōkyō: Miraisha.
Kaufmann, David (2007), “One of the Most Relevant Thinkers You’ve Never Heard of,” Forward (The Jewish Daily Forward), October 17.
Kaufman, Sharon (2000), “In the Shadow of ‘Death with Dignity’: Medicine and Cultural Quandaries of the Vegetative State,” American Anthropologist, 102 (1): 69–83.
Kaufman, Sharon (2015), Ordinary Medicine: Extraordinary Treatments, Longer Lives, and Where to Draw the Line, Durham: Duke University Press.
Kaufman, Sharon (2020), “Neither Person nor Cadaver,” aeon, February 6. https://aeon.co/essays/why-medics-and-the-law-clash-with-family-in-brain-death-cases
Kaufman, Sharon R., Ann J. Russ and Janet K. Shim (2006), “Aged Bodies and Kinship Matters: The Ethical Field of Kidney Transplant,” American Ethnologist, 33 (1): 81–99.
Kawahata, Aiyoshi (1988), “Shi wo megutte igaku to Bukkyō—anrakushi, sōgenshi, shi wo meguru igakuteki hantei,” in Hajime Nakamura (ed.), Bukkyō shisō—Shi, 403–59, Kyōto: Heirakuji shoten.
Kimura, Rihito (1987), Inochi wo kangaeru—baioeshikkusu no susume, Tōkyō: Nihon hyōronsha.
Kimura, Rihito (1998), “Organ Transplantation and Brain-Death in Japan. Cultural, Legal, and Bioethical Background,” Annals of Transplantation, 3 (3): 55–8.
Kimura, Rihito (2001), “Unpublished lecture,” YMCA of Tokyo, October 2001.
Koestler, Arthur (1961), The Lotus and the Robot, New York: Macmillan.
Komatsu, Yoshihiko (1996), Shi wa kyōmei suru—nōshi zōki ishoku no fukami e, Tōkyō: Keisō shobō.
Komatsu, Yoshihiko (2002), Tairon—hito wa shinde wa naranai, Tōkyō: Shunjūsha.
Komatsu, Yoshihiko (2007), “The Age of a ‘Revolutionized Human Body’ and the Right to Die,” in William LaFleur, Gernot Böhme and Susumu Shimazono (eds.), Dark Medicine: Rationalizing Unethical Medical Research, 180–200, Bloomington and Indianapolis: Indiana University Press.
Kondō, Makoto (1996), Kanja yo, gan to tatakau na, Tōkyō: Bungei shunjū.
Kondō, Makoto, et al., eds. (2000), Watashi wa zōki wo teikyō shinai, Tōkyō: Yōsensha.
Kōsaka, Masa’aki (1964), Nishida Kitarō to Watsuji tetsugaku, Tōkyō: Shinchōsha.
Kōsei Iinkai (1997), Kōsei iinkai gijiroku, No. 5, March 18.


Kugu, Kōji (2021), Kinmirai no “ko-zukuri” wo kangaeru—funin chiryō no yukue, Tōkyō: Shunjūsha.
Kuhse, Helga and Peter Singer, eds. (1999), Bioethics: An Anthology, Oxford and Malden: Blackwell Publishing.
Kūkai (1975), “Sangō shiiki,” in Shunkyō Katsumata (ed.), Kōbō daishi chosaku zenshū, Vol. 3, 1–79, Tōkyō: Sangibō busshorin.
Kuriyama, Shigehisa (1992), “Between Mind and Eye: Japanese Anatomy in the Eighteenth Century,” in Charles Leslie and Allan Young (eds.), Paths to Asian Medical Knowledge, 21–43, Berkeley, Los Angeles and London: University of California Press.
LaFleur, William R. (1990), “Contestation and Consensus: The Morality of Abortion in Japan,” Philosophy East and West, 40 (4): 529–42.
LaFleur, William R. (1992), Liquid Life: Abortion and Buddhism in Japan, Princeton: Princeton University Press.
LaFleur, William R. (1998), “Body,” in Mark C. Taylor (ed.), Critical Terms for Religious Studies, 36–54, Chicago: University of Chicago Press.
LaFleur, William R. (2000), “Love’s Insufficiency: Zen as Irritant,” in Joseph Runzo and Nancy M. Martin (eds.), Love, Sex, and Gender in the World Religions, 37–47, Oxford: Oneworld.
LaFleur, William R. (2001), “From Agape to Organs: Religious Differences between Japan and America in Judging the Ethics of the Transplant,” in Joseph Runzo and Nancy M. Martin (eds.), Ethics in the World Religions, 271–90, Oxford and New York: Oneworld.
LaFleur, William R. (2002a), “From Agape to Organs: Religious Differences between Japan and America in Judging the Ethics of the Transplant,” Zygon, 37 (3): 623–42.
LaFleur, William R. (2002b), “The Afterlife of the Corpse: How Popular Concerns Impact upon Bioethical Debates in Japan,” in Susanne Formanek and William R. LaFleur (eds.), Practicing the Afterlife: Perspectives from Japan, 485–504, Vienna: Oesterreichische Akademie der Wissenschaften.
LaFleur, William R. (2003), “Transplanting the Transplant: Japanese Sensitivity to American Medicine as an American Mission,” in Carla M. Messikomer, Judith P. Swazey and Allen Glicksman (eds.), Society and Medicine: Essays in Honor of Renée C. Fox, 87–107, New Brunswick: Transaction Publishers.
LaFleur, William R. (2008), “Infants, Paternalism, and Bioethics: Japan’s Grasp of Jonas’s Insistence on Intergenerational Responsibility,” in Hava Tirosh-Samuelson and Christian Wiese (eds.), The Legacy of Hans Jonas: Judaism and the Phenomenon of Life, 461–80, Leiden and Boston: Brill.
LaFleur, William R., Gernot Böhme and Susumu Shimazono, eds. (2007), Dark Medicine: Rationalizing Unethical Medical Research, Bloomington: Indiana University Press.
LaFleur, William R., Gernot Böhme and Susumu Shimazono, eds. (2008), Akumu no iryōshi—jintai jikken · gunji gijutsu · sentan seimei kagaku, trans. Keishi Nakamura and Yoshiko Akiyama, Tōkyō: Keisō shobō.
Lakoff, George and Mark Johnson (1999), Philosophy in the Flesh: The Cognitive Unconscious and the Embodied Mind: How the Embodied Mind Creates Philosophy, New York: Basic Books.
Lamb, David (1996), Death, Brain Death and Ethics, Aldershot: Avebury.
Lebra, Takie Sugiyama (1976), Japanese Patterns of Behavior, Honolulu: University of Hawai’i Press.
Levenson, James L. and Mary Ellen Olbrisch (1987), “Shortage of Donor Organs and Long Waits,” Psychosomatics, 28 (8): 399–403.


Lindee, M. Susan (1994), Suffering Made Real: American Science and the Survivors at Hiroshima, Chicago: University of Chicago Press.
Lindee, M. Susan (1998), “The Repatriation of Atomic Bomb Victim Body Parts to Japan: Natural Objects and Diplomacy,” Osiris, 13: 376–409.
Lizza, John (2006), Persons, Humanity, and the Definition of Death, Baltimore: Johns Hopkins University Press.
Lock, Margaret M. (1980), East Asian Medicine in Urban Japan: Varieties of Medical Experience, Berkeley: University of California Press.
Lock, Margaret M. (1996), “Displacing Suffering: The Reconstruction of Death in North America and Japan,” Daedalus, 125 (1): 207–44.
Lock, Margaret M. (2002), Twice Dead: Organ Transplants and the Reinvention of Death, Berkeley: University of California Press.
Long, Susan O. (2005), Final Days: Japanese Culture and Choice at the End of Life, Honolulu: University of Hawai’i Press.
MacIntyre, Alasdair C. (1990), After Virtue: A Study in Moral Theory, London: Duckworth.
Manual for Judging Legal Brain Death (Hōteki nōshi hantei manyuaru) (2010), MHLW Science Research Grant, MHLW Science Special Research Project (Kōsei rōdō kagaku kenkyūhi hojokin kōsei rōdōkagaku tokubetsu kenkyū koto-gyō). “Research on maintenance of in-hospital system at organ donation facility” (Zōki teikyō shisetsu ni okeru in’nai taisei seibi ni kansuru kenkyū). “Research group of standardization of criteria for judging brain death” (Nōshi hantei kijun no manyuaru-ka ni kansuru kenkyū han).
Manzei, Alexandra (1997), “Hirntod, Herztod, ganz tot? Von der Macht der Medizin und der Bedeutung der Sterblichkeit für das Leben; Eine Soziologische Kritik des Hirntodkonzeptes,” diss. Frankfurt am Main, Germany.
Marieb, Elaine Nicpon (2004), Human Anatomy & Physiology, Sixth Edition, San Francisco: Pearson, Benjamin Cummings.
Marra, Michael F. (2001), A History of Modern Japanese Aesthetics, Honolulu: University of Hawai’i Press.
McCumber, John (2001), Time in the Ditch: American Philosophy and the McCarthy Era, Evanston: Northwestern University Press.
McGee, Glenn (1997), The Perfect Baby: A Pragmatic Approach to Genetics, London: Rowman & Littlefield Publishers, Inc.
Meisel, Alan and Loren H. Roth (1978), “Must a Man Be His Cousin’s Keeper?” Hastings Center Report, 8 (5): 5–6.
Midgley, Mary (2005), The Essential Mary Midgley, ed. David Midgley, New York: Routledge.
Minemura, Yoshiki, Kazue Yamaoka and Ryozo Yoshino (2010), “The Issue of Organ Transplants and Brain Death in Japan, Based on a Cross-National Comparative Study of Life and Culture (Seimeikan no kokusai hikaku kara mita zōki ishoku/nōshi ni kan suru waga kuni no kadai no kentō),” Japan National Institute of Public Health (Hoken iryō kagaku), 59 (3): 304–12.
Mineshima, Hideo, ed. (1982), Kindai Nihon no shisō to Bukkyō, Tōkyō: Tōkyō shoseki.
Mitsuishi, Tadahiro (2014), “Kanja · hikensha no kenri wo mamoru to wa,” in Chiaki Kagawa and Yoshihiko Komatsu (eds.), Seimei rinri no genryū—sengo Nihon shakai to baioeshikkusu, 253–66, Tōkyō: Iwanami shoten.


Mizuno, Hajime (1991), Nōshi to zōki ishoku, Tōkyō: Kiinokuniya shoten.
Mizutani, Hiroshi (1986), Nōshiron—ikiru koto to shinu koto no imi, Tōkyō: Sōshisha.
Mizutani, Hiroshi (1988), Nōshi to seimei, Tōkyō: Sōshisha.
Morioka, Masahiro (1991), Nōshi no hito—seimeigaku no shiten kara, Tōkyō: Fukutake bunko.
Morioka, Masahiro (1994), Seimeikan wo toinaosu—ekorojī kara nōshi made, Tōkyō: Chikuma shinsho.
Morioka, Masahiro, ed. (1999), Gendai bunmei wa seimei wo dō kaeru ka, Kyōto: Hōzōkan.
Morioka, Masahiro (2001), Seimeigaku ni nani ga dekiruka—nōshi · feminizumu · yūsei shisō, Tōkyō: Keisō shobō.
Morishita, Naoki (1999), Shi no sentaku—inochi no genba kara kangaeru, Tōkyō: Madosha.
Moulin, Anne Marie (1995), “The Ethical Crisis of Organ Transplants: In Search of Cultural ‘Compatibility’,” Diogenes, 43 (172): 73–92.
Murakami, Naoka and Yuki Nakagawa (2018), “The Current Status of Kidney Transplantation in Japan and the U.S. (Jinishoku no kokusai hikaku),” Nephrology & Urology, 7: 30–7.
Nabeshima, Naoki (1996), “Seimei sōsa wo Bukkyōsha wa dō miru ka?” Bukkyō (Tokushū: Seimei sōsa), 34: 120–9.
Nakagawa, Yonezō (1996a), Igaku no fukakujitsusei, Tōkyō: Nihon hyōronsha.
Nakagawa, Yonezō (1996b), Iryō no genten, Tōkyō: Iwanami shoten.
Nakajima, Michi (1985/1990), Mienai shi—nōshi to zōki ishoku, Tōkyō: Bungei shunjū.
Nakano, Tōzen (1998), Chūzetsu · songenshi · nōshi · kankyō—seimei rinri to Bukkyō, Tōkyō: Yūzankaku.
Namihira, Emiko (1994), Iryō jinruigaku nyūmon, Tōkyō: Asahi sensho.
Nancy, Jean-Luc (2000), L’Intrus/Der Eindringling [Text in French and German], Berlin: Merve Verlag.
Nirenberg, David (2008), “Choosing Life,” The New Republic, November 5: 39–43.
Nishida, Kitarō (1911), Zen no kenkyū, Tōkyō: Kōdōkan.
Nishida, Kitarō (1911/1990), An Inquiry into the Good, trans. Masao Abe and Christopher Ives, New Haven and London: Yale University Press.
Noble, David F. (1999), The Religion of Technology: The Divinity of Man and the Spirit of Invention, New York: Penguin Books.
Norbeck, Edward and Margaret Lock, eds. (1987), Health, Illness, and Medical Care in Japan: Cultural and Social Dimensions, Honolulu: University of Hawai’i Press.
Ogawa, Hidemichi (2014), “Wada shinzō ishoku to baioeshikkusu,” in Chiaki Kagawa and Yoshihiko Komatsu (eds.), Seimei rinri no genryū—sengo Nihon shakai to baioeshikkusu, 239–51, Tōkyō: Iwanami shoten.
Ogiwara, Makoto (1992), Nihonjin wa naze nōshi zōki ishoku wo kobamu no ka? Tōkyō: Shinyōsha.
Ohnuki-Tierney, Emiko (1984), Illness and Culture in Contemporary Japan: An Anthropological View, Cambridge: Cambridge University Press.
Ōishi, Matashichi (2003), Bikini jiken no shinjitu—inochi no kiro de, Tōkyō: Misuzu shobō.
Olshansky, S. Jay and Bruce A. Carnes (2001), The Quest for Immortality: Science at the Frontiers of Aging, New York: W. W. Norton & Co.
Outka, Gene (1972), Agape: An Ethical Analysis, New Haven: Yale University Press.
Payer, Lynn (1988), Medicine and Culture: Varieties of Treatment in the United States, England, West Germany, and France, New York: Penguin Books.


Pence, Gregory E. (1998), Who’s Afraid of Human Cloning? Lanham, Boulder, New York, and Oxford: Rowman & Littlefield Publishers, Inc.
Pernick, Martin S. (1985), A Calculus of Suffering: Pain, Professionalism, and Anesthesia in Nineteenth-Century America, New York: Columbia University Press.
Pernick, Martin S. (1988), “Back from the Grave: Recurring Controversies over Defining and Diagnosing Death in History,” in Richard M. Zaner (ed.), Death: Beyond Whole-Brain Criteria, 17–74, Dordrecht and Boston: Kluwer Academic Publishers.
Plath, David (1964), “Where the Family of God Is the Family: The Role of the Dead in Japanese Households,” American Anthropologist, 66: 300–17.
Post, Stephen G. (1995), The Moral Challenge of Alzheimer Disease, Baltimore: Johns Hopkins University Press.
Potts, Michael (2001), “A Requiem for Whole Brain Death: A Response to D. Alan Shewmon’s ‘The Brain and Somatic Integration,’” Journal of Medicine and Philosophy, 26 (5): 479–91.
Proudfoot, Wayne (1985), Religious Experience, Berkeley: University of California Press.
Ramsey, Paul (1970), The Patient as Person: Explorations in Medical Ethics, New Haven: Yale University Press.
Report on the Awareness Survey on the End-of-Life Care (Jinsei no saishū dankai ni okeru iryō ni kansuru ishiki chōsa hōkoku-sho), Committee of Dissemination and Awareness Raising for End-of-Life Care (Jinsei no saishū dankai ni okeru iryō no fukyū keihatsu no arikata ni kansuru kentōkai), March 2018.
Richardson, Ruth (1987), Death, Dissection and the Destitute, London: Routledge & Kegan Paul.
Richardson, Ruth (1996), “Fearful Symmetry: Corpses for Anatomy, Organs for Transplantation?” in Stuart J. Youngner, et al. (eds.), Organ Transplantation: Meanings and Realities, 66–100, Madison: University of Wisconsin Press.
Richardson, Ruth and Brian Hurwitz (1987), “Jeremy Bentham’s Self-Image: An Exemplary Bequest for Dissection,” British Medical Journal, 295 (6591): 195–8.
Rothman, David J. (1991), Strangers at the Bedside: A History of How Law and Bioethics Transformed Medical Decision Making, New York: Basic Books.
Rozin, Paul and April E. Fallon (1987), “A Perspective on Disgust,” Psychological Review, 94 (1): 23–41.
Rubinstein, Renate (1989), Take It and Leave It: Aspects of Being Ill, London and New York: Marion Boyars.
Runzo, Joseph (2000), “Eros and Meaning in Life and Religion,” in Joseph Runzo and Nancy M. Martin (eds.), The Meaning of Life in the World Religions, 187–202, Oxford and Boston: Oneworld Publications.
Sade, R. M. (2011), “Brain Death, Cardiac Death, and the Dead Donor Rule,” Journal of the South Carolina Medical Association, 107 (4): 146–9.
Sasaki, Kaori (2021), “Standardized Brain-Death Diagnostic Procedure: The Japanese Controversy of the 1980s and 1990s,” in Susanne Brucksch and Kaori Sasaki (eds.), Humans & Devices in Medical Contexts: Case Studies from Japan, 113–41, London: Palgrave Macmillan.
Satō, Kōdō [Takamichi] (1999), Shusseizen shindan—inochi no hinshitsu kanri e no keishō, Tōkyō: Yūhikaku sensho.
Schipper, Kristofer (1993), The Taoist Body, trans. Karen C. Duval, Berkeley: University of California Press.


Schneewind, Jerome B. (1977), Sidgwick’s Ethics and Victorian Moral Philosophy, Cambridge: Cambridge University Press.
Schöne-Seifert, Bettina (1999), “Präimplantationsdiagnostik und Entscheidungsautonomie: Neuer Kontext – altes Problem,” Ethik in der Medizin, 11 (1): 87–98.
Seaman, Louis Livingston (1906), The Real Triumph of Japan: The Conquest of the Silent Foe, London: S. Appleton.
Seifert, Josef (1993), “Is ‘Brain Death’ Actually Death?” The Monist, 76 (2): 175–202.
Sharp, Lesley A. (2001), “Commodified Kin: Death, Mourning, and Competing Claims on the Bodies of Organ Donors in the United States,” American Anthropologist, 103 (1): 112–33.
Sharp, Lesley A. (2006), Strange Harvest: Organ Transplants, Denatured Bodies, and the Transformed Self, Berkeley: University of California Press.
Sharp, Lesley A. (2007), Bodies, Commodities, and Biotechnologies: Death, Mourning, and Scientific Desire in the Realm of Human Organ Transfer, New York: Columbia University Press.
Shemie, Sam D., et al., on behalf of the EOL Conversations with Families of Potential Donors participants (2017), “End-of-Life Conversations with Families of Potential Donors,” Transplantation, 101 (5S): S17–S26.
Shewmon, D. Alan, et al. (1989), “The Use of Anencephalic Infants as Organ Sources. A Critique,” JAMA, 261 (12): 1773–81.
Shewmon, D. Alan (1998), “Chronic ‘Brain Death’: Meta-analysis and Conceptual Consequences,” Neurology, 51 (6): 1538–45.
Shewmon, D. Alan (2004), “The ‘Critical Organ’ for the Organism as a Whole: Lessons from the Lowly Spinal Cord,” in D. Alan Shewmon and Calixto Machado (eds.), Brain Death and Disorders of Consciousness, 23–42, New York: Springer.
Shewmon, D. Alan (2018a), “Brain Death: A Conclusion in Search of a Justification,” Hastings Center Report, 48 (S4): S22–S25.
Shewmon, D. Alan (2018b), “Truly Reconciling the Case of Jahi McMath,” Neurocritical Care, 29 (2): 165–70.
Shibatani, Atsuhiro (1999), “Sennō toshite no kagaku bunmei,” in Masahiro Morioka (ed.), Gendai bunmei wa seimei wo dō kaeru ka, 5–48, Kyōto: Hōzōkan.
Shimazono, Susumu (2006), Inochi no hajimari no seimei rinri—juseiran · kurōnhai no sakusei · riyō wa mitomerareru ka, Tōkyō: Shunjūsha.
Shimazono, Susumu (2008), “Inochi no senbetsu wa naze sakeru beki no ka?—shusseizen shindan wo meguru Nihon no keiken kara,” Shiseigaku kenkyū, 10: 41–4.
Shimazono, Susumu (2013/2021), Tsukurareta hoshasen anzenron—kagaku ga michi wo fumihazusu toki, Tōkyō: Senshū daigaku shuppankyoku.
Shimode, Sekiyo (1972), Nihon kodai no jingi to Dōkyō, Tōkyō: Yoshikawa kōbunkan.
Shinagawa, Tetsuhiko (2007), Seigi to sakai wo sessuru mono—sekinin to iu genri to kea no rinri, Kyōto: Nakanishiya shuppan.
Shusterman, Richard (2008), Body Consciousness: A Philosophy of Mindfulness and Somaesthetics, Cambridge and New York: Cambridge University Press.
Silver, Lee M. (1997), Remaking Eden: How Genetic Engineering and Cloning Will Transform the American Family, New York: Avon Books.
Singer, Peter (1994), Rethinking Life and Death: The Collapse of Our Traditional Values, New York: St. Martin’s Griffin.
Singer, Peter (2018), “The Challenge of Brain Death for the Sanctity of Life Ethic,” Ethics and Bioethics (in Central Europe), 8 (3–4): 153–65.


Singer, Peter (2019), Comments, Princeton University Center for Human Values Symposium, panel, “The Challenge to ‘Brain Death’: Are We Taking Organs from Living Human Beings, and If We Are, Does It Matter?” 30th Anniversary James Madison Program, October 2.
Sivin, Nathan (1968), Chinese Alchemy: Preliminary Studies, Cambridge: Harvard University Press.
Sivin, Nathan (1987), Traditional Medicine in Contemporary China, Ann Arbor: University of Michigan Center for Chinese Studies.
Smith, Robert J. (1974), Ancestor Worship in Contemporary Japan, Stanford: Stanford University Press.
Solomon, Michael (2010), Unpublished Manuscript on Jahi McMath.
Solomon, Michael, M.D. (2018), “Revised Jahi McMath Paper,” October 28 (unpublished ms.).
Solomon, Robert C. (1976), The Passions, Garden City, NY: Anchor Press.
Sontag, Susan (1990), Illness as Metaphor and AIDS and Its Metaphors, New York: Anchor Books, Doubleday.
Spicker, Stuart F., ed. (1970), The Philosophy of the Body: Rejections of Cartesian Dualism, New York: Quadrangle/The New York Times Book Co.
Starzl, Thomas E. (1992), The Puzzle People: Memoirs of a Transplant Surgeon, Pittsburgh: University of Pittsburgh Press.
Steiner, Franz (1967), Taboo, London: Penguin.
Stock, Gregory (2002), Redesigning Humans: Our Inevitable Genetic Future, New York: Houghton Mifflin Company.
Stone, Jacqueline and Mariko Namba Walter (2008), Death and the Afterlife in Japanese Buddhism, Honolulu: University of Hawai’i Press.
Stout, Jeffrey (1981), The Flight from Authority: Religion, Morality, and the Quest for Autonomy, Notre Dame: University of Notre Dame Press.
Stout, Jeffrey (1988), Ethics after Babel: The Languages of Morals and Their Discontents, Boston: Beacon Press.
Sugita, Genpaku (1969), Dawn of Western Science in Japan: Rangaku Kotohajime, trans. Ryōzo Matsumoto, Tokyo: The Hokuseido Press.
Sugitani, Atsushi (2020), “Aratamete tō—‘nōshi wa hito no shi ka?’ Ishokui kara mita zōki teikyō—tekishutsu no genjō,” Nōshi/Nōsosei, 32 (2): 74–85.
Swearer, Donald (1989), Me and Mine: Selected Essays of Bhikku Buddhadāsa, New York: State University of New York Press.
Tachibana, Takashi (1988), Nōshi sairon, Tōkyō: Chūō kōronsha.
Tachibana, Takashi (1992), Nōshi rinchō hihan, Tōkyō: Chūō kōronsha.
Tada, Tomio (1993a), Men’eki no imiron, Tōkyō: Seidosha.
Tada, Tomio (1993b), The Well of Ignorance, trans. S. Kita and B. Parker, No publisher.
Tada, Tomio (1994), “Asking the Dead if They’re Dead: A Modern Noh Drama,” Japan Society Newsletter, March, 2–3.
Tada, Tomio (1999), Seimei wo meguru taiwa, Tōkyō: Chikuma shobō.
Takatsuki, Yoshiteru (1999), “Nihon no shiseikan to zōki ishoku no rinri,” in Masachika Sudō, Yoshihiko Ikeda and Yoshiteru Takatsuki (eds.), Naze Nihon de wa zōki ishoku ga muzukashii no ka, 133–229, Tōkyō: Tōkai daigaku shuppankai.
Tamaki, Kōshirō (1993), Seimei to wa nani ka—Budda wo tōshite no ningen genzō, Kyōto: Hōzōkan.
Tamaki, Kōshirō (1996), Nōkan to gedatsu—katachi naki inochi ga tsūtetsu suru, Tōkyō: Tetsugaku shobō.


Tanaka, Akashi (2017), “Nōshi to iu shi no kijun to Nihon no seimei rinri—sono rekishiteki kōsatsu,” Tōkyō Ika Shika Daigaku kyōyōbu kenkyū kiyō, 47: 33–47.
Tateiwa, Shin’ya (2000), Yowaku aru jiyū e—jiko kettei kaigo seishi no gijutsu, Tōkyō: Seidosha.
Taylor, Gabrielle (1985), Pride, Shame, and Guilt: Emotions of Self-Assessment, Oxford: Clarendon Press.
Terunuma, Yuri and Bryan Mathis (2021), “Cultural Sensitivity in Brain Death Determination: A Necessity in End-of-Life Decisions in Japan,” BMC Medical Ethics, 22 (58): 1–6.
Thomas, Keith (1971), Religion and the Decline of Magic: Studies in Popular Beliefs in Sixteenth and Seventeenth Century England, London: Penguin.
Tirosh-Samuelson, Hava and Christian Wiese, eds. (2008), The Legacy of Hans Jonas: Judaism and the Phenomenon of Life, Leiden and Boston: Brill.
TRIO Japan, ed. (1993), Kore kara no ishoku iryō—ishokusha tachi kara no hatsugen, Tōkyō: Haru shobō.
Troster, Lawrence (2008), “Caretaker or Citizen: Hans Jonas, Aldo Leopold, and the Development of Jewish Environmental Ethics,” in Hava Tirosh-Samuelson and Christian Wiese (eds.), The Legacy of Hans Jonas: Judaism and the Phenomenon of Life, 373–96, Leiden and Boston: Brill.
Truog, Robert D. (1997/1998), “Is It Time to Abandon Brain Death?” Hastings Center Report, 27: 29–37. Reprinted in Arthur L. Caplan and Daniel H. Coelho (eds.), The Ethics of Organ Transplant: The Current Debate, 24–40, New York: Prometheus Books.
Truog, Robert D. and Walter M. Robinson (2003), “Role of Brain Death and the Dead-Donor Rule in the Ethics of Organ Transplantation,” Critical Care Medicine, 31 (9): 2391–6.
Truog, Robert D. and Franklin G. Miller (2008), “The Dead Donor Rule and Organ Transplantation,” New England Journal of Medicine, August 14, 359: 674–5.
Truog, Robert D., Thaddeus Mason Pope and David S. Jones (2018), “The 50-Year Legacy of the Harvard Report on Brain Death,” Journal of the American Medical Association, 320 (4): 335–6.
Tsuneishi, Kei’ichi (1982), Saikin senbutai to jiketsu shita futari no igakusha, Tōkyō: Shinchōsha.
Tsuneishi, Kei’ichi (1993), Kieta saikin sen-butai—Kantōgun Dai 731 Butai, Tōkyō: Chikuma shobō.
Tsuneishi, Kei’ichi (1998), “Igaku to sensō—ima igakkai ni towarete iru koto,” in Arifumi Kōyama (ed.), Seimei rinri kōgi—igaku · iryō ni nani ga towareteiru ka, 187–216, Tōkyō: Nihon hyōronsha.
Umehara, Takeshi (1985), Nihongaku kotohajime, Tōkyō: Shūeisha.
Umehara, Takeshi (1990), “Nōshi—Sokuratesu no to wa hantai suru—seimei e no ifu wo wasureta gōman na ‘nōshiron’ wo haisu,” Bungei shunjū, 68 (13): 344–64.
Umehara, Takeshi (1991), Mori no shisō ga jinrui wo sukū, Tōkyō: Shōgakkan.
Umehara, Takeshi (1992), Nōshi wa, shi de nai, Kyōto: Shibunkaku shuppan.
Umehara, Takeshi (2000), “Nōshi” to zōki ishoku, Tōkyō: Asahi shinbunsha.
Vacek, Edward Collins, S.J. (1994), Love, Human and Divine: The Heart of Christian Ethics, Washington, DC: Georgetown University Press.
Veatch, Robert M. (1976), Death, Dying, and the Biological Revolution: Our Last Quest for Responsibility, New Haven and London: Yale University Press.
Veatch, Robert M. (2000), Transplantation Ethics, Washington, DC: Georgetown University Press.


Veatch, Robert M. (2005), “The Death of Whole-Brain Death: The Plague of the Disaggregators, Somaticists, and Mentalists,” Journal of Medicine and Philosophy, 30 (4): 353–78.
Veatch, Robert M. (2009), “The Impending Collapse of the Whole-Brain Definition of Death,” in John P. Lizza (ed.), Defining the Beginning and End of Life: Readings on Personal Identity and Bioethics, 483–97, Baltimore: Johns Hopkins University Press.
Wada, Jurō (1992), “Nōshi” to “shinzō ishoku”—are kara nijūgo nen, Tōkyō: Kanki shuppan.
Washida, Koyata (1988), Nōshiron—ningen to hiningen no aida, Tōkyō: San’ichi shobō.
Wayman, Alex (1974), “The Mirror as a Pan-Buddhist Metaphor-Simile,” History of Religions, 13 (4): 251–69.
Wiese, Christian (2008), “‘God’s Adventure with the World’ and ‘Sanctity of Life’: Theological Speculations and Ethical Reflections in Jonas’s Philosophy after Auschwitz,” in Hava Tirosh-Samuelson and Christian Wiese (eds.), The Legacy of Hans Jonas: Judaism and the Phenomenon of Life, 419–60, Leiden: Brill.
Williams, Bernard (1985), Ethics and the Limits of Philosophy, Cambridge: Harvard University Press.
Wittgenstein, Ludwig (1958/1980), The Blue and Brown Books: Preliminary Studies for the Philosophical Investigations, Oxford: Basil Blackwell.
Wittgenstein, Ludwig (1958/1992), Philosophical Investigations: The English Text of the Third Edition, trans. Gertrude E. Anscombe, New York: Macmillan.
Wolin, Richard (2001), Heidegger’s Children: Hannah Arendt, Karl Löwith, Hans Jonas, and Herbert Marcuse, Princeton: Princeton University Press.
Wolters, Gereon (2001), “Hans Jonas’ Philosophical Biology,” Graduate Faculty Philosophy Journal, 23 (1): 85–98.
WuDunn, Sheryl (1999), “Death Taboo Weakening, Japan Sees 1st Transplant,” New York Times, March 1.
Yamaguchi, Ken’ichirō (1995), Seimei wo moteasobu—gendai no iryō, Tōkyō: Hyōronsha.
Yanagida, Kunio (1999), Gisei—waga musuko · nōshi no jūichinichi, Tōkyō: Bunshun bunko.
Yokota, Hiroyuki (2010), “The Revised Organ Transplantation Act in Japan from the View of Emergency Doctors (Kaisei zōkiishokuhō no mondaiten to sono taiyō: kinkyūi no tachiba kara),” Brain and Nerve, 62 (6): 565–73.
Yonemoto, Shunpei (1985), Baioeshikkusu, Tōkyō: Kōdansha.
Youngner, Stuart J. (1990), “Organ Retrieval: Can We Ignore the Dark Side?” Transplantation Proceedings, 22 (3): 1014–15.
Youngner, Stuart J. (1996), “Some Must Die,” in Stuart J. Youngner, Renée C. Fox and Laurence J. O’Connell (eds.), Organ Transplantation: Meanings and Realities, 32–55, Madison: University of Wisconsin Press.
Youngner, Stuart J., Renée C. Fox and Laurence J. O’Connell, eds. (1996), Organ Transplantation: Meanings and Realities, Madison: University of Wisconsin Press.
Yōrō, Takeshi (1989), Yuinōron, Tōkyō: Seidosha.
Yōrō, Takeshi and Iwane Saitō (1992), Nō to haka—hito wa naze maisō suru no ka, Tōkyō: Kōbundō.

INDEX Abe, Masao 170–1, 188 Abe, Tomoko 157–8 abortion 1, 109, 177–8, 180, 184, 206 n.1 Ad Hoc Committee on Brain Death and Organ Transplantation (Rinji nōshi oyobi zōki ishoku chōsakai) or (Nōshi rinchō) 14, 54–5, 90, 175, 193 minority report 14, 54–5, 90, 175, 188–9, 220 n.9 “Against the Stream” (Jonas) xiv, 26, 29, 82, 133–8, 141–2 agapé 3, 26, 107–14, 119, 134, 198–9, 206 n.2, 216 n.7 “agapeic calculus” 112 Aita, Kaoruko 17 Akutagawa, Ryūnosuke 115–17 altruism xiii, 2–4, 18, 28–9, 37, 61, 68, 113, 199, 201 Amagasa, Keisuke 77 Ambagtsheer, Frederike 15 Ames, Roger T. 171 analytic philosophy 94, 136–7, 141, 170, 205 n.10, 219 n.2, 219 n.7 Andrews, Lori 37 angels, as symbol for organ donors 101–2, 205 n.3 Anstötz, Christoph 53 anti diuretic hormone (ADH) 208 n.15 Arap, Wadih 58 Arendt, Hannah 130, 133, 136 arguments for caution (shinchōron) 26, 34–5, 83–4, 173, 176–80, 184, 188 Ariés, Philippe 104 Armstrong-Hough, Mari 18 Atomic Bomb Casualty Commission (ABCC) 152–3, 185 Attali, Jacques 123, 217 n.5 Awaya, Tsuyoshi 33–4, 97, 123 Bacon, Francis 48, 129 Bacteriological War Unit that Disappeared, The (Kieta saikin sen-butai), (Tsuneishi) 153

Baioeshikusu to wa nanika (Katō) 129 Bambrough, Renford 99 Barnard, Christiaan 19, 27, 31, 64–5, 108, 110, 118, 198–9, 201, 209 n.27 “Basic Concepts on the Handling of Human Embryos” (Japanese Government Report) 183, 187 Beginnings of Dutch Learning, The (Rangaku kotohajime), (Sugita) 85, 87 Beginnings of Japanese Learning, The (Nihongaku kotohajime), (Umehara) 92 Bellah, Robert Neelly 43 Benedict, Ruth 13, 155 Bentham, Jeremy 111–12, 117–20, 124, 201 Bernat, James L. 8 Bernstein, Richard J. 140 bioethics 133, 136, 180–3 critique of American/Western style xiii, 26, 30, 43, 90, 117–18, 121, 125, 164, 169 history of field 109, 111, 122, 133, 136, 141, 170 ­and public debate x–xi, 25, 31, 46–7, 124, 145, 173, 180, 182, 190–1 utilitarianism and xiii, 3, 12, 111–12, 116–25, 137, 184, 188–9, 201–3, 206 n.2, 212 n.9 “biolust” 41, 63 biotechnology xiv, xvi, 28, 34, 38, 40–2, 55–6, 68, 70–1, 85, 93, 96, 99, 109, 129–30, 133, 140–1, 145, 173, 176–80, 182–4, 198, 202, 204, 212 n.7–8. See also consensus, social, in adoption of new biotechnology; skepticism, of biotechnology as juggernaut xi, 24, 34 Bleich, J. David 103 Böhme, Gernot 75–6, 174, 219 n.5 Boniface VIII, Pope 104 Borovoy, Amy ix, xi, 18, 205 n.5–6

Index brain dead bodies ability to heal and grow 8, 32, 96, 154, 196 appearance of 6, 51, 57–8, 77, 108, 123, 201 care for (nursing) 8, 49, 51–2, 107 family interaction with 49–52, 76–7, 98–9, 114, 175, 199, 201 pregnant 52–5, 143, 178, 184, 192 proposed uses of xiii, 58, 125–8 Brain Dead Human Person, The (Morioka) 50–1 brain death (neurological death) xi–xiii, xv, 2–11, 13–18, 25–7, 31–3, 47–58, 64–7, 76, 79–80, 90–2, 96–7, 108, 122, 138–9, 141–2, 156–7, 174–5, 188–9, 191–8, 205 n.8–9, 206 n.4, 207 n.7–8, 209 n.2, 210 n.5 (Ch.3), 221 n.5 as flawed concept xii, 31–2, 47–8, 54, 57, 91, 94, 125, 134–5, 142 as liminal state xv, 6, 11–12, 96, 135, 192, 205 n.4, 205 n.6 and loss of somatic integration 8, 14, 57, 195–7, 207 n.9, 209 n.4 procedures for determining 7, 174, 193–4, 207 n.7–8, 213 n.4 public debate in Japan x, 25, 46–7, 145, 182, 191 whole brain criterion 4–11, 32, 57, 65, 133, 194, 197, 208 n.13, 208 n.15 Brain Death Theory: Between Being Human and Inhuman (Washida) 96, 123, 154 brain function 6–11, 66–7 Brencick, Janice M. 51 Brown, Peter Robert Lamont 104 Buddhism xiv–xv, 1, 12–13, 63–4, 66, 85, 101, 105, 113, 161–2, 165, 167–72, 188–9, 208 n.19, 209 n.6, 211 n.3, 212 n.4, 215 n.5–6, 219 n.3, 219 n.6, 220 n.9 Buddhist ethics 63, 162–3, 171 Bultmann, Rudolf 130 Bush, Barbara 45–6 Bush, George Herbert Walker 45 Bush, George Walker 46, 141


calculation (in ethics) 63, 112, 121, 123–5, 127, 185, 204, 209 n.28, 217 n.6 Callahan, Daniel 133, 136 Campbell, Courtney 108–9 Campbell, John Creighton 89 capitalism 162–3. See also organ transplantation, economic/financial considerations in and encouragement of consumption 162 and “market solutions” for organ shortages 37, 123 and price discovery (for body parts) 186 Caplan, Arthur L. 74, 81, 123, 212 n.5 cardiac death. See death, cardiac ­Cartesianism xii, xvi, 57, 91–5, 97, 139–40, 170, 197, 209 n.5 Catholicism/Catholics 20, 27, 85, 104–7, 131, 199–200, 216 n.7 Chinese medicine, traditional 85–90, 194, 206 n.3, 214 n.1, 214 n.4 Christianity xiii–xv, 2, 9, 26, 34, 77, 86, 101–13, 118–19, 134, 150, 164–5, 170, 191, 193, 198–201, 205 n.3, 206 n.2, 215 n.5, 215 n.13, 216 n.2, 216 n.7, 220 n.10 Christian ethics 109, 111, 134 Japanese/in Japan 26, 85–6, 193, 199, 200–1 values xiii, 2–3, 9, 102–3, 150, 200, 220 n.10 Chrysanthemum and the Sword, The: Patterns of Japanese Culture (Benedict) 155 Cold War xiv, 150–1, 203 coma 6–7, 17, 29, 31–2, 174, 195, 206 n.4 commodification. See organs, commodification of Confucianism 64, 88–9, 113–14, 167 conscience 60–3, 132, 152, 212 n.7 ryōshin 61 consciousness xii, xvi, 4, 7, 9, 15, 29, 32, 57, 81, 94, 97–8, 139, 198, 207 n.8, 215 n.13 as localized in brain 57, 65 Consciousness Explained (Dennett) 94 consensus, social xv, 4, 6, 9, 11, 54, 209 n.2 in adoption of new biotechnology 2, 68, 70–1, 109, 145, 177, 180–2 in Japan xv, 42, 54, 177, 182


Index

consent, familial 7, 10, 16–17, 82, 175, 208 n.13, 212 n.6 consumption, encouragement of. See capitalism Cook, James 70 cost-benefit analysis xiii, 125–6, 184–5 Council for Science and Technology Policy (Japan) 187 Council on Ethical and Judicial Affairs of the American Medical Association, The 81 creationism 34 Crowley-Matoka, Megan 20 cultural difference 21, 23–5, 29–30, 40, 48, 61, 64, 70, 77, 79, 89, 92, 113, 145, 185 culture wars xi, 56, 141, 198, 211 n.10 Culver, Charles M. 8 curiosity 3, 23–5, 30, 41–2, 73, 86 cyclosporine 15, 37, 46, 65, 134, 168, 175, 219 n.5

Committee of Harvard Medical School to Examine the Definition of Brain Death Dennett, Daniel C. xii, 94 Descartes’ Error: Emotion, Reason, and the Human Brain (Damasio) 95 Descartes, René 88–95, 97–8, 139–40, 163–4, 167, 172, 188, 197, 219 n.5. See also Cartesianism disabled persons/disabilities 135–6, 177–8, 184–5 rights of 177 Dōgen xvi, 172, 188 Doi, Kenji 200–1 Dorff, Elliot N. 102–3, 107, 215 n.2, 215 n.3, 215 n.13 Down syndrome 177, 184–5 Drott, Edward 219 n.3 dualism. See Cartesianism Dutchen, Stephanie 10 “Dutch Learning” (Rangaku) 85–8, 92

Damasio, Antonio R. 95, 216 n.12 Daoism 64, 165–8, 171, 219 n.9 (Ch.11) Darwin/Darwinism 33–4, 109, 139, 198, 209 n.6 “dead donor rule” 10, 81, 207 n.10 death cardiac 5, 10, 17, 175, 207 n.10 definition of 4–6, 76 life and 170–2 moment of 4, 9, 14, 18, 205 n.6 as process 11–12, 18, 74–5, 132, 172, 197 re-definition of xii, 6, 9, 57, 75–6, 114, 131, 184, 213 n.11 as social (rather than purely individual) phenomenon 1–2, 12–13 three signs of (san chōkō) xii, 15, 74–6, 197 de Becker, Gavin 82 de-coupling (of medical and transplant teams) 9 ­Deeken, Alfons 200 “A Definition of Irreversible Coma: Report of the Ad Hoc Committee of Harvard Medical School to Examine the Definition of Brain Death.” See Report of the Ad Hoc

East, stereotypes of xi, xiv, 2, 31, 120, 147, 190, 218 n.3 (Ch.10) EEG. See electroencephalogram (EEG) electrocardiogram (ECG) 74–5 electroencephalogram (EEG) 49–50, 74–6, 194, 213 n.2 emotions xii, 13, 17, 50–1, 62, 64, 68–70, 77, 82, 94–5, 101–2, 110, 112–14, 161, 199, 202, 206 n.2 role in moral reasoning xii, 68–70, 201–2 empiricism xii, 32, 47, 49–50, 66, 74–7, 134, 151, 194, 197–8, 201 importance of seeing the brain dead 15, 49–52, 74–9, 104, 200–1 Endo, Shusaku 149, 155 Erlangen case 52–3, 55, 142–3 Ethics of Genetic Control, The: Ending Reproductive Roulette (Fletcher) 109 Eugenic Protection Law (1948) 177–8, 180 eugenics 176–80, 184. See also “new eugenics” Nazi 184 “faith healing” 107 family 10–13, 17–18, 20, 49–52, 70, 76–7, 98–9, 114, 175, 199, 201,

Index 206 n.1, 207 n.6, 211 n.9, 220 n.10. See also brain dead bodies, family interaction with; consent, familial fear. See also “moral fear” heuristics of xii, 79, 81–4, 99, 202–3, 212 n.8 irrational 45, 68, 79, 82, 125 noncompliant public 77–9 and rationality 73–7 feminist ethics 53, 178 Fletcher, Joseph 109–12, 118–19, 134, 188, 199, 206 n.2, 216 n.2, 216 n.9 Fox, Renée C. 2, 37–8, 61, 68, 133–4, 136, 141, 159, 162, 168–9, 210 n.4 (Ch.2), 211 n.10, 217 n.5, 219 n.11 “Frankenstein Syndrome” 211 n.1 Fujii, Masao 63 Fukushima Daiichi Nuclear Power Plant disaster 186 Futterman, Laurie G. 73, 78 ­gallows humor 37–40 Gaylin, Willard 125–8 gene therapy 2, 28 Gert, Bernard 8 Gervais, Karen Grandstrand 54, 219 n.8 (Ch.11) Gold, Hal 146–7, 149 Golub, Edward S. 158 Goodman, Grant K. 86 Good Samaritan, parable of 26, 79, 200–1, 206 n.2 Goodwin, Michele 37 “Guidelines for Handling Specified Embryos” (Japanese Government Report) 179 Guillemin, Jeanne 158 guilt cultivation of 43, 80, 191 culture 154 sense of 40, 60, 212 n.4 vs. shame 155 Hacking, Ian 124, 217 n.6 Hara, Hideo 15 Hardacre, Helen 1, 208 n.19 Harris, Sheldon H. 148–50, 216 n.3 Harvard Report. See Report of the Ad Hoc Committee of Harvard Medical

239

School to Examine the Definition of Brain Death (A Definition of Irreversible Coma) “Harvesting the Dead: The Potential for Recycling Human Bodies” (Gaylin) 125–7 Havana, Second International Conference on Brain Death in 31–5, 195, 210 n.6 Hawking, Stephen 135–6 health, societal 18, 42–3. See also Japan/ Japanese, health care system heart, as metaphor/symbol xii, 27–8, 59–64, 101. See also kokoro Heidegger, Martin 27, 130, 141–2, 202 Hiro, Sachiya 40, 161 Hosaka, Masayasu 163 Howe, Reuel 111 humanity 4, 29, 33–4, 57, 91–2, 95–8, 181 essence/definition of xii, 29, 34, 75, 92–3, 95–100 fears about future of 74–5, 79, 80, 83, 95–8 human nature xii–xiii, 29, 95–100, 162, 171, 215 n.12 Hume, David 97–8 Hurwitz, Brian 118–19 Ienaga, Saburō 122, 148, 216 n.3 Ikegami, Naoki 89 immortality xiv–xvi, 161, 163–8, 172, 219 n.3–4 immune system 37, 46, 65, 174–5, 206 n.3 role in defining boundaries of self/ person 65, 206 n.3 immunosuppressants. See cyclosporine Imperative of Responsibility, The: In Search of an Ethics for the Technological Age (Jonas) 27, 83–4, 133, 137 informed consent 82, 164, 178, 186, 218–19 n.7 at societal level 55, 145 inochi no senbetsu. See “sorting of life” ­Inquiry into the Good, An (Zen no kenkyū), (Nishida) 120 instrumentalization 186–8 “invisible death,” brain death as 15, 49, 76 Ishii, Seishi 97, 138–40 Ishii, Shirō 148–51, 156, 218 n.5 (Ch.10)

240

Index

Ishitobi, Kōzō 14 Issa 162 Itō, Osamu 112–13, 199 Jansen, Marius 87 Japan/Japanese. See also Christianity, Japanese/in Japan attitudes toward bioetechnology xiv, 28, 34, 38, 42, 55, 85, 95–6, 140, 176–80, 182–4, 202 bioethics 3, 4, 26, 121, 174, 180–9, 201 and cadaveric organ transplantation 26, 69–70, 101, 157, 174–6, 190, 193, 220 n.3 health care system 18–19, 42–3, 87 Hiroshima and Nagasaki, nuclear bombing of 151–5, 202, 185 living-donor transplantation 5, 14–21, 132, 176 population aging 15, 18–19, 48 Japanese Society of Obstetrics and Gynecology 178–9 Johnson, Mark 93–5, 98, 125 Jonas, Hans xii, xiii, 3, 11, 15, 26–9, 31, 34, 75, 79, 82–4, 93, 97, 99–100, 129–43, 162, 176, 188–9, 192, 201–2, 212 n.8, 213 n.5, 215 n.16, 218 n.4–5 (Ch.9) Jones, David S. 4, 10, 207 n.7 Joy, Bill 34 Judaism xiii, 103–4, 130, 134, 140–2, 165 Jewish ethics 102–3, 107, 109, 119, 134, 215 n.13 Kagawa, Chiaki 175 Kaji, Nobuyuki 88–9 kanpōyaku. See Chinese medicine, traditional Kass, Leon 74, 141, 183, 212 n.5, 217 n.5 Kasulis, Thomas 64 Katō, Hisatake 75–6, 97, 129, 138, 162, 201–2, 209 n.3 Kaufman, Sharon R. 1, 4, 6, 16, 20, 207 n.6 Kawahata, Aiyoshi 169, 171 Kierkegaard, Søren 108, 110–11, 114 Kieta saikin sen-butai. See Bacteriological War Unit that Disappeared, The Kimura, Rihito 17, 145, 155–6 kokoro xii, 60, 62–4, 211 n.2

Komatsu, Yoshihiko 9, 13, 49, 123, 157 Kondō, Makoto 158 Kugu, Kōji 179 Kūkai 167–8, 219 n.4 Kulmus, Johann Adam 86 Kurosawa, Akira x, 115 LaFleur, William R. viii–xvi, 1–5, 9, 11–14, 16–19, 21, 38, 106, 141, 159, 173–7, 183–204, 205 n.10, 206 n.2–3, 208 n.19, 210 n.3 (Ch.3), 210 n.5–6 (Ch.2), 214 n.5, 215 n.6, 215 n.15, 218 n.5 (Ch.9), 220 n.2 (Conclusion), 221 n.6 Lakoff, George 93–5, 98 Lamb, David 53–4, 56–8, 214 n.11 ­Lebra, Takie 63 Levenson, James L. 39 Lindee, M. Susan 152–3 Liquid Life: Abortion and Buddhism in Japan (LaFleur) 1, 12, 177 living donor transplantation 5, 14–21, 132, 176 Lock, Margaret M. xv, 2, 4, 6, 13–14, 16–19, 32, 40, 48, 87, 205 n.4, 205 n.8, 210 n.2 (Ch.2), 210 n.5 (Ch.2), 210 n.2–4 (Ch.3), 211 n.1, 212 n.6, 213 n.6, 215 n.10, 221 n.3 Long, Susan O. 14, 17 love xiii, 3, 26, 28, 61, 79, 108–14, 134, 158, 186–7, 199, 201, 206 n.2, 216 n.7, 216 n.10, 216 n.11, 217 n.7 McCumber, John 136, 219 n.7 McGee, Glenn 28 Machado, Calixto 31 MacIntyre, Alasdair C. 70–1, 104, 212 n.9 Manzei, Alexandra 51 Marieb, Elaine Nicpon 67 “market solutions.” See capitalism, and “market solutions” for organ shortages Marra, Michael F. 120 Maternal Protection Law (Japan) 178, 180 Mathis, Bryan 17–18 medical missionizing 30, 106, 199, 205 n.3 medicine. See also medical missionizing Chinese. See Chinese medicine, traditional

Index and cultural bias/metaphysical assumptions xii, 29, 95, 97, 170, 215 n.10 as “miracle” or “modern miracle” xvi, 2, 25, 28, 68, 79, 103–7, 109–10, 119, 137, 145, 161, 163, 167, 169, 172, 199 mystique of 77 medulla oblongata 66–7, 133, 139 Meisel, Alan 38 Memoirs (Jonas) 131–2, 142 metaphysics xii, 56, 136–7, 141, 215 n.10 Method of Ethics, The (Sidgwick) 124 Midgley, Mary 68–9, 212 n.8 Miller, Franklin G. 10 Mill, John Stuart 111–12, 117–18, 120, 124 Minemura, Yoshiki 19 Ministry of Health and Welfare (Japanese) 101, 153, 176, 178 miracles 103, 105–7. See also medicine, as “miracle” or “modern miracle” Mitsuishi, Tadahiro 15 Mizutani, Hiroshi 122 “Modest Proposal, A” (Swift) 126 “moral fear” 2, 62–4, 68, 71, 212 n.7 Morals and Medicine (Fletcher) 110 Morioka, Masahiro 50–1, 96, 127, 211 n.6, 215 n.9 Morishita, Naoki 35, 80–1, 163, 197 Mortality and Morality: A Search for the Good after Auschwitz (Jonas) 140 Moulin, Anne Marie 104 Murakami, Naoka 19 Nakagawa, Yonezō 58 ­Nakajima, Michi 49–51, 76, 217 n.7 Nakasone, Yasuhiro 90 Nakayama, Kōmei 19 Nancy, Jean-Luc 219 n.5 nationalism 24, 48, 188–9, 202, 210 n.5 (Ch.2), 219 n.10 Nelkin, Dorothy 37 “neo cannibalism” 11, 123 neomorts xiii, 125–8 neurological death. See brain death (neurological death) “new eugenics” 118, 173, 177–9, 184–5 New School for Social Research 130, 133, 136

241

Nihongaku kotohajime. See Beginnings of Japanese Learning, The Nirenberg, David 142 Nishi, Amane 117, 120, 188 Nishida, Kitarō 120–1, 123–4, 188 Noble, David F. 109 Noninvasive Prenatal Testing Procedure (NIPT) 178 Nōshi rinchō. See Ad Hoc Commission on Brain Death and Organ Transplantation nurses 18, 49, 51–2, 54, 57, 111, 211 n.8, 217 n.7 for “corpses” (the brain dead) 49, 51–2 perspectives of (vs. those of physicians) 51–2, 217 n.7 Ogawa, Hidemichi 19 Ogiwara, Makoto 113, 216 n.11 Ōishi, Matashichi 186 Olbrisch, Mary Ellen 39 On Interpreting Immunity (Tada) 65 “ontological thinning” and “ontological denseness” 1, 12 ontology 165, 170–2, 219 n.6, 219 n.8 (Ch.11) organ donation/donors xi, 3–5, 9–12, 15–21, 32–4, 46, 48, 53, 67, 77–9, 81–3, 101–3, 107, 119, 123, 157, 175–6, 191, 199, 201, 205 n.3, 208 n.16, 208 n.18, 211 n.1, 212 n.6, 219 n.10 organ donor card 101, 205 n.3, 213 n.6 “organ panic” xi, 41–2, 190–2 Organ Procurement and Transplantation Network (OPTN) 12, 208 n.16 organ procurement organizations (OPOs) 12, 210 n.6, 211 n.9, 213 n.8, 217 n.4, 217 n.7, 220–1 n.2 (Appendix) organ recipients health outcomes of 37, 134, 168 interest in donors 211 n.1 and “tyranny of the gift” 61, 210 n.4 (Ch.2) organ recipients, potential and factors determining availability of organs 38, 39, 56, 80 psychology of xii, 16, 37–40, 61–2 organs. See also organ transplantation

242

Index

access to as medical right 3, 38, 210 n.2 (Ch.2) commodification of 4, 11–14, 17, 37, 123, 186–8, 209 n.1 (Ch.2), 210 n.2 (Ch.2) meanings assigned to 54, 65, 89, 211 n.1 shortage of 3, 37, 56, 80, 123, 209 n.1 (Ch.2), 213 n.10 as transferring personality of donor 62, 211 n.1, 219 n.5 ­waiting lists for 18, 20, 39–40, 63 organ transplantation economic/financial considerations in 18–19, 33, 105, 123, 127, 162–3, 177, 180, 184, 191–2, 199, 209 n.1 (Ch.2), 216–17 n.4, 221 n.4 factors determining success or failure of xi, 37–9, 174–5, 217 n.4 live donor 20, 23, 38, 123, 176 opening the way for more radical forms of biotechnology xi, xvi, 24–5, 27–8, 34–5, 38, 163, 176, 183 public relations campaigns promoting 2, 12, 23–4, 78–9, 102, 105, 191, 202–3 and transparency 10, 210 n.6, 210 n.3 (Ch.3), 220–1 n.2 (Appendix) transplant tourism 15–16, 175, 209 n.23 Organ Transplant Law (Zōki no ishoku ni kan suru hōritsu) 2, 14, 175, 182 Orientalism xiv, 218 n.3 (Ch.10) Origin of Species, The (Darwin) 34 “Our Shameful Waste of Human Tissue: An Ethical Problem for the Living and the Dead” (Fletcher) 110, 118 Outka, Gene 108 Pascal, Blaise 97 Pasqualini, Renata 58 patients awaiting transplants. See organ recipients, potential Paul VI, Pope 198–9 Payer, Lynn 89 Pernick, Martin S. 82, 125 Perry, Matthew 116 persistent (permanent) vegetative state (PVS) 6, 7, 32, 50, 56, 97, 139, 184, 194, 197, 205 n.10, 206 n.4, 207 n.6, 210 n.5 (Ch.3), 213 n.11, 214 n.11

personhood consciousness as criterion for xii, 29, 57, 65, 205 n.8, 205 n.9, 205–6 n.10, 211 n.6, 214 n.11, 215 n.10 as culturally defined/determined 29–30, 205 n.8, 205–6 n.10 philosophical discussions of 4, 92, 211 n.6, 219 n.8 (Ch.11) as socially constituted 10, 12, 13, 64, 76, 94, 108, 207–8 n.11 somatic integration as criterion for 8, 57, 195–7, 207 n.9, 209 n.4 Phenomenon of Life: Toward Philosophical Biology, The (Jonas) 130, 138–40 philosophy xiii, xiv, 4, 12, 17, 27, 32, 83, 91–5, 115–21, 129–30, 133–7, 139–41, 164, 167–8, 170–1, 188, 201–3, 205 n.10, 215 n.10, 216 n.1, 219 n.2, 219 n.7 analytic vs. continental 136–7, 141, 170, 219 n.2, 219 n.7 Philosophy in the Flesh (Lakoff and Johnson) 94–5 Pius XII, Pope 105 Plath, David 13 Pope, Thaddeus Mason 4, 10 pregnancy 1, 52–5, 118, 143, 178–9, 184, 192 postmortem xii, 53, 192 President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research (1981) 8, 14, 174, 195, 206 n.5, 208 n.12 President’s Council on Bioethics 141, 183 “presumed consent” 81 price discovery. See capitalism, and price discovery (for body parts) Procrustean bed 55–8 ­profit motive. See organ transplantation, economic/financial considerations in progress medicine/organ transplants as index of 25–6, 42, 73, 90–1, 190, 194–5 technology and 180, 182, 194 Protestantism/Protestants 105–6, 110, 112, 199, 206 n.2 Proudfoot, Wayne 106 prudence/prudentialism 26, 83, 176. See also arguments for caution (shinchōron)

Index psychology. See organ recipients, potential, psychology of public health 18–19, 42–3 Puczynski, Michelle 38 Rangaku Kotohajime. See Beginnings of Dutch Learning, The Rashōmon (Akutagawa) x, 115 Real Triumph of Japan, The: The Conquest of the Silent Foe (Seaman) 146–7 Reeves, Christopher 135 religion as factor in moral reasoning/ intuitions xiv–xv, 12, 18, 21, 23–6, 34, 63–4, 78, 92, 102, 104–5, 109, 111–13, 154–5, 161, 164–5, 170–2, 180, 198–200, 208 n.13, 209 n.1, 216 n.7 vs. magic 162 as superstition/taboo x, 45, 56, 69–71, 136, 190–1, 194, 211 n.9 (Ch.3) Remaking Eden (Silver) 97, 165 Report of the Ad Hoc Committee of Harvard Medical School to Examine the Definition of Brain Death (A Definition of Irreversible Coma) 6–10, 65, 108–9, 125, 128, 130–3, 174, 206 n.2, 207 n.7, 213 n.5, 218 n.1 (Ch.9) Richardson, Ruth 82, 118–19 Ricour, Paul 98 rights to organs of others. See organs, access to as medical right Rinji nōshi oyobi zōki ishoku chōsakai. See Ad Hoc Committee on Brain Death and Organ Transplantation Robinson, William 10 Roth, Loren H. 38 Russ, Ann J. 16, 20 ryōshin. See conscience, ryōshin Sade, Robert M. 10 Saitō, Iwane 90 samādhi 67 san chōkō. See death, three signs of Sangō shiiki (Kūkai) 167 Satō, Kōdō [Takamichi] 184–5 Schipper, Kristofer 166 Schleiermacher, Friedrich 106 Schöne-Seifert, Bettina 53

243

Schweitzer, Albert 106 science/scientific research as contra religion/irrationality 34, 56, 81–3, 109, 198 in Japan 2–3, 85–7, 146–7, 155, 157 metaphysics dressed up as (scientism) xii, 29, 94–5, 97, 170, 215 n.10 of neurological death (brain death) xi, xii, 6–9, 26, 31–3, 66–7, 124, 195–7, 207 n.8 principles of (skepticism/empiricism/ transparency) 26, 30–1, 33, 42, 48, 50, 75–7, 81, 94, 132–3, 135 and promise of immortality xv, 34, 164, 168–70 public knowledge of 31, 33, 42, 48–9, 77, 210 n.6 and question of human nature 29, 97 and war 146, 155, 158, 185–6, 203 Sea and Poison, The (Umi to dokuyaku), (Endo) 149 Seaman, Louis Livingston 146–7, 203, 218 n.3 (Ch.10) Second World War xiv, 122, 126, 130, 148, 150, 153, 156, 158, 184, 203–4 Shaivo, Terry 57 shame 45, 79, 155 Sharp, Lesley A. xv, 11–12, 16, 205 n.8–9, 208 n.18, 209 n.1 (Ch.2), 210 n.2 (Ch.2), 210 n.3 (Ch.3), 210 n.6, 211 n.1, 211 n.9, 212 n.6, 213 n.4, 213 n.8, 215 n.10, 216–17 n.4, 217 n.7, 218 n.1 (Ch.9), 219 n.10–11, 220–1 n.2 (Appendix), 221 n.4–5 Shemie, Sam D. 10 Shewmon, D. Alan 3, 8, 32–5, 81, 133, 139, 195–8, 207 n.8, 210 n.6, 221 n.6 Shibatani, Atsuhiro 96, 206 n.3 Shimazono, Susumu ix, xi, xv, 4, 14, 174, 183, 186, 188, 217 n.8, 220 n.1 (Conclusion), 220 n.7, 220 n.8 Shim, Janet K. 16, 20 Shimode, Sekiyo 166 Shinagawa, Tetsuhiko 218 n.5 (Ch.9) shinchōron. See arguments for caution Shinto 24, 63–4, 205 n.3, 209 n.1 (Ch.1) Shusterman, Richard 206 n.11 Sidgwick, Henry 124, 212 n.9 Silver, Lee M. 97, 165 Singer, Peter 4, 8–9

244

Index

Situation Ethics: The New Morality (Fletcher) 109–12 Sivin, Nathan 166, 214 n.1, 214 n.4 scepticism. See also science/scientific research, principles of (skepticism/ empiricism/transparency) of biotechnology 34, 56, 90–1, 145, 165, 179–80 of brain death/organ transplantation xi, 3, 5, 31–2, 47, 50, 65–6, 77–8, 142, 176, 200. See also Jonas, Hans of utilitarianism 3, 117, 120–1 Smith, Robert J. 13 societal health. See health, societal Solomon, Michael 1, 4, 7, 10 somatic integration 8, 14, 57, 195–7, 207 n.9, 209 n.4. See also brain death (neurological death), and loss of somatic integration Sontag, Susan xiv, 158 “sorting of life” (inochi no senbetsu) 173, 176–80, 183 Spicker, Stuart F. 93 spinal cord xii, 7, 33, 66–7, 133, 196, 206 n.3, 221 n.5 Starzl, Thomas E. 2, 46–7, 55, 210 n.1 “Statement on the Birth of Genetically Altered Children” (issued by Board of Directors of the Philosophical Association of Japan, the Board of Trustees of the Japanese Society for Ethics, and the Board of Directors of the Japanese Association for Religious Studies) 181–2 ­Steiner, Franz 70 Stock, Gregory 34, 209 n.5 Sugita, Genpaku 85–9, 92, 214 n.1, 214 n.8 Sugitani, Atsushi 15 Swazey, Judith P. 2, 37, 61, 133–4, 136, 141, 159, 162, 168–9, 210 n.4 (Ch.2), 211 n.10, 219 n.11 taboo 2, 45, 64, 68–71, 118, 123, 212–13 n.9 Tachibana, Takashi 14–15, 26, 55, 193–4, 208 n.21, 213 n.3 Tada, Tomio 65–6, 73, 198, 206 n.3, 211 n.4 Takatsuki, Yoshiteru 40–1

Tamaki, Kōshirō xii, 66–7, 206 n.3 Tanaka, Akashi 15 Tanaka, Kakuei 55, 193 Taylor, Gabrielle 62 technology. See biotechnology Terunuma, Yuri 17–18 Thomas, Keith 105–6 Time in the Ditch (McCumber) 136 Tirosh-Samuelson, Hava 130, 140–1 traditionalism xiii, xiv, 26, 90, 92, 210 n.5 (Ch.2) transparency. See bioethics, and public debate; organ transplantation, and transparency; science, principles of (skepticism/empiricism/ transparency); science, public knowledge of Transplantation Ethics (Veatch) 63 transplants/transplantation. See organ transplantation transplant tourism. See organ transplantation, transplant tourism Treatise on Human Nature (Hume) 98 Trifling with Life—Contemporary Medicine (Yamaguchi) 156 Troster, Lawrence 138 Truog, Robert D. 4, 10, 47–8, 56–7, 139, 142, 207 n.7, 213–14 n.11 Tsuneishi, Kei’ichi 11, 122–3, 153–4, 204, 216 n.3 Twice Dead: Organ Transplants and the Reinvention of Death (Lock) 13 Umehara, Takeshi 15, 26, 40, 48, 88–93, 103, 188–9, 202, 206 n.3, 214 n.6–8, 220 n.9 Uniform Declaration of Death Act 206–7 n.5, 208 n.12 United Network for Organ Sharing (UNOS) 11, 23, 37, 78, 105, 209 n.25, 217 n.4 Unit 731 xiv, 122, 148–51, 153, 155–8, 185, 192, 202, 204, 216 n.3 “University in Exile”. See New School for Social Research utilitarianism xiii, 3–4, 9, 12, 75–6, 110–12, 116–27, 134, 137, 184, 188–9, 201–3, 206 n.2, 212–13 n.9, 216 n.2, 217 n.7

Index Veatch, Robert M. 4, 8, 27–8, 33, 63, 207 n.9 ventilator, mechanical 2–11, 14–15, 53, 132, 185, 196, 205 n.4, 207 n.6, 207 n.10, 208 n.14, 211 n.8 vivisection 5, 11, 74, 127, 131–2, 139, 148–9, 155, 157, 192 Vogel, Lawrence 140 Wada, Jurō 19, 164, 175, 190, 192–4, 221 n.3 waiting list. See organs, waiting lists for wake (o tsuya) 13, 75, 77 Warnock, Mary 201–2 Washida, Koyata 96–7, 123, 154, 217 n.5 ­waste organs and 57, 61, 110–11, 116–19, 122, 123, 125–6, 128, 210 n.2 (Ch.2), 216 n.2 utilitarian attitudes toward xiv, 111–12, 116–25, 151 “Western civilization” 30, 97–8, 106, 146, 189 whole brain. See brain death, whole brain criterion


Wiese, Christian 130, 134, 140–2 Williams, Bernard 124, 215 n.12, 217 n.7 Wittgenstein, Ludwig xiii, 85, 98–9, 215 n.16 Wolin, Richard 141–2 Woolman, John 69 Works of Love (Kierkegaard) 108 Yamaguchi, Ken’ichirō 156–7, 193 Yamaoka, Kazue 19 Yanagida, Kunio 6 Yokota, Hiroyuki 19 Yonemoto, Shōhei 15 Yonemoto, Shunpei 215 n.9 Yōrō, Takeshi 65, 89–90, 206 n.3 Yoshino, Ryozo 19 “yuck factor” 2, 67–9, 74, 116–17, 125, 127, 212 n.7 Zen no kenkyū. See Inquiry into the Good, An Zōki no ishoku ni kan suru hōritsu. See Organ Transplant Law
