Risks and Regulation of New Technologies [1st ed.] 9789811586880, 9789811586897

How should we proceed with advanced research in the humanities and social sciences in collaboration?


English, xi + 312 pages, 2021


Kobe University Monograph Series in Social Science Research

Tsuyoshi Matsuda · Jonathan Wolff · Takashi Yanagawa, Editors

Risks and Regulation of New Technologies

Kobe University Monograph Series in Social Science Research

Series Editor: Takashi Yanagawa, Professor, Graduate School of Economics, Kobe University, Kobe, Japan

The Kobe University Monograph Series in Social Science Research is an exciting interdisciplinary collection of monographs, both authored and edited, that encompass scholarly research not only in economics but also in law, political science, business and management, accounting, international relations, and other sub-disciplines within the social sciences. As a national university with special strength in the social sciences, Kobe University actively promotes interdisciplinary research. This series is not limited only to research emerging from Kobe University’s faculties of social sciences but also welcomes cross-disciplinary research that integrates studies in the arts and sciences. Kobe University, founded in 1902, is the second oldest national higher education institution for commerce in Japan, and is now a preeminent institution for social science research and education in the country. Currently, the social sciences section includes four faculties—Law, Economics, Business Administration, and International Cooperation Studies—and the Research Institute for Economics and Business Administration (RIEB). More than 230 researchers belong to these faculties and conduct joint research through the Center for Social Systems Innovation and the Organization for Advanced and Integrated Research, Kobe University. This book series comprises academic works by researchers in the social sciences at Kobe University, as well as their collaborators at affiliated institutions, Kobe University alumni and their colleagues, and renowned scholars from around the world who have worked with academic staff at Kobe University. Although traditionally the research of Japanese scholars has been publicized mainly in the Japanese language, Kobe University strives to promote publication and dissemination of works in English in order to further contribute to the global academic community.

More information about this series at http://www.springer.com/series/16115

Tsuyoshi Matsuda · Jonathan Wolff · Takashi Yanagawa, Editors

Risks and Regulation of New Technologies

Springer

Editors

Tsuyoshi Matsuda
Graduate School of Humanities, Kobe University, Kobe, Japan

Jonathan Wolff
Blavatnik School of Government, University of Oxford, Oxford, UK

Takashi Yanagawa
Graduate School of Economics, Kobe University, Kobe, Japan

ISSN 2524-504X ISSN 2524-5058 (electronic)
Kobe University Monograph Series in Social Science Research
ISBN 978-981-15-8688-0 ISBN 978-981-15-8689-7 (eBook)
https://doi.org/10.1007/978-981-15-8689-7

© Kobe University 2021

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Preface

This edited book is a response to the vital and challenging questions posed by two crises. One is related to the major issues facing human beings in a rapidly changing technological civilization; the other, more subtle one, has to do with the destination of the humanities and social sciences, in contrast to the natural sciences, in the knowledge-based society of the twenty-first century. We are constantly faced with the questions of how to properly use the power of reproductive medicine, such as genome editing, and how to mitigate the threat of climate change by innovating environmental technology and reforming economic and legal institutions. These issues come before discussions of how to live with artificial intelligence in the academic scene of the present day. In the process of the technological transitions of national and global societies, the humanities and social sciences stand at a sort of crossroads, from the point of view of the public interest and of national university policy in a developed country such as Japan. These are the preconditions of multidisciplinary and international academic collaboration.

From this historical background, we aim to newly build “socio-humane” studies focusing on the problems of environmental and biotechnologies, by following and extending the model of the project of “econo-legal” studies by economists and jurists. For that undertaking, Takashi Yanagawa was one of the editors of the successful book Econo-Legal Studies: Thinking Through the Lenses of Economics and Law (Yuhikaku, Tokyo, 2014; Chinese translation, 2017). With the coined word “socio-humane” we mean that this framework can capture the humane aspects of life and the environment, influenced by factors such as health, family, and culture, concurrently with their economic impacts and legal regulation. In such terms, on the one hand, this book is oriented especially toward political economy, a field in which Jonathan Wolff is an internationally recognized philosopher through books such as Ethics and Public Policy: A Philosophical Inquiry (Routledge, 2011). On the other hand, the present volume also provides a philosophical foundation and ethical examination of the implementation of innovative technologies and useful materials related to everyday life, and of their legal regulation. Toward this end, it considers crucial basic problems such as the relationships of causation and responsibility, methodological criteria of statistics, and precautionary ethics.


Our authors and editors have collaborated in their research, meeting in more than 40 workshops to date, as a team of the Japan Society for the Promotion of Science (JSPS) Topic-Setting Program and as the group “Meta-science and Technology Project: Methodology, Ethics and Policy from a Comprehensive Viewpoint” in the Organization for Advanced and Integrated Research at Kobe University, of which Tsuyoshi Matsuda is the project leader. We hope that forward-thinking readers will make further discoveries of the challenges of multidisciplinary studies for themselves, as well as for scholars across the diverse social sciences and humanities. We hope that they will explore the evolving frontiers and depths of those disciplines by means of the individual contributions of experts in philosophy, ethics, law, economics, bioscience, and science and technology studies (STS).

Kobe, Japan

Tsuyoshi Matsuda

Contents

Part I: Socio-Humane Sciences of New Technology
1. Risk and the Regulation of New Technologies (Jonathan Wolff) .... 3
2. Gradation of Causation and Responsibility: Focusing on “Omission” (Tsuyoshi Matsuda) .... 19
3. Ockham’s Proportionality: A Model Selection Criterion for Levels of Explanation (Jun Otsuka) .... 47

Part II: Reproductive Technology and Life
4. Enforcing Legislation on Reproductive Medicine with Uncertainty via a Broad Social Consensus (Tetsuya Ishii) .... 69
5. Gene Editing Babies in China: From the Perspective of Responsible Research and Innovation (Ping Yan and Xin Kang) .... 87
6. Posthumously Conceived Children and Succession from the Perspective of Law (Kengo Itamochi) .... 111
7. Aristotle and Bioethics (Naoto Chatani) .... 135
8. Reinterpreting Motherhood: Separating Being a “Mother” from Giving Birth (Mao Naka) .... 153

Part III: Environmental Technology
9. Domains of Climate Ethics Revisited (Konrad Ott) .... 173
10. Electricity Market Reform in Japan: Fair Competition and Renewable Energy (Takashi Yanagawa) .... 201
11. Renewable Energy Development in Japan (Kenji Takeuchi and Mai Miyamoto) .... 217
12. Adverse Effects of Pesticides on Regional Biodiversity and Their Mechanisms (N. Hoshi) .... 235
13. Reconsidering Precautionary Attitudes and Sin of Omission for Emerging Technologies: Geoengineering and Gene Drive (Atsushi Fujiki) .... 249

Part IV: Science and Society
14. Exploring the Contexts of ELSI and RRI in Japan: Case Studies in Dual-Use, Regenerative Medicine, and Nanotechnology (Ken Kawamura, Daisuke Yoshinaga, Shishin Kawamoto, Mikihito Tanaka, and Ryuma Shineha) .... 271
15. Global Climate Change and Uncertainty: An Examination from the History of Science (Togo Tsukahara) .... 291

Editors and Contributors

About the Editors

Tsuyoshi Matsuda is the project leader of the “Meta-science and Technology Project: Methodology, Ethics, and Policy from a Comprehensive Viewpoint” and a professor of philosophy and environmental ethics in the Graduate School of Humanities, Kobe University, a position he has held since 2003. He received his Ph.D. from Osnabrück University in 1989, as a scholar in the German scientific exchange program, majoring in philosophy and environmental ethics. After working for the Japan Society for the Promotion of Science as a special researcher at Kyoto University, he was with the Kyushu Institute of Design from 1992 to 1997, then joined the faculty of Kobe University in 1997. He was the vice-dean of the Graduate School of Humanities at Kobe University from 2008 to 2010, and has been the vice-dean of the Organization for Advanced and Integrated Research at Kobe University since 2020. He was a visiting researcher at the Leibniz-Archive of the State Library of Niedersachsen (2000, on a German scientific exchange scholarship). He is currently a member of the editorial board of the Japanese Leibniz Society and of the board of the Kansai Philosophical Association. He also serves as the chief editor of the Journal of Innovative Ethics (Kobe University).

Jonathan Wolff is the Alfred Landecker Professor of Values and Public Policy at the Blavatnik School of Government, University of Oxford, and a Governing Body Fellow at Wolfson College. He was formerly the Blavatnik Chair in Public Policy at the School, and before that he was a professor of philosophy and dean of arts and humanities at University College London. He is currently developing a new research program on revitalizing democracy and civil society, in accordance with the aims of the Alfred Landecker Professorship. His other current work largely concerns equality, disadvantage, social justice, and poverty, as well as applied topics such as public safety, disability, gambling, and the regulation of recreational drugs, which he has discussed in his books Ethics and Public Policy: A Philosophical Inquiry (Routledge, 2011) and The Human Right to Health (Norton, 2012). His most recent book is An Introduction to Moral Philosophy (Norton, 2018).

Takashi Yanagawa is a professor at the Graduate School of Economics at Kobe University, Japan, where he specializes in industrial organization. His work includes Risks and Regulation of New Technologies (edited jointly with Tsuyoshi Matsuda and Jonathan Wolff, Springer, 2020) and Introduction to Econo-Legal Studies (edited jointly with Hiroshi Takahashi and Shinya Ouchi, Yuhikaku, 2014, in Japanese; China Machine Press, 2017, in Chinese; currently being translated into English), among others. At present, he is working on research projects having to do with the competition policies of platform businesses, energy market reform under the Paris Agreement, and econo-legal studies. He received his Ph.D. from the University of North Carolina at Chapel Hill, USA. He was formerly the director of the Interfaculty Initiative in the Social Sciences at Kobe University and was also the vice-dean of the Organization for Advanced and Integrated Research at Kobe University. He was a visiting researcher at the Japan Fair Trade Commission, the London School of Economics, and the University of California at Berkeley. He was the president of the Japan Economic Policy Association and is currently a board member of the Japan Economic Policy Association and the Public Utility Economics Association. He is also a co-editor of the International Journal of Economic Policy Studies (Springer).

Contributors

Naoto Chatani, Kobe University, Kobe, Japan
Atsushi Fujiki, Kobe City College of Nursing, Kobe, Japan
N. Hoshi, Laboratory of Animal Molecular Morphology, Department of Animal Science, Graduate School of Agricultural Science, Kobe University, Kobe, Hyogo, Japan
Tetsuya Ishii, Office of Health and Safety, Hokkaido University, Sapporo, Japan
Kengo Itamochi, Kobe University Graduate School of Law, Nada, Kobe, Hyogo, Japan
Xin Kang, Zhongshan Hospital of Dalian University, Dalian, China
Shishin Kawamoto, Hokkaido University, Sapporo, Hokkaido, Japan
Ken Kawamura, Osaka University, Suita, Osaka, Japan
Tsuyoshi Matsuda, Kobe University, Nada-ku, Kobe, Japan
Mai Miyamoto, Kansai Gaidai College, Hirakata, Japan
Mao Naka, Kobe University, Nada-ku, Kobe, Japan
Jun Otsuka, Kyoto University, Yoshida-Hommachi, Sakyo-ku, Kyoto, Japan
Konrad Ott, Universität Kiel, Kiel, Germany
Ryuma Shineha, Osaka University, Suita, Osaka, Japan
Kenji Takeuchi, Graduate School of Economics, Kobe University, Kobe, Japan
Mikihito Tanaka, Waseda University, Shinjuku, Tokyo, Japan
Togo Tsukahara, Kobe University, Kobe, Japan
Jonathan Wolff, Blavatnik School of Government, University of Oxford, Oxford, UK
Ping Yan, Dalian University of Technology, Dalian, China
Takashi Yanagawa, Kobe University, Kobe, Japan
Daisuke Yoshinaga, Waseda Institute of Political Economy, Shinjuku, Tokyo, Japan

Part I

Socio-Humane Sciences of New Technology

Chapter 1

Risk and the Regulation of New Technologies

Jonathan Wolff

Abstract  New technologies can bring tremendous benefits. But they also have costs, or risks, some known, some unknown. How should authorities regulate new technologies in the light of the possible costs and benefits? A standard approach to decision making under risk is to use formal risk cost-benefit analysis. Yet there are clear limits to this approach where risks and probabilities are unknown. Furthermore, simple cost-benefit analysis ignores questions of moral hazard—where benefits and costs fall—and the political dimensions of the introduction of new technologies. In this paper, I discuss how to frame a reasonable precautionary attitude to the risks of new technology, setting out a series of questions that need to be taken into account before a technology should be approved.

Keywords  Risk · Regulation · New technologies · Precautionary principle · Precaution

J. Wolff
Blavatnik School of Government, University of Oxford, Oxford OX2 6GG, UK
e-mail: [email protected]

© Kobe University 2021
T. Matsuda et al. (eds.), Risks and Regulation of New Technologies, Kobe University Monograph Series in Social Science Research, https://doi.org/10.1007/978-981-15-8689-7_1

1.1 Introduction

I once read an article by an American college professor who said that he had asked his class two questions. The first was ‘what are the greatest technological advances of the last hundred years?’ The students answered: nuclear power and plastics. His second question was ‘what are the greatest technological challenges we now face?’ The students, so he said, responded: dealing with nuclear waste; and the disposal of unwanted plastic.

New technologies can bring tremendous benefits. But they also have costs, or risks, some known, but some, such as damage to the ozone layer and asbestosis, unpredicted or even entirely unpredictable. How should authorities regulate new technologies in the light of their possible costs and benefits? Our standard formal methods for risk analysis, most notably risk cost-benefit analysis, cannot be applied in cases of radical uncertainty. Some theorists have tried to extend those formal methods or replace


them with other quasi-formal principles—such as ‘the precautionary principle’—with limited success. Here I argue that a number of factors, including moral hazard and the political dimension of risk, can be overlooked, with damaging effect, in the attempt to apply a formal method or principle. Instead of hoping to formulate a precise ‘precautionary principle’ we do better to use a ‘precautionary attitude’ [20] or perhaps even a ‘precautionary checklist’ [10] which reminds us which values and concerns should be at the forefront in the political debate that will ultimately make the decision (see also [8, 9]).

1.2 The Introduction of a New Technology

While there is no single process by which all new technologies are introduced, often a development can be traced through a number of stages. A breakthrough in basic science uncovers a new mechanism, or discovers an unexpected property of a known object, or creates a new material, or makes some other surprising finding. Innovators suspect that there may be an opportunity for commercial exploitation, and engage in experiments to put the new discovery or invention to a use that will have commercial value. That stage could be carried out by the original scientists, or by a team of technologists, or, increasingly in recent decades, by amateur enthusiasts working out of a basement or garage laboratory. Ultimately a product is created, and typically it will need to go through a sequence of tests before it is approved for general use, to show that it meets a certain safety standard.

In many areas common standards have developed and have been operationalised into a series of tests that can be applied in a straightforward fashion. This will be particularly so when the proposed innovation is little more than a development of existing technologies, the risks of which are well known. But underlying those tests is likely to be a more complex pattern of reasoning that has been instrumentalised in this particular way. In designing those deeper tests it is appropriate to consider all stakeholders in the decision, and the benefits and costs that could accrue if the technology is approved for use. It is rare, however, that costs and benefits can be known with certainty. In other words, technologies have risks, and those risks also need to be understood if an informed decision is to be made about whether to authorise their use, and if so under what conditions. Such analyses have been carried out, at least informally, for many decades. For example, the introduction of the motor car in the UK was debated in parliament, and was allowed in 1865 only if preceded by a person waving a red warning flag [16].

Stakeholders in any such decision will include those who wish to introduce the technology and would expect to benefit commercially from doing so. This will normally be a company, and for ease I shall call this the corporate interest. There will also, naturally enough, be a potential consumer group. Consumers would expect to benefit, but with a new product it is always possible there are unintended effects, either known or unknown. There can also be unknown costs or benefits for the corporate interest (for example finding itself liable to clean-up costs, or reaping secondary


benefits from an unexpected source). And very often there will be a series of third parties (including future generations), who will either benefit or lose from the introduction of the technology, through, for example, pollution, or increase in property values, or the creation of new employment opportunities. However, very little of this will be known for certain in advance, and therefore estimates of probabilities will need to be made.

Hence, we have the basic question facing any regulator of a new technology: how can the balance of risk of costs and benefits to the various stakeholders be evaluated in a way that allows a reasonable decision about whether a new technology should be made commercially available, and what restrictions, if any, should be applied? I will use genetically modified (GM) crops as a running example through this paper. How should the regulator of agricultural products assess whether they should be permitted?

Some new technologies have little impact beyond their immediate sphere of application. Others can be revolutionary and reshape society beyond the immediate technical context. The printing press, the spinning jenny, the railway, the computer, the internet and the mobile phone are all examples of technologies that have had profound effects, not only on individual well-being and livelihoods, but throughout society. No one would deny the social effects of the railway or the internet, but this entails that they have also enabled shifts in power and influence. Little of this can be known at the time of the first introduction of the fledgling technology that led down this route. But still, as I will claim later, technologies can redistribute power and influence in ways that can be included in a form of risk assessment.
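The stakeholder framing above can be made concrete in a simple data structure. The following is a minimal sketch, not anything proposed in the chapter; the class, field names and example entries are all illustrative assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Stakeholder:
        """One party affected by a decision to approve a new technology."""
        name: str
        known_benefits: list[str] = field(default_factory=list)
        known_costs: list[str] = field(default_factory=list)
        unknown_effects: bool = True  # effects we cannot yet enumerate

    # Illustrative entries for the GM-crops running example (assumed, not exhaustive).
    stakeholders = [
        Stakeholder("corporate interest",
                    known_benefits=["sales of patented seed"],
                    known_costs=["development costs", "possible clean-up liability"]),
        Stakeholder("consumers",
                    known_benefits=["cheaper food"],
                    known_costs=["possible unintended effects"]),
        Stakeholder("third parties, including future generations",
                    known_benefits=["new employment opportunities"],
                    known_costs=["pollution", "changes in property values"]),
    ]

    for s in stakeholders:
        print(f"{s.name}: {len(s.known_benefits)} known benefits, "
              f"{len(s.known_costs)} known costs, unknowns remain: {s.unknown_effects}")

The point of such an enumeration is only to make explicit, before any formal analysis, whose costs and benefits are in play and where the unknowns lie.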

1.3 Cost-Benefit Analysis and Its Limits

A standard approach to decision making under risk is to use risk cost-benefit analysis. On this familiar approach, a range of options is considered and evaluated against each other. The simplest case is where a decision is to be made whether or not to make a single change. The two options are either to remain as we are, or to make that change. The many possible contexts include ordinary business decision-making, safety planning, a change to public policy, and the introduction of a new technology.

To use cost-benefit analysis it is necessary to come to an understanding of all known possible costs and benefits, and their probabilities. It is also standard practice to convert costs and benefits into a common currency, most notably money, so that they can be weighed against each other. The probabilities of things working out well or badly for the different stakeholders can be combined with the positive and negative value of the different possible outcomes so that the expected value of making the change can be calculated. In the simple case ‘business as usual’ can be given a value of zero, and the question then is whether the change yields a positive or negative expected value. Of course, there will also be cases where more than one alternative to the status quo is being considered, and others where the status quo is not sustainable and hence not an option, or very expensive to maintain. But similar considerations


apply, and the technique generally recommends the option with the highest expected value.

This approach has been used, for example, to decide whether to introduce safety improvements in transport systems. A new signalling system, for example, could be predicted to save a number of lives, and the saving of a life will be given a particular financial value, as will other costs of an accident. Preventing an accident therefore has a value. The cost of the technology is also calculated, and if the benefits outweigh the costs, this is a strong consideration in its favour, as it will yield higher value, according to the calculation, than doing nothing. In other cases, the cost of the new technology might be considered to be too high, and the method would recommend against introducing the safety measure, even if, by all probability, it would save lives. (For discussion see [18, pp. 79–99] and [19].)

It should not be assumed, however, that cost-benefit analysis is acceptable for areas such as transport safety. I have argued that cost-benefit analysis can be the right approach to take in economic decision-making where the following conditions are met [17]:

1. The range of possible outcomes, and their probabilities, are known (to a reasonable degree of accuracy).
2. It is reasonable to put a monetary valuation on all relevant costs and benefits.
3. The decision is one of a repeating series of similar decisions.
4. The probabilities in the repeating series are true, in the sense that they are not biased towards or against one group.
5. The loss to any individual or group is not catastrophic.

The basic idea is that in a long series of repeating decisions under risk, where all costs and benefits are comparable, maximising average expected value is a rational procedure, for, under the conditions mentioned, the great majority of people can expect an individual outcome that is close to the expected value. You will be on the losing side in some cases, and the winning side on others, but few people will do badly over a long series. Nevertheless, even if the probabilities are fair, in a big group it is likely that some people will have an unlucky run, coming out on the wrong side time after time, and a form of secondary scheme will be needed to compensate those who lose from those who have a run of lucky gains. With such a secondary mechanism in place, and under the assumptions set out above, cost-benefit analysis has much in its favour.

The conditions set out above make evident when cost-benefit analysis is particularly useful, and, arguably, it would be irrational to follow any other procedure. But it can immediately be seen that there is a limit to the application of risk cost-benefit analysis. For many of the cases where we are tempted to use risk cost-benefit analysis involve possible loss of life, which is a catastrophic outcome for an individual, and cannot be made up for by benefits issuing from future cases. This appears to violate condition 5 above. For this reason, some have thought that cost-benefit analysis should never be used in life and death situations.

Conceptually, however, the decisive move was made by Thomas Schelling in the US [15] and Jacques Drèze in France [2]. If the railway company introduces a safety


measure that can be expected to save a life, we will, nevertheless, not know whose life it has saved. Indeed, we will not know whether it really has saved one life, two lives or even more, or none at all. We have no way of knowing what precisely would have happened had the safety development not been introduced. But what we do know is that the introduction of the safety measure has made every traveller’s life a tiny bit safer. Therefore, although it is convenient to talk about lives saved, and the value of a statistical life, the reality is that a safety improvement does not save a known particular life but is an aggregation of many small risk reductions.

It is argued that when the risk to any individual is very small, we can still reasonably use risk cost-benefit analysis, for it is continuous with ordinary economic life. We buy smoke alarms or improved bicycle helmets, and these reduce our risk of death slightly. A statistical life simply aggregates this. Of course, the fundamental issue—that if you die you cannot be compensated by a later run of good luck—holds. But if we were to treat all the normal risks of life as requiring some sort of special treatment, as if we had a right not to be subject to risk of death, then nothing would be possible. (Elsewhere I have called this the ‘problem of paralysis’ [4].)

When risks rise, and especially if they become concentrated on particular groups, risk cost-benefit analysis becomes much more problematic. We can see this concern reflected in current UK policy. Very low risks are simply accepted as part of life and no special measures are needed. At a particular threshold a higher, though still low, level of risk is taken as acceptable if it would be disproportionately expensive to eliminate. This is the level at which risk cost-benefit analysis applies. At higher levels, special steps should be taken to reduce the risk, and the highest levels should simply not be permitted unless there are special circumstances. In other words, cost-benefit analysis is seen as the appropriate methodology for dealing with risk in the moderate range—not so low that it just fades into background noise, or so high that it rings alarm bells [6].

Something else that can ring alarm bells is high variation from year to year. In the case of transport safety, statistics from one year tend to be very similar to the years before and after. But compare this with the safety of a nuclear power station. If we are told that a nuclear power station will cause three statistical deaths a year, while supplying vast quantities of energy, we might accept that as a cost worth paying. But consider two different scenarios. In the first, leaking radiation causes three deaths from cancer every year. In the second, most years no-one dies, but we expect a serious incident every thirty years in which ninety people die. Risk cost-benefit analysis suggests that (ignoring discount factors) the two cases are identical. But at least in terms of how they are perceived they may feel very different. Hence where there is the possibility of a single incident causing a large number of deaths some will wish to be more cautious in the use of cost-benefit analysis.

However, the argument that a ‘statistical life’ is really the aggregation of many small risk reductions also allows us to answer a second, related criticism. Standard cost-benefit analysis requires all values to be reduced to monetary terms, as reflected in condition 2 above. But how can we put a price on life? The answer is that the method does not require us to put a price on life. Rather we are putting a price on a small safety improvement. And this is a perfectly ordinary economic transaction,


as we noted in the case of smoke alarms or improved bicycle helmets. Putting a price on risk-reduction is familiar to us as consumers. Techniques building from this practice—either revealed preference or ‘contingent valuation’—allow a value for a statistical life to be estimated (which is not to say that the resulting figures are uncontested). (See, for example, [1].)

I will only discuss the third and fourth conditions briefly at this point as they will be important later on. The third was that the decision should be part of a repeating series of similar decisions. Of course, the decision to introduce a new technology is always in one sense a one-off. However, new technologies are being introduced all the time, and although some turn out to have enormous costs, others do not and are extraordinarily beneficial. As opponents of precaution point out, if we have a restrictive attitude to new technology we risk losing much that is of great benefit and low cost. Hence, we must not jump to unnecessary precaution. The trick will be to decide when precaution is necessary and when it is unnecessary, or, in other words, when the introduction of a new technology should be treated in a different way. I will return to this later, as it is the key theme of this paper.

Finally, I mentioned that the probabilities must run true. The threat is that either immediately or over time the benefits will accrue to one person or group and the costs will fall elsewhere. This is one of the most sensitive issues of all concerning risk regulation, and again I will return to it later in detail.
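The arithmetic behind two of the points above can be set out in a few lines: the expected-value equivalence of the two nuclear scenarios, and the construction of a ‘statistical life’ out of many small risk reductions. The figures below are invented for illustration; they are not taken from the chapter:

    # Expected annual deaths: chronic leak versus rare large accident.
    chronic_deaths_per_year = 3.0        # three cancer deaths every year
    rare_deaths_per_year = 90 / 30       # ninety deaths once every thirty years
    assert chronic_deaths_per_year == rare_deaths_per_year  # identical in expectation

    # A 'statistical life' as an aggregation of small risk reductions (assumed figures).
    travellers = 1_000_000               # people whose annual risk of death falls
    risk_reduction_each = 1e-6           # size of each person's risk reduction
    willingness_to_pay = 2.0             # what each would pay for it, in pounds

    statistical_lives_saved = travellers * risk_reduction_each            # 1.0 life
    value_of_statistical_life = willingness_to_pay / risk_reduction_each  # 2,000,000

    # Risk cost-benefit test for a safety measure priced at 1.5 million pounds.
    measure_cost = 1_500_000
    expected_benefit = statistical_lives_saved * value_of_statistical_life
    print("worth introducing" if expected_benefit > measure_cost else "too expensive")

On these assumed numbers the measure passes; double its price and it would fail, which is exactly the situation in which the method recommends against a safety measure that would, in all probability, save lives.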

1.4 Risk, Uncertainty and Radical Uncertainty

I have so far passed over the first condition above, that the situation is one in which the probabilities and outcomes are known. It is this that allows the calculation of expected utility. My own detailed work on risk has largely been on railway safety, where there are large statistical databases, and past frequencies are a good guide to present probabilities. The mechanical systems are relatively straightforward, with very few complete unknowns, and it is reasonable to present options and their probabilities. Yet this is the exception, rather than the rule. In other cases, it is a fiction to pretend we have a good basis for calculating probabilities. Hence it is common to distinguish decision under risk, where the probabilities are known, and decision under uncertainty, where probabilities are not known.

Decision under uncertainty is very common. Consider the offshore oil and gas industry. There have been a small number of very spectacular accidents in oil and gas rigs. Clearly it is a high-risk business. But it would be very foolish to try to conduct a risk analysis purely on the basis of statistics collected so far, given the limited data available. How, then, is risk analysis to be done? One way is by ‘critical factor’ analysis: how likely is it that a critical factor will fail, and what would happen? In some cases, frequency analysis is available at that level, but in others the problem repeats. There is also an issue of interaction between different critical factors that have as yet unknown effects in combination. Hence different analysts, looking at


the same situation, can make wildly different judgements about the probability of different types of failure. In these cases we are dealing not with risk, but uncertainty. In the case of oil and gas I have assumed that we know the possible hazards, but lack solid information that allows an estimate of probabilities. Nevertheless, it is possible to talk about upper and lower bounds, and use other methods that allow the application of a modified form of risk cost-benefit analysis. But for very new technologies we often did not have an understanding of what problems we might have in store when they were introduced. Examples I have given so far are the disposal of nuclear waste and of unwanted plastics, the depletion of the ozone layer, asbestosis and GM crops. Examples can be multiplied. Some detrimental effects of technologies must have been anticipated but others are wholly unexpected. Standard cost-benefit analysis in effect treats the probability of unpredicted events as zero, which is clearly problematic. But if you do not know what the possible outcomes are, it is impossible to make a fair assessment of their probabilities, even in terms of an upper and lower bound. This is the problem of decision-making under radical uncertainty.

What can be done? From the perspective of existing theory, the natural approach is to try to derive a formal method to help with such cases. Can risk cost-benefit analysis be extended through some clever techniques for estimating unknown possibilities? Is there some way of using a ‘maximin’ principle to help us steer clear of the most damaging possible outcomes, or could a version of a ‘precautionary’ principle help us? Or do we need to take a different starting point? It may be that there is a type of path dependency which has led from simple and useful cost-benefit analysis to a potentially very damaging possibility of using formal methodologies in contexts to which they cannot apply. This, so opponents claim, has the consequence of advancing the interests of a political and economic elite. I will try to explain this suggestion later in the paper. First, I need to say more about the claimed path dependency and the alternatives.

Recall simple cost-benefit analysis. Under the assumptions set out above, (almost) all people will benefit in the medium term if social and economic policy attempts to maximise average expectations in every decision. Those who lose out now will, if the probabilities run true, be more than compensated in the future (ignoring the case of death). But in order to apply that technique it is necessary to reduce all values to a form in which they can be fed into the necessary formulae. We need to value all inputs—including life and death—in financial terms, and this can be done by a variety of valuation methods. Activists and philosophers have been worried about the reduction of life, and other values, such as the natural environment, or the preservation of species, into monetary terms. However, sometimes it is accepted that it is necessary to do this in order to have a seat at the table when cost-benefit analysis is being discussed (I owe this point to John O’Neill). With some reservations, therefore, the reduction of everything to monetary values can be helpful as, for example, it allows environmentalists to discuss the matters that concern them in terms that economists recognise and policy makers need if they are to take everything into account using formal methods. As long as it


is understood why this is being done, the damage of giving everything a cash value can be limited. There can be, therefore, a pragmatic reason for reducing all value to the monetary when considering decisions under risk, and, perhaps, under some level of uncertainty.

But consider the case of radical uncertainty, where there is incomplete knowledge of the range of possible outcomes and hence very little understanding of any probabilities. The case for reducing everything to a monetary value in order to be fed into a formula recedes dramatically. Before, it was a pragmatic accommodation; the reduction is needed if we are to use the method. Now, though, things look different. The reduction seems as much ideological as technical. If there is no decision formula there is no particular reason to render all inputs into a form that can be used in the formula. In fact, doing so can do more harm than good, as I will endeavour to explain in the next sections.
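The ‘upper and lower bounds’ manoeuvre mentioned above can also be shown concretely. A minimal sketch, with assumed numbers: where an analyst can only bracket the probability of a major failure, the expected value comes out as an interval rather than a point, and rival analysts can defensibly land anywhere inside it. Under radical uncertainty not even the bracketed outcome list is available, so the calculation cannot begin at all:

    # Expected-value bounds when a probability is only known to lie in a range.
    p_low, p_high = 0.001, 0.01    # bracketed annual probability of major failure (assumed)
    failure_loss = -500_000_000    # cost if the failure occurs (assumed)
    annual_gain = 20_000_000       # benefit in a failure-free year (assumed)

    def expected_value(p: float) -> float:
        """Expected annual value for a point probability of failure."""
        return p * failure_loss + (1 - p) * annual_gain

    ev_best = expected_value(p_low)      # about +19.5 million
    ev_worst = expected_value(p_high)    # about +14.8 million
    print(f"expected annual value lies in [{ev_worst:,.0f}, {ev_best:,.0f}]")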

1.5 The Risk Triangle and Moral Hazard

It has been pointed out by Hermansson and Hansson [5] (see also [3]) that in any decision regarding risk there are generally three roles to consider: the agent deciding that the risk should be taken; the agent that will benefit if the risk pays off; and the agent that will lose if it fails. This simple insight is very powerful, for it allows us to understand that the number of possible structures of risk decision-making is limited. Suppose that agents A, B, and C are different people. If only one agent can occupy each role, the only possible structures are as laid out below:

Structure         Decision maker   Benefits go to   Costs go to
Individualism     A                A                A
Paternalism       A                B                B
Moral hazard      A                A                B
Moral sacrifice   A                B                A
Adjudication      A                B                C
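The classification can be stated mechanically. A minimal sketch (the function and its encoding are mine, not the chapter’s): given which agent occupies each role, the pattern of identities fixes the structure.

    def risk_structure(decider: str, beneficiary: str, cost_bearer: str) -> str:
        """Classify a risk decision by who occupies each of the three roles."""
        if decider == beneficiary == cost_bearer:
            return "individualism"
        if beneficiary == cost_bearer != decider:
            return "paternalism"
        if decider == beneficiary != cost_bearer:
            return "moral hazard"
        if decider == cost_bearer != beneficiary:
            return "moral sacrifice"
        return "adjudication"  # three distinct parties

    assert risk_structure("A", "A", "A") == "individualism"
    assert risk_structure("A", "B", "B") == "paternalism"
    assert risk_structure("A", "A", "B") == "moral hazard"
    assert risk_structure("A", "B", "A") == "moral sacrifice"
    assert risk_structure("A", "B", "C") == "adjudication"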

Of course, there are variations on these, where, for example, the benefits are shared between someone who makes the decision and someone who does not, but in terms of pure structures only these five are possible.

The first is morally speaking the most straightforward. All costs and benefits are absorbed by the decision maker. In practice, however, for any significant decisions this situation is rare. The government is normally a supervisory decision maker, providing rules for what is and is not permitted. Where the government is the only decision maker, regulating what will happen to a single individual, we move to the second case, paternalism, which also applies whenever one agent takes a decision for another. Again, some level of shared decision making, with the consent of both, is the much more common model in practice. Paternalism does raise moral questions:


why should people not be permitted to take the risks they want to? This is particularly relevant regarding the regulation of drugs, for example.

The third case, however, is the most troubling: where one agent is in a position to decide whether a risk should be taken, and would benefit if it works out, but another would pay the cost if it fails. This is the source of many of our difficulties in policy, including, according to some analysts, the financial crisis of 2008. It is highly relevant to the introduction of risky new technologies and I will come back to it in the next section. For completeness, I should finish the picture first.

The fourth case appears to be an unusual one, for one agent can decide to take a risk, but while benefits would go to others any cost will fall on the agent who decides. If it does go ahead it is an act of self-sacrifice. Elsewhere I called it ‘maternalism’ as it is a behaviour that some mothers will engage in for the sake of their children [19]. However, we should understand that when faced with this situation the most likely outcome is that the risk will not be taken: why should I decide to take a risk if others will benefit but I would suffer the costs? It may seem fair enough to decline to take a risk from which you cannot benefit, but consider the issue from the point of view of cost-benefit analysis. If the potential costs to me are smaller than the potential benefits to you, then it is, in a sense, inefficient, even if perfectly understandable, for me not to take the risk. In policy terms this is very important, and could be a significant barrier to growth. The refusal to make this ‘moral sacrifice’ could be called ‘moral cowardice’. However, it is not central to the current analysis and so I will leave it aside for the purposes of this paper.

Finally, there is the case where one party takes the decision, another would get the benefit and a third suffer the loss. This is, perhaps, standard for government when regulating behaviour that impacts different groups in different ways. This is why I have called it ‘adjudication’. The difficulties arise if all the benefits begin to pile up on one side, over time, case after case.

Understanding these different structures allows us to see some of the potential difficulties with cost-benefit analysis. Simple models aggregate costs and benefits without consideration of where those costs and benefits fall. In principle no distinction is made between the five structures outlined. But morally we can see that they are different, and we can also understand why I have insisted that cost-benefit analysis is relatively unproblematic only as part of a repeating series in which the probabilities of receiving costs and benefits run true. That will not be so in repeated plays of several of the structures outlined.

It is also worth noting that the issues of moral hazard and moral sacrifice are another way of identifying ‘negative externalities’ and ‘positive externalities’ in economic theory. It is a commonplace in economics that the free market, left to itself, will ‘over-supply’ goods with negative externalities, as this is a way of dumping costs on others, and ‘under-supply’ goods with positive externalities, as individuals will not want to take on costs for the sake of others. If we think of cost-benefit analysis as providing a benchmark of efficiency, and therefore a normative account of what ought to be supplied, then standard economic theory maps onto the table above very well. For the standard policy response is to find ways of ‘internalising the externalities’


which, in my terms, is also a way of attempting to move from a problematic structure to a less problematic one, either individualism or adjudication.

To have a fuller understanding of any case we need to know not only the structure, but the types of costs and benefits that are likely to be engaged. Risk analysis begins with an attempt to understand all factors in play, and to formulate a list of possible costs and benefits in qualitative, realistic, terms. For example, in deciding whether to open a new runway at an airport, it will be important to consider such things as air pollution to those who live nearby, as well as noise pollution, disturbed sleep and traffic congestion. Benefits would include reduced travel time for those able to make use of the airport, additional flights, and the commercial stimulus. Some of these, such as increased profits, are straightforwardly stated in monetary terms, but others, such as loss of sleep, are much more difficult to translate. As mentioned previously, where the decision is being made by means of a formula that requires inputs in monetary terms, it makes great sense to convert everything into monetary terms, uncomfortable though that may be. But if the decision is being made without the use of a formula, then this appears to be an unnecessary, and, I will argue, potentially highly damaging step.

For let us consider the types of costs and benefits that can be associated with the decision to introduce a new technology. As an example, let us return to the introduction of GM crops, which has been accepted in the United States of America, but was much more controversial in Europe. Those in favour of introducing GM crops regarded the issue as essentially one where those who understood science favoured GM crops whereas those who were opposed were scientifically illiterate. Defenders of GM claimed that the technique is not very much different from the widely accepted practices of selective breeding or of creating hybrids. Taboo, suspicion or sheer ignorance was alleged to be the reason for opposition. (For one discussion, see Nuffield Council [11].)

There is no denying that some opponents did not understand the science well. But at the same time other, more subtle, lines of opposition also existed. One was purely at the level of risk analysis. It may be true that the new techniques were similar to selective breeding. But that does not mean that they are as safe as selective breeding. Perhaps they would have risks that we are simply not aware of. Hence, while defenders framed the issue as one of straightforward cost-benefit analysis, with the benefits far outweighing the costs, opponents thought that the situation was one of radical uncertainty, at least in part. And there is a cost to living with risk or uncertainty, even if nothing bad ever happens (Wolff and de-Shalit [13, 21]).

Second, it was also noted that once GM crops became widely used, all farmers would have to use them if they wanted a competitive yield of their crops. But the seeds would be patented, and the company that held the patent would have near monopoly power, which would make farmers very vulnerable. Even if the seeds were initially introduced at low cost, the manufacturers could abuse their monopoly position, giving them not only economic power, but also political power, as the productivity of an entire region could be in the hands of one manufacturer. This is particularly worrying in a developing country where farmers may have few resources to fall back on if things go wrong for them.


Hence in enumerating the costs and benefits engaged, it is necessary to take a very wide view. The possible benefits obviously include the well-being of those who have better nutrition and the farmers who might make more money, as well as the profits of the corporate interests who invent and market the new seeds. But it also includes the power and prestige of those who introduce and profit from the new technology. Being early to market with GM crops will attract attention from industry and government. It will lead the company executives to receive honours, to be invited to lecture at business schools, to sit in government advisory roles and help shape future legislation, which in turn will be more likely to be moulded round their interests. Prestige and power will follow from the roll-out of a new technology, provided it isn’t an instant disaster. But prestige and power are zero sum, and hence one party’s gain is another’s loss.

How much of this can and should be reduced to purely monetary terms in order to allow formal risk analysis to proceed? In terms of what is possible, the company involved will reap economic benefits, and these can be included unproblematically. A heroic attempt at a reduction of the intangibles of power and prestige to financial terms is no doubt also possible. But to repeat an earlier point, if there is no formula to enter the values into, it is unclear why it is worth going to the trouble, especially as estimates would have a vast margin of error. And to amplify a point made earlier but not yet explained, converting everything into a financial quantity bleaches out the nuance of the detail, and takes off the table what so alarms activists. It is not so much that if the innovation goes ahead some people will get very rich, but rather that those people will become powerful in diverse ways, none of which are beneficial for those who now become beholden to the company for their livelihood. These effects can last long into the future, even when the technology is outdated and replaced by others.

Hence economic modelling hides the politics. Cynics will say that this is precisely why economic modelling is so popular. But the question of how to regulate a new technology with unknown effects is a political question, not just a question for technocrats.

1.6 Decision-Making Under Radical Uncertainty

I have suggested that if we start from a position in which risk cost-benefit analysis is the default position for making decisions about risk and uncertainty, then as we move to situations of radical uncertainty, to which it does not apply in straightforward terms, the temptation is to try to modify the approach to deal with the more difficult cases. So far in this paper I have introduced four obstacles. First, because we do not know what the possible risks are, or their probabilities, we do not know how to maximize expected benefits. Second, some of the options may include devastating risks which would be masked by an averaging process. Third, in a one-off case we cannot assume that those who lose out will be compensated by gains elsewhere. And finally, attempting to reduce all costs and benefits, such as long term political


influence, to a monetary value is problematic. To overcome those obstacles it is tempting to try to move to a principle that side-steps the problems.

Sometimes it is suggested that there is a ‘precautionary principle’ that we can appeal to in such cases. Quite what the precautionary principle is supposed to be is a matter of debate, but it seems fairly clear that it is not a principle in any decision-theoretic sense. Rather, as I have suggested elsewhere, it is better thought of as an attitude [20] (see also Nuffield Council [11]). Others have used the idea of a ‘precautionary checklist’ [10]. In my understanding, we should regard the precautionary attitude as a series of linked questions, as follows. The preliminary stage asks two questions about immediate risks and benefits:

1. Are the costs and risks of the new technology acceptable?
2. Does it have significant benefits?

A second stage considers the social need for the technology:

3. Do these benefits solve important problems?
4. Could these problems be solved in some other, less risky, way?

The precautionary approach suggests that we should proceed only if the answer to these questions is: yes, the risks are acceptable; yes, it has real benefits; yes, it solves real problems; and no, we cannot solve the problem in other ways.

Consider, for example, some opposition to the introduction of GM foods. First, there was a disagreement about whether the technology was known to have unacceptable risks, which is, of course, itself a vague notion. Defenders were keen to normalize the risks by pointing out the similarities between genetic modification and selective breeding, which has been common for centuries; critics pointed out the discontinuities with earlier approaches. Second, there was broad, if not universal, agreement that GM crops could bring real benefits in terms of increased crop yields. Third, there was a somewhat less broad agreement that the benefits would solve real problems by helping to feed the developing world or, less significantly, reducing food prices in the developed world. However, only the opponents were interested in the fourth question: could we end hunger in the developing world in other ways? Here they had many suggestions, such as changing the protectionist trade policies of the wealthy countries, or investing in irrigation, infrastructure and other known development techniques. The critics said that the developing world goes hungry not through lack of technology but through lack of political will. We know, so they say, how to solve the problem in a way that does not put the environment and even human life at risk. So why introduce a very risky new technology?

Now, the opponents of GM could be wrong. Perhaps we can’t feed the world without GM crops. In other cases, it may be that the new technology does address a problem for which we do not have other solutions. We are beginning to see, for example, such claims being made about geo-engineering to reduce the greenhouse effect (for discussion cf. RS/RAE [7, 14]). It is possible, therefore, that a new technology will pass this initial, technical, stage, at least in some circumstances. But even so, we need to move to a more difficult political stage in which we explore the longer


term effects of introducing technologies, in which we need to consider two difficult questions:

5. What are the possible longer term economic consequences of introducing the technology?
6. What are the possible longer term political consequences of introducing the technology?

Here it might be argued that there is little point in raising these questions, as we have already acknowledged that we are in a position of radical uncertainty; indeed this is the motivation for the discussion. Trying to answer these questions, it might be said, is to fall into the realm of fortune-telling. But in response, it is very likely that refusing to discuss them plays into the hands of those who wish to introduce the technologies, as it will make some of the significant benefits accruing to them, and costs to others, invisible. For these questions return us to the discussion of moral hazard, power and prestige.

In the case of GM foods critics argued that an enthusiasm for innovation plus corporate profit-seeking was the real motivation for pushing the technology forward. Corporate profit-seeking constitutes the most obvious moral hazard element. If GM crops work, and producers the world over are tied into the manufacturer’s patented products, profits will soar to the great benefit of shareholders, and those individuals who work for the corporation. Of course, there are risks to these same agents. In the extreme, the company will go bust and people will lose their jobs. But shareholders typically hold a diversified portfolio and accept some losses as a price of investing in innovation. Those developing and marketing the technology will have skills and experience that make them highly employable elsewhere, even if the company collapses and everyone loses their jobs. On the other side, if GM crops fail, or become too expensive, farmers, especially in developing countries, may lose their livelihoods without being able to replace them, particularly if their land is damaged by the failure of GM crops. And consumers of the GM crops could also greatly suffer if the worst fears are realized.

Hence the issue of moral hazard is real: those promoting the technologies and putting pressure on governments to accept them have a lot to gain and relatively little, in comparison, to lose. Those who will be using the technologies have something to gain, but an awful lot to lose. The economics are unbalanced between corporate and individual interests and this should give regulators reason to pause. At the very least they need to take a very hard look at any evidence presented to them by the corporate interest.

At this point the issue shades into the questions of power and prestige. I would not wish to enter into anti-business conspiracy theory in which companies develop products in order to strengthen their political power. Any political benefits could be an unintended consequence. But, and to simplify, let us suppose there are two types of unintended consequence: those that benefit the interests of those who run the corporation and those that are detrimental to their interests. It seems very likely that agents who are able to do so will take steps to mitigate or overturn practices that are detrimental to their interests, while ignoring those consequences that further

16

J. Wolff

their interests. They may even take them for granted as ‘just how things are’. In this case, the unintended consequences may well be that those who are responsible for introducing successful new products will receive accolades and prizes, and will be invited to join committees to shape future regulation of their area, where they will be able to consolidate their success and magnify its effects. In sum, we need to be particularly watchful of two types of problematic aspects regarding the introduction of a new technology. First, those who have developed the technology and are pushing for its introduction, and who would benefit from it, are very likely to focus much more attention on its benefits (to others) than on the costs and risks. However hard they try (and they might not try at all) it is very difficult to put self-interest entirely to one side and perform a properly balanced assessment. The second level concerns the political consequences, in terms of the consolidation of corporate power, and increasing vulnerability of those who are already vulnerable. The basic point is very simple. As more people become dependent on the products of a smaller number of suppliers, the power of the supplier increases, even to the point of ‘regulatory capture’ where they help compose the regulations that they must follow [12]. This process is self-fueling unless checked. Therefore, even if there is an agreement that the technology should go ahead, safeguards need to be put in place to provide a check on growing corporate power, if possible. To put this point in terms of the risk triangle introduced earlier, we worry, initially, that corporations producing goods such as GM crops are in a position of moral hazard. Unchecked they will hope to make gains for themselves, pushing risks on to the vulnerable consumers. Government is brought into regulate, thereby transforming the situation into something closer to what I called adjudication in which the government acts as a neutral broker. However, if the technology is allowed, a sphere of regulation will develop, which in turn will draw on those who know the industry. If it turns out that the regulators are sympathetic to corporate interests then we have an element of regulatory capture, where the industry, in effect, regulates itself, in part at least. This, then, converts what was meant to be adjudication back into moral hazard. Arguably this mechanism is the governance failure of our age, repeated over and over again, and incredibly hard to guard against. Nevertheless, even when we have observed such risks, it may be the case that going ahead is the right thing to do, even in the face of likely adverse long-term political consequences. My argument is that the listed questions raise considerations to be taken into account, not a set of necessary and sufficient conditions.

1.7 Conclusion

My conclusion is a fairly commonplace one. Although many areas of risk can be approached and dealt with by the technical tools of economic assessment, doing so in the case of risky new technologies is highly problematic, as, first, the tools are not appropriate for cases of radical uncertainty, and second, attempting to use these tools hides the deeply political dimensions of new technologies. There is no real alternative to having what will be fraught and contested debates about the costs and benefits of the technologies, where those costs and benefits need to be understood in their complexity rather than reduced to a colourless monetary sum. All the questions raised in the previous section need to be addressed. In particular, however, the absolute key question is whether the new technology has a real chance of being the best response to a genuine problem. If not, it can be hard to see why it is worth taking a step into the unknown, with the variety of problems to which it can leave us open.

Acknowledgments My thanks to audiences in Kobe, Oxford and Reading for their very helpful comments, and to Tom Simpson, Karthik Ramanna, and Henry Shue for discussing the themes of the paper. I owe a special thanks to Hélène Hermansson and Sven Ove Hansson for giving me the opportunity to respond to Hélène's Ph.D. thesis, which contained their joint work and set me on the path explored here.

References

1. Carthy, T., Chilton, S., Covey, D., Hopkins, L., Jones-Lee, M. W., Loomes, G., et al. (1998). On the contingent valuation of safety and the safety of contingent valuation: Part 2: The CV/SG "chained" approach. Journal of Risk and Uncertainty, 17, 187–213.
2. Drèze, J. (1962). L'utilité sociale d'une vie humaine. Revue Française de Recherche Opérationnelle, 6, 93–118.
3. Hansson, S. O. (2018). How to perform an ethical risk analysis (eRA). Risk Analysis, 38, 1820–1829.
4. Hayenhjelm, M., & Wolff, J. (2012). The moral problem of risk imposition. European Journal of Philosophy, 20(S1), E26–E51.
5. Hermansson, H., & Hansson, S. O. (2007). A three-party model tool for ethical risk analysis. Risk Management, 9, 129–144.
6. HSE. (2001). Reducing risk, protecting people. London: HMSO.
7. Lenzi, D. (2018). The ethics of negative emissions. Global Sustainability, 1(e7), 1–8.
8. Manson, N. A. (2002). Formulating the precautionary principle. Environmental Ethics, 24, 263–274.
9. Munthe, C. (2011). The price of precaution and the ethics of risk. Dordrecht: Springer.
10. Myers, N. J. (2005). A checklist for precautionary decisions. In N. J. Myers & C. Raffensperger (Eds.), Precautionary tools for reshaping environmental policy (pp. 93–106). Cambridge, MA: MIT Press.
11. Nuffield Council on Bioethics. (2004). The use of genetically modified crops in developing countries: A follow-up discussion paper. London: Nuffield Council on Bioethics.
12. Ramanna, K. (2015). Thin political markets: The soft underbelly of capitalism. California Management Review, 57, 5–19.
13. Roeser, S. (2014). The unbearable uncertainty paradox. Metaphilosophy, 45, 640–653.
14. RS/RAE. (2017). Greenhouse gas removal. London: Royal Society/Royal Academy of Engineering.
15. Schelling, T. (1968). The life you save may be your own. In S. Chase (Ed.), Problems in public expenditure analysis (pp. 127–162). Washington, DC: Brookings Institution.
16. Setright, L. J. K. (2004). Drive on!: A social history of the motor car. London: Granta Books.
17. Wolff, J. (2006). Making the world safe for Utilitarianism. Royal Institute of Philosophy Supplement, 58, 1–22.
18. Wolff, J. (2020). Ethics and public policy (2nd ed.). London: Routledge.
19. Wolff, J. (2011). Five types of risky situation. Law, Technology and Innovation, 2, 151–163.
20. Wolff, J. (2014). The precautionary attitude: Asking preliminary questions. In Synthetic future: Can we create what we want out of synthetic biology? Special report, Hastings Center Report, 44(6), S27–S28.
21. Wolff, J., & de-Shalit, A. (2007). Disadvantage. Oxford: Oxford University Press.

Chapter 2

Gradation of Causation and Responsibility: Focusing on "Omission"

Tsuyoshi Matsuda

Abstract For a philosophical foundation of interdisciplinary research on the public problems of technology, it is important to focus on the problem of omission, not only theoretically, through studies on the ontology of actual causation and responsibility in a total picture, but also concretely, through a case study of contrasting asbestos lawsuits in the US and Japan. By examining the contributions of Hitchcock and Halpern, and of Mumford and Anjum, to explicating the grades of both causation and responsibility from a realist viewpoint, a path is opened to recognizing the necessity of a "precautionary attitude" that does not omit to prevent the problematic implementation of risky technologies and materials. This paper argues further for a version of the principle "With Great Power Comes Great Responsibility" from the standpoint of advocacy for public health. It is claimed, finally, that experts' inaction in regulation in the public sphere is ethically responsible both in the decision-making over new technology and for actual hazards, although the grades of causation and responsibility in individual cases depend on contexts influenced by social history and cultural values.

Keywords Causality · Responsibility · Omission · Gradualism · Normality

2.1 Introduction

This essay attempts a philosophical foundation for interdisciplinary research on the public problems of the social implementation and use of powerful bio- and environmental technologies. In this paper, the author attempts to delineate explicitly the reasons why we must focus on the problem of omission, drawing on recent philosophical studies of the nature of causation, and to demonstrate the relevance of an applied philosophical examination of the historical problem of omission through case studies based on his action research. In this regard, the aim of this consideration is to explore the ontological and normative connectedness in our conception of causation and responsibility,
especially by focusing on "omission," not only theoretically from the viewpoint of the philosophy of causation, but also practically by comparing related lawsuits over asbestos hazards in the US and Japan, in order to provide a concrete foundation for this research.

This paper begins by confirming the advantages and limitations of Hitchcock and Halpern's conception of the grade of causation ([8], hereafter H&H), in order to clarify the problems of the ontological status of omission as a base for further discussing its ethical and legal positions (2). In the second and third parts of this paper, after indicating the relevance of H&H's value-neutral concepts of "normality" or "default" (3), the author briefly introduces the contrasting results of several lawsuits as cases for a historical examination of asbestos hazards (4). The author then analyzes another significant conception of the grade of causation and responsibility from the studies of Mumford and Anjum (hereafter M&A) on the causal factors of dispositional "powers" and their equilibrium, in order to shed light on the subtle ontological and ethical position of "omission" from a more realist standpoint (5). Through these examinations, a path is opened to rightly recognizing the ethical necessity of implementing a mature "precautionary attitude" that either prevents, or does not omit to prevent, the ethically problematic implementation of risky technologies or materials. Finally, the author argues for a specific version of the principle "With Great Power Comes Great Responsibility" ("GR"), raised by M&A in their interpretation of Moore's ontology of law concerning omission, from the moral perspective of advocacy for public health as an indispensable factor of environmental ethics (6). It can then be conclusively claimed that, typically, experts' omission of the prevention of hazards in their regulation or management of the public sphere is both causally real and ethically responsible for the issues that arise, such as the causing of hazards, although the grade and proportion in individual cases depend on the circumstances and ethical contexts of omission, influenced by social history and cultural values (7).

In other words, the contention of this essay is that we can and should more appropriately discuss the grade of moral and legal responsibility for the use and implementation of powerful technology by elucidating the structural complexity of actual causation from an ontological perspective. In order to make this claim persuasive, human action or inaction must also be ontologically positioned in such a way that each is closely related to its intention, thought, or value judgement within the structure of actual causation. Thus, we are required to comprehend problematic phenomena such as inaction or omission and prevention in a total picture of causation and responsibility. For this aim, a theory of basic action that does not separate our bodily actions from other kinds of physically occurring movements in the world is useful.

The problems of this paper run in two directions: one is toward the causal ascription of past and present hazards or severe accidents post facto, as ethically and legally contested in society and law, while the other is toward the public decision-making or risk-taking over the profits and detriments of implementing a new technology ex ante, as socially and academically discussed in the context of preemptive prevention and precaution, in contrast to a proactive attitude to innovation (cf. Ichinose). We are often faced with the predicament that unanimous accord among stakeholders or consumers is not easily reached. Thus, we have sometimes lapsed into suffering the results of culpable negligence or irredeemable heavy costs.

2.2 "Gradualism" of Causation

From the background of the philosophy of causation, the concept of the "grade" of causation is contrasted both with the "universalist" philosophy of causation, such as Humean "constant conjunction" (or regularity) and David Lewis's semantic understanding of "counterfactual dependence," and with related trends of causal "pluralism" influenced by the later Wittgenstein [17]. In this context, "gradualism" about causation can be presented as an alternative that gives more persuasive explanations of actual or singular causation in ethically and legally contested cases in technologically developed societies, by encompassing human factors such as will or values in models or frameworks for causal ascription. By examining the validity of claims related to "gradualism" about causation, theoretically interesting and normatively significant phenomena, such as prevention ("preemption") and its opposite, "omission," in individual actions and collective activities within networks of various phases of actual causation, can be presented without being excluded as causes, as they are in the "physicalism" of Michael Moore's Causality and Responsibility [23]. H&H's model and the asbestos hazard are reviewed as cases of corporate management for safety and of administration, in connection with legal and ethical responsibilities, in order to demonstrate the institutional and public necessity of precautionary "attitudes" in the governance of the implementation and use of new technologies.

The first clue for our consideration is the idea of "graded causation" by H&H. One of their core claims is that the ordinary notion of actual causation is graded rather than all-or-nothing ([8], 445). From this conception, the problems of the graded ascription of causality and "graded responsibility" are presented in parallel with those of the complexities of actual causation, in the context of the responsible research and innovation of bio- and environmental technologies. In this section, after indicating the significant differences between causal "pluralism," represented by the later Wittgenstein, and H&H's prima facie similar position in "Humean Bondage,"1 the ethical relevance of H&H's conception of a "ranking" of "normality" for the ascription of actual causation and responsibility is highlighted, focusing on cases of omission.

1 Cartwright also defends causal pluralism in the context of economics and its relation to policies, in which the important concepts of modularity and the method of Bayes nets are critically examined ([2], 43).

H&H elucidate their "gradualism" by basing it on extended structural equations built on Halpern and Pearl's (hereinafter HP) definition of causes. Our original question can be formulated as follows: Is it impossible for us to say that there is always one and only one cause of an actual event? As a matter of fact, it seems rather absurd and unnatural for an expert to talk about "the cause" when
thinking about the causes of hazards such as the meltdown at Fukushima Daiichi in 2011 or the victims of ARD (asbestos-related diseases) in Japan. We are almost always necessarily involved in discussing the responsibilities for such events and in deducing "late lessons" for prevention and precaution concerning powerful technologies.

The claim of "causal pluralism" is surely interesting enough to depart from the traditional understandings of causation by Hume or the (early) Lewis, to the extent that it is actually not easy to develop a universal theory of causation from constant conjunction or counterfactuals, because there are some counterexamples in which neither regularity nor transitivity can be found, especially in cases of actual causation.2 To approach this issue, several cases of "preemption" are illustrated.

2 We can say that this is a "standard" view in the philosophy of science today [24]. As is well known, Lewis later pursued another line of analysis of causality using the concept of ancestral influence, for instance.

While the claim of "causal pluralism" is persuasive to some degree, it carries some risk of allowing relativism or agnosticism insofar as it drops the monist strategy of a univocal explication of the responsibilities for particular events, owing to the linguistic tendencies of philosophical explanations of causation. In this regard, the strategy of H&H's two-stage theory of causation and "graded causation" is advantageous in its ability to avoid this epistemological impasse of "causal pluralism." According to H&H, HP's definition of actual causation and its structural equations can overcome these difficulties of "causal pluralism" by integrating a variety of causation types. They further propose normative concepts such as a "default" or "normality" ranking for valid ascriptions of both the causes of, and the responsibilities for, actual events. H&H attempt to provide procedures by which responsibilities can be ascribed for actual events, which cannot be done solely with HP's structural equations, by modelling the actions and inactions of agents. It is of note that their approach can open the possibility of ranking the responsibilities of agents on a measure of "graded causation" by getting right the contemporary theories about the legal or ethical status of an agent's omission. Among these, a spectrum of theories can be found: as a legal philosopher, Moore does not admit omission as a cause at all. In opposition, Lewis accepts it as a real cause, while Hall finds it secondary. Moreover, McGrath believes that the causal status of omission depends on its normative position ([8], 437).

In fact, we have experienced severe accidents or diseases in public health that occurred, at least seemingly, due to omissions in the processes of industrial production and management by companies or governmental offices, including policy-making to regulate the implementation of technologies or materials. Therefore, one of the most important constraints on our examination is that we are now required to prevent "similar" disasters, or to implement proper precautions against them, so as not to allow such calamities to happen again. In this respect, philosophically speaking, a sheer "physicalism" of causation must be rejected or mitigated by taking components of the thoughts and judgements of agents, especially normative components like values, institutions, and economic motivations, into our consideration of actual causation. In other words, this issue can be concretized in contexts of applied philosophy
related to finding the causes of public health or epidemiological issues, insofar as the socio-economic and legal-ethical elements of technologies, as well as the activities of mass production and management methods, construct the complex contexts of these issues. From the philosophy of science, it is clear that the problems of causation cannot be approached by using the simplified "closed system" models of classical physics, which could dispense with such complicated components.

Here, a classic example [M] from Mackie is presented to illustrate the problem: searching for the cause of a house fire, which an expert investigates after the fire has gone out. The expert confirms that a short circuit caused this fire. Mackie assumes that this fire would not have taken place if there had been an automatically operating fire-extinguishing device nearby, or no flammable materials at all. The short circuit alone was not sufficient to cause the fire; yet it remains a necessary, non-redundant part of a complex of conditions that was together sufficient for this event. This is the INUS condition.3 This usually means that the short circuit (an accident of an artefact) is the cause of this fire in relation to the whole complex of conditions. However, on the whole, these conditions are neither a necessary nor a sufficient condition of fire in general, because other conditions, such as an arsonist (an intentional agent) or a lightning bolt (a natural hazard), could have caused it as well. In this case, it is not easy to specify clearly who or what is responsible for the house fire. In a philosophical sense also, it is difficult to delineate "the" cause. To do so, we must consider not solely the natural and technological factors of materials, but also legal-ethical and socio-economic elements, such as the skill of the electrical technician or the paying capacity of the owner for working appliances, for example.

3 INUS is an abbreviation of "insufficient but necessary part of an unnecessary but sufficient condition." The issue here is the actual cause of a fire. A short circuit is a "contributing condition of a fire." The INUS condition means that the effect E may have occurred without the cause C, but C must exist when E really occurs [17].
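Mackie's condition can be made concrete with a small truth-functional sketch. The following Python fragment is only an illustrative toy encoding of example [M]; the chosen factors (flammables, a missing sprinkler, arson, lightning) are assumptions made for this illustration, not Mackie's own formalization.

```python
# Toy model of Mackie's house fire [M]: the fire occurs if the short-circuit
# complex is complete, or if some rival sufficient condition obtains.
def fire(short, flammables, no_sprinkler, arson, lightning):
    return (short and flammables and no_sprinkler) or arson or lightning

# Insufficient: the short circuit alone does not guarantee a fire ...
print(fire(True, False, False, False, False))   # False
# ... but it is a necessary (non-redundant) part of its sufficient complex:
print(fire(True, True, True, False, False))     # True  (complex complete)
print(fire(False, True, True, False, False))    # False (drop the short circuit)
# And that complex is unnecessary: the fire can occur without it.
print(fire(False, False, False, True, False))   # True  (arson alone suffices)
```

Read off the four prints, the short circuit is an Insufficient but Necessary part of an Unnecessary but Sufficient condition, which is all that INUS asserts.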

The problems of causation and responsibility require us to bridge discussions in the philosophy of causation with those in the ethics of responsibility, so as to deduce implications for our technology, economy, society, and history, if we are to surpass the intellectual enterprise of solving philosophical puzzles such as those of multiple causation or alternative causation in the legal sense. This is a simplification of the problematic subject of this research.

Before relying upon the points of H&H's two-stage theory of causal analysis, a general philosophical view of the problematics of causation must be outlined as a precondition. Hitchcock critically appeals to the Thesis of Humean Bondage (THB) in his Humean Bondage [10], a thesis that has arguably produced a series of pseudo-problems of causation. Metaphors such as those of "chains," "cement," "connection," and "bond" have been used to elucidate the aim of philosophical investigation as analyzing "the relation" sui generis called "causality" ([10], 4). According to the Humean view, two events are connected by causal relationships, whatever the concrete situation may be. This kind of relationship is said to be "unique." However, this project is a sort of obsession, insofar as we never see a unique definition of causation even in such an epoch-making book as Pearl's Causality, in which we can learn a variety of concepts like causal efficacy, relevance, whole results, direct or indirect results, actual cause, and contributing cause.

Against THB, H&H devise a two-stage strategy: the first stage is a pluralism about the essence of causation, in contrast to the contemporary theories of [1] counterfactuals, [2] probability, and [3] process; the second is the "graded causation" of actual events. These standpoints are featured respectively. First, the affinity of these three theories is to specify and separate a privileged class of entity as the "cause." For example, a natural law or "law of succession" is a paradigm for this, as we know it from Newton's second law of motion, in opposition to "accidental generalization." These three theories are all essentialist or universalist to the extent that they are bound to THB in pursuing the unique essence of causation.

[1] The causal theory of counterfactuals is based on the semantics of possible worlds to measure similarities among potential situations, in which "backtracking" conditionals are excluded. For instance, we can compare two possible worlds, "one possible world in which a match is struck" and "another world in which a match is not struck," by using an unreal conditional like "if the match were not struck, a fire would not happen." In this simple case, striking the match (as "difference making") can be seen as the cause of the fire in this possible world, that is, our actual world, if all other things are equal (ceteris paribus) as in the other possible worlds, in which a fire does not happen because we do not strike the match.

[2] Probabilistic causal theory can distinguish between causal relationships and mere correlations, as in the successful practices of epidemiology using methods such as RCTs (randomized controlled trials), in order to identify the cause of a disease or the effects of a medicine.4 For instance, we should and can discern the genuine causation of lung cancer by smoking from the mere "association" of stained teeth with lung cancer, because the latter two have nothing in common other than the cause of the former, namely, smoking.

4 In an RCT, test subjects are selected for a trial and then divided at random into two groups. If the numbers are large enough, and the division is genuinely random, then these groups should be much alike. The trial drug is given to the first group, which is called the treatment group. The other, unbeknownst to them, gets a placebo ([24], 58). If the probability of the effect of the drug is statistically significantly higher than that of the placebo, we can ascribe the cause of this effect to the drug. This epistemological structure of the confirmation of the cause of a disease is basically the same as the handling of two contrasted cohorts in epidemiology, as we know this relationship also from populations exposed to asbestos and mesothelioma, for instance.

[3] The process theory of causation separates certain physical objects or entities as causes from "spatio-temporal junk," such as mere forms, shadows, and light spots, in order to identify them as something that transmits the mark of having a "conserved quantity," such as kinetic energy, throughout the whole process of an actual event. We learn this type of causation from the tradition of physics. To this conception, we will later add a version of [3] by introducing the causal factors of dispositional "powers" and their balancing from the study of M&A, for our more realist explanation of omission as a sort of cause.
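The difference-making reading in [1] can be pictured with a minimal structural sketch of the match example above. The following Python fragment is an assumption-laden toy, not Lewis's possible-worlds semantics: "oxygen" simply stands in for everything held equal across the compared worlds.

```python
# Toy model of the match example for [1]: compare the actual world with the
# nearest world in which the match is not struck, all else held equal.
def fire(strike, oxygen=True):
    return strike and oxygen

actual = fire(strike=True)    # the match is struck: fire
nearest = fire(strike=False)  # closest non-striking world: no fire
print(actual, nearest)        # True False -> striking is the difference maker
```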

Although these three types of causal reasoning have greatly contributed to the pursuit of causality in science and technology, they have a common theoretical drawback in presupposing unique basic units, as follows. Hitchcock criticizes this "uniqueness" assumption in the philosophy of causation as a decisive problem for the second stage of causal analysis (ibid. 7). In [1], the cause is uniquely "the ancestral of influence that involves a type of non-backtracking counterfactual dependence" of its results [14]. Similarly, in [2], C is a cause of E if and only if C raises the probability of E relative to every relevant background condition B. In this case, the "background conditions B are those that need to be conditioned on in order to ensure that the probabilistic correlation between C and E is not spurious" (cf. Cartwright). Finally, in [3], C is a cause of E if they are connected by a causal process or a chain of causal processes.5

5 He also formulates the case of the traditional regularity theory, which presupposes the universal validity of natural laws: C is a cause of E if there are circumstances S such that, relative to S and the laws of nature, C is both necessary and sufficient for E.

Hitchcock himself posits a pluralism about the analysis of causation at the second stage. From the results of these approaches, he judges that THB is "a wild goose chase" (ibid. 9). Instead of pursuing a unique philosophical explanation of causation in vain, it is wiser for us to select appropriately the relevant relationships in accordance with the contexts of investigation and to combine flexibly those conceptual apparatuses to analyze causation. Still, this would remain a half-truth of his two-stage theory of causal analysis if we could not relate this strategy to the problem of "graded causation," especially insofar as we expect this pluralism to be utilized in various research on causation across the humanities and social sciences, not to mention the natural sciences. In Humean Bondage, Hitchcock summarizes his conclusion in defence of the methodological pluralism of causality: "what we should demand of a theory of causation is that it be able to identify the causal features of a given scenario that each side is attending to in making their judgements" (ibid. 9). Moreover, he takes all concepts such as causal explanation, rationality, and responsibility as components of the first stage of research, without searching for a philosophically unique understanding of causation along the lines of THB, even if the facts are controversial, or "whether one event or factor really causes another." Therefore, examples are illustrated below to highlight weaknesses in each THB position at the second stage and to acknowledge the advantages of his two-stage theory6 before approaching the problem of "graded causation."

6 Hitchcock examines some other interesting cases in discussing the disadvantages of THB. We select the cases here for our aim of discussing the problem of "graded causation and responsibility."

(1) The following case of two assassins suggests a theoretical difficulty with the presupposed transitivity of "influence" from the ancestral to the result or effect. There is a captain and an assistant. Although the captain ordered his assistant to shoot by crying "fire!" and the assistant shot at the target, the target saved her own life by ducking upon hearing the captain's voice. In this case [C], she would not have survived without hearing the captain's voice. Is his voice the cause of her survival? It is, however, rather unnatural to praise this man, because he did not
intend to save the life of the victim, only to kill her. If we consider his intention as a precondition of his order, it is difficult to admit a transitive relationship of the ancestral intention to kill the victim in this series of actions. Later, in their Graded Causation and Defaults, the voice is understood as one of the causes in a case of "bogus prevention," and we face the problem of the ranking or evaluation of causes, accounting for the order or intention and so on, because this event is "normally" thought to be an attempted assassination (Fig. 2.1).

Fig. 2.1 Preemptive prevention

(2) There is a famous counterexample to probabilistic theory from thrombosis, the formation of blood clots in one's arteries, as a side effect of birth control pills in the 1970s. While these pills were considered to be a cause of thrombosis for their high likelihood of causing this symptom, it was statistically known that the same pills lowered the risk of thrombosis in women under 35 years old who did not smoke and were capable of becoming pregnant. In this case, it is difficult to answer univocally the question "do birth control pills cause thrombosis or prevent it?" The latter fact can falsify the thesis "C raises the probability of E." The answer, whether yes or no, depends on the circumstances. Here, case [P] would also be an issue of corporate responsibility, as the producer of the pills would likely advertise the preventative effect described above to address the case in which a woman would not choose the pills because she thinks they cause thrombosis.7

7 As a matter of fact, according to Hesslow, although it is right and effective to tell the truth, the producers cannot morally reject the demand to develop safer products, because of the fact of prevention for some women, if they are required to do so. Through empirical research they were obliged to find the causes both of thrombosis and of the opposite, preventative effect of the pills, somewhere in the process of the anti-pregnancy function, as a component effect related to thrombosis, in order to eliminate this side effect while keeping the anti-pregnancy function. And this was indeed later realized.
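The two-sided statistics of case [P] can be made vivid with a short numerical sketch. All frequencies below are invented purely for the illustration (they are not Hesslow's data); the point is only that the answer to "does C raise the probability of E?" flips with the background condition B.

```python
# Invented conditional risks: P(thrombosis | pills, B) vs P(thrombosis | no pills, B)
# for two background conditions B. All numbers are illustrative assumptions.
risks = {
    "under 35, non-smoking, can become pregnant": (0.004, 0.010),
    "other women":                                (0.020, 0.008),
}

for background, (with_pills, without_pills) in risks.items():
    verdict = "raise" if with_pills > without_pills else "lower"
    print(f"Given '{background}': pills {verdict} the risk "
          f"({with_pills:.3f} vs {without_pills:.3f})")
# No background-free verdict exists: relative to the first condition the pills
# prevent thrombosis (by preventing riskier pregnancies); relative to the
# second they promote it.
```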

(3) Two cases should be mentioned for our consideration of omission and prevention. The first case is that of a gardener who forgot to water flowers, which then died. If he had watered them, they would have survived. Is his omission the cause of the flowers' death? (ibid. 20) This is our paradigmatic case [G], which is similar to real issues of hazards such as the ARD in [A] (see Fig. 2.2).

Fig. 2.2 Omission

When Lewis describes causation by omission [13], he seems to say that the outcome must be related to
some event that really happened.8 This case of causation by omission is favorable for our thinking and locution about intentions, values, norms, and institutions as "causes," which are given to us directly or are simply not physical as such.

8 This is called "quasi-causation" by some philosophers, because there is no actual interaction between the gardener and the dried flowers in this case.

The second case [B] is one of "preemptive prevention," slightly modified by the author (see Fig. 2.1). An outfielder catches a ball hit by a batter on a high school ground at time t. The following counterfactual is supposed to be true in this case: if he could not catch the ball, it would fly over the fence of the ground and shatter the window of a neighbor's house. It may typically be assumed that the outfielder prevents the breaking of the window by his "(early) preemptive action." This thought is intuitively right. However, someone might add that the window would surely have been broken only if both the tall fence had not been in place and his catch had not been effective. Moreover, the following question might be asked: which prevents the window from breaking, the outfielder or the fence? Some of us could answer, against our first intuition, that it is the fence, because the fence is tall enough to stop almost all the balls that are hit. Our intuition about preemptive prevention as the cause of a non-occurrence varies "depending on the nature of the auxiliary preventer" (ibid. 21), such as the existence of a tall enough fence. Thus, our judgment about the contributions of preventers, like the outfielder, varies with the supposed circumstances of the given problem.
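The overdetermined structure of case [B] can be displayed in a two-line model. The sketch below is an illustrative simplification (perfect preventers, no probabilities); the function name and its arguments are assumptions of the sketch.

```python
# Toy model of case [B]: two preventers of the same non-occurrence.
def window_breaks(outfielder_catches, fence_stops_ball):
    # The window breaks only if neither preventer intervenes.
    return not (outfielder_catches or fence_stops_ball)

print(window_breaks(True, True))    # False: the actual world
print(window_breaks(False, True))   # False: no catch, but the fence prevents
print(window_breaks(True, False))   # False: no fence, but the catch prevents
print(window_breaks(False, False))  # True: only removing both breaks the window
```

Since neither preventer makes a difference on its own, the simple counterfactual test stays silent, and our verdict about "the" preventer shifts with what we suppose about the auxiliary preventer, exactly as the text observes.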

If this is the case, our causal ascriptions of preventing events or accidents, and their legal and ethical evaluations, could be so relative to such situations that we must conceive of a wide range of possible situations as not "far-fetched" in considering the causes (and responsibilities), at least theoretically. However, this task is beyond the scope of a theoretical philosophy of causation in a rigid sense, because there are seemingly no steady general criteria by which we can decide which possibilities are "far-fetched" or not when attempting to find the causes of events and to ascribe responsibilities to something or someone. This is the reason that concepts such as "default" and "normality" are introduced by H&H in their Graded Causation and Defaults [8], despite the structural equations of HP [9], which encompass various types of causation as a strong tool, and in which the difficulties of THB are handled as the problem of the "isomorphism" of HP's causal model.

"Isomorphism" here refers to the assignment of values to variables for several primitive events, which raises the problem of variation in the causal ascription of a certain actual event according to the interpretation of the ontological status of the cause, as we will see in the cases below. To the author, however, the extended causal model has the possibility of specifying actual causes by also assigning "non-physical" variables like norms, thoughts, values, and institutions. The structural equations of HP make compatible the two demands of the law-likeness of causation and the contingency of actual causation, by controlling the contingencies through operational or experimental interventions. Thus, the starting point of our consideration is Definition 2.1 of the actual cause by HP, shown below.

Definition 2.1 (Actual cause) [9] $\vec{X} = \vec{x}$ is an actual cause of $\varphi$ in $(M, \vec{u})$ if the following three conditions hold:

AC1. $(M, \vec{u}) \models (\vec{X} = \vec{x})$ and $(M, \vec{u}) \models \varphi$.

AC2. There is a partition of $V$ (the set of endogenous variables) into two subsets $\vec{Z}$ and $\vec{W}$ with $\vec{X} \subseteq \vec{Z}$, and settings $\vec{x}'$ and $\vec{w}$ of the variables in $\vec{X}$ and $\vec{W}$, respectively, such that if $(M, \vec{u}) \models Z = z^*$ for all $Z \in \vec{Z}$, then both of the following conditions hold:

(a) $(M, \vec{u}) \models [\vec{X} = \vec{x}', \vec{W} = \vec{w}]\,\neg\varphi$;
(b) $(M, \vec{u}) \models [\vec{X} = \vec{x}, \vec{W}' = \vec{w}, \vec{Z}' = \vec{z}^*]\,\varphi$ for all subsets $\vec{W}'$ of $\vec{W}$ and all subsets $\vec{Z}'$ of $\vec{Z}$, where we abuse notation and write $\vec{W}' = \vec{w}$ to denote the assignment in which the variables in $\vec{W}'$ get the same values as they would in the assignment $\vec{W} = \vec{w}$ (and similarly for $\vec{Z}'$).

AC3. $\vec{X}$ is minimal: no subset of $\vec{X}$ satisfies conditions AC1 and AC2.

We must paraphrase this Definition 2.1 of actual cause by HP ([8], 424). A Boolean combination of primitive events is the "actual cause" if the following three conditions are satisfied:

AC1: A combination (conjunction or disjunction) of primitive events cannot be considered a cause of an event or a result unless both this combination and the event actually happen.

AC2 (a): It is possible that the result does not occur, because we can think of changing the values of variables somewhere on the causal path from a conjunction of primitive events to the result. In such a case, the "endogenous" parts of the conjunction of primitive events do not function at that moment, but they may still have indirect effects on the manifestation of the event ϕ.9

AC2 (b): The permissiveness of AC2 (a) with regard to the contingencies that can be considered is limited; only changing the values of variables in the conjunction of primitive events modifies the result. In other words, the setting of parts of the conjunction of primitive events can eliminate the spurious side effects that might hide changes in the values of variables in the conjunction of primitive events. Even if values of variables on the causal path are disturbed by interventions into some parts of the conjunction of primitive events, this disturbance does not influence the value of the result. Namely, the result remains true, insofar as the conjunction of primitive events is the actual cause, if a variable on the causal path is set to its original value in this context.10 With these two inventions, we can overcome one of the theoretical difficulties of counterfactual considerations of causation.

AC3: The minimality condition of the actual cause: no subset of the conjunction of primitive events satisfies conditions AC1 and AC2. It shows that parts inessential to the changes in the result in AC2 (a) can be eliminated, and therefore only the essential parts of the conjunction of primitive events must be considered.11

9 For simplicity, we mention only the case of conjunction. H&H's comment on AC2 (a): this is a twist on the standard counterfactual definition of actual cause. In such a case, we must intervene in more than one value of the variables of essential parts of the conjunction of primitive events, if necessary, as the captain could silently order the assistant to shoot the victim in case [C] of (1).

10 H&H's comment on AC2 (b): it implies that it is always permissible to intervene in the causal path to set the actual values of variables of parts of the conjunction of primitive events. The causal model may be modified by such an intervention, because more than one value of the variables of parts of the conjunction of primitive events changes through changing the value of a variable in this conjunction.

11 H&H's further comment on Definition 2.1: these conditions regulate the cases in which the conjunction of primitive events is the actual cause of the result when the result counterfactually depends on them. There is no disturbing part at all (it is not a mere standstill of some hidden elements) if both conditions of AC2 hold; that is, such parts of the conjunction of primitive events are empty, although they are hypothetically introduced to prevent the manifestation of the result.
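To see Definition 2.1 at work, here is a brute-force sketch that checks AC1 through AC3 on the bogus prevention model [C-2] analyzed in the next section (variables A, B, V with the equation V = A(1 − B), as in the text). The search procedure is an illustrative simplification written for this chapter, not HP's or H&H's own implementation.

```python
from itertools import product

# Endogenous variables in topological order, each with a structural equation.
VARS = ["A", "B", "V"]
DOMAIN = [0, 1]
EQS = {
    "A": lambda s: 0,                     # context u: the assassin refrains
    "B": lambda s: 1,                     # context u: the bodyguard adds antidote
    "V": lambda s: s["A"] * (1 - s["B"]), # victim dies iff poisoned and no antidote
}

def solve(do):
    """Evaluate the model, overriding equations by the settings in `do`."""
    s = {}
    for v in VARS:
        s[v] = do[v] if v in do else EQS[v](s)
    return s

def is_actual_cause(X, phi):
    """Brute-force check of AC1 and AC2 for a candidate cause X = {var: value};
    AC3 (minimality) is trivial when X contains a single variable."""
    actual = solve({})
    # AC1: X takes its claimed value and phi holds in the actual world.
    if any(actual[v] != x for v, x in X.items()) or not phi(actual):
        return False
    rest = [v for v in VARS if v not in X]
    for mask in product([0, 1], repeat=len(rest)):       # split rest into W / Z
        W = [v for v, m in zip(rest, mask) if m]
        Z = [v for v in VARS if v not in W]              # X is always inside Z
        z_star = {v: actual[v] for v in Z}               # actual values on Z
        for x_alt in product(DOMAIN, repeat=len(X)):     # alternative setting x'
            if list(x_alt) == [X[v] for v in X]:
                continue
            alt = dict(zip(X, x_alt))
            for w_vals in product(DOMAIN, repeat=len(W)):
                w = dict(zip(W, w_vals))
                # AC2(a): [X = x', W = w] makes phi false.
                if phi(solve({**alt, **w})):
                    continue
                # AC2(b): [X = x, W' = w, Z' = z*] keeps phi true
                # for every subset W' of W and every subset Z' of Z.
                if all(
                    phi(solve({**X,
                               **{v: w[v] for v, m in zip(W, wm) if m},
                               **{v: z_star[v] for v, m in zip(Z, zm) if m}}))
                    for wm in product([0, 1], repeat=len(W))
                    for zm in product([0, 1], repeat=len(Z))
                ):
                    return True
    return False

survival = lambda s: s["V"] == 0
print(is_actual_cause({"B": 1}, survival))  # True: the antidote passes the test
print(is_actual_cause({"A": 0}, survival))  # True: so does the change of mind
```

On the unextended definition, both the bodyguard's antidote (B = 1) and the assassin's change of mind (A = 0) pass the test as causes of survival; this over-permissiveness is exactly what the notions of "default" and "normality" in the next section are introduced to temper.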

2.3 H&H's Concept of "Normality" or "Default"

The point of our interpretation of actual cause from HP's Definition 2.1 is that these formulations, with their flexible operations or interventions into the values of variables, let us clearly see the plurality and complexity of the actual cause of an event as a combination (a conjunction, for simplification) of primitive events, not to mention the concretization of pluralism at the second stage. Through this step, this view of the plurality and complexity of the actual cause of an event can be further connected with the concept of "graded responsibility," mediated by normative notions like "default" and "normality." Both the strengths and the limitations of the extended causal model for the gradualism of causation are made clear by confirming the problem of "isomorphism" in HP and then focusing on the causal and ontological status of omission in the four representative philosophical standpoints on causation.

According to H&H, HP's framework is extended because something more suited to finding the ascription of actual cause is necessary. They demonstrate this with a modified instance of bogus prevention in the case of the two assassins [C]. In this example [C-2], the target survives under the conditions that an assassin changed his mind about poisoning her at the last moment and that the bodyguard put the antidote in the item at the same time. Following HP, the actual cause of survival is the
bodyguard's antidote, because the assassin's poisoning is thought to be a permissible contingency in this case, and it is theoretically claimed that the antidote is necessary and sufficient for survival, whether she is poisoned or not. However, this causal ascription itself seems unnatural, or not "commonsensical." Therefore, we require an additional constraint. In [C-2], the value of the result variable is V [assassination of the victim] = 0 (no victim) in both of the following cases: A [poison of the assassin] = 0 (no poison) together with B [antidote of the bodyguard] = 1 (putting the antidote in) yields V = 0 (B, in this case, can be seen as overdetermining the survival), and A = 1, B = 1 also yields V = 0. Despite this, it is still possible to think that the assassin's changing his mind (A = 0) is the actual cause of V = 0. This is the problem. This commonsensical feeling in the causal judgement of A = 0 for V = 0 must be explained.

To surpass the limitations of HP's framework, normative notions such as "default," "typicality," and "normality" must be introduced. A "default" is an assumption about what happens, or what the case is, when no additional information is given. That birds fly is a default (but defeasible) assumption for inferences about birds. It is indeed not the case for an ostrich or a penguin. Flying is characteristic of the type "bird," not simply a statistical fact among birds.12

12 Hitchcock refers to some attempts at "default logics" in this connection.

The important concept of "normality" is, however, "interestingly ambiguous" (ibid. 429), because it connotes a feature that is statistically average on the descriptive dimension on the one hand, but is related to prescriptive "norms," including those of morality, law, and policy, on the other. For example, telling a lie will be reproved as morally wrong, and violations of law will be legally punished. In H&H's case of a working rule, employees will be fired if they are absent from work without the required notes from a doctor, because their absence is judged to be against the policy of their company. Furthermore, there can be "norms" of proper functioning in machines or organisms, according to which human hearts or engines are meant to work. Their malfunctions have not only epistemic meanings but also normative ones, to the extent that these events can be morally and legally evaluated. The point for this paper is that these factors are components of our ascriptions of actual causes.

From this problematic view of causation, it would follow that causation may be socially or politically constructed and context-dependent (ibid. 431). Moreover, there may be a concern about this sort of vagueness of causation, insofar as "normality" admits of degree, because these factors seem to be "subjective," while HP's structural equations are objective. However, notions like "default" and "normality" are not merely "subjective," although they are essentially relevant for judging actual causation and for decision-making regarding responsibility post hoc (ibid. 432). This state of affairs is illustrated first, with respect to normality, by the supposed case [C-2] of the two assassins, to demonstrate that the framework of normality essentially commits us in causal ascription; then, historical cases of asbestos lawsuits are introduced. Accordingly, by focusing on the differentiating factors that, under presupposed normal conditions, determine the intended result of actions, the essential problem can be clarified with precaution: "what should we legally or morally intervene in
and correct in some cases?" The "extended causal model" of H&H, justified below, clarifies explicitly what can be the inciting factor of a change in the result of the structural equations. Here, a ranking of possible choices, of actions or prevention and of inactions or omission, can be introduced by considering agents' preferences or morals and their norms, under our intuition that we do not "usually" put poison into coffee in ordinary life, so as to discern genuine prevention from bogus prevention, and also to reproach an omission in a case of the prevention of poisoning, demanding the improvement or correction of actions. To elucidate the differences among the results of the options, H&H provide an extended causal model that introduces a "world," which includes sets of exogenous variables (these are not included in HP's equations), for complete descriptions of possible situations of prevention and omission, by attributing values to all exogenous variables. The wit of this invention is to be able to say, by referring to the notion of normality, that one world is more normal than others, even if, in the case of [C-2], both the world (w1) with values A = 0, B = 1, V = 0 and the world (w2) with values A = 1, B = 1, V = 0 are possible. Furthermore, it is stipulated that the worlds (of everyday life) with A = 0, V = 0, including (w1), are defaults and more normal than the worlds with A = 1, V = 1, besides presupposing that the endogenous variables in HP's equations also take normal values, except for special reasons. The consequences (downstream) of the typical (upstream) can also be seen as typical on this invention.

Through the formalization of this order of normality (ibid. 434), a criterion of the comparability of various worlds is given, by which, in possible cases, we can evaluate how normal different worlds are in comparison with the normal world. It is aimed both at the ranking of actual causes and responsibilities and at the creation of a model of a partial preorder of the worlds. By these "normative" orders, we can also compare "the world intervened upon from outside" with "the actual world" (without prevention or the omission of a required prevention). If there is no intervention, the "world" refers to all that occurs in the setting of the variables of a given context. This context decides the modes of this world that we can take for the "actual world" in this context. Thus, with H&H, we have an "extended causal model" with a preorder as the criterion for the ranking of worlds. The former expresses testable counterfactual dependences, the objective patterns among events that occur through intervention in a system, and the latter expresses the ordering of the normative and contextual factors that influence our judgements about actual causation. In using these tools to compare worlds and their normative components from the viewpoint of a combination or conjunction of primitive events and the setting of its parts, only those cases in which the compared worlds13 wrought by intervention or prevention are at least as normal (not far-fetched) as the actual world are considered seriously. This is the only case that is productive, insofar as compared worlds invented by intervention can begin functioning in AC2 (a) only under this limited condition of normality. Otherwise, we might neglect imaginary possibilities of indirect influence on events by endogenous (unknown or hidden) parts of the conjunction that might happen to stop functioning.

13 In this case, it is presupposed that compared worlds invented by intervention can begin functioning in AC2 (a) only under this condition of equal normality, when AC2 (a) conditions parts of the conjunction of primitive events. In other words, arguably serious or essential changes in a series of events that occurred due to intervention cannot be neglected in causal ascription.
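The admissibility constraint just described can be sketched on the same toy model [C-2]. The numeric ranking below (lower = more normal) is an illustrative encoding of the stipulation in the text, namely that worlds with A = 0 and V = 0 are defaults; it is not H&H's own formalization of the preorder.

```python
# Worlds are triples (A, B, V); rank counts off-default events.
def rank(world):
    a, b, v = world
    return a + v          # poisoning (A = 1) and death (V = 1) are off-default

def at_least_as_normal(w1, w2):
    """w1 is at least as normal as w2 in this (here total) preorder."""
    return rank(w1) <= rank(w2)

actual = (0, 1, 0)                # (w1): no poison, antidote in, survival
witness_for_antidote = (1, 0, 1)  # AC2(a) world needed to make B = 1 a cause

print(at_least_as_normal(witness_for_antidote, actual))  # False: inadmissible
print(at_least_as_normal((1, 1, 0), actual))             # False: (w2) below (w1)
```

Because the only witness that would make the antidote a cause is strictly less normal than the actual world, the extended model sets it aside as far-fetched, which is how the "bogus prevention" reading is demoted.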

2.4 Problems of Prevention and Omission of Asbestos Hazards from Contrasting Lawsuits

To indicate the theoretical limitations of our consideration hitherto, concrete lawsuits over asbestos issues in the US [A-1] and Japan [A-2] are outlined to clarify the problems of "omission," because they are among the paradigmatic cases of omission in terms of the grade of causation and responsibility relating to "normality"; this is most notable in the blameworthiness of companies and states, in some cases, due to their non-use or omission of health and safety regulations regarding asbestos workers and citizens. To highlight this, comparative judgements of courts in the US and Japan are shown.

Asbestos has two faces, that of a "magical natural mineral" for industries, an indiscernible micro-fiber, and that of a "killer dust" that can cause ARD such as mesothelioma, lung cancer, and asbestosis; both must be clarified before delving into the background. Among the ARD, mesothelioma is the most serious in its resultant pain and prognosis, because of the lack of efficient cures. Additionally, this disease has remarkable features: one of its problems is its very long incubation period, from 20 to 50 years after exposure. One cannot unconditionally determine a threshold of exposure. To make matters worse, asbestos has been ubiquitous in our world for its technical utility and economic efficiency, even after its ban in Japan this century.

In the 2010s, many redress lawsuits against companies and the state took place in Japan. Although there has been much variability, especially in judgments about the liability of the Japanese government, the Supreme Court held that the state was liable for failing to issue the ministerial ordinances to regulate asbestos based on the relevant laws, namely, the Labor Standards Act and the Industrial Safety and Health Act, in the Sennan asbestos case on October 9, 2014. This is arguably a typical case of double omission (see Fig. 2.4) by the state. The judges condemned the government for failing or omitting to have legislated, by 1958 [A-2], that factories be equipped with mechanical measures to remove asbestos dust from the air, although the guidelines issued were advisory and not mandatory.14 The five Supreme Court judges found that the government's failure to take timely and appropriate action was "extremely
2 Gradation of Causation and Responsibility: Focusing on “Omission”

33

unreasonable” as well as “illegal.” As a consequence of its negligence or omission, the government was liable for ARD contracted by two groups of claimants. It is theoretically important for us to confirm that, in this Japanese lawsuit, the counterfactual condition is effectively recognized as valid for one of the causes of ARD of these groups, aside from the medically and epidemiologically established general Helsinki criteria regarding ARD. Virtually, [A-2] is similar to [B] in which both the fence and the outfielder’s catch would be preventers as it is analogous with the state’s role of regulation and the executions by employers. In reverse, [A-2] is also similar with [G] in which a gardener omitted to water the flowers, and his omission can be considered the cause of the flowers’ death. Connected with this case, it is interesting and important to mention one of the related judgments in Osaka High Court concerning state compensation in 2013 in the consideration of normality or default. In this lawsuit, the defendant, the Japanese government, claimed that it was the individual fault of workers with no previous knowledge of asbestos risks working without protective masks, as was the same legal decision in the previous judgment of District Court. However, the judge claimed counterfactually that they should have forced employers to have asbestos workers wear protective masks during work time because voluntary wearing of this mask may not be expected from workers due to the discomfort or intrusion during work. Thus, it can be argued that this was legally the default of the safety measure of asbestos workers even at that time by the judgement of Osaka High Court. Elsewhere, the meaning of this judgement in 2013 in comparison with that of the court of the US in 1980 concerning workplace smoking rules introduced by the Johns-Manville Corporation is discussed. The points of the issue are the following; Manville was one of the largest asbestos industry companies in the world until the 1980s that began to try reducing its employees’ asbestos exposure through dust apparatuses, mainly for the preparation of worker’s compensation by occupational ARD. This prohibition of smoking in the factory by Manville,15 which was a sort of preventative measure of ARD, was however judged “unjust”—or we can characterize it as “abnormal”—according to the legal prioritization of workers’ abilities to make free decisions regarding their lifestyles (1609, 621 F.2d 759 (5th Cir. 1980))16 ; “the danger is to the smoker who willing courts it”. This problematic case is mentioned by ethicists such as Daniels (196) and Goodin (120) as a conflictive example between paternalistic public health rules versus the principle of (illusory) “autonomy” (a that partly granted the plaintiffs’ demand. To further explore this problem, see Okubo, Fujiki, and Kazan-Allen. 15 An epidemiologist already showed that smoking significantly increases the relative risk of lung cancer caused by asbestos in comparison with non-smoking among workers because of synergy; the ratio was said to be 1 to from 5 to 10 times in nonsmoking asbestos workers to smoking asbestos workers. 16 cf. https://www.asbestos-osaka1.sakura.ne.jp/2014/02/2-4.html. Without entering details of backgrounds of worker’s smoking as “voluntary” action around that time, naturally we must ask whether conditions of “informed consent” had been reliably met by workers. 
This question has also been related with issues of public health concerning smoking that can include the problems of “irrational” nicotine addiction from the youth or its association with “cool” lifestyles [19].

34

T. Matsuda

sort of default in standard moral philosophy) in the context of the (il-)legitimacy of voluntary risk acceptance and (in-)valid “informed consent.” This contrast of the results of the two cases demonstrates a problem of H&H’s “normality” or “default” insofar as the implication of the judges above are so contrastive. In this regard, the extended HP structural model as such seems to be actually indifferent or unable for the concrete application of it to the legal (or ethical) decision making.

2.5 Cause as Dispositional “Power” and the Problem of Omission

Next, H&H’s “gradualism” is complemented with M&A’s more realist view of causes as dispositional “powers,” in order to discuss the ethical status of omission in terms of responsibility more appropriately. The author explores the conception of omission as a sort of cause further, and more coherently, than M&A’s exploration of “occasion” or Moore’s negation of omission as a genuine cause. However, the strong point of the conception of causes as dispositional “powers” is its ability to explain naturally that omission—such as the non-watering of flowers or the non-enforcement of wearing dust masks against asbestos—has the effect of upsetting the equilibrium of dispositional powers under the condition of the polygenesis of events, such as the flowers’ death in [G] ([26], 224) or ARD in [A]. According to M&A:

The vector model allows us to explain causation by absence entirely in terms of what there is: the powers of things that really are, rather than any alleged powers of nothingness. This account will also show why we can attribute responsibility without causation, in the cases of omissions (ibid. italicized by Matsuda).

This model fits explanations of the occurrence of events and non-events, or of phenomenal states of rest, from the viewpoint of structural dynamic characteristics, as in the sudden collapse of bridges. However, as M&A’s appeal to Aristotle and Thomas Aquinas in their comments on dispositional power as genuine cause suggests, this may mean a return to the difficulties of traditional conceptions of causes as something problematic, including intentional or purposive agents, rather than contemporary notions of event causation.17 This return is not always theoretically problematic, for it intuitively yields more persuasive solutions to the problem of causal ascription. M&A discuss a humorous metaphor [T] of a game of tug of war between two groups, say philosophers and theologians: both sides are equally matched and the rope goes nowhere. At last, one weak-willed philosopher gives up and leaves the contest, and the theologians win.18 What caused this win is unclear.

17 We can recognize this sort of problem also in the philosophy of causation in Leibniz, along with Aristotle [17].
18 The case of the sprinkler, which functions only when a fire reaches a certain threshold, is perhaps more persuasive for their view of dispositional “powers” ([26], 231).


All the causing of that result was due to the forces exerted on the rope by the theologians pulling. But the philosopher’s giving up was the occasion for their victory insofar as he had been holding them back. The philosopher’s teammates may well apportion responsibility to their weak-willed colleague. Would they have avoided defeat had they remained at full strength? If so, they would be right to think of their teammate giving up as the occasion for the theologians’ win, even if the giving up was not the cause (ibid. 225, italicized by Matsuda).
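The vector picture can be made concrete with a small sketch. The following Python fragment is not from M&A; the magnitudes and the threshold are invented for illustration, and the point is only that an omission changes the outcome without adding any force of its own:

```python
# A toy version of the tug-of-war case [T] on the vector model of powers.
# Each pulling power is a signed magnitude (theologians +, philosophers -);
# an outcome manifests only if the vector sum passes a threshold.
def outcome(powers, threshold=0.5):
    total = sum(powers)
    if total > threshold:
        return "theologians win"
    if total < -threshold:
        return "philosophers win"
    return "equilibrium: the rope goes nowhere"

theologians = [+1.0, +1.0, +1.0]
philosophers = [-1.0, -1.0, -1.0]

print(outcome(theologians + philosophers))       # equilibrium
# The weak-willed philosopher gives up: his power is removed, not reversed.
print(outcome(theologians + philosophers[:-1]))  # theologians win
```

On this picture, the giving up contributes no vector of its own; it merely removes a counter-vector, and that is why M&A treat it as the occasion rather than the cause of the win.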

Why and how are cause as dispositional power and occasion as inaction distinguished here? M&A illustrate this with the film story of Spider-Man [SP], who “rightly understood that his omission occasioned the killing of his uncle by the bullets of the burglar to the extent that it would probably not have happened if he had acted” (ibid. italicized by Matsuda)—a case similar to the flowers’ death in [G] and ARD in [A-2]. Peter Parker as Spider-Man was mindful of this tragedy. The same could hold for someone not watering the flowers, a manager of an asbestos company not requiring employees to wear masks, or state officers not regulating companies. Although M&A distinguish a sine qua non from a “real cause” for the mindful Parker, we cannot accept this distinction unconditionally, because we find a sort of manifestation of dispositional “powers” in basic actions, as seen in the cases of a philosopher losing his grip or a gardener forgetting to water the flowers. Even from the perspective of an understanding of causes as dispositional powers, is this differentiation of “real cause” from occasion as “sine qua non,” that is, necessary condition, really clear? Human actions as “occasion” also have real and physical factors as basic actions, insofar as our “intention” alone cannot remove the dynamical equilibrium of the rope or, conversely, our (precautionary) “intention” to regulate potentially tortious companies may fail to be realized because of interventions such as political pressure or high economic costs. In contrast, M&A attempt to justify this distinction further through the cases of overdetermination and prevention, by detaching responsibility (Peter’s responsibility for his “inaction”) from the condition of counterfactual dependence: even if Spider-Man had exercised his power, he might still not have saved his uncle, owing to some hindrance. As distinguished from both THB and Lewis’s modal semantic approaches, it is reasonable to hold the basic claim that “a power gives us more than a mere possibility of its manifestation” (ibid. 226), and that a counterfactual dependence theory cannot allow effects to be overdetermined; we have seen, however, that these weak points of the counterfactual philosophies of causation are overcome in the amendment of the HP structural equations, which adjusts the contingencies of causal factors for graded actual causation while keeping the strength of the counterfactual approach. However, according to M&A, we cannot say that “if C occurs, E will occur; only that it will be disposed to occur” (ibid. 226). Naturally, the former also holds for HP, and the latter would hold on extending [3], the process theory of causation, by dispositional “powers.” Apart from cases of overdetermination such as [C], the problem of explaining omission as a cause in some sense, or as occasion in the sense of sine qua non, remains, although the metaphor of “the polygeny of effects” is effective (ibid. 229, cf. [23], 276) in conveying the plurality and complexity of M&A’s actual causation. This is compatible with the spirit of graded causation and responsibility.


Despite this, Moore and M&A do not accept omission as a genuine cause, even though they recognize it as responsible for the results. On this decision, Moore rejects negative facts or particulars like omission—more generally, the token-absence of an action—as “bogus entities,” in distinction to negative propositions as truth-makers ([23], 444). This is ontologically obvious from his phrase that “nothing” cannot produce “something.”19 In other words, omission is not a particular event or state of affairs; in this paper, by contrast, occasion has been interpreted, together with the concept of basic action, from a more realist standpoint than that of dispositional “powers.” Thus, the character of the controversy is (queerly) ontological,20 insofar as it must be decided whether “omission” is a mental particular of some event that is elliptically referred to, or a totality of states of affairs in a region at t (ibid. 438). From the legal perspective, a positive action is always required for legal punishment in so-called “embedded omission.” Even without sharing Moore’s worries about omission as cause, we can recognize that the concept of omission as a “discrete act” is arbitrary.21

In contrast, Moore poses the problem of the failure of transitivity in connection with responsibility. Here, the issue is that causation can pass through intermediaries in its polygeny, but it need not always do so. The causal problem of omission lurks here from the viewpoint of dispositional “powers.” The case used to explicate the failure of transitivity is similar to [M] and [C]: an arsonist sets fire to a house, and the fire triggers the sprinkler, which, as an extinguisher, puts out the fire. Generally speaking, fire is not disposed toward being extinguished. In this case, M&A stress the relationship of two opposite dispositional powers or vectors: that of the fire set by the arsonist and that of the sprinkler extinguishing the fire. The problem of the failure of transitivity—here, the link of “being aflame”—is thereby resolved as harmless: this link is inessential or occasional. Despite this, we could say that the arsonist is still responsible for this acute emergency if the owner of the house claims compensation for the replacement of the machine. Here, we can easily imagine that a malfunction of the sprinkler could be an omission of its producer, seen from the default of safety. It is not sheer “nothing” or absence, in contrast to Moore’s ontological position. For example, a technician may have failed to conduct a regular check of the apparatus for some reason, or from his dispositions.

19 Moore defends three propositions: (1) we have positive moral duties, and the breach of such “positive duties” is blameworthy; (2) a necessary condition of blameworthiness for the omission of positive duties is that the omitter could have prevented what he omitted to prevent, which presupposes the omitter’s ability; and (3) causation (of the harm by the omission) is not necessary for omissive responsibility, because omissions are not causes of the harm they fail to prevent. The last proposition expresses his physicalism. Here, omission means “an absent action”: an omission by someone at t to save a man from drowning is the absence of any act-token of that person at t that instantiates such a type of action.
20 Moore suggests the circumstances of this queer ontology with the metaphor of the hole of a donut, whose ontological status is surely controversial. It is ambiguously too “large” to be precisely determined.
21 Moore rejects Hart and Honoré’s view of negative facts as a way of describing the world (ibid. 444). While the author does not agree with the physicalism regarding causation by omission in Moore’s Causation and Responsibility, the author acknowledges that Moore provides rich and interesting analyses of moral and legal responsibility for blameworthiness from his own standpoint in Chap. 18 of that book.

2.6 The Principle of “With Great (Causal) Power Comes Great Responsibility”

M&A claim that “transitivity clearly bears on the issue of responsibility” (ibid. 232). For example, a doctor does not necessarily deserve blame if he was unable to save the life of his patient despite his good will [D]. This case is a failure of transitivity: the doctor could not have known that complicated factors of the disease would produce the unintended death of the patient without a further intervention that should have produced the opposite outcome—that is, the patient’s survival. M&A raise an ontological question about these unknown complicated factors connected with the patient’s death as the effect. Rightly, they introduce the concept of background conditions, distinct from causes, referred to as sine qua non conditions, such as oxygen in a fire. These are preconditions of the events that yield effects, but they do not cause them. For the case of the doctor [D], the cause is the mortal danger or some other biological process. The author agrees with this remark. However, M&A continue to treat omission as a sine qua non condition (here, the counterfactual condition)—that without which the accident would not have happened, or would not have been disposed to happen—and not as a genuine cause, although they recognize with Moore that omission or negligence as such can be sufficiently responsible for some effects—as in the case of leaving dangerous and unmarked chemicals where others may access them—because “this action might have been a sine qua non for them doing so” (ibid. 232). We find that this view of negligence virtually affirms the status of omission as a sort of “cause” in our normal understanding in the following statement, not to mention the fact that an agent’s negligence in such a case cannot be equated at all with natural conditions such as oxygen:

The judgement of negligence would be on the basis that he had needlessly and carelessly created a condition for harm being done, without any precaution being taken against the possibility using it in a harmful way. (ibid. italicized by Matsuda).

It should be noted that, from the viewpoint of graded causation and responsibility, omission as negligence is a component of the cause. In this sense, the position of this paper on omission as cause is close to the concept of secondary cause outlined below. Before discussing this position, the point of M&A’s principle “WGPCGR” (“With Great Power Comes Great Responsibility,” hereafter “GR”) is confirmed, with a modification: “With Great Causal Power Comes Great Responsibility.” M&A’s position on causal dispositional power is not restricted to the dispositions of physical things or materials; it is also related to the modality of intentional agents like human beings—individually or collectively, cooperatively or obstructively—being voluntarily willing to realize something that is not guaranteed to be successful.


According to M&A’s traditional Aristotelian understanding of responsibility, based on the theory of voluntary action, we are responsible for an action producing harm if and only if we are neither forced nor controlled and we have the abilities necessary to perform it. Concurrently, whether we are also able to prevent the harm remains an ethical and legal question, and this question is not separate from the ethical and legal responsibility of omission. Their answer is clear enough to exempt non-intentional or non-voluntary agents of harm—accidents caused by a spasm, for example—from responsibility.22 They apply this view to the case of the omission of agents like Parker as Spider-Man: “If we had no power to act, then our omissions cannot be blameworthy” (ibid. 234). For the ascription of responsibility in the case of an omission to prevent murder by a burglar, the agent must be free both to omit and not to omit. This position is agreeable insofar as, otherwise, our actions would be either strictly necessary or purely contingent: in the necessary case, we would not be able to prevent the acquired actions, and in the purely contingent case, our actions would be mere accidents—akin to the genuine dilemma about freedom. However, this conclusion is too metaphysical to accept without modification, because in our issues agents are actually and usually neither totally forced nor perfectly free (and eventually arbitrary), as can be seen from our socially and historically determined lawsuit cases [A], in contrast to the imaginary case of Spider-Man. Our actions, or causal powers as such, are dispositional not only physically and psychologically, through dominant predispositions, but also socially and historically, depending at least partially on efforts toward ethically and legally improving the given circumstances of society. M&A’s principle of “GR” does not seem to exclude these humane elements of ethical and legal praxis, because it ascribes greater responsibility to agents with greater power—in their example, to the rich in comparison with the poor, since the rich can and should relieve people suffering from famine. Otherwise, these rich people are really the more to blame because of their omission to do something to mitigate the famine, although M&A confess that they do not know the ultimate reason for this claim; instead, they boil this reason down to the problem of virtue in the ethics of Aristotle. This last suggestion is interesting enough to indicate the limits of the ethics of autonomy, as can vividly be found in the judgment in [A], seen from the safety of workers or public health; it can be taken as evidence of the variability and transformability of the default or normality of graded cause and responsibility. M&A classify three dimensions of their “gradualism” of responsibility—though not always, nota bene, a “gradualism” of the cause:

a. Without the ability to do x, but also to prevent x, one cannot be responsible for doing x.

22 Although this is plausible for criminal law, as in traffic accidents, it is doubtful for compensation under civil law. A similar problem remains regarding the application of strict liability—whether negligent or not—in some lawsuits about diseases caused by public pollution, such as Minamata disease.


b. With the ability to do x, one can (but need not necessarily) have a responsibility to do x.
c. The more able one is to do x, if one should do x, then the greater the responsibility to do x. (ibid. 236, italicized by Matsuda)

Naturally, (c) is the principle of “GR,” conditioned by the grade of the agents’ ability and their obligation. From M&A’s ontological viewpoint of causal dispositional powers, it can be said that this principle should be modified more coherently into the principle “With Great Causal Power Comes Great Responsibility” (“GCR”), by explicitly referring to agents’ abilities as causal powers—not as occasions in a moderate manner—to substantially affect other agents: that is, causation and prevention, or omission as its opposite. In this sense, we could include various kinds of expertise or the prerogatives of officers in some branches of administration, as well as the investments and innovations of companies, for example. The concept of graded causation, we must say, is not restricted to physical powers.

2.7 Conclusion

Omission—especially the omission by experts or professionals of preventing hazards through regulation or management in the public sphere—is both causally real and ethically responsible, as a basic inaction regarding something wrong such as causing hazards or tort, although the grades of causation and responsibility in individual cases depend on the situation of the events and on the ethical context of the omission, influenced by social history and cultural values. In the final step of proposing the gradualism of causation and responsibility, we must settle on the ontological position of the actual cause among a variety of possible candidates, in order to differentiate our position from those of the authors discussed here. This operation is related to the evaluation of evidence, to use H&H’s locution again. Here, evidence indicates a world, constrained by the two conditions of AC2 in HP, that can be qualified as evidence for a claim of actual cause after the preorder serving as the criterion of ranking. Thus, ranking the causes is conducted according to criteria of normality related to the best evidence for applying this strategy to discerning the actual cause. This strategy of “graded causation” keeps the first-stage (object-level) pluralism of causation (not a pluralism of the concept of cause) without losing the objective structure of actual cause in HP’s equations. Through examining the validity and efficacy of H&H’s “graded causation,” we can approach the ontological problem of omission from the viewpoint of normality: the question is whether, and in which sense, omission is an actual cause—which M&A negate, treating it as occasion. As has been considered hitherto, this problem is directly connected with that of agents’ legal and ethical responsibilities. H&H classify four standpoints regarding this issue: (1) Moore does not admit inaction as a cause at all; (2) Lewis accepts inaction as a real cause; (3) Hall finds it secondary; (4) McGrath believes that the causal status of inaction depends on its normative position ([8], 437). Our standpoint is close to (3).


In this context, it is important to recall that H&H’s claim is derived from their treatment of [G]—their strategy of “graded causation” can accommodate all four stances, as it is value-neutral with respect to the chosen ranking of normality. The case [G] is that of a gardener who forgot to water the flowers in a hot summer, and the flowers died. What is the actual cause of the flowers’ death? This question is crucial for demonstrating the gradualism of both causation and responsibility by a numerical method. To illustrate it, H&H suppose three endogenous variables in HP: H [hot weather], W [watering], and D [dying]. The values of these variables are H = 1 if the weather is hot, W = 1 if the flowers are watered, and D = 1 if the flowers die; and H = 0 if it is cool, W = 0 if the flowers are not watered, and D = 0 if the flowers do not die.23 If the actual world is (w: 1, 0, 1), that is, H = 1, W = 0, and D = 1, then H = 1 (not exclusively) or W = 0 would be the actual cause according to HP’s definition. In this case, the evidence for H = 1 as the (difference-making) cause is the set of values (w: 0, 0, 0) (H = 0, W = 0, D = 0: the flowers would not die in cool weather despite not being watered), while the evidence for W = 0 is the set of values (w: 1, 1, 0) (H = 1, W = 1, D = 0: the flowers would not die if watered despite the heat). Through examining this case, we can trace how the causal ascriptions, and eventually the responsibilities, differ among (1) to (4), depending on the ranking of normality, or “graded causation.”

23 In this configuration, D = H × (1 − W) holds arithmetically. It means that the truth value of the flowers’ dying is the logical product of hot weather and not-watering. In other words, in this equation the flowers’ not dying (D = 0) is logically equivalent to the three cases of not-hot weather (H = 0) and not-watering (W = 0), hot weather (H = 1) and watering (W = 1), and not-hot weather (H = 0) and watering (W = 1), while the flowers’ dying (D = 1) occurs only in the case of hot weather (H = 1) and not-watering (W = 0). The author of this paper uses a different notation for (>) from that of H&H.

(1) Omission is not a cause at all: in this view, H = 0 and W = 0 are “typical,” and the world of inaction or lack of attention (H = 1, W = 0, D = 1) is evaluated as more typical than one in which something is positively done. Here the order of normality is: 1) (H = 0, W = 0, D = 0) > (H = 1, W = 0, D = 1) > (H = 1, W = 1, D = 0), because in this case (H = 0, W = 0, D = 0) is found to be evidence for the state of affairs that H = 1 is the actual cause of D = 1. In other words, H = 1 changes the value of the equation from 0 to 1 while W = 0 stays the same across the former two possible worlds. This is a causal claim that the heat caused the death of the flowers, according to (1). H&H’s comment on (1) is that not-watering is more normal than watering in this case, and not-watering cannot be found to be the actual cause of the flowers’ death, given that the actual world (H = 1, W = 0, D = 1) is more normal than the possible world (H = 1, W = 1, D = 0), which provides evidence for the state of affairs that W = 1 is the actual cause of D = 0 in the compared world.

(3) Omission is a secondary cause: in this view, the presupposed order of normality is (H = 0, W = 0, D = 0) > (H = 1, W = 1, D = 0) ≥ (H = 1, W = 0, D = 1). W = 0 can be the actual cause of D = 1 insofar as watering is as normal as not-watering, judging from the latter two values. However, H = 1 is a better reason than W = 0 in this case, because H = 1 has relatively more evidence; W = 0 is still found to be the secondary actual cause because of its evidence, in contrast to the order 1). We can paraphrase this for the actual cause of ARD in the judgment in [A], writing A (asbestos), M (mask), and D (disease) for simplicity: today, we can no longer ethically and legally agree with an order such as 1) (A = 0, M = 0, D = 0) > (A = 1, M = 0, D = 1) > (A = 1, M = 1, D = 0). Instead, we have accepted the preorder of (3): (A = 0, M = 0, D = 0) > (A = 1, M = 1, D = 0) ≥ (A = 1, M = 0, D = 1). The graded responsibilities of the companies and the state depend on their powers, at a certain point of history, to have regulated the use of asbestos (not to have banned it) and to have made the workers wear protective masks on site.

(2) Omission is an actual cause: the order of normality is the same as in (3), but the gap between (H = 0, W = 0, D = 0) and (H = 1, W = 1, D = 0) is minimal and the two can be equivalent. If equivalence holds, H = 1 and W = 0 are candidates for the actual cause of D = 1 to the same degree. According to H&H, this claim is so strong that even agents who were not asked to water the flowers could be causally implicated in the flowers’ death. It is indeed dangerous to unconditionally impose responsibility for this inaction on people who play no role in the immediate circumstances. This criticism also applies to the asbestos case, in which a variety of stakeholders were involved.

(4) The causal status of omission depends on its normative position: the situations vary, but the flowers’ death is generally ascribed to an agent’s inaction if, for example, the neighbor made a promise or knows that the owner loves the flowers. However, it is difficult to accept this ascription unless there are obligations or expectations. In this case, factors such as norms, obligations, and expectations influence the ranking of normality of (H = 1, W = 1, D = 0) ≥ (H = 1, W = 0, D = 1). This shows why not all omissions of watering by neighbors are to be evaluated equally.

Thus, H&H’s conclusions in their paper are not definitive even in this rather simplified case study; it can be said that the matter is case-by-case. However, the author has shown real situations in which omissions are decisively found to be actual causes, at least secondary ones, and the components of this causal ascription of omission can be partially ethical or legal: the “influence of norms, obligations and expectations.”
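The numerical method above can be sketched in a few lines of Python. The sketch encodes only the structural equation of footnote 23 and a simple but-for test, not the full HP/H&H machinery of contingencies and preorders over worlds; the ranking shown is merely the one associated with standpoint (3):

```python
from itertools import product  # used to enumerate the possible worlds

# Structural equation for [G]: the flowers die (D = 1) iff it is hot (H = 1)
# and they are not watered (W = 0); cf. footnote 23, D = H * (1 - W).
def D(H, W):
    return H * (1 - W)

print([(H, W, D(H, W)) for H, W in product((0, 1), repeat=2)])  # all worlds

actual = {"H": 1, "W": 0}  # the actual world: hot, not watered, flowers dead

# But-for test: X = x is a candidate actual cause of D = 1 if flipping X,
# holding the other variable fixed, changes the value of D.
for var in ("H", "W"):
    flipped = dict(actual)
    flipped[var] = 1 - flipped[var]
    if D(**flipped) != D(**actual):
        print(f"{var} = {actual[var]} is a candidate actual cause of D = 1")

# One possible normality ranking, corresponding to standpoint (3),
# "omission as secondary cause"; worlds are (H, W, D), larger = more normal.
normality_3 = {(0, 0, 0): 3, (1, 1, 0): 2, (1, 0, 1): 1}
print(sorted(normality_3, key=normality_3.get, reverse=True))
```

As the but-for test reports, both H = 1 and W = 0 qualify as candidate causes; it is only the chosen normality ranking that grades them, which is the value-neutrality of “graded causation” noted above.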


H&H’s and M&A’s approaches are interesting enough to make us think about the conceptual relationships between actual causal ascriptions and ethical or legal responsibilities, by embedding their ideas about normality ranking into the concrete situations we face in the implementation of controversial technologies, if we take a seriously precautionary attitude toward them. By further improving this model, we can explicitly outline what is more normal, or which orders of normality we hold in each context; this helps our decision-making about implementations beforehand, as well as the discussion of the grades of causal ascription and responsibility for events that have actually occurred. In our time, the sense of values related to safety or precaution is an essential part of this normality and an influential factor in the equation, subject in its transformability to our history, culture, economy, law, and politics.24 In this regard, the “gradualism” of actual causation also provides a clue for the “gradualism” of responsibility. Some remarks are added here to further address issues of gradualism.

(a) By activating both the “egalitarian gradualism” of the first stage of actual causation and the ranking of normality regarding actual causation, the epistemic and normative structures of cause and responsibility can be described more dynamically. Having seen that “causal pluralism” or “egalitarian gradualism” is an antithesis of THB, and that “graded causation,” on our interpretation, can also complement HP’s normative indifference, we are expected to develop methods for evaluating the roles and functions of the various agents involved in our contexts.

(b) The considerations in this paper highlight the gaps in HP’s structural equations caused by both their indifference to normative normality and their formal validity.25 These gaps can be construed as an opposition between science or expertise—especially statistically performed disciplines like epidemiology—and the “folk knowledge” of lay persons related to causation and responsibility. We must additionally emphasize the issue of the sui generis validity of juristic judgments about actual causation and responsibility, in contrast to those of both scientists and lay citizens, as briefly suggested by the case of the suicide of a Lloyd’s manager after a fire on a ship, as well as by H&H and our two lawsuits [A-1] and [A-2]. These two types of gaps are hindrances to consensus building, or to a “reflective public equilibrium”26 about the relevant problems, and they demand detailed case studies and trials of decision-making through discussion among the various stakeholders, tightly connected as these are with real policy-making and political-economic situations.

(c) The problems of “what is normal” are to be investigated further, especially for the “prevention” of hazards in light of our historical experiences, such as ARD or nuclear accidents. Here, prevention can be re-described as a sort of “preemption” of some events’ occurrence, and its failure is eventually—intentionally or not—an “omission” of prevention (as handled by M&A and Moore), in a way analogous to the characterization of “double prevention” (see Fig. 2.3). “Double prevention” means that A prevents B from preventing E (for example, a murder), ultimately causing E ([23], 459ff).27 Correspondingly, we can conceive of a “double omission (or allowing)” for the case in which A allows (or omits) B to omit F (for example, checking the safety), ultimately causing D—ARD, for instance (see Fig. 2.4).

Fig. 2.3 Double prevention
Fig. 2.4 Double omission

We briefly suggested this “double omission or allowance” in the management and regulation of asbestos use. Such cases are found more generally in company management and government administration. As a matter of fact, people’s judgments of normality regarding the risks of asbestos have changed radically over the history of its use, from its beginnings about 150 years ago to its complete ban now. If a government allows companies to use it, this is actually blamable as “willful negligence,” insofar as the causation of ARD by asbestos exposure is confirmed by epidemiology and its victims have continued to appear to this day. Strictly speaking, the “double omission” of asbestos use—that is, its allowance by a government, or by any law that does not stop the use of this carcinogen28—is now extremely abnormal in the sense of our considerations.29 This suggests the ethical task of updating the values of the variables in the equation, for making precautionary checklists of items, products, and incidents (cf. Myers [27]) based on historical lessons.

24 The Paris Agreement’s international effort to address climate change is possibly one example of this.
25 H&H indicate problems with the value-laden character of epistemic biases in causal ascription.
26 On this concept and its relevance for contemporary public philosophy, see Wolff and De-Shalit [32] and Matsuda [21].
27 A typical case of this is a sort of accomplice in crime, or passive euthanasia.

28 Here, “double prevention” can mean, for example, the activities of some agents to prevent the activities of banning asbestos, which would prevent people from being exposed to asbestos.
29 The HP condition holds for the case of mesothelioma as an ARD, since its outbreak depends counterfactually on asbestos exposure according to the Helsinki criteria.


(d) Finally, we will ideally face the issue of ranking omissions and their responsibility from the viewpoint of (ab)normality, in a lexical order, as an extended version of the structural equations for assigning variables to the activities of agents—innovation and product management, related law-making and safety administration, as well as scientific investigation of side effects and medical research for the cure of diseases—by focusing, for example, on the death of patients. To promote these agendas, however, interdisciplinary cooperation beyond the boundary of “pure” philosophy is necessary.

Acknowledgements This paper is funded by two projects: “advanced and integrated research about social implementation of bio and environmental technologies” by JSPS and an interdisciplinary project on “Meta science and technology; methodology, ethics and policy” by Kobe University for anticipating future problems caused by the implementation of new technologies. The author would like to thank Editage (www.editage.jp) for English language editing.

References

1. Birnbacher, D., & Hommen, D. (2013). Omissions as causes—Genuine, quasi, or not at all? In B. Kahmen & M. Stepanians (Eds.), Critical essays on “Causation and responsibility” (pp. 133–156). Berlin: De Gruyter.
2. Cartwright, N. (2007). Hunting causes and using them: Approaches in philosophy and economics. Cambridge: Cambridge University Press.
3. Daniels, N. (2008). Just health: Meeting health needs fairly. Cambridge: Cambridge University Press.
4. Fujiki, A. (2015). Responsibility of the state and employers; workers’ health in asbestos issues. In K. Naoe & S. Morinaga (Eds.), Science and engineering ethics for the students in STEM fields: Conforming to the JABEE accreditation criteria (pp. 82–83). Tokyo: Maruzen (In Japanese).
5. Goodin, R. E. (2007). No smoking: The ethical issues [1989 extraction]. In R. Bayer et al. (Eds.), Public health ethics: Theory, policy, and practice (pp. 117–126). Oxford: Oxford University Press.
6. Hall, N. (2004). Two concepts of causation. In J. Collins, N. Hall, & L. A. Paul (Eds.), Causation and counterfactuals (pp. 225–276). Cambridge, MA: MIT Press.
7. Hart, H. L. A., & Honoré, T. (1985). Causation in the law (2nd ed.). Oxford: Clarendon Press.
8. Halpern, J. Y., & Hitchcock, C. (2015). Graded causation and defaults. British Journal for the Philosophy of Science, 66, 413–457.
9. Halpern, J. Y., & Pearl, J. (2005). Causes and explanations: A structural-model approach. Part I: Causes. British Journal for the Philosophy of Science, 56, 843–887.
10. Hitchcock, C. (2003). Of Humean bondage. British Journal for the Philosophy of Science, 54, 1–25.
11. Ichinose, M. (2018). Precautionary principle and proactive principle: About the contrast of regulation and implementation (manuscript). MST Workshop at Kobe University.
12. Kazan-Allen, L. (2014). Historic asbestos ruling by Japanese Supreme Court. https://www.ibasecretariat.org/lka-historic-asbestos-ruling-by-japanese-supreme-court.php.
13. Lewis, D. (1973). Causation. Journal of Philosophy, 70, 556–567. Reprinted with added ‘Postscripts’ in D. Lewis (1986), Philosophical papers (Vol. II, pp. 159–213). Oxford: Oxford University Press.
14. Lewis, D. (2000). Causation as influence. Journal of Philosophy, 97, 182–197.
15. Mackie, J. L. (1993). Causes and conditions. In E. Sosa & M. Tooley (Eds.), Causation (pp. 33–55). Oxford: Oxford University Press.


16. Matsuda, T. (2008). Towards the ethics of environmental risks from a methodological point of view. Journal of Innovative Ethics, 1, 1–18 (In Japanese).
17. Matsuda, T. (2010). Leibniz on causation: From his definition of cause as ‘coinferens’. Studia Leibnitiana Sonderheft, 37, 101–110.
18. Matsuda, T. (2011). Risk and safety. In K. Todayama & Y. Deguchi (Eds.), A reader for applied philosophy (pp. 47–57). Kyoto: Sekaishisousya (In Japanese).
19. Matsuda, T. (2015). Ethics for public health from the viewpoint of advocacy. Journal of Innovative Ethics, 8, 84–98 (In Japanese).
20. Matsuda, T., & Fujiki, A. (2017). Asbestos in Japan: Social mobilization and litigation to boost regulation. Journal of Innovative Ethics, 10, 78–94.
21. Matsuda, T. (2018). Jonathan Wolff and his philosophy of public policy seen from his Disadvantage. Journal of Innovative Ethics, 11, 70–91 (In Japanese).
22. McGrath, S. (2005). Causation by omission: A dilemma. Philosophical Studies, 123, 125–148.
23. Moore, M. S. (2009). Causation and responsibility. Oxford: Oxford University Press.
24. Mumford, S., & Anjum, R. L. (2011a). Causation: A very short introduction. Oxford: Oxford University Press.
25. Mumford, S., & Anjum, R. L. (2011b). Getting causes from powers. Oxford: Oxford University Press.
26. Mumford, S., & Anjum, R. L. (2013). With great power comes great responsibility. In B. Kahmen & M. Stepanians (Eds.), Critical essays on “Causation and responsibility” (pp. 219–237). Berlin: De Gruyter.
27. Myers, N. J. (2005). A checklist for precautionary decisions. In N. J. Myers (Ed.), Precautionary tools for reshaping environmental policy (pp. 93–106). Boston: MIT Press.
28. Okubo, N. (2016). Judicial control over acts of administrative omission: Environmental rule of law and recent case law in Japan. In V. Mauerhofer (Ed.), Legal aspects of sustainable development: Horizontal and sectorial policy issues (pp. 189–202). Berlin: Springer.
29. Pearl, J. (2000). Causality: Models, reasoning, and inference. New York: Cambridge University Press.
30. Psillos, S. (2008). Causal pluralism. Retrieved from https://users.uoa.gr/~psillos/PapersI/26-Causal%20Pluralism.pdf.
31. Shrader-Frechette, K. S. (1991). Risk and rationality: Philosophical foundations for populist reforms. Berkeley and Los Angeles: University of California Press. Japanese edition: T. Matsuda (Ed.). (2007). Kyoto: Showado.
32. Wolff, J., & De-Shalit, A. (2013). Disadvantage. Oxford: Oxford University Press.
33. Wolff, J. (2018). Risk and the regulation of new technologies (manuscript). Presented at an invited international conference on the possibilities of advanced and integrated research focusing on public policy at Kobe University.

List of Cases

34. [M] Mackie: Searching for the cause of a house fire.
35. [C] H&H: An assistant shot at the target, but the target saved her life by ducking on hearing the voice of the captain. She would not have survived without hearing the captain’s voice.
36. [C-2] H&H: The target survives under the conditions that an assassin changed his mind about poisoning the target at the last moment and that the bodyguard put in the antidote at the same time.
37. [P] H&H: Thrombosis.
38. [G] H&H: A gardener forgot to water the flowers and they died. If watered, they would have survived. Is this omission the cause of the flowers’ death?
39. [B] H&H (Matsuda): “Preemptive prevention”—an outfielder catches a ball hit by a batter on a high school ground at t.


40. [A-1] [A-2] Matsuda: The US and Japanese asbestos lawsuits.
41. [T] M&A: A game of tug of war between two groups, such as philosophers and theologians.
42. [D] M&A: A doctor should not necessarily be blamed if he could not save the life of his patient despite his good will. A failure of transitivity.
43. [SP] M&A: Peter Parker as Spider-Man suffers from not having saved the life of his uncle by using his power against the burglar.

Chapter 3

Ockham’s Proportionality: A Model Selection Criterion for Levels of Explanation

Jun Otsuka

Abstract Philosophers have long argued that a good explanation must describe its explanans at an appropriate level. This is particularly the case in social sciences and risk analyses, where phenomena of interest are often determined by both macro and micro factors. In the context of the interventionist account of causal explanation, Woodward (Philosophy of Science 81:691–713, [18]) has recently proposed that a cause must be proportional in the sense that it contains just enough information about its effect. The precise formulation of proportionality and its justification, however, have been under debate. This article proposes an interpretation of proportionality based on the Akaike Information Criterion, a statistical technique for model selection. In a nutshell, disproportional cause variables with too much detail often call for extra parameters, which increase a model’s complexity and impair its predictive performance. By focusing on a model’s predictive ability and its relationship to evidence, this chapter highlights the importance of a pragmatic, or what Woodward calls a “functional,” factor in the reductionism debate.

Keywords Ockham’s razor · Reduction · Individualism vs holism · Model selection · Akaike information criterion

3.1 Introduction

Philosophers have long argued that a good explanation must not only identify the right explanans but also describe it at an appropriate level [2, 11, 20]. This is particularly the case in social sciences and risk analyses, where phenomena of interest are often determined by both macro and micro factors. The cause of poverty, for example, may be attributed on the one hand to macrosociological factors such as recession, the taxation system, the extent of the social safety net, etc., and on the other hand to individual characteristics such as job skills, education, or health status. The ubiquity of competing sociological theories of different granularity has raised a long-standing


debate between individualists, who try to understand any social phenomenon in terms of the behavior, properties, or interactions of individual actors, and holists, who confer genuine explanatory roles on social structures or organizations [7, 9, 21]. The key question here is whether macro variables have any causal or explanatory power irreducible to the properties of their parts, despite the fact that the former supervene on, and thus are completely determined by, the latter. In the context of the interventionist account of causal explanation, Woodward [17] has recently proposed that a cause must be proportional, meaning that it must contain just enough information about its effect. This invites two questions: how is proportionality to be assessed or measured, and why is proportionality a good thing? This article proposes an interpretation of proportionality based on the Akaike Information Criterion (AIC; [1]). Akaike’s theory tells us that, other things being equal, the predictions of parsimonious models tend to be more accurate than those of complex models [3]. Applying this idea, I will argue that disproportional cause variables with too much detail often call for extra parameters, which increase a model’s complexity and impair its predictive performance. The proportionality criterion in this understanding is thus a variant of Ockham’s razor applied to the context of causal explanations [14, 15].

The chapter unfolds as follows. I begin in Sect. 3.2 with a brief description of Woodward’s notion of proportionality, followed by an examination of criticisms and interpretations of the concept offered by subsequent philosophical works (e.g., [4, 10]). Section 3.3 introduces my account of proportionality based on Akaike’s theory. After its formulation, the idea is illustrated with a simple simulation comparing the predictive accuracy of two—proportional and disproportional—models. The new approach for selecting a level of explanation has implications for reductionism, which are discussed in Sects. 3.4 and 3.5. The AIC-based proportionality clarifies the conditions under which multiple realizability does not bar reductive explanations: in short, successful reduction occurs when a lower-level theory integrates micro-level properties into a simple model. The approach also highlights pragmatic factors in the reductionism debate, most notably our ability to collect data, as a key to deriving the positive value of higher-level explanations. I will argue that this pragmatic nature makes my account of proportionality more in line with Woodward’s [18] functional approach to explanations.

3.2 Kinds of Proportionality

Proportionality is the requirement that the description of a cause “fit with,” or be “proportional” to, that of an effect, in the sense that it does not contain irrelevant detail. Consider Yablo’s [20] example of a pigeon trained to peck at red targets to the exclusion of other colors. Now suppose the red target the pigeon pecked on an occasion had a particular shade of scarlet. We then seem to have two ways of describing the situation:

1. The presentation of a red target caused the pigeon to peck.


2. The presentation of a scarlet target caused the pigeon to peck.

Provided they are both true, (1) strikes us as a better explanation than (2), because by assumption what makes a difference in the pigeon’s behavior is redness rather than scarletness. Proportionality captures this intuition. According to Woodward’s definition, a cause is proportional to its effect iff (a) it explicitly or implicitly conveys accurate information about the conditions under which alternative states of the effect will be realized, and (b) it conveys only such information—that is, the cause is not characterized in such a way that alternative states of it fail to be associated with changes in the effect ([17], p. 298). In Yablo’s example, describing the causative target as scarlet rather than red violates the second condition (b), because other non-scarlet reds, say dark red or rose, would still trigger the same pecking behavior, and thus these “alternative states fail to be associated with changes in the effect.” Proportionality is devised to rule out such redundant information, which plays no explanatory role.

Woodward’s proposal has come under close scrutiny in recent philosophical discussions. Franklin-Hall [4] interprets proportionality as a requirement that the functional relationship between a cause and an effect be bijective—the first part (a) of the definition requiring each cause to be mapped to a specific effect (one cause, one effect), while the second part (b) forbids distinct causes to be mapped to the same effect (one effect, one cause). Understood in this way, however, Franklin-Hall contends that proportionality fails to reject intuitively too fine-grained explanations. She notes that the above descriptions (1) and (2) of Yablo’s thought experiment are incomplete, because they do not specify the contrast class, i.e., what values the cause variable could take other than red (or scarlet). Franklin-Hall fills in that missing information and comes up with the following contrast class:

1* The presentation of a red target (other value: presentation of a non-red target) caused the pigeon to peck (other value: not peck).
2* The presentation of a scarlet target (other value: presentation of a cyan target) caused the pigeon to peck (other value: not peck). ([4], p. 564, with the order reversed)

Intuitively (1*) is the better explanation, for the same reason we favored (1) above, but Franklin-Hall argues that proportionality fails to support this intuition because the causal relationships in (1*) and (2*) are both bijective: in (1*) we have {red → peck, nonred → notpeck}, while in (2*) {scarlet → peck, cyan → notpeck}. This criticism, however, is an artifact of restricting the domain of the mapping relation to an arbitrary subset of all the target chips, which presumably include those that are neither scarlet nor cyan. What if the samples contain, say, cobalt or navy targets? (2*) says nothing about their consequences and is thus at best an incomplete description of the causal relationship.1 This could be patched by adding a third catch-all value such as “presentation of a target neither scarlet nor cyan,” but then the domain has three causal values and the relationship is no longer proportional in the bijective sense.

1 On the other hand, if indeed all targets are either scarlet or cyan, there is no difference in granularity between the two descriptions, and choosing between them is simply a matter of taste. Note the problem here (when there are more than scarlet and cyan targets) is that Franklin-Hall’s “variable,” having only scarlet and cyan as values, fails to satisfy a formal requirement of a random variable, defined as a function on the sample space. Hence her later consideration of “exhaustivity,” which amounts to adding other cause variables, does not affect the argument here.

Another—more sympathetic—interpretation comes from the group of Paul Griffiths and his collaborators, who use information theory to refine the concept of information in Woodward’s definition of proportionality [5, 10]. Recall that proportionality requires a cause X to convey enough information about the effect Y but no more. Griffiths et al. identify the amount of information that X carries about Y with their mutual information I(X; Y), which represents the extent to which knowing the state of X reduces the uncertainty about Y. In contrast, the excess of information in the cause X can be measured by its entropy H(X), which represents the uncertainty about X’s state. These two measures set conflicting objectives, because fine-graining a variable increases both its mutual information (with any other variable, including its effect) and its entropy. Proportionality can then be defined as an optimal balance between the two desiderata:

PropINF: a cause X of an effect Y must (a′) maximize the mutual information I(X; Y) while (b′) minimizing its entropy H(X) [10].

As can easily be shown, this is equivalent to choosing the coarsest cause variable that maximizes the mutual information. One strength of the information-theoretic interpretation is that it can handle continuous or stochastic variables. Suppose, as is very likely, that the pigeon in the above hypothetical experiment responds to stimuli only stochastically. Such a stochastic causal relationship cannot be expressed by a simple bijective function, but PropINF is applicable as long as we have the joint probability distribution over the cause and effect variables. In effect, the relationships do not even have to be causal—one could equally calculate PropINF for a correlational relationship with no direct causal link, although the focus of [10] is on causation. The same holds of any other proposal of proportionality, including Woodward’s, Franklin-Hall’s, and mine, and for this reason what follows treats proportionality as a criterion for the general problem of variable selection, not just for causes or effects.

Although theoretically attractive and versatile, the information-theoretic criterion is difficult to apply to actual problems, because the knowledge of the joint probability distribution it requires is hard to come by. A pigeon’s pecking probability, for example, is not something given a priori, but must be estimated from data (that is the reason we do experiments). Mutual information and entropy can also be calculated from data, but the problem is that the sample mutual information tends to overfit the data. The assumption of PropINF is that mutual information hits a “plateau” as the cause variable gets fine-grained—the proportional variable is the coarsest among those at the plateau. But as we will see later with a simulation study, sample mutual information tends to increase almost indefinitely in proportion to the granularity of the variable used. This suggests that PropINF is likely to fail to screen out too fine-grained descriptions in actual cases.
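For readers who want to check the arithmetic, here is a sketch of how PropINF-style quantities could be computed when the joint distribution is actually known. The probability tables are invented (equiprobable targets, pecking rates of 0.8 for the red shades and 0.2 for the blue ones); the point is that a coarse and a fine-grained cause variable can carry identical mutual information while differing in entropy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) in bits of a marginal distribution p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X; Y) in bits from a joint probability table (rows: X, cols: Y)."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    mi = 0.0
    for i, j in np.ndindex(joint.shape):
        if joint[i, j] > 0:
            mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

# Coarse cause variable: RED vs BLUE targets, equiprobable; columns are
# (peck, not-peck) probabilities conditional on the color.
coarse = np.array([[0.8, 0.2],
                   [0.2, 0.8]]) / 2

# Fine-grained variable: six shades, same pecking rate within each color.
fine = np.array([[0.8, 0.2]] * 3 + [[0.2, 0.8]] * 3) / 6

print(mutual_information(coarse), entropy(coarse.sum(axis=1)))  # ~0.28 bits, 1 bit
print(mutual_information(fine), entropy(fine.sum(axis=1)))      # same MI, ~2.58 bits
```

With the true distribution in hand, PropINF correctly prefers the coarse variable: the mutual information is the same while the entropy is lower. The trouble just described arises when these quantities must instead be estimated from finite samples.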


What motivates Woodward’s account of proportionality (along with his other criteria, such as specificity) is what he calls the functional approach to causation, which evaluates causal claims in terms of their usefulness or functionality in achieving our epistemic goals and purposes [18]. The project in this line involves “normative assessment (and not just description) of various patterns of causal reasoning, of the usefulness of different causal concepts, and of procedures for relating causal claims to evidence” (p. 694, italics in original). A philosophical analysis of proportionality, then, must identify the specific epistemic goal it is supposed to serve and clarify its connection to evidence. That is: why, how, and when is proportionality a good thing? My proposal is that a proportional cause variable is expected to give more accurate predictions than non-proportional ones when predictions are based on finite data. Proportionality, therefore, is not an a priori goal but rather a means to achieve predictive accuracy, and the decision as to whether a given description is proportional or not depends not only on the nature of the causal relationship but also on the amount of data we have to estimate the relationship. The next section substantiates this idea based on Akaike’s theory of model selection.

3.3 Model Selection Approach

The previous discussions of proportionality have asked what the appropriate level of description of a causal relationship is, assuming the relationship itself is already known. This assumption, however, is unrealistic, because in most empirical research scientists have to begin by hypothesizing the relationship between a putative cause and effect. The hypothesized relationship is called a model and is represented by a function that calculates the probability of an effect given the input of a cause. In our pigeon example, a model assigns a probability of pecking to each target presented.

There are various ways to model the same phenomenon. To illustrate this, imagine that two experimenters, Simplicio and Complicatio, come up with different models of the pigeon’s behavior. Simplicio thinks the only thing that makes a difference to the pigeon’s behavior is whether the target is RED or BLUE. He thus builds a model with two parameters, which specify the probability of pecking a target of each color, P(peck | RED) and P(peck | BLUE). Complicatio thinks that is not enough. His hypothesis is that pigeons have better vision than humans and can distinguish subtle nuances in color. Accordingly, the target chips that look red to us must be further classified into DARK RED, SCARLET, and ROSE, and the blue targets into CYAN, COBALT, and NAVY. Complicatio’s model thus has six parameters, one for each conditional probability given a specific shade. This model is clearly more fine-grained than Simplicio’s, and this difference in granularity is reflected in the number of parameters of the respective models.

To decide which model is better, they jointly run an experiment and fit their models to the obtained data. How well a model fits the data can be evaluated by looking at its likelihood, which is the probability of the data given the model, P(data | M). A high likelihood means that the observed data are well predicted by the model, which certainly seems a


good sign. Since a model’s likelihood depends on its parameters, one can choose the set of parameters that maximizes the model’s likelihood, or its log-likelihood, which comes to the same thing (taking the logarithm just makes the calculation easier). Such parameters are called maximum likelihood estimators. In our case, they are the actual frequencies of pecking—hence if pigeons have pecked on 4 out of 10 total RED target presentations, the maximum likelihood estimator of P(peck | RED) is simply 0.4.

Suppose Simplicio and Complicatio have done their math and obtained the maximum likelihood of their models. Which model fits the data better? Without exception, the winner is Complicatio. In nested models like those we have here, the likelihood can only increase, never decrease, as a model’s parameters increase, because a model with more parameters is more flexible in “fitting” the data—and this is so even if it contains seemingly redundant or unnecessary parameters. To borrow Hitchcock and Sober’s [6] expression, likelihood measures how well a model accommodates data, i.e., the facts that have already happened. In our case, Simplicio’s model is a special case of Complicatio’s with P(peck | DARK RED) = P(peck | SCARLET) = P(peck | ROSE) and P(peck | CYAN) = P(peck | COBALT) = P(peck | NAVY). This means any data that can be “accommodated” by Simplicio’s model can be handled equally well by Complicatio’s. This is a general phenomenon: any reductive model has a likelihood at least as high as its less-specific counterparts, and thus better accommodates data. This fact underlies the reductionist intuition that a lower-level description allows a finer representation of reality and is thus epistemologically superior.

Accommodating the past, however, is not always our epistemic goal, nor is it even an important one. Hitchcock and Sober [6] rather emphasize prediction as a major goal of building scientific models, and argue that a complicated model may not give an optimal result in this respect. It is not difficult to see why in the present case. As an extreme example, we can imagine a model that counts every single presentation of a target as a different stimulus (this is in a sense true, for no two events are exactly the same: there is always a difference, say, in the lighting conditions). Although such a “highly detailed” model is guaranteed to have the highest likelihood, it says nothing about what will happen at the next presentation of a target, which it considers to be unlike any other in the past. Hence a model that best accommodates the past is not necessarily the one that serves best for predicting the future. Such a model is said to overfit the existing data, at the expense of its ability to predict novel data.

Estimating the predictive accuracy of a model is the principal goal of model selection, whose philosophical implications have been discussed by Forster and Sober [3] along with related works [6, 13, 14]. Here, I summarize the idea. Above we saw that a model’s ability to accommodate data is measured by its likelihood, the probability of the observed data given that model. In contrast, the predictive ability of a model is measured by its expected log-likelihood, E[log P(data | M)], the log-likelihood averaged over all possible datasets including unobserved ones [3, 14]. The higher this value is, the better the model predicts future datasets on average.
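The accommodation point can be checked directly. In the following sketch the counts are invented; the only claim illustrated is that the maximized log-likelihood of the coarser, nested model can never exceed that of the finer one:

```python
import numpy as np

def log_likelihood(pecks, trials, p):
    # Binomial log-likelihood of `pecks` out of `trials` at pecking rate p
    # (the binomial coefficient is the same for both models and is omitted).
    return pecks * np.log(p) + (trials - pecks) * np.log(1 - p)

# Invented counts: pecks out of 10 presentations for each of the six shades.
shades = {"dark red": 9, "scarlet": 8, "rose": 7, "cyan": 3, "cobalt": 2, "navy": 1}

# Complicatio: one maximum-likelihood parameter per shade (the sample frequency).
ll_comp = sum(log_likelihood(k, 10, k / 10) for k in shades.values())

# Simplicio: one parameter per color, pooling the three shades of each color.
red, blue = 9 + 8 + 7, 3 + 2 + 1
ll_simp = log_likelihood(red, 30, red / 30) + log_likelihood(blue, 30, blue / 30)

print(ll_comp >= ll_simp)  # True: the finer, nested model never fits worse
```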
Table 3.1 Specification of the two simulation experiments. In Experiment 1, the pecking probabilities depend only on color (RED/BLUE) and individual random effects. In Experiment 2, they also depend on differences in shade. In each experiment, a total of 10 targets is presented to each of 5 pigeons. After each experiment, Simplicio's and Complicatio's models were fitted to the data and their AIC was calculated using the glmmML function in R

Color | Shade    | Experiment 1 | Experiment 2
------|----------|--------------|-------------
Red   | Dark red | 0.8          | 0.9
Red   | Scarlet  | 0.8          | 0.8
Red   | Rose     | 0.8          | 0.7
Blue  | Cyan     | 0.2          | 0.3
Blue  | Cobalt   | 0.2          | 0.2
Blue  | Navy     | 0.2          | 0.1

Because it concerns future, not-yet-observed data, the predictive accuracy (expected log-likelihood) of a model cannot be calculated from the observed data, but must be estimated. Akaike [1] showed that under certain conditions, which do not concern us here, its unbiased estimator is given by

logP(data|M) − k,

where k is the number of free parameters of the model.

I follow Forster and Sober [3] and call this estimate the AIC score of model M.²

² This definition differs slightly from the convention in the model selection literature, where the AIC score is defined as this estimate multiplied by negative two, i.e., −2logP(data|M) + 2k.

Akaike's result identifies two factors that affect a model's predictive performance: its log-likelihood and the number of its parameters. These factors often conflict: as we have seen, complex models with more parameters tend to have a higher log-likelihood, while their complexity is penalized through the second component, k. Taken together, Akaike's theory tells us that a model that achieves the best balance between its ability to accommodate a given dataset and its simplicity will have the best average predictive accuracy.

Akaike's theory has an important implication for our discussion of proportionality. Recall that in our experimental setup, the number of parameters corresponds to a model's descriptive level: Simplicio's model has only two parameters, whereas Complicatio's has six. The question of their comparative performance thus boils down to whether the extra details/parameters introduced by Complicatio to distinguish different shades actually "pay off," i.e., boost the log-likelihood by more than the margin of 4. The answer is contingent upon the nature of the data, and to illustrate this I performed two simulation experiments under different setups.

In the first simulation, pigeons are assumed to peck any reddish target with a constant probability, as shown in the third column of Table 3.1. From these parameters, 1000 datasets were generated, and for each the difference in AIC between Simplicio's model (Msimp) and Complicatio's model (Mcomp) was calculated.

Fig. 3.1 Differences in AIC between Complicatio's and Simplicio's models, calculated from 1000 datasets generated with each of the parameter sets in Table 3.1. In Experiment 1 (solid line), the AIC of Simplicio's model is smaller than that of Complicatio's in most cases, with a mean difference of 3.47. In contrast, the plot for Experiment 2 (dotted line) is roughly symmetric around zero (mean = −0.74)

The solid curve in Fig. 3.1 represents the relative frequencies of the differences under this setup, and shows that in most cases the simpler model Msimp scored a higher AIC. Hence in this case the AIC favors the simpler model, in accordance with our intuition.

This contrasts with mutual information. When calculated from samples in the above simulation, Complicatio's variables always had a higher sample mutual information than Simplicio's, with means of 0.90 and 0.67, respectively (mean difference = 0.23, standard deviation = 0.10). Hence PropINF as proposed by Pocheville et al. [10] ends up favoring the too-detailed model in every run, despite the fact that it carries no extra information. This apparent puzzle stems from random fluctuation in the data. The two models will have the same mutual information only if there is no difference in the actual pecking rate among different shades. But the stochastic nature of the experiment means there are always slight differences, which are then counted as "extra information" in calculating sample mutual information.

Next, suppose the pigeons do differentiate shades, with the true pecking rates as shown in the rightmost column of Table 3.1 ("Experiment 2"). The dotted curve in Fig. 3.1 is the plot of AIC(Msimp) − AIC(Mcomp) obtained under this new setup. This time the difference in AIC between the two competing models is less noticeable, with the mean close to zero (−0.37). This means that even though Simplicio's model is wrong, it is almost on a par in predictive ability with Complicatio's model, which better captures the reality. Truth, therefore, is not the only arbiter of a model's predictive ability; simplicity also matters. Sometimes a coarse-grained model that ignores the details of nature may be useful in predicting the future.
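For readers who wish to reproduce the flavor of this simulation, the following is a simplified Python sketch. It is not the chapter's own analysis: the chapter fits models with per-pigeon random effects via R's glmmML, whereas this version uses plain binomial models, the chapter's score convention AIC = logL − k, and a per-shade sample size of my own choosing.

```python
import math
import random

SHADES = ["dark_red", "scarlet", "rose", "cyan", "cobalt", "navy"]
COLOR = {"dark_red": "red", "scarlet": "red", "rose": "red",
         "cyan": "blue", "cobalt": "blue", "navy": "blue"}
P_EXP1 = dict(zip(SHADES, [0.8, 0.8, 0.8, 0.2, 0.2, 0.2]))  # Table 3.1, Exp. 1

def max_loglik(counts):
    """Sum of maximized Bernoulli log-likelihoods (p-hat = pecks/trials)."""
    ll = 0.0
    for pecks, trials in counts.values():
        p = pecks / trials
        if 0 < p < 1:  # categories with p-hat of 0 or 1 contribute exactly 0
            ll += pecks * math.log(p) + (trials - pecks) * math.log(1 - p)
    return ll

def aic_difference(n_per_shade=25):
    """AIC(Msimp) - AIC(Mcomp) for one simulated dataset."""
    shade_counts = {s: [0, 0] for s in SHADES}           # [pecks, trials]
    color_counts = {c: [0, 0] for c in ("red", "blue")}
    for s in SHADES:
        for _ in range(n_per_shade):
            peck = random.random() < P_EXP1[s]
            for counts, key in ((shade_counts, s), (color_counts, COLOR[s])):
                counts[key][0] += peck
                counts[key][1] += 1
    aic_simp = max_loglik(color_counts) - 2   # two free parameters
    aic_comp = max_loglik(shade_counts) - 6   # six free parameters
    return aic_simp - aic_comp

diffs = [aic_difference() for _ in range(1000)]
print(sum(d > 0 for d in diffs) / 1000)  # how often the simpler model wins
```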

3 Ockham’s Proportionality: A Model Selection Criterion …

55

Elliott Sober [13, 14] has argued that Akaike's theory gives theoretical support for the use of Ockham's razor, i.e., our preference for simpler models. A similar line of argument can be made with respect to proportionality. The basic idea is that proportional variables should be preferred because they are conducive to better predictive performance. Too detailed variables, such as those adopted by Complicatio, tend to require more parameters, at the cost of impairing the model's average predictive ability. On the other hand, a model must have enough granularity to correctly describe the causal relationship in question. This observation motivates us to use the AIC score to calibrate the level of description:

PropAIC: when comparing models Mi, i = 0, 1, ..., of different granularities, the proportionality of a model Mi with respect to data D is estimated by its AIC score, logP(D|Mi) − k.

A model proportional in this sense is preferred because it is conducive to accurate predictions. Like the previous accounts, including Woodward's original definition, PropAIC requires a cause to convey both enough information about its effect and no more than necessary. The first component, the log-likelihood, measures the informativeness of the model, or how well its putative cause explains the observed outcomes. The second part of the AIC, in contrast, guards against overdetailing by imposing a cost on the number of parameters. Hence, as in the original version, PropAIC seeks proportionality as a balance between two desiderata, informativeness and parsimony.

There are also dissimilarities, however. The first point of difference concerns epistemic goals. Woodward's motivation for proportionality is to obtain a simpler account of the true causal relationship, or in other words, a parsimonious picture of reality. In contrast, PropAIC is specialized for prediction tasks, favoring a simpler relationship for the sake of predictive accuracy. These two aims can conflict—the true model may not necessarily give accurate predictions, as suggested above in Experiment 2, where the predictive performance of Complicatio's true model was only a little better than that of the less faithful Simplicio's model. Should the differences in parameters among shades be less pronounced, Simplicio's model could well have a higher AIC score. This reflects the instrumentalist character of Akaike's theory, which prioritizes predictive accuracy over a true description of reality [13].

The second conspicuous difference is the explicit mention of data. Previous treatments of proportionality have questioned only the nature of the functional relationships connecting causes and effects, without regard to the data with which these relationships are estimated. In contrast, PropAIC explicitly depends on the data at hand, so that a model judged proportional by one set of data may be judged otherwise by a different set. The appropriate level of description depends on how much data we have. This again comes from the nature of the AIC as an estimate of predictive ability and the fact that the best predictive strategy hinges on the size of the available datasets. The next section further discusses these two characteristics with a view to deriving their implications for the reductionism debate.


3.4 Multiple Realizability and Reductionism

In the philosophical literature, levels of explanation have been discussed in relation to the multiple-realizability argument against reductionism. A property A is said to multiply realize another property B if a change in the latter entails a change in the former, but not vice versa. In the variable notation used here, multiple realizability means that the function that maps the values of a lower-level variable to the corresponding values of a higher-level variable is non-invertible [19].³ The existence of such a "coarse-graining" function guarantees that any state of a micro variable corresponds to a unique state of a macro variable, but not the other way around: there is at least one macro state that is multiply realized by two or more micro states. In the above pigeon experiments, Complicatio's variable describing shades multiply realizes Simplicio's color variable in this sense.

³ More formally: a random variable is a real-valued function defined on an algebra F of a sample space. A random variable X supervenes on another Y iff for any a, b ∈ F, if X(a) ≠ X(b) then Y(a) ≠ Y(b). Y multiply realizes X iff X supervenes on Y but not vice versa. It is easy to see that in the latter case the coarse-graining function that assigns X(a) to Y(a) is non-invertible.

Multiple realizability has been philosophers' pet argument against reductionism. Fodor [2] claimed that because psychological states are expected to be multiply realized by a number of distinct neurological or physical states that share no nontrivial common properties, psychological generalizations cannot be represented in any way but by a messy disjunction of neurological laws. Similarly, the gist of Putnam's [11] famous peg-and-hole example was that the multiple realizability of the structural features of the peg and hole at the particle level makes lower-level explanations based on the latter less general and thus inferior.

These anti-reductionist arguments, however, did not go unchallenged. Sober [12] questions Fodor's premise that a disjunction of laws is not itself a law or explanatory, for many paradigmatic laws, such as "water at surface pressure will boil when it exceeds 100 °C," seem themselves to be disjunctive, saying that water boils at 100 °C, at 101 °C, at 102 °C, and so on. He also criticizes Putnam, claiming that universality is not the only desideratum of scientific explanations; one may well be interested in depth as well as breadth, and those who seek deep explanations may legitimately prefer lower-level descriptions.

Few philosophers today doubt the explanatory relevance of higher-level sciences such as psychology or sociology. Anti-reductionists like Putnam and Fodor, however, make the stronger claim that these higher-level explanations are epistemologically better than their lower-level counterparts, and it is this claim that is in contention here. Why should we prefer macroscopic explanations? The answer suggested by the present thesis is that they provide more accurate predictions. The experiments we saw in the previous section fit Fodor's scheme of reduction, where Complicatio's predictor variable (i.e., the antecedent of his causal law) multiply realizes that of Simplicio. As a result, Complicatio had to devise six distinct laws to express the same relationship that took Simplicio only two. The extra complexity bought Complicatio's model the flexibility to accommodate the obtained experimental results, but did not help it predict future outcomes.

3 Ockham’s Proportionality: A Model Selection Criterion …

57

A moral here for the anti-reductionism debate is that multiple realization and the resulting disjunctive laws of a lower-level science may lead to overfitting, which is why higher-level explanations should be (at least sometimes) preferred.

Complicatio's reductive model is said to overfit the data because his variable wrongly assumes differences in causal properties where there are none, or only small ones. In this sense, his variable does not carve nature at its joints. But in reality the "joints" may not be so conspicuous, or even discrete. In Experiment 1 one can easily recognize two causally distinct properties, Red and Blue. The distinctions among the shades in Experiment 2 are less obvious. These are just putative examples, and reality can be more subtle, with differences on the order of one-hundredth or one-thousandth. Do such differences still mark joints? Reductionists will say yes, because ignoring such niceties, however small they are, yields a bias in prediction. The reductionist preference for micro variables is thus motivated and justified by the search for unbiased laws that have as few exceptions as possible.

However, the avoidance of bias in pursuit of exceptionless generalizations is not the only, nor even the major, goal of science. Another important goal is to reduce the variance of estimators—that is, we wish to estimate the parameters of our laws in a more precise fashion. In general, the variance becomes inflated when a model contains a large number of parameters relative to the size of the sample used for its estimation. Hence there is a trade-off between a model's bias and its variance: the more parameters we introduce to guard our model against potential biases (thereby making our law more disjunctive), the bigger the variance of our estimators becomes, and vice versa. Traditional reductionists can be seen as attaching heavy weight to the bias side of this trade-off, whereas anti-reductionists stress the variance side. But as far as predictive ability is concerned, the virtue lies in the middle: Akaike's theory implies that, if the goal of finding a lawful relationship is to use it for future prediction, the best granularity must balance these two desiderata.

The key in the above discussion is the number of parameters; multiple realization impairs the predictive performance of the reducing theory provided its disjunctive laws require separate parameters. However, there are cases where multiple realization is not accompanied by an increased number of parameters, but rather enables the formulation of an even simpler law at the lower level. Let us illustrate such a case of successful reduction with the second experiment of the previous section, in which the pigeons' pecking rate varied among shades (Experiment 2 of Table 3.1). Imagine that these pigeons are actually responding to light frequency, so that their pecking rate is a function of the frequencies of light reflected by the targets. Suppose further that the frequencies of the shades are 420, 450, 480, 600, 630, and 660 THz for Dark Red, Scarlet, Rose, Cyan, Cobalt, and Navy, respectively. Now a third experimenter, Salviati, intuits this and builds the following model, in which the pecking rate of pigeons is a linear function of light frequency X:

Probability of pecking = f(α + βX), for some function f and parameters α, β.   (3.1)


Although Salviati’s X variable takes real values and is definitely finer-grained than that of the other two experimenters, his model has only two parameters, α and β. If this model is fitted to the same data used in Experiment 2, we see Salviati’s model enjoys much better AIC scores than the other two models (Fig. 3.2), which suggests that Salviati’s model is more accurate despite the fact that his “law” is much more disjunctive, summarizing infinite laws for each value of the real-valued variable X. There are two reasons for the success of Salviati’s model. First is the metric assumption that colors and shades come in degree and can be expressed by a ratio scale (frequencies). The metric assumption allows one not only to order color stimuli, but also to apply various arithmetic operations such as addition or multiplication. This insight presumably comes from knowledge of optic theory, and provides a deeper understanding of the nature of the cause variable X . The second key factor for the success is the functional assumption that the shades thus expressed are systematically related to the pecking rate via Eq. (3.1). This formula assumedly summarizes a theory about the complex neurological and physiological mechanisms relating visual stimuli to pigeons’ behavior. Salviati’s model thus stands on the shoulders of these elaborated theories, which make his law distinct from mere disjunctions. The difference is a systematic relationship—Salviati’s law (3.1) does not just tell us the pecking rate for each target, but does so systematically. This is also the reason why the law about the boiling point of water mentioned by Sober [12] should be distinguished from what Fodor [2] had in mind when he dismissed disjunctive laws as non-explanatory. Temperature is measured by interval scale, and already has a rich metric structure to it; hence, saying that water boils when it exceeds 100 °C is not the same as saying that it boils at 100 °C, 101 °C, and so on.

Fig. 3.2 Comparison of AIC among the Linear, Macro, and Micro models. Even though Salviati's X variable is much finer-grained than that of the other two, his linear model scores smaller AICs (solid line) and is expected to provide more accurate predictions

3 Ockham’s Proportionality: A Model Selection Criterion …

59

A successful reduction happens, therefore, when a scientific theory enables us to formulate a systematic relationship at the lower level in a simple way. Of course, such theories and relationships are hard to come by in most special sciences such as biology, psychology, and sociology; the only reason Salviati could come up with his nice solution above is that I made it so. In reality there are objective and epistemological challenges. First, it may simply be the case that nature at its microscopic scale lacks systematic relationships, as Fodor [2] surmised. Or, even if they exist, these relationships may forever stay hidden from our scientific investigations.

In addition to these two obstacles to successful reduction, the model selection perspective suggests a third, pragmatic factor that should be considered in the discussion of reductionism. The pragmatic consideration comes from the nature of AIC as a tool for evaluating the average predictive accuracy of a model [14]. Which model counts as the best tool naturally depends on our goal, as well as on the size of the data used to fit the model. For example, a model suited for predicting from small datasets does not necessarily fare well with large datasets. We have seen that Simplicio's and Complicatio's models almost tied in Experiment 2; but with a bigger sample size (e.g., with 10 instead of 3 target presentations in each trial, and 10 instead of 5 pigeons), Complicatio's model outcompeted Simplicio's, with a mean difference in their respective AIC scores of AIC(Mcomp) − AIC(Msimp) = 11.4. Thus, in this data-rich situation, PropAIC favors Complicatio's reductive model as being more proportional. In general, increasing the sample size allows for finer tuning of reductive models. The appropriate level of description, therefore, depends on the size of the data at our disposal. If we have a large dataset, it makes more sense to adopt fine-grained models; otherwise, we might do better to stay at a macro level.
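The sample-size effect described above can be made tangible with a short sketch in the same simplified binomial setting as before (Experiment 2 rates from Table 3.1; the two sample sizes are arbitrary choices of mine). The printed mean of AIC(Mcomp) − AIC(Msimp) is typically negative at the small size and clearly positive at the large one.

```python
import math
import random

RATES = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # dark red ... navy (Experiment 2)

def max_loglik(counts):
    """Sum of maximized Bernoulli log-likelihoods over categories."""
    ll = 0.0
    for k, n in counts:
        p = k / n
        if 0 < p < 1:  # p-hat of 0 or 1 contributes exactly 0
            ll += k * math.log(p) + (n - k) * math.log(1 - p)
    return ll

def aic_diff(n_per_shade):
    """AIC(Mcomp) - AIC(Msimp) for one dataset simulated from RATES."""
    shade = [(sum(random.random() < r for _ in range(n_per_shade)), n_per_shade)
             for r in RATES]
    red = (sum(k for k, _ in shade[:3]), 3 * n_per_shade)
    blue = (sum(k for k, _ in shade[3:]), 3 * n_per_shade)
    return (max_loglik(shade) - 6) - (max_loglik([red, blue]) - 2)

for n in (5, 100):  # few vs. many presentations per shade
    mean = sum(aic_diff(n) for _ in range(1000)) / 1000
    print(n, round(mean, 2))  # the detailed model pulls ahead as n grows
```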

3.5 Objective, Epistemological, and Pragmatic Aspects of Reduction

The relationship among the three aspects of reduction mentioned above—objective, epistemological, and pragmatic—can be further clarified by comparing AIC and mutual information. Above we treated mutual information I(X; Y) as a measure of the amount of information X carries with respect to Y. There is an alternative interpretation, based on the following identity:

I(X; Y) = KL(P(X, Y); P(X)P(Y)),   (3.2)

where KL is the Kullback–Leibler divergence (KL divergence), an information-theoretic measure of the distance between two distributions.⁴ From this perspective, mutual information measures the distance of the product of the marginal distributions, P(X)P(Y), from the joint distribution, P(X, Y). If this distance is zero, then P(X, Y) = P(X)P(Y); that is, X and Y are independent, and thus there is no point in considering X and Y together in the form of a joint distribution. In contrast, a large distance suggests that treating X and Y separately likely misses the whole picture. Mutual information thus measures the modeling opportunity—that is, how worthwhile it is to relate X to Y to begin with.

⁴ To be precise, the KL divergence is not a distance, because it is not symmetric—i.e., KL(f, g) ≠ KL(g, f) in general. This, however, is not relevant to the discussion here.

Fig. 3.3 A hypothetical plot of the (estimated) distance of various indices from the null model P(X)P(Y) for different granularities of X. Solid curve: the true joint distribution P(X, Y) sets the upper bound of the information one can exploit by relating X and Y. Dashed curve: as X becomes more detailed, a model f(X, Y) approaches the truth P(X, Y) but does not reach it; the remaining distance a is due to our ignorance of the true distribution. Dotted curve: the expected distance from the true distribution of a model fitted with a finite sample may increase if a finer description introduces more parameters; b represents the loss due to a pragmatic constraint on available sample size
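As a quick numerical sanity check of identity (3.2), the following sketch (the joint distribution is an arbitrary one of my own, not from the chapter) computes mutual information in two ways: via entropies, and as the KL divergence of the joint distribution from the product of its marginals.

```python
import math

# An arbitrary joint distribution over a cause X and an effect Y.
joint = {("red", "peck"): 0.32, ("red", "no"): 0.08,
         ("blue", "peck"): 0.12, ("blue", "no"): 0.48}

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p   # marginal P(X)
    py[y] = py.get(y, 0.0) + p   # marginal P(Y)

def entropy(dist):
    return -sum(p * math.log(p) for p in dist.values())

# Mutual information via entropies: I(X;Y) = H(X) + H(Y) - H(X,Y).
mi = entropy(px) + entropy(py) - entropy(joint)

# KL divergence of the joint from the product of marginals, per (3.2).
kl = sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in joint.items())

print(round(mi, 10) == round(kl, 10))  # True: the two quantities agree
```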

3 Ockham’s Proportionality: A Model Selection Criterion …

61

Now, consider plotting I(X; Y) for different granularities of the X variable (solid curve in Fig. 3.3). The horizontal axis of the plot represents the granularity of X, where a variable at each point multiply realizes all the variables to its left.⁵ The vertical axis measures, for a given level of X, how much information it has about Y, or equivalently from (3.2), the distance of the joint distribution from the "null model" P(X)P(Y) in which X and Y are treated as unrelated. Since detailing a variable never gets rid of the information it already has, the solid curve is non-decreasing, but the steepness of the slope depends on the nature of the relationship between X and Y. If it is steep, we can exploit more information about the effect by further detailing the cause; i.e., there is much opportunity for reduction. In contrast, a flat slope means that higher-level properties already exhaust most of the potential information about the effect. The slope of I(X; Y), therefore, reflects the objective constraint on reduction imposed by the nature of the causal relationship connecting the two variables. The proposal of Pocheville et al. [10] is to find the plateau of this curve, i.e., the coarsest X that exhausts all the information about Y we can get by knowing their true relationship.

⁵ Since multiple realization forms a partial order, there are multiple, possibly infinite, ways to align variables according to their granularity. The X-axis of the plot is just one of them.

In contrast to the objective constraint, which pertains to the nature of the relationship and is encoded in the true joint distribution P(X, Y), the epistemological constraint on reduction stems from our ignorance. For want of the true picture, we build a model f(X, Y) that we think approximates P(X, Y). Since a model is only an approximation of the truth, it has a nonzero KL divergence from the true distribution and is positioned somewhere between P(X, Y) and the "null model" P(X)P(Y) in Fig. 3.3. This KL divergence is negatively proportional to the expected log-likelihood of the model, the value that AIC tries to estimate:

KL(P(X, Y); f(X, Y)) = Const. − E[log f(X, Y)].   (3.3)

In reality the expected log-likelihood stays unknown (the right-hand side is an expectation over the true probability distribution) and thus must be estimated from finite samples, say via AIC. But here let us assume our limitation is only epistemic, and that we have infinite data with which to correctly determine the expected log-likelihood of the model at different granularities of X. Under this assumption, a model's expected log-likelihood never decreases as its variable gets fine-grained, which means the KL divergence of the model from the true distribution is non-increasing (dashed curve in Fig. 3.3). The actual slope of the curve depends on the model. A model showing a steep slope, for example, will approximate the truth well on microscopic scales, and thus has a high potential for reduction. The KL divergence (3.3) of the model from the true distribution thus represents the loss of the opportunity for reduction due to the epistemic limitation of not knowing the true distribution.

Finally, where does the pragmatic factor fit in this plot? The pragmatic limitation relevant to the current discussion concerns our data-gathering ability. With a finite sample, a model's predictive performance depends on whether we have enough data to afford its complexity. AIC is formally derived as an estimate of the average KL divergence of the distribution predicted by a model from data sampled from the true distribution. This distance may increase as a model gets finer-grained, as we have seen in our simulation experiments. Maximizing the AIC score among models with different granularities amounts to minimizing the distance between the solid curve and the dotted curve in Fig. 3.3. The best, or most proportional, model in this sense, indicated by PropAIC in the figure, tends to be coarser than the optimal model under infinite sample size, with the difference between them (b in Fig. 3.3) representing the pragmatic limitation on our data-gathering ability.

The objective, epistemological, and pragmatic limitations on reduction can thus be visualized as divergences from the null model. Note that this plot is just for illustration and is not meant to be a representative case; the shape of the curves depends on the nature of the relationship and the model in question. Qualitative remarks about the figure, however, are general: (i) the information X contains about Y with regard to the true distribution is never lost as X gets finer-grained, thus the solid curve is always non-decreasing; (ii) the information X contains about Y with regard to a model (dashed curve) is also non-decreasing, but does not reach the mutual information; (iii) a model's actual performance as estimated by AIC with finite samples (dotted curve) may decrease as X gets fine-grained; (iv) for these reasons the AIC-based proportionality (PropAIC) is always coarser than that based on mutual information (PropINF). The loss of information (a + b) due to this coarse-graining reflects the two limitations discussed above, namely our ignorance of the true distribution and the limited data with which to fit a hypothesized model.

The plot also helps us understand various attitudes toward reductionism. First, one may construe the problem of reduction as an in-principle matter that concerns the true picture of the world, or ideally completed sciences. The primary question on this construal would be which level faithfully captures all there is to know about the relationship between two variables, or maximizes their mutual information. If this is the problem, reduction to a lower-level science "never hurts," for mutual information (solid curve) is a non-decreasing function of granularity. There may be a point, PropINF, beyond which no further reduction yields additional information and thus is unnecessary, but nevertheless innocuous. The objective, in-principle attitude thus admits only this weak form of anti-reductionist stopping rule.

Next, those who take the inherent incompleteness of our scientific knowledge seriously might be interested in the epistemological merit of reduction, and would ask whether reduction improves our theory by bringing it closer to the truth. Their question, then, is which level minimizes the KL divergence of a model from the true distribution, that is, the distance between the solid and dashed curves in Fig. 3.3. This shift in question, however, does not affect the overall inclination toward reductionism. Because the KL divergence in question is non-increasing, there is no penalty for a lower-level variable; any model is at least as close to the truth as its coarse-grained version that uses a multiply realized variable. Hence this construal, too, motivates only the weak form of anti-reductionism.

Finally, consider a more realistic stance that acknowledges not only the incompleteness of scientific knowledge but also the limits of our data-gathering ability. The question then is which level best serves our epistemic purposes given the finite data available in a specific research context. In this case reduction is not always good, at least for the purpose of prediction; reducing variables beyond a certain granularity, marked by PropAIC, is not just otiose but potentially harmful to a model's predictive performance (dotted curve). Hence the focus on the pragmatic limitation motivates the strong form of anti-reductionism, which cautions against a definite demerit of reductive investigation.

Woodward's functional account best fits the last of these three attitudes toward reductionism, with its focus on the "usefulness of different causal concepts, and of procedures for relating causal claims to evidence" ([18], p. 694). Evaluating usefulness makes sense only in relation to users who are limited in both knowledge and resources.
The advantage of the AIC-based approach presented here is its explicit recognition of these limitations, from which it derives the positive value of proportionality: namely, that proportional variables can be more useful despite containing less information than finer-grained descriptions.

3 Ockham’s Proportionality: A Model Selection Criterion …

63

An implication of this is that proportionality is a pragmatic standard rather than an epistemic criterion of truth. Although some philosophers have expressed concern that the focus on pragmatics introduces a kind of anthropocentrism into scientific explanations [4], I argue that this is a virtue rather than a vice of our account. First of all, very few philosophers today, if any, would deny the pragmatic dimensions of scientific practices and explanations [16]. Moreover, a consideration of pragmatic factors proves essential in the context of sociological studies, policy making, and risk analyses, where the range and amount of possible observations are severely limited for practical, financial, and ethical reasons. Facing complex social issues, scientists and policy makers must limit their research focus to only a tiny fragment of the possibly relevant factors and draw conclusions based on relatively small datasets. In such situations, it makes much more sense to let one's model and conclusions depend on pragmatic factors than on some epistemic ideal one can never attain. In this respect, the pragmatic aspect of the present approach is not a philosophical drawback, but rather a necessary element for understanding our explanatory practices.

3.6 Conclusion

Justifying the use of high-level explanations in the so-called special sciences has long been a major challenge in the reductionism debate and in philosophical theories of explanation in general. The proportionality criterion was proposed to save high-level causal explanations, but its precise formulation and, more importantly, its epistemological merit have come under discussion. This paper has offered a new interpretation of proportionality based on the Akaike Information Criterion. AIC-based proportionality estimates the predictive accuracy of a model by balancing its bias and variance. A model with too detailed a variable tends to overfit data, which impairs its predictive performance. In such cases we should prefer macroscopic, less detailed explanations over microscopic ones.

The chapter has illustrated this with a rather simplistic example of pigeons, but one can easily imagine a similar explanatory task arising in the social sciences. For instance, a researcher may be interested in whether the political climate of a country affects its ratification of an international pact on some cause, say environmental protection. The explanatory variable here can be described at various levels: one can, like Simplicio, dichotomize it into either conservative or liberal, or, as Complicatio did, adopt a finer sub-categorization of the political spectrum that distinguishes neo-liberalism, social democracy, green parties, populism, etc. The latter by definition gives a more detailed picture, but not necessarily a better prediction as to whether a new country will ratify the pact in question. As we have seen, which descriptive level the researcher should choose depends on the data available to fit the models as well as on the nature of the problem.

This conclusion sheds new light on the long-standing debate on reductionism in the social sciences. The discussion between methodological individualism and holism has mainly revolved around the in-principle derivability of macro states, properties or theories from their micro counterparts [9].


But as Lohse argues, the choice between individualist and holist explanation should depend "on our epistemic interest (what do we want to know?) and pragmatic aspects such as efficiency" [8]. The AIC-based approach takes this pragmatic aspect into account by evaluating the "efficiency" of explanations/models of different granularities in terms of their predictive ability. Since the AIC score depends on objective, epistemic, and pragmatic factors, which are all case-relative, the present approach supports the local, piecemeal view of reduction rather than the classical view that focuses on the derivability of entire theories [12, 17]. According to the piecemeal view, whether we should adopt a reductive or "individualist" explanation should be determined not by in-principle fiat, but by case-by-case consideration of the empirical as well as pragmatic circumstances of the problem at hand. The model selection perspective described in this paper clarifies which factors should be accounted for in each such decision, and why.

The focus on pragmatics is in line with the "functional approach" [18], which takes usefulness in actual scientific practices as an important (or, in Woodward's view, the only) arbiter of philosophical accounts of explanation. Usefulness in the present context has meant predictive accuracy. This particular choice reflects our use of causal models to predict (intervention) consequences, but I by no means claim this to be the only criterion of usefulness. As another research context not so much related to prediction, one may be interested in finding a descriptive level of longitudinal data at which any variable of the auto-regression model screens off all the prior variables from subsequent ones. The AIC-based proportionality proposed in this paper falls short for such a purpose, because it does not guarantee the desired Markov property. The current proposal, therefore, should be understood as an interpretation of proportionality for the purpose of prediction. How different epistemic goals affect our choice of description remains to be seen.

References

1. Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723.
2. Fodor, J. A. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28(2), 97–115.
3. Forster, M., & Sober, E. (1994). How to tell when simpler, more unified, or less ad hoc theories will provide more accurate predictions. British Journal for the Philosophy of Science, 45(1), 1–35.
4. Franklin-Hall, L. R. (2016). High-level explanation and the interventionist's 'variables problem'. The British Journal for the Philosophy of Science, 67(2), 553–577.
5. Griffiths, P. E., Pocheville, A., Calcott, B., Stotz, K., Kim, H., & Knight, R. (2015). Measuring causal specificity. Philosophy of Science, 82(4), 529–555.
6. Hitchcock, C., & Sober, E. (2004). Prediction versus accommodation and the risk of overfitting. The British Journal for the Philosophy of Science, 55(1), 1–34.
7. Kincaid, H. (2017). Reductionism. In L. McIntyre & A. Rosenberg (Eds.), The Routledge companion to philosophy of social science (pp. 113–123). New York: Routledge.

3 Ockham’s Proportionality: A Model Selection Criterion …

65

8. Lohse, S. (2016). Pragmatism, ontology, and philosophy of the social sciences in practice. Philosophy of the Social Sciences, 47(1), 3–27.
9. McGinley, W. (2011). Reduction in sociology. Philosophy of the Social Sciences, 42(3), 370–398.
10. Pocheville, A., Griffiths, P. E., & Stotz, K. (2016). Comparing causes: An information-theoretic approach to specificity, proportionality and stability. In H. Leitgeb, I. Niiniluoto, E. Sober, & P. Seppälä (Eds.), Proceedings of the 15th Congress of Logic, Methodology and Philosophy of Science.
11. Putnam, H. (1975). Philosophy and our mental life. In Mind, language, and reality (pp. 291–303). Cambridge: Cambridge University Press.
12. Sober, E. (1999). The multiple realizability argument against reductionism. Philosophy of Science, 66, 542–564.
13. Sober, E. (2002). Instrumentalism, parsimony, and the Akaike framework. Philosophy of Science, 69(S3), S112–S123.
14. Sober, E. (2008). Evidence and evolution. Cambridge: Cambridge University Press.
15. Sober, E. (2015). Ockham's razors: A user's manual. Cambridge: Cambridge University Press.
16. van Fraassen, B. C. (1980). The scientific image. Oxford: Oxford University Press.
17. Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25(3), 287–318.
18. Woodward, J. (2014). A functional account of causation: Or, a defense of the legitimacy of causal thinking by reference to the only standard that matters—usefulness (as opposed to metaphysics or agreement with intuitive judgment). Philosophy of Science, 81(5), 691–713.
19. Woodward, J. (2016). The problem of variable choice. Synthese, 193(4), 1047–1072.
20. Yablo, S. (1992). Mental causation. The Philosophical Review, 101(2), 245–280.
21. Zahle, J. (2003). The individualism-holism debate on intertheoretic reduction and the argument from multiple realization. Philosophy of the Social Sciences, 33(1), 77–99.

Part II

Reproductive Technology and Life

Chapter 4

Enforcing Legislation on Reproductive Medicine with Uncertainty via a Broad Social Consensus

Tetsuya Ishii

Abstract Contemporary techniques in reproductive medicine, such as in vitro fertilization (IVF) and preimplantation genetic testing (PGT), have enabled fertilization, and the selection and transfer of embryos, without temporal and spatial restrictions. These powerful technologies have been used for both medical and non-medical reasons by couples hoping to build a family, and in the process, they have engendered tremendous controversy worldwide. The major points of contention are potential transgressions of religion, health risks to the resultant children, questions about the balance between the reproductive autonomy of parents and the welfare of children, and reluctance to accept genetic unrelatedness in the family system. For countries, it has often been difficult to discuss these points, because individuals in such countries tend to have widely varying views regarding the moral status of human prenatal lives, and they often lack sufficient knowledge of reproductive medicine. However, lessons learned from past debates illuminate an important path for regulating the new reproductive techniques with higher uncertainty. Before enacting legislation around these powerful methods, the people must weigh the likely benefits and inevitable risks to the future children, the prospective parents, and the society at large. Of course, it is challenging for most countries to reach a social consensus on reproductive medicine. However, each country must steadily tread the path to as broad a consensus as possible in order to make a socially responsible decision and empower prospective parents. This, in turn, will lead to effective enforcement of the legislation governing these powerful but nascent reproductive technologies.

Keywords Reproductive medicine · Uncertainty · Legislation · Enforcement · Consensus · Autonomy · Welfare

T. Ishii (B) Office of Health and Safety, Hokkaido University, Sapporo 060-0808, Japan e-mail: [email protected] © Kobe University 2021 T. Matsuda et al. (eds.), Risks and Regulation of New Technologies, Kobe University Monograph Series in Social Science Research, https://doi.org/10.1007/978-981-15-8689-7_4


4.1 Regulating Medicine for Family-Building

All of us were born without consent. Our parents, to some extent, determined the course of our lives. When couples use reproductive medicine, however, they need to carefully consider its clinical, ethical and social implications.

In 1884, a Philadelphia physician performed artificial insemination (AI) for a patient whose husband had no viable sperm. This AI case is believed to be the first instance of reproductive medicine involving a third party. However, the couple was not informed of the fact that the sperm the doctor used had been donated by his student [1]. In 1977, a British woman named Lesley Brown, who had been infertile due to complications of blocked fallopian tubes, underwent an experimental reproductive technique [2]. This was in vitro fertilization (IVF), which enabled fertilization outside of the body and placement of the created embryos back into the uterus. Although experts in the UK were concerned over the risk to resultant children, the world's first IVF resulted in the birth of a healthy girl in 1978 [3]. The Roman Catholic Church, on the other hand, has consistently opposed IVF and other reproductive techniques, based on its convictions that a new human life must begin only through natural means and that human embryos are humans and therefore inviolable [4]. Although the safety of IVF for resultant children is still uncertain, the technique spread in the UK and the rest of the world, assisting the birth of more than 8 million babies thus far [5].

Subsequently, IVF, the genetic testing of human embryos (preimplantation genetic testing: PGT) and other derivatives of these techniques have permitted the fertilization, selection and transfer of embryos to the uterus without restrictions in time and space, collectively establishing the field of contemporary reproductive medicine, while also opening the door to the use of these methods purely for social purposes. Concurrently, such practices have created tremendous controversies worldwide. For example, cryopreserved gametes and IVF embryos have been used even after the death of a partner. Such posthumous reproduction may lead to legal cases in countries without relevant regulations, because the resultant families are not consistent with the typical legal assumption that legitimate children are born of living married couples [6]. IVF involving surrogacy and/or gamete donation from third parties has been used not only by heterosexual couples who have no viable uterus and/or gametes, but also by same-sex couples and by men and women without a partner. Such family-building is also inconsistent with the legal assumptions that both parents are genetically related to their children, and that a legitimate mother gives birth to her child [7]. From ethical and social standpoints, it has also been a thorny issue to weigh guaranteeing the offspring's right to know their origins against protecting the privacy of gamete donors [8]. PGT provides genetic testing of IVF embryos in order to select favorable ones, while excluding unfavorable ones. The most accepted use of PGT is having children free from a certain genetic disease; however, even such medical use is still controversial, primarily in the context of liberal eugenics [9]. In addition, the use of PGT for non-medical, social sex selection has been criticized because of its potential to reinforce discrimination, particularly against women [10].


To respond to these complicated issues, countries have had to enact legislation regarding the appropriate practice of reproductive medicine, while prohibiting some techniques or uses [11]. Currently, at least 34 countries regulate reproductive medicine by law. Fifteen countries, including Japan, still depend on guidelines issued by professional societies or have no legal regulations. Particularly in countries advocating religious freedom, deliberations have progressed with difficulty, because it is uncertain whether these reproductive techniques will affect the health of offspring, who, of course, cannot give their consent, even if their parents can provide theirs. Moreover, it is unclear how clinics and parents will repurpose the reproductive techniques, and how such uses will impact future children and society. Countries that have already enacted domestic law on reproductive medicine confront yet another difficulty. Even once enacted, a domestic law will only be effective in one specific country, and will not pertain to citizens who engage in cross-border reproduction to undergo reproductive techniques that are prohibited in their homeland [12]. Thus, the uncertainty of reproductive medicine has made it difficult to enact relevant legislation.

The present article revisits the lessons learnt primarily from the discussions surrounding IVF and PGT. It then utilizes these lessons to discuss how to enact legislation on upcoming reproductive techniques with higher uncertainty, such as germline genome editing (GGE) [13], which can produce offspring with certain traits through genetic modification. Finally, it underscores the importance of reaching a broad social consensus in order to effectively enforce the regulation of new reproductive techniques in each country.

4.2 Lessons Learnt from Contemporary Reproductive Medicine

There has been tremendous debate surrounding the various uses of reproductive techniques. Throughout this process, several lessons have been learned. Focusing on IVF and PGT, such lessons are herein revisited from the standpoints of clinical medicine, ethics and law, in addition to religion and other teachings.

4.2.1 Roles of Religion

As mentioned in Sect. 4.1, it is a truism that parents, in the process of exercising their reproductive autonomy, determine the course of their children's lives in some way. For this reason, parental consent to reproductive medicine in part supports the legitimacy of reproductive medicine in light of contemporary bioethics. On the other hand, religious beliefs and teachings have played a large role in the manner of human reproduction and family-building. Notably, Roman Catholicism does not accept any form of reproductive medicine; it is the position of the Catholic Church that such artificial interventions at the inception of human life involve inherent dangers and are unacceptable.

Other religious groups have also expressed their views on reproductive medicine [14, 15]. Briefly, most Orthodox Jews and Christians refuse IVF involving third parties because they value blood lineage. Protestants, Anglicans, Coptic Christians, and Sunni Muslims also accept most forms of reproductive medicine, so long as gamete or embryo donation is not involved. Similarly, Confucianism accepts all forms of reproductive medicine that do not involve third parties. Meanwhile, nearly all forms of Judaism, Hinduism, and Buddhism currently accept most forms of reproductive medicine. Except for Roman Catholicism, then, many religious groups largely accept reproductive medicine, while some condemn specific reproductive procedures, particularly those involving third parties.

Some countries have enacted laws pertinent to reproductive medicine based on religious beliefs. For instance, Costa Rica, where Catholicism is the state religion, barred IVF in 2001 after the technique had been allowed for 16 years [16]. Costa Rica thus became the only country in the world that legally prohibited IVF [11]. However, the Inter-American Court adjudged Costa Rica in breach of the American Convention on Human Rights in 2012. After years of legal conflict, the country eventually lifted the prohibition, and in 2017 an infant was born via IVF there [16]. Although religion will impact the permissibility of reproductive medicine in some countries, the ability of religions to prohibit individual reproductive techniques on a worldwide basis appears limited.

4.2.2 People’s Views Towards Human Prenatal Lives In countries that have adopted a policy of politico-religious separation, people have a range of views regarding the moral status of the human embryo and fetus, and these views tend to overlap somewhat with their religious affiliations. It is worth reviewing people’s views towards human prenatal lives, because such views can be explicitly or implicitly reflected in the policy on reproductive medicine. The three major views have been described as the “all,” “none,” and “gradualist” positions [17, 18]. In the “all” position [17], which is inherently found in Roman Catholicism, human embryos already possess full human status. From this position, contemporary reproductive medicine, except AI, is generally unacceptable because its creation and selection of human embryos outside of woman’s body can ultimately harm and waste most of those embryos (and therefore humans) which are not selected. The all position has become dominant in Costa Rica, as described above, but it seems unlikely to prevail in many other countries. By contrast, the “none” position advocates that human embryos or fetuses have no moral status and therefore deserve no special moral concern before birth [17]. It is believed that the none position is essentially


The none position underpins the practice of elective abortion as well as reproductive medicine. The "gradualist" position views human embryos as potential human beings, but not actual humans until birth [17, 18]. It considers that human embryos possess a special status deserving a certain degree of respect, which increases along with their development. Compared with the none position, the gradualist position emphasizes some respect for human prenatal lives, which seems morally favorable to people, particularly in the Western world [18]. Indeed, it appears that many jurisdictions have preferentially adopted the gradualist position, permitting elective abortion for specific reasons and up to a prescribed week of pregnancy [19], in addition to prescribed reproductive techniques [11].

Importantly, the practice of obstetrics and gynecology requires informed consent to the risks and burdens, as well as the benefits, for parents. However, do the consenting parents understand the possible health consequences of medical interventions for the potential children they would render actual human beings? Of course, parents who consider abortion well understand that the intervention takes their fetus's life. But conversely, do parents who intervene medically to advance the lives of their embryos and fetuses sufficiently understand the possible risks to and burdens upon their children?

4.2.3 Uncertainty of Safety for Resultant Children

Reproductive medicine can affect the health of prospective parents. For instance, women who undergo IVF may be affected by the hormone administration used during egg retrieval; however, the risk of the most serious side effect (severe forms of ovarian hyperstimulation syndrome: OHSS) is relatively low (approximately 0.3–1%) [20]. The low risk of OHSS is considered ethically acceptable if the infertile patient has consented to it. When considering the safety of new reproductive techniques, however, the previous discussions surrounding the first IVF in the UK lead us to focus on the health of the resultant children themselves.

In the 1970s, the UK Medical Research Council (MRC) explicitly gave infertility a low priority compared with population control, regarded IVF as an experimental procedure rather than a potential treatment for infertility, and therefore did not support the research with long-term funding [3]. Although the MRC feared fatal abnormalities after the clinical implementation of IVF, the first IVF baby was born healthy in 1978 [2]. Since then, it has been estimated that more than 8 million infants have been born via IVF [5]. Did the MRC make a fatal mistake in judging the IVF research inappropriate for public funding? At least from the vantage of the social environment of the 1970s, its precautionary policy seems appropriate. Consider the worst case, in which the first IVF baby had been born unhealthy. As IVF was an unprecedented procedure that directly intervenes in human reproduction and could impact resultant offspring systemically, such an event would have laid the MRC open to censure for supporting the medical research with public funds. Although the MRC did not grant long-term funding to IVF research, it constructively advised the investigators to reduce the uncertainty of IVF through experiments using monkeys, which are more similar to humans than other animals [3].


More importantly, there was no consensus as to how worst-case IVF outcomes should be handled, either in the UK or globally. While the WMA Declaration of Helsinki had already been issued in 1964 [21], its ethical principles have remained implicit regarding how the risks and benefits for unborn subjects (children), as well as for living subjects (prospective parents), should be weighed in reproductive studies.

According to statistics, IVF currently appears to be largely safe for resultant children, although babies conceived via IVF have slightly higher rates of some malformations, including cleft lip and palate, gastrointestinal malformations, and heart conditions, than babies born without IVF [22]. In addition to such malformations, several reports have suggested that children born via IVF are at increased risk of growth disorders, such as Angelman syndrome and Beckwith–Wiedemann syndrome [23]. Because such disorders arise through epigenetic mechanisms that act at the interface of genetic and environmental factors, the suggested association seems plausible in the context of IVF, where eggs and embryos are exposed to culture medium in a petri dish. However, such reports have some defects in terms of sample size and analysis [22, 23]. The absolute risk of such epigenetic disorders may be low, but it remains likely that epigenetic disorders are associated with IVF to some degree.

Consider also the safety of PGT, in which the genetic selection of IVF embryos is carried out after the removal of some cells from the embryos (embryo biopsy). The first use of PGT in a birth was reported in 1992; the procedure helped the parents deliver a normal girl free from cystic fibrosis [24]. However, a comprehensive assessment of the risks of embryo biopsy is still lacking [25]. Namely, the results reported from the few available epidemiological studies are controversial and/or limited to normalcy at birth or in early childhood. On the other hand, studies on animals have shown that embryo biopsy can be a risk factor for impaired development during both pre- and postnatal life [25], as well illustrated by a recent animal study [26]. Together, the currently available data suggest that there are generally no immediate, serious risks of IVF and PGT; however, these data represent only a fraction of the safety evidence needed for resultant children.

Ideally, the safety of a reproductive technique would be assessed through follow-up of the resultant children. However, such follow-up has not been actively pursued in reproductive medicine [27]. First, the endpoint in infertility treatment has been live birth or pregnancy, with follow-up of children mostly considered outside the scope of investigation. Second, follow-up in reproductive medicine may require decades-long monitoring of the resultant children; such prolonged investigation could impose a major burden on the children throughout their lives, in addition to a high cost for the reproductive investigators. Third, parents who once consented to follow-up of their children may later withdraw their consent due to the considerable burdens and costs of the required travel and medical examinations.

Why, then, have reproductive techniques of comprehensively unproven safety been repeatedly performed at fertility clinics?
One possible answer is that fertility professionals offer reproductive techniques without sufficiently informing couples of the potential risks for resultant children; another is that couples, exercising their reproductive autonomy, consent to reproductive medicine
without fully understanding such risks to their embryos, fetuses and future children, who cannot give consent. In addition, countries that largely allow prospective parents to exercise reproductive autonomy probably tend to overlook the lack of safety evidence of reproductive techniques for future children [11]. In short, the problem is how the risks of a reproductive technique for future children are perceived by physicians, parents, policy-makers and others in a given country.

4.2.4 The Reproductive Autonomy of Parents and Welfare of Children

We were all born without consent. If we have the opportunity, we, in turn, exercise reproductive autonomy: remaining childless or building a family. When people wish to build a family but are infertile, they have three options: give up the idea of having a child, adopt a child, or go to a fertility clinic. In developed countries, many infertile people first go to a fertility clinic. It is natural that people wish to participate in the cycle of conception, childbirth and nurturing, since most people accept the fact that they were born of and raised by their parents. Although adoption also provides infertile people the opportunity to nurture a child, it cannot offer the experience of childbirth. Moreover, reproduction is, fundamentally, a major activity for organisms. Thus, reproductive medicine plays major social and biological roles in modern society and has spread particularly in developed countries facing a rising average age at first birth. Additionally, because reproductive medicine can attain fertilization and the selection and transfer of embryos beyond the constraints of time and space, prospective parents can use these procedures to build a family not only for medical reasons but also for merely social reasons. Although reproduction is a form of social activity, is it appropriate to let prospective parents use any powerful reproductive technique as they wish? Does such use always fall under the exercise of reproductive autonomy? The answer varies according to the reproductive techniques in question, as well as the countries and specific applications.

4.2.4.1 Posthumous Reproduction

Health care systems in some countries, including Japan, do not fully cover the expenses of reproductive medicine, such as IVF, whereas others promote its use by prospective parents by covering all the expenses. This diversity is well observed in Europe [28]. Moreover, it is worth mentioning a country largely loyal to Judaism's biblical commandment to "be fruitful and multiply": Israel entitles every Israeli woman aged 18–45, irrespective of her family status or sexual orientation, to unlimited, fully funded reproductive treatment up to the birth of two live children [29]. In a jurisdiction which assumes that legitimate children are those born of living married couples, posthumous reproduction using AI or IVF can potentially affect the
child's citizenship and legal rights, inheritance, and order of succession [6]. In countries where a sperm donor is not considered the legal father of a child born via posthumous reproduction, the rules have the effect of excluding the deceased husband from fatherhood and rendering the child legally fatherless [30]. Countries with no relevant legislation also render such children legally fatherless, because marital status ends when one partner dies. Therefore, some countries, such as France, Germany, Sweden and Canada, bar posthumous reproduction [31]. In contrast, some countries conditionally permit posthumous reproduction. Again in Israel, the 2003 regulations issued by the Attorney General outline a two-step procedure for posthumous reproduction for deceased men: sperm retrieval from a dying or deceased man at his female partner's request, and authorization of AI or IVF using the sperm on a case-by-case basis, taking into consideration the deceased man's dignity and "presumed wishes": a man who lived in a loving relationship with a woman would wish her to conceive his offspring even after his death [32]. The procedure appears to respect the reproductive autonomy of the woman and her deceased male partner. Thus, Israel, as a pronatalist country, endorses the legitimacy of the posthumous use of a deceased man's sperm. However, the health risks to children born from the reproductive use of gametes taken from cadavers, including anomalies, remain uncertain [33]. Moreover, the procedure based on the 2003 regulations actually proceeds with the consent of only the living female partner. It is axiomatic that a deceased man cannot understand the risks and burdens to the bereaved family, or the benefits of fulfilling his "presumed wishes"; however, the aforementioned biblical commandment of Judaism theoretically fills the gap between the reproductive use of sperm taken from a deceased husband and the lack of his informed consent while alive. In some countries, civil actions regarding posthumous reproduction have been brought to claim parenthood between a deceased person and a child born using her or his gametes. Thus, such use of gametes or embryos beyond time and space will continue to pose the question of whether or not it is a legitimate exercise of parents' reproductive autonomy in countries without a national policy regarding reproductive medicine, including Japan [11].

4.2.4.2 Preimplantation Genetic Testing (PGT)

The world's first PGT was reported in 1992 from the UK. The first case was intended to prevent the birth of children with a fatal autosomal recessive disease, cystic fibrosis, through the selection of embryos using genetic testing of biopsied embryonic cells [24]. This goal can also be attained through abortion after prenatal testing; however, aborting "future children" may be traumatizing for couples who hold the gradualist position. In contrast, PGT does not involve abortion, as embryos are selected before implantation. Subsequently, PGT to select human embryos free from a pathogenic mutation while discarding those with the mutation (called PGT-M, for monogenic/single gene defects) has spread worldwide, because such parental use can improve the welfare of the resultant children. At the same time, the clinical role of PGT has expanded [11]. For instance, PGT has been used
to screen in euploid embryos, while screening out aneuploid ones, in order to increase the implantation rate and/or avoid miscarriage (an approach known as preimplantation genetic screening, PGS, or preimplantation genetic testing for aneuploidy, PGT-A), despite its controversial efficacy. In the USA and other countries, PGT has also been used purely for social sex selection, to attain a better ratio of boys and girls in a family or to have children of a parent-preferred gender [34]. However, critics have suggested that PGT-M and PGT-A are a vehicle for eugenics more powerful than any of their predecessors, because many parents will find embryo selection more acceptable than abortion after prenatal testing [9]. The use of PGT for social sex selection has also raised concerns, such as the potential reinforcement of sex discrimination, particularly against women, in some countries, such as China and India [10]. Thus, PGT has recurrently been controversial in terms of whether the genetic selection of human embryos is socially acceptable, or whether it is acceptable only in specific contexts. Notably, four neighboring countries in Western Europe, Germany, Austria, Italy, and Switzerland, prohibited all uses of PGT for years [11]. Roman Catholicism, the predominant religion in these countries, and its believers might have influenced this prohibitive policy [11]. However, Germany legally permitted PGT-M by enacting the Law Regulating Preimplantation Genetic Diagnosis in 2011. In 2015, Austria amended the 1992 Law on Reproductive Medicine to permit the exceptional use of PGT in cases of serious conditions. In Italy, the Constitutional Court ruled in 2015 that couples who are carriers of genetic diseases have the right to undergo PGT-M. Additionally, Switzerland, which had prohibited PGT under the Reproductive Medicine Act of 1998, recently held referendums and then permitted PGT-M and PGT-A in 2017. The recently increased accessibility of PGT for medical purposes implies that Western Europe is newly emphasizing the reproductive autonomy of parents when its exercise can improve the welfare of resultant children. Asian countries, on the other hand, have seen different legal changes regarding the use of PGT. In Thailand, PGT for social sex selection, as well as for avoiding the birth of children with genetic disease, had been offered to domestic and foreign couples, raising concerns internationally [35]. Consequently, Thailand legally banned the use of PGT for social sex selection by enacting the Protection of Children Born from Assisted Reproductive Technology Act of 2015. Meanwhile, Japan has no national regulation pertinent to PGT, instead depending on guidelines set by the Japan Society of Obstetrics and Gynecology (JSOG) [11]. For ethical and social reasons, some countries explicitly consider PGT permissible for certain medical reasons while rendering its use for social sex selection illicit. However, the USA, Japan and other countries have no regulation on the practice of PGT, which leaves the matter entirely up to prospective parents and fertility professionals. Given that embryo biopsy for PGT might impose some health risks upon the resultant children, and that the widespread use of PGT could impact society in the near or remote future, should people forego the use of PGT in the absence of a compelling reason?

4.2.5 Genetic Unrelatedness in the Family System

Not only heterosexual couples who have no viable uterus and/or gametes but also same-sex couples and men and women without a partner have used IVF involving surrogacy and/or gamete donation from third parties. Obviously, such reproduction involving a third party results in partly or completely genetically unrelated families. However, genetic relatedness is not necessarily important for building families. Indeed, many countries legitimize various adoption systems, through which the resultant family members are not genetically related to each other at all. The acceptance of adoption suggests that reproduction involving a third party is fundamentally worth consideration. In the USA, reproduction involving gamete donation is liberally performed to satisfy various reproductive needs and is generally allowed at the federal level [11]. However, as mentioned in the first section, the power of contemporary reproductive medicine, such as IVF, raises greater ethical and social issues relevant to reproduction involving a third party than conventional procedures, such as surrogacy and AI, do. Below, I focus on reproduction using IVF involving gamete donation from a third party (termed donor conception). Again, such families are inconsistent with the legal assumption that both legitimate parents are genetically related to their children. It has also been asserted that the privacy of gamete donors should be protected in order to appropriately enforce laws regarding donor conception [36, 37]. Despite their limitations, social studies on offspring born via donor conception have underscored the importance of securing a child's right to know his or her genetic parents [8, 38].

Some countries have not accepted donor conception. Turkey prohibited all reproductive medicine using donor gametes by enacting the Legislation Concerning Assisted Reproductive Treatment Practices and Centres in 2010 [11]. The Turkish law prohibits physicians from performing donor conception, and also prohibits citizens from going abroad to seek it. Sunni Muslims are the dominant population in Turkey [39], and, as mentioned in Sect. 4.2.1, this country takes the strictest policy against reproduction using donor gametes. However, these cases are fairly extreme examples and can offer other countries little guidance.

Countries that permit donor conception typically regulate it from three standpoints: the intended parents, the gamete donors, and the resultant children. Typically, civil law explicitly stipulates parental authority in reproduction involving a third party: the woman who delivers the child is deemed the legitimate mother if statutory requirements are satisfied, and likewise the husband is deemed the legitimate father if he acknowledges the resultant child as his own. On the other hand, gamete donors have no obligation to foster the resultant children if they satisfy legal requirements, such as the renunciation of parental authority. Moreover, to safeguard gamete donors from health problems after egg retrieval, such as ovarian hyperstimulation syndrome, and to avoid consanguineous marriage between people born from an identical donor, the maximum number of gamete donations is regulated. Furthermore, such permissive countries are divided into two different groups with respect
to the offspring's right to know their genetic parents and the protection of donors' privacy. Countries such as Sweden, Austria, Switzerland, the Netherlands, Norway, the UK, New Zealand, Germany, Ireland, and Finland, as well as the state of Victoria in Australia, legally guarantee the right of donor-conceived offspring to access donor information [11]. This social system is intended to help such children establish their identity and to foster a stable parent-child relationship. However, running such a system is costly, since extra public funding is needed to register donors and provide consultancy for donor-conceived children. By contrast, some countries prioritize the protection of gamete donors over donor-conceived children. For example, Canada, the Czech Republic, Israel, India and Taiwan permit the reimbursement of necessary costs such as travel expenses and/or wage loss, even though the gamete donation is understood to be altruistic [11]. Importantly, in France, Spain, Denmark, the Czech Republic and Israel, gamete donors must remain anonymous to the donor-conceived offspring as well as the intended parents, which prevents the offspring from readily knowing their genetic parents (the gamete donors) [11]. However, donor-conceived children may suffer psychologically if they accidentally learn that they have genetic parents unknown to them, and that they are genetically unrelated to their other family members. Of particular note, France, where Roman Catholicism is dominant, has legally prohibited same-sex couples and men and women without a partner from accessing reproductive medicine, including donor conception, because the 2013 Bioethics Law stipulates that donor conception is allowed only for heterosexual couples with medical problems such as infertility or the risk of transmitting genetic disorders. However, this policy is now under review in the Parliament, potentially leading to a more relaxed policy that allows single women and lesbian couples to use IVF with donor gametes [40]. Recent studies have suggested that the spread of direct-to-consumer (DTC) genetic testing will likely increase accidental disclosures of donor conception [41].

Another movement should also be noted. In 2017, the state of Victoria in Australia acknowledged that donor-conceived people have the legal right to know details about their genetic parents, even if the gamete donation was made anonymously or the donor did not consent to being identified [42]. For countries permissive of donor conception, is it time to shift from a policy respecting donor anonymity to one securing the right of offspring to know their origins? If such countries understand the importance of a child's right to know his or her origins and agree to the necessary social costs, they may change their policies. However, simply enacting a policy that promotes the right of children to know their origins may not contribute to the welfare of donor-conceived people. In Sweden, which prohibited anonymous sperm donation in 1985 to secure children's right to know their genetic origins, only 34 of 700 donor-conceived adolescents requested donor-identifiable information [43]. This may be because they felt no need to do so, and/or because their parents had not yet told them of the donor conception [44]. The latter possibility raises an important question: do parents intend to inform their offspring of the fact of donor conception? If not, this becomes a problem of enforcing the policy on donor conception, not of enacting it.

A society must educate prospective parents so that they can understand the implications of regulation on donor conception and act responsibly toward their donor-conceived offspring.

This chapter has revisited past discussions surrounding IVF and PGT and reviewed the major points, including religious transgression, the health of resultant children, the weighing of parents' reproductive autonomy against children's welfare, and genetic unrelatedness in the family system. Although it remains uncertain whether reproductive interventions affect the health of offspring, follow-up of the resultant offspring is not frequently considered. In addition, it has generally been difficult to reach a consensus as to whether a reproductive technique, or a certain use of it, is acceptable. However, the lessons learned from past discussions illuminate an important path toward the regulation of imminent reproductive technologies.

4.3 Response to Upcoming Reproductive Medicine

More powerful reproductive techniques with higher uncertainty will likely arrive in most countries in the coming years. Countries must respond to this development through social deliberation. However, the path will not be smooth.

4.3.1 Powerful Reproductive Techniques with Higher Uncertainty

Different uses of preexisting techniques can bring about higher uncertainty. For instance, a private-sector enterprise in the USA plans to provide PGT to predict the intelligence of future children. The planned use currently intends only to screen out embryos deemed likely to become children with mental disability [45]. Given that genome sequencing is becoming more advanced and available at lower cost, this application could develop into PGT for polygenic traits and could be used to identify embryos likely to become children with a high IQ. However, no genetic test is perfect. If such PGT fails, how will parents treat the child who is born? Should the government financially support such use of PGT, for the sake of society as well as of parents?

Over the past two decades, clinics in some countries have attempted germline (germ cells and embryos) genetic modification, mainly to treat intractable infertility. That is, they clinically performed several reproductive techniques involving cytoplasmic or nuclear transfer in order to modify the composition of the mitochondrial genome (mtDNA) of human eggs or zygotes using donor eggs [46]. Although some of these cases led to live births, others resulted in miscarriages, unintended effects on pregnancies, or the development of disorders in offspring. Thus, some countries have legally prohibited such germline genetic modification [46]. However, in 2015,
the UK legalized two types of cytoplasmic replacement using nuclear transfer to exclude most mutated mtDNA in eggs or zygotes (mitochondrial donation), in order to allow prospective parents to have genetically related children free from serious mitochondrial disease [47]. More recently, genetic modification technologies using bacterial DNA-cutting enzymes, collectively called genome editing, have spread worldwide [48]. Genome editing capable of modifying a nuclear gene has been used in the human germline, and modified embryos were transferred into the uteruses of recruited Chinese women [49]. This reproduction study produced twin girls carrying a modified gene that may confer resistance to HIV infection. At the same time, tremendous concerns were raised over the clinically and socially higher uncertainty of heritable germline genome editing in China, leading to the convening of a WHO panel for international regulatory harmonization [50]. However, it also seems almost inevitable that clinics in countries with lax regulation will provide germline genome editing for foreign couples who wish to have offspring with a desirable trait, as illustrated by the reproductive tourism for sex-selective PGT in Thailand [35]. When that happens, how should countries respond?

4.3.2 Enacting Legislation

People have to discuss whether or not such increasingly powerful reproductive techniques are acceptable for a society. Some may consider that a reproductive technique can provide benefits for parents, future children, and society under appropriate regulation, if the risks and burdens are minimized to a clinically applicable level [51, 52]. Others may judge that such a reproductive technique itself, or a specific use of it, can harm future children, their parents, and ultimately society, and that such techniques and uses should be prohibited [53, 54]. In any case, a country must enact regulation pertinent to a powerful reproductive technique that may substantially change society for better or worse. Importantly, any such regulation must address the whole nation, not only reproductive professionals. There are several considerations here. First, the regulation has to restrict reproduction, which is a fundamental right of each person. Second, regulation on reproduction can impact the welfare of future individuals. Third, even if rogue physicians offer a forbidden reproductive technique or use, prospective parents who understand the regulation can decline such services and avoid participating in dubious reproductive tourism; this consideration is underscored by the germline genome editing (GGE) case in China, in which the clinical practice had been prohibited only by guidelines addressed to physicians [13, 49]. Fourth, reproduction is a fundamental basis for society, and thus its regulation must be made through national, enforceable rules. Therefore, the regulations should be enacted as legislation based on a national consensus with regard to a reproductive technique itself and/or its specific uses. However, in countries without a state religion, or where the dominant religion provides no substantial guidance related to reproduction, people have different views
regarding the moral status of human prenatal life, and often lack sufficient knowledge regarding reproduction. Given this situation, it is in general difficult to reach a consensus regarding a powerful reproductive technique that may impact future people and bring higher uncertainty into the future society. It has often taken countries years or longer to enact relevant legislation. Notably, Japan, where approximately 60% of people have no religious affiliation and the rest largely follow Buddhism, which accepts most forms of reproductive medicine [55], has not yet enacted legislation pertinent to reproductive medicine, but rather has depended on guidelines by professional societies, such as JSOG [11]. Under such circumstances, the uncontrolled use of powerful reproductive techniques might cause ethical and social issues.

4.3.3 Reaching Broad Consensus While Applying Lessons

Again, the path is not smooth for many countries. However, we can apply the lessons learned from previous discussions regarding IVF and PGT to tread a path toward a consensus regarding a given reproductive technique. To make a responsible decision for future children, who necessarily cannot consent to the use of reproductive medicine in their own creation, we must comprehensively discuss the potential risks and burdens, as well as the likely benefits, of a given reproductive technique for the resultant children. In so doing, the available preclinical data should be carefully assessed and used to minimize the risks. If the assessment cannot support a clinical application, the technique must be abandoned in order to safeguard the health of the resultant children. Even if the assessment supports clinical use for a compelling reproductive need, the resultant offspring should, with parental consent, be followed up at least in the early stage, provided that the resultant children assent to such follow-up and its burdens, with a clear understanding of the period and frequency of hospital visits and the necessary medical examinations, and provided that this regimen is acceptable to the families [56]. Moreover, as the situation of donor conception in Sweden suggests, it is crucial that any such discussion address the need for parents to inform the resultant children that they were born via the reproductive technique in question.

We must carefully weigh the reproductive autonomy of parents against the welfare of their future children. A powerful reproductive technique may be versatile: consider PGT and germline genome editing, which can theoretically be used for many social as well as medical purposes [13]. It is important to judge which of the possible uses are most likely and appropriate at the present time, while considering the medical evidence and the other reproductive options, such as donor conception, available in a given country. With regard to GGE, the most likely and appropriate uses include the prenatal prevention of genetic disease in cases to which PGT is not applicable [51, 52]. However, just because a particular use of a reproductive technique is plausible does not mean that its practice at fertility clinics is justified. Although reproductive autonomy must be respected for each person, its exercise presupposes that the practice of a
reproductive technique does not harm the resultant children and does not impair their physical, mental and social well-being [57]. It is essential to systematically assess the possible scenarios that may play out in a child's life following the use of a particular reproductive technique. Although it is highly challenging, each country must steadily tread a path to as broad a consensus as possible [58]. This recommendation does not simply emphasize a democratic approach: legislation enacted with a broad consensus is much more likely to be effectively enforced. Consider the recent spread of reproductive tourism. Because legislation is effective only within the borders of the country that enacts it, there is no easy way to prevent reckless couples from going abroad to seek a reproductive technique that is forbidden in their homeland. However, if the legislation is enacted with a broad social consensus, couples who understand the spirit of the law will decline to participate in dubious reproductive tourism. To make responsible decisions and to empower prospective parents, each country should go forward along a path of broad consensus to the greatest extent possible.

4.4 Concluding Remarks

The power of contemporary reproductive medicine has provided couples with tools that can be used for both medical and non-medical reasons when building a family, and it has engendered tremendous debate in the process. Such controversy seems natural enough: the various uses of reproductive medicine reflect the reproductive needs of people with widely ranging views regarding prenatal human life. Soon, even more powerful reproductive techniques will arrive before us. It is hoped that the lessons learned from past debates will illuminate a regulatory path forward, so that these reproductive techniques can be appropriately applied or forbidden. We will need to carefully weigh the likely benefits and inevitable risks for future children and society, as well as for prospective parents. Then, we must steadily tread a path of broad consensus in order to make socially responsible decisions and empower prospective parents in each country.

References

1. Yuko, E. (2016). The first artificial insemination was an ethical nightmare. The Atlantic. Retrieved October 6, 2017, from https://www.theatlantic.com/health/archive/2016/01/first-artificial-insemination/423198/.
2. Steptoe, P. C., & Edwards, R. G. (1978). Birth after the reimplantation of a human embryo. Lancet, 2(8085), 366.
3. Johnson, M. H., et al. (2010). Why the Medical Research Council refused Robert Edwards and Patrick Steptoe support for research on human conception in 1971. Human Reproduction, 25(9), 2157–2174.
4. Benagiano, G., et al. (2011). Robert G Edwards and the Roman Catholic Church. Reproductive Biomedicine Online, 22(7), 665–672.
5. Norcross, S. (2018, July 3). Eight million ART babies and counting. BioNews. Retrieved October 23, 2019, from https://www.bionews.org.uk/page_136862.
6. Bahadur, G. (2002). Death and conception. Human Reproduction, 17(10), 2769–2775.
7. Thompson, C. (2016). IVF global histories, USA: Between Rock and a marketplace. Reproductive Biomedicine & Society Online, 2, 128–135.
8. Hertz, R., et al. (2013). Donor conceived offspring conceive of the donor: The relevance of age, awareness, and family form. Social Science and Medicine, 86, 52–65.
9. Fasouliotis, S. J., & Schenker, J. G. (1998). Preimplantation genetic diagnosis principles and ethics. Human Reproduction, 13(8), 2238–2245.
10. World Health Organization (WHO). (2017). Gender and genetics: Sex selection and discrimination. Retrieved October 6, 2017, from https://www.who.int/genomics/gender/en/index4.html.
11. Ishii, T. (2018). Global changes in the regulation of reproductive medicine. In M. K. Skinner (Ed.), Encyclopedia of reproduction (2nd ed., pp. 380–386). Academic Press.
12. Jackson, E., et al. (2017). Learning from cross-border reproduction. Medical Law Review, 25(1), 23–46.
13. Ishii, T. (2017a). Germ line genome editing in clinics: The approaches, objectives and global society. Briefings in Functional Genomics, 16(1), 46–56.
14. Schenker, J. G. (2005). Assisted reproductive practice: Religious perspectives. Reproductive Biomedicine Online, 10(3), 310–319.
15. Sallam, H. N., & Sallam, N. H. (2016). Religious aspects of assisted reproduction. Facts, Views & Vision in ObGyn, 8(1), 33–48.
16. Valerio, C., et al. (2017). IVF in Costa Rica. JBRA Assisted Reproduction, 21(4), 366–369.
17. Tsai, D. F. (2005). Human embryonic stem cell research debates: A Confucian argument. Journal of Medical Ethics, 31(11), 635–640.
18. Birkhauser, M. (2013). Ethical issues in human reproduction: Protestant perspectives in the light of European Protestant and Reformed Churches. Gynecological Endocrinology, 29(11), 955–959.
19. Berer, M. (2017). Abortion law and policy around the world: In search of decriminalization. Health and Human Rights, 19(1), 13–27.
20. Binder, H., et al. (2007). Update on ovarian hyperstimulation syndrome: Part 1. Incidence and pathogenesis. International Journal of Fertility and Women's Medicine, 52(1), 11–26.
21. The World Medical Association (WMA). (1964). The Declaration of Helsinki: Ethical principles for medical research involving human subjects. Retrieved October 23, 2018, from https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/.
22. Greely, H. T. (2016). The end of sex and the future of human reproduction. Harvard University Press.
23. Fauser, B. C., et al. (2014). Health outcomes of children born after IVF/ICSI: A review of current expert opinion and literature. Reproductive Biomedicine Online, 28(2), 162–182.
24. Handyside, A. H., et al. (1992). Birth of a normal girl after in vitro fertilization and preimplantation diagnostic testing for cystic fibrosis. New England Journal of Medicine, 327(13), 905–909.
25. Zacchini, F., et al. (2017). Embryo biopsy and development: The known and the unknown. Reproduction, 154(5), R143–R148.
26. Sampino, S., et al. (2014). Effects of blastomere biopsy on post-natal growth and behavior in mice. Human Reproduction, 29(9), 1875–1883.
27. Barbuscia, A., et al. (2019). The psychosocial health of children born after medically assisted reproduction: Evidence from the UK Millennium Cohort Study. SSM Population Health, 7, 100355.
28. Berg Brigham, K., et al. (2013). The diversity of regulation and public financing of IVF in Europe and its impact on utilization. Human Reproduction, 28(3), 666–675.
29. Birenbaum-Carmeli, D. (2016). Thirty-five years of assisted reproductive technologies in Israel. Reproductive Biomedicine & Society Online, 2, 16–23.
30. Law Reform Commission of New South Wales. (1986). Report 49: Artificial conception: Human artificial insemination (AIH and posthumous use of semen), 12.
31. Tremellen, K., & Savulescu, J. (2015). A discussion supporting presumed consent for posthumous sperm procurement and conception. Reproductive Biomedicine Online, 30(1), 6–13.
32. Hashiloni-Dolev, Y., & Schicktanz, S. (2017). A cross-cultural analysis of posthumous reproduction: The significance of the gender and margins-of-life perspectives. Reproductive Biomedicine & Society Online, 4, 21–32.
33. Batzer, F. R., et al. (2003). Postmortem parenthood and the need for a protocol with posthumous sperm procurement. Fertility and Sterility, 79(6), 1263–1269.
34. Bayefsky, M. J. (2016). Comparative preimplantation genetic diagnosis policy in Europe and the USA and its implications for reproductive tourism. Reproductive Biomedicine & Society Online, 3, 41–47.
35. Whittaker, A. M. (2011). Reproduction opportunists in the new global sex trade: PGD and non-medical sex selection. Reproductive Biomedicine Online, 23(5), 609–617.
36. Hallich, O. (2017). Sperm donation and the right to privacy. New Bioethics, 23(2), 107–120.
37. Kalampalikis, N., et al. (2018). Sperm donor regulation and disclosure intentions: Results from a nationwide multi-centre study in France. Reproductive Biomedicine & Society Online, 5, 38–45.
38. Bos, H., et al. (2019). Self-esteem and problem behavior in Dutch adolescents conceived through sperm donation in planned lesbian parent families. Journal of Lesbian Studies, 1–15.
39. Gürtin, Z. B. (2016). Patriarchal pronatalism: Islam, secularism and the conjugal confines of Turkey's IVF boom. Reproductive Biomedicine & Society Online, 2(Supplement C), 39–46.
40. Corbet, S., & Gaschka, C. (2019, October 16). France OKs bill legalizing IVF for lesbians, single women. The Washington Post. Retrieved October 24, 2019, from https://www.washingtonpost.com/world/europe/french-lawmakers-to-vote-on-giving-ivf-to-lesbians-singles/2019/10/15/8fbb839a-ef4d-11e9-bb7e-d2026ee0c199_story.html.
41. Harper, J. C., et al. (2016). The end of donor anonymity: How genetic testing is likely to drive anonymous gamete donation out of business. Human Reproduction, 31(6), 1135–1140.
42. Symons, X. (2017, March 4). Victoria's controversial donor anonymity laws come into effect. BioEdge. Retrieved October 24, 2018, from https://www.bioedge.org/bioethics/victorias-controversial-donor-anonymity-laws-come-into-effect/12210.
43. Iona Institute for Religion and Society. (2019). Push to give more rights to donor-conceived children. Retrieved October 24, 2018, from https://ionainstitute.ie/news-roundup/push-to-give-more-rights-to-donor-conceived-children/.
44. Isaksson, S., et al. (2011). Two decades after legislation on identifiable donors in Sweden: Are recipient couples ready to be open about using gamete donation? Human Reproduction, 26(4), 853–860.
45. Wilson, C. (2018). A new test can predict IVF embryos' risk of having a low IQ. New Scientist. Retrieved October 24, 2018, from https://www.newscientist.com/article/mg24032041-900-exclusive-a-new-test-can-predict-ivf-embryos-risk-of-having-a-low-iq/#ixzz63FwBCkxo.
46. Ishii, T., & Hibino, Y. (2018). Mitochondrial manipulation in fertility clinics: Regulation and responsibility. Reproductive Biomedicine & Society Online.
47. Dimond, R., & Stephens, N. (2018). Legalising mitochondrial donation. Palgrave Pivot.
48. Barrangou, R., & Doudna, J. A. (2016). Applications of CRISPR technologies in research and beyond. Nature Biotechnology, 34(9), 933–941.
49. He, J. (2018). CCR5 gene editing in mouse, monkey and human embryos using CRISPR/Cas9. In Session 3: Human Embryo Editing, Second International Summit on Human Genome Editing. Retrieved January 21, 2019, from https://nationalacademies.org/gene-editing/2nd_summit/second_day/index.htm.
50. World Health Organization (WHO). (2019, March 19). WHO expert panel paves way for strong international governance on human genome editing. Retrieved October 24, 2019, from https://www.who.int/news-room/detail/19-03-2019-who-expert-panel-paves-way-for-strong-international-governance-on-human-genome-editing.
51. National Academies of Sciences, Engineering, and Medicine (NASEM). (2017). Human genome editing: Science, ethics, and governance. The National Academies Press. https://www.nap.edu/catalog/24623/human-genome-editing-science-ethics-and-governance.
52. Nuffield Council on Bioethics (NCB). (2018). Genome editing and human reproduction: Social and ethical issues. Retrieved January 21, 2019, from https://nuffieldbioethics.org/project/genome-editing-human-reproduction.
53. Hildt, E. (2016). Human germline interventions: Think first. Frontiers in Genetics, 7(81).
54. Rehmann-Sutter, C. (2018). Why human germline editing is more problematic than selecting between embryos: Ethically considering intergenerational relationships. New Bioethics, 24(1), 9–25.
55. Pew Research Center. (2010). The future of world religions: Japan. Retrieved October 24, 2018, from https://www.globalreligiousfutures.org/countries/japan#/?affiliations_religion_id=0&affiliations_year=2010&region_name=All%20Countries&restrictions_year=2016.
56. Ishii, T. (2019). Should long-term follow-up post-mitochondrial replacement be left up to physicians, parents, or offspring? New Bioethics, 1–14.
57. Ishii, T. (2017b). The ethics of creating genetically modified children using genome editing. Current Opinion in Endocrinology, Diabetes, and Obesity, 24(6), 418–423.
58. Baylis, F. (2017). Human germline genome editing and broad societal consensus. Nature Human Behaviour, 1(6), 0103.

Chapter 5

Gene Editing Babies in China: From the Perspective of Responsible Research and Innovation

Ping Yan and Xin Kang

Abstract On Nov. 26, 2018, He Jiankui, an associate professor at Southern University of Science and Technology, announced that two gene editing babies named Lulu and Nana had been born healthy in China, claiming that one of their genes had been modified to make them naturally resistant to HIV/AIDS after birth. This paper first reviews the development of this event, then sorts out the concerns and attitudes of the different stakeholders, such as the government, the scientific community, the medical community, ethicists, the news media and the public, and makes a preliminary comparison and analysis of these attitudes. Responsible research and innovation (RRI) is a concept recently developed by the European Union concerning scientific research and technological innovation. It emphasizes transforming different values into functional design embedded in the design phase of research and innovation, and it emphasizes stakeholder engagement. From the perspective of RRI, China's Gene Editing Babies event can be examined from two aspects. One is the event itself: analyzing it with the "four-dimensional" framework of RRI leads to the preliminary conclusion that this "scientific research" itself does not conform to the concept of RRI. The other is the process by which the government and society handled the event: stakeholders in China remained actively involved in this process and, although they held different positions, their joint participation moved the event in a responsible direction. In this sense, China's handling of the event meets the requirements of responsible research and innovation.

Keywords Gene editing babies · China · Overview · Stakeholders · RRI · "Four dimensional" framework

P. Yan (B)
Dalian University of Technology, No. 2 Linggong Road, Dalian 116024, China
e-mail: [email protected]

X. Kang
Zhongshan Hospital of Dalian University, No. 6 Jiefang Street, Dalian 116001, China
e-mail: [email protected]

© Kobe University 2021
T. Matsuda et al. (eds.), Risks and Regulation of New Technologies, Kobe University Monograph Series in Social Science Research, https://doi.org/10.1007/978-981-15-8689-7_5


5.1 Overview of Gene Editing Babies Event in China

On November 27, 2018, the news broke with a report titled "The World's First Gene Editing Baby for HIV/AIDS was Born in China", published by the Shenzhen channel of People's Daily Online. "A pair of gene-editing babies named Lulu and Nana were born healthy in China in November," it said. One of the twins' genes had been modified to make them naturally resistant to AIDS at birth. "This is the world's first gene editing baby immunized against HIV/AIDS, which also means that China has achieved a historic breakthrough in the application of gene editing technology in the field of disease prevention" [21].

On November 27, the Guangdong Provincial Health Commission announced that Guangdong and Shenzhen had set up a joint investigation team to carry out a comprehensive investigation into the "Shenzhen gene-editing baby incident" [36]. On the evening of November 27, 2018, the introduction displayed outside He Jiankui's office at Southern University of Science and Technology was found to have been removed, and a notice reading "Enter at your own risk", bearing the university stamp, was posted outside the office [22].

At 12:50 on November 28, He Jiankui appeared at the Second International Summit on Human Genome Editing, held at the University of Hong Kong, gave a presentation titled "CCR5 Gene Editing of Mouse, Monkey and Human Embryos Using CRISPR/Cas9 Technology", and shared the experimental data of the research. During the presentation, he disclosed that "Lulu and Nana have been born healthy" and that "the results are in line with expectations". His presentation lasted about 18 minutes [23]. The following is an excerpt from his presentation at the Summit: "Thank you very much. First of all, I have to apologize. Because the confidentiality of the experiment was not well kept, my experimental results were leaked, so I have to share the data with you on this occasion. Two days ago, before this meeting started, this topic became very popular. This research has already been submitted. I am very grateful to the entire ethics committee for its supervision. Our entire team has made efforts and reached a series of conclusions on the research results. I also want to thank my university; my university did not know about my experiment at all. I also thank you for giving me this opportunity to conduct research" [1].

On November 29, 2018, the heads of three departments, the National Health Commission, the Ministry of Science and Technology (MST), and the China Association for Science and Technology (CAST), gave an interview to Xinhua News Agency. They indicated that the event was extremely bad in nature and that the relevant persons were required to suspend their scientific research activities, adding that "we must resolutely investigate and punish violations of laws and regulations" [34]. CAST also withdrew He Jiankui's nomination for the 15th China Youth Award of Science and Technology [34]. On November 30, the Chinese Clinical Trial Registry (ChiCTR) website published a special explanation of its decision to reject the clinical trial registration application for the "Evaluation of the safety and efficacy of gene editing with human embryo CCR5 gene" [12].


On January 21, 2019, the Gene Editing Baby event investigation team of Guangdong Province announced its preliminary finding that He Jiankui, a former associate professor at Southern University of Science and Technology, had, in pursuit of personal fame and fortune and using self-raised funds, deliberately evaded supervision, privately organized relevant personnel, and carried out gene editing of human embryos for reproductive purposes, an activity prohibited by the state [26]. According to the investigation, He Jiankui and the involved personnel and institutions will be punished seriously according to the law, and suspected crimes will be handed over to the public security department for handling. For the born babies and pregnant volunteers, Guangdong Province will work with relevant parties to carry out medical observation and follow-up work under the guidance of the relevant state departments [26]. The above is an overview of He Jiankui's Gene Editing Babies event, which involved many stakeholders.

5.2 Stakeholder Analysis in Gene Editing Babies Event

The above is a brief introduction to the Gene Editing Babies event, organized chronologically along a timeline [the period analyzed in this paper is from November 27, 2018 to March 14, 2019]. As this brief introduction shows, however, many stakeholders were involved; the roles they played were not the same, and together they influenced the development of the event in China. It is therefore necessary to conduct a detailed analysis of the stakeholders of this event in China.

5.2.1 Stakeholders Description

The stakeholders of this event can generally be divided into eight groups (see Table 5.1). In addition to the researcher (He Jiankui) and the subjects, who were direct participants in the event, the government, the scientific community, the medical community, ethicists, the news media and the public, as indirect participants, also played certain roles in the subsequent development of this event; these require detailed introduction and analysis.

5.2.1.1 Government

As one of the stakeholders in this event, the government was one of the important driving forces behind its development. The government here is represented by the following departments:

Table 5.1 The stakeholders of the event

Event: Gene editing babies in China

Stakeholders: Government; Scientific community; Medical community; Ethicists; News media; Researcher (He Jiankui); Subjects of the research; The public

National Health Commission

On November 27, the National Health Commission immediately asked the Guangdong Provincial Health Commission to seriously investigate and verify the matter, to handle it according to laws and regulations, in line with principles that are highly responsible for people's health and scientifically sound, and to promptly disclose the results to the public [32]. On November 29, the deputy director of the National Health Commission stressed the need to follow technical and ethical norms in order to safeguard people's health and human dignity [2].

Ministry of Science and Technology

On November 27, 2018, the deputy director of the Ministry of Science and Technology said that the Ethical Guidelines for Human Embryonic Stem Cell Research issued in 2003 stipulate that gene editing and modification may be carried out on human embryos for research purposes, but that the in vitro culture period must not exceed 14 days after fertilization or nuclear transfer. If the "gene editing babies" were confirmed to have been born, the practice would be prohibited and would be handled in accordance with relevant Chinese laws and regulations [24]. On January 21, 2019, the Ministry of Science and Technology said in a response on its website that its next step would be to work with relevant departments to improve the relevant laws and regulations and the research ethics review system, including in the life sciences [15].

5.2.1.2 Scientific Community

The scientific community was one of the fastest-responding stakeholders in the event. Its main voices condemned the event, as follows.

Scientists

On the evening of November 26, a joint statement issued by 122 Chinese scientists declared: "Any attempt to rashly carry out heritable gene editing of human embryos is resolutely opposed and strongly condemned!" [14].

Southern University of Science and Technology

On November 26, 2018, Southern University of Science and Technology released a statement on its official website, saying it was shocked to learn that He Jiankui had carried out gene editing research on human embryos. Associate professor He Jiankui had been suspended from his position since February 1, 2018, with his leave running from February 2018 to January 2021. The research was carried out by He Jiankui outside the university; he did not report it to the university or to the Department of Biology, and neither was aware of it. The academic committee of the Department of Biology considered it a serious violation of academic ethics and norms [27]. On January 21, 2019, the university released a public statement saying that it would terminate its labor contract with He Jiankui and all of his teaching and research activities on campus [28].

China Association for Science and Technology (CAST)

On November 27, the Association of Life Sciences of CAST issued a statement resolutely opposing so-called scientific research and biotechnology applications that go against the spirit and ethics of science [33]. The deputy director of CAST said on November 29 that CAST would adopt a "zero tolerance" attitude toward misconduct that seriously violates the morality and ethics of scientific research. It would closely follow the progress of the event, give full play to the important role and value of the scientific community, and provide timely intellectual and technical support for the in-depth investigation of the incident by the relevant state departments [2].

Chinese Academy of Sciences (CAS)

On November 27, the Science Ethics Committee of CAS issued a statement, saying: "we are highly concerned about this matter and firmly oppose the clinical application of human embryo gene editing. We are ready to actively cooperate with the state and relevant departments and regions to carry out joint investigations and verify relevant information, and call on relevant investigation institutions to promptly release the progress and results of the investigations to the public" [36]. On March 9, Bai Chunli, President and Party Secretary of CAS and a deputy to the National People's Congress, said during the annual "two sessions" (the National People's Congress and the Chinese People's Political Consultative Conference) that "the formulation of laws should strike a proper balance between scientific regulation, avoiding misuse, and encouraging scientific research and exploration. We should not give up eating for fear of choking" [18].

5.2.1.3 Medical Community

The role played by the medical community is more controversial: on the one hand, the ethical review of the trial was approved by the medical ethics committee of a hospital in Shenzhen (a fact that remains disputed); on the other hand, the medical community responded quickly and condemned the research. It was also one of the few stakeholders concerned about the situation of the newborns.

The hospital which "took" the ethical review

In an approval document of the Medical Ethics Committee of Shenzhen HOME Women's and Children's Hospital, seven people from the hospital signed a statement that He Jiankui's clinical trial was "in compliance with ethical norms and agreed to be carried out". However, on November 26, 2018, the hospital responded to the media by saying, "the hospital and He Jiankui did not cooperate. The program was not performed in this hospital, and the children were not born here" [9].

Medical and health department of Chinese Academy of Engineering

On November 28, the medical and health department of the Chinese Academy of Engineering stated: "we deeply care for the two babies who were born as stated in the report. We are calling for the strictest protection of their privacy… to enable them to grow up mentally and physically healthy and happy in the fullest possible way that society can provide for them" [37].

Chinese Academy of Medical Sciences

On November 30, the Chinese Academy of Medical Sciences took the initiative, in the international authoritative medical journal The Lancet, to make clear to the world the position and attitude of the Chinese medical and scientific community and the positive measures it would take. It also called on the medical community and society to make every effort to properly protect and care for the twins, who are said to have been born, and to ensure their healthy growth both physically and mentally [25].

Medical Ethics Branch, Chinese Medical Association

On November 30, the Medical Ethics Branch of the Chinese Medical Association called on scientific research personnel, medical institutions, relevant administrative departments and industry groups to study and discuss the ethical problems exposed by the gene editing babies event, so that more formal prevention and management measures could be put forward and the scientific spirit and moral order jointly safeguarded [4].

Chinese Clinical Trial Registry (ChiCTR)

On November 30, 2018, ChiCTR announced that the registration of the "Safety and validity evaluation of HIV immune gene CCR5 gene editing in human embryos" had "been withdrawn with the reason of the original applicants cannot provide the individual participants data for reviewing" [3].

5.2.1.4 Ethicists

Since the Gene Editing Babies event became public, the ethics community has followed it closely. As of March 14, 2019, 29 academic papers had been published on the China National Knowledge Infrastructure (CNKI), interpreting, analyzing and criticizing this event from various perspectives. The main points are as follows:

Qiu Renzong, Zhai Xiaomei, Lei Ruipeng: Professor, Chinese Academy of Social Sciences; Professor, Chinese Academy of Medical Sciences & Peking Union Medical College; Professor, Huazhong University of Science and Technology

From an ethical point of view, the authors argue, two things should be done before conducting clinical trials of genome editing: establishing an ethical framework for genome editing, and making governance arrangements for it. On the first aspect, the ethical framework should cover (1) the premises of genome editing for human reproduction; (2) protecting the interests of future parents; (3) protecting the interests of the future person; (4) protecting the interests of other people in society; (5) protecting the interests of the whole society; and (6) protecting the interests of human beings. On the second aspect, the authors suggest five governance arrangements: professional governance, institutional governance, regulatory governance, legal governance and international governance. The authors argue that if a society lacks the ethical framework and governance arrangements described above, its scientists and doctors are not qualified to engage in heritable genome editing [16].

Cong Yali: Professor, School of Health Humanities, Peking University

The author argues that the gene editing baby event reminds us to rethink matters at the institutional level, including the individual researcher, the research team, the institution's policy, and the coverage of and consistency among national regulations. The author addresses how to think, in perspective and in approach, about preventing potential problems instead of blaming individuals only. The author also calls for further rethinking of the moral status of the embryo and fetus, and of who can make decisions for the child [5].

Duan Weiwen: Professor, Chinese Academy of Social Sciences

The gene editing babies event was made possible by the epistemological and axiological defects behind the existing model of ethical construction, which lacks firmness of values. It is necessary to establish rigid regulation of bioethics; three suggestions are put forward [6].

Zhang Chenggang: Professor, School of Social Sciences, Tsinghua University

The birth of the world's first gene-edited babies in November 2018 led to "panic-hot discussions" in all walks of life. Such discussions on bioethics can be the starting point of rationality, rather than the end point of irrationally giving up eating for fear of choking. Promoting social development requires both objective scientific research and sober reflection on ethical norms [38].

Tian Song: Professor, Beijing Normal University


This event provides an opportunity for our society to prevent, at the legal level, the harm that science may do to society, and to punish with the criminal law the harm that has already been done [29].

5.2.1.5 News Media

The whole He Jiankui Gene Editing Babies event first became known to the public through media publicity, and afterwards many media outlets followed the story and published a large number of reports. The news media played a vital role in clarifying a series of issues in the subsequent development of the event.
– Guangming Daily: Scientific progress, or playing to the gallery? [10].
– The People's Daily: Technological development cannot leave ethics behind [20].
– Science and Technology Daily: The high-voltage line of scientific ethics cannot be touched [17].
– Guangming Net and Xinhua News Agency pointed out that the bottom line of law and ethics cannot be broken.

5.2.1.6 The Researcher: He Jiankui

He Jiankui is from Loudi, Hunan province, China. He graduated from the University of Science and Technology of China with a bachelor's degree, obtained his doctorate from Rice University in the United States, and did postdoctoral research at Stanford University under the supervision of Stephen Quake. In 2012, He Jiankui returned to China through the "Peacock Project" of Shenzhen and established his own laboratory at the Southern University of Science and Technology to conduct research on gene sequencing. In his early years, He Jiankui firmly believed that "scholars should stick to poverty, so as to make academic achievements". But in interviews over the past two years, he said: "wealth and science can go together". According to survey data, He Jiankui is a shareholder of 7 companies (with a total registered capital of 151 million RMB), the legal representative of 6 companies, and the actual controller of 5 companies. In February 2017, He Jiankui published an article on the Chinese Science website, arguing that "from the perspective of science and social ethics, it is extremely irresponsible for any human to perform reproductive cell line editing or make gene editing without addressing these important safety issues". In a keynote speech in the US in July 2017, he likewise urged slow and careful experimentation in human gene editing. Meanwhile, however, He Jiankui had already been engaged in human gene editing experiments since March 2017: according to the project description of the retrospective registration on ChiCTR's website, the "study execute time" was "from 2017-03-07 to 2019-03-07" [3, 14].

5.2.1.7 Subjects of the Research

Public information about the subjects is very limited. Apart from the account given by He Jiankui himself, we can only learn the basic situation of some subjects from an NGO called Bai Hualin (White Birch Forest), which represented part of the subjects. According to the official website of ChiCTR, He Jiankui's research subjects were recruited through certain "AIDS public welfare organizations" [3], later identified as Bai Hualin. According to the founder of the Bai Hualin national alliance, He Jiankui's team first contacted the organization in March and April 2017 to solicit patients willing to participate in the gene editing trial. The organization says it only forwarded the relevant information: it screened out the volunteers who did not meet He Jiankui's requirements and, with the volunteers' consent, introduced the remaining 50 people to him. It reportedly did not participate in any follow-up studies [11].

5.2.1.8 Public Attitudes

The event has also caused a large amount of discussion online, with some scholars pointing out that "the He Jiankui gene editing baby event marks a change in the basic attitude of the Chinese public towards scientific activities, from unconditional belief to doubt and questioning" [29]. The following are excerpts of comments left online by Chinese netizens.
@YUE: I thought that the advancement of technology should have been a good thing for the benefit of mankind.
@Mr. 姚: Editing genes is not supported. If this can be edited casually, it is a potential threat to the natural development of mankind.
@湖海散人: It is just as Hawking predicted about the superhuman, and Pandora's box has been opened.
@Lobsangwangmu?: Scientific research does need to be based on ethics and law. Technology without ethical and legal foundations is not a blessing but a disaster!
@亚男Joanna: If humans turn on the gene editing model, then there will be more genetic changes in the future, which will violate the principles of survival of the fittest and natural selection.

5.2.2 Stakeholder Analysis

From the description of the stakeholders of this event above, it is obvious that different stakeholders have played different roles during this event, which deserves more specific and sophisticated analysis.


Government: the Chinese government, from the central ministries and commissions to the provincial, municipal and local governments, responded immediately to the news of the event. The government's attitude towards the whole event takes the most comprehensive range of factors into account. It not only treats the event as a scientific experiment, but also considers the interests of academia, the medical field, society, China's community of scientific and technological workers, the public, the subjects and their babies, and even the future of mankind.

Scientific community: the scientific community responded most strongly to the whole event, holding a firmly opposed and critical attitude. From CAS and CAST to universities, departments and individual scientists, all strongly condemned He Jiankui and the gene-editing babies event, which was analyzed and criticized from multiple perspectives and at all levels.

Medical community: the medical community first clarified the relationship between He Jiankui and the hospital involved in the ethical review, and the ChiCTR then rejected He's application for retrospective registration of the clinical trial. The medical community's response, in addition to being consistent with the scientific community's criticism, is also marked by deep concern for the two babies who were born, a perspective significantly different from other groups' attention to this event.

Ethicists: compared with scientists, who tend to look at problems from a technical perspective, ethicists consider, from the perspective of epistemology and axiology, the ethical, social, legal and other issues that the gene editing baby event can bring about. They therefore think about how to prevent such events by setting up an ethical framework and governance arrangements, while trying to hold the ethical bottom line.

Media: the entire He Jiankui Gene Editing Babies event first became known to the public through media publicity. In the process of clarifying a series of issues in the subsequent development of the event, the news media's active participation therefore played a crucial role in the public's continuing understanding of the truth of the event. It should also be noticed, however, that the media's attitude towards the event changed dramatically, from advocating it as "a scientific breakthrough" to condemning it as "sensationalism", showing that proper scientific popularization and education among media staff is necessary.

The public: the event received great attention from the general public (especially netizens) in China. The public actively followed the event through news reports by the media. What is more, they made their own judgments and expressed their positions and opinions in various ways, which in turn pushed forward the government's handling of the event.

The researcher himself: according to He Jiankui's limited description of the gene-editing babies "research", He himself was aware of the technical and ethical risks of the research, but he still took the risk of carrying out the experiment, which is obviously "misconduct" from the perspective of scientific research and even violated the relevant laws of the state. That a young scientist with good academic training like He would do so is genuinely puzzling.

Subjects: at present, there are no comments or news from the subjects themselves. Information about the subjects has been provided mainly by the researcher; on the other hand, the public welfare organization Bai Hualin came forward to explain relevant issues in the volunteer recruitment process. However, as can be seen from the "informed consent of the volunteers participating in the experiment" exposed by the media, the subjects in this experiment were entirely in the position of vulnerable parties, with very little ability to understand an advanced scientific experiment such as human embryo gene editing.


Table 5.2 Stakeholder analysis in the Gene Editing Babies event in China

Stakeholders         | Role and influences                                    | Attitude (±)
Government           | Leading role; deal with it in accordance with the law  | −
Scientific community | Criticism; to explain and to analyze                   | −
Medical community    | Criticism; caring for the subjects                     | −
Ethicists            | Reflection; focus on the broader perspective           | −
Media                | Reporting; solicit opinion                             | −
The public           | Pay attention to; participate in discussion            | −
Researcher           | Initiated the event; major participant                 | +
Subjects             | Participant                                            | Unknown

The above table (cf. Table 5.2) briefly summarizes the roles and impacts of the different stakeholders in the event and their attitudes towards the matter. It can be seen that except for the researcher himself, i.e. He Jiankui, whose attitude has been positive, and the subjects, whose attitude is unknown, the attitudes of all other stakeholders are critical, with many fierce attacks and condemnations. It should be pointed out that the whole event was first exposed in China as the result of news media reports. The media initially used words like "scientific breakthrough" and "progress", but such news is no longer available online. Soon after, media coverage of the event became consistent with that of the other stakeholders. It can be seen from this detail that the scientific literacy of the Chinese media needs to be further improved.

5.3 China's Gene Editing Babies Event and Responsible Research and Innovation

5.3.1 Responsible Research and Innovation (RRI)

Responsible research and innovation (RRI) is a concept of scientific research and technological innovation put forward by the European Union in recent years [8, 19].


Prof. Van den Hoven gives a comprehensive definition in very broad terms, which has become one of the most widely used definitions so far. Van den Hoven considered that "Responsible Research and Innovation (RRI) refers to the comprehensive approach of proceeding in research and innovation in ways that allow all stakeholders that are involved in the processes of research and innovation at an early stage (A) to obtain relevant knowledge on the consequences of the outcomes of their actions and on the range of options open to them and (B) to effectively evaluate both outcomes and options in terms of societal needs and moral values and (C) to use these considerations (under A and B) as functional requirements for design and development of new research, products and services. The RRI approach has to be a key part of the research and innovation process and should be established as a collective, inclusive and system-wide approach" [30].

From this definition, it can be seen that RRI emphasizes the transformation of different values into functional requirements embedded in the design stage of research and innovation, and it also emphasizes the participation of stakeholders. In other words, scientific research and technological innovation should take the value demands of different stakeholders into consideration at the early design stage, and should meet the value requirements of different stakeholders throughout the whole process of scientific research or technological innovation. In addition, the 2012 report of the European Commission's Directorate-General for Research and Innovation on Ethical and Regulatory Challenges to Science and Research Policy at the Global Level suggests that the concept of RRI should be considered as a combination of ethical acceptability, risk management and human interest [7].

To achieve this goal, RRI proposes a research framework based on four dimensions to analyze and evaluate relevant scientific research and technological innovation. This approach is also called the "four dimensional" framework, proposed by the British scholar Richard Owen and colleagues [19]. The "four dimensions" are anticipation, reflection, deliberation and responsiveness.

The "anticipation dimension" helps policymakers deal with complex issues by combining the latest science and evidence with analysis of the future, enabling them to better understand the opportunities and challenges ahead. Responsible research and innovation focuses on "risk" and attempts to shift the focus on risk from downstream links (consequences) to upstream links (research and innovation).

The "reflection dimension" means that actors and organizations need to reflect on themselves. Personal reflection means holding a mirror up to one's own activities, commitments and ideas, and being aware of the limitations of knowledge. Reflection at the institutional level makes reflection a public matter.

The "deliberation dimension" refers to putting visions, purposes, problems and predicaments into a larger context, achieving collective deliberation through dialogue, participation and debate, and inviting and listening to a wide range of opinions from the public and different stakeholders, redefining problems and identifying potential areas of debate by introducing a broad perspective.


The "responsiveness dimension" requires adjusting the framework and direction of scientific research and innovation in response to the reactions of, and changes among, stakeholders. This is an interactive, inclusive and open process of adaptation and dynamic capability.

5.3.2 The Four Dimensional Framework

Taking the four dimensions of the above-mentioned "four dimensional" framework as standards, the specific contents of the four dimensions are selected as parameters to construct a model for examining and analyzing the Gene Editing Babies event. The general model is shown in Table 5.3.

Based on this framework model, this paper analyzes the Chinese Gene Editing Babies event at two levels. The first level is the Gene Editing Babies event itself: through the parameters of the "four dimensional" framework, and against the content of the event described above, it is analyzed whether the event meets the requirements of RRI. The second level is the process of dealing with the Gene Editing Babies event in China: the parameters of the "four dimensional" framework model are used to analyze this process, to explore whether the Chinese stakeholders' handling of the event is consistent with the requirements of RRI.

Table 5.3 The four dimensional framework

Four dimensions | Criteria | Case | Correspond or not (±)
Anticipation    | Future analysis; Focus on "risk"; Focus on risk shifts from downstream to upstream | |
Reflection      | Self-reflection; Aware of the limitations of knowledge; Reflection becomes a public affair | |
Deliberation    | Collective consideration through dialogue, participation and debate; Views from the public and stakeholders; Broad perspective to redefine issues | |
Responsiveness  | Make the change to scientific research and innovation with public opinions | |


5.3.3 Analysis of the Event with the Four Dimensional Framework

As mentioned above, the Gene Editing Babies event itself is the "case". Based on the "four dimensional" framework, with its parameters as standards, this section analyzes and interprets the event in order to explore whether the "case" accords with the requirements of RRI, providing a perspective and a criterion for how we should evaluate this event. The contents of the Gene Editing Babies event are all quoted from the report given by the researcher He Jiankui himself at the Second International Summit on Human Genome Editing and from this experiment's "informed consent" form found on the official website of the China Clinical Trial Registry (ChiCTR). See Table 5.4 for the detailed analysis.

It can be seen from this analysis that He Jiankui noticed the risks of the trial, such as off-target effects, the problems of efficacy and durability, and the risk of HIV infection of mothers or infants during artificial insemination. However, he still rashly pushed forward with the experimental trials. At the same time, he held that some risks were not related to the project, and he tried to "use technical means to reduce the possibility of injury". This cannot be regarded as the rigor and sense of responsibility of a responsible researcher. Therefore, when analyzing the "anticipation" dimension, this study considers the case to both correspond and not correspond to the criteria of RRI. Even granting this partial match in the anticipation dimension, the case is completely negative in the analysis of the reflection, deliberation and responsiveness dimensions. Therefore, this paper holds that the Gene Editing Babies event itself does not meet the requirements of Responsible Research and Innovation, and that He Jiankui's experiment can be said to have no legitimacy under an analysis from the perspective of RRI.

5.3.4 Analysis of the Process of Dealing with the Event in China with the Four Dimensional Framework

This paper then analyzes the second level of this event, examining the process of dealing with the Gene Editing Babies event in China. As pointed out above, this "process" involves many stakeholders, and the main content analyzed with the "four dimensional" framework here is accordingly the reactions and actions of the stakeholders discussed earlier. The specific analysis is shown in Table 5.5.

This analysis evaluates the process by which government and society handled the event against the "four dimensional" framework. From the perspective of RRI, stakeholders such as the government, academics, the media and the public remained actively involved in the process after learning of the event. Although they held different positions in the handling process, their joint participation made the event develop in a responsible direction. In this sense, China's process of handling this event meets the requirements of Responsible Research and Innovation.

Table 5.4 Analysis of the event with the four dimensional framework

Anticipation
Criteria: Future analysis; Focus on "risk"; Focus on risk shifts from downstream to upstream.
Case (gene editing babies event):
• The project team will use technical means to reduce the possibility of injury;
• It has the risk of experimental off-target effects;
• The risk of HIV infection of the mother or baby during artificial insemination cannot be completely excluded, but this is not caused by the project, so the team will not be responsible for it;
• The project team will not be held legally liable for any natural risk of malformation, congenital defect or genetic disease of the newborn participating in the project;
• Health follow-up plan for 18 years: this was adopted because of uncertainties in gene editing techniques, such as off-target effects and problems of efficacy and durability.
Correspond or not: + / −

Reflection
Criteria: Self-reflection; Aware of the limitations of knowledge; Reflection becomes a public affair.
Case (gene editing babies event):
• Forged an ethical review;
• The project team stated that it would not take risks or consequences beyond existing medical science and technology.
Correspond or not: −

Deliberation
Criteria: Collective consideration through dialogue, participation and debate; Views from the public and stakeholders; Broad perspective to redefine issues.
Case (gene editing babies event):
• The data was leaked because the confidentiality of the experiment was not strong; the university was completely unaware of the experiment;
• The informed consent states that the project team's rights include: after the birth of the baby, the project team will keep the cord blood for future use; the photos of the baby on the day of its birth will be kept by the project team; the project team has the right of portrait of the baby and can publish it to the public; baby blood samples are to be disclosed to the public; if parents are willing to reveal their portraits and names, their wishes prevail; only the project team has the final right of interpretation to announce the project results to the public, and volunteers have no right to explain, release or announce project-related information without permission.
Correspond or not: −

Responsiveness
Criteria: Make the change to scientific research and innovation with public opinions.
Case (gene editing babies event):
• After gene editing of the embryos, the volunteers would have to repay all the money provided by the project team if they withdrew from the project, unless the mother was not pregnant or had a miscarriage, or the fetus was detected to be suffering from a major disease.
Correspond or not: −

Table 5.5 Analysis of the process of dealing with the event in China with the four dimensional framework

Anticipation
Criteria: Future analysis; Focus on "risk"; Focus on risk shifts from downstream to upstream.
Process (dealing with the event in China):
– Recognizing that new technologies like CRISPR carry huge ethical risks while bringing profound changes to medical trials;
– Even if the subject gives informed consent, a research project with potentially significant risks that violates the principle of beneficence cannot be ethically defended if it is applied clinically;
– In genetic editing of germ cells, the results will have an unpredictable impact on future generations;
– Anticipation from the media and the public…
Correspond or not: − / +

Reflection
Criteria: Self-reflection; Aware of the limitations of knowledge; Reflection becomes a public affair.
Process (dealing with the event in China):
– 122 Chinese scientists' joint statement: "This technology could be performed a long time ago; it does not count as any innovation, and biomedical scientists around the world have not performed it [on human embryos] because of the uncertainty of off-target effects and other great risks";
– The scientific community and even the whole society should take this event as an opportunity to fully realize the close relationship between science and technology and ethics, further enhance the consciousness of respecting science and respecting life, carry forward the medical humanistic spirit, and strive to achieve harmony and unity between scientific and technological progress and human ethics;
– Gene editing brings about ethical and social problems for family and society;
– How to prevent science from breaking rules and ethical boundaries in the name of "breakthrough";
– Reflection from the media and the public…
Correspond or not: +

Deliberation
Criteria: Collective consideration through dialogue, participation and debate; Views from the public and stakeholders; Broad perspective to redefine issues.
Process (dealing with the event in China):
– We will support and cooperate with relevant departments to deal seriously with the people and institutions involved, according to laws and regulations and based on the investigated facts;
– CAST will closely follow the progress of the incident, give full play to the important role of the scientific community, and provide timely intellectual and technical support for relevant state departments to carry out an in-depth investigation of the incident;
– We are ready to actively cooperate with the state and relevant departments and regions to carry out joint investigations and verify relevant information, and call on the relevant investigation institutions to promptly release the progress and results of the investigations to the public;
– The public's right to know should be fully respected and the principle of information disclosure should be adhered to, to ensure transparency and impartiality;
– It is suggested that experts in related fields be organized to have a wide discussion of the ethical issues arising from the gene editing babies;
– Concern and discussion from the public…
Correspond or not: + / −

Responsiveness
Criteria: Make the change to scientific research and innovation with public opinions.
Process (dealing with the event in China):
– On the very day of the incident, Guangdong province and Shenzhen set up a joint investigation team to carry out a comprehensive investigation into the gene editing babies event;
– Relevant units were required to suspend the scientific research activities of the relevant persons;
– On January 21, 2019, the Ministry of Science and Technology said it would work with relevant departments to jointly improve related laws and regulations and improve the research ethics review system, including in the life sciences;
– The guidance and supervision of institutional ethics committees by relevant administrative authorities and industry departments should be strengthened, and the qualification access, evaluation system and continuous evaluation mechanism of ethics committees should be established and improved;
– Laws, regulations and ethical norms concerning human biomedical research should be improved;
– This event provides an opportunity for our society to prevent the harm of science at the legal level.
Correspond or not: + / −


5.4 Latest Development of the Event and Its Analysis

On the day before the end of 2019, He Jiankui's gene editing babies event reached a legal conclusion. According to news reported by the Xinhua News Agency in Shenzhen, the Gene Editing Babies case was publicly sentenced at first instance by the Nanshan District People's Court in Shenzhen on December 30. The main contents of the reported verdict are as follows: "The three defendants jointly and illegally practiced human embryo gene editing and reproductive medical activities for reproductive purposes, which constituted the crime of illegal medical practice. … The court held that the three defendants had failed to obtain a doctor's qualification, pursued profit, deliberately violated the relevant national regulations on scientific research and medical management, crossed the bottom line of scientific and medical ethics, and rashly applied gene editing technology to human assisted reproductive medicine, disrupting the order of medical management; and the circumstances were serious. Their behavior constituted illegal medical practice. Based on the criminal facts, nature, circumstances and degree of harm to society, the defendant He Jiankui was sentenced to three years in prison and fined three million RMB; Zhang Renli was sentenced to two years in prison and fined one million RMB; and Qin Jinzhou was sentenced to one year and six months' imprisonment, suspended for two years, and fined 500,000 RMB. … Due to the personal privacy of the persons involved, the court heard the case in private. … The defendants' family members, deputies to the National People's Congress, members of the CPPCC, media reporters and representatives from all walks of life attended the pronouncement of the verdict" [35].

The legal verdict in the Gene Editing Babies event attracted considerable attention. Analyzing it with the stakeholder framework above, the following preliminary conclusions can be drawn:

– The government: The verdict of the people's court in Nanshan district, Shenzhen, represented the state's legal decision on the matter, and there was no further comment from other authorities.
– The media: The media were the main disseminators of the news; Xinhua Net, People's Net, Phoenix Net, CCTV, Sina, Tencent… all major websites reported it.
– The academic community: As of March 31, data from CNKI showed that a total of seven academic papers on the gene-editing baby event had been published after the verdict, mainly analyzing aspects of administrative supervision and legal regulation. Meanwhile, Prof. Qiu Renzong and his team published the book Human Genome Editing: Science, Ethics and Governance [13] in the very month of the verdict. The book provides a broad background on the development of human genome editing at home and abroad, responds directly to the event, and analyzes and interprets it from many aspects, with many scholars' work focused on it.
– The public/netizens: On Weibo (the biggest online community in China), discussion of the news of the verdict was vigorous: within 3 days there were 7,433 likes, 1,282 reposts and 1,249 comments. Chinese netizens focused mainly on three aspects of the verdict: (1) the babies: most netizens cared about the situation of the babies, with two attitudes, to protect or to supervise them, especially regarding their fertility; (2) He Jiankui: not all netizens condemned He; some criticized him severely, but many others directly supported him, regarding him as "someone who will leave a name in history"; (3) science: some netizens believed scientific progress should be subject to ethical constraints, while others thought scientific progress should not be bound by ethics [31].

Here, the authors find it difficult to analyze the legal verdict itself within the framework of responsible research and innovation, and prefer to regard the verdict as part of the "responsiveness" dimension in the RRI framework, belonging to the process of handling the Gene Editing Babies event. Because of the limited information available on the adjudication process and content, we have no way to analyze it further so far. However, one detail stands out: many stakeholders were invited to the pronouncement of the first-instance verdict, so we have reason to believe that the process and outcome of the verdict took stakeholder participation fully into account.

5.5 Conclusion

This paper has given a "Chinese perspective" on the development of the gene editing babies event since November 2018, and has carefully sorted out and analyzed the stakeholders in the event. From the perspective of Responsible Research and Innovation, the Chinese experience of the Gene Editing Babies event can be examined on two levels.

One level is the event itself: analyzing the gene editing babies event from the perspective of RRI and investigating it with the "four dimensional" framework. The preliminary conclusion is that this "scientific research" itself does not conform to the concept of RRI, or even to the basic ethical and legal requirements of scientific research. Accordingly, the Chinese authorities ruled on the event in accordance with the law.

The other level is the process by which government and society handled the event. From the perspective of RRI, stakeholders such as the government, academics, the media and the public remained actively involved in the process after the event. Although they held different positions in the handling process, their joint participation made the event develop in a responsible direction. In this sense, China's handling of the event meets the requirements of RRI.

Nevertheless, this does not mean that China's handling process was perfect; how to prevent future misconduct in scientific research is, and will remain, the most important concern. As Prof. Cong has argued, for a nation to implement Responsible Research and Innovation, individuals, the academic community, institutions, governments and all other stakeholders need both to "each perform their own duties" and to "pull together". A multi-faceted governance framework, not only in China but also internationally, will be needed to better secure the good use of emerging technologies such as genome editing. This will require the collaboration of all stakeholders, inside and outside China.

Acknowledgements The authors thank Prof. Tsuyoshi Matsuda for his invitation to attend the International Workshop on Meta Science and Technology in Kobe, which initiated the writing of this topic, and are grateful for his kind encouragement and support. This work was supported by the National Center for Science & Technology Evaluation under the grant "Theoretical and Applied Research on Responsible Innovation" (Project No. YJZX2019-11) and also by the project "Research on the Chinese Model and Application Approach of Responsible Innovation". This paper incorporates substantial elements of research from a paper co-authored with Carl Mitcham (Emeritus Professor of Humanities, Arts, and Social Sciences at Colorado School of Mines) that is scheduled for publication in a theme issue of the journal Social Epistemology on field philosophy. We greatly appreciate the willingness of Professor Mitcham, Social Epistemology, and the theme issue editors Robert Frodeman and Adam Briggle to permit us to draw on the closely related article that will appear there. All remaining errors are, of course, our own.

References

1. Beijing Business Today. (2018). 贺建奎现身基因编辑婴儿:结果是不小心公布的,露露娜娜已出生 [He Jiankui appeared to Gene Editing Babies: the result was accidentally announced, Lulu and Nana have been born]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618431887702825981&wfr=spider&for=pc.
2. CCTV News. (2018). 最新消息!国家对“基因编辑婴儿”这样回应! [Latest news! The nation responded to the "Gene Editing Babies"!]. Retrieved Jan 30, 2019 from https://baijiahao.baidu.com/s?id=1618463676360262049&wfr=spider&for=pc.
3. Chinese Clinical Trial Registry (ChiCTR). Retrieved Mar 14, 2019 from http://www.chictr.org.cn/showproj.aspx?proj=32758.
4. Chinese Medical Association, Medical Ethics Branch. (2019). 中华医学会医学伦理学分会关于“基因编辑婴儿”事件的呼吁和建议 [Appeals and suggestions on the "Genetic Editing Baby" incident from the Medical Ethics Branch of the Chinese Medical Association]. Retrieved Mar 13, 2019 from http://www.dxy.cn/bbs/topic/40416928?sf=2&dn=4.
5. Cong, Y. (2019). Reflection on the gene-editing baby event from institutional perspective (基因编辑婴儿事件制度层面的反思). Medicine & Philosophy (医学与哲学), 1(40), 16–20.
6. Duan, W. (2018). Gene edited babies urgently need rigid bioethical regulation (基因编辑婴儿函待刚性生命伦理规制). Study Times (学习时报), 006, Jan 5, 2018.
7. European Commission. (2012). Ethical and regulatory challenges to science and research policy at the global level. Brussels: European Commission, Directorate-General for Research and Innovation.
8. European Union. (2014). Horizon 2020 in brief: The EU framework program for research and innovation. https://doi.org/10.2777/3719.
9. Gmw.cn. (2018). 基因编辑婴儿公布一日:有人强烈抗议,有人撇清关系 [Gene editing baby announced one day: Some people strongly protested, some people clarified the relationship]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618266453767605020&wfr=spider&for=pc.
10. Guancha.gmw.cn. (2018). 世界首例免疫艾滋病的基因编辑婴儿:科技进步,还是哗众取宠? [The world's first genetically edited baby with immune AIDS: Advances in science and technology, or sensationalism?]. Retrieved Mar 13, 2019 from http://guancha.gmw.cn/2018-11/26/content_32049454.htm.
11. Haiwainet.cn. (2018). 基因编辑婴儿何来?艾滋病平台回应志愿者招募细节 [Where is the genetic editing baby coming? AIDS platform responds to volunteer recruitment details]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618238295983967849&wfr=spider&for=pc.
12. Jiemian. (2018). 贺建奎“基因编辑婴儿”临床试验注册申请被驳回 [He Jiankui's "Gene Editing Baby" clinical trial registration application was rejected]. Retrieved Mar 13, 2019 from https://www.jiemian.com/article/2714919.html.
13. Lei, R. P., Zhai, X. M., Zhu, W., & Qiu, R. Z. (Eds.). (2019). Human genome editing: Science, ethics, and governance (Vol. 12). Beijing: Peking Union Medical College Press.
14. Lifetimes.cn. (2018). 为什么我们反对“基因编辑婴儿”?真相比你想象中更可怕 [Why do we oppose "gene editing baby"? Really more scary than you think]. Retrieved Mar 13, 2019 from http://www.cqcb.com/yangshen/2018-11-30/1273056_pc.html.
15. People.cn. (2019). 两部委回应“基因编辑婴儿”调查结果 南科大发公开声明 [The two ministries responded to the results of the "Gene Editing Baby" survey]. Retrieved Jan 30, 2019 from http://health.people.com.cn/n1/2019/0122/c14739-30583912.html.
16. Qiu, R., Zhai, X., & Lei, R. (2019). Ethical and governance challenges raised by heritable genome editing (可遗传基因组编辑引起的伦理和治理挑战). Medicine & Philosophy (医学与哲学), 1(40), 1–6, 11.
17. Science and Technology Daily. (2018). 科学伦理的“高压线”不容触碰 [The "high-voltage line" of scientific ethics cannot be touched]. Retrieved Jan 30, 2019 from http://digitalpaper.stdaily.com/.
18. Science and Technology Daily. (2019). 人大代表:基因编辑技术不能因噎废食,叫停一切有违科学精神 [NPC deputy: Gene editing technology should not be abandoned out of fear of failure; halting everything is against the spirit of science]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1627662886370932625&wfr=spider&for=pc.
19. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. https://doi.org/10.1016/j.respol.2013.05.008.
20. Tech.163.com. (2018). 人民日报评基因编辑:科技发展不能把伦理留在身后 [People's Daily review on gene editing: Technology development cannot leave ethics behind]. Retrieved Mar 13, 2019 from http://tech.163.com/18/1127/09/E1K03I9C0009996A.html.
21. Tech.sina. (2018a). 世界首例免疫艾滋病的基因编辑婴儿在中国诞生 [The world's first genetically edited baby without AIDS is born in China]. Retrieved Jan 30, 2019 from https://tech.sina.com.cn/d/f/2018-11-26/doc-ihmutuec3688779.shtml.
22. Tech.sina. (2018b). 南科大已“查封”贺建奎办公室:请勿入内 后果自负 [Southern University of Science and Technology has "sealed" He Jiankui's office: Do not enter, consequences at your own risk]. Retrieved Jan 30, 2019 from https://tech.sina.com.cn/d/2018-11-27/doc-ihpevhcm0168226.shtml.
23. The Beijing News. (2018a). 贺建奎港大谈“基因编辑婴儿” 现场发言18分钟 [He Jiankui talks about the "gene editing baby" at HKU, an 18-minute speech on the spot]. Retrieved Mar 13, 2019 from http://www.bjnews.com.cn/news/2018/11/28/525678.html.
24. The Beijing News. (2018b). 科技部:“基因编辑婴儿”被明令禁止 [Ministry of Science and Technology: "Gene Editing Baby" is banned]. Retrieved Jan 30, 2019 from http://www.xinhuanet.com/tech/2018-11/28/c_1123776229.htm.
25. The Beijing News. (2018c). 中国医科院发文谈基因编辑婴儿:突破学术道德伦理底线 [Chinese Medical Academy issued a text on genetic editing infants: breaking through the bottom line of academic ethics]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618700638243625633&wfr=spider&for=pc.
26. The Beijing News. (2019). 广东初步查明“基因编辑婴儿事件” [Guangdong initially identified the "gene editing baby event"]. Retrieved Jan 30, 2019 from http://www.bjnews.com.cn/news/2019/01/21/541532.html.
27. Thepaper.cn. (2018). 基因编辑婴儿试验:所有涉事方均澄清与贺建奎关系 [Genetic editing baby test: All parties involved clarified the relationship with He Jiankui]. Retrieved Jan 30, 2019 from https://news.163.com/18/1127/09/E1K0TDKC0001875P.html.
28. Thepaper.cn. (2019). 贺建奎一伦理建议论文被撤稿,期刊:与基因编辑婴儿实验相关 [He Jiankui's ethical suggestion paper was retracted, journal: related to genetic editing infant experiments]. Retrieved Mar 13, 2019 from https://tech.sina.com.cn/roll/2019-02-24/doc-ihqfskcp8125603.shtml.
29. Tian, S. (2019). Science that infringes upon society must be sanctioned by law (以法律制裁侵害社会的科学). Forum on Science and Technology in China (中国科技论坛), 2, 6–8. https://doi.org/10.13580/j.cnki.fstc.2019.02.005.
30. Van den Hoven, J., Jacob, K., Nielsen, L., Roure, F., Rudze, L., Stilgoe, J., et al. (2013). Options for strengthening responsible research and innovation: Report of the expert group on the state of art in Europe on responsible research and innovation. Brussels: European Commission. https://doi.org/10.2777/46253.
31. Weibo, People's Daily. (2020). #基因编辑婴儿案一审宣判# 贺建奎等三被告人被追究刑事责任 [He Jiankui and three other defendants were investigated for criminal responsibility]. Retrieved Jan 3, 2020 from https://weibo.com/2803301701/In9udCk4D?type=comment&display=0&retcode=6102#_rnd1580716199508.
32. Xinhua Net. (2018a). 国家卫健委回应”基因编辑婴儿”事件:依法依规处理 [The National Health Commission responded to the "Gene Editing Baby" incident: Dealing with it according to the law]. Retrieved Jan 30, 2019 from http://www.xinhuanet.com/politics/2018-11/27/c_1123770847.htm?baike.
33. Xinhua Net. (2018b). 中国科协生命科学学会联合体:坚决反对有违科学精神和伦理道德的所谓科学研究与生物技术应用 [Association of Life Sciences of China Association for Science and Technology: Resolutely oppose the so-called scientific research and biotechnology applications that violate the scientific spirit and ethics]. Retrieved Jan 30, 2019 from http://www.xinhuanet.com/tech/2018-11/28/c_1123776091.htm.
34. Xinhua Net. (2018c). 国家卫健委、科技部、中国科协负责人回应“基因编辑婴儿”事件:已要求有关单位暂停相关人员的科研活动、对违法违规行为坚决予以查处 [The heads of the National Health Commission, the Ministry of Science and Technology, and the China Association for Science and Technology responded to the "Genetic Editing Baby" incident: the relevant units have been requested to suspend the scientific research activities of relevant personnel, and resolutely investigate and deal with violations of laws and regulations]. Retrieved Jan 30, 2019 from http://www.xinhuanet.com/2018-11/30/c_137641875.htm.
35. Xinhua Net. (2019). “基因编辑婴儿”案一审宣判 贺建奎等三被告人被追究刑事责任 [The first trial of the "Gene Editing Baby" case sentenced He Jiankui and two other defendants to be held criminally responsible]. Retrieved Jan 6, 2020 from http://www.xinhuanet.com/legal/2019-12/30/c_1125403802.htm?baike.
36. Youth.cn. (2018a). 广东省、深圳市全面调查“基因编辑婴儿”事件 [Guangdong Province and Shenzhen City comprehensively investigated the "gene editing baby" incident]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618355660266978990&wfr=spider&for=pc.
37. Youth.cn. (2018b). 工程院:“基因编辑婴儿”严重违背伦理和科学道德 [Academy of Engineering: "Gene editing baby" seriously violates ethics and scientific morality]. Retrieved Mar 13, 2019 from https://baijiahao.baidu.com/s?id=1618389320524922726&wfr=spider&for=pc.
38. Zhang, C. (2019). 新兴技术发展与风险伦理规约 [Ethical regulations on the development and risk of emerging technology]. Forum on Science and Technology in China (中国科技论坛), 1(1), 1–3. https://doi.org/10.13580/j.cnki.fstc.2019.01.002.

Chapter 6

Posthumously Conceived Children and Succession from the Perspective of Law

Kengo Itamochi

Abstract Reproductive medicine enables a man to store his sperm in a cryobank, which allows the possibility that his biological child will be conceived after his death. The preexisting law was established prior to the creation of technology that made such posthumously conceived children possible. As such, we are now presented with a legal problem: Should we recognize these children as the legal offspring of their biological fathers? This chapter discusses this problem in the context of whether the law allows such posthumously conceived children to inherit their deceased fathers' estate property by analyzing cases from Japan and the United States. It also considers who should devise the rule to address the problem and what it should comprise by examining the experience of the United Kingdom.

Keywords Reproductive medicine · Intestate succession · Posthumously conceived child · Right to inherit · Paternity

6.1 Introduction

When a person dies, society must decide who takes the property that belonged to him/her.1 The field of law that governs this type of situation is called succession. In this context, succession refers to the inheritance of the deceased's property.

1 This chapter uses ample legal jargon. For the convenience of non-lawyer readers, although some terms are explained in the main text, some clarification is given in the footnotes. A person who has died is called "the deceased." The whole assets of the deceased are called the "estate." Thus, an estate is a body of property that the deceased owned at the time of his/her death, and "estate property" means each piece of property contained within the estate. People who take ownership of the estate property in intestate succession are called "heirs" of the deceased. For more specialized readers, legal problems with reproductive technologies are discussed in many works, such as Bridge [2], Shapiro [11], and Simana [13]. For legal problems with medical issues in general, see, e.g., Laurie, Harmon & Dove [9].

K. Itamochi (B)
Kobe University Graduate School of Law, 2-1 Rokkodaicho, Nada, Kobe, Hyogo 657-8501, Japan
e-mail: [email protected]

© Kobe University 2021
T. Matsuda et al. (eds.), Risks and Regulation of New Technologies, Kobe University Monograph Series in Social Science Research, https://doi.org/10.1007/978-981-15-8689-7_6


When a person dies and leaves a will, which is also called a "testament," the law of wills (or law of testate succession) determines whether the will is legally valid, how its validity may be contested, how it can be construed, and other related matters. Wills usually designate who will receive the testator's property and which portion they will be allotted. However, many people die without making a will.2 In this case, the law of intestate succession dictates how the estate property should be allocated. In the jurisdictions discussed in this chapter,3 the default rules give the highest priority to the surviving spouse and the children of the deceased.4 While entertaining philosophical doubt as to whether spouses and children should take the property of the deceased by default may represent a starting point, this chapter discusses the law in its current form. Here, let us assume that under intestate succession law, the surviving spouse and children of the deceased have first priority to take the estate property.

Developments in reproductive medicine enable a widow to have her eggs fertilized by her late husband's semen from a cryobank, known in more common terminology as a "sperm bank," even after his death. As such, a posthumously conceived biological child might come into the world after his/her father's succession process, because following the death of one's husband, it is usually the social and sometimes legal duty of the widow to commence the succession process5 and conclude matters related to the estate by allocating estate properties to the heirs and paying inheritance taxes. Given these details of the succession process and tax payments, it is not unusual for a widow to deliver a posthumously conceived child some length of time, even years, after the matters related to the estate have been settled and its assets distributed.

The issue this chapter addresses is as follows: should the child take his/her father's property through succession and receive the same benefits as other legal children, which he/she otherwise would have received if he/she had been born or conceived during the father's lifetime? Further, if he/she should receive more or less in terms of an inheritance, what is the basis for making this determination and how much more or less should that portion be? If not, how is such a distinction between the children born or conceived during their father's lifetime and those conceived after his death made or justified?

Thus, new technologies have caused new social and legal problems. In the 2000s, courts in Japan and the United States confronted this issue.

2 It is said that only 1.32% of the population makes wills in Japan. https://www.kameoka.com/inherit_data_5038.html Even in the United Kingdom or the United States, where it is often said that most people make wills, legal professionals say that, at most, about half make wills.
3 This chapter discusses legal cases from the United States, the United Kingdom, and Japan. For general references to the family law and/or succession law in each jurisdiction, see, e.g., Sitkoff & Dukeminier [12], Gilmore & Glennon [4], and Kubota [8], respectively.
4 Strictly speaking, children have been divided into the categories of "legitimate" and "illegitimate," although such a division is now abrogated or deemed unconstitutional in the jurisdictions discussed in this chapter. Also see JSC [7] below.
5 In common law jurisdictions, this process is not only legal but also judicial, and is widely known as "probate" or "administration of estate".

The Japanese Supreme Court denied any possibility of a legal father-child relationship ("paternity") between a deceased biological father and his posthumously conceived child. As a result, such a child could not inherit any property left by the deceased father. This is not the only solution devised for this type of circumstance. A leading case in the United States took a more flexible approach and balanced three policy concerns: (1) the best interests of the children; (2) the state's interest in the orderly administration of estates; and (3) the reproductive rights of the biological parents. Balancing these interests, the court rejected the position that all posthumously conceived children are automatically barred from taking property from their deceased biological parent's intestate estate, and instead suggested when and how such children could be regarded as legal heirs. Another approach can be found in the United Kingdom, where a new law was introduced by Parliament to regulate how and under what circumstances a couple may rely on artificial insemination or other reproductive technologies, and what legal effects could then follow. Under this law, posthumously conceived children are not allowed to inherit their biological fathers' estates, but the law was later amended to allow such children to claim their biological fathers for one specific purpose: Parliament provided that a posthumously conceived child could have a biological father who was legally married to the mother before his death listed on the child's birth certificate (this official acknowledgment of paternity does not currently apply for any other legal purpose). This amendment took place after the court's denial of paternity in 1997 and the political campaigns thereafter.6

In its discussion of this issue, this chapter draws judicial and legislative solutions from Japan, the US, and the UK. Before the presentation of the case studies, Sect. 6.2 depicts how the traditional law would have addressed the issue of succession by a posthumously conceived child and reveals the aspects that are particularly problematic. Sections 6.3 and 6.4 are case studies from Japan and the US, respectively, which show a clear contrast in judicial attitude: while one is passive regarding its independent resolution of a new problem, the other is active in proposing a tentative solution for use until new legislation is introduced. Section 6.5 reconsiders and defends the passive attitude of the Japanese court; the legislative branch might be in a better position to devise a solution to this new social problem introduced by a new technology. Section 6.6 introduces the experience in the UK, where Parliament quickly reacted and implemented a solution to this new problem caused by new technologies. However, its regulation was not perfect and, consequently, some disputes were brought to court. Following such litigation and a political campaign, Parliament amended the regulatory framework to allow posthumously conceived children to claim their biological fathers as such for a limited purpose. Section 6.7 summarizes the argument and concludes that when new technologies cause new social problems, it is ideal for legislators to react and ensure an optimal balance between the benefits brought by such new technologies and the risks they introduce; but if a case is brought to court before appropriate legislation can be passed, it might be in the best interest of the courts to provide a tentative solution, independent of the lack of legislation, especially when the interests of political minorities are at stake.

6 One caveat: the sexes are interchangeable in both theory and practice, with minor corrections. For example, a widower could have a child with his late wife by preserving her eggs, having another woman act as a surrogate to carry the embryo made from his sperm and the egg of his late wife, and having her deliver the child. This case would also prompt (a similar but somewhat different set of) questions. For convenience of discussion, this chapter focuses on the situation of a widow, her late husband, and their posthumously conceived child.

6.2 Posthumous Children and Law

Before examining specific cases, let us review the basic framework and the problem in the context of law.

6.2.1 Old and New Question

The legal relationship between parents and their children varies across countries and cultures. Some cultures recognize such a relationship only amongst a married father and mother and their biological children. Other cultures give an even broader scope to the familial relationship between what we now call parents and children, through adoption; in this case, family members do not necessarily have any biological connection with each other. In 18th-century Japan, for example, it was not uncommon for a childless homeowner to adopt a young couple as children to allow them to inherit his/her household and ensure that its business would continue under the same familial line; this tradition still occasionally occurs today, although it is less common.

The effects of such a relationship also vary. In some societies, a parent-child relationship acknowledges the powers and duties of parents to make decisions on matters related to their children. In other societies, parents have discretionary power over their children's behaviors, but they have only duties to properly maintain their children's property and assets. In some situations, parents are incapable of managing such powers and duties, so another person, such as a guardian or trustee, undertakes the responsibility of managing those powers and duties, or all parental responsibility in general. However, it is almost universal for a parent-child relationship to be typically established by marriage between the father and the mother and a biological relationship between the married couple and their child; such a parent-child relationship leads to succession of the property and assets of the parent to the child at the parent's death. In other words, the biological children of the deceased parent are his/her heirs.7

7 The author does not insist that it is normal for biological parents to be married or for only their biological children to be their legal heirs. However, he does endeavor to describe our society's status quo as the starting point for this discussion. He assumes that a married couple and any resulting legal children are more or less legally privileged; the children of individuals who are not married or in a like status, such as a civil partnership, cannot always enjoy the benefits they would have if they had such status in the US, the UK, or Japan today.

Here, we have a classic problem. When we recognize a legal parent-child relationship between a married couple and their children, what does this mean for a child who was born after his/her father's death? This problem is classic because it has always been possible for a child to be conceived by a married couple through natural sexual intercourse (natural conception) and for the father to die before the mother gives birth to the child (the delivery). That is, the following chronology is possible: (1) the conception; (2) the father's death; and (3) the delivery of the baby. In this case, the baby was born to a single mother, not a married mother, because of her husband's passing. This type of circumstance was possible even before we invented the technologies of artificial insemination or in vitro fertilization (IVF). Although the baby came into the world as the fetus of a married couple, his/her legal parent is ultimately only the mother, who gave birth to him/her.

Japanese lawmakers were aware of this problem when they enacted the Civil Code in 1896. Section 772 of the Code8 provides that a child is presumptively the child of a married couple if he/she was conceived during their marriage. Under this provision, the legal recognition of paternity (the father-child relationship) is established in principle by conception during the marriage rather than by delivery. As such, the classic problem posed by posthumous children is settled.

English and American common law also has similar traditional rules for this situation. For example, a Massachusetts case in the early 19th century stated the following: "A child en ventre sa mere [note: a French legal phrase meaning "fetus in utero" (the literal meaning is "a child in his/her mother's belly")] is taken to be a person in being, for many purposes. He may take by descent; by devise; or under the statute of distributions; and generally for all purposes where it is for his benefit." (Hall [5], 258) An English case in 1792 also says: "For many purposes an infant en ventre sa mere is considered by the law as if he were in actual existence" and "there is no distinction between a child en ventre sa mere and one actually born." (Doe [3], 29, 37) At least for the purpose of succession, either by will or through intestacy, posthumously born children are regarded as children of the deceased biological father if they were conceived before the father's death. As can be observed from the above examples, posthumously born children do not really constitute a new problem for the law of succession; the issue is instead quite old.

8 "A child conceived by a wife during marriage shall be presumed to be a child of her husband." Translation from Japanese Law Translation, administered by the Ministry of Justice of Japan: http://www.japaneselawtranslation.go.jp/law/detail/?id=2252&vm=04&re=02.

6.2.2 What's New: Cryobank (Sperm Bank) and Posthumous Conception

Then, what is new in terms of this issue? The technologies of artificial insemination and cryobanks introduce new challenges in this regard. A cryobank, also called a "sperm bank," enables men to store their semen at a very low temperature for a period far longer than the span of their own life. Even decades after a man dies, his widow can become pregnant with her late husband's sperm through artificial insemination. Chronologically speaking, the following order is now possible: (1) the father's death; (2) the conception (by artificial insemination); and (3) the child's birth. This process would not be possible without the new technologies mentioned previously. Here, the child is conceived after his/her biological father's death and is then what this chapter calls a "posthumously conceived child."

This introduces a situation for which preexisting laws were unprepared. By literally applying Japanese law as described above, a posthumously conceived child is presumably not a child of his/her mother's late husband because he/she was not conceived during their marriage; the relationship of marriage ended upon the death of the husband. The traditional common law of England and America is the same: when the father dies, the child is not "a child en ventre sa mere" or "a fetus in utero," because he/she does not yet exist at all. Instead, "death parts the two." However, comparing the circumstance of children who were conceived before the father's death with that of those conceived posthumously, there is no logical reason to distinguish the two. They are both biological children of the same father and mother, and their parents remained married until the husband (=the father) died. It seems quite unfair to regard the former as legal children while the latter are not classified as such.9 Should the latter also be deemed legal children? Thus, this particular problem is new and has been triggered by our new technologies.

9 We set aside the problem of inequality amongst children of a married couple and those of unmarried individuals because this issue might prove too controversial to fully discuss in this chapter.

6.2.3 Theoretical Frameworks

To approach this problem, there is a range of possible solutions. One end of the spectrum might suggest ignoring the problem, i.e. maintaining the status quo in which children conceived before the father's death are still his legal children and those conceived thereafter are not. The latter can still be adopted by another man, for example a stepfather who marries the mother and would then take care of the child. Even if the mother remains single, a variety of social security programs will help her and the child. However, the problem of apparent inequality remains, which requires justification.

The other end of the spectrum would grant posthumously conceived children status as the legal children of the late father for all intents and purposes. All biological children of the married couple would then be legal children according to this regime. This seems quite progressive, but would prompt other questions. If the existence of a biological connection between the child and the father matters, why should we ask whether the father and the mother were married? What about same-sex couples or other types of family units? If we allow only legal children to succeed to their father's estate, should we delay the succession proceedings until the mother is too old to become pregnant? It might be prudent to identify a position between the two extremes, but this is no easy task.

6.3 Japanese Case

The Japanese Supreme Court faced this problem in 2006 (hereinafter cited as "JSC [6]"). At that time, and continuing until today, Japan has had no legislative or administrative regulation addressing whether artificial insemination or other reproductive technologies may be used, how they may be used, or the legal effects that result from their use. The solution adopted by the Supreme Court was that posthumously conceived children cannot be the legal children of their biological father and are therefore incapable of inheriting his estate property.

6.3.1 Facts of the Case

According to the court, the facts of the case are as follows10:

(1) H and W were married in 1997.

(2) H had been in continuous treatment for chronic myelocytic leukemia (CML) since he was married, and six months after his wedding, he decided to have a bone-marrow transplant (BMT) operation. H and W had undergone fertility treatment when they got married, but W failed to become pregnant. Thus, they cryopreserved H's semen in June of 1998, as H was expected to become azoospermic through the BMT operation, which requires exposure to strong radiation.

(3) In the summer of 1998, before undergoing his BMT operation, H told W that he wanted her to have his child if he died, as long as she remained unmarried following H's death. H also told his own parents, after the BMT operation, that he wanted W to have his child as his heir when he died. Later, H conveyed the same message to his brother and his aunt.

10 Translation with minor changes for convenience by the author. Such minor changes include the names of parties. In Japanese reported cases, parties' names are in principle anonymized. Here, the author uses H for the husband and W for the wife, as is common in literature written in English. Similar changes are made in the other translations from Japanese sources in this chapter, unless noted otherwise.

(4) H's BMT operation was successful and he returned to work in May of 1999. H and W decided to restart their fertility treatment, and at the end of August of 1999 a hospital agreed to use H's cryopreserved semen for the artificial insemination procedure. However, before the procedure could be performed, H died that September.

(5) After H died, W decided to undergo in vitro fertilization upon the advice and consent of H's parents. W underwent the procedure in 2000 and became pregnant; in May of 2001, she gave birth to a child, who is the appellant in this case.

The child brought this lawsuit against the government to establish a legal father-child relationship through the judicial acknowledgement of paternity. Judicial acknowledgement of paternity, which is provided under Section 787 of the Japanese Civil Code,11 is a court process through which a man (or the government if the man has died) is forced to recognize the legal paternity of the claimant child. If a man voluntarily recognizes or is judicially forced to recognize such legal paternity, he becomes responsible for carrying out his legal duties as the child's father, and the child has legal rights accordingly, such as the right to be financially supported by him and the right to inherit his property. Here, H had already died, so the child formally sued the government.

11 "A child, his/her lineal descendant, or the legal representative of either, may bring an action for affiliation; provided that this shall not apply if three years have passed since the day of the death of the parent." Translation from Japanese Law Translation administered by the Ministry of Justice of Japan: http://www.japaneselawtranslation.go.jp/law/detail/?id=2252&vm=04&re=02.

6.3.2 Passive Approach

The court denied the child's request and stated the following:

[I]t is clear that the [Civil Code's] rules do not assume that posthumously conceived children can obtain the legal status of the biological father's child. This is because it is not possible for the biological father to serve as his/her guardian ("parent's right/custody") nor is there any possibility that he will be able to raise and care for the child by providing financial or emotional support ("child support"); further, it is also impossible for the child to become the father's heir ("succession"). … Thus, posthumously conceived children can never have a basic legal relationship, as provided by the Code.

Interpreting and construing the Civil Code as current law, the Japanese Supreme Court found no grounds for a legal parent-child relationship between posthumously conceived children and their fathers. This could be deemed a "passive approach" because the court simply stated and applied the existing legal rules and did not address the new problem; it took no steps to evaluate what the law should be in order to properly deal with the new situation. Here, this chapter does not necessarily criticize the court's approach as a bad decision. Rather, the court deferred to the legislature to address the new questions and confined itself to the application of existing law, avoiding the establishment of new law. This is one reasonable approach for addressing this kind of situation.

6.3.3 Evaluation

Under the Japanese law of intestate succession, without a will/testament no one can inherit the estate unless he/she is a legal heir. Because posthumously conceived children cannot be legal children of the deceased father, it is not possible for them to succeed to the father's estate. In the particular case of JSC [6], W, as the surviving spouse, inherited her husband H's estate together with H's parents, not with her child, who was conceived with H's gametes. The child cannot inherit the property of his paternal grandparents, who inherited H's estate, because the child is not recognized as a legal child of H and is therefore not a legal grandchild of H's parents.12 Since Japanese lawmakers have never enacted a new law for this situation, this is still the status quo of Japanese law.

Is this an acceptable law? It is a hard rule, so one point in its favor is its clarity. If a man wants to have a posthumously conceived child, he must write a will, create a trust to leave assets for the child, or make an equivalent arrangement before he dies; otherwise, no property or assets are allocated to the child. This is the law's negative side. In Japanese society, the vast majority of the population does not make a will,13 create a trust, or make any other arrangements, so if we emphasize this statistical fact, this is an unacceptable law from the perspective of posthumously conceived children's welfare, which may depend on their inheritance of their father's estate. To make matters worse, such children have no legal father for any purpose. They are officially "fatherless" children in Japanese society.

12 Under Japanese law, as in common law jurisdictions, a grandchild is allowed to inherit his/her grandparent's property when his/her parent, who is a child of the grandparent, dies before the grandparent. This is called succession "by representation."

13 Again, only 1.32% of the Japanese population made wills in 2016.

6.4 American Case

The United States took a different direction from Japan. Let us review one of the leading cases in the United States. Although there are 50 different state laws in the US, for the sake of discussion let us take a famous example, Woodward [15], from the Commonwealth of Massachusetts and address it as one representation of American law. The policy stated in the case clearly contrasts with JSC [6]'s conclusion at first glance.

6.4.1 Facts of the Case

The facts of the case are quite similar to those of JSC [6]. The husband was diagnosed with leukemia, and he and his wife decided to have his semen withdrawn and preserved before he underwent treatment for his disease. Unfortunately, he died; thereafter, the widow became pregnant through artificial insemination with the sperm that had been frozen. The resulting children (twins, in this case) were thus posthumously conceived children, and the legal issue that emerged was whether such posthumously conceived children could inherit their father's estate. Quoting the judgment, the facts are described as follows (Woodward [15], 538–540)14:

In January, 1993, about three and one-half years after they were married, Lauren Woodward and Warren Woodward were informed that the husband had leukemia. At the time, the couple was childless. Advised that the husband's leukemia treatment might leave him sterile, the Woodwards arranged for a quantity of the husband's semen to be medically withdrawn and preserved, in a process commonly known as "sperm banking." The husband then underwent a bone marrow transplant (BMT). The treatment was not successful. The husband died in October, 1993, and the wife was appointed administratrix15 of his estate.

In October, 1995, the wife gave birth to twin girls. The children were conceived through artificial insemination using the husband's preserved semen. In January, 1996, the wife applied for two forms of Social Security survivor benefits: "child's" benefits and "mother's" benefits under the statutes. The Social Security Administration (SSA) rejected the wife's claims on the ground that she had not established that the twins were the husband's "children" within the meaning of the Act. In February, 1996, as she pursued a series of appeals from the SSA decision, the wife filed a "complaint for correction of birth record" in the Probate and Family Court against the clerk of the city of Beverly, seeking to add her deceased husband as the "father" on the twins' birth certificates. In October, 1996, a judge in the Probate and Family Court entered a judgment of paternity and an order to amend both birth certificates declaring the deceased husband to be the children's father. In his judgment of paternity, the Probate Court judge did not make findings of fact, other than to state that he "accepts the stipulations of voluntary acknowledgment of parentage of the children… executed by the wife as mother, and the wife, administratrix of the estate of the husband, for father."

The wife presented the judgment of paternity and the amended birth certificates to the SSA, but the agency remained unpersuaded. A United States administrative law judge, hearing the wife's claims de novo, concluded, among other things, that the children did not qualify for benefits because they "are not entitled to inherit from the husband under the Massachusetts intestacy and paternity laws." The appeals council of the SSA affirmed the administrative law judge's decision, which thus became the commissioner's final decision for purposes of judicial review.16 The wife appealed to the United States District Court for the District of Massachusetts, seeking a declaratory judgment to reverse the commissioner's ruling. The United States District Court judge certified17 the [question quoted below] to this court because "the parties agree that a determination of these children's rights under the law of Massachusetts is dispositive of the case and… no directly applicable Massachusetts precedent18 exists."

As such, the case was presented to the Supreme Judicial Court of Massachusetts. The question posed to the court was as follows: "If a married man and woman arrange for semen to be withdrawn from the husband for the purpose of artificially impregnating the wife, and the woman is impregnated with that semen after the man, her husband, has died, will children resulting from such pregnancy enjoy the inheritance rights of natural children under Massachusetts' law of intestate succession?" (Woodward [15], 536)

14 Some expressions are simplified or styled by the author in form but not in substance. Similar changes are found in other direct quotations in this chapter.

15 "Administratrix" is the feminine form of the masculine "administrator," both being the person(s) appointed by the court who is/are responsible for the estate of the deceased in case of intestacy. In most cases, surviving spouses are appointed. An administratrix (or administrator) has legal duties, inter alia, to collect the estate property, to pay inheritance tax, and to distribute the estate property to the heirs.

16 Under US administrative law, claimants must make their claim with the relevant agency before bringing it to the judiciary (courts). Thus, the wife tried her case before the SSA, its administrative law judge, and its appeals council first, and then went to court.

17 "Certification" is a judicial process provided in some jurisdictions in the United States; in this process, when a court of jurisdiction A has to apply the law of jurisdiction B but that law is not clear from any statutes or case law, the court in A can ask the highest judicial court in B to clarify the law of B in terms of its legal issue(s). Here in Woodward [15], a federal court had to apply Massachusetts law, which was unclear, and it thus posed its question to Massachusetts's highest court.

18 "Precedent" in the legal context means a case that led to binding case law.

6.4.2 Judicial Regulation Approach

This American case set out certain requirements under which posthumously conceived children may inherit their biological fathers' estates. Because it allows inheritance subject to certain legal requirements, let us call this American rule the "judicial regulation approach."

The Massachusetts court first admitted that there was no precedent in the courts of any state regarding this question, i.e. that this was a case of first impression in the United States. With appropriate caution, the court also concluded that while the facts of the case were narrowly established, the issue posed was extremely general and far-reaching. The court then began with the statutory scheme of intestacy. The court extracted three policy concerns from the Massachusetts intestacy law (Woodward [15], 545):

The question whether posthumously conceived genetic children may enjoy inheritance rights under the intestacy statute implicates three powerful State interests: the best interests of children, the State's interest in the orderly administration of estates, and the reproductive rights of the genetic parent. Our task is to balance and harmonize these interests to effect the Legislature's over-all purposes.

Regarding the first point ("best interests of children"), the court says: "Repeatedly, forcefully, and unequivocally, the Legislature has expressed its will that all children be entitled to the same rights and protections of the law regardless of the accidents of their birth." (Woodward [15], 546) Therefore, based on the first point, posthumously conceived children should be eligible to inherit the deceased father's estate property just as antehumously conceived children do.

However, the second point ("State's interest in the orderly administration of estates") guides us in the opposite direction. Since "[a]ny inheritance rights of posthumously conceived children will reduce the intestate share available to children born prior to the decedent's death," it is important "to provide certainty to heirs and creditors by effecting the orderly, prompt, and accurate administration of intestate estates." (Woodward [15], 546) The intestacy statute furthers this purpose "in two principal ways … (1) by requiring certainty of filiation between the decedent and his issue, and (2) by establishing limitations periods for the commencement of claims against the intestate estate. In answering the certified question, we must consider each of these requirements of the intestacy statute in turn." (Ibid.) For requirement (1), the statute affirms three methods: the father's acknowledgement of paternity, his marriage to the mother, or the judicial determination of paternity. For requirement (2), the court did not specify a particular period, but it stated that the paternity action in this case had not been filed "within the one-year period for commencing paternity claims mandated by the intestacy statute." (Woodward [15], 550)

Considering the third and last point ("reproductive rights of the genetic parent"), the court confirmed both the mother's rights and those of the father. In this case, the former was easy to settle because it was clear that the mother herself wanted to have the twins. "The husband's reproductive rights are a more complicated matter." (Woodward [15], 551) Because the husband had already died, his true intention regarding conception using his semen was not easy to ascertain. On this point, the court stated the following (Woodward [15], 552):

[A] decedent's silence, or his equivocal indications of a desire to parent posthumously, ought not to be construed as consent. The prospective donor parent must clearly and unequivocally consent not only to posthumous reproduction but also to the support of any resulting child. After the donor-parent's death, the burden rests with the surviving parent, or the posthumously conceived child's other legal representative, to prove the deceased genetic parent's affirmative consent to both requirements for posthumous parentage: posthumous reproduction and the support of any resulting child.

In summary, the Massachusetts court interpreted the rules as follows: in order for a posthumously conceived child to be considered an heir of his/her biological father under intestacy law, (1) his/her biological father must leave clear and unequivocal evidence of his intention towards both posthumous reproduction and the support of any resulting child; and (2) the mother or other legal representatives of the child must file the paternity action in a timely manner, e.g. within the period provided in the intestacy statute.

6.4.3 Evaluation

An evaluation of this "judicial regulation approach" is more complicated than that of the more passive Japanese approach. We must evaluate what is required and whether it is an acceptable law. Under Woodward [15], posthumously conceived children can be heirs of their deceased biological father if certain conditions are met. The conditions prescribed in the case are at least reasonable, if imperfect. The core of the rule is the balancing of various interests. Courts must consider the following: (1) the children's interest, (2) the interest of other heirs and society, and (3) the parents' interest, especially that of the deceased father. This seems significantly superior to the passive Japanese approach in terms of its benefit to posthumously conceived children.

The policy established by Woodward [15], which states that "[t]he protection of minor children, most especially those who may be stigmatized by their 'illegitimate' status19 … has been a hallmark of legislative action and of the jurisprudence of this court" (Woodward [15], 545–546), has also found expression in a recent Japanese case, JSC [7]. The Japanese Supreme Court held an intestacy provision of the Civil Code unconstitutional; under the provision, illegitimate children could take only half of what legitimate children took in intestate succession proceedings. The court clarified that discrimination based on a child's legitimacy violates the equal protection clause of the Japanese Constitution. Although this case addressed a different question than discrimination based on the timing of conception, a general policy can be inferred "that all children [should] be entitled to the same rights and protections of the law regardless of the accidents of their birth."

All approaches have benefits and drawbacks. An apparent drawback of the judicial regulation approach is cost. Probate courts must determine whether the posthumously conceived child at issue meets the aforementioned prescribed conditions. His/her mother or legal representative, and/or the hospital that provided fertilization treatment, must have created and maintained relevant documents as evidence. Other heirs must wait for a longer period until succession proceedings can begin and the estate property can be distributed accordingly.

6.5 Japanese Approach: Revisited

Let us re-examine the Japanese court's rule. It is a hard rule: there is no possibility for posthumously conceived children to establish a legal paternity relationship with their late biological fathers, which is a precondition of the child's right of succession. No flexibility was given in the decision.

19 Children born in wedlock are called "legitimate" children and children born out of wedlock are deemed "illegitimate." Historically, laws in many jurisdictions discriminated against illegitimate children in various respects.

6.5.1 Separation of Powers

When read more closely, however, the Japanese courts have considered most of the points discussed in Woodward [15]. In the same case, the intermediate appellate court, THC [14], said:

Requirements for an acknowledgement of paternity are met in the case of natural conception because there is a biological parent-child relationship between the child and the man alleged to be his/her father. However, in the case of conception through artificial insemination, it must be considered whether the alleged father consented to the conception. The institution of acknowledging paternity was introduced to establish the legal father-child relationship between a child and his/her biological father; thus, it is reasonable to construe [the Civil Code as not requiring] that the father be alive at the time of the child's conception. Conception through natural sexual intercourse presumptively occurs as the result of the biological father's intention. However, conception via artificial insemination, in which semen from a cryobank is used, can occur regardless of the biological father's intention. Thus, if any natural, biological connection between the man and the child were by itself eligible for the acknowledgement of paternity, a man who has preserved his own semen in a cryobank could not ultimately determine for himself whether that sperm is used to produce his legal child. This is especially true for a child who is conceived after the biological father has died, in which case the father has no control over how his semen is used. In this case [i.e. when the father does not make the independent determination to use his semen for an artificial insemination procedure following his death], the man who is acknowledged as the father unexpectedly has several legal duties imposed on him.20 This result is unreasonable. Therefore, in a case of conception and subsequent birth resulting from artificial insemination, it is necessary for the biological father to have consented to the use of his sperm for conception in order for an acknowledgement of paternity to be granted.

As quoted above, the Japanese courts considered at least the third point in Woodward [15]. This passage is cited (and not expressly rebutted) in the Supreme Court decision. The Supreme Court decision also admits that there is no reason to completely exclude posthumously conceived children from access to an acknowledgement of paternity. However, in its conclusion, the Japanese Supreme Court ultimately denied this possibility.21

Setting aside detailed arguments, the mechanism that led the Japanese Supreme Court to this hard rule is its consideration of high-level policy. This is clear in the following passage:

The way the legal relationship between a posthumously conceived child and his/her deceased biological father is arranged inherently relates to the bioethics of artificial reproduction using the cryopreserved semen of the deceased, the welfare of children born through the use of those technologies, the ideas of relevant parties regarding the legal parent-child relationship or the prospect of it, and moreover, the social consensus and public opinion regarding these questions. In short, it is a problem that must be addressed by legislation. The absence of such legislation [causes the court to conclude, as is reflected in the current statutes on the issue] that a paternity relationship between posthumously conceived children and their biological father is impossible.

The policy driving the court's decision is now clear. It does not object to the new technology, but simply says "it's not our business." Since judges do not have the strong democratic legitimacy that representatives in the legislative assembly do, it is understandable for the court to defer to the legislature. The judiciary is better designed to settle a dispute between the parties in a case than to engineer a general law, although its decisions often have legal influence on prospective parties in similar situations.22 Conversely, a democratically elected legislative body is in a better position to create general rules, especially when a problem is very controversial and widely discussed in society.

20 E.g. the duty to financially support the child (maintenance), the duty to take care of him/her, etc.

21 According to the court, under the current law, posthumously conceived children have no possibility of receiving the support of the father, nor will they be taken care of by him or succeed to his estate (because inheritance automatically occurs upon death and heirs immediately acquire the estate property, at which time the posthumously conceived child does not exist); therefore, there is no legal interest in the establishment of paternity. While the Supreme Court denied any legal interests for posthumously conceived children, the Takamatsu High Court had allowed a possibility of succession by representation, through which they could inherit their paternal grandparents' estates.

22 Concurring opinions in JSC [6], especially Justice Takii's opinion, can be read as intentionally discouraging the use of the cryopreserved semen of dead men despite its practical possibility (at least until some legislative solution is provided).

6.5.2 Comparison

The contrast between the American approach and the Japanese one should now be sharper. The former is a policy asserting that, in the absence of any statute, judges should provide a solution based on the information they possess at that point in time. The latter asserts that, in the absence of any statute, judges should do nothing until the legislature introduces a new statute. One feature common to both is that when the legislature enacts a new statute, it prevails even where an antecedent judge-made law existed. The interval between the time when a case comes to court and the point at which a legislative solution is devised is therefore the primary difference.

Under the political culture in Japan, the American judicial regulation approach might work better than a passive approach. The Japanese approach works well only if the legislature can make a decision within a reasonable period. If Japanese lawmakers had decided to let the law remain the same soon after the case discussed above, that would have represented one possible approach to policymaking. Although it somewhat discourages the use of the new technology introduced by cryobanks and artificial insemination, the rule is very clear and its result is predictable. However, in fact, the Japanese legislature has seemingly neglected this problem.23 Posthumously conceived children still have no means of becoming the heir of their biological father, not because the status quo has been deemed better, but because the issue has not been considered to the extent necessary to justify revising the existing law. Further, they have no way to be acknowledged as their biological fathers' legal children for any legal purpose. They are thus legally fatherless children.

Political solutions can work well for problems that draw attention from many individuals and easily become a national political issue. However, the problem discussed in this chapter draws scant attention because the population of interested parties, i.e. posthumously conceived children, is small. The judiciary is better suited to managing these minority-oriented problems. True, judges have weaker democratic legitimacy. However, if the rule produced by the court is unacceptable, the legislature can and should correct it promptly.

23 It is true that some working groups managed by the government have discussed and reported on the legal issues prompted by reproductive medicine, and that the concurring opinions in JSC [6] refer to such reports. However, no discussion has taken place in the legislative branch as to whether posthumously conceived children should be able to inherit from their biological fathers.

6.6 English Case

With respect to the separation of powers between the legislature and the judiciary, the UK's experience reveals a different perspective and provides another position somewhere between the previous two stances. The UK's Parliament made a new law regulating reproductive technologies, and the legislature thereby clarified the legal effects that could result from their use. This new law was enacted in 1990, much earlier than the cases that were brought to the judiciary in Japan and the US. However, the legislative framework could not provide a perfect solution, and some cases were brought to court. Following the judicial disputes and a political campaign, Parliament amended its rule and allowed posthumously conceived children to claim their biological father as their legal father for a certain purpose. Although the purpose is very limited and is not related to succession/inheritance, it is still a (small) remedy for their former position, which was one of rejection on legal grounds.

6.6.1 The Human Fertilisation and Embryology Act 1990

Parliament enacted the Human Fertilisation and Embryology Act (HFEA) in 1990 to regulate assisted conceptions performed by clinics; the Act was brought into force on August 1, 1991.24 Under the HFEA 1990 and a Code of Practice, clinics must satisfy certain conditions to provide fertility treatment, and the Act regulates when and how the parents, and the donors of gametes when applicable, become the legal parents of the resulting children. Section 28(6)(b) of the Act clearly provides the following: "Where [] the sperm of a man, or any embryo the creation of which was brought about with his sperm, was used after his death, he is not to be treated as the father of the child." The Act therefore denies any chance for posthumously conceived children to become the legal children of their biological father even if he was the husband of their mother. They must be fatherless children under this legislative scheme.

This is a hard rule that was enacted by the national legislature and has the advantage of certainty. If a man wants to produce his biological child through posthumous conception using his frozen sperm, he must make arrangements to support the resulting child, such as writing a will or creating a trust for him/her. The type of evidence he must leave for his wife (or partner) to use his sperm for such conception is clear. Because this rule is a solution resulting from legislative action, it can also be assumed to be a democratically supported political and societal decision on the matter.

24 The Human Fertilisation and Embryology Act 2008 now applies to the situation this chapter discusses. The 2008 Act is discussed below.

6.6.2 Opening a Way

However, some widows and mothers of posthumously conceived children were not satisfied with that legislative solution. They were not necessarily concerned with succession rights, but were instead focused on the availability of the new technologies and the potentially negative psychological effects owing to the legal fatherlessness of their children. As in the Japanese case, they fought the law in court and lost. However, unlike in the Japanese case, the mothers successfully persuaded lawmakers to amend the statute, which resulted in the realization of at least one of their wishes. In short, the current UK law allows posthumously conceived children to have their biological fathers' names on their birth certificates, but the children are not recognized as the legal offspring of their biological father for any other purpose, including succession to their father's estate.

The case in which the widow lost is known as Diane Blood's case (hereinafter cited as Blood [1]). In this section, let us review not only the case itself but also the political challenges the parties faced.

6.6.3 Facts of the Case

A serious argument again developed in the courtroom. Although the lawsuit did not pertain to posthumously conceived children's right of succession, and the facts relevant to this lawsuit are very different from those characterizing the cases in Japan and the US, it is worth examining them first. Here, the court is quoted as follows (Blood [1], 155):

The applicant [=Diane Blood] is a widow now aged 30 years. She was married to her husband Stephen in 1991. They had been courting for nine years before that. They lived a happy married life and greatly wished to have a family. They had married according to the rites of the Anglican Church using the traditional service contained in the 1662 Book of Common Prayer. The applicant had her own business. She had set up her own company in advertising and public relations dealing particularly with matters concerning nursery products. They lived a normal sex life. Towards the end of 1994 they began actively trying to start a family. On 26 February 1995 tragedy unexpectedly struck. Stephen her husband complained of feeling unwell and was admitted to a local hospital with suspected meningitis. The following day he was moved to another hospital where his condition rapidly deteriorated. On 28 February 1995 the applicant raised with the doctors the question of taking a sperm sample from her husband. A sample was taken on 1 March 1995 and was entrusted to the Infertility Research Trust ("I.R.T."). Stephen was unconscious in a coma. A second sample was taken on 2 March and on that same day Stephen was certified clinically dead. Throughout these days he had been unconscious in a coma. Both the samples of sperm are kept by the I.R.T. pending the resolution of the legal issues. The samples are kept for the benefit of the applicant, who has in fact paid the necessary charges. Professor Cooke of the I.R.T. has written to the applicant confirming the viability of the sperm. The applicant wishes to be artificially inseminated with her late husband's sperm in order to produce a child.

Apart from the legal problems, this set of facts seems similar to those of the other cases above. The widow eventually became pregnant by artificial insemination using her late husband's sperm, and their child's legal rights were at stake some years later. However, at this point, the main legal issue was whether the widow could use the sperm for artificial insemination under the statute at that time. Section 4 and Schedule 3 of the HFEA 1990 prohibited the storage of sperm without the informed written consent of the donor, who was the late husband in this case.25 The relevant government authority, the Human Fertilisation and Embryology Authority, refused to permit her use of the sperm according to this law. She appealed this governmental decision to the court.

25 Strictly speaking, the law and the case include other legal issues and requirements. However, for the convenience of discussion, this chapter will maintain its focus on the current issue.

6.6.4 Passive Approach?

The court concluded that the widow, Mrs. Diane Blood, could not use her late husband's sperm due to her lack of compliance with the law, the HFEA 1990. On this point, the court decision can be categorized as a passive approach, in which the court simply applied the existing law but did not formulate a new solution. Because the statute did not permit such use of her late husband's sperm, she was prevented from doing so under any circumstances.

However, the Court of Appeal addressed another issue. Because the EC Treaty was binding in the UK and applied to its citizens, the judges also concluded that she had a right to receive medical treatment in another member state; under the facts of the case, she could export the sperm for her use, which was otherwise prohibited by the same UK statute, the HFEA 1990. This was a point of law which had not necessarily been settled at that time, and this conclusion was not the only possible construction of the Treaty; indeed, it was arguably a progressive interpretation among a number of possible ones.

6.6.5 Fatherless Child and Legislative Amendment

Mrs. Blood travelled to Belgium, where she received fertility treatment using her late husband's sperm and thereby conceived her son. She successfully gave birth to him. "But Diane Blood's battle did not stop there because she had to register her child in Britain as illegitimate. When it came to signing the birth certificate, she was told that by law she had to leave the father's name blank or write 'father unknown.'" (Norton [10])

This circumstance was not unique. There were many mothers in the same situation who wanted to change the law to allow their children to have fathers listed on their birth certificates. It had become more common for men to store their gametes before cancer treatment or other circumstances that could possibly result in sterility. In the case of their death following such treatment, it was natural for their widows to wish to have their children through artificial insemination using their late husbands' sperm. Thus, the number of children who were posthumously conceived and born following their father's death had been increasing. In recognition of this societal fact, the Department of Health took the initiative to amend the HFEA 1990. Eventually, Section 29(3B) was added by the Human Fertilisation and Embryology (Deceased Fathers) Act 2003, which provided the following:

(3B) Where subsection (5A), (5B), (5C) or (5D) of Section 28 of this Act26 applies, the deceased man—
(a) is to be treated in law as the father of the child for the purpose27 referred to in that subsection, but
(b) is to be treated in law as not being the father of the child for any other purpose.

Under this provision, posthumously conceived children can now have their biological fathers listed on their birth certificates, but may not be considered their legal offspring for any other purpose. No duties to support or maintain the children are imposed on the fathers, and the children cannot inherit their fathers' estates. However, they are no longer fatherless for every purpose. Listing one's father on the birth certificate is merely a symbolic gesture, but this symbol might have a positive psychological influence, not only on the child and the mother, but also on their friends, teachers, and other people in their community.

The Human Fertilisation and Embryology Act 2008 was later enacted; it amended and updated the 1990 Act and is still in force today. The basic framework regulating artificial insemination remains under Sections 35–38. Posthumously conceived children are still allowed to have their biological fathers listed as their legal fathers on their birth certificates under Section 39. Thus, UK law generally denies posthumously conceived children rights as the legal offspring of their biological father, with the exception of paternal acknowledgement on a birth certificate.

26 These subsections regulate posthumous conceptions.

27 The purpose of birth certificates.

6.6.6 Legislative Regulation Approach and Purpose-Sensitive Approach

Compared to Japan and the US, the UK is unique in that rules were passed in the legislature very quickly. The rule is very severe for posthumously conceived children, but it is still reasonable in its intention to avoid inconvenience and uncertainty regarding substantial legal rights, such as the right to be supported or maintained. Even the potential that such a child could be granted the right to inherit his/her biological father's estate introduces uncertainty into the succession process and the rights of other heirs. Perhaps this rule is harsh for children who are conceived posthumously, but there is still an acceptable reason to deny them legal rights as children of their biological father.

However, the UK law also demonstrated flexibility. The law created a means for such children to be legally acknowledged on their birth certificates. Because this acknowledgement serves only that purpose, it causes no uncertainty for others. The psychological merits of this approach cannot be evaluated in this chapter, but it is not difficult to imagine that the children and their mothers feel happier with their loved one's name on the official document. This could be labeled a purpose-sensitive approach, in which the law endeavors to strike a balance between total denial and total acceptance of posthumously conceived children.

6.6.7 Analysis

The legislative regulation approach taken in the UK is one solution to the concern related to the separation of powers discussed in Sect. 6.5.1 of this chapter. Unlike in Japan, the UK Parliament responded quickly to new reproductive medicine and the potential legal problems that might be caused by such new technologies. Therefore, the courts did not need to consider what the law should be in relation to the new problem; they simply needed to apply it. At the same time, the UK courts did not automatically apply the domestic law but instead construed European law to address the difficult case presented by Mrs. Diane Blood, in which the strict regulation denied her any possibility of using her late husband's sperm following his sudden, unexpected death.

The legislature was also flexible when it faced the new need of posthumously conceived children to have their fathers' names on their birth certificates. Although the legislative response did not come quickly enough for the parties at issue,28 it still constituted a more agile response than that of the Japanese legislature, which never enacted any regulations or remedies for the presented issue. According to the newspaper, the UK government began reacting when the children at issue numbered mere dozens. (Norton [10])

28 The amended UK law has retrospective effects. Section 3 of the Human Fertilisation and Embryology (Deceased Fathers) Act 2003 provides: "This Act shall (in addition to any case where the sperm or embryo is used on or after the coming into force of Section 1) apply to any case where the sperm of a man, or any embryo the creation of which was brought about with the sperm of a man, was used on or after 1st August 1991 and before the coming into force of that section." Thus, posthumously conceived children who had already been born before the new law was enacted could also have their fathers' names entered on their birth certificates.

6.7 Conclusion, or Problem Restated

This chapter began by discussing the problem of posthumously conceived children and their succession rights. Under the existing law, such children are not legally classified as offspring of their deceased biological fathers, so they have no succession rights whatsoever. However, they are still biological children of their fathers, and in many cases, no one would contest the recognition of such rights. This is the problem that initiated this discussion.

Japan and the US both have leading cases that project a clear contrast in their respective conclusions. While the Japanese case excludes any possibility of such succession rights, the American case allows for some rights that are constrained by certain conditions. However, the contrast between the two cases stems from indirect policy considerations rather than a direct difference in values. The American court actively engaged with the question and made a reasonable decision on the problem directly. Conversely, the Japanese court did not solve the problem directly, and instead avoided doing so by sending the problem to the legislature. This move can still be justified by the standard policy regarding the separation of powers. It is important to consider which powers are held by the lawmaking division of the government and which by the entity charged with applying the law. However, looking back at the original question, the Japanese court's approach resulted in the question being neglected because there was no reaction from the political branches. Given this reality, this chapter concludes that the Japanese courts should have provided a solution to the problem, whatever the conclusion itself might have been. Even with the separation of powers taken into consideration, legislative measures are never perfect, and some problems and difficult cases remain. When such difficult cases are brought to court, judges must make a decision, whatever it is.

In this context, the British experience has many implications, as its approach illustrates a range of reasonable solutions. The UK first denied any possibility for posthumously conceived children to become the legal children of their biological fathers, with acceptable policy considerations of legal certainty and predictability. Its experience also shows that the legislature can choose to deny legal paternity for some purposes and allow it for others. The approach in which posthumously conceived children are allowed to claim their biological fathers only on official/administrative documents,29 like that adopted in the UK, might be a good option for Japanese lawmakers.

29 In Japan, there are no documents or registers like the birth certificate in the US or the UK, but there is a family register, which records and shows one's familial relationships. On these documents, the data are arranged differently, so there might be some technical difficulties in imitating the approach adopted in the UK.

References

1. Blood. (1997). R v Human Fertilisation and Embryology Authority, Ex parte Blood [1999] Fam 151, [1997] 2 All ER 687 (CA).
2. Bridge, S. (1999). Assisted reproduction and the legal definition of parentage. In A. Bainham, S. D. Sclater & M. Richards (Eds.), What is a Parent? A Socio-Legal Analysis. Hart Publishing.
3. Doe. (1792). Doe v Lancashire (1792) 101 ER 28.
4. Gilmore, S., & Glennon, L. (2018). Hayes & Williams' Family Law (6th ed.). OUP.
5. Hall. (1834). Hall v. Hancock, 32 Mass. 255 (1834).
6. JSC. (2006). Japanese Supreme Court decision 4 Sep 2006. Minshu, 60(7), 2563.
7. JSC. (2013). Japanese Supreme Court decision 4 Sep 2013. Minshu, 67(6), 1320.
8. Kubota, A. (2019). Kazoku-ho [Family Law] (4th ed.). Yuhikaku.
9. Laurie, G. T., Harmon, S. H. E., & Dove, E. S. (2019). Mason and McCall Smith's Law and Medical Ethics (11th ed.). OUP.
10. Norton, C. (2000). 'At last, Diane Blood can put the name of her husband on son's birth certificate.' Independent (London, 25 August 2000). Retrieved May 7, 2019, from https://www.independent.co.uk/news/uk/crime/at-last-diane-blood-can-put-the-name-of-her-husband-on-sons-birth-certificate-710510.html.
11. Shapiro, J. (2005). Changing ways, new technologies and the devaluation of the genetic connection to children. In M. Maclean (Ed.), Family Law and Family Values. Hart Publishing.
12. Sitkoff, R., & Dukeminier, J. (2017). Wills, Trusts, and Estates (10th ed.). Aspen Publishers.
13. Simana, S. (2018). Creating life after death: Should posthumous reproduction be legally permissible without the deceased's prior consent? Journal of Law and the Biosciences, 5(2), 329.
14. THC. (2004). Takamatsu High Court decision 16 July 2004. Minshu, 60(7), 2604.
15. Woodward. (2002). Woodward v. Commissioner of Social Security, 435 Mass. 536.

Chapter 7

Aristotle and Bioethics

Naoto Chatani

Abstract In bioethics, Aristotle's ethics is often reevaluated as representative of so-called 'virtue ethics', an alternative position to current modern ethical theories (esp. utilitarianism and deontology). However, most of these approaches are free reconstructions of his theory or ideas. This paper considers what we can say from Aristotle's texts themselves. Through this consideration I present the following points: (i) the central notion of Aristotle's ethics is eudaimonia (flourishing life) rather than virtue (aretê), and his ethics should therefore be characterized above all as eudemonics rather than virtue ethics; (ii) his notion of eudaimonia can justify some cases of euthanasia, but does not demonstrate a doctor's obligation to accept a patient's request for euthanasia; (iii) Aristotle's view of the moderate unity of ethics gives a hint about the problem of the unity of bioethics.

Keywords Aristotle · Bioethics · Euthanasia · Virtue ethics

7.1 Introduction: Aristotelian and Aristotle

In bioethics, the name of Aristotle has often been invoked by scholars. Although these references take various forms, they basically have in common that Aristotle's view is introduced in a favorable sense. Such a welcoming attitude is in marked contrast to the tendency for this ancient philosopher to be regarded negatively as an incarnation of the 'premodern' in the context of the commonplace history of science. (For example, he is often regarded as a geocentric theorist who could not escape from anthropocentrism, as an animistic teleologist who thinks a stone falls to the ground because it wants to fall, and so on.) In the context of bioethics, his ethical view is, for example, appraised as representative of so-called 'virtue ethics', an alternative position to current modern ethical theories (esp. utilitarianism and deontology). Moreover, we can understand his ethics as sharing the idea of modern clinical ethics, which considers specific judgement in real clinical situations more important than general armchair theory.

Furthermore, he can be regarded as an ethicist akin to supporters of care ethics, which understands a patient's pain and agony on the level of emotion rather than through cool-headed evaluative judgement by the intellect alone.

However, Aristotle, insofar as he is referred to in the context of bioethics (or modern ethics in general), basically means Aristotelianism, not Aristotle himself. That is, most of what is introduced as 'Aristotle's position' is the result of free (sometimes arbitrary) application or reconstruction of his notions in a way that suits the referring person's position, not his own view as found in his texts. For example, Philippa Foot, famous as a neo-Aristotelian, presents her own view intermixed with Aristotle's notions concerning abortion, euthanasia, etc., without referring in detail to the sources of her discussion.1 Of course, I have no intention of criticizing this situation at all. It is a reappearance of what Aristotle has suffered at the hands of many traditional philosophers in the long history of European philosophy. Moreover, it is also true that science in general develops through the reiteration of such 'misapprehension' by succeeding thinkers.

However, what about Aristotle himself? Are we not allowed to take another style of consideration too? I think it is also significant to consider the relationship between Aristotle and modern bioethics on the basis of the texts of Aristotle himself only, to the extent that we do not depart from classical scholarship on Aristotle's philosophy. My paper adopts this way and examines what suggestions his own view can give concerning bioethical problems. Thereby I hope to find some suggestions that are different from and newer than what 'Aristotelian' interpretations have offered before.

This paper consists of three parts. First, I examine differences between virtue ethics, generally regarded as Aristotelian theory, and Aristotle's own position (Sect. 7.2). Next, I take up the problem of euthanasia as one of the main subjects of bioethics and consider what notion we can get from Aristotle's texts on this theme (Sect. 7.3). Finally, I consider what we can say about the interdisciplinarity of bioethics by referring to Aristotle's view concerning the basic characteristics of ethics (êthikê) as a discipline (Sect. 7.4).

1 e.g. Foot [1, 2].

7.2 Reconsideration of Aristotelianism: Is (and How Is) Aristotle Himself a 'Virtue Ethicist'?

2-1

Most of the notions supposed to be 'Aristotelian' in bioethics, or in modern ethics in general, bear some relation to so-called virtue ethics. Characterized in the most general way, virtue ethics is the position that, in the moral evaluation of an action, the virtuousness of the agent matters more than the action's utility or its conformity to duty. As a moral theory or doctrine, virtue ethics is often regarded as an alternative (or complement) to the two currently dominant moral theories, utilitarianism and deontology. Virtue ethics posits various excellent traits of character, for example being brave, gentle, moderate, and faithful, which are supposed to belong to the good person, and then appeals to the presence of such traits in the moral evaluation of human lives and actions. Aristotle's ethics is normally taken to be the ancestor and representative of this modern virtue ethics. We can therefore begin with the question 'whether (or in what way) Aristotle is a virtue ethicist', as the basic consideration of the relation and difference between Aristotelianism and his own view in bioethics. This question amounts to asking how far Aristotle himself shares the fundamental characteristics that modern virtue ethics is generally supposed to have. Before that, as a preliminary, I briefly confirm his understanding of the concept of virtue (aretê); after that, I compare modern virtue ethics with Aristotle's.

What is virtue in the first place? Most generally speaking, virtue (aretê) means the feature(s) that make an entity X (qua a certain type) an excellent X (Nicomachean Ethics (NE) I 7, 1098a8ff; II 6, 1106a15ff. In referring to Aristotle's texts, I cite the relevant passages of the Bekker edition, as is conventional.) For example, the virtue of a horse is fleetness of foot; the virtue of a pianist is the ability to play the piano well. In the same way, the virtue of a human being means the feature(s) that make a person (qua human being) an excellent human being. What, then, is the content of human virtue? Virtue, as Aristotle thinks, concerns the specific and essential feature or function (ergon) that an entity X (qua type) exclusively has (NE I 7). What is the function of a human being? Aristotle holds that it is logos (i.e., intellect or reasonability). Hence the virtues of a human being are those excellent features that involve logos (reasonability) in some way. On this understanding, Aristotle classifies human virtues into two types. The first is dianoêtikê aretê (intellectual virtue), the excellence that concerns the exercise of intellectual ability itself; wisdom, prudence (phronêsis), and art (technê) are typical intellectual virtues. The second is êthikê aretê (moral virtue, or virtue of character). Êthikê aretê is not the excellence of the intellectual part of the soul itself, but of its non-intellectual parts (i.e., emotion and desire): it is the virtue that enables us to express and control our emotions and desires with logos (reasonably) and in an appropriate way. Courage, moderation, and holiness are typical examples. In relation to bioethics, we can conceive the moral virtues of a doctor in two ways: (i) the virtues of a doctor qua human being, and (ii) the virtues of a doctor qua doctor, setting aside the question of whether Aristotle would positively admit such a classification.

2-2

I now move to the next step. Modern virtue ethics posits several excellent traits of character, for example being brave, gentle, and faithful, that are supposed to belong to the excellent or flourishing person, and moral evaluation is performed on the basis of the presence of such traits. In bioethics, good doctors (and at times patients) are supposed to possess several virtues peculiar to medicine: for example, moderation, self-restraint, gravity, patience, sympathy, discretion, prudence, compassion, and so on (Kass [3]). As already said, modern virtue ethics has been introduced as a third option against utilitarianism and deontology, on the ground that these two existing theories cannot fully address various moral problems, including modern medical issues, and Aristotle's ethics has accordingly been re-evaluated positively by many thinkers as the ancestor of virtue ethics. Given this situation, we can formulate the basic characteristics of virtue ethics as antitheses to the characteristics of the two current theories. I enumerate three main features that virtue ethics in general seems to share. They are as follows.

(a) Agent (character)-based approach (non-action-centrism). Utilitarianism evaluates the value of an action's result (from the perspective of, e.g., the greatest happiness of the greatest number). Deontology evaluates the value of the action, or of the action under a certain motivation (from the perspective of, e.g., conformity to moral duties). In both theories the object of moral evaluation is the action (or its result), and the question under discussion is whether an action X (e.g., euthanasia) is justified or permitted. By contrast, virtue ethics also attends to what sort of person in character (êthos) performs the action, and evaluates her action and character together. In other words, the object of moral evaluation is the agent's character rather than her action. An action is praised (or reproached) if and only if it is the kind of action that a virtuous (or vicious) person would perform and a virtuous (vicious) person actually performed the action in question.

(b) Non-deductive approach (particularistic approach, non-fundamentalism). Utilitarianism establishes utility, and deontology duty, as a single fundamental principle, and each theory evaluates all actions deductively from that principle, without exception. By contrast, virtue ethics doubts both the possibility of absolute knowledge of the kind found in natural science and the possibility of a deductive system. Rather, it posits a plurality of virtues supposed to belong to excellent people in the community and appeals to these virtues in evaluating action, without any single fundamental principle. Moreover, virtue ethics evaluates concrete actions flexibly by considering how a virtuous person would act in the relevant situation. (There can thus be actions that are not justifiable on fundamentalist grounds but are justified by virtue ethics, and vice versa.) Furthermore, virtue ethics gives a central place to phronêsis in moral evaluation. Phronêsis is the practical reason by which we discern what action is appropriate on concrete practical occasions and choose that action according to the circumstances. It differs from the theoretical reason at work in theoretical sciences such as mathematics.

(c) Communitarianism (anti-individualism). J. S. Mill, who brought utilitarianism to completion, insists that every person has absolute freedom to act without interference by others so long as her/his action does not harm others. That is, the individual's autonomy is the most fundamental principle of human society. Immanuel Kant, the ancestor of deontology, likewise posits autonomy in his moral philosophy: for him, autonomy means that a person imposes an obligatory rule of action on himself out of good will (respect for the relevant rule) and then acts accordingly. Every individual, as an autonomous agent in this sense, is not a mere 'thing' but a person with dignity. This principle, the autonomy of the individual, is supposed to be a fundamental principle in bioethics too. Modern bioethics was historically formed by breaking free of paternalism and introducing the notion of respect for autonomy, and it developed in the United States, which at that time had a cultural atmosphere that positively valued pluralism about values. However, extreme individualism and a purely autonomy-based approach can be questioned. A human being is necessarily a member of some society, and we would study the individual in vain if we neglected this fact. Moreover, actual societies and communities are various and variable, and we should investigate ethical problems with this in mind. This communitarian position is supposed to be similar to Aristotle's attitude, for Aristotle defines the human being as a social animal and posits the notion of virtue, whose content is formed by the members of the community, as the central notion of ethics (e.g., Politica (Pol.) I 1, 1253a9f).

2-3

If we take these points as the basic characteristics of most Aristotelian virtue ethics, what position do we actually find in Aristotle's own texts concerning them? I shall try to set the Aristotelian notions and his own views side by side.

First, what about (a)? Does Aristotle hold a view opposed to the action-based approach? Certainly, Aristotle establishes the notion of virtue as one of the key terms of his ethical inquiry. He defines happiness (eudaimonia) as activity of the soul in conformity with virtue in the first book of the Nicomachean Ethics (I 7, 1098a16-17), and then examines the two types of virtue (intellectual and moral) and the particular virtues in detail in the following books. Does he then emphasize only the agent's possession of virtue and treat action as a secondary factor in moral evaluation? He does not, because virtue and (virtuous) action stand in principle in a circular relationship, so that we cannot determine which of the two is more fundamental. On the one hand, a virtue (e.g., braveness) is a disposition (hexis) formed through the repetition of the corresponding actions (e.g., concrete actions regarded as brave). On the other hand, a concrete action regarded as virtuous (e.g., rescuing someone's life) becomes virtuous (brave) in the true sense only when the agent performs it out of the virtuousness of his soul. (For example, rescuing someone's life is morally excellent only when it is done not in order to receive fame or reward but for the sake of the brave action itself.) In addition, we should note that Aristotle defines happiness as 'activity in conformity with virtue', not as the possession of virtue. His criticism of the Platonic view lies in the background here. Platonism holds that happiness consists in the possession of virtue. Against this, Aristotle repeatedly insists that it is meaningless to have virtue without exhibiting it in concrete actions, and he holds that actual practice, as the manifestation of possessed virtue, is the object of moral evaluation (NE I 5, 1095b30ff). He formed this view through his criticism of Plato's virtue-centrism (I 6, 1096a11-97a14). As Aristotle criticizes it, Plato holds that the possession of the virtues in itself brings about happiness, and ignores the significance of their actualization; we can read Aristotle's view as the antithesis of Plato's position. Thus we may assume that Aristotle is committed to the circular and inseparable relationship of action and character. The dichotomous formula 'existing moral theories evaluate the action, virtue ethics the agent's character' therefore does not fit Aristotle's view.

Next, how should we think about the non-deductive approach (b)? In modern bioethics, several alternative approaches have been introduced in place of deductive ethics, which evaluates all actions on the basis of a single principle such as utility or conformity to duty. So-called principlism holds a neutral position toward any particular ethical theory and weighs a plurality of prima facie principles relatively and flexibly in the moral evaluation of action (Beauchamp and Childress [4]). The particularistic approach, moreover, takes account of the specific situations and contexts of concrete actions in moral evaluation. A typical style of this approach is the case-study approach, also called casuistry, which aims to consider major ethical problems flexibly at the level of the case rather than by applying a general ethical theory or principle. In the case-study approach, generally speaking, a model case is established first. The model case is a factual or fictional example that functions as a reference for evaluating many similar concrete cases; after considering and analyzing the model case, we can approach similar or related concrete cases by analogical thinking with it. We can find a kind of original model of this approach in Aristotle. He places practical reason (phronêsis) at the center of the ethical domain. Phronêsis, unlike theoretical reason (which draws a conclusion by reasoning from general principles), is the intellectual ability to choose appropriate subordinate ends and to carry out the means to them, in concrete situations, toward the ultimate end of life (i.e., eudaimonia, the flourishing life) (NE VI 5). He also insists that we should carry out decision-making and moral evaluation case by case, because actual situations present many exceptions and deviations (I 3). In fact, this is how Aristotle conducts his discussions of moral evaluation in his texts. For example, he applies a case-study approach in his discussion of the distinction between voluntary and involuntary action in NE, where he takes up and considers many borderline cases in which we cannot easily decide whether the agent is responsible. He considers, for example, the following model cases (NE III 1):

– throwing cargo into the sea in order to keep the ship from sinking in a heavy storm
– leaking confidential information without knowing that it is confidential
– injuring another with a weapon because one wrongly believed the safety device was attached
– killing a patient with the medicine the doctor administered as medical treatment.


However, I think there is an important difference between the case-study approach in bioethics and Aristotle's. In bioethics (or clinical ethics), basically speaking, a so-called 'taxonomy' is established first, and the relevant case is analyzed by reference to that taxonomy (Jonsen et al. [5]); moreover, the analysis is performed in order to judge whether the relevant case is right or wrong. Aristotle himself, it seems to me, carries out case study in a different way and with a different aim. He appeals first to intuitive understanding, common ideas, the opinions of reasonable people, or common opinion about whether the relevant action is voluntary; and through such appeals and the consideration of many cases, he comes to fix several conditions for voluntary/involuntary action. In other words, for him moral intuition or common opinion about the relevant case is the starting point of consideration, and hence the foundation of moral judgement, whereas generalizations of the form 'the conditions for voluntary action are such and such…' are rather the goal of moral consideration. Therefore, although his case studies and those of bioethics (especially clinical ethics) look alike, the two approaches have different directions and goals. That is, we could not directly transfer Aristotle's way of doing case study, and its purpose, to the modern context of clinical ethics. Admittedly, the general conditions derived through Aristotle's case studies might in turn be used in moral deliberation about real concrete actions, while the taxonomy of clinical ethics might be established (or even revised in some cases) inductively through the study of particular cases; if so, the two approaches may in fact stand in a close or complementary relation. In addition, it is clear that Aristotle shares a basic attitude with the case-by-case approach, non-deductivism, and principlism in bioethics, in that he emphasizes the need to consider exceptions and peculiar conditions in actual concrete situations and regards ethics (practical knowledge) as non-rigid and flexible, unlike theoretical knowledge. Nevertheless, I think Aristotle is after all a strong fundamentalist, though in a different way from the utilitarian or the deontologist, because he establishes eudaimonia as the single ultimate principle on which the moral evaluation of all human actions is based. For him, eudaimonia is so fundamental a term in ethical thinking that we cannot fully examine or evaluate anything ethical without referring to it. In Book I of NE he establishes this notion as the most fundamental one in ethical inquiry and characterizes ethics as eudaemonics, just as he characterized ontology as the theory of substance (ousia) in the field of metaphysics. The flourishing life, characterized as the ultimate end of life, is the original reference point of every action; accordingly, every human action is ultimately evaluated by asking whether it contributes to the flourishing life or constitutes part of it. It is only when Aristotle defines the content of the flourishing life that he introduces the concept of virtue for the first time (eudaimonia being defined as activity of the soul in conformity with virtue). Aristotle's ethics is thus best characterized as eudaemonics rather than as virtue ethics. We could call him a virtue ethicist only in a secondary sense, and we can characterize him as a kind of fundamentalist in that he is a strong eudaemonist.

I move on to point (c). How should we think about the relation between Aristotle's ethics and communitarianism? As I have already said, Aristotle states clearly the sociality of the human being and points out the necessity of taking it into account. He stresses repeatedly that ethics is part of politikê (the study of the state) and holds that there is an analogical relationship between the individual (as microcosm) and the state (as macrocosm) (NE I 2, 1094a19-b12). His reasoning is as follows (Pol. I 2, 1252a26-53b1). The human being is an animal that essentially organizes a society and can survive only within one. The minimum and most primitive unit of society is the family; next, the village community is formed, a society consisting of several families for the sake of a more stable and continuous life; finally, the state (polis) is organized out of several village communities as the largest and ultimate unit of society, for the sake of realizing a more sustainable and flourishing life for its members. Accordingly, on Aristotle's view, what we can say about the individual (as a member of society) or the family also applies to the state. For example, just as every individual aims at his or her own eudaimonia (flourishing life) as the ultimate end of life, the politician or governor of the state aims (or should aim) at the good of the state (i.e., public welfare) as his or her mission. Furthermore, we can find Aristotle's connection to the communitarian position in the concept of virtue (aretê) itself. Human virtue originally functions as an indicator when we judge someone to be an excellent person as a human being; seen from another angle, which concrete virtues exist, and what content each virtue has, depend on what type of character or action the majority in the relevant community admires. It is therefore true that anyone who places the notion of virtue at the center of his ethical theory, Aristotle included, has some affinity with those who attach great weight to the peculiarity and tradition of each community or cultural area, hold cultural relativism, and take a conservative position. Some people may think, for example, that the problem of euthanasia should be examined separately in countries where individualism is dominant, such as the USA, in the Catholic areas of Europe, and in the Confucian areas of Asia. However, we should not conclude that Aristotle willingly holds such a position merely because the concept of virtue in general implies it. He nowhere declares in his texts that virtue is relative to culture or community; he was probably unaware of the possibility of cultural relativism rather than opposed to it. Furthermore, he conceives the various virtues basically as virtues of the human being and does not speak of particular virtues of each occupation, social class, or gender, such as the virtue of the doctor, the nurse, or the woman. If so, Aristotle's position on medicine would turn out to be that the doctor should exercise the virtues of a human being, not specific virtues of the doctor alone. We cannot, however, ignore the fact that Aristotle notoriously claims that there are substantial differences between the sexes and between citizen and slave (e.g., Pol. I 4-6, NE VIII 7), so there is room for thinking that he admits a virtue of men or of slaves in a prejudiced way. Yet I would point out that in his discussion of friendship he says that the slave is a slave qua slave but a human qua human, and that some friendship between a slave and his master can therefore be established (NE VIII 11, 1161a30-b9). We can therefore take Aristotle's notion of human virtue to be generally applicable regardless of sex or social class. If so, this essentialist attitude toward the human being suggests that he would not commit to the view that the spirit of 'caring' is an important virtue exclusively for nurses.


7.3 Aristotle and Euthanasia

3-1

I move to the next consideration. If Aristotle's ethics can basically be characterized as above in comparison with modern virtue ethics, what can it offer to modern bioethics? Two questions arise in considering this problem. The first is what relation Aristotle bears to the main concrete subjects of modern medicine. The second is what relation Aristotle's view of ethics bears to the fundamental character of bioethics as a discipline. On the first, issue-based line of inquiry, several thinkers have so far discussed euthanasia, abortion, enhancement, animal experimentation, and so on from a neo-Aristotelian perspective (e.g., Foot [1, 2], Hursthouse [6, 7], Sandel [8]). The second point, by contrast, may be called the methodological problem of bioethics as a science, and this approach has seldom been attempted by reference to Aristotle. In this section I take up the problem of euthanasia as part of the first topic. Euthanasia has a particularly close relation to technology in modern society and is also one of the classical problems of bioethics.

The representative thinker who connects the problem of euthanasia to Aristotelianism is Philippa Foot. As she points out, most discussions of euthanasia up to her time had stressed respect for the autonomy of patients who request to die, or had considered the distinction between 'killing' and 'letting die' in principle (e.g., Rachels [9]). In contrast to these conventional styles of consideration, she argues about euthanasia from the perspective of the good (flourishing) life and virtue in her famous paper on the subject (Foot [2]). Her view is as follows. Euthanasia is death for the sake of the good of the dying person, not the mere acceptance of the patient's request; 'good' here means flourishing or happy. She takes minimum autonomy, the support of a community, relief from hunger, hope for the future, and the like to be fundamental elements of the good life. Euthanasia is morally permissible if the patient requests it on the ground that continuing to live in her situation makes her life bad and unhappy; in other words, euthanasia is justifiable when, by means of it, the patient could pass the final period of her life in a good or flourishing way. How, then, should the doctor respond to a patient's request for euthanasia? That depends on how the acceptance (or rejection) of the request is to be evaluated from the viewpoint of virtues such as charity and justice. For example, if a patient who lacks the basic goods to an extreme degree (i.e., an unhappy patient) requests euthanasia in order to realize good final moments, accepting the request will be permissible from the viewpoint of both charity and justice: here charity implies wishing for the patient's good, and justice implies being faithful to her serious request. If, however, a patient who still retains the basic goods to some degree requests euthanasia merely in order to escape pain, the doctor's acceptance will be contrary to the virtue of charity, even though it satisfies that of justice, and hence impermissible; moreover, such a death could not strictly be called 'euthanasia' by definition. On the other hand, if a patient who lacks the basic goods requests the withholding or withdrawal of treatment but not active euthanasia (i.e., dying with the doctor's assistance), performing active euthanasia will be contrary to justice even though it satisfies charity, and therefore will not be morally permissible in that situation. Taking her discussion as a clue, I now examine what position Aristotle himself would hold on the problem of euthanasia.

3-2

I agree with Foot that the problem of euthanasia cannot be considered adequately by reference to the principle of autonomy alone. The reason is this. The justifiability of euthanasia on the basis of respect for autonomy alone would be explained as follows. In a society whose members respect autonomy, a human being has the right to die: every person holds the right to choose and do what she judges beneficial to herself, and to control her own body freely, so long as her action does not harm others or society. Therefore her request to die (i.e., to be assisted in dying), insofar as it is self-determination, should be respected and accepted by the doctor, even if it seems strange to others, including the doctor. In this way euthanasia can be justified by the principle of autonomy alone. However, it is questionable whether its justifiability can be fully explained on that ground only; for then the doctor would be obliged to accept the patient's request for euthanasia whenever she judged spontaneously (i.e., without any coercion), after serious reflection, that 'my life does not deserve to be lived in this situation'. In other words, euthanasia (assisted suicide) for any reason whatever, including lost love, vague anxiety about the future, and so on, would be permissible, excepting only such cases as depression or an impulsive desire for suicide. If so, however, the point at issue would drift away from the background against which we originally began to consider euthanasia, namely as an option in the face of excessive life-prolonging treatment in modern medicine. Moreover, such a radical view would not fit our ordinary moral intuitions about which situations make euthanasia permissible. Relying on the principle of autonomy alone, we would be hard put to explain why most states or communities that treat active euthanasia as legal do in fact establish concrete requirements (an unrecoverable condition, a terminal stage, unendurable pain, and so on) as safeguards or guidelines against its abuse. Seen from another angle, most of us probably think such requirements are necessary for permitting euthanasia, in addition to autonomy, because behind this attitude lies a fundamental understanding of what makes a life worth living. Does every person have an altogether different understanding of this? No. There must be some common, minimal understanding among us of a life worth living, and hence a common attitude that euthanasia is morally permissible only if it does not conflict with that understanding. If so, this implies that the notion of the flourishing life (the life worth living), which Aristotle sets up as the core principle of ethics under the name of eudaimonia (in a sense different from happiness as temporary subjective satisfaction), is needed for the justification of euthanasia.

However, we should be extremely cautious with an idea such as 'life worth living' when we take into account the historical fact of the inhumane 'euthanasia' carried out under that slogan by the Nazis. If the content of the flourishing life (or the miserable life), and of the life worth living (or the life unworthy of living), is determined arbitrarily by the tyranny of the majority or by dictatorial power, regardless of the intention of the person in question, there is a risk that inhumane mercy killing will be performed, in the name of benevolence or charity, on the basis of such an arbitrary definition. (I suspect this risk is one of the reasons why many thinkers stress the importance of autonomy so strongly.) On the other hand, it is implausible and unreasonable that the content of the flourishing life should have nothing to do with the particular views of each individual. We should therefore hold that, when we consider the flourishing life in the context of euthanasia, what makes a life flourishing (or miserable) must be explicable both subjectively and objectively. If we were allowed to determine the flourishing life subjectively only, we could regard every possible situation (including lost love, vague anxiety about the future, etc.) as a justifiable reason for euthanasia. If, conversely, we determined it objectively only, the majority or the authorities in a society might deem a certain state of life a 'life unworthy of living' and compel people in that state to undergo mercy killing without any request or consent. The content of the flourishing (or miserable) life must therefore contain both an objective element, as a minimum condition that most of us would accept, and a subjective (peculiar) element specific to each person. Accordingly, when we consider the permissibility of euthanasia, we should first confirm to what extent the patient requesting it severely lacks the minimum conditions commonly regarded as necessary for a minimally flourishing life.

Does Aristotle's view of eudaimonia harmonize with this? He defines eudaimonia as 'practical life of the part of the soul that has logos (reason)', alongside 'activity of the soul in conformity with virtue' (NE I 7, 1097b30-98a21). The former definition expresses his view that the essence of the human being consists in intellect. At first sight this looks like a rigid intellectualism that counts only intellectual activity as human and so restricts the flourishing life univocally to purely intellectual activity (such as contemplation); it even appears to risk justifying the deprivation of the right to live of those who lack intellectual ability, as so-called person theory does. However, we should attend to what Aristotle means by 'activity with logos (reason)'. He in fact counts as 'activity with logos' both activity that uses the intellect itself and the various actions we perform by exercising the non-intellectual parts of the soul (e.g., desires, emotions) reasonably. That is, 'to act with logos' means to act reasonably using any mental faculty. Sports, amusement, gastronomy, cooking, artistic activity, the expression of emotion, academic activity, and so on are therefore 'activities with logos' so long as we perform them reasonably. Some may think the pursuit of gastronomy is flourishing; others may think a life of reading books is worth living. While positively admitting such a multiplicity of views, Aristotle sets up the reasonableness of an activity as the minimum condition for judging it to be flourishing. This notion is applicable to the problem of euthanasia. What life is good and flourishing for a person depends on her own thought about it; the people around her cannot decide it independently of her will.
Aristotle's definition of eudaimonia is a minimum necessary condition: every activity counts as flourishing so long as it is done reasonably, and this holds good whatever view of life or lifestyle the agent has. We must not determine the content of another person's happy life heedless of her opinion, saying, as it were, 'this choice is more flourishing for you than your own choice'; such an attitude is mere meddling, or paternalism. According to Aristotle's view of eudaimonia, what the people around the patient (including the doctor) should confirm first of all, when they receive her request to be killed or to be allowed to die, is whether she is in a situation in which she cannot act reasonably in any sense and has no hope of doing so. If this cannot be confirmed, euthanasia would not be permissible in any case. Put the other way round, we can regard severe damage to health, an unrecoverable state resulting from such damage, unendurable pain, and the like as factors impeding reasonable activity. Moreover, on this point of reasonableness, we can indicate two things that should be checked concerning consent to a request for euthanasia: (i) whether the request is a decisive choice made through sufficient consideration; and (ii) whether the patient chose it for the sake of realizing his own flourishing life (i.e., avoiding an extremely miserable life) and with awareness of this purpose; in other words, whether the request was made out of desperation or not. To be sure, eudaimonia (the flourishing life) conceptually means the ultimate end at which every person aims in her life, so a situation such as 'he chose an action in order to become unhappy' must be impossible in principle. Nevertheless, a person might act desperately, desire his own death impulsively because of depression, or choose his action under compulsion (i.e., not voluntarily). At least in these cases we could not consent to the patient's request for euthanasia, because they do not satisfy the two conditions above; in other words, the doctor can consider accepting the patient's request only if it satisfies them. Furthermore, we can bring Aristotle's notion of phronêsis (practical reason) to bear on the present issue. Phronêsis is the intellectual ability to choose flexibly the appropriate subordinate ends and to carry out the means to them, in concrete situations, toward the ultimate end of life (i.e., eudaimonia). If, therefore, both the patient's request and the doctor's acceptance of it are well-considered choices made with phronêsis, we might call such choices 'thoughtful'. (But this is only a theoretical supposition; how many cases can really be called thoughtful is another question.)

3-3

However, even if we can admit such reasonable and thoughtful requests in some cases (i.e., even if we can judge that such a request, and its acceptance, is not necessarily morally bad), this does not entail a duty on the doctor's part to accept the request. Foot seems to try to assess the propriety of the doctor's acceptance to some extent by introducing the virtues of charity and justice. She thinks, for example, that when a patient lacking the fundamental goods reasonably requests euthanasia, the doctor is morally allowed to accept the request on the grounds of charity (which implies wishing for the patient's good) and justice (which implies fidelity to her serious request). However, even if we can use these two virtues to make a moral evaluation of the doctor's acceptance, we cannot address the problem of a duty of acceptance by referring to them, because virtue ethics in general, and Aristotle's eudaemonics in particular, do not originally employ such concepts as right and obligation, as deontology does. Rather, on Aristotle's view of medicine we can only give a negative answer to the question of the doctor's obligation. Medicine (iatrikê), as Aristotle thinks, is one of the arts (technê).
Every art has by its origin one definite end, and the end of medicine is health (hygieia) (NE I 1, 1094a9-10). Every practice in medicine, however various, is therefore supposed to be done for the sake of health. If so, assisting a patient's death lies outside the doctor's business; this is not a normative statement but a descriptive fact. In other words, euthanasia is outside the work of the doctor as a medical professional. Of course, withholding medical treatment for an incurable disease would still be part of the business of medicine, and so-called passive euthanasia must accordingly be justifiable.

Incidentally, Foot lists charity and justice as the virtues relevant to the moral evaluation of euthanasia. I doubt, however, whether these are really virtues that Aristotle himself would admit as such. She seems to understand justice as the virtue concerning the right actions that every human being should perform toward others regardless of any utility or profit (for example, being as faithful as possible to another's serious request, and not killing others), while charity, as she understands it, is the virtue concerning actions that make others good or happy. But Aristotle understands the concept of justice in Foot's sense as the perfection of all the ethical virtues (êthikê aretê) concerning actions toward others (NE V 1, 1129a3-30a13); that is, justice in this broad sense is not a particular virtue. The justice he admits as a concrete virtue is so-called distributive and corrective justice, based on the principle of fairness (NE V 2-5). As for charity, we in fact find it nowhere in his texts, because charity cannot in principle be one of the virtues of character (êthikê aretê) as Aristotle understands them. He regards a virtue of character as a disposition (hexis) of the following kind (e.g., NE II 1-2, 6-7): (i) it is established through the repetition of the corresponding actions; and (ii) it makes the corresponding emotion or desire operate reasonably (i.e., in the middle way, neither insufficiently nor excessively). For example, the virtue of courage (andreia) is the disposition to fear reasonably (in the middle way), established through the repetition of concrete courageous actions; the disposition to fear excessively is called cowardice, and the disposition to fear insufficiently is called rashness. The virtue of gentleness is the disposition to get angry reasonably; the disposition to do so excessively is irritability (that of doing so insufficiently has no name). That is, each virtue has its corresponding emotion or desire; each emotion or desire is morally neutral in itself; and the disposition to express that emotion or desire appropriately is the virtue. On this conception, charity cannot be a virtue, because it means a feeling of sympathy with others and with their good, and so has an essentially positive value in itself. I cannot but say, therefore, that there is a crucial gap between Aristotle and Foot in their understanding of virtue. This gap is not found in Foot alone. The same is true of many 'virtues' such as patience, sympathy, compassion, discretion, and so on, which modern virtue ethicists often emphasize as necessary in bioethical and medical contexts (e.g., Drane [10]): these are not dispositions of the kind described above, governing emotions neutral in value, but feelings that carry a high value from the start. Furthermore, justice as Foot conceives it, integrity, and beneficence, which are often regarded as important virtues in medicine, seem to be moral principles rather than virtues: principles according to which we should normally act, just as non-maleficence or autonomy are.

3-4

Admittedly, I am not criticizing Foot and modern virtue ethicists here for holding an incorrect understanding of virtue; my point is simply that Aristotle and they have different views of virtue. Moreover, we cannot demonstrate that the doctor has an obligation to accept a patient's request for active euthanasia (physician-assisted suicide) by referring to Aristotle's theory of virtue (êthikê aretê). A more effective hint about euthanasia should instead be sought in his notion and theory of eudaimonia (the flourishing life): by introducing his eudaimonia, we can suppose, in theory, a certain permissibility of the request and of consent to it.

7.4 Interdisciplinarity of Bioethics and Aristotle

4-1

I move to another consideration. As noted above, there are two ways of understanding the relationship between Aristotle and modern bioethics. One is the issue-based approach I examined in the previous section; the other is to ask what relation Aristotle's view of ethics bears to the fundamental character of bioethics as a discipline. As for the first approach, several thinkers have so far discussed euthanasia, abortion, enhancement, animal experimentation, and so on from a neo-Aristotelian perspective. The second approach, by contrast, has seldom been attempted with reference to Aristotle, although it concerns the methodological problem of bioethics and its establishment as a science or discipline. In this section I focus on this viewpoint and point out an affinity between the interdisciplinarity of bioethics and Aristotle's view of the basic character of ethics.

In general, modern bioethics is characterized as an interdisciplinary science. Several fields (e.g., medicine, philosophy, law, sociology, education) participate in the academic activity called 'bioethics' and jointly approach the various ethical problems of modern medicine. This interdisciplinarity has its historical background in the fact that modern bioethics was established in the twentieth century through liberation from an ethics that had been the exclusive preserve of medical professionals. That liberation is of great significance, because an interdisciplinary approach involving several disciplines is necessary for considering and resolving the complicated problems of a modern technological society, such as artificial reproduction, terminal care and euthanasia, organ transplantation and brain death, and so on. However, we can ask how such various and complex approaches hold together as one academic activity. If several disciplines each approach a given problem separately, the accumulation of their approaches is likely to be a mere mixture of considerations from the respective fields; and if these approaches merely coexist in an aggregate without relating to one another, we cannot regard the aggregation as one interdisciplinary science of bioethics. So-called bioethics would then mean only the coexistence of separate academic fields. We should therefore ask the fundamental question concerning the possibility of bioethics: in what way, and to what extent, does biomedical ethics possess unity, certainty, and clarity as one discipline (or one branch of applied ethics)? (Of course, the opinion that mere aggregation is the essence of bioethics may be one possible answer to this question.)

4-2

There is an ancient figure who can offer a hint toward answering this question: Aristotle. The reason is as follows. Popularly called the 'founder of all sciences', he established most of the current academic disciplines (e.g., physics, biology, economics, logic, psychology) in European history. He is therefore naturally sensitive to the fundamental characteristics of the new sciences he originated with regard to their object, method, and purpose; and ethics (êthikê) is of course one of those disciplines. Aristotle determines what kind of science ethics is (or, strictly speaking, what came to be called 'ethics' in later history) in the introductory discussion (Book I) of NE. There he introduces an important notion which, in my opinion, relates closely to the interdisciplinarity of bioethics: the notion of the focal (pros hen) structure.

What is the focal structure? Aristotle introduces the notion of 'pros hen' where we have a general theme whose consideration corresponds to one discipline, but whose central concept is equivocal or has plural meanings. The pros hen (focal) structure is a kind of framework by which a unified consideration of such a theme or concept is established. As examples of such concepts he gives 'be', 'good', 'medical', and 'healthy'. I take up the examples of 'healthy' and 'be' introduced in the Metaphysics. In Metaphysics (Met.) Gamma and Zeta, he presents the focal structure of 'be' as follows:

– to on (be, being) is said in many ways; but it is so said not homonymously, but with reference to one nature (ousia, substance) (1003a33f);
– to on is said in many ways (i.e., according to the categories), but the primary being is 'what it is', which means substance (1028a13-15).

Being as substance (e.g., 'is Socrates', 'is a human being') is primary being; the other categories ('is pale', 'is tall') are secondary beings, which are said with reference to substantial being. This situation is analogized to the case of 'healthy (hygieinon)' (1003a34ff). The predicate 'healthy' is applied not only to a person but also to food (e.g., vegetables), complexion, urine, etc., but with a difference in the manner of application: it is applied to the person as the possessor of health, to the vegetable as a cause of it, and to the complexion or urine as a sign of it. Thus 'healthy' is said primarily of the person as the possessor of health, while the other predications are possible only by reference to the health that the person possesses. If we consider the case of 'be' by analogy with that of 'healthy', we can understand two points: (i) substance is primary being; and (ii) just as the various things to which the term 'healthy' is applied can be the object of one science, namely medicine, so, analogically, beings that are originally equivocal can be the object of one science, namely the study of substance.

This pros hen structure, as Aristotle thinks, also holds for the concept of 'good' in ethics, and in NE he applies the theory directly to ethics. In NE I 6 he criticizes the Platonic view of the good. Platonism holds that all good things derive without limit from the Idea of the Good, and therefore that the concept of good is univocal. Against this position, Aristotle tries to show that the good is essentially equivocal and has many senses. He insists on two fundamental characteristics of the predicate '(be) good (agathos)', as follows. (i) The homonymy of the good: 'good (agathos)' is said in many ways, i.e., across the categories, just like 'be'. For example, god and intellect (theos kai nous) are said to be 'good' in the category of substance, virtue in quality, the moderate in quantity, the useful in relation, opportunity in time, and location in place. (ii) The pros hen (focal) structure of the good: a certain order can be found among these categories (substance, quality, quantity, and so on); that is, in the case of 'good', just as in that of 'be', all the categories other than substance depend on it and are explained in relation to it (1096a17-34). Aristotle takes these two points to be established on the analogy of the focal and equivocal character of 'be'. This is easy to understand for most of the examples he enumerates (virtue, opportunity, the moderate, the useful, location), but we need to examine what 'god and intellect', as 'the good in the category of substance', means. It is difficult at first sight to see how concrete good things in human life are said to be good derivatively from the goodness in god or intellect. We can, however, read 'god and intellect (theos kai nous)' as 'godlike intellect' or 'god, namely intellect', on the grounds that the Greek word 'kai' can function grammatically as a hendiadys and that all the other examples are enumerated one-to-one with the categories. On this reading, we can understand what 'the goodness in substance' means. Aristotle is not claiming here that god is the good entity of the highest rank; he is referring to theôria (contemplation, theoretical activity), which can be performed only by the intellectual capacity of the human being. In the final chapters of NE, Aristotle presents this activity as the most flourishing activity, worthy of the most flourishing life: theôria, as he thinks, is performed by the intellectual part of the soul and is, moreover, the most godlike (theiotaton) of all human activities (X 7-8, esp. 1177a12-22). We can therefore take Aristotle to be showing, in the analogical discussion of the good above (1096a17-34), that the goodness of eudaimonia (the flourishing life) is the primary goodness (i.e., the goodness in substance). For eudaimonia, as already mentioned, is both the ultimate end and the ultimate good at which all actions (i.e., all activities for the sake of some good; I 1, 1094a1-3) finally aim (I 4, 1095a17-20); in other words, the goodness of eudaimonia is the standard of all actions performed for the sake of any good. I think the example 'god and intellect' suggests Aristotle's own position that intellectual activity (theôria) is the concrete content of the ultimate good (i.e., eudaimonia). However, this is only his own position. What matters is that the focal structure of the good remains valid whatever thing (or activity) one assumes as the concrete content of eudaimonia. Aristotle indicates here that the various goods in human life can be considered comprehensively as parts of 'eudaemonics'. To be sure, the good is said in many ways, and each good in each category can be understood within its respective specialized field: the right quantity of a spice is known by the art of cooking, a good location is explained by the estate agent, and so on. On the other hand, any good in any category can be recognized within the framework of eudaemonics, so long as the relevant action is performed voluntarily for the sake of the agent's good. I would point out here that this idea of a moderate or mild focal unity, one that accommodates a certain multiplicity, can be related to the idea of interdisciplinary unity in bioethics (or in applied ethics in general). Bioethics deals with various subjects, each with its own contexts and social aspects, so each subject can be considered independently by the professionals of a particular field (e.g., medicine, philosophy, law, sociology). Yet there should be some unity or common standard among these multiple considerations, just as, in Aristotle's ethics, all goods retain their multiplicity and are nevertheless referred ultimately to 'the good in happiness (eudaimonia)'. Otherwise bioethics would remain a mere aggregation of different approaches to one subject by different fields.

4-3

What, then, is the common standard (i.e., the focal notion) for realizing such interdisciplinary unity in bioethics? I think that considering this question is itself one of the tasks imposed on every bioethicist; whoever is engaged with bioethical subjects seems to be required to answer it. If Aristotle were asked, for example, he would answer that the focal notion is health, on the ground that bioethics has to do with medicine (healthcare) and the ultimate end of healthcare is originally health. But this is only Aristotle's answer. Each person engaged in modern bioethics must consider for herself what its focal notion is.

References

1. Foot, P. (1967). The problem of abortion and the doctrine of double effect. Oxford Review, 5, 5–15.
2. Foot, P. (1977). Euthanasia. Philosophy and Public Affairs, 6(2), 85–112.
3. Kass, L. R. (1989). Neither for love nor money: Why doctors must not kill. Public Interest, 94, 25–46.
4. Beauchamp, T. L., & Childress, J. F. (2001). Principles of biomedical ethics (5th ed.). Oxford University Press.
5. Jonsen, A. R., Siegler, M., & Winslade, W. J. (2002). Clinical ethics (5th ed.). McGraw-Hill.
6. Hursthouse, R. (1991). Virtue theory and abortion. Philosophy and Public Affairs, 20(3), 223–246.
7. Hursthouse, R. (2011). Applying virtue ethics to our treatment of the other animals. In T. L. Beauchamp & R. G. Frey (Eds.), The Oxford handbook of animal ethics. Oxford University Press.
8. Sandel, M. (2007). The case against perfection: Ethics in the age of genetic engineering. Cambridge, MA: Belknap Press of Harvard University Press.
9. Rachels, J. (1975). Active and passive euthanasia. The New England Journal of Medicine, 292, 78–80.
10. Drane, J. F. (1995). Becoming a good doctor: The place of virtue and character in medical ethics (2nd ed.). Sheed & Ward.

Chapter 8

Reinterpreting Motherhood: Separating Being a "Mother" from Giving Birth

Mao Naka

Abstract The fact that only women give birth has been used to justify the view that women must "naturally" be the primary caregivers of newborns, which has constituted the core of the traditional understanding of motherhood. In order to reinterpret motherhood, it is essential, therefore, to separate giving birth to a baby from the concept of raising a child or being the primary parent. The paper focuses on just such a theoretical separation. In addition, this paper proposes using the term "mother" to describe a person who does not physically give birth—such as fathers and foster parents—if they form a close connection with a child through their "mothering," which may transform their way of existence as a result. Half of the paper is devoted to an examination of the practical example of newborn adoption due to an undesired pregnancy, including babies left in baby hatches or anonymous/confidential childbirth. Such instances serve as an exploration of the actual possibility, theoretically considered in the first half of the paper, of separating the concepts of giving birth and being a primary parent.

Keywords Motherhood · Rich · Ruddick · Baby-Hatches · Newborn adoption

8.1 Reinterpreting Motherhood

8.1.1 Introduction

The fact that only women are capable of giving birth—which is what we know to be true for now—seems to constitute the core gender difference related to reproduction. Even if most other differences could be eliminated, that difference would remain strongly rooted until a future technology allowing males to give birth is created. That difference seems, therefore, to be the source of all gender bias—not only in terms of reproduction but also generally. Nevertheless, were this solely the case, it would likely not be possible to reduce the influence of gender differences. (This paper is partly based on my previous paper in Japanese, "Hahadearukoto (motherhood) wo saikosuru" ["Reconsideration of Motherhood"], Shiso, no. 1141, Iwanami Shoten, 2019.)

What makes a woman's life completely different before and after childbirth is, needless to say, the existence of a baby. Theoretically, childbirth and childrearing or being a primary parent should be considered separate matters, but practically, they are often conflated. It is this confusion, rather than the experience of giving birth itself, that causes most issues of gender difference related to reproduction. Doucet has pointed out that "more than any other single life event, the arrival of children most profoundly marks long-term systemic inequalities between women and men" [12, p. 5]. In order to reinterpret motherhood, it is essential to consider giving birth and raising a child, and giving birth and being a primary parent, as separate things. Nevertheless, we do not insist that women should stop being mothers or that they should become identical to men as social beings, since this would mean judging women based on a male-centered perspective. Instead, we intend to focus the reader's attention on the fact that giving birth and raising a child are distinct from each other, and to have that recognition take root by emphasizing those differences. Our direction is, in fact, the inverse of the above: we propose considering those who did not physically give birth—such as fathers and foster parents—as falling within the term "motherhood." We do this by positively reinterpreting the term, loosening the practically exclusive connection between giving birth and raising a child or being a primary parent. Besides, it can be said that these aspects may already be present to a certain degree.

8.1.2 Ambiguity of Motherhood

The idea of motherhood is ambiguous, as it can involve more than merely being a female parent, depending on the person and situation. (I discussed motherhood in Japanese feminism in [20].) The primary reason for this appears to be that the concept of giving birth remains strongly rooted at the core of the understanding of what motherhood is and, thus, childbearing has been assigned unreasonable importance. Moreover, in addition to the fact that giving birth lies at the core of motherhood, the concept of being the primary parent—who has the closest relationship with the child through caring—is closely connected to it. There is an implicit but firm connection between being a woman who has given birth and being a primary parent and caregiver. To some extent, this connection constitutes the basis of the public perception of motherhood today, even though the situation has been changing gradually. Certainly, the physical experiences of pregnancy and childbirth offer quite an advantage in creating a close relationship with one's child. However, they are dispensable qualities of motherhood in our view. Rather, it is not uncommon for people who did not give birth to surpass those who did in terms of mothering. From our perspective, the overemphasized connection between being a woman who has given birth and being a primary caregiver has been one of the main sources of gender bias historically. Therefore, this paper will question this assumption and demonstrate that our reinterpreted idea of "motherhood" applies to people more broadly, superseding gender differences and the distinction between having given birth and not having done so.

8.1.3 Distinction of Motherhood: Rich

The abovementioned view of motherhood, based on the importance of giving birth and its implicit connection with being a primary caregiver, has historically bound women to childcare and education. The natural fact that only women give birth is used to justify the view that women must "naturally" be the primary caregivers of newborns, based on the implicit connection between the mother and the child according to the traditional understanding of motherhood. In this sense, motherhood has served to oppress women for centuries in patriarchal societies.

Adrienne Rich brought oppressive motherhood into question, calling it "motherhood as an institution" in Of Woman Born. She distinguished between two types of motherhood, one positive or favorable, the other negative or rejectable. "I try to distinguish between two meanings of motherhood, one superimposed on the other: the potential relationship of any woman to her powers of reproduction and to children; and the institution, which aims at ensuring that that potential—and all women—shall remain under male control… The power of the mother has two aspects: the biological potential or capacity to bear and nourish human life, and the magical power invested in women by men" [22, p. 14].

While favorable motherhood concerns women's own potential related to reproduction, the rejectable version, "motherhood as an institution," is imposed upon women by androcentric societies, restricting their choices to becoming a "good" mother and subjugating them to men's control. According to Rich, the invisible institution of motherhood, as well as that of heterosexuality, "creates the prescriptions and the conditions in which [women's] choices are made or blocked; they are not 'reality' but they have shaped the circumstances of our [women's] lives" [22, p. 42]. This means that, according to her, the institution of motherhood works to oppress women instead of cultivating and developing their potential, as the favorable concept of motherhood does. "Institutionalized motherhood demands of women maternal 'instinct' rather than intelligence, selflessness rather than self-realization, relation to others rather than the creation of self" [22, p. 42].


The reason that motherhood is oppressive in this instance is that it is uniformly imposed upon women from the outside, based on gender and the fact that they have given birth. In this case, motherhood is compulsory rather than voluntary, as opposed to favorable motherhood. Moreover, differences among women and their particularities, including orientation, qualities, and abilities, are suppressed. However, Rich's distinction also includes favorable motherhood; therefore, we do not need to reject motherhood outright just because it involves the downside of institutionalization.

As we saw above, Rich defined the idea of favorable motherhood as "the potential relationship of any woman to her powers of reproduction and to children" [22, p. 14]. Rich immediately rephrases this definition, saying, "the biological potential or capacity to bear and nourish human life." Indeed, she considers childbearing one way of "discovering our physical and psychic resources" [22, p. 157] or "of liberating ourselves from fear, passivity, and alienation from our bodies" [22, p. 184] instead of "the victimizing experience." After dismissing institutionalized motherhood, Rich intends to change perspectives on childbirth and the potential for childbearing from the negative to the positive, and she comes to consider women's "physical organization," accompanied by the potential to bear children, a "female resource": "The physical organization which has meant, for generations of women, unchosen, indentured motherhood, is still a female resource barely touched upon or understood" [22, p. 285].

Such a positive evaluation of the female body focusing on the potential to bear children can be an effective countermeasure against an institutionalized way of looking at motherhood, which tries to impose patriarchal motherhood uniformly on women. On the other hand, it causes us to question whether the potential or capacity to bear children is really indispensable to the concept of favorable motherhood and, if so, whether that definition of motherhood is limited only to those women who are capable of giving birth.

We agree that we can reinterpret motherhood positively in spite of the possible downside of institutionalization. However, we consider that favorable motherhood is not limited to women who have given birth. In the following section, we explore favorable motherhood as differing from the idea proposed by Rich. Specifically, we question whether giving birth is essential to motherhood and, if not, whether "motherhood" could be attributed to individuals other than the birth mother, including fathers and foster parents. To this end, we explore the possibility of separating giving birth from motherhood, referring to two feminist thinkers, Chodorow and Ruddick. Both consider motherhood to be independent of the feminine gender and, to varying extents, extend its scope beyond childbearing.


8.1.4 Separating "Being the Primary Parent" from "Giving Birth"

8.1.4.1 Chodorow

Nancy Chodorow reframed what it is to be a mother in The Reproduction of Mothering. In her view, "a mother" is above all "a person who socializes and nurtures" a child, or "a primary parent or caretaker." Based on this definition, she asked the following questions: "Why are mothers women? Why is the person who routinely does all those activities that go into parenting not a man?" [11, p. 11]. She insisted that "women's mothering is seen as a natural fact" by many theorists, who acknowledge no need for an explanation. This view is also held by the public and reinforced by ideologies and institutions such as schools, the media, and families; therefore, there has thus far been no room for questioning the connection between primary caregivers and women. "Society's perpetuation requires that someone rear children, but our language, science, and popular culture all make it very difficult to separate the need for care from the question of who provides that care. It is hard to separate out parenting activities, usually performed by women and particularly by biological mothers, from women themselves" [11, pp. 35–36].

In contrast to this traditional view, Chodorow posited that the connection between being a mother and being the primary caregiver has been constructed culturally and socially. From a psychoanalytical standpoint, she recognized a gap between primary caregivers and women in general and endeavored to reveal the social mechanism that connects them socially and culturally, causing us to believe that women's motherhood is a natural fact. This mechanism, which socially reproduces women's mothering across generations, is Chodorow's main subject in her book. However, we do not consider it further here because of the risk of digressing from our main topic.

8.1.4.2 Ruddick

Sara Ruddick further radicalized the separation between being a mother and giving birth. She defined a mother concisely as a person who engages in mothering. She considered mothering to be work or practice that meets children's fundamental needs and regarded anyone for whom an essential part of life is occupied by mothering as a "mother."3 "In my terminology they are "mothers" just because and to the degree that they are committed to meeting demands that define maternal work… These three demands—for preservation, growth, and social acceptability—constitute maternal work; to be a mother is to be committed to meeting these demands by works of preservative love, nurturance, and training" [23, p. 17]. Anyone who meets this criterion, including men and others who have not given birth to a child, is thus a "mother." In addition, two or more people can share motherhood.

In contrast, according to Ruddick, a woman who has given birth is not necessarily a "mother" on this definition. There is a gap between giving birth and being a mother: anyone who engages in mothering becomes a mother, while a woman who has given birth can retreat from being a "mother." Ruddick interprets this gap as room for voluntary choice. "In any culture, maternal commitment is far more voluntary than people like to believe. Women as well as men may refuse to be aware of or to respond to the demands of children" [23, p. 22]. The view of mothering as work or practice shows that there is room to choose between giving birth and being a mother. The fact that one becomes a mother through practicing mothering, rather than through giving birth, means that becoming a mother requires some degree of willingness to do so. Although most people engage in mothering as a matter of course, some are unable to do so because of undesired pregnancy or other circumstances. Moreover, most mothers at times feel that it is impossible to continue being a mother, and some actually suspend the practice of mothering as a result. "All mothers sometimes turn away, refuse to listen, stop caring" [23, p. 22]. This is not exceptional; rather, it constitutes an essential part of the practice of mothering, as all types of practice can be both fulfilling and painful depending on individuals and situations. Therefore, it is natural that some women do not or cannot become mothers, just as there are men who do not or cannot take on a similar role.

From this perspective, Ruddick made the bold claim that all mothers are "adoptive." "A corollary to the distinction between birthing labor and mothering, is that all mothers are "adoptive." To adopt is to commit oneself to protecting, nurturing, and training particular children. Even the most passionately loving birthgiver engages in a social, adoptive act when she commits herself to sustain an infant in the world" [23, p. 151]. Ruddick therefore emphasized that there are no qualitative differences between cases in which women who have given birth also engage in mothering and those in which adoptive parents decide to do so. In both cases, engagement in mothering is a "social" and "adoptive" act. In this way, Ruddick squarely opposes the general view that giving birth and engagement in mothering are continuous and constitute a "natural" fact.

3 Ruddick explains the reason she keeps using the term "mothering" instead of a neutral term like "parenting" as follows: "I want to recognize and honor the fact that even now, and certainly through most of history, women have been the mothers. To speak of "parenting" obscures that historical fact" [23, p. 44].

8.1.5 Criticisms and Limits of Ruddick's "Mothering"

For the purposes of reconsidering motherhood, introducing the concept of mothering as discussed above was effective. The emphasis on action or practice enables us to view motherhood independently of the static qualities of the individuals concerned, including the distinction of having given birth and biological or legal status, and to focus on gradually building a relationship with a child through the practice of mothering. It allows people other than the woman who has given birth to a child to be engaged in being his or her "mother," or rather reveals the reality that some of those people have already been doing so. We can say, therefore, that it is an effective way for us to shift our point of view on motherhood from its formal and static status to its real and dynamic state.

8.1.5.1 Why Keep Using the Term "Mothering"?

The theories represented by Ruddick also have shortcomings and are subject to criticism. We start by examining one of the major criticisms or questions: why do both Chodorow and Ruddick keep using the gender-biased term "mothering," instead of adopting a neutral term such as "parenting" or "childcare," while arguing that men and foster parents could be included as "mothers"?

Ruddick defends herself against that criticism in the introduction to the second edition of Maternal Thinking, where she says: "I am asked why I don't speak of "parenting"…I retain the vocabulary of the maternal for several reasons…At the simplest level, I want to recognize and honor the fact that even now, and certainly through most of history, women have been the mothers. To speak of "parenting" obscures that historical fact, while to speak evenhandedly of mothers and fathers suggests that women's history has no importance" [23, p. 44]. Here, she expresses her standpoint of taking the historical and current gender-biased circumstances of mothering seriously, instead of taking a shortcut to express her ideal by using the neutral term parenting. She elucidates her point clearly on the next page, where she says that "Evenhanded talk of mothers and fathers or abstractions about parenting" causes trouble for the "acknowledgment of difference and injustice" [23, p. 45].

In addition, she immediately adds another reason in the passage following the above line: "Moreover, I want to protest the myth and practice of Fatherhood and at the same time underline the importance of men undertaking maternal work. The linguistically startling premise that men can be mothers make these points while the plethora of literature celebrating fathers only obscures them…[I]t is of the first importance, epistemologically and politically, that a work which has historically been feminine can transcend gender" [23, p. 44]. Although this second reason seemingly contradicts the first, she considers that, against the commonly accepted view, there is in fact no essential distinction between mothering and fathering (mothering performed by men), and she is thus able to describe them uniformly using only the term mothering. What is subtle about Ruddick's method is that she does not recognize any gendered qualitative differences in childrearing, yet she persists in building her theory on the female perspective rather than a gender-neutral viewpoint. The crucial issue is where she stands when she starts to examine "mothering." The answer to the question of why she insists on using the term mothering is, in our view, related to this characteristic of Ruddick's logic.

It seems indispensable to us to start with the female perspective instead of taking a gender-neutral viewpoint from the outset, since the latter position also obscures the traditionally assumed androcentric viewpoint on which many cultures are based. However, by doing so, we run the risk of taking for granted a female-centered point of view on mothering, thus imposing on people engaged in mothering a fixed model as an implicit ideal, which may overlook the variety and fluidity that are characteristic of the act of mothering.

8.1.5.2 Distinction Between Identity and Practice

Another criticism of Ruddick’s theory concerns her clear distinction between identity and practice regarding motherhood or mothering. In one way, that distinction helps us consider separately a person who devotes themselves to mothering from a person who has given birth, a necessary stage to go through in order to reinterpret motherhood. On the other hand, if the distinction goes too far, it could underestimate the fact that the accumulation of the practice of mothering can create an idea of how a person who mothers should be. Indeed, Ruddick’s argument seems to lack consideration of what constitutes being a “mother;” in other words, what constitutes the personhood of a “mother.” It is true that, like Chodorow, she defines being a mother and enumerates the main elements of the practice of mothering, but it is still not clear what changes that becoming a “mother” can bring about to the “mother” herself/himself. One critic, Miller, does not agree with Ruddick’s focus on the performative aspects of “motherhood” as distinguished to a certain degree from the manner in which one practices mothering. She mentions, “Crucially for me, being a mother is always more than performing and playing a part.” Rather, she contends that “[p]racticing over time…for most women leads to the development of a deeply felt, loving relationship;” that is, “a relationship and connection to our children which is developed through practice and interaction.” Based on that relationship, the manner of being a “mother” has to be reconstituted and that aspect should not be limited to females who have given birth. This is another reason why we avoid adopting the terms “caring” and “childcare” instead of the term mothering, since the former tends to focus on the practice itself, and is separated from the way of being of the person created through the practice. An emphasis on action can overshadow the fact that one’s actions are closely connected to the personhood of the one who acts. Indeed, we are certain that “motherhood” or being a “mother” through mothering affects what one is and what one may be. Therefore, we will examine this point before concluding.


8.1.6 Transformation of the Self Through Mothering

8.1.6.1 Behavioral Adaptation, Based on Merleau-Ponty

How is it possible, then, to think of motherhood while focusing on its practice without undervaluing one's way of being at the same time? In answering this, we would first like to refer to Merleau-Ponty's related theory. According to Merleau-Ponty, particularly in the Phenomenology of Perception, human beings are considered as corporeal existences immersed in the world, mutually interacting with their environment by means of their body. He calls this "being in the world." "The world is… what I live, I am open to the world, I have no doubt that I am in communication with it" [16, pp. xvi–xvii/p. 17]. As corporeal existences, humans perceive their environment and its meaning through physical interaction, or corporeal approaches, toward the environment. "Sense experience…invests the quality with vital value, grasping it first in its meaning for us, for that heavy mass which is our body, whence it comes about that it always involves a reference to the body" [16, p. 52/p. 79]. In this sense, it is said that "my body is the pivot of the world" [16, p. 82/p. 111]. If one's behavioral pattern changes, the meaning of the environment has to change accordingly. Conversely, if the environment changes, it would allow the individual to acquire a different behavioral pattern.

We can adapt Merleau-Ponty's view to the context of "mothering": it can be said, then, that once a person has a child, "mothering" occupies an important part of her/his life. As Ruddick mentions in her definition of being a "mother," it can transform one's actions or behavior toward the environment, since actions and behaviors can be reorganized according to one's newly valued practice and behavioral system. We can surmise such a corporeal adaptation toward one's environment through Merleau-Ponty's analysis of the denial of loss or dysfunction of a part of the body, as in the "phantom limb" or "anosognosia" [16, p. 80/p. 109]. His analysis revealed the existence of the "habit-body," which is distinct from "the body at this moment" [16, p. 82/p. 111], and through which we stand in relation to our environment, grasping the "meaning of a situation" [16, p. 79/p. 108] as a whole through our body. Moreover, that shift in our behavioral patterns leads to changes in one's way of being or one's existence because, according to Merleau-Ponty, the dynamic system of one's actions toward the environment is rooted in one's existence [16, pp. 78–84/pp. 107–113].

Ruddick stated that mothering begins with responding to a child's needs. If so, once one becomes a "mother," one is required to transform or rearrange one's actions or behavioral system so as to respond to the child's needs appropriately. We can interpret this adaptation as a transformation of one's action toward, or interaction with, the newly rearranged environment, followed by a change in one's way of being or existence. Concretely, most new mothers or fathers who commit to mothering can experience a radical change to their way of being, accompanying a change in their actions or behavioral system, because of the arrival of a child and the subsequent deep commitment to mothering.


8.1.6.2 Reorganization of One's Way of Being in Motherhood, Based on Levinas4

Merleau-Ponty’s view helped us consider motherhood focusing on both the practice of mothering and one’s way of being. However, this is not yet sufficient for us to grasp what the exact changes to a person’s way of being are through mothering. As such, we sought to extend and develop that view by using Levinas’ thought. Levinas posited in Time and the Other that “fatherhood (paternité),” as long as it is interpreted on a previous empirical level, is reconstituted based on one’s relationship with a child, who is considered to be the “Other”5 by Levinas. In contrast to first appearance, what Levinas calls “fatherhood” is not limited to male parents, but rather represents being a parent in general, preceding gender or other empirical aspects. It can be interpreted as a thing like “motherhood” as understood in this paper, which would include all kinds of “mothers.” The remarkable part of his view on the self is that the relationship with a “child” (as far as is understood by drawing upon the meaning attributed to his terminology) constitutes the individual’s way of being. In other words, being a “father” or “mother,” as reinterpreted in our terminology, is not superimposed on a pre-existing self; rather, in the opposite way, it is one’s relationship with a “child” that primarily constitutes the basis of the self. We deduce from Levinas’ view a rather empirical insight on the issue of reproduction, beyond his intention, as follows: once one has a child, and becomes deeply involved with the child through mothering, this experience can cause a radical shift in or the reorganization of one’s existence. The person’s existence can then be reformed around their relationship with their child as the foundation of all other aspects of the person, including her/his recognition, feelings, and values. In other words, her/his existence would be founded based on being a “mother” in our terminology, regardless of gender or having given birth. Nevertheless, this does not mean that there is no conflict or crisis in the course of such a transition. Rather, most new mothers or fathers go through a perplexity or conflict in facing the change in the environment accompanied by shifting of the self because of the arrival of a child. Neither does it mean that such a shift reflects a realization of an innate nature that every woman may be considered to potentially have, as traditional views have asserted. In contrast, that shift is actually caused by the accumulation of one’s experiences and it can, consequently, occur for male or 4I

discuss Levinas’ view about motherhood further in [18]. his earlier work, Time and the Other, Levinas described “fatherhood” or “fraternity,” the relationship with the child, as a symbol of the relationship of the Self with the infinite Other. In this case, fatherhood is used as a description of the Self on an a priori, rather than at an empirical level, and the child represents the infinite Other which is one of his main themes. Therefore, the way of being of the Self in fatherhood is universally applicable to the self for all people, regardless of gender or the distinction between having or not having a child. In contrast, in his main book in the latter half, Otherwise than Being…, he described a similar but more developed relationship of the Self with the infinite Other using the term “motherhood” or “pregnant mother’s body.” In this case, motherhood represents the Self’s way of being on a universal level regardless of the empirical level.

5 In

8 Reinterpreting Motherhood: Separating Being a “Mother” …

163

foster parents, as well, in our view. We will now move on to an empirical level of discussion before concluding, wherein we can find experiences of individuals who are deeply engaged in “mothering.”

8.1.6.3 Shift of the Self

Miller reveals, through interviews with a number of mothers and fathers, that the shift of the self begins from the moment of anticipating being a mother or father and intensifies in the following period, when new mothers or fathers go through "mothering." As seen above, she considers that "[p]racticing over time…for most women leads to the development of a deeply felt, loving relationship…to our children which is developed through practice and interaction," and it is this relationship developed through practice, rather than the practice itself, that is indispensable and constitutes being a "mother" for Miller. Moreover, she applies this interpretation to fathers who are deeply engaged in "mothering" as well, in another book, entitled Making Sense of Fatherhood.

When Miller focuses on relationships "developed through practice and interaction" in this way, she is speaking against the essentialist view that women potentially have an innate tendency to develop such relationships. According to her, that relationship is not innate, but is rather acquired little by little by performing "mothering." "(D)oing and performing mothering requires time to master. It involves a relationship of interaction, rather than being experienced as innate" [17, p. 146]. The result is that "mothering" is as open to people other than the women who have given birth, such as male or foster parents, as it is to those women, provided they show great enthusiasm for "mothering."

Indeed, Doucet insists in her book focusing on fathers that "[w]hile…many women experience profound moral transformations when they become mothers, my study…indicates that such transformations can also occur for fathers." In particular, "when women move over, or are temporarily emotionally and practically unavailable, men can come to know the depths of what it means to be fully responsible for a child. It is this responsibility for others that profoundly changes them as men. That is, having the opportunity to care engenders changes in men that can be seen as moral transformations" [12, p. 207]. This "moral transformation" can be considered as corresponding to the shift in one's way of being that we argue for in this paper.

Doucet considers what she calls emotional responsibility one of the main elements that constitute being a primary parent, or "mother" in our terminology. According to her, mothering and fathering have both commonalities and differences, but above all, she emphasizes fluidity: each of them changes depending on the age of the child or the degree of engagement. "There are also spaces and times in the flow of mothers' and fathers' lives when gender boundaries are relaxed to the point that they are barely noticeable" [12, p. 126]. In other words, what occurs is the "breaking down of some of the binary distinctions between mothering and fathering" [12, p. 134].


At the end of the book, Doucet frankly confesses her wavering and complicated standpoint regarding strategy and decision: "While my argument remains, largely in response to those who argue the contrary within the terms of that debate, men are not mothers and fathers do not mother, there are times and places where men's caregiving is so impeccably close to what we consider mothering that gender seems to fall completely away, leaving only the image of a loving parent and child" [12, p. 246].

In conclusion, although it is true that the experience of pregnancy and giving birth involves considerable labor and can therefore be a strong incentive for mothering, those who have given birth do not have an absolute advantage, and it is not uncommon for children's relationships with people who did not give birth to them to surpass those with the people who did. As such, "motherhood" as defined above can be extended to all parents and caregivers. It is therefore important not to draw clear distinctions between genders, or between those who give birth and those who do not, and to recognize the fluidity and gradations within those binaries.

8.2 Example from a Practical Context6: Newborn Adoption Through Baby-Hatches or Anonymous/Confidential Childbirth

8.2.1 Introduction

We ventured in this paper to focus on the possibility of separating the concepts of giving birth and being a mother and identifying the gap between them. In exploring that possibility, our actual concerns were male primary parenthood or sole custody, foster care (including by same-sex couples), and newborn adoption following undesired pregnancy, which occasionally operates through baby-hatches or anonymous/confidential childbirth. We take up the last issue as an example in this section.

When women become pregnant accidentally and the pregnancy is undesired, they can be pressured into deciding between either giving birth or undergoing an abortion. However, some choose neither, because they do not want those around them to know about the pregnancy but have missed the cutoff time for abortion. In such cases, they occasionally abandon the baby immediately after giving birth. Baby-hatches were established to prevent infant abandonment, which was typically followed by harm or death. In other cases, pregnant women may not want those around them to know about the pregnancy but are hesitant to have an abortion and, therefore, wish to give birth to the baby only if there is no risk of others knowing. Newborn adoption through anonymous or confidential childbirth can be an option for such women living in countries where it is legal.

We will first examine baby-hatches in Germany and Japan, since the two countries contrast sharply on this issue. While Germany is the birthplace of, and the leading nation in, baby-hatches, Japan has had only one baby-hatch since the concept was introduced in 2007, following the German example.

6 This part is based on the paper for the following presentation: [21].

8.2.2 What Are Baby-Hatches?

Although the baby-hatch has a long history in Europe, going back to the Middle Ages, its modern origin is found in the German "Babyklappe." The first baby-hatch in the world was set up in Hamburg, Germany, in 2000, by a private social welfare organization, to help address the problem of abandoned babies, of whom there were around a thousand every year in Germany. Its purpose was to secure the life of the baby, thus preventing it from being deserted or harmed [cf. 13, Chap. 2; 24, Chap. 2]. The director of the Jikei Hospital in Kumamoto, Taiji Hasuda, inspected the Hamburg Babyklappe and, shortly thereafter, set up the first Japanese baby-hatch at his hospital in 2007, named Cradle of the Stork (Konotori no Yurikago in Japanese) and commonly known as the Baby-Postbox (Akachan-Posuto in Japanese). He adopted some devices of the German system, including keeping a proper temperature in the container to protect newborns, a door that is openable only from the inside once it has been closed by the depositor, a letter left in the box addressed to the mother or other depositor urging them to leave some remembrance of or information about the baby, and so on [24, Chaps. 2, 3].

Both the German Babyklappe and the Japanese Baby-Postbox are considered drastic measures to protect a baby's life and to help a mother or parents in difficult situations. Therefore, these facilities are accompanied by several preventative measures to limit the incidence of surrender, in particular counseling services for pregnant women and their partners who are experiencing conflict. Jikei Hospital also has a 24-hour hotline service, and counselors can reassure mothers in conflict about seeing an obstetrician at the hospital or can introduce them to the adoption system [24, Chap. 4; 25, Chap. 6]. In addition, many German institutions with Babyklappen practice anonymous childbirth, in which a mother can give birth without revealing her identity. As a result of these practices, mothers or parents in trouble have resources explaining alternatives to leaving their babies in a baby-hatch.

Of course, it is desirable for the people managing a baby-hatch that it be used as seldom as possible. Even after a baby is left at a baby-hatch, the staff try to identify and contact the baby's parent(s)—not to accuse or penalize them, but rather to urge them to reconsider and to offer alternatives. In Germany, once a baby is left, the Babyklappe operator puts a personal ad in the paper appealing to the depositor to contact the facility. The baby-hatch in Kumamoto is designed such that an alarm sounds when a baby is deposited, allowing the hospital personnel to rush to it and, ideally, intercept and speak with the depositor before s/he leaves. Of course, the depositor's wish to remain anonymous is prioritized, but at the same time, the staff try to assess and be sensitive to the depositor's hopes, fears, and situation. Through such efforts, quite a few depositors abandon their anonymity, having found a safe and reassuring alternative. As a result, cases in which depositors remain completely anonymous are far rarer among users of baby-hatches than the public often imagines—amounting to about 25% in Kumamoto. In Germany, about 50% of the mothers go back to pick up their child after seeing the ad in the newspaper [24, p. 63; 13, pp. 121, 123].

Anonymous and confidential childbirth are also a growing trend in Germany. In the former, mothers can give birth with full anonymity, while in the latter, they leave their baby's sealed information with the facility for future reference. It is reported, however, that even in the former, about 90% of the mothers who practice anonymous childbirth ultimately relinquish their anonymity as a result of care and counseling during their stay before and after childbirth in shelters for mothers and infants in need [26, pp. 88–90; 14, Chap. 3]. In contrast to confidential childbirth, which is established by law in Germany, baby-hatch systems and anonymous childbirth are situated in a legal gray zone both in Germany and in Japan. Meanwhile, some countries in Europe and most states in the US have legalized both [8, 2, 14, Chaps. 5, 15, 27].

The largest contrast between the German and Japanese baby-hatch systems consists in the extent of their services and the degree of administrative commitment. In Germany, Babyklappen now exist in all parts of the country, numbering around 100 in all. While Babyklappen are run by private institutions, the German government supports them financially and has supplemented and supported their work with the legalization of confidential childbirth. In Japan, in contrast, a second baby-hatch has not been established in the 10 years since the first, and the Japanese government continues to refrain from being actively involved in the issue. The hospital, situated in the southern part of Japan, has received 155 babies from around the country at its baby-hatch.7 However, this does not mean that there have been no developments. In the non-governmental sector in Japan, hotlines for pregnancy conflict and facilities for receiving (prospective) mothers and infants in need are gradually increasing. Moreover, it was reported in December 2017 that Jikei Hospital was considering introducing confidential childbirth and was seeking the cooperation of health authorities, including the passing of new legislation [6].

8.2.3 From Anonymous Childbirth to Confidential Childbirth

As suggested, leaving a baby anonymously at a baby-hatch is viewed as a last resort for a mother or parents who have no other choice. Among the conditions that might affect the matter of anonymity, the most significant are the child's right to know his or her origin and the security of both the mother's and the baby's lives. These have given rise to interest in establishing some form of anonymity for parents in legal systems outside the baby-hatch system, which has been done in Germany with the legal establishment of the confidential childbirth arrangement [13, Chaps. 2, 3].

The first issue has always been a core criticism of baby-hatches worldwide, as a movement in favor of the child's "right to know," even against the parents' wishes, has gathered momentum and as inquiries and appeals by former "hatch-babies" have multiplied; in France, for example, this became a large, organized political movement [1]. It is frequently reported that children who do not know who their parents are experience a lack of secure identity. Even some nurses involved in the Baby-Postbox in Kumamoto feel anxious about the lack of information regarding an abandoned baby to provide to the child or future foster parents, and doubt the legitimacy of maintaining the full anonymity of depositors. However, Hasuda, the founder of the Japanese baby-hatch and director of Jikei Hospital, insists that the life of a baby should be prioritized over its right to know its origin, since naturally one cannot appeal for one's rights if one is not alive in the first place.

The second main criticism, regarding the safety of mothers and babies, is more pressing. Most mothers who leave their baby at a baby-hatch have not had any checkups in a hospital during pregnancy, and quite a few of them give birth by themselves at home or in a hidden place, such as inside a vehicle. This is obviously dangerous for both mother and baby. Therefore, contacting these women before they give birth and offering them a safer environment for childbirth is important. These are salient reasons why establishing some level of anonymity in the childbirth system may be seen as better than the baby-hatch approach.

Confidential childbirth is a better approach than fully anonymous childbirth when it comes to the child's right to know its origin. It was this recognition that pushed Germany to introduce confidential childbirth in law in 2014, while fully anonymous childbirth is reserved for mothers in specific situations who wish for anonymity, including cases of rape or adulterous relationships. The precise difference between the two is that in anonymous childbirth, a pregnant woman can give birth in a hospital and leave the baby there with total anonymity. Of course, hospital personnel will talk to her during her stay to suggest alternatives such as adoption or bringing up the baby herself with support;8 this is one benefit of anonymous childbirth when compared to the baby-hatch. In confidential childbirth, on the other hand, a mother leaves her own and her baby's information at a pregnancy conflict counseling center, which keeps it sealed until the child has reached a specific age (16 in Germany), and discloses it only if the child demands it. This way, a person's right to know his or her origin is assured. That is why confidential childbirth is more desirable than anonymous childbirth, and concerned institutions in several countries (for example, Switzerland and South Korea [8, 15]) are considering a transition from anonymous to confidential childbirth, as is the Jikei Hospital in Japan, as noted previously. The Japanese approach begins from the same principle as the German approach, namely the child's right to know its parentage.

7 As of March 31, 2017 [10].
8 In Germany, 20% of women hoping to give birth at facilities with Babyklappen relinquished their anonymity because they received support from the hospital personnel before giving birth, and 70% of women did so after giving birth [25, p. 141].


8.2.4 Newborn Adoption

As seen above, a baby-hatch system is intrinsically interconnected with that of anonymous or confidential childbirth—and both are part of a series of support structures for mothers or parents and infants in need. We can now add one more important system to that series: newborn adoption. A safe, pleasant, home-like environment is the best place for babies to be placed and to grow up. However, there is a remarkable contrast between Germany and Japan in this regard. In Germany, most babies left at Babyklappen are handed over to adoptive parents while they are still babies, after being placed with temporary foster parents for eight weeks to allow the depositor to reconsider their decision [14, Chap. 4; 24, pp. 62–63]. In Kumamoto, in contrast, most hatch-babies are before long placed in an infant home, and only months or years later are some adopted by foster parents. Moreover, more than 20% of them are forced to remain in a children's home rather than live with individual adoptive parents [8].

Newborn adoption, which does not go through children's homes, is still rare in Japan, although some child consultation centers9 and non-governmental groups,10 including Jikei Hospital, have promoted and supported newborn adoption, especially in cases of undesired pregnancy [25, Chap. 8].11 Japanese customs and policies have long been oriented toward children's homes. Indeed, among abandoned children in Japan in general, about 90% grow up in a children's home, compared to only 50% in Germany [3; 24, p. 123; 4]. The government has recently changed policies to promote fostering and adoption in preference to placement in a children's home [4, 7].

It should be noted that in newborn adoptions, children are handed over to adoptive parents while they are still babies, and this enables the adoptive parents to develop close relationships with their new child through mothering at an earlier stage than in later adoptions. This helps them be "mothers" in our terminology rather than second-rate parents. The relationship gained through mothering is central here and arises regardless of gender or the fact of having given birth.

8.2.5 Reflection

As long as people adhere to the notion that having given birth lies at the core of motherhood, measures such as baby-hatches, anonymous or confidential childbirth, and newborn adoption can be considered second-best at best. In this case, it would be preferable for people to avoid these measures and for women who have given birth to raise their babies themselves. A focus on a fixed and exclusive connection between having given birth and being a primary parent, and an overemphasis on the biological relationship, could drive birth-givers or biological parents to abortion, child abuse, or abandonment.

In contrast, we regard positively the extended possibility of being a "mother," regardless of having given birth or of a biological relationship. If we consider that whoever engages in mothering will experience the transformation of their existence into that of a "mother," then all parents, such as male primary parents and foster or adoptive parents, can be the best parents and "mothers" through their engagement in mothering. If we do not cling to having given birth or to a biological relationship, and consider giving birth and being a primary parent separately, this could increase the possibility of various parent-child relationships focusing on practical and existential factors. It is reasonable to conclude that a biological and static identity is not essential; rather, constant and dynamic involvement with a child through mothering is vital, and would be sufficiently radical to lead to the transformation or reorganization of one's existence. A significant change in the mindset regarding motherhood is necessary. This is why we have emphasized the possibility of separating giving birth and being a primary parent.

We should certainly not underestimate the one-sided hardship and burden on women who give birth, particularly during pregnancy and childbirth,12 which arises not only from biological factors, but from social and cultural factors as well. However, we should call this bias into question by considering giving birth and being a mother separately, and ascertain whether this one-sidedness has any grounding in reality.

9 The child consultation center of Aichi prefecture has led this trend. The model of newborn adoption developed there is known as the Aichi Style [cf. 31].
10 Regarding the increase in, and the problems of, non-governmental groups dealing with newborn adoption, see [5].
11 As for hatch-babies, newborn adoption is more difficult due to the lack of the biological parents' consent, which is legally required. However, the requirement is reportedly going to be revised soon [9].
12 Concerning pregnancy and breastfeeding, see [19].

References

1. Asahi Shimbun. (2007). Seibu, August 26, 2007, p. 34 (in Japanese).
2. Asahi Shimbun. (2007). Tokyo, August 31, 2007, p. 27 (in Japanese).
3. Asahi Shimbun. (2016). Kumamoto, March 9, 2016, Morning, p. 30 (in Japanese).
4. Asahi Shimbun. (2017). Tokyo, August 3, 2017, Morning, p. 4 (in Japanese).
5. Asahi Shimbun. (2017). Tokyo, September 5, 2017, Morning, p. 35 (in Japanese).
6. Asahi Shimbun. (2017). Tokyo, December 15, 2017, Evening, p. 1 (in Japanese).
7. Mainichi Shimbun. (2020). Tokyo, May 30, 2020, Morning, p. 24 (in Japanese).
8. Asahi Shimbun. (2018). Kumamoto, April 25, 2018, Morning, p. 28 (in Japanese).
9. Asahi Shimbun. (2018). Tokyo, May 25, 2018, Evening, p. 16 (in Japanese).
10. Asahi Shimbun. (2018). Seibu, May 28, 2018, Evening, p. 7 (in Japanese).
11. Chodorow, N. (1978). The reproduction of mothering: Psychoanalysis and the sociology of gender. University of California Press.
12. Doucet, A. (2006). Do men mother? Fathering, care, and domestic responsibility. University of Toronto Press.
13. Hasuda, T., & Kashiwagi, K. (2016). Namae no nai boshi wo mitsumete (in Japanese).
14. Kashiwagi, K. (2013). Akachan-Post to kinkyuka no jyosei [Babyklappen und Frauen in Not]. Kitaoji shobo (in Japanese).
15. Mainichi Shimbun. (2018). Tokyo, May 17, 2018, Morning, p. 15 (in Japanese).
16. Merleau-Ponty, M. (1945). Phenomenology of perception. Routledge, 1962 / Phénoménologie de la perception. Gallimard.
17. Miller, T. (2008). Making sense of motherhood. Cambridge University Press.
18. Naka, M. (2016). The otherness of reproduction: Passivity and control. In N. Smith & J. Bornemark (Eds.), Phenomenology of pregnancy. Södertörn University Press.
19. Naka, M. (2016). The vulnerability of reproduction: Focusing on pregnancy and breastfeeding. Aichi, 28. Kobe University, Faculty of Philosophy. http://www.lib.kobe-u.ac.jp/reposistory/E0041131.pdf.
20. Naka, M. (2018). Some glimpses of Japanese feminist philosophy: In terms of reproduction and motherhood. In J. W. M. Krummel (Ed.), Contemporary Japanese philosophy: A reader. Rowman & Littlefield International.
21. Naka, M. (2018). "Baby-Hatches" in Japan and abroad: An alternative to harming babies. In The European Conference on Ethics, Religion & Philosophy 2018: Official Conference Proceedings.
22. Rich, A. (1986). Of woman born: Motherhood as experience and institution. Norton.
23. Ruddick, S. (1995). Maternal thinking: Toward a politics of peace. Beacon Press.
24. Tajiri, Y. (2016). Hai. Akachan sodanshitsu, Tajiri desu. Minerva shobo (in Japanese).
25. Tajiri, Y. (2017). Akachan post wa soredemo hitsuyo desu. Minerva shobo (in Japanese).
26. Takahashi, Y. (2009). Hamburg no "sutego project" no enjo wo riyo shita jyoseitachi. Teikyo Hogaku, 26(1), 77–111 (in Japanese).

Part III
Environmental Technology

Chapter 9
Domains of Climate Ethics Revisited

Konrad Ott

Abstract Climate ethics (CE) has become an emerging field in applied ethics. CE is not just a sub-discipline of environmental ethics but has its own moral and ethical profile. Meanwhile, CE is no longer only about mitigation and future generations but has expanded to include adaptation, climate engineering, the allocation of burdens, and distributive justice. This article summarizes recent developments in CE and proposes a coherent set of yardsticks for orientation within the different domains of CE.

Keywords Climate ethics · Abatement · Carbon budget · Adaptation · Historical emissions · Climate engineering

9.1 Introduction Literature on climate change is abundant. Beside the scientific, economical, technological, and political literature there has been also an increase in ethical analysis of the many moral problems which are embedded in climate change. The term ‘climate ethics’ (CE) is taken as title for such analyses.1 This article presents a systematic approach in CE, based on distinctions between different domains (topics).2 Each topic entails specific moral problems that must be resolved on ethical grounds

1 Authors who have contributed to the emergence of CE are, among others, Henry Shue, John Broome, Steve Gardiner, Aubrey Meyer, Donald Brown, Edward Page, Michael Northcott, Simon Caney, Marco Grasso, Christoph Lumer, and Christian Baatz. Essential articles are collected in Gardiner [17]. 2 The idea to distinguish different domains is taken from Grasso [21].

This article is an updated version of Ott [43]. The basic structure has been maintained; recent debates and new data have been incorporated. Thanks to Christian Baatz, Margarita Berg, Michel Bourban, Frederike Neuber and Patrick Hohlwegler.


9.1 Introduction

Literature on climate change is abundant. Beside the scientific, economic, technological, and political literature, there has also been an increase in ethical analyses of the many moral problems embedded in climate change. The term 'climate ethics' (CE) is taken as the title for such analyses.1 This article presents a systematic approach to CE, based on distinctions between different domains (topics).2 Each topic entails specific moral problems that must be resolved on ethical grounds via arguments. A comprehensive and systematic CE will be established if well-substantiated solutions ('positions') in each domain can be conjoined coherently. The article gives an outline of the main building blocks of such a climate-ethical theory, whose philosophical background theories are philosophical pragmatism and discourse ethics.

At its core, CE refers to a portfolio of means and strategies by which the causal roots and the negative impacts of climate change are to be addressed. Such means are (a) abatement (reduction of greenhouse gas emissions, sometimes termed 'mitigation'), (b) adaptation, and (c) climate engineering. Climate engineering options (CEO) divide into Solar Radiation Management (SRM) and Carbon Dioxide Removal (CDR) (Sect. 9.8). Climate policies started at the Rio summit in 1992. The decades between 1992 and 2020 are a peculiar period in history, in which both awareness of climate change and GHG emissions increased due to a rapidly globalizing economy. Therefore, debate on assets beyond abatement has become unavoidable (McNutt et al. [31], p. 18). The best way to manage climate change is arguably a portfolio of different strategies. All CE assets, then, might be part of a comprehensive climate policy portfolio.

However, the portfolio approach to climate politics should not oversimplify the problems at stake. Since CEO and abatement might have interdependencies, and CEO might induce a reduction in abatement efforts, a purely additive concept of a portfolio might be myopic (Gardiner [19, 36]). A portfolio perspective may also falsely suggest that economic considerations (cost efficiency) should prevail in determining the 'optimal' response to climate change. Climate ethicists emphasize that moral reasoning should be intrinsic to the portfolio debate. It might make a difference whether an asset in the portfolio only cures the symptoms or addresses the root cause of climate change, namely emissions. The global climate portfolio differs from ordinary investment portfolios since the stakes are huge, moral values are in dispute, risks and uncertainties are pervasive, and collective decisions are urgent. Each specific portfolio of means and strategies must be substantiated not just in terms of economic efficiency or political feasibility, but also in terms of ethical principles. Thus, I discard the figure of a benevolent portfolio manager and adopt a discourse-ethical approach instead. Some sections of the article refer to this ethically reflected moral portfolio structure (Sects. 9.4, 9.7, 9.8). The article also refers to the ethical profile of climate change (Sect. 9.2), a reflection on climate economics (Sect. 9.3), distribution schemes for remaining carbon budgets (Sect. 9.5), responsibility for historical emissions (Sect. 9.6), and a comparison between two competing concepts in CE ('Contraction and Convergence' and 'Greenhouse Development Rights', Sect. 9.9). The article presupposes some familiarity with the basics of climate science and, in Sect. 9.3, with mainstream economics. It has been written at a moment when a young generation worldwide is becoming far more demanding in terms of climate policies. And rightly so! The article strongly supports such demands—despite some peculiar tendencies toward moral hysteria.


9.2 The Ethical Profile of Climate Change: A Perfect Moral Storm

There is a beneficial natural greenhouse effect, and there is natural climate variability. Without CO2, temperatures on planet Earth would be far too low for human life. On geological time scales, the global climate is permanently changing. The recent interglacial Holocene range of temperature has allowed for flourishing cultures since the Neolithic age. On a very short time scale of ≈200 years, however, humans have contributed (and still are contributing) to the natural greenhouse effect by releasing CO2 and other so-called greenhouse gases (GHG) (such as methane) into the atmosphere. The industrial revolution mobilized the subterranean forest of fossil fuels. Arrhenius [4] was the first scientist to recognize a presumptive impact of the release of GHG on the global climate. As a Swedish citizen, however, Arrhenius hoped for warmer skies. GHG release occurs through the burning of fossil fuels (oil, coal, gas) and through land-use change (deforestation, forest fires, draining mires, grazing). Due to human release, the atmospheric concentration of GHG has reached more than 400 ppmv CO2 and roughly ≈440 ppmv CO2-eq.3 The basic physical mechanism of the greenhouse effect is beyond doubt. There are many remaining uncertainties in the details of climate change,4 but the 'big picture' of a warming world partly due to anthropogenic emissions has been scientifically established. Some climate change has become unavoidable as a consequence of 200 years of emissions. More than 50% of all emissions, however, have occurred in recent decades (since 1970). Ironically, increasing knowledge about climate change has run parallel to an increase in global emissions. An extraterrestrial observer might (falsely) conclude that humans intentionally wished to bring about a warmer climate. From the perspective of a human analyst, however, it seems evident that economic forces, technological diffusion, global trade, and population growth were triggering emissions despite a growing sense of alarm since 1990. Scientific understanding of the global climate system has increased, and the models are more trustworthy than they were 30 years ago. So-called 'climate skepticism' does not deserve credit anymore, even if there are some merchants of doubt who still deny anthropogenic effects on the global climate. Recent scientific attention focuses on 'tipping points' and feedback mechanisms in the global climate system. New literature warns against slipping planetary civilization into a 'hothouse' Earth ([61]; see also SRU [55], pp. 32–52).

Anthropogenic climate change is not repugnant in itself. Imagine a world with low CO2 concentrations that would only allow for an Inuit-like human life in a species-poor borealis-type world. If human cultures and biodiversity could flourish only if this 'cold' world were warmed by means of the release of some GHG, most people would not oppose such a strategy. Ceteris paribus, Northern latitudes may gain some benefits from moderate warming.

3 GHG concentrations can be defined in terms of CO2 only or in terms of all GHG, which are calculated in CO2-equivalents (CO2-eq). In the following, I adopt the CO2-equivalent numbers.
4 Oceans as carbon sinks, albedo change, cloud cover, precipitation patterns, thermohaline circulation, stability of the cryosphere, 'tipping points', etc.


Ceteris paribus, Northern latitudes may gain some benefits from moderate warming. Climate change is a moral problem because of its many negative impacts on human systems (and on biodiversity) in the short, middle, and long run. Not all impacts must be seen as negative. The melting of glaciers and the retreat of Arctic sea ice are not bad in themselves, because mountain forests may grow and new shipping routes may become viable. They are bad because water provision schemes are impaired and the sea level rises, putting coastlines at risk. Negative impacts of climate change are those that count as 'bads' according to our axiological common sense. Typhoons, malaria, droughts, and forest fires are bad. In human life, we face bads that are either naturally induced (natural disasters) or result from the behavior of other persons. The latter we call 'evils'. A bad is an evil if caused by human agency. If a human intentionally acts so as to cause evils, such action is prima facie wrong, as it violates the 'no-harm' principle endorsed by most ethical theories. If an evil occurs as an unintended side effect of an action, the action is harmful to some people. Responsible agents take into account the harmful side effects of their actions and may accept duties to omit such actions or to reduce (or minimize) their effects upon others. If an action is prohibited because of its side effects, it becomes permissible insofar as the side effects can be avoided. Minimization is reduction coming close to omission. Abatement of emissions may have this temporal structure. Steve Gardiner [18] has rightly argued that climate change constitutes a 'perfect moral storm'. I wish to mention some aspects of this moral storm, which humans must learn to ride out. The release of greenhouse gases is a harmful side effect of otherwise 'innocent' activities (such as heating, driving a car, etc.). Any single activity contributes only marginally to a global evil that shows up in the aggregate, as global climate models indicate. Many people contribute to the problem, however unevenly, while other people suffer, however unevenly. The sets of contributors and victims may overlap to some extent and are distinct to some other extent. Climate change manifests itself in events that look like natural disasters but may be, at least in part, anthropogenic in origin. Hardly any single weather event can be attributed to human emissions with certainty. To measure the increase in risks due to climate change and to assess liability for such an increase is highly complex [1]. What can be predicted with some confidence is a modification in the probability, frequency, and intensity of events that cause bad impacts for humans. Such disasters are, for instance, floods, droughts, heat waves, forest fires, landslides, the spread of diseases, hurricanes, hard rain, declines in local harvests, desertification, increased water stress in (semi-)arid regions, conflicts over scarce resources, climate-induced displacement, political instability, and the like. Biologists warn that the combination of climate change and highly intensified land-use systems puts roughly 25–30% of all species at risk of going extinct [51].5 This may reduce the resilience of ecosystems against disturbances. Climate change may also affect food prices for the worse and reduce the food security of the poor. The impacts of climate change will fall upon specific individual persons by chance, but there are some 'patterns of likelihood'. A person living on the coastline of Bangladesh is more likely to be affected by floods than a person in the hills of, say, France or Germany.

5 Mass extinction of species is not exclusively triggered by climate change, but also by agriculture, deforestation, loss of habitat, neobiota, and hunting.


All models indicate that poor people in the Global South will have to face most evils. Risks are imposed upon vulnerable people with low capabilities to cope with climatic change. Since these people don't contribute much to overall emissions, they are, by intuition, victimized by polluters living in other parts of the world. Emitters and affected persons do not encounter each other in a face-to-face situation but remain anonymous to each other. Due to inertia and the global nature of the climate system, evil impacts occur at different locations. Victimization across time and space is hard to localize as individual guilt. Nevertheless, decent actions and honorable lifestyles become blameworthy if their CO2 footprint is high. Since wealthy people emit more on average, climate change allows one to blame the global rich on moral grounds. This makes CE attractive to many leftist intellectuals after the collapse of socialism. Climate change becomes a medium for global redistribution if the burdens are allocated to the wealthy strata. This, of course, is part of the moral storm. GHG emissions are collective actions that are not directed against the individual rights of individual persons but create negative external effects on the environmental conditions under which people live. Since climate change already shows effects, the people affected are (a) contemporary adults, especially in poor strata of Southern countries, (b) contemporary children and young adults whose overall life prospects are affected for the worse, and (c) members of future generations who will be born into an age of climatic change. Climate change clearly is a paradigm case for intergenerational responsibility since, once induced, it will continue for centuries even if global emissions peak and atmospheric greenhouse gas concentrations stabilize at some high level (see next section). The general ethical literature on responsibility towards posterity can be applied to the case of climate change. Derek Parfit's 'future individual paradox' does not refute the widespread conviction that there is some moral responsibility towards members of future generations.6 Such responsibility implies, minimally, that it is mandatory to bequeath overall environmental conditions that are not inimical to a decent human life. This implies the prima facie obligation not to change the global climate by GHG emissions in a way that impairs or threatens the dignity, decency, and safety of human life at large. Under extreme scenarios, human life becomes almost impossible in some regions of South-East Asia and the Middle East at the end of the century. In any case, such instances of 'hothouse Earth' must be avoided on a highly crowded planet. There is no reasonable portfolio without stringent abatement [8]. Generally, contributing to the evils of climate change should count as unintended but harmful victimization of other people in distant locations. Victimization is a kind of injury. The facets of such victimization are as manifold as the types of evils associated with climate change. For both Kantian and utilitarian ethicists, it seems hard to accept increases in the standard of living of wealthy persons that impose severe risks on poor people [58]. The rich should not live at the expense of the poor.

6 Ott [41]; the famous future-individual paradox was outlined in Parfit [50]. Parfit himself downplayed the role of the paradox for long-term policymaking.


Three additional aspects of the perfect moral storm should be highlighted. First, the problem arises whether (a) the overall structures of such a perfect moral storm fit coherently with approaches that search for ideally (perfectly) fair solutions, or whether (b) CE should search for solutions that are 'fair enough'. To me, there are no ideal solutions within perfect storms. Second, CE must reconcile the principle of normative individualism (ultimately, the rights and interests of individual persons matter intrinsically) with the fact that the evils of climate change refer to collectives (populations, strata, indigenous peoples, generations). Rights-based approaches are attractive on ethical grounds since rights are (overriding) trumps of individuals, but such approaches must demonstrate that climate change violates, impairs, or compromises individual rights, such as life, liberty, bodily integrity, and property. Third, CE is always close to hyper-moralizing emissions, since economic and consumerist activities come under moral attack. Since moral life is not without emotions, guilt, shame, resentment, anger, dissonance, rage, etc. come into play with respect to carbon footprints. Such hyper-moralizing might not be helpful for stubborn pragmatic reformism (Sect. 9.10).

9.3 Ethical Suppositions in Climate Economics

In a commercialized world, many elites still believe that economics might serve as guidance for how to navigate. Therefore, a critical look at climate economics is mandatory even for climate ethicists. Economists do not wish to avoid climate change at any cost. If energy input from fossil fuels increases the production of commodities but has GHG emissions as an unwelcome side effect, and if the consumption of commodities fulfills preferences while the side effects create negative external effects, then GHG emissions should be curbed only to the extent to which these external effects outweigh the utilities created by consumption.7 Standard economic approaches even rely on the idea of maximizing net present value. The paradigm calculation is William Nordhaus's 'classical' DICE model.8 Richard Tol [64] continues this efficiency approach (EA). Economists calculate the opportunity costs of climate policies. A delay in future GDP growth is costly by definition. If delayed growth of global GDP is presented in absolute $-numbers, costs look horrible (to lay persons). Such cost-benefit analysis of global, unique, and long-term problems such as climate change has raised skepticism even among economists.9 Prudent economists are aware that there are many ethical assumptions in EA models and cost-benefit analyses. Such assumptions are:

– the rate of discount,
– the curving of the damage function,
– the aggregation of impacts in a single welfare function,
– the marginal value of future consumption units,
– the assumed value of a statistical life,
– technological innovation as either exogenous or endogenous to climate change,
– the monetary value of environmental change and loss of biodiversity,
– the costs of displacement and migration,
– insurance schemes and uninsured damages,
– shifts in transaction costs, control costs, and search costs.

7 See Schröder [56], 417, with further references.
8 Nordhaus [40]. Lomborg uncritically relied on Nordhaus's calculations in his "Skeptical Environmentalist" (2001).
9 Gernot Klepper, Ulrich Hampicke, Peter Michaelis, Ottmar Edenhofer, Martin Quaas, to name but a few German economists.

Economic calculations are highly sensitive to these assumptions. It makes a difference whether the damage function is shaped in a linear fashion or whether it allows for non-linearity of damages. A linear damage function models climate change as rather smooth and without unpleasant surprises. The debate on 'tipping points' should count as a reason in favor of non-linearity. Modeling the monetary value of a statistical life according to current salaries may downplay the death of poor humans. Such calculations may fly in the face of moral egalitarianism. The many cultural amenities of a stable natural environment are also downplayed in most economic models. The costs of aggressive abatement increase if technological innovation in carbon-free energy supply systems is modelled as exogenous. Costs decrease if innovations are modelled as endogenous (which is more likely). The Stern Report [62] provides results on mitigation policies different from those of EA. One important modification of the Stern Report compared to DICE is a discount rate close to zero (0.1% p.a.). Due to the low rate of discount, future evils are represented in the net present value almost to their full extent. Setting the discount rate close to zero is a reasonable choice from the moral point of view, as ethical reflections on discounting indicate,10 but it is not based on purely economic grounds. The best available analysis of EA is given by Hampicke [22]. Utilitarian ethicists such as Broome [13] and welfare ethicists such as Lumer [30] come to results on mitigation policies that differ significantly from those of efficiency-oriented economists. The debate on the ethical assumptions within EA motivates many (prudent) economists to adopt an alternative approach.11 This alternative is called the 'Standard Price Approach' (SPA). This approach supposes a given standard ('objective') set by some legitimate authority (democratic politics, fair negotiation, international treaties, discourse-based procedures, scientific findings in conjunction with the precautionary principle, etc.). The primary task of economics, then, is to calculate how this objective can be reached while minimizing opportunity costs. In SPA, the role of economics is less that of a master of rational choice than that of a servant to legitimate political objectives.12
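How strongly EA results hinge on the discount-rate assumption can be illustrated by a back-of-envelope net-present-value computation. The damage figure and the two rates below are purely illustrative and are not taken from any of the models cited:

\[ PV = \frac{D}{(1+r)^{t}} \]

For a damage of D = 1 trillion $ occurring in t = 100 years:

\[ r = 0.1\%:\; PV = \frac{10^{12}\,\$}{1.001^{100}} \approx 0.90 \times 10^{12}\,\$, \qquad r = 3\%:\; PV = \frac{10^{12}\,\$}{1.03^{100}} \approx 0.05 \times 10^{12}\,\$ \]

Under a near-zero, Stern-type rate, about 90% of the future damage enters the present value; at a conventional 3% rate, the very same damage shrinks to roughly 5% of its nominal size, which goes a long way towards explaining why EA models recommend far less abatement.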

10 Cf. the contributions in Hampicke and Ott [23].
11 The problems of EA increase if not only mitigation but adaptation and climate engineering are addressed, too. If EA can't calculate the efficient solution for mitigation policies alone, it can't calculate, a fortiori, the efficient solution in the triangular affair between mitigation, adaptation, and modes of climate engineering. To determine the 'efficient' solution of mitigation, adaptation, and climate engineering in a global welfare function over a century is, at best, a utopian ideal and, at worst, a misleading, dangerous, and chimerical myth.


In the case of climate change, the standard consists in global stabilization targets. Such a stabilization target can be defined either in terms of global mean temperature or in terms of atmospheric GHG concentrations. Climate sensitivity is the crucial factor for converting temperatures into concentrations and vice versa. To adopt SPA implies arguing about supreme targets.

9.4 Stabilization Targets

Art. 2 of the United Nations Framework Convention on Climate Change (UNFCCC) defines the ultimate objective of this convention and of all related protocols: to stabilize atmospheric greenhouse gas concentrations at a level that prevents dangerous anthropogenic interference with the climate system.13 Very often, a 'tolerable window' approach has been specified with reference to the increase in global mean temperature (GMT). Very popular is the so-called '2 °C target' proposed by the WBGU [69]: GMT should not increase more than 2 °C compared to pre-industrial GMT, since the overall sum of evils and risks associated with a higher increase in GMT might become too high. Some scientists, such as James Hansen, argue that a 2 °C increase in GMT is still too risky, since the ice sheets of Greenland and Antarctica might melt down slowly but steadily (over centuries) at such a GMT. Ultimately, the justification of this 2 °C target remained unclear in the reports of the WBGU. Years ago, a study on behalf of the Environmental Protection Agency [48] outlined a (meta-)ethical argument in favor of very low GHG stabilization levels. The study compared CE approaches that rely on competing ethical theories. Almost all approaches (except contractarianism) reached the conclusion that there is a collective moral commitment to curb global GHG emissions in order to reach very low GHG stabilization levels. This agreement encompassed variants of utilitarianism, welfare-based consequentialism, deontological approaches, Rawlsian approaches, Aristotelian prudential approaches, physiocentric approaches, and Hans Jonas' ethics of responsibility.14 There is, indeed, a remarkable convergence of different ethical theories. Ethical convergence counts in fields of practical philosophy such as CE. This convergence has become even more robust since 2004. Earth system analysts such as Johan Rockström have argued that there is much evidence that the Holocene range of temperature is worth maintaining. Thus, ethicists and earth system analysts agree. Clearly, the clause 'as low as possible' must be interpreted and specified.

12 At least with respect to environmental problems, SPA has several advantages over EA, since it is hard to see how the 'efficient' pollution of air, rivers, and marine systems or the 'efficient' number of species on planet Earth might be calculated.
13 This objective has three normative constraints which I must leave aside here.
14 If one takes a closer look at the recent literature from religion-based ethics, this convergence broadens.


The camps of ideal justice and 'fair enough' approaches (Sect. 9.2) perceive feasibility differently. The (dialectical) term 'feasibility' obscures many economic, political, and cultural assumptions with respect to a highly non-ideal world of non-compliance, inertia, ignorance, myopia, obscuring ideologies, and perverse incentives. In ideal theory, all societal affairs are feasible that do not contradict natural laws; actually, however, some states of affairs are not politically feasible without high risks of disruption, protests from different camps (such as the 'Gelbwesten'), dwindling cohesion, resistance, and even civil war. It is feasible to phase out German car production, but this comes at a loss of roughly 80 billion € annually. Thus, it is infeasible in the short run. Prudent climate politics should make new feasibilities more feasible (see Sect. 9.10).

In November 2015, the 21st Conference of the Parties (COP21) to the United Nations Framework Convention on Climate Change (UNFCCC) took place in Paris. All 197 participating states mutually agreed on the danger and urgency of climate change as a foremost threat to human society. The final Paris Agreement [66] aims at keeping the increase in global mean temperature 'well below 2 °C above pre-industrial levels' [66]. The Paris Agreement is in line with Art. 2 of the UNFCCC. This objective defines a tolerable window that is not without any risks but keeps temperatures in a Holocene range that allows for adaptation, de-carbonization, and negative emissions. As a recent special report from the IPCC indicates, it would be desirable not to exceed 1.5 °C GMT. Thus, scientific findings, ethical analysis, and political negotiations are meanwhile in a kind of reflective equilibrium. Therefore, it seems mandatory to keep GMT well below 2 °C and desirable to come close to the 1.5 °C objective. The 1.5 °C target would have to mobilize a globally concerted effort to abate aggressively. Note that some NGOs redefine the 1.5 °C target as the mandatory one. Under such an ideal mandatory target, the remaining carbon budget becomes very small indeed.

The scientific problem remains how atmospheric GHG levels are associated with GMT. The crucial variable is climate sensitivity.15 Years ago, the IPCC gave a best guess for climate sensitivity of roughly 3 °C. If this 'best guess' is adopted, one can assess the probabilities with which a 'well-below-2 °C' target might be reached. If one wishes to reach the 'well-below-2 °C' target with high probability, global emissions must be curbed rigorously. There is more leeway if one regards a 50% probability of reaching the target as sufficient.16 In any case, taking the 'well-below-2 °C' target seriously requires that GHG concentrations remain far below 500 ppmv CO2-eq. A temporary overshoot above 450 ppmv CO2-eq might be tolerable if and only if it is compensated by negative emissions later (see Sect. 9.8). Given all the carbon on planet Earth, especially dispersed coal resources, given economic growth, and given roughly 9.6 billion humans in 2050, it will be highly difficult to reduce GHG emissions in the required order of magnitude.
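How concentrations translate into temperatures can be sketched with the common logarithmic forcing approximation, under which equilibrium warming scales with the logarithm of the concentration ratio. The following is only a rough illustration using the IPCC 'best guess'; transient warming would be lower, and the approximation itself has its limits:

\[ \Delta T_{\mathrm{eq}} \approx S \cdot \frac{\ln(C/C_{0})}{\ln 2}, \qquad S = 3\,^{\circ}\mathrm{C},\; C_{0} = 280\ \mathrm{ppmv},\; C = 450\ \mathrm{ppmv\ CO_{2}\text{-}eq}: \;\; \Delta T_{\mathrm{eq}} \approx 3 \cdot \frac{\ln(450/280)}{\ln 2} \approx 2.0\,^{\circ}\mathrm{C} \]

At the best-guess sensitivity, then, even stabilization at 450 ppmv CO2-eq sits at the very edge of the 2 °C window, which is why concentrations must remain far below 500 ppmv CO2-eq.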

15 Climate sensitivity is defined as the increase in GMT at a CO2 level of 560 ppmv (twice the pre-industrial CO2 concentration).
16 Betz [11] claims that the IPCC methodology of modal verificationism, by which climate sensitivity is determined, should be replaced by modal falsificationism. If so, there will be more reasons for concern and precaution.


nation’s share to realizing this supreme goal. The NICs are a crucial mechanism under the Paris Agreement. Because the NIC’s will amplify every five years, it is not in a state’s interest to define ambitious NDC’s in the first instance. Existing NICs are not sufficient to reach the ‘well-below-2 °C’ target, but would result in a 2.5–2.8 °C GMT warming. Many NIC’s made by countries of the Global South are conditional on proper adaptation financing. If conditional NIC’s fail, GMT will increase 3 °C (or even more). The later the global emissions peak, the steeper the pathways will be and the higher the likeliness to miss the target. Given current GHG concentrations (440 ppmv CO2 -eq) and an increase of 1.5–1.9 CO2 ppmv each year, ‘well-below-2 °C’target would imply that the net intake of GHG into the atmosphere must be stopped within two decades. Global emissions must peak in 2025 (sic!) and continuously decline afterwards as steeply as possible. In order to reach this ‘well-below-2 °C’goal, global GHG emissions should reach net zero by the end of the twenty-first century, asking for drastic emission cuts (= abatement) in coming decades for most countries except least developed countries, but including the BRIICS-states (Brazil, Russia, India, Indonesia, China, South Africa). To stay well below 2 °C in the longer run requires future negative emissions in most models (see Sect. 9.8 on CDR). If one wishes to reach 1.5 °C with certainty, global carbon budget dramatically shrinks to 400 Gt being consumed away within a decade at current speed. Other targets increase the budget up to 1.000 Gt. In some scenarios, the budget is consumed away within 20–30 years. If so, negative emissions must compensate for likely overshoot (Sect. 9.8). The size of the remaining carbon budget changes according to the chosen target and the probabilities associated with such target. In any case, there won’t be open access to the sink capacity of the atmosphere any more. Any stabilization level defines a remaining global carbon budget that has to be distributed ‘fairly’. The smaller the budget, the harder the case for distribution. In asking for a fair distribution of the remaining budget, the question reoccurs: ‘ideally fair’ or ‘fair enough’?

9.5 Distribution Schemes for Remaining Emission Entitlements

There are many schemes for how the carbon budget under a given target structure should be distributed.17 Since emission entitlements are only one good among many, it is tempting to embed the problem at stake into a more comprehensive ('holistic') theory of (ideal) (global) distributive justice [14, 15]. But since such a holistic theory will be essentially contested (with respect to the scope, currency, and pattern of justice, and because of the role of equality), it seems more viable to isolate emission entitlements as one specific good, the distribution of which is to be determined irrespective of how other goods are or should be distributed in a globalized world.

17 Grandfathering, basic needs, Rawlsian difference principles, proportionality, per-capita schemes.


Such an 'isolationistic' approach has clear advantages, at least in terms of political viability (Baatz and Ott [9]). Isolationism takes the remaining carbon budget, however specified in terms of Gt, as a resource to be divided among claimants. Holism distributes burdens, while isolationism distributes a resource budget. Pure holism tends to allocate almost all burdens to wealthy people, while pure isolationism abstracts away the background distribution of wealth. Both purities have shortcomings. A conceptual middle ground between holism and isolationism might be dubbed 'connectism'. Distributive justice with respect to the shrinking carbon budget might be connected to other topics of justice, such as combating absolute poverty or nature conservation, by way of argument. Under a 'fair enough' approach, I follow the maxim: isolate first, connect later. See my proposal in Sect. 9.7 on how to connect nature conservation with adaptation financing. Emission egalitarianism seems fair enough because it is pro-poor and does not deprive the Global North of all carbon resources. If one assumes, first, that the atmosphere has the status of a global common-pool good,18 and if one, second, adopts the Rawlsian intuition on justice that all goods should be distributed equally unless an unequal distribution benefits all, egalitarian schemes deserve special attention. Therefore, I have argued in favor of an egalitarian per-capita approach in more detail, following Aubrey Meyer [34]. The argument claims that it is fair to shift the burden of proof to those who favor unequal distribution schemes for global common-pool goods. From the moral point of view of normative individualism, however, it seems hard to argue why, say, a German deserves a larger share of the sink than a Nigerian. Note that this argument is made at a specific point in history (the end of the twentieth century). At best, it is valid since then. If so, it can't be prolonged into the past without further assumptions (see the next section on historical emissions). For the time being, however, a per-capita scheme looks fair enough. Reasons in favor of unequal distribution might be that some people deserve more entitlements from a moral point of view or that unequal schemes are beneficial to all consumers in the global village. Reasons to sidestep egalitarianism are claims to special needs legitimizing basic emissions. A plausible claim with respect to special needs might be the claim for heating in winter in the North. Heating needs, however, can be addressed by technological innovations and social policies in rich Northern countries and should not open the Pandora's box of a global debate on special needs. An egalitarian scheme is well advised not to address too many special needs. Special needs in combination with the quest for ideal justice will result in endless, but highly moralized and politicized, debates. What about special needs for cooling in (sub)tropical countries to provide children with fresh dairy products? There might be claims to special needs for personal PCs in a digitalized information society. Is there a special need to cool mega-towers in the Arab emirates? With respect to special needs, egalitarianism is not unfair to the Global South.

18 I have changed my mind on this ontological-economic concept several times. Meanwhile, this assumption seems not as flawed as Baatz and Ott (2016) argued. This assumption needs more refinement.


An egalitarian scheme under the 'well-below-2 °C' objective would mean that each person has a carbon budget of roughly 1.8–2.0 tons of CO2 per year.19 'Well below 2 °C' and per-capita emissions of 2 t/p/y are two normative building blocks of my approach in CE. This approach is known as 'Contraction and Convergence' (C&C, see [34]). If the global carbon budget is allocated to national states according to their populations under an egalitarian scheme, it demands emission reductions of at least 85% (Germany) or even more (U.S.). If properly implemented on a global scale, such an egalitarian scheme launched via emission trading has the welcome effect that persons with low emissions (India, Sub-Saharan Africa) would benefit because they can sell their entitlements. Egalitarian schemes can be seen as income generators for Southern countries if such countries do not increase their domestic emissions. In any case, low-emitting countries face the choice either to sell or to use. C&C can operate on different time scales. Emission schemes might be equalized within a few decades. Note that C&C is highly demanding for the BRIICS states: Brazil, Russia, India, Indonesia, China, and South Africa. C&C requires deep emission cuts for China instead of a decline per unit of GDP. The counter-argument remains that egalitarian schemes are highly unfair to newcomers, since most of the sink capacity of the atmosphere has been consumed away by past emissions. The sink has been occupied by the countries that industrialized first. Is it, on the one hand, fair enough to divide the remaining budget equally after a long historical period of unequal consumption? Are, on the other hand, macro-historical events such as industrialization objects of moral judgement at all? What if one wishes to answer both questions in the negative? We must turn to the antinomic problem of historical emissions.

19 This approach should be based on a benchmark year to avoid incentives for pro-natalistic population policies. It is highly doubtful whether restrictive population policies, as in China, can be regarded as 'early action' in mitigation policies.
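Before turning to that problem, the magnitudes of the egalitarian scheme can be checked by simple division. All inputs are illustrative round figures: a target-compatible global emission level of about 15 Gt CO2 per year, a population of 7.7 billion, and current per-capita emissions of roughly 11 t CO2-eq (Germany) and 20 t CO2-eq (U.S.):

\[ \text{allowance} = \frac{E_{\mathrm{global}}}{P} \approx \frac{15\ \mathrm{Gt\,CO_{2}/yr}}{7.7 \times 10^{9}} \approx 1.9\ \mathrm{t\ CO_{2}\ per\ person\ and\ year} \]

\[ \text{required cut} = 1 - \frac{\text{allowance}}{\text{current per-capita emissions}}: \quad 1 - \tfrac{2}{11} \approx 82\%\ (\text{Germany}), \qquad 1 - \tfrac{2}{20} = 90\%\ (\text{U.S.}) \]

These rough numbers reproduce the order of magnitude of the reductions named above; the exact percentages depend on which gases, base years, and population projections are chosen.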

9.6 Responsibility for Historical Emissions?

Northern countries started to emit GHG in the course of industrialization. Mainly these countries filled up the common atmospheric sink until the 1960s. Now, Southern countries claim that there is a huge historical debt of the North towards the South. The problem of past emissions is haunting political negotiations even after Paris. Past emissions play a role in determining the meaning of "common but differentiated responsibility" (UNFCCC). At first glance, historical responsibility seems obvious because cumulative historical emissions are decisive for climate change. On ethical reflection, however, historical emissions are puzzling. Causal responsibility does not imply moral responsibility. In the remote past, all persons (except some readers of Arrhenius) were ignorant of the causal relation between GHG emissions and future climate change. Ignorance matters in cases of unintentional side effects. To which degree are descendants of ignorant people liable for their actions?


We can't blame our ancestors for burning coal and drilling for oil. Historical emissions turn out to be harmful, but they have not been wrongful. At the end of the 18th century, the Little Ice Age came to its end. The collective climate memory kept long, freezing winters in mind. There was no moral imaginary of global warming until 1990. It felt good to have a warm house in winter times. Fossil-fuel inputs into production created an "empire of things" [65], bringing about comfort, convenience, and prosperity. Even if historical responsibility might be agreed upon in principle, the devil is in the details: Should there be a benchmark year after which responsibility cannot be denied, or is there full responsibility for all past emissions? Should historical changes in land use also be taken into account (deforestation in North America in the 19th century)? What about emissions of states that do not exist anymore (such as the USSR)? What about the (folly) emissions under Mao's regime caused by deforestation in order to produce steel in the Chinese countryside? What about the emissions caused by warfare? How can past emissions be measured? If there is a historical debt for past emissions, why not add more historical debts for colonialism and for the slave trade?20 Did Northern countries pay back some or all historical debts via development aid (globally 100 billion per year over decades)? Historical emissions open a can of worms. Citizens of Northern countries are, on the one hand, beneficiaries of the past creation of wealth accompanied by GHG emissions.21 On the other hand, the historical sources of present wealth are manifold. There were wars, inventions, saving rates, labor, surplus accumulation, investments and returns, foreign trade, taxation schemes, social mobility, and many other factors influencing wealth creation since 1850. It remains doubtful whether GHG emissions are a good proxy for wealth generation. Socialist countries remained poor despite high GHG emissions (GDR, USSR). It seems undeniable that GHG emissions played some role in long-term wealth generation, but such emissions do not make current wealth simply a kind of unjust enrichment rooted in the past. History sets some limits to morals and ethics. We should not be forgetful about the past, but we should also refrain from trying to calculate exactly how large the historical responsibility really is, because such calculations would rest on many contested and arbitrary assumptions. It seems fair enough if the GHG legacy of the Global North becomes a reason for citizens of Northern countries (1) to recognize themselves as beneficiaries of past emissions, (2) to recognize that cumulative past emissions turn out to be harmful in the present, (3) to agree to duties to compensate victims of more recent emissions, and (4) to adopt the attitude of generously assisting countries in the Global South in the fields of technology transfer and adaptation. Thus, readiness to invest a (small) fraction of the wealth of the North into technology transfer and adaptation funding in the South is not just a noble benevolent attitude, but mandatory under a 'fair enough' approach. Yes, 'we' are indeed beneficiaries of the contingent fact that the industrial revolution occurred in the North, but macro-historical events, as the industrial revolution clearly is, can't be moralized.

20 How deeply are Arab countries indebted to Sub-Saharan Africa, given that there was slave trade over centuries before the Europeans took part in the slave trade?
21 Arguments in favor of the beneficiary account are given by Gosseries [20] and Caney [14].


The legacies of wealth constitute a weak, non-legal liability. The duties of compensatory justice refer mostly to our current emissions, but also to more recent historical ones. I feel responsible for the overall GHG emissions since my infancy, even if I only became fully aware of climate change in my thirties.22 Compensatory justice and some redress for historical emissions paid by beneficiaries might transform politically into generous adaptation financing. Generosity has been seen as a virtue since ancient times. The virtue of generosity corresponds to the contrary vices of avarice and of a kind of overspending which brings poverty and misery to one's own household. The virtuous attitude of generosity must be specified by adaptation financing schemes.

9.7 Adaptation Opportunities

Humans are practical beings with large capacities for problem solving. They are settlers on a global scale who can cope with a great variety of environmental conditions. These capacities can be used for climate adaptation strategies.23 Adaptation strategies may include buildings like dikes, behavioral patterns like the siesta, protective strategies against forest fires, improved water supply systems in arid regions, different crops in agriculture, and the like.24 Adaptation has a broad spectrum across different dimensions of societal life. There are adaptation aspects in agriculture, forestry, freshwater supply, urban planning, transport, medicine, education, disaster management, investment decisions, gender issues, and the like. The practice of adaptation must combine personal initiative with political prudence. Rich countries can utilize scientific knowledge, financial capital, political administration, and infrastructures in order to implement adaptation strategies on their own.25 Given only modest climate change well below 2 °C and proper adaptation strategies, the prospects for the temperate zones are not completely bleak. The situation is different in Southern countries, where many institutional preconditions for effective adaptation are lacking despite several decades of "capacity building" and "empowerment" financed by development assistance.26 I sidestep all politicized debates on the postcolonial situation and on the causal roots of the bad situation of many countries in the Global South.27

22 My parents were ignorant about climate change and simply enjoyed cars and family holidays in prosperous Western Germany between 1965 and 1990.
23 The concept of adaptation must be secured against biological definitions of the adaptation of organisms to a hostile environment. If not, adaptation to climate change might be seen as an instance of the survival of the fittest.
24 A conceptual framework on adaptation strategies is given by Smit et al. [60].
25 Germany has already adopted a national adaptation plan.
26 It should be asked whether such assistance should be additional to ordinary development aid (ODA), as most NGOs suppose. This problem is not addressed here, since such a debate relies on assumptions about how well or badly the 100 billion $ of ODA are spent each year. One may ask whether strict emission reduction (80–90% compared to a 1990 benchmark), a doubling of ODA (to 0.7% of GDP), and additional burdens of adaptation funding would not gradually become somewhat overdemanding even for rich societies that have to deal with many problems other than climate change.


The emerging adaptation discourse suggests that there can be no such thing as a global master plan for how to adapt. Adaptation is literally 'concrete' because internal and external resources, cultural lifestyles, and patterns of environmental practices must be rearranged within adaptation strategies. Adaptation is often small-scale. There are already adaptation funding schemes for Southern countries under the UNFCCC regime. They will develop further. Adaptation financing has three crucial dimensions. First, some countries must pay fair shares into such global funds (input). Second, money must be transferred to adaptation projects according to some criteria (output). The criteria must be ethically reflected. Third, expenditures must be controlled in order to safeguard the integrity of the overall funding scheme (control). There are some analogies between adaptation financing and development assistance with respect to how to spend money. Some calculations indicate that even 100 billion $ per year wouldn't be sufficient for global adaptation. Since adaptation is cross-cutting and the Global South is large, calculations can be driven up to trillions. Moral concerns are catalysts of $-numbers up to trillions. A lump sum of 100 billion per year looks fair enough for the time being.28

Funding is the input side. On this input side, we face the problem of non-compliance. Imagine that Germany has a fair share of 10 billion and the U.S. has a fair share of 25 billion. Now the U.S. refuses to pay a single buck into adaptation funding. Do Germany and other states have a moral duty to increase their payments, or might it be acceptable that the lump sum of adaptation funding becomes lower than it should be? If the moral point of view is victim-driven, decent states should compensate for non-compliance by additional payments. Non-compliance is, however, not a purely moral but intrinsically a political problem. The intrinsic political dynamic of non-compliance is this: the fewer agents comply, the higher the burdens for the remaining ones; the higher the burdens, the higher the likelihood of further non-compliance, and so on. A victim-driven approach has to face the risk that compliance collapses.29

27 See Seitz [57] and Menzel [33] for critical overviews.
28 This 100-billion-$ figure does not include the costs of provision and resettlement of displaced persons, which is a hard special case of adaptation. The case of climate-induced displaced persons is beyond the scope of this article (see [49]).
29 Distributing migrants among EU countries seems a fair analogy.
30 The following remarks have profited from Baatz and Bourban [7], even if I come to different conclusions than the authors.
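The compliance dynamics just described can be made vivid with a toy model: a fixed lump sum is split among the states that still comply, and each state drops out once its share exceeds its cap on willingness to pay. All caps are hypothetical; only the structure of the cascade matters:

# Toy model of the compliance cascade (all willingness-to-pay caps hypothetical).
TOTAL_FUND = 100.0  # billion $ per year, the lump sum discussed above

caps = {"A": 30.0, "B": 25.0, "C": 18.0, "D": 15.0, "E": 14.0}  # billion $

compliers = set(caps)
while compliers:
    share = TOTAL_FUND / len(compliers)  # equal split among remaining states
    dropouts = {s for s in compliers if caps[s] < share}
    if not dropouts:
        break
    compliers -= dropouts
    print(f"share {share:5.1f} bn -> dropouts: {sorted(dropouts)}")

print("remaining compliers:", sorted(compliers) or "none")

In this constellation, the first dropouts raise the shares of the remaining states, which triggers further dropouts until nobody pays at all. The point is not the particular numbers but the cascade structure that a victim-driven duty to top up payments has to reckon with.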


Fund money must be spent. How to spend it? Since adaptation funding is done under conditions of resource scarcity, applications for such funding must be governed and controlled by criteria and procedures.30 Under real-world conditions, one should not expect that all applications for adaptation funding are honest ones. Even if the 'pay' is done out of moral reasons of compensatory justice (historical responsibility, generosity, benevolence), the 'take' might be strategically organized. Adaptation funding, as any funding, might be seen as an opportunity to grab money. There is a peculiar ethical dialectic to assistance and aid: on the one hand, it relies on moral attitudes, while on the other it easily falls prey to strategic behavior if it is done without sobriety and prudence. If so, we need a critical look at criteria [6, 7]. One meta-criterion is efficiency. Efficiency is directed against wastefulness. The 100 billion $ should generate as much adaptation to climate change as possible. As a matter of fact, adaptation funding might rely on capacities to write colorful proposals. States that invest in such capacities may see adaptation funding as a return on investment. As in the system of science, funding proposals will be full of catch phrases, rhetoric, hollow promises, etc. Funding is open to systems based on client-patron relations. Adaptation funding must strictly secure itself against narratives of misuse presented in the media with the strategic interest of stopping payments. By moral intuition, the most vulnerable, victimized, and marginalized groups should be the first beneficiaries of adaptation funding. Vulnerability to climate change impacts is constitutive for adaptation financing (Baatz, oral communication). Grasso [21] proposes the following vulnerability-based decision rule for spending: the lower the overall level of human security (high vulnerability), the more adaptation funds are due. It can't be denied that vulnerability and human security are important criteria for funding priorities. If, however, these criteria remain unbalanced by other criteria, an inconvenient consequence may result. Imagine countries of the South competing against each other for adaptation money under a vulnerability criterion. If so, they must present themselves in the application procedures as being more vulnerable than others. If so, vulnerability as a supreme criterion implies a perverse incentive to present oneself as poor, helpless, ignorant, devoid of capabilities and initiative, and so on. If such an outcome is to be avoided, the criterion of vulnerability should not be the only one. Vulnerability is Janus-faced. In the recent literature, a 'democracy' criterion has been proposed [7]. This criterion assumes that adaptation performs better if there is a public sphere of reasoning, respect for human rights, fairly elected parliaments, responsible authorities, low corruption, etc. There are three arguments in favor of a democracy criterion [7]: First, adaptation requires collective decision-making and the participation of affected parties. Second, local knowledge should be incorporated. Third, corruption should be low. All requirements are far better fulfilled in democratic states. This democracy criterion, however, has major disadvantages: It allows financing political forces (NGOs, even parties) even if such forces only engage in "democratization" but won't perform adaptation measures themselves. Success of democratization is almost impossible to control. Wastefulness of money becomes likely under this criterion. Moreover, the criterion politicizes funding decisions, and it may serve as a reason to stop payments into the fund. Governments may complain if oppositional forces are financially supported via adaptation funding. They may perceive such support as an unfair influence in domestic affairs. States complain if they are put in the class of non-democratic states and may withdraw their contributions to abatement. The criterion also consumes contested assumptions from political science and from democracy rankings.


Democracy comes and goes in degrees, and there are elections in many authoritarian states. Finally, the criterion has to face the harsh problem that many vulnerable people live in non-democratic countries. If these people remain entitled to adaptation funding, the criterion becomes pointless. At best, the criterion opens casuistries of whether and how (not) to finance adaptation in non-democratic countries. All things considered, a democracy criterion is flawed in many respects. My proposal takes a different route: Many poor people in the Global South do not live in abject misery but use their indigenous knowledge to reproduce a decent livelihood that does not devastate natural environments. Adaptation funding should devote a substantial fraction of money to indigenous communities that have long sustained a non-miserable livelihood and might continue to do so even under climate change impacts. Adaptation funding should reward and stimulate activities by which adaptation is linked to other objectives of genuine sustainable development. According to the concept of strong sustainability [47], adaptation funding should pay attention to the conservation and restoration of different stocks of natural capital. All around the world, there are many such activities: community-based forestry in Nepal, water harvesting in the Sahel, reconstruction of traditional water storage systems in Iran, revitalizing the fertility of degraded soils with charcoal, terracing hills against landslides, bringing moisture back into landscapes, increasing local species composition by organic agriculture, and the like. Thus, global adaptation spending might support and stimulate such nature-friendly activities. Activities that combine local adaptation, biodiversity conservation, ecosystem restoration, and carbon storage should be highly welcomed. I shall say some words on so-called natural climate solutions in the following section on climate engineering.

9.8 Climate Engineering

In his last publications, Edward Teller [63] proposed solar radiation management (SRM) as a technical measure against climate change. According to Teller, a doubling of CO2 concentrations (560 ppmv CO2-eq) could be compensated by a decrease of roughly 2–3% in the solar radiation reaching the surface of planet Earth. Some years after Edward Teller's rough calculations, Paul Crutzen [16] urged active scientific research on stratospheric aerosol injection (SAI), a technology that might possibly reduce radiative forcing by injecting sulfate particles into the stratosphere (overview in [39]; for technical details see [38]). Crutzen's article was soon followed by Victor [67] and Victor et al. [68] in the affirmative: 'It is time to take geoengineering out of the closet'. Robock [53] presented a list of 20 arguments why geoengineering may be a bad idea. The Royal Society launched a report in 2009. Keith [27] proposed SAI research. 'Earth's Future' devoted a special volume to CEO ten years after Crutzen's paper. CEO technologies can broadly be categorized into Solar Radiation Management (SRM) and Carbon Dioxide Removal (CDR), containing different assets and having highly different moral profiles [46].
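The order of magnitude of Teller's figure can be checked with a zero-dimensional energy balance. Assuming standard textbook values (a solar constant of about 1361 W/m², a planetary albedo of 0.3, and a radiative forcing of roughly 3.7 W/m² for doubled CO2), the required dimming is:

\[ \bar{Q}_{\mathrm{abs}} = \frac{S_{0}(1-\alpha)}{4} \approx \frac{1361 \times 0.7}{4} \approx 238\ \mathrm{W/m^{2}}, \qquad \frac{\Delta F_{2\times \mathrm{CO_{2}}}}{\bar{Q}_{\mathrm{abs}}} \approx \frac{3.7}{238} \approx 1.6\% \]

A reduction of absorbed sunlight on the order of 1.5–2% thus lands in the same ballpark as Teller's 2–3% of radiation reaching the surface.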


In order to control and stabilize the global mean temperature, SRM technologies directly influence the energy balance of the earth by reflecting incoming sunlight and thus influence radiative forcing. The idea of space mirrors is not very promising due to technical and financial difficulties. Deployment costs would be in the magnitude of some trillions. The idea of 'whitening' areas of land in order to change the albedo has been discarded due to lack of effectiveness. Some more promising strategies include the enhancement of cloud formation (cloud seeding) over the ocean, which alters the albedo and shields a significant amount of sunlight. However, the physics behind cloud formation is imperfectly understood, which may lead to great uncertainty regarding side effects (Royal Society [54]). Another idea to modify clouds refers to Arctic cirrus clouds [29]. The injection of sulfate aerosols (SAI) has been researched and debated at large. The latest comprehensive overview of the different aspects of this technology can be found in the National Academy of Sciences' study [31, 32]. SAI is the most tempting SRM technology because it is technologically feasible, deployment costs look cheap, and it brings about quick effects in cooling the earth. SAI technologies, however, don't address the root cause of climate change: the concentration of CO2 in the atmosphere. This is a reason for a mostly negative stance towards SAI and other SRM technologies, since a merely symptomatic approach towards climate change is associated with the repugnant attitude of 'techno-fixing' climate change. SAI seems an attractive option to many scientifically credible scholars, especially in the US. The popular message is simple: If there is a quick, cost-efficient, effective technological solution to the problem of climate change by which a decline in economic growth and a change in consumerist lifestyles can be avoided, governments should not hesitate to go for such a solution.31 In case of emergency, even unilateral action by technologically advanced nation states might be the ultima ratio. Betz and Cacean [12] have mapped the arguments for and against SRM with respect to theoretical research on SRM, small-scale experiments, large field tests, and full deployment. An overview of the discourse on CEO is given in Ott and Neuber [46]. Here, some arguments are presented in a nutshell. Some arguments use the concept of hubris: Engineering planet Earth might be an instance of such hubris. Such hubris may conjoin with moral corruption in a specific attitude of playful tinkering with the global thermostat. Under modern conditions, the concept of hubris must be secularized. Hubris means overrating the technological, political, and moral capacities of humans to perform SAI on a planetary scale. Such hubris arguments follow Jonas [25], who warned against such attitudes. With respect to the political economy of SRM, it might be argued that SRM should be seen as a protective measure in favor of outdated fossil fuel industries, with their high-emission profiles, against the global diffusion of smart 'green' industries with comparatively low GHG emissions [45].

31 Sometimes it is added that the problem of global cooperation in the mitigation of GHG can easily be turned into a technological joint-effort problem.


SAI fits frighteningly well within the profile of the most questionable variant of capitalism and its military-industrial complex. The political economy of SAI rests, however, on many contested assumptions. One assumption concerns trustworthiness: Should humankind entrust research and deployment of SAI to states which didn't contribute to abatement measures, obstructed the UNFCCC process, and refused to pay into adaptation funding? The fossil fuel empire may well strike back via SAI. Moreover, there are risk-based ethical concerns against SAI. Once fully deployed, SRM can't easily be stopped if it is not combined with stringent abatement policies (the 'termination problem'). If SAI is used as a substitute for aggressive abatement, atmospheric CO2 concentrations will continue to rise. Only the effect of the high CO2 concentration, namely the temperature rise, will be artificially stabilized. An abrupt termination of SAI (for example, due to social and political upheavals) would lead to an accelerated heating enforced by the large volume of atmospheric CO2. This might be disastrous for the climate and the biosphere. Additionally, this situation could put future generations in a dilemmatic situation: Either they would have to decide to continue operating SAI, with possibly fatal side effects, or they would have to accept accelerated climate change [44]. If SAI brought about many negative side effects (acid rain, changing precipitation patterns, damage to the ozone layer), future persons might be trapped in a dilemmatic situation. The termination problem is, however, coined for a specific case: SAI deployment without adequate abatement, followed by sudden termination. The argument must be refined if termination of SAI is performed gradually over a longer period of time in combination with strong abatement. Such a strategy comes close to 'buying time' for abatement via restricted SAI. Nevertheless, the termination problem still gives reasons for concern. It presupposes a somewhat stable political governance capacity to deploy, control, secure, assess, and terminate SAI. Abrupt termination due to political crisis remains possible. The termination problem connects to the hubris argument: We may overrate the capacities to govern and control SAI. The risks of termination require a robust and viable exit strategy for any SAI deployment. In a future climate emergency, where global climate impacts happen fast and intensely, SAI might serve as a back-up plan, an insurance that serves as a shield against this kind of catastrophe. Lately, the emergency framing of climate change has been challenged. Invoking an emergency situation in order to justify the deployment of a risk technology might lead to problematic political and social implications [24, 59]. Sillmann et al. argue that the 'emergency' argument fails for scientific and political reasons. If tipping points have been touched and thresholds have been crossed, SAI won't be able to stop and reverse the consequences. Timothy Lenton's research gives credit to the scientific rejection of the 'emergency' argument. From a political perspective, it is not clear what would count as a (global) emergency situation and who has the power to define it. If an emergency state has been declared, power might concentrate in the hands of a few, with the associated risk of abuse [24]. Remember Carl Schmitt: 'Sovereign is he who decides on the state of exception'. The legal status and political background of an emergency situation might threaten deliberative and democratic structures.
Additionally, as Gardiner [17] has prominently pointed out, there is no moral obligation to prepare for an emergency situation as long as we still have the capacity to avoid it (or at least reduce it).


Note that both SAI proponents, supporters of radical redistribution (Sect. 9.9), and climate activists ("Klimanotstand") are sympathetic to the dangerous idea of proclaiming emergency situations. Rebuttal of the emergency framing also lends more attractiveness to another argument: the 'buying time' argument (BTA). Instead of preparing for an emergency scenario that we could arguably still avoid, CE could be used as a stopgap measure to buy time until abatement policies show effect on a global scale. The 'buying time' strategy represents a seemingly prudent way to benefit from CE deployment while minimizing its negative side effects. Starting from the assumption that climate change will show dire effects in the second half of the 21st century, but emissions will reach net zero at the end of the century at the earliest, some authors propose CE as a stopgap measure for this interim period. The discrepancy in time can be bridged by the use of some CE strategy which will cushion the effects of climate change while buying time for effective mitigation measures. The BTA applies mostly to SRM and SAI technologies. The underlying idea of the BTA is to buy more time for an effective climate policy mix that will eventually guarantee the 'well-below-2 °C' target. This policy mix ought to ensure a non-catastrophic climate change by itself, making SAI an add-on that is deployed as needed and ramped down as quickly as possible. This is a rather strong normative framing for the use of SAI. This normative frame is expressed by the following five preconditions of the BTA:

1. The use of SAI is limited in time and will cease as soon as its goal is reached;
2. Aggressive mitigation strategies are undertaken in parallel;
3. The use of SAI doesn't lead to a decline in mitigation efforts;
4. The use of SAI is shaped to have no or very few negative side effects;
5. The use of SAI is not intrinsically morally forbidden.

These contested points indicate that the BTA is anything but an easily achieved common ground in the debate on SAI. Rather, it is highly demanding for policy makers and scientists alike to guarantee the fulfillment of the five prerequisites. Following Neuber [37], the BTA is the last credible argument in favor of SAI. The second group of climate engineering strategies aims at removing carbon dioxide from the atmosphere (Carbon Dioxide Removal, CDR). This may happen through mechanical or technical carbon air capture, for example via artificial trees or biochar, or by enhancing natural CO2 sinks, for example through enhanced weathering, ocean fertilization, restoration of mires to make peat layers grow, enhancement of ocean alkalinity, and reforestation and afforestation. Reforestation induces conflicts over competing land uses. Large-scale afforestation requires large amounts of freshwater, and it faces a trade-off between carbon removal and albedo change [52]. Assisted reforestation that generates income for local people should have priority over large-scale afforestation. New studies indicate that restoring former natural forests can contribute substantially to CDR [28]. CDR modeling has focused on a large-scale technology that combines bioenergy with carbon capture and storage (BECCS). BECCS means that biomass is combusted for energy production while the resulting CO2 emissions are captured and, finally, stored underground.

9 Domains of Climate Ethics Revisited

193

stored underground. BECCS is seen as a crucial approach in climate policies. Most models that reach the Paris target (with a likeliness of more than 66%) strongly rely on negative emission technologies (NET) after 2050 being produced mainly by BECCS. BEECS, however, if performed at a Gt-scale, requires large amounts of fertile land for biomass production. Thus, it conflicts with food security of a global population of roughly 9.6 billion humans in 2050. Locations for BECCS should be the tropics and subtropics where population increase is high. It matters in terms of yields, whether biomass will be produced with or without irrigation. Yields are far higher with irrigation but this may enhance water scarcity and, thus, constitutes a harsh trade-off. BEECS can be perceived as a risk-transfer into the future. According to Anderson and Peters [2], p. 183), outlook for large-scale BECCS are an “unjust and high-take gamble”. Some CDR-technologies have been classified as ‘natural climate solutions’ (NCS), as they offer win-win-situations with goals of biodiversity protection, ecosystem restoration, soil formation and the maintenance of different kinds of socalled ecosystem services, as providing, regulating, and cultural services. Important ecosystems are mires, forests, coastal zones, and soils. Mires store large stocks of carbon within peat. Enhancing peat formation in mires counts as CDR and NCS. C-enhancement of soils is also a promising strategy. Fertility of soils is crucial for global food security. Some models, however, indicate that NCS won’t bring global temperatures ‘well below 2 °C’ unless reduction of emissions would be dramatically increased. NCS face limitations in scope and in effectiveness [3]. They take time to show effect and require the virtue of patience. In any case, NCS should be financed by the adaptation financing schemes (see proposal in Sect. 9.8).

9.9 Contraction and Convergence Versus Greenhouse Development Rights

The Global South is highly demanding in monetary terms in the name of postcolonial climate justice, and it claims a right to escape poverty. Such demands and claims find support within CE [35]. The wicked situation we are in is this: Sink capacities for GHG are becoming scarce, many people in the Global South are still absolutely poor, and cheap energy paves reliable ways out of poverty. Note that the Sustainable Development Goals (SDG) require both the eradication of absolute poverty and universal access to modern and affordable energies. Members of the political camp that supports most of the principles, objectives, and strategies outlined in the previous sections often split into supporters of two competing ethical concepts, namely Contraction and Convergence (C&C) and Greenhouse Development Rights (GDR).32

The GDR concept has found support from many NGOs and some churches in recent years. It supposes a global emergency situation (see Sect. 9.8), and it combines strict abatement in the North with mandatory assistance to adaptation in the Global South and, moreover, with a benchmark in monetary income below which persons have no obligation to curb their GHG emissions or care about climate change.33 GDR licenses the poor to emit as they please. A supposed human right to develop is read as a right to create monetary income. The income baseline is, in principle, open for negotiation. Proponents of GDR have set it at 7500 US-$ a year in purchasing power parity. The majority of humans live below this monetary benchmark. Thus, all climate-related duties concentrate upon a small fraction of the human population. Under the GDR criteria of responsibility and capability, the burdens of single states are calculated. As a result, the burden of states such as Germany, the USA, and other wealthy industrialized states becomes greater than a 100% emission reduction. Even if these states had reduced all domestic GHG emissions to zero, there would remain a financial burden to assist Southern countries to decarbonize and to adapt, while those countries are relieved of any burden. The Global South becomes the beneficiary of climate change in the name of ideal global justice. Thus, GDR discards the idea of a common responsibility because it shifts all burdens to the wealthy countries. GDR is a concept that redistributes global wealth far more strongly than pro-poor C&C. Therefore, liberals perceive GDR as a Trojan horse of egalitarian socialism after the collapse of communism: The rich must pay for all costs of climate change, while the poor have a right to escape poverty by means of energy-intense growth. The poor are protected against obligations and costs by anti-poverty principles. The anti-poverty principle holds if other strategies are feasible to reach the targets without SAI. If the Northern countries respect the anti-poverty principle, and if they wish to reach a desirable 1.5 °C target and, moreover, wish to avoid risky SAI, then they should be willing to finance decarbonization in the Global South via large transfers, not via investments. Such transfers may be in the magnitude of warfare expenditures (more than 1 trillion per year according to Shue's envelope calculation). Energy would become scarce and expensive in the North, and it would become cheap and abundant in the South as long as Southern countries remain free to choose either renewables or coal. From an ethical perspective, GDR must be seen with a critical lens because it combines an emergency ethics that allows for uncommon measures with a highly conventional approach to development defined in terms of monetary income. If so, there are reasons to claim that a C&C concept, enlarged to the domains of adaptation and CE, is, all things considered, the 'better' concept from a pragmatic "fair-enough" approach.

32 Baer et al. [10]. See also the homepage of www.ecoequity.org.
33 The charming idea that rich persons in poor countries should contribute to mitigation and adaptation efforts is not at the heart of the GDR concept.

9.10 Conclusion: Building Blocks of Climate Ethics

It is mandatory for CE to provide some reasonable ethical orientation for the time being. Without common moral ground, climate negotiations will fall prey to the predicament of becoming a mere muddling through, governed by the strategic and tactical cleverness of the thousands of stakeholders and negotiators gathering each year at the COP/MOP conferences. Steffen et al. [61] have argued that collective human action may, despite global warming and positive feedbacks, avoid an uncontrollable 'Hothouse Earth' and stabilize the Earth System in a habitable interglacial state. The most important task is to push the Earth System back from the high-emission trajectory that might collapse into a basin of attraction towards 'Hothouse Earth'. From the perspective of climate ethics as outlined in the previous sections of this article, environmental ethics [42], and a theory of strong sustainability [47], the most attractive pathway for climate policies would be the following one:

– Standard-price approach in climate economics, discarding 'efficiency' approaches. Very low discount rates. The value of a statistical life not calculated from the monetary income of a working life. A high monetary value on ecosystem services.
– An ultimate stabilization objective at 'well below 2 °C' GMT, with the 1.5 °C target as a desirable utopia; a range from 1.5 to 1.8 °C GMT increase. Stringent obligations for NICs (newly industrialized countries). Liberation of LDCs (least developed countries) from burdens, but inclusion of the BRIICS and other developing states in the Paris regime. Intended national contributions should become more ambitious over time. Single countries should take forerunner roles, as in phasing out coal and starting CDR, especially NCS. I see Germany as a candidate for such a forerunner role.
– Long-term egalitarian distribution at 1.8–2 tons/person/year in conjunction with global emission trading (see the back-of-envelope calculation after this list). Phasing out combustion engines in airplanes and cars over decades.
– A mandatory attitude for Northern countries to assist adaptation strategies in the Global South. Adaptation finance should be generous (100 billion per year). There should be multiple criteria (beyond vulnerability) for determining how to channel the money into desirable projects. One crucial criterion should be support for natural climate solutions and strong sustainability. This would create connections between the UNFCCC and the CBD.
– Research on CDR options and support for natural climate solutions (NCS). A ban or moratorium on SAI research and deployment. No substitution strategy between abatement and SAI. The only credible SAI strategy is based on the buying-time argument. The risks of SAI must be carefully weighed against a temporary overshoot (+2.4 °C GMT). Deployment of SAI requires global prior informed consent within the system of the United Nations.
– A change in lifestyles. A shift to more vegetarian diets releases pressure on land. Developed countries should start BECCS and NCS within their own territory, even if there might be some protests against dumping CO2 underground.
– Betterness relation: C&C over GDR. No strict anti-poverty principle, as in GDR. A concentration of climate burdens will demotivate the duty-bearers to comply.
– Consumer boycotts of products produced with high emissions. Non-compliance with the Paris regime should count as unfairly subsidizing domestic products. Boycotts and taxes are appropriate counter-means.
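As a back-of-envelope check on what the egalitarian benchmark implies globally (an illustrative calculation only; the population figure is the 9.6 billion estimate for 2050 cited earlier in this chapter, and the comparison with current global emissions of roughly 40 Gt CO2 per year is an added order-of-magnitude figure):

1.8–2 t CO2/person/year × 9.6 × 10^9 persons ≈ 17–19 Gt CO2/year.

The convergence benchmark thus amounts to roughly halving present global CO2 emissions, which indicates the scale of the contraction that C&C demands.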


As we have argued at the beginning of this article, the triangular affair between abatement, adaptation, and climate-engineering strategies should be seen as a portfolio of means and assets. Within this triangular affair, abatement deserves strong priority because it addresses the root cause of the problem and is a precondition for successful adaptation and NCS. Aggressive abatement on a global scale is by no means utopian any more. Despite still-rising global GHG emissions, substantial change is in the making. Public awareness has increased worldwide. There is a new global youth movement with an iconic figure, Greta Thunberg, who gives the young generation a face and a voice. Renewable energies are already becoming established. The diffusion of existing carbon-poor technologies can be accelerated by political strategies and economic incentives [26]. GHG emissions have been decoupled from GDP growth in advanced industrial societies. There is scientific knowledge, there is plenty of capital for low-carbon investments, there are established carbon-free energy technologies waiting to be mainstreamed, and there is a global civil society aware of the problem. Interestingly enough, the major achievements of modern societies (science, technology, capital, public reasoning) are available for problem-solving. If the course of action proposed here is agreed upon and becomes a safely paved and reliable pathway, the speed of taking steps may be increased. After a regrettable period of stagnancy since 2008, Germany increased its efforts in 2019. Germany will phase out coal-burning facilities by 2038 (hopefully some years earlier), and combustion engines in cars will be phased out gradually. Carbon pricing shall make air traffic more expensive. The measures proclaimed by the German government before the climate summit in New York are clearly insufficient to reach the 2020 goals but might be a preparatory step towards a coming decade of strong emission decline from 2025 to 2035. These efforts, however, would be in vain if they turned out to be not a forerunner role but a lonely pathway.

References

1. Allen, M. (2003). Liability for climate change. Nature, 421, 891–892.
2. Anderson, K., & Peters, G. (2016). The trouble with negative emissions. Science, 354, 182–183.
3. Anderson, Ch., DeFries, R., Litterman, R., Matson, P., et al. (2019). Natural climate solutions are not enough. Science, 363, 933–934.
4. Arrhenius, S. (1896). On the influence of carbonic acid in the air upon the temperature of the ground. Philosophical Magazine and Journal of Science, 41, 237–276.
5. Baatz, C. (2013). Responsibility for the past? Ethics, Politics & Environment, 16(1), 94–110.
6. Baatz, C. (2018). Climate adaptation finance and justice: A criteria-based assessment of policy instruments. Analyse & Kritik, 40(1), 1–33.
7. Baatz, C., & Bourban, M. (2020). Distributing scarce adaptation funding across SIDS: Effectiveness, not efficiency. In C. Klöck & M. Fink (Eds.), Dealing with climate change in small island states. Göttingen University Press (forthcoming).
8. Baatz, C., & Ott, K. (2016). Why aggressive mitigation must be part of any pathway to climate justice. In C. Preston (Ed.), Climate justice and geoengineering (pp. 93–108). London.
9. Baatz, C., & Ott, K. (2017). In defense of emissions egalitarianism? In L. Meyer & P. Sanklecha (Eds.), Climate justice and historical emissions (pp. 165–197). Cambridge University Press.
10. Baer, P., Athanasiou, T., Kartha, S., & Kemp-Benedict, E. (2008). The greenhouse development rights framework. Berlin.
11. Betz, G. (2009). What range of future scenarios should climate policy be based on? Modal falsificationism and its limitations. Philosophia Naturalis, 46(1), 133–155.
12. Betz, G., & Cacean, S. (2012). Ethical aspects of climate engineering. Karlsruhe: KIT Scientific Publishing.
13. Broome, J. (1992). Counting the cost of global warming. Oxford.
14. Caney, S. (2006). Environmental degradation, reparations, and the moral significance of history. Journal of Social Philosophy, 37(3), 464–482.
15. Caney, S. (2009). Justice and the distribution of greenhouse gas emissions. Journal of Global Ethics, 5(2), 125–145.
16. Crutzen, P. (2006). Albedo enhancement by stratospheric sulfur injections: A contribution to resolve a policy dilemma? Climatic Change, 77, 211–220.
17. Gardiner, S. (2010). Is 'arming the future' with geoengineering really the lesser evil? In S. Gardiner, S. Caney, D. Jamieson, & H. Shue (Eds.), Climate ethics: Essential readings (pp. 284–312). Oxford.
18. Gardiner, S. (2011). A perfect moral storm. New York.
19. Gardiner, S. (2013). Geoengineering and moral schizophrenia: What is the question? In W. Burns & A. Strauss (Eds.), Climate change geoengineering: Legal, political and philosophical perspectives (pp. 11–38). Cambridge.
20. Gosseries, A. (2004). Historical emissions and free-riding. Ethical Perspectives, 11(1), 36–60.
21. Grasso, M. (2007). A normative ethical framework in climate change. Climatic Change, 81, 223–246.
22. Hampicke, U. (2011). Climate change economics and discounted utilitarianism. Ecological Economics. http://www.sciencedirect.com/science/article.
23. Hampicke, U., & Ott, K. (Eds.) (2003). Special issue: Reflections on discounting. International Journal of Sustainable Development, 6(1).
24. Horton, J. (2015). The emergency framing of solar geoengineering: Time for a different approach. The Anthropocene Review, 1–5.
25. Jonas, H. (1979). Das Prinzip Verantwortung. Frankfurt/M.
26. Jänicke, M. (2010). Die Akzeleration von technischem Fortschritt in der Klimapolitik – Lehren aus Erfolgsfällen. Zeitschrift für Umweltpolitik, 4(2010), 367–389.
27. Keith, D. (2013). A case for climate engineering. Boston: Boston Review Books.
28. Lewis, S. L., Wheeler, Ch., Mitchard, E., & Koch, A. (2019). Regenerate natural forests to store carbon. Nature, 568, 25–28.
29. Lohmann, U., & Gasparini, B. (2017). A cirrus cloud climate deal? Science, 357, 248–249.
30. Lumer, C. (2002). The greenhouse: A welfare assessment and some morals. Lanham.
31. McNutt, M., Abdalati, W., Caldeira, K., Doney, S., et al. (2015a). Climate intervention: Reflecting sunlight to cool Earth. Washington, DC: The National Academies of Sciences, Engineering, and Medicine.
32. McNutt, M., Abdalati, W., Caldeira, K., Doney, S., et al. (2015b). Climate intervention: Carbon dioxide removal and reliable sequestration. Washington, DC: The National Academies of Sciences, Engineering, and Medicine.
33. Menzel, U. (1992). Das Ende der Dritten Welt und das Scheitern der großen Theorie. Frankfurt/M.
34. Meyer, A. (1999). The Kyoto protocol and the emergence of 'contraction and convergence'. In O. Hohmeyer & K. Rennings (Eds.), Man-made climate change (pp. 291–345). Heidelberg.
35. Moellendorf, D. (2014). The moral challenge of dangerous climate change. Cambridge.
36. Morrow, D. (2014). Ethical aspects of the mitigation obstruction argument against climate engineering research. Philosophical Transactions of the Royal Society, 372, 20140062.
37. Neuber, F. (2018). Buying time with climate engineering? Karlsruhe: KIT.
38. Niemeier, U., Schmidt, H., & Timmreck, C. (2011). The dependency of geoengineered sulfate aerosol on the emission strategy. Atmospheric Science Letters, 12, 189–194.
39. Niemeier, U., & Tilmes, S. (2017). Sulfur injections for a cooler planet. Science, 357, 246–248.
40. Nordhaus, W. (1994). Managing the global commons: The economics of climate change. Cambridge, Mass.
41. Ott, K. (2004). Essential components of future ethics. In R. Döring & M. Rühs (Eds.), Ökonomische Rationalität und praktische Vernunft (pp. 83–110). FS Hampicke. Würzburg.
42. Ott, K. (2010). Umweltethik zur Einführung. Hamburg.
43. Ott, K. (2012a). Domains of climate ethics. Jahrbuch für Wissenschaft und Ethik, 16, 95–114.
44. Ott, K. (2012b). Might solar radiation management constitute a dilemma? In C. Preston (Ed.), Engineering the climate (pp. 33–42). Lanham.
45. Ott, K. (2018). The political economy of solar radiation management. Frontiers in Environmental Science, 6.
46. Ott, K., & Neuber, F. (2019). Climate engineering. Oxford Research Encyclopedia of Climate Science (forthcoming).
47. Ott, K., & Döring, R. (2008). Theorie und Praxis starker Nachhaltigkeit (2nd expanded and updated ed.). Marburg.
48. Ott, K., Klepper, G., Lingner, S., Schäfer, A., Scheffran, J., & Sprinz, D. (2004). Reasoning goals of climate protection: Specification of Article 2 UNFCCC. Berlin: Umweltbundesamt.
49. Ott, K., & Riemann, M. (2018). On flight reasons: Persecution, escape, displacement. In G. Besier & K. Stoklosa (Eds.), How to deal with refugees? Europe as a continent of dreams (pp. 15–39). Münster.
50. Parfit, D. (1983). Energy policy and the further future. In D. MacLean & P. Brown (Eds.), Energy and the future (pp. 166–179). Totowa.
51. Parmesan, C., & Yohe, G. (2003). A globally coherent fingerprint of climate change impacts across natural systems. Nature, 421, 37–42.
52. Rickels, W., et al. (2011). Gezielte Eingriffe in das Klima? Eine Bestandsaufnahme der Debatte zu Climate Engineering. Sondierungsstudie für das BMBF. Kiel Earth Institute.
53. Robock, A. (2008). 20 reasons why geoengineering may be a bad idea. Bulletin of the Atomic Scientists, 64, 14–18.
54. Royal Society (2011). Solar radiation management: The governance of research. https://royalsociety.org/~/media/Royal_Society_Content/policy/projects/solar-radiation-governance/DES2391_SRMGI%20report_web.pdf.
55. SRU (Sachverständigenrat für Umweltfragen) (2019). Demokratisch regieren in ökologischen Grenzen – Zur Legitimität von Umweltpolitik. Berlin.
56. Schröder, M., et al. (2002). Klimavorhersage und Klimavorsorge. Berlin, Heidelberg.
57. Seitz, V. (2018). Afrika wird armregiert. München.
58. Shue, H. (1992). The unavoidability of justice. In A. Hurrell & B. Kingsbury (Eds.), The international politics of the environment. Oxford.
59. Sillmann, J., Lenton, T. M., Ott, K., et al. (2015). Climate emergencies do not justify engineering the climate. Nature Climate Change, 5, 290–292.
60. Smit, B., Burton, J., Klein, R., & Wandel, J. (2000). An anatomy of adaptation to climatic change and variability. Climatic Change, 45, 223–251.
61. Steffen, W., Rockström, J., et al. (2018). Trajectories of the Earth System in the Anthropocene. PNAS, 115, 8252–8259.
62. Stern, N., et al. (2007). The economics of climate change: The Stern review. Cambridge.
63. Teller, E., et al. (2002). Active climate stabilization: Practical physics-based approaches to prevention of climate change. Washington, D.C.: U.S. Department of Energy.
64. Tol, R. (2008). Why worry about climate change? A research agenda. Environmental Values, 17(4), 437–470.
65. Trentmann, F. (2016). Empire of things. London.
66. UNFCCC (2016). United Nations Framework Convention on Climate Change, Conference of the Parties, Twenty-first session, Paris, 30 November to 11 December 2015: Adoption of the Paris Agreement. http://unfccc.int/resource/docs/2015/cop21/eng/l09.pdf/. Accessed 12 July 2016.
67. Victor, D. (2008). On the regulation of geoengineering. Oxford Review of Economic Policy, 24(2), 322–336.
68. Victor, D., Morgan, G., Apt, J., Steinbruner, J., & Ricke, K. (2009). The geoengineering option: A last resort against global warming? Foreign Affairs, 2009, 64–76.
69. WBGU (Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen) (2009). Kassensturz für den Weltklimavertrag – Der Budgetansatz. Sondergutachten, Berlin.

Chapter 10

Electricity Market Reform in Japan: Fair Competition and Renewable Energy

Takashi Yanagawa

Abstract The Great East Japan Earthquake prompted a major transformation in Japan's power industry. Electricity market reforms have progressed rapidly in order to introduce fair and free competition. The reforms also had to respond to the shutdown of nuclear power plants and the need to expand renewable energy. This chapter discusses reforming Japan's electric power system to suppress carbon dioxide emissions amid fair and free market competition by introducing renewable energy while avoiding putting supply stability at risk. I first briefly summarize the history of electricity market reform in Japan. I then discuss current issues of the wholesale power market (power generation market) and the retail market from the perspective of fair and free competition. Finally, I discuss the issues of new measures designed to achieve 3E+S, focusing on a non-fossil fuel energy value trading market, a market for baseload power, and a capacity mechanism.

Keywords Energy reform · Fair competition · Renewable energy

10.1 Introduction1

Much discussion has taken place in Japan regarding the nation's energy policies since the Great East Japan Earthquake, which prompted a major transformation in Japan's power industry. The nuclear accident caused by the earthquake stoked concerns about the enormous risks posed by nuclear power plants. All nuclear reactors were shut down immediately after the earthquake, and abolishing them was even considered. However, nuclear power is expected to once again become Japan's primary source of electricity under more stringent safety standards. Conversely, energy reforms have rapidly progressed since the earthquake, amid growing calls to make the power market a fair and free trading platform and to expand the use of nuclear power and renewable energy to achieve the Paris Agreement targets. Renewable energy is an unstable source of electricity, requiring thermal power backup to supplement the electricity supply and prevent blackouts. It will be challenging for Japan to ensure sufficient power generation capacity because the use of thermal plants is expected to decline. This chapter discusses reforming Japan's electric power system to suppress carbon dioxide emissions amid fair and free market competition by introducing renewable energy while avoiding putting supply stability at risk.

1 This article is a revised and translated version of Yanagawa [25].

10.2 Energy Reform in Japan

Initially, Japan's power industry was privately owned. A free market for electricity came into existence when Tokyo Dento (now Tokyo Electric Power Co., or Tepco) was established in 1883. The industry later came under centralized control in 1939 as part of wartime regulations, even though the companies remained privately owned. Power generation and transmission were nationally monopolized, and distribution and retail were regionally monopolized by nine utilities. In 1951, the industry became a vertically integrated regional monopoly of nine utilities (10 after Okinawa was returned to Japan in 1972) without competition. These utilities are now dominant in their respective regions as "former general electric utilities (GEUs)" in Japan's liberalized market.

The power industry in Europe and the U.S. began to be deregulated in the 1990s. Japan also began deregulation amid complaints of high electricity prices. A power generation market was opened in 1995, with independent power providers (IPPs) being allowed to sell electricity to the 10 utilities. In 2000, retail sales of electricity by power producers and suppliers (PPSs) were partially deregulated. Initially, retail deregulation applied only to sales of extra high voltage electricity to large-scale factories and other facilities using at least 2,000 kW of electricity (26% of the market). In 2004, deregulation was extended to midsize factories and other facilities using at least 500 kW of electricity (40% of the market). This was further extended to small-scale factories and other facilities using at least 50 kW of electricity (62% of the market) in 2005. The Japan Electric Power Exchange (JEPX), a wholesale power market, was established in 2005 as well. At the same time, the utilities separated their accounting of power transmission and distribution from their power generation and retailing. The 2000 California power crisis contributed to a cautious mood in Japan.

Nonetheless, Japan made great strides in deregulating the power industry following the 2011 Great East Japan Earthquake. The tsunami that hit Tepco's Fukushima Daiichi nuclear reactor made a deep impact on Japanese society. In 2011, the government established a taskforce on electric power system reform in order to debate the issue. In 2014, the government issued the fourth Strategic Energy Plan, which put forth a basic policy called "3E+S." It has three main aims, all premised on the need for safety ("Safety"): first and foremost, to ensure a stable supply ("Energy security"), to achieve a low-cost energy supply by enhancing efficiency ("Economic efficiency"), and to make maximum efforts to pursue environmental suitability ("Environment"). In 2018, the fifth Strategic Energy Plan strengthened efforts to ensure the realization of an energy mix.2

Following the Fukushima Daiichi accident, all domestic nuclear plants, which produced about 30% of the nation's total power supply, were shut down. This resulted in rolling blackouts within Tepco's service areas and power shortages in other parts of the nation. The Great East Japan Earthquake exposed two problems with Japan's power supply system.3 First, the primary focus was on securing supply, and not enough attention was paid to creating supply flexibility by curbing demand. Second, the emphasis was on supply within each divided area, and not enough attention was paid to creating an optimal supply and demand structure at the national level.

Several changes were brought about by the disaster.4 First, people's trust in nuclear plants as a primary source of electricity was shaken; electricity prices also soared because thermal power plants had to be used after the nuclear plants were shut down. Second, people came to expect efforts to maintain the supply and demand balance on the demand side, such as power conservation and demand response, as well as the use of distributed power sources. Third, sharing electricity supply capabilities across wide areas faced limitations because of restrictions on the facilities that convert power line frequencies between eastern (50 Hz) and western (60 Hz) Japan and the capacity of the power interconnection lines among the former GEUs. Fourth, energy consumers' awareness changed, as they realized the enormous value of saving power during peak hours. Fifth, the development of transmission networks that can handle a variety of power sources, such as renewable energy, became necessary.

The government responded to these challenges after receiving a report from the Expert Committee on Electricity Systems Reform. In 2013, the Cabinet approved the Policy on Electricity Systems Reform and listed three reform objectives. The first is to secure a stable supply. As the nation's dependence on nuclear power generation declines, a stable supply should be secured while renewable energy sources, which have output fluctuations, are being adopted. A system should be established to allow users to reduce consumption and for power companies in different regions to supply electricity to one another. The second objective is to keep electricity rates as low as possible. Electricity rates should be reduced by promoting competition and ensuring that a "merit order" (a system in which power sources are selected based on the lowest marginal costs) is enforced so that the cheapest source is used first, as well as by suppressing demand and optimizing investments in power generation. The third objective is to expand users' choices and power companies' business opportunities. The system should be able to respond to users' options, accept market entrants from other industries and regions, generate power using new technologies, and spur innovation through demand-curbing measures.

The system design and three-stage timetable shown below were established to achieve these objectives. The project is now running according to plan. In Phase 1, the Organization for Cross-regional Coordination of Transmission Operators (OCCTO) was established in 2015 to expand cross-regional transmission networks. This was an effort to strengthen the supply-and-demand adjustment function across the nation and enhance the power transmission mechanism involving frequency-conversion facilities and cross-regional interconnection lines, in addition to drastically strengthening the supply system. Further, the Electricity Market Surveillance Commission (the Electricity and Gas Market Surveillance Commission from 2016) was established to strengthen market monitoring and promote sound competition. In Phase 2, the low-voltage power retail market was completely deregulated in 2016. However, some price regulations were kept in place as an interim measure to ensure appropriate prices, particularly for households. Phase 3 involves the legal separation of utilities' operations in 2020 to further secure the neutrality of power transmission and distribution and to guarantee return on investments in transmission lines.

To achieve 3E+S, several additional measures are considered: a non-fossil fuel energy value trading market, a market for baseload power, rules for the use of interconnection lines and indirect power transmission rights, a capacity mechanism, and a supply and demand adjustment market (adjustment power market). In May 2018, trading in FIT non-fossil certificates began in the non-fossil value trading market. An indirect auction using the interconnection lines started in October 2018, and the first indirect transmission rights transaction was conducted at JEPX in April 2019. In July 2019, the first baseload market auction was held. In addition, although the full liberalization of retail charges was considered for 2020, it was decided that the transitional regulated charge should continue. Removing it will require the confirmation of a state of competition whereby charges are not subsequently raised through the exercise of market dominance. We must therefore, while treating the supply regions of the market-dominant former GEUs as markets, examine how much competitive pressure exists in each market and investigate whether that competitive environment is being maintained and to what extent consumers are switching.

The next section discusses current issues involving the wholesale power market (power generation market) and the retail market. Section 10.4 discusses new measures designed to achieve 3E+S, focusing on a non-fossil fuel energy value trading market, a market for baseload power, and a capacity mechanism, all of which are important for reducing carbon dioxide emissions amid fair and free market competition by introducing renewable energy while avoiding putting supply stability at risk.5 Finally, Sect. 10.5 concludes the chapter.6

2 The power source composition ratio to be realized in 2030 is 22–24% for renewable energy, 20–22% for nuclear power, and 56% for fossil fuels such as oil, coal, and LNG. With the goal of reducing greenhouse gas emissions by 80% by 2050, the government is pursuing the challenge of decarbonization, such as aiming to make renewable energy the main power source.
3 The Agency for Natural Resources and Energy [1]. Kikkawa [17] explains the issue of private ownership of nuclear power plants in Japan.
4 The Agency for Natural Resources and Energy [2, 3].

5 Books on energy system reforms in Japan include Nomura and Kusanagi [21], Kibune et al. [18], Hatta [16], Yamauchi and Sawa [24], and Yamada [23].
6 Electricity and Gas Market Surveillance Commission [15], pp. 34–43.


10.3 Competition and Challenges in the Electricity Market

Japan's power reforms have resulted in a situation in which the transmission business is monopolized and operated neutrally, since that market is a natural monopoly. Prices are determined through the fully distributed cost method.7 Conversely, power generation and retail operations are not natural monopolies; thus, new entrants were invited to compete. A wholesale electricity market was also established to connect power generation and retail. This section draws from an interim report of the Competitive Power and Gas Market Research Committee to discuss the status of the wholesale and retail power markets and examine the competition policies in each one. There are three potential problems regarding competition policy and the exercise of monopoly power in each market: (1) the foreclosure of power and customers (market foreclosure), (2) market distortions due to internal subsidization, and (3) oligopolistic cooperation.

10.3.1 Wholesale Market (Power Generation Market)

Japan's power generation facilities (10 former GEUs) had a total capacity of 260 GW at the end of fiscal 2015. The source composition was as follows: nuclear power (16.2%), coal power (15.4%), general hydropower (8.0%), LNG power (28.2%), petroleum power etc. (15.6%), pumped water power (10.6%), and renewable energy etc. (6.0%).8 In fiscal 2016, 1,030 TWh of electricity was generated. The source composition was as follows: nuclear power (1.7%), coal power (32.3%), hydropower (7.6%), LNG power (42.1%), petroleum power etc. (9.3%), and renewable energy etc. (6.9%).9 A total of 83% of the power generation was produced by the 10 former GEUs and the two former wholesalers (Electric Power Development Company and Japan Atomic Power Company). There are 96 new power producers with capacities of 100 MW or more, but their scale is smaller than that of the former GEUs.

The trading volume at JEPX, which was launched in 2005, accounted for 12% of Japan's entire electricity demand as of March 2018.10 It increased to more than 30% after October 2018 and reached 35.4% in June 2019, having risen from 0.9% in March 2012 thanks to a series of measures. For example, the former GEUs supply all their excess electricity to the wholesale exchange at marginal cost, engage in gross bidding (conduct a portion of transactions previously carried out internally through the exchange), release a portion of the power procured on long-term contracts from Electric Power Development Company to the exchange, and benefit from new rules on the use of interconnection lines. The share is still small relative to the 2013 shares in other nations (50.7% in the U.K., 50.1% in Germany, and about 86.2% in Northern Europe) but is increasing rapidly.11 The vitalization of the wholesale power market will play an important role in achieving a wide-ranging merit order and will help PPSs procure power sources (a stylized merit-order dispatch is sketched at the end of this subsection). Of the power sources procured by new market entrants as of December 2017, 46.2% came from their own sources or from other companies based on individual contracts, 38.9% came from the wholesale market, and 14.9% came from the "constant backup" system.12

Regarding the cornering of power sources in the wholesale market, foreclosure risk has decreased because of an expansion in the volume of wholesale transactions. However, market fragmentation could occur in Okinawa, where there are no cross-regional interconnection lines, and in Hokkaido, where the capacity of the interconnection line with the Tohoku area is small. Wholesale prices could also soar in other areas during peak hours. In such cases, PPSs may not be able to obtain enough electricity at competitive prices. Meanwhile, the Electric Power Development Company owns nearly 10% of all power sources, most of which are hydropower or coal-fired plants with low marginal costs. The company signed long-term supply agreements with the former GEUs before the current deregulation took place. It would thus help vitalize the wholesale market if the company released more of its power to the exchange so that PPSs could also access it. The former GEUs are also making efforts to contribute to the wholesale market. The establishment of a baseload power market is also expected, as will be discussed later. The effects of these developments must be closely observed, and they must continue to be discussed.

In the wholesale market, PPSs often find it difficult to negotiate wholesale contracts with the former GEUs because the utilities' retail divisions intervene either directly or indirectly. Moreover, the former GEUs sometimes refuse to hold talks with PPSs about the sale of excess power plants, which is unreasonable because it has the effect of foreclosing the power generation business. These actions could be problematic.

7 A natural monopoly is a market wherein a single company can provide products or services to the entire market more cheaply than would be the case if there were two or more companies. A natural monopoly arises if investments in networks, such as transmission lines, require a large expense.
8 The Ministry of Economy, Trade, and Industry [19], pp. 186–187.
9 The Ministry of Economy, Trade, and Industry [20], pp. 181–182.
10 The Electricity and Gas Market Surveillance Commission [15], p. 7; Agency for Natural Resources and Energy [10], p. 2.
11 The Electricity and Gas Market Surveillance Commission [12], p. 38.
12 The Electricity and Gas Market Surveillance Commission [14], p. 5. The "constant backup" system is a mechanism whereby former GEUs make up for the supply shortfalls of PPSs within a certain limit at the average cost of all power sources.
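To make the merit-order principle behind these wholesale reforms concrete, the following sketch dispatches a set of plants in ascending order of marginal cost until demand is met. The plant list, capacities, and costs are hypothetical illustrations, not actual JEPX data, and the sketch ignores the exchange's real bidding and matching rules.

```python
# Merit-order dispatch sketch: cheapest sources first, the last
# dispatched unit sets the hourly price. All numbers are hypothetical.

plants = [
    # (name, capacity in MW, marginal cost in yen/kWh)
    ("hydro", 800, 1.0),
    ("nuclear", 1000, 2.0),
    ("coal", 1500, 5.0),
    ("lng", 2000, 10.0),
    ("oil", 500, 18.0),
]

def merit_order_dispatch(plants, demand_mw):
    """Dispatch plants by ascending marginal cost; return (schedule, price)."""
    schedule, remaining, price = [], demand_mw, 0.0
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        if remaining <= 0:
            break
        dispatched = min(capacity, remaining)
        schedule.append((name, dispatched))
        remaining -= dispatched
        price = cost  # cost of the marginal (price-setting) unit
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule, price

schedule, price = merit_order_dispatch(plants, demand_mw=4000)
print(schedule)  # [('hydro', 800), ('nuclear', 1000), ('coal', 1500), ('lng', 700)]
print(price)     # 10.0 -> the LNG plant is the marginal source this hour
```

The sketch also illustrates why access to low marginal cost baseload capacity matters for competition: a PPS holding only LNG-like plants can never supply below the price set by whoever owns the hydro, nuclear, and coal capacity.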

10.3.2 Retail Market

In March 2018, the PPSs' share was around 16% in the high voltage field and 7% in the extra high voltage field. Market entry in the high voltage field is advancing somewhat, but market entry into the extra high voltage field is hardly advancing at all. This is because a great deal of high load factor industrial demand for extra high voltage comes from large factories, as described in the next section, and the former GEUs, which hold baseload sources that can generate power cheaply both night and day, such as nuclear power, are dominant there.13 By contrast, high voltage centers on business demand, such as from offices, where demand for power is high during the day and is therefore often served by power sources with high variable (fuel) costs, such as LNG thermal generation. It is therefore easier for PPSs to enter this market.14 Moreover, market entry has advanced steadily in the newly liberalized low voltage field, reaching about 8%. The entry of city gas companies with regional customer bases has had a significant effect in this field, and market entry from other industries, such as the telecommunications and oil sectors, is also advancing.

Market entry is advancing unevenly across regions: the PPSs' market share of high voltage is 15% on average and exceeds 25% in Tokyo, Kansai, and Hokkaido. Conversely, market entry has not progressed in Hokuriku or Okinawa. In Hokuriku, Hokuriku Electric, a former GEU, has many low-cost baseload generators, such as hydroelectric plants, and so rates are comparatively cheap. In Okinawa, it is impossible to obtain power from the wholesale market because there is no interconnection network linking it to the other regions. Moreover, new market entry in each area is mostly by new PPSs; entry of former GEUs into other regions has hardly advanced. Although the combined shares of the nine power companies other than Tepco in the Tokyo area and of the nine companies other than Kansai Electric in the Kansai area are comparatively high, they are nonetheless only approximately 2%, against a national average of about 1.5%.

Regarding competition policy in the retail market, there are long-term contracts that include scaling sales and comprehensive sales contracts that bind customers.15 "Scaling sales" is the practice of concluding the next contract prior to the end of the current contract period; market entry becomes difficult where cancellation penalties are imposed on long-term contracts. When the company holding the existing contract offers to conclude the new contract before the current one ends without a cancellation fee, it is simple for the customer to continue with that company, whereas switching to an entrant requires paying a termination fee. By contrast, in "comprehensive sales," such as contracts covering multiple customer locations with differing end dates, a contract provides a discount on the condition that all of the multiple contracts are continued, and some or all of them are renewed using fresh discounts as an incentive.16 If a penalty is levied on early cancellation and the dominant business repeats comprehensive sales, it is difficult for an entrant to win a new contract.

13 The load factor is the ratio of average power to maximum power over a certain period of time. It is high in the field of industrial applications, such as factories, which operate without stopping, and low in the business field, where there is a lot of daytime usage.
14 The industrial demand ratio is 85% at extra high voltage and 46% at high voltage (Q1, 2015; Electricity and Gas Market Surveillance Commission [12], p. 14).
15 Electricity and Gas Market Surveillance Commission [15], pp. 22–24.
16 Electricity and Gas Market Surveillance Commission [13], p. 3.

Discriminatory pricing and set discounts can be seen as competition distortions caused by internal subsidization. "Discriminatory pricing" is the act of offering an especially cheap contract to customers that PPSs are trying to acquire. The source of that discount may be a price so low that part of the fixed cost of the baseload power supply cannot be recovered. In particular, after a customer that has decided to switch from a former GEU to a PPS applies to switch, the former GEU can obtain that information at any time during the one to two months until supply from the PPS begins and propose special prices to prompt the customer to change their mind about switching. This is known as "recovery sales," which could be seen as equivalent to discriminatory pricing.17 "Set discounts" are discounts given when a customer purchases gas and electricity as a set. Although this is generally not an issue, former GEUs offering large discounts for set purchases may give rise to antitrust issues, as it may make it difficult for gas companies to continue to operate. The former GEUs are thought to be using low marginal cost power sources for discounted set sales to the retail sector. Thus, where only the former GEUs, which are vertically integrated, possess low marginal cost power sources, they may adversely affect competition by excluding PPSs through discount sales that cannot recover fixed costs. Therefore, new regulations must be considered for the former GEUs in order to prevent the elimination of new PPSs through discount sales. The first required policy is the adoption of margin-squeeze regulations to prevent retail sales at prices lower than the former GEUs' wholesale price on the baseload market, as discussed in the next section. This is required because vertically integrated companies have no economic motivation to sell at retail prices lower than the wholesale price and thus do it only to exclude rival companies. Second, it is necessary to regulate the transactions of vertically integrated companies' power generation and retail sales sections to ensure that they are not discriminatory toward transactions with PPSs. Specifically, not only the separation of accounting but also the legal separation of the power generation and retail sections, similar to the legal separation of the transmission and distribution sections, might be necessary.

10.4 Novel System Design Aiming for 3E+S and Its Issues

To help achieve 3E+S while promoting competition, the Policy Subcommittee for Achieving Reform of the Power System investigated the development of new markets and rules governing safety, stable supply, and environmental compliance, and an interim report was published in February 2017. This report outlined concepts for introducing four new markets and rules: a baseload power supply market, interconnection usage rules and indirect transmission rights, a capacity mechanism, and a non-fossil value transaction market. Subsequently, the Working Group for the Institutional Review of the Basic Policy Subcommittee for Power and Gas investigated their implementation and published an interim summary of issues in two parts.18 The second interim report was published in 2019, giving directions for the non-fossil value transaction market, the baseload market, the interconnection rules review, indirect power transmission rights, the capacity mechanism, and the supply and demand adjustment market.19 Below, we examine the non-fossil value transaction market, the baseload market, and the capacity mechanism in turn.

17 Agency for Natural Resources and Energy [10].

10.4.1 Non-fossil Value Transaction Market

Following the Paris Agreement, which came into force in 2016 and set the goal of keeping the average temperature increase well below 2 °C above pre-industrial levels while pursuing efforts to limit it to 1.5 °C, the government positioned the 2009 Energy Supply Structure Improvement Act (advancement law) as one way to achieve that goal. Currently, retailers are required to procure 44% of their power from non-fossil power sources by 2030; this corresponds to the 2030 target energy mix of 20–22% nuclear power and 22–24% renewable energy, 44% in total. However, because the non-fossil power share of each retail electricity provider can vary, recognizing the value of non-fossil derived power and creating a non-fossil value transaction market for trading certificates would assist the achievement of the 44% non-fossil power generation share among power retailers.20 By purchasing non-fossil fuel certificates, retailers can not only improve the figures in their reports under the advancement law but also respond to the desires of consumers who wish to use non-fossil power sources.

The first auction for FIT non-fossil certificates was held in May 2018; the transaction volume was small, and the price, at 1.3 yen, was the minimum price. Non-FIT power trading is planned to commence in 2020. The revenue from FIT power certificates does not become revenue for power generating companies but is allocated to reduce the national burden of the FIT levy.

The first issue in the non-fossil value transaction market is the equivalence of renewable and nuclear energy. Non-fossil power sources include renewable energy and nuclear power. Currently traded non-fossil value is derived from FIT power sources. While the inclusion of nuclear and hydroelectric power, as non-fossil fuels, starting in 2019 is reasonable, consumers might differ in their evaluations of renewable and nuclear power, despite their similar effects on carbon dioxide emissions. For example, some consumers wish to help spread renewable energy but do not wish to help spread nuclear power. Whether to lump the two together may become an issue.

18 Agency for Natural Resources and Energy [5–8].
19 Agency for Natural Resources and Energy [11].
20 Agency for Natural Resources and Energy [9].


In addition, while FIT power revenue from the non-fossil value transaction market is allocated to reducing the burden of the FIT levy, revenue from nuclear and hydroelectric power will be allocated to the generating companies. Because the value of renewable energy is already recognized in the FIT levy, it is not necessary to distribute the non-fossil value twice; however, a great many subsidies have been injected, at least into nuclear energy, which is promoted primarily for CO2 reduction.21 Therefore, it could be argued that the revenue from electricity generated using nuclear energy should also be used to reduce the burden on the population. Furthermore, if the use of renewable energy is ultimately about addressing global warming through reduced CO2 emissions, switching from high CO2-emitting coal-fired power to new low CO2-emitting LNG thermal power also ought to be investigated; however, neither coal-fired nor LNG thermal power has any value in the non-fossil value transaction market, and switching from coal to LNG cannot be promoted there.
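A minimal sketch of the certificate bookkeeping may help here; the retailer's procurement volumes below are hypothetical, and the calculation only illustrates how purchased certificates count toward the 44% non-fossil requirement, not the actual reporting rules under the advancement law.

```python
# Hypothetical retailer: does certificate purchasing reach the 44% target?
own_nonfossil_kwh = 20_000_000   # power procured from non-fossil sources
fossil_kwh = 80_000_000          # power procured from fossil sources
certificates_kwh = 25_000_000    # non-fossil certificates bought at auction

total_sales_kwh = own_nonfossil_kwh + fossil_kwh
accounted_share = (own_nonfossil_kwh + certificates_kwh) / total_sales_kwh

print(f"accounted non-fossil share: {accounted_share:.0%}")   # 45%
print("meets the 44% requirement:", accounted_share >= 0.44)  # True
```

The same arithmetic makes the asymmetry noted above visible: the certificate payment flows out of the PPS in either case, but it reduces the FIT levy when the underlying source is FIT renewable power, whereas it accrues to the generating companies when the source is nuclear or hydroelectric power.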

10.4.2 Baseload Market

A baseload power supply is a power source that can supply power stably at low generation costs day and night; it refers to hydroelectric (inflow type), nuclear, coal, and geothermal power. Most baseload power supplies are owned by the former GEUs. The Electric Power Development Co., a former wholesale electric utility (mainly hydroelectric and coal), and the Japan Atomic Power Co., another former wholesale electric utility, have long-term contracts with the former GEUs. Few sites are suitable for large-scale hydroelectric power, many nuclear power plants have been shut down, and it is difficult to set up new coal power plants because of environmental issues. The wholesale power market is expanding; however, the former GEUs use the low marginal cost baseload power supplies themselves. Therefore, PPSs lack sufficient access to baseload power and must use middle and peak power sources with high fuel costs, such as LNG. Their competitiveness is therefore weak in the high load factor industrial field.

A baseload market was therefore established to provide an equal footing upon which the former GEUs and PPSs can access baseload power; the former GEUs (excluding Okinawa) are required to supply power to the PPSs. The delivery period for the baseload market is set to one year from April, and auctions are performed several times in the year prior (e.g., nine, seven, and five months prior). There are three markets: the Hokkaido area, the Tohoku and Tokyo area, and the Western area. They are separated because the interconnections between Hokkaido and Tohoku and between Tokyo and the West are frequently constrained.

21 According to Oshima [22], the R&D cost of nuclear power is 1.46 yen per kWh, and 0.26 yen is spent on regional countermeasures. In addition, the Cost Examination Committee appropriates 0.5 yen for accident expenditures, which is part of the national burden.


To ensure equal access to the baseload power supply, the volume supplied to the market as a whole must equal the baseload power ratio from the long-term energy supply and demand forecasts (56%) applied to total demand from the PPSs (total power demand times the PPSs' share), multiplied by an adjustment factor. The adjustment factor is initially set to 1, given the goal of ensuring that the former GEUs and PPSs are on an equal level; it is expected to be gradually reduced to 0.67 as the PPSs' shares increase. This is calculated by area and by company. The former GEUs can bid to purchase in markets outside their own areas to stimulate competition. An upper limit is set on the price of supply to the baseload market by subtracting the revenue of the capacity mechanism (described below) from the average cost of generating the baseload power supply, to avoid duplicate recovery of fixed costs; power must be supplied below this price. The average cost includes, in addition to the fixed costs of operational power sources and variable costs such as fuel, the fixed costs of non-operational power supplies such as the halted nuclear power plants. Prior conditions in the setting of purchasing limits for each company and posterior conditions such as resale restrictions are established to ensure that buyers purchase quantities meeting real demand and do not purchase with the aim of resale.

Whether the baseload power supply market functions as expected depends on whether an appropriate amount of power is traded at an appropriate price. Quantitatively, market trends are to be monitored with an emphasis on outcomes, for the baseload market as a whole as well as for each area and operator. Regarding price, including the fixed costs of non-operational segments in calculating the upper price limit is aimed at achieving an equal footing between the former GEUs and the PPSs. However, if the price rises, the PPSs may not be able to operate as efficient businesses that compete to acquire customers and may lose share in the industrial field. In addition, the baseload power supply market may become less attractive depending on the price in the spot wholesale power market, which supplies on a marginal cost basis. The first auction was held in July 2019, and the contract price per kWh was 9.77 yen in the Tokyo area, 8.70 yen in the Kansai area, and 12.47 yen in the Hokkaido area. The total contracted amount was about 184 MW, only about 3% of the volume offered for bidding; this probably happened because the price was considered higher than the wholesale market price of 8–10 yen for the PPSs. The volume and price-cap rules can be read as simple formulas, as the sketch following this paragraph illustrates.
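The following is a minimal sketch of those two rules; all input figures are hypothetical placeholders, and the per-area, per-company details described above are deliberately glossed over.

```python
# Baseload market sketch: required offer volume and bid price cap.
# All numbers are hypothetical placeholders.

def required_baseload_volume(total_demand_twh: float, pps_share: float,
                             baseload_ratio: float = 0.56,
                             adjustment: float = 1.0) -> float:
    """Volume the former GEUs must offer: the baseload ratio applied to
    PPS demand (total demand times PPS share), times an adjustment factor."""
    return total_demand_twh * pps_share * baseload_ratio * adjustment

def bid_price_cap(avg_generation_cost: float,
                  capacity_revenue: float) -> float:
    """Upper bid limit: average generation cost minus capacity-mechanism
    revenue, so the same fixed costs are not recovered twice."""
    return avg_generation_cost - capacity_revenue

print(required_baseload_volume(900, 0.15))                   # ~75.6 TWh
print(required_baseload_volume(900, 0.15, adjustment=0.67))  # ~50.7 TWh
print(bid_price_cap(11.0, 1.5))                              # 9.5 yen/kWh
```

Note how the hypothetical average cost of 11.0 yen/kWh includes the fixed costs of non-operational plants, which is why, as discussed above, the resulting cap can sit above the spot wholesale price and leave the auction undersubscribed.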

10.4.3 Capacity Mechanism

Renewable energy sources such as solar and wind power require an adjustable power source such as LNG thermal generation, whose output can be ramped quickly, because their output varies with weather conditions. However, as low-marginal-cost power sources like solar power expand, the operating hours of thermal power generation, which has a high marginal cost because of fuel costs, decline. Under the former fully distributed cost method, the cost of investments in power generation could be recovered through regulated charges; in this age of liberalization, however, the predictability of cost recovery is declining. Thus, investments in new LNG thermal power plants may stall even as aging LNG plants are shuttered. Moreover, any resulting power shortage would be serious, because constructing a thermal power plant takes approximately 10 years, including design and environmental assessment (Agency for Natural Resources and Energy [4], p. 9). Therefore, recognizing the value of power production capacity (kW), retailers are asked to guarantee a production capacity equivalent to their sales volume, and power generation investment is to be promoted from 2020 by trading the guaranteed capacity at auction and paying the resulting price to power producers. Retailers are thus required to guarantee not only supplied power (kWh) but also medium- and long-term supply capacity (kW). In ordinary goods markets, suppliers recover both variable and fixed costs through revenue from the flow of goods sold; electrical power is to be treated as an exception.

Of course, if the price mechanism causes power shortages and high prices, this will itself promote investment; supply and demand adjustment is therefore possible even without a capacity mechanism. However, investment costs would be high because investors demand a premium for the uncertainty of returns, so it is considered preferable to encourage investment through a capacity mechanism. In the capacity demand curve, the horizontal axis represents supply capacity (the reserve rate) and the vertical axis the value paid per kW. The demand curve is assumed to slope downward: the amount paid per kW needed to promote new investment falls as capacity increases. Here, the OCCTO plays a central role. On the supply side, supply capability is assessed based on the size of each power source when generation capability is low, and renewable sources are counted only to the extent that stable generation can be obtained at times of maximum demand. The burden is to be allocated across retailers on the criterion that most of it should fall on those responsible for the need for new facilities, i.e., according to each retailer's power (kW) at its area peak.

New and existing power supply equipment are to be treated equally, without distinguishing between payments for new and existing sources, because from the perspective of guaranteeing capacity they have similar value. However, if new and existing facilities are not distinguished, both the closure of aged existing power sources and investment in new power sources will be delayed. Investment incentives are unchanged whenever the present discounted value of returns is the same, whether returns on capacity are high at the time of establishment and decline thereafter, or a uniform return is obtained into the future. Yet the recognition of kW value and the receipt of returns give an incentive to keep equipment in operation even when it has aged and is less efficient than new power supplies, whereas investment in a new power supply would be possible without recognizing the kW value of old plants. Failing to distinguish between old and new equipment thus extends the operation of existing power sources: on the one hand, this maintains supply capacity; on the other, the more supply capacity is maintained by old power sources, the more investment in new, more efficient power supplies decreases.
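To make the auction logic concrete, the following is a minimal sketch of how a downward-sloping capacity demand curve, crossed with generators' capacity bids, yields a procured quantity and price. All numbers and the simple last-accepted-bid pricing rule are illustrative assumptions, not the actual OCCTO design.

```python
# Illustrative capacity auction clearing: a downward-sloping demand curve
# (price paid per kW falls as procured capacity grows) crossed with
# generators' capacity bids. All figures are hypothetical.

def demand_price(capacity_gw: float) -> float:
    """Willingness to pay (JPY/kW/year) as a function of cumulative capacity."""
    # Linear approximation: 10,000 JPY/kW at zero capacity, zero at 120 GW.
    return max(0.0, 10_000 * (1 - capacity_gw / 120))

# (offered capacity in GW, asking price in JPY/kW/year), cheapest first
bids = sorted([(40, 1_000), (30, 2_500), (30, 4_000), (20, 6_500)],
              key=lambda b: b[1])

procured, clearing_price = 0.0, 0.0
for size, ask in bids:
    if ask <= demand_price(procured + size):   # bid still under the curve
        procured += size
        clearing_price = ask                   # last accepted ask sets the price
    else:
        break

print(f"procured {procured:.0f} GW at {clearing_price:,.0f} JPY/kW/year")
# -> procured 70 GW at 2,500 JPY/kW/year
```

In the actual mechanism the demand curve is set administratively, and the clearing price applies uniformly to new and existing sources, which is precisely what creates the retirement-versus-new-investment tension discussed above.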

10.5 Concluding Remarks

To close this chapter, let us discuss future tasks. First, it is necessary to examine how the series of new policies for achieving 3E+S will affect revenues of the former GEUs and the PPSs. Although the baseload market and interconnection rules are expected to benefit PPSs, their effects may be limited. By contrast, the capacity mechanism and the non-fossil value market are advantageous to the former GEUs and disadvantageous to PPSs.

Although the baseload power supply market is expected to help PPSs compete with the former GEUs as equals, its effects vary significantly with price and quantity; thus, it warrants continued attention. Interconnection usage rules ensure fair competition in the use of interconnections by reducing the profits of vested interests. However, the extent to which they will be effective is unknown, because power production capacity favors the former GEUs. Conversely, unilateral payments from PPSs to the former GEUs are planned for the capacity and non-fossil value markets. Most power production capacity is owned by the former GEUs, and the PPSs sell electricity supplied both directly and indirectly by the former GEUs. The burden on PPSs is therefore large when power generation capacity is charged for. The non-fossil value market is similar: the burden falls unilaterally on PPSs because most of the hydroelectric power and all of the nuclear power are owned by the former GEUs and the former wholesale power companies. Additionally (as mentioned), revenues from solar energy are used to reduce FIT levies, whereas power producers will receive the revenues from hydroelectric and nuclear energy. Thus, the new markets and rules will advantage the former GEUs in many ways, and attention must be paid to how this distributional effect impacts market competition. The capacity mechanism and non-fossil value market will benefit the power generation sector of the former GEUs. The negative impact on the PPSs could be great if these profits are not used for power or renewable energy investments but are instead channeled as internal subsidies to strengthen the competitiveness of the former GEUs' retail sector and increase their market share. It is fortunate that the Institutional Review Working Group has proposed grandfathering (special measures) for fossil power in the non-fossil value market, giving consideration to retail electric utilities that have previously procured electricity from fossil sources such as coal. This should prevent difficulties in procuring new non-fossil power sources and drastic changes in the business environment. Grandfathering is expected to be phased out gradually by 2030.

Let us conclude by discussing our expectations for energy system reform. Although the anti-monopoly law has, as a general law, promoted fair and free competition, industry laws governing (for example) electricity and gas have, as special laws, typically played a role in suppressing competition. In the power system reforms, however, they are acting as special laws that promote competition beyond the anti-monopoly law. In the energy field, as the 3E+S goals show, it is necessary to achieve objectives different from those of general product markets, and simply enabling competition is not enough. It may also be necessary to modify market rules to achieve market-specific goals, such as a stable supply. Moreover, because this market has conventionally operated monopolistically, disparities in competitive strength remain wide even though entry has been liberalized. Under these circumstances, market intervention rules must be changed to introduce competition above and beyond the anti-monopoly laws, so that competitors can be placed on an equal footing. It is hoped that, beyond reform of the energy system, the supervisory authorities for public utilities will design effective policies aiming for efficient resource allocation through competition while taking market characteristics into consideration.

References

1. Agency for Natural Resources and Energy. (2011). The power system reform taskforce's summary of issues, December 2011 (in Japanese).
2. Agency for Natural Resources and Energy. (2013a). Expert committee report on power system reform. Advisory Committee for Natural Resources and Energy, Expert Committee Report on the Electricity Power Systems Reform, February 2013 (in Japanese).
3. Agency for Natural Resources and Energy. (2013b). Cabinet decision on the electricity system reform policy, April 2013 (in Japanese).
4. Agency for Natural Resources and Energy. (2016). On the capacity mechanism. Policy Subcommittee for Electricity Power System Reform, Market Adjustment Working Group Second Handout, October 2016 (in Japanese).
5. Agency for Natural Resources and Energy. (2017a). Interim report from the Policy Subcommittee for Achieving Reform of the Power System. Advisory Committee for Natural Resources and Energy, Strategic Policy Committee, February 2017 (in Japanese).
6. Agency for Natural Resources and Energy. (2017b). On the direction of future market adjustment, March 2017. Basic Policy Subcommittee for Power and Gas Business, Working Group for Institutional Review (First) Handout (in Japanese).
7. Agency for Natural Resources and Energy. (2017c). Interim summary. General Energy Resources Investigative Committee, Subcommittee for the Power and Gas Business, Basic Policy Subcommittee for Power and Gas, Working Group for Institutional Review, August 2017 (in Japanese).
8. Agency for Natural Resources and Energy. (2017d). Interim summary (Part 2). General Energy Resources Investigative Committee, Subcommittee for the Power and Gas Business, Basic Policy Subcommittee for Power and Gas, Working Group for Institutional Review, December 2017 (in Japanese).
9. Agency for Natural Resources and Energy. (2017e). On the non-fossil value transaction market. General Energy Resource Investigative Committee, Basic Policy Subcommittee for Power and Gas, Working Group for Institutional Review, December 2017 (in Japanese).
10. Agency for Natural Resources and Energy. (2018). On recovery sales when consumers switch electricity providers. 27th Expert Assembly for Institutional Design Handout, March 2018 (in Japanese).


11. Agency for Natural Resources and Energy. (2019). Second interim report. General Energy Resource Investigative Committee, Basic Policy Subcommittee for Power and Gas, Working Group for Institutional Review, July 2019 (in Japanese).
12. Electricity and Gas Market Surveillance Commission. (2017a). Assessment of the state of competition in the electricity market. Electricity and Gas Market Surveillance Committee 77th Handout, April 2017 (in Japanese).
13. Electricity and Gas Market Surveillance Commission. (2017b). Transactions on the retail market for electricity and gas. Competitive Electricity and Gas Market Research Committee Second Handout, November 2017 (in Japanese).
14. Electricity and Gas Market Surveillance Commission. (2018a). The state of the wholesale power market. Regulatory Reform Promotion Council, 29th WG on Investments Handout, April 2018.
15. Electricity and Gas Market Surveillance Commission. (2018b). Interim summary. Competitive Electricity and Gas Market Research Committee, August 2018 (in Japanese).
16. Hatta, T. (2012). How to promote power reforms. Nikkei Publishing Inc. (in Japanese).
17. Kikkawa, T. (2011). What to do with nuclear power? Nagoya University Press (in Japanese).
18. Kibune, H., Nishimura, K., & Nomura, M. (Eds.). (2017). New developments in energy policy—Clarifying the issues accompanying the liberalization of electricity and gas. Energy Forum (in Japanese).
19. Ministry of Economy, Trade and Industry. (2017). Energy white paper 2017. Research Institute of Economy, Trade and Industry (in Japanese).
20. Ministry of Economy, Trade and Industry. (2018). Energy white paper 2018. Research Institute of Economy, Trade and Industry (in Japanese).
21. Nomura, M., & Kusanagi, S. (2017). The reality of power and gas liberalization. Toyo Keizai (in Japanese).
22. Oshima, K. (2013). Of course nuclear power isn't worth it. Toyo Keizai (in Japanese).
23. Yamada, H. (2012). Is the separation of power transmission a trump card? Structural reform of the power system. Nippon Hyoron Sha (in Japanese).
24. Yamauchi, H., & Sawa, A. (2015). Verifying power system reform. Hakuto Shobo (in Japanese).
25. Yanagawa, T. (2019). Energy system reform in Japan after the Great East Japan earthquake. In M. Sataka, Y. Iida, & T. Yanagawa (Eds.), Abenomics: Success or failure. Keiso Shobo.

Chapter 11

Renewable Energy Development in Japan

Kenji Takeuchi and Mai Miyamoto

Abstract This chapter overviews the recent development of renewable energy in Japan. First, we discuss the issues surrounding generation of electricity from renewable energy. In particular, we focus on the curtailment of supply and on inactive renewable projects, both of which call for substantial institutional reform of the power transmission and feed-in tariff systems. Second, we look at technological development as a key driver of the further promotion of renewable energy. Using indices calculated from patent applications for renewable energy-related technologies in the US, Germany, and Japan, we compare their quantitative and qualitative development over the past two decades.

Keywords Renewable energy · Technology · Patent · Japan

11.1 Introduction

The 2011 Great East Japan Earthquake and the subsequent Fukushima nuclear accident accelerated Japan's transition to renewable energy. After the devastating earthquake, significant progress was made towards promoting renewable energy. However, this transition also entails enormous challenges that require a fundamental transformation of policies as well as considerable investment in infrastructure. This chapter focuses on these challenges while emphasizing the importance of technological development in renewable energy sources.

One of the most ambitious renewable energy projects undertaken is in Fukushima prefecture, which plans to expand renewable energy generation to 100% of energy demand by 2040. In Fukushima's Minami Soma city, several mega solar power plants are operating on a commercial basis [8]. These plants were built after the earthquake on farmland that was extensively damaged by the tsunami. The land was once used as paddy fields, which means the soil is soft and construction is not easy; new construction techniques were required for building the foundations. Three wind turbines were installed off the coast of Fukushima after the earthquake by a consortium of Japanese companies in collaboration with the Ministry of Economy, Trade and Industry [6]. The project aimed to assess the technical feasibility and economic viability of large-scale offshore wind power. Unfortunately, one of the three turbines had to be removed because of operational problems and poor performance. Although offshore wind has high potential as an energy source in Japan, few wind turbines have been installed at sea. To further promote renewable energy in Japan, technological progress is indispensable.

This chapter investigates the following two questions regarding renewable energy development in Japan. First, what are the challenges for renewable energy development after the Great East Japan Earthquake? We assess the current status of renewable power in Japan and explore the institutional arrangements necessary for its further development. Second, is renewable energy technology developing qualitatively? Although technological development is expected to reduce the cost of renewable power generation, it is unclear whether such development entails a qualitative improvement. Using four quality indices calculated from patent application data, we investigate whether renewable energy-related technology is advancing qualitatively in Japan, the United States, and Germany.

11.2 Challenges Ahead

11.2.1 Feed-in Tariff

Renewable power is growing slowly but steadily in Japan. Figure 11.1 shows the composition of electricity sources in Japan. Before the earthquake, in 2010, renewable energy provided only 10% of the total power supply. This share gradually increased to 15% in 2015 and is expected to reach 23% in 2030, according to the Fifth Strategic Energy Plan announced by the Agency for Natural Resources and Energy in 2018. By contrast, the share of nuclear power was 25% in 2010 and fell to 1% in 2015. Although its share in 2030 is projected at 21% in the Strategic Energy Plan, this goal seems difficult to achieve considering the current operating situation and the cost of nuclear power generation. As of March 2018, only nine of the 57 nuclear power plants in Japan were operating. The Japanese government estimates the cost of decommissioning the Fukushima Daiichi plant at 22 trillion JPY, approximately 200 billion USD. In contrast, JCER, a Tokyo-based think tank, estimates the cost at 80 trillion JPY, four times the government's estimate. The gap mainly reflects differences in the estimated costs of final disposal of radioactive waste and treatment of tritium-contaminated water [5].


Fig. 11.1 Electricity sources in Japan (shares of LNG, oil, coal, nuclear, and renewable power; nuclear/renewable shares were 25%/10% in 2010 and 1%/15% in 2015, and are projected at 21%/23% for 2030)

To promote renewable energy production, the Feed-in Tariff (FIT) policy was introduced in Japan in 2012. Under this policy, electricity companies must purchase power generated from renewable energy sources at government-designated prices and contract durations. Consumers finance the subsidy through additional charges on the electricity price. By promising the purchase of electricity over the long term, FIT can stimulate investment in renewable energy.

The surcharge that consumers pay is increasing as the power supply from renewable sources steadily rises. In fact, it increased more than tenfold, from 0.22 to 2.9 JPY/kWh, between 2012 and 2018. For a household using 300 kWh of electricity per month, the current level of surcharge amounts to 870 JPY per month. Since this is more than 10% of the monthly electricity bill, how much consumers are willing to pay for promoting renewable energy has significant implications for sustaining this scheme.

The actual effect of FIT on renewable energy deployment has been very powerful. Figure 11.2 shows the cumulative capacity of installed renewable power in Japan. Comparing installations before FIT (dark blue) and after FIT (light blue), solar power installations in the non-residential sector increased by almost thirty times. The effect of FIT on other renewable energy sources has also been significant, although weaker than for non-residential solar. For example, installed biomass energy capacity increased by 37% after the introduction of FIT.

The FIT rate for solar power declined from 42 JPY/kWh in 2012 to 24 JPY/kWh in 2019 as the cost of solar electricity fell. This declining tariff rate has two implications. First, it reduces the demand for products used in generating renewable power. Figure 11.3 shows the historical change in the production of solar modules in Japan. Production increased significantly in 2012 and 2013, after the FIT policy was introduced. After 2014, as the tariff rate decreased, production also declined; in 2017, it was almost half the peak production level.
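As a quick arithmetic check on the surcharge figures cited above, the following snippet (a minimal sketch; the tariff values come from the text, the rest is plain arithmetic) reproduces the "more than tenfold" growth and the 870 JPY monthly burden:

```python
# Back-of-the-envelope check of the FIT surcharge figures in the text:
# 0.22 JPY/kWh in 2012, 2.9 JPY/kWh in 2018, household use of 300 kWh/month.
surcharge_2012 = 0.22   # JPY per kWh
surcharge_2018 = 2.9    # JPY per kWh
monthly_use_kwh = 300

growth = surcharge_2018 / surcharge_2012            # ~13.2x, "more than tenfold"
monthly_burden = surcharge_2018 * monthly_use_kwh   # 870 JPY per month

print(f"surcharge grew {growth:.1f}x; monthly burden = {monthly_burden:.0f} JPY")
```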


Fig. 11.2 Cumulative installation of renewable power in Japan

Fig. 11.3 Production of solar panels in Japan

It is interesting that, before the introduction of the FIT policy, half of the solar panels produced in Japan were exported; after the FIT policy was implemented, most panels were used domestically. The expensive, high-quality products made in Japan may partly explain the high cost of solar power generation.

Second, the declining tariff rate increased the number of inactive solar power projects. Once a plan to install renewable power is approved by the government, the tariff rate of the year of approval applies even if the facility starts operating many years later. Since the tariff rate was high at the beginning of the policy (42 JPY per kWh),


many developers applied for projects but did not start operations promptly. Even when operations started later, developers received the tariff rate of the year in which the project was approved, while the price of solar panels fell due to technological advancement. Developers could thus increase their profits by applying for projects and waiting until production costs decreased. The Feed-in Tariff Act was therefore amended in 2016, making it mandatory for all approved projects to conclude a contract with the electricity company by March 2017. The Act also outlined punitive measures for developers who delay the start of operations. With the implementation of the amended act, 17,000 MW of renewable energy projects lost their approval. Despite that, as much as 23,000 MW of projects approved in the first three years of the FIT policy remain inactive.

11.2.2 Curtailment of Renewable Power

The effect of the FIT policy was so powerful that generated renewable power sometimes exceeds the capacity of the electricity network. This leads to the curtailment of renewable electricity when supply exceeds demand, for example on a cool and sunny weekend, when solar power generation is at its maximum and electricity consumption is at its minimum. In 2018, the Kyushu Electric Power Company issued eight requests for curtailment of power to solar power producers.

Solar power generation tends to peak around midday due to increased solar irradiation. However, if demand is lower than supply, the excess electricity could cause serious damage to the network. Therefore, the electricity company increases the production of pumped-storage hydro power as well as transmission to other areas. Although these measures reduce the excess supply, some over-supply of electricity may remain. Anticipating such cases, the power company asks solar power producers to cut their supply. In practice, curtailment involves several steps. First, the power producer adjusts output from thermal power or increases pumped-storage hydroelectricity to reduce over-supply. Only after taking these measures does it ask solar and wind power producers to reduce their production. Hence, the curtailment of renewable energy cannot be implemented casually. Curtailment in the Kyushu area has not been high thus far: according to the Kyushu Electric Power Company, it is at most 6% of the total power supplied by solar panels (https://www.kyuden.co.jp/power_usages/pc.html). However, significant investment is required to resolve the intermittency of renewable power sources.

To mitigate local over-supply of electricity, the first option is expanding the transmission grid between supply areas. The Japanese electricity market is divided into ten areas dominated by ten regional electricity monopolies. The areas are not well interconnected, and transmission capacity between them is low. By investing in transmission grid expansion, over-supply in one region can be avoided. So far, expansion is planned only between the Tokyo and Tohoku areas and between the Tokyo and Chubu areas, because Tokyo is the only area where high


demand is expected. In the long run, it would even be possible to extend the grid to other countries, such as Korea and China. The longest submarine power transmission line in the world is NorNed, a 580-km cable between Norway and the Netherlands built in 2006. In comparison, the distance between Kyushu and South Korea is 200 km, and between Kyushu and Shanghai 700 km.

Strengthening battery storage is also important for addressing over-supply. By building larger capacity to store electricity, it is possible to mitigate curtailment. In 2016, the Kyushu Electric Power Company installed NAS battery storage in Fukuoka prefecture in northern Kyushu. Its capacity of 300 MWh was not enough to store the excess supply experienced in 2018. Expanding battery storage would be a good solution to the higher dependence on renewable energy supply. An alternative solution is to utilize the battery storage of electric cars. Share&Charge is a decentralized network created by the startup MotionWerk with funding from a German power company; now available as an app, it connects owners of electric vehicle (EV) charging stations with EV drivers using blockchain technology, making it, in effect, the "Airbnb of EV charging stations." With Share&Charge, drivers can find the cheapest household EV charging stations. If this type of project can utilize EVs and home charging stations, no additional large capital investment would be needed to store excess electricity supply.

Yet another alternative is negative electricity pricing, which has already been implemented in many European countries and in California in the United States. When demand is low and supply is high, the electricity price can turn negative. It may seem strange that consumers are paid to use electricity. Rather than asking producers to reduce supply, however, negative pricing uses the price mechanism to adjust supply and demand. If electricity is traded at negative prices, a wholesaler can provide free electricity and still make a profit. When electricity is free, demand will increase and over-supply will be resolved by the power of the market.

The challenges for renewable energy policy in Japan can be summarized as follows. First, there are many inactive solar projects because of a strong FIT policy; although the FIT Act was amended in 2016 to address the issue, many approved projects have yet to commence operations. Second, solar power faces curtailment. To overcome these challenges, more investment is needed in expanding the transmission grid and battery storage, while smarter solutions could address the issue without large capital investments.
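To illustrate the stepwise curtailment order described above (thermal output is reduced and pumped storage charged before any solar is curtailed), here is a minimal sketch; the dispatch figures are hypothetical and do not reproduce any utility's actual operating rules.

```python
# Illustrative priority order for handling over-supply, following the steps
# in the text: (1) cut thermal output, (2) absorb into pumped storage,
# (3) curtail solar as a last resort. All figures are hypothetical (GW).

def absorb_surplus(demand, solar, thermal, thermal_min, pump_headroom):
    surplus = solar + thermal - demand
    if surplus <= 0:
        return {"thermal_cut": 0.0, "pumped": 0.0, "solar_curtailed": 0.0}

    thermal_cut = min(surplus, thermal - thermal_min)   # step 1
    surplus -= thermal_cut
    pumped = min(surplus, pump_headroom)                # step 2
    surplus -= pumped
    return {"thermal_cut": thermal_cut, "pumped": pumped,
            "solar_curtailed": surplus}                 # step 3

# Midday on a sunny, low-demand weekend in a Kyushu-like area:
print(absorb_surplus(demand=7.0, solar=7.0, thermal=4.0,
                     thermal_min=2.0, pump_headroom=1.5))
# -> {'thermal_cut': 2.0, 'pumped': 1.5, 'solar_curtailed': 0.5}
```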

11.3 Technological Development of Renewable Energy

11.3.1 Why Patent Quality Matters

Technological development is driven by innovation. An innovator applies for a patent to protect an invention under patent law. Hence, economists have used patent data as a proxy variable for technological development in the study of innovation. However, the quality of individual patents is highly skewed, since each


patent varies enormously in its "value" [16]. Since only a limited number of patents are traded in the licensing market, the actual value of patents cannot be directly observed. Therefore, using details of patent data, we indirectly measure "patent quality" as a reflection of the differing values of patents. The value of a patent comprises its technological and economic value, which refer to innovativeness and profitability, respectively. Innovativeness refers to the differing impact of an invention on technological development; profitability refers to the size of the profits produced by each patent-protected invention. We define these two concepts, innovativeness and profitability, as patent quality in this chapter. We use four quality indices from OECD [13] and Squicciarini et al. [14]: (1) patent scope, (2) forward citations, (3) claims, and (4) family size. In the following paragraphs, we describe the construction of each index and review the literature on it.

Patent Scope
The patent scope index measures the breadth of technology covered by each patent and describes patent quality from the viewpoint of technological value. When a patent application is filed, the patent is assigned IPC codes describing the types of technology it covers. Following Lerner [11], we define the patent scope index as the number of distinct IPC subclasses allocated to the invention. Lerner used the patent scope index to estimate the technological value of patents through their impact on firm valuation in the field of biotechnology, finding that a one-standard-deviation increase in average patent scope is associated with a 21% increase in firm value. Broader patents are more valuable when there are many substitutes in the same product class. We hypothesize that patents with a higher patent scope contribute more to technological development, since a higher number means that the patent covers a wider range of technology.

Forward Citations
The forward citation index captures the spillover effect of a patent on other technology. When innovators apply for a patent, they cite earlier patents or documents, such as scholarly articles, that were referred to in creating the invention; conversely, the citations a patent subsequently receives from later patents are called forward citations. This index counts the number of times a patent is cited by other patents. The higher the number of forward citations, the greater the impact the patent has on other patents, that is, on technological development. Along with the patent scope index, the forward citation index describes patent quality from the viewpoint of innovativeness; the main difference is that applicants cannot control their forward citations. The forward citation index is the most-used indicator in previous studies of patent quality [1–4, 9, 10]. Trajtenberg [16] was one of the first studies to use forward citations as an indicator of patent quality, finding that patents weighted by forward citations correlate with the social gains from innovation; he concluded that the forward citation index captures differences in the impact of inventions. We hypothesize that a higher citation count means that the patent has a greater impact on the creation of new technology through its influence on other innovations.


Claims
Claims are the most important information in patent documents because they define the technology of each patent. We define the number of claims as the claims index. Patent claims determine the boundaries of the exclusive rights of the patent owner, so the number of claims reflects not only the technological breadth of a patent but also its expected market value. Lanjouw and Schankerman [10] found that the number of claims was the most important indicator of patent quality across seven technological fields. This index is defined from the viewpoint of both innovativeness and profitability: like the patent scope index, it covers the breadth of technology, while a greater number of claims increases the cost of the patent application, so applicants increase the number of claims only when they expect the patent to produce more profit. In sum, a higher number of claims can be read as a higher expected value of the patent (Squicciarini et al. [15]).

Family Size
The family size index measures the value of a patent through the geographical scope of its protection. According to the Paris Convention, applicants have up to 12 months from the first filing of a patent application (typically in the country of origin) to file applications in other jurisdictions for the same invention and claim the priority date of the first application. The set of patents filed in several jurisdictions or countries that are related to each other by one or more common priority filings is generally called a "patent family." This index is defined as the number of patents in a family. Previous studies using the family size index include Lanjouw and Schankerman [10] and Harhoff et al. [3]; Lanjouw and Schankerman [10] found a strong positive relationship between patent quality and family size using US patent data. As with the claims index, this index reflects both innovativeness and profitability. A larger family size means broader international patent protection. We hypothesize that a patent filed in many countries is an important one: the innovator expects it to be worth the time and cost of application, since its benefits have wider application.
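To make the four definitions concrete, the sketch below computes them for a toy patent record. The record, field names, and IPC codes are hypothetical; actual studies compute these indices from the PATSTAT and OECD databases described in the next subsection.

```python
# Toy computation of the four patent quality indices defined above.
from dataclasses import dataclass

@dataclass
class Patent:
    patent_id: str
    ipc_codes: list       # full IPC codes, e.g. "F03D 7/02"
    cited_by: list        # IDs of later patents citing this one
    n_claims: int
    jurisdictions: list   # where the same priority filing was extended

def quality_indices(p: Patent) -> dict:
    return {
        # patent scope: distinct IPC subclasses (the 4-character prefix)
        "scope": len({code.split()[0] for code in p.ipc_codes}),
        # forward citations: times cited by later patents
        "forward_citations": len(p.cited_by),
        # claims: boundaries of the exclusive right
        "claims": p.n_claims,
        # family size: number of jurisdictions in the patent family
        "family_size": len(p.jurisdictions),
    }

wind_patent = Patent("JP-0001", ["F03D 7/02", "F03D 9/25", "H02J 3/38"],
                     cited_by=["US-0123", "EP-0456"], n_claims=12,
                     jurisdictions=["JP", "US", "EP"])
print(quality_indices(wind_patent))
# {'scope': 2, 'forward_citations': 2, 'claims': 12, 'family_size': 3}
```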

11.3.2 Comparison Among Japan, the US, and Germany

This subsection describes patent counts and patent quality in Japan, the US, and Germany. The data were collected from the Worldwide Patent Statistical Database (PATSTAT, version autumn 2016) and the OECD patent quality indicator database (version March 2017). We focus on two energy types, wind and solar, classifying the patent data into each energy group by IPC code [7].

Figure 11.4 compares the trends in patent counts for wind energy technologies in Japan, the US, and Germany. The number of patent applications filed in Japan has been increasing since 2000.


Fig. 11.4 Patent applications for wind energy technology (data from patent office)

Since Japan ratified the Kyoto Protocol in 2002 and implemented the Renewable Portfolio Standard policy in 2003, technological development appears to have responded to these policy initiatives [12]. Figure 11.5 shows the trends in patent counts for solar energy technologies. There are three peaks in the number of patent applications. The first is in the early 1980s, when Japan experienced the oil shocks and started to promote solar energy technology; as a result, the number of patent applications for solar energy grew significantly compared with wind energy. The second peak occurred around 2000, and the third after 2010, when the FIT policy was implemented. The number of applications for solar energy filed at the Japanese patent office increased more than in the other two countries.

Figure 11.6 reports the trends in patent counts for wind energy technologies by applicants' nationality. The number of patent applications from the three countries increased significantly from 2000 to 2010 due to the rising demand for renewable energy as a measure to mitigate climate change. As the first commitment period of the Kyoto Protocol ran from 2008 to 2012, it could have driven patent applications during this period. The number of patent applications from Japan is relatively low compared with the other two countries. Figure 11.7 compares the trends in patent counts for solar energy technologies in the same way. As with wind energy technology, the number of patent applications for solar energy technology increased rapidly in the 2000s. The number of applications from the US is particularly high and outnumbers those of the other two countries.

To confirm whether these increases in patent counts are qualitatively important, we compare the patent quality indices for wind and solar energy technologies in Japan, the US, and Germany in Fig. 11.8. There were no patent applications at the European Patent Office


Fig. 11.5 Patent applications for solar energy technology (data from patent office)

Fig. 11.6 Patent applications for wind energy technology (by nationality)


Fig. 11.7 Patent applications for solar energy technology (by nationality)

(EPO) from Japan for wind energy technologies until 1991. Regarding the citation index, however, Japanese patents are comparable with those of the other countries, showing that Japan has created important wind energy technology. Comparing the patent family index, the number of international patent applications from Japan is relatively smaller than that of the other two countries, implying that Japan has directed technological change mostly towards domestic use.

Figure 11.9 compares patent quality for solar energy technologies in Japan, the US, and Germany. As with wind energy technology, although the number of international patent applications is relatively small, the number of citations is the highest among the three countries. The patent quality analysis suggests that Japan has developed technology that inspires subsequent innovations and promotes global innovation in renewable energy technologies.

11.4 Conclusions

We can summarize the results of our analysis as follows. First, renewable energy is steadily advancing in Japan, although many challenges remain regarding policy responses and infrastructure investment. Second, we find that Japanese technology for renewable energy is developing both quantitatively and qualitatively. These findings suggest that further steps should be taken to link technological development with the deployment of renewable energy sources.


Fig. 11.8 Patent quality for wind energy technology


Fig. 11.9 Patent quality for solar energy technology


Joseph Schumpeter divided technological change into three stages: invention, innovation, and diffusion. Between each stage and the next lie pitfalls known as the Devil's River, the Valley of Death, and the Darwinian Sea. These names denote the tremendous difficulty of transforming inventions into profitable products: a good idea alone cannot survive market competition without a clear strategy.

The Strategic Energy Plan of 2018 does not show the energy mix in 2050 in detail. However, to attain the Paris Agreement targets, it is important to envision the energy mix beyond 2030. Indeed, the Climate Change Action Plan endorsed in 2016 states the 2050 target as reducing greenhouse gases by 80%. Setting clear targets and back-casting from them is therefore important for a transition to an economy consistent with both the Paris Agreement and the market economy.

References

1. Hall, B., Jaffe, A., & Trajtenberg, M. (2005). Market value and patent citations. RAND Journal of Economics, 36(1), 16–38.
2. Harhoff, D., Narin, F., Scherer, F. M., & Vopel, K. (1999). Citation frequency and the value of patented inventions. Review of Economics and Statistics, 81(3), 511–515.
3. Harhoff, D., Scherer, F., & Vopel, K. (2003). Citations, family size, opposition and the value of patent rights. Research Policy, 32, 1343–1363.
4. Jaffe, A., Trajtenberg, M., & Fogarty, M. S. (2000). The meaning of patent citations: Report on the NBER/Case-Western Reserve survey of patentees. NBER Working Paper No. 7361.
5. Japan Center for Economic Research. (2019). The cost of decommissioning Fukushima Daiichi nuclear power plant is estimated as between 30 trillion to 80 trillion Japanese yen (in Japanese). Retrieved August 13, 2019, from https://www.jcer.or.jp/policy-proposals/2019037.html.
6. Japan Times. (2018, October 27). Fukushima wind turbine, symbol of Tohoku earthquake recovery, to be removed due to high maintenance costs. Kyodo. Retrieved August 13, 2019, from https://www.japantimes.co.jp/news/2018/10/27/national/fukushima-wind-turbine-symbol-tohoku-earthquake-recovery-removed/.
7. Johnstone, N., Hascic, I., & Popp, D. (2010). Renewable energy policies and technological innovation: Evidence based on patent counts. Environmental and Resource Economics, 45, 133–155.
8. Kaneko, K. (2019, January 26). Sumitomo completes 32MW solar plant in Fukushima. Solar Power Plant Business. Nikkei BP. Retrieved August 13, 2019, from https://tech.nikkeibp.co.jp/dm/atclen/news_en/15mk/012602639/.
9. Lanjouw, J. O., & Schankerman, M. (1999). The quality of ideas: Measuring innovation with multiple indicators. NBER Working Paper No. 7345.
10. Lanjouw, J. O., & Schankerman, M. (2004). Patent quality and research productivity: Measuring innovation with multiple indicators. Economic Journal, 114, 441–465.
11. Lerner, J. (1994). The importance of patent scope: An empirical analysis. RAND Journal of Economics, 25(2), 319–333.
12. Miyamoto, M., & Takeuchi, K. (2019). Climate agreement and technology diffusion: Impact of the Kyoto Protocol on international patent applications for renewable energy technologies. Energy Policy, 129, 1331–1338.
13. OECD. (2009). OECD patent statistics manual. Paris: OECD Publishing.


14. Squicciarini, M., Dernis, H., & Criscuolo, C. (2013). Measuring patent quality: Indicators of technological and economic value. OECD Science, Technology and Industry Working Papers, 2013/03. Paris: OECD Publishing.
15. Squicciarini, M., Dernis, H., & Criscuolo, C. (2013). Measuring patent quality: Indicators of technological and economic value. OECD Directorate for Science, Technology and Industry Working Paper No. 2013/03, Paris.
16. Trajtenberg, M. (1990). A penny for your quotes: Patent citations and the value of innovation. RAND Journal of Economics, 21(1), 172–187.

Chapter 12

Adverse Effects of Pesticides on Regional Biodiversity and Their Mechanisms

N. Hoshi

Abstract In recent years, reports from several organizations, such as the World Health Organization (WHO), the National Academy of Sciences, and the American Academy of Pediatrics, have suggested a causal relationship between exposure to pesticides and developmental disorders. For example, neonicotinoids (NNs), one of the most widely used types of systemic insecticide, are a class of neuroactive insecticides structurally similar to nicotine. Although they are selectively toxic to insects, laboratory tests and clinico-epidemiological reports suggest several adverse effects in mammals, including humans. In this chapter, we focus on the adverse effects of no-observed-adverse-effect-level (NOAEL) doses of NNs on higher brain and immune system functions in experimental animals. Cognitive-emotional behavior tests showed anxiety-like behaviors and excessive stress responses as the main signs when a single NOAEL dose of an NN was administered to experimental animals. In addition, mice in the clothianidin (CLO, a type of NN)-administered group spontaneously emitted human-audible vocalizations (4–16 kHz), which are behavioral signs related to aversive emotions. This group also showed an increased number of c-fos-immunoreactive cells in the paraventricular thalamic nucleus and the dentate gyrus of the hippocampus. Other adverse effects included disturbance of extracellular signal-regulated kinase (ERK) phosphorylation, calcium influx, and other intracellular signals; we believe these signals could serve as potential novel biomarkers. The cognitive-emotional transformation clearly showed sex-related (adverse effects in males) and age-related effects. CLO also altered rat thymus development and the intestinal bacterial flora; based on these findings, we speculate that it affects the immune system both directly and indirectly. We also quantitatively confirmed, for the first time, that CLO and its metabolites actively pass through the placenta. In conclusion,


administration of NOAEL doses of NNs resulted in adverse effects on cognitive-emotional behavior and immune system functions, revealing a major flaw in current toxicity testing.

Keywords Adverse outcome · Biodiversity · Pesticide · NOAEL · Neonicotinoid

12.1 Introduction

Although tens of thousands of chemical substances are currently used worldwide, most have not been assessed for toxic and adverse effects. In July 2013, the World Health Organization (WHO) and the United Nations Environment Programme (UNEP) published reports on environmental hormones (endocrine-disrupting chemicals [EDCs], 2012), stating that some of these chemical pollutants negatively affect the endocrine system [17] and that some also interfere with the development of humans and wildlife species. Following the 1997 international recommendations on EDCs by the Intergovernmental Forum on Chemical Safety and the Environment Leaders of the Eight, WHO, through the International Programme on Chemical Safety (IPCS), a joint programme of WHO, UNEP, and the International Labour Organization, published a report in 2002 entitled "Global Assessment of the State-of-the-Science of Endocrine Disruptors." According to the report, various reproductive abnormalities (endocrine disruption) had occurred over the preceding decade, and all humans and wildlife on earth are exposed to environmental hormones that not only affect the reproductive system but also adversely affect the nervous system (brain development disorders, cognitive function, and intelligence quotient [IQ] decline) and increase the risk of certain types of cancer (breast cancer, prostate cancer, etc.) [7, 8]. Although complete scientific proof of causality is difficult, the relationship between wildlife reproduction and environmental hormones is no longer debatable, while many of the sources remain largely unknown. The present situation warns us of the seriousness of the problem. The report is particularly concerned with the biological effects of pesticides. In addition, the restrictions placed on the use of some environmental hormones, and the subsequent reports of restored wildlife populations and improved health, indicate that pesticide-mediated environmental pollution is an urgent issue.

In this chapter, we present epidemiological data on environmental pollution caused by pesticides and their effects on a variety of organisms. The chapter then discusses the safety evaluation of the Japanese agricultural system and its pesticides, and presents experimental evidence on the effects of the new systemic pesticides (NNs) on the nervous system and behavior of birds and mammals.


12.2 Adverse Effects of Pesticides on Organisms

12.2.1 Collapse of the "Safe in Small Amounts" Myth of Pesticides

The novel "The Complex Contamination" (Sawako Ariyoshi, 1975) describes the complex effects of chemicals such as pesticides, detergents, and heavy metals on humans and wild animals, and led to the emergence of several consumer movements. Effects of chemical substances on humans have been reported frequently: Rachel Carson's "Silent Spring" (1962), describing the persistence and toxic effects of pesticides; the "Kanemi oil poisoning" caused by polychlorinated biphenyls (PCBs) mixed into edible oil; the toxic effects of dioxins in defoliants used in the Vietnam War; Minamata disease due to organic mercury poisoning; the "thalidomide case"; and vaginal cancer in young women caused by the synthetic female hormone diethylstilbestrol, prescribed for preventing miscarriages.

The heavy use of insecticides was initially regarded as causing only indirect environmental damage; however, pesticides such as DDT and PCBs persist in the environment for long periods without decomposing. Such substances become highly concentrated via the food chain in organisms at the top of the ecosystem pyramid, that is, humans. Their chronic effects include complex toxic symptoms, such as reproductive toxicity, immunotoxicity, and neurotoxicity, described in "Our Stolen Future," published in 1996 by Theo Colborn and colleagues in the United States, which again alerted the world to chemical hazards. The authors reviewed thousands of articles on the effects of chemicals on wildlife and humans reported in various parts of the world over recent decades. This review led to the hypothesis that "extremely small amounts of chemical substances present in the environment" (EDCs, the so-called environmental hormones) disrupt the normal action of hormones in wildlife and humans and have irrevocable effects on reproduction and the health of offspring. In other words, considering the health hazards caused by chemical substances, such as pollution and carcinogenicity, Colborn et al. suggested that the idea of "safe if very little" is completely wrong. Furthermore, a wide variety of chemical substances, including notorious pollutants such as dioxins, PCBs, and cadmium, as well as nearly 300 chemical substances including organophosphorus, pyrethroid, and NN pesticides (described later), have been isolated from the umbilical cords of newborns. Thus, the fetus, supposedly protected by the mother, is in fact exposed to these toxicants; people in today's world are affected by pollutants even before they are born. The fetal and neonatal stages of organogenesis and development are extremely sensitive to chemicals compared with adults, and the mechanisms of action differ, suggesting that contaminants may irreversibly damage the brain or reproductive function.


12.2.2 The Japanese Highly Intensive Agriculture System and Pesticides: History

Agricultural land in Japan is considerably limited relative to the population, as mountainous and hilly terrain covers 70% of the country. Therefore, an intensive system of agriculture was introduced between the middle and end of the Edo period, involving extensive use of fertilizers to achieve productivity far superior to that of other Asian countries. This led to the industrialization of agriculture, with the introduction of irrigation facilities, agricultural machinery, production and shipping facilities, chemical fertilizers and pesticides, and the employment of agricultural workers. It was accompanied by the widespread use of pesticides (disinfectants, fungicides, insecticides, herbicides, rodenticides, and plant growth regulators) to increase the efficiency of agriculture and preserve agricultural crops. In addition to these land constraints, Japanese agriculture faces a pronounced aging problem, with farmers around 70 years of age making up a large proportion of the workforce.

Until the first half of the twentieth century, pesticides consisted mainly of natural products and minerals; in 1938, however, research on the insecticidal effects of synthetic dyes revealed that DDT had insecticidal activity, and "pesticides" began to be chemically synthesized in large quantities. After World War II, synthetic pesticides came into large-scale use. Their agricultural use traces back to nerve gas research, and early products such as parathion, although strongly insecticidal, were toxic to humans. Subsequently, research and development of low-toxicity pesticides started in each country, and highly selective, less toxic compounds appeared. In the 1990s, the neonicotinoids emerged: new pesticides considered highly safe for the human body (selective toxicity), with the advantages of penetration into plants, long-lasting effect, and a reduced number of applications (described later). They became the mainstream of current pesticides.

12.2.3 Risk and Safety Assessment of Pesticides

Most pesticides are physiologically active chemical substances with some adverse effects on non-target organisms (which provide ecosystem services, including pollination and natural pest control), on humans, and on the environment beyond the targeted pests and weeds. Those susceptible to risk include (i) the sprayer (health effects), (ii) target crops (phytotoxic damage), (iii) surrounding areas (spray drift), (iv) consumers (health effects of residual pesticides), and (v) aquatic animals (water pollution due to runoff of paddy water). Therefore, each pesticide must be scientifically evaluated and used in a controlled manner to avoid adverse side effects.

The safety of a chemical substance is determined not only by its intrinsic property (the strength of its toxicity) but also by the amount and duration of contact with the substance in daily life: risk = "strength of toxicity" × "amount of exposure." That is, the risk of a chemical substance needs to be considered both from the point


of view of the degree of toxicity of the substance and the manner of contact with it in daily life. In the case of pesticides, the risk from residues on crops is managed by determining an allowable human intake based on toxicity tests using experimental animals; a usage method is then formulated, based on separately conducted crop residue tests, so that residues stay below this allowance. Under the Agricultural Chemicals Regulation Act, only chemicals that have cleared these safety criteria are registered.
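As a rough illustration of how such an intake allowance is conventionally derived: the standard approach divides the animal NOAEL by an uncertainty factor, commonly 100. The numbers below are illustrative, not an actual registration calculation.

```python
# Illustrative derivation of an acceptable daily intake (ADI) from a NOAEL.
# Conventional uncertainty factor: 10 (animal-to-human extrapolation)
# x 10 (variability among humans) = 100. All numbers are hypothetical.

def adi(noael_mg_per_kg_day: float, uncertainty_factor: float = 100.0) -> float:
    """Acceptable daily intake in mg per kg body weight per day."""
    return noael_mg_per_kg_day / uncertainty_factor

noael = 50.0        # mg/kg/day from an animal study (cf. the CLO NOAEL below)
body_weight = 60.0  # kg, reference adult

allowance = adi(noael)                 # 0.5 mg/kg/day
daily_limit = allowance * body_weight  # 30 mg/day for a 60 kg person
print(f"ADI = {allowance} mg/kg/day; daily limit = {daily_limit} mg/day")
```

The chapter's central claim is that adverse effects can appear even below the NOAEL itself, which would undermine the first term of this calculation regardless of the uncertainty factor chosen.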

12.3 Verification of the New Systemic Pesticides (Neonicotinoids)

12.3.1 Targets of the New Systemic Pesticides (Neonicotinoids) and Their Mode of Action

Neonicotinoids (NNs, sometimes shortened to neonics) are a class of neuroactive insecticides, also called new nicotine-like substances because of their structural similarity to nicotine, a harmful component of tobacco. Seven NN compounds have been registered: acetamiprid (ACE), clothianidin (CLO), dinotefuran, imidacloprid, nitenpyram, thiacloprid, and thiamethoxam. Succeeding the organophosphorus pesticides, NNs have been on the market since the late 1980s and are now the most widely used insecticides worldwide. Like nicotine, neonicotinoids bind to the nicotinic acetylcholine receptors (nAChRs) of a cell to trigger a response. They disrupt the normal function of acetylcholine, which plays an important role in the nervous system, and exhibit high insecticidal effects at small amounts (Fig. 12.1) [5, 13, 14]. The characteristics of NNs include (i) systemic action, (ii) residual efficacy, and (iii) neurotoxicity. Fipronil, another systemic pesticide, is often grouped with the NNs, although its mechanism of action differs: it disrupts signaling by the inhibitory neurotransmitter gamma-aminobutyric acid (GABA).

Compared with organophosphate and carbamate insecticides, NNs are considered safe for humans because they exhibit selectively greater neurotoxicity to insects than to vertebrates. Furthermore, they are regarded as effective pesticides that can protect an entire crop against pests, as they meet the criteria of a systemic insecticide: they dissolve in water and spread to every corner of the plant, from root to leaf. NNs are therefore now used on a large scale on farmland and public land. They are also widely used in the general household, for example in gardening, termite control, pet lice and flea removal, cockroach control, spray insecticides, and chemical building materials for new homes.

However, with the rapid increase in their use since the 2000s, NNs have been suspected as a direct cause of the honeybee colony collapse disorder (CCD) observed in various parts of the world. In 2012, studies in the leading scientific journals Nature and Science reported NNs as a cause of CCD, which affects honeybee-mediated crop production. More recently, Kimura-Kuroda et al. (2012) reported that acetamiprid (ACE) and imidacloprid (IMI), earlier


Fig. 12.1 Neonicotinoids, like nicotine, bind to the nAChRs of a cell and trigger a response. In mammals, nAChRs are located in cells of both the central and peripheral nervous systems; in insects, these receptors are limited to the central nervous system. The nAChRs are activated by the neurotransmitter acetylcholine. While low to moderate activation of these receptors causes nervous stimulation, high levels overstimulate and block the receptors, causing paralysis and death. Acetylcholinesterase breaks down acetylcholine, thereby terminating signals from these receptors. However, acetylcholinesterase cannot break down neonicotinoids, and their binding is irreversible. nAChRs: nicotinic acetylcholine receptors

chloropyridine methyl NNs, caused nAChR-mediated neural excitation in cerebellar neurons of neonatal rats [6]. In addition, a number of studies have reported that NNs can affect the reproduction and behavior of birds and mammals even when used below the published non-toxic dose (described later). Several other reports, such as those published by the American Academy of Pediatrics and the European Food Safety Authority (EFSA), based on epidemiological surveys, indicate that these pesticides are associated with attention deficit hyperactivity disorder (AD/HD), depression, and learning disorders [1]. Under these circumstances, the European Union (EU) banned the use of three NNs in 2013 based on the precautionary principle and publicly announced its concern regarding the relationship of NNs to neurodevelopmental disorders. Several states in the United States have also restricted the use of NNs out of concern for pollinators and bees. In Japan, by contrast, not only has regulation not advanced, but approved applications have been expanded and residue standards for crops relaxed, and public awareness remains low. An assessment of the impact of NNs is thus urgently required at the international level.


12.3.2 Influence on the Reproductive Ability of Birds and Mammals (Fig. 12.2)

Male quails were orally administered CLO for 30 days (no NOAEL has been published for birds, so the dose was set at 1/3 to 1/30 of the rat NOAEL). DNA fragmentation of sperm cells and a decrease in sperm count were observed. Furthermore, mating between administered males and non-administered females resulted in embryos of abnormal size and weight, and the proportion of unhatched eggs increased. In another experiment, similar doses of CLO were administered for 6 weeks to male-female pairs of juveniles; as the administered CLO concentration increased, a reduction in the antioxidative enzymes that suppress reactive oxygen species was noted in the testes, together with confirmed cell damage. Thus, CLO administration elevated oxidative stress, damaging proteins, lipids, and DNA and consequently the testes. Similar observations were made in the ovaries: we believe that CLO induced the formation of abnormal granulosa cells (cells that maintain pregnancy), leading to reduced egg production [5, 14].

Fig. 12.2 Extremely low doses of CLO affect reproductive functions in birds and small experimental animals through oxidative stress, and this effect could be more severe in birds, which have higher sensitivities. CLO: clothianidin


12.3.3 Neurobehavioral Adverse Effects on Mammals

12.3.3.1 Neurobehavioral Adverse Effects of Chronic and Acute Treatment with Clothianidin (Figs. 12.3, 12.4 and 12.5)

We performed immunohistochemical and behavioral analyses in male mice administered CLO (NOAEL: 50 mg/kg/day) for 4 weeks under an unpredictable chronic stress procedure (six stressors). A 10-min behavioral test (an open field test measuring spontaneous activity in a broad, bright, novel environment) was performed on the last day of CLO administration under this stress. The total distance traveled (an indicator of locomotor activity) did not change with CLO administration or the application of stress. By contrast, time spent in the central compartment (which decreases in individuals showing anxiety-like behavior) was reduced additively. Therefore, to identify the brain regions involved in the behavioral effects of CLO, a single oral dose was administered to adult male mice, and an elevated plus maze test (a test measuring anxiety behavior) was performed 1 h after administration. The cross-shaped platform was installed at a height of 60 cm from the floor; two opposing arms had transparent walls and the other two arms had none. The brain was removed after 2 h and neural activity was assessed. The behavioral analysis of the elevated plus maze test showed decreases in time spent in, and entries into, the open arms at 1/10 of the NOAEL compared with the control group. In the NOAEL-dose group, a decrease in the total distance traveled and abnormal vocalizations (less than

Fig. 12.3 Anxiety-like behavior and an excessive stress response were observed when cognitive–emotional behavioral tests were conducted after administering a single dose at the current non-toxic level

Fig. 12.4 In such animals “excessive neural activity in particular areas of the brain” was observed. Subparts A–C describe results of dinotefuran administration and subparts G–I depict results of clothianidin administration

Fig. 12.5 Adverse effects such as disruption of intracellular signals were observed even at concentrations below the NOAEL, which could be considered as new biomarkers. NOAEL: no-observed-adverse-effect level

20 kHz) accompanied by freezing behavior during maze exploration were observed (Fig. 12.3) [3, 4, 11, 12, 15, 16]. Histological analysis revealed the involvement of the hypothalamus and the hippocampus in the emotional and stress responses, with an increase in the number of c-fos-positive cells reflecting neural activity. From these results, it can be inferred that in CLO-ingested mice exposed to the stress of a novel environment, excess nerve excitation occurs in the hypothalamus (the integrative center regulating autonomic functions such as respiration and cardiovascular activity) and the hippocampus (which regulates memory and spatial learning), both of which receive cholinergic nerve projections with acetylcholine as the neurotransmitter, resulting in anxiety-like behavior and stress responses (Fig. 12.4) [3, 4, 11, 12, 15, 16]. In addition, adverse effects such as disruption of intracellular signals were observed even at concentrations below the NOAEL, suggesting that these signs could serve as potential biomarkers (Fig. 12.5) [2]. More recently, it was found that small amounts of CLO altered rat thymus development and the intestinal flora [10].

12.3.3.2 Neurobehavioral Effects of Dinotefuran Intake During Pregnancy and Lactation (Relationship with Mental Developmental Disorders and Depression)

Neurobehavioral study on dinotefuran and the dopaminergic nervous system (Fig. 12.4). It has been suggested that NN pesticides may cause developmental disorders by interfering with the development of the human brain, especially the monoaminergic system (the networks of neurons that use monoamine neurotransmitters and are involved in the regulation of emotions, arousal, and certain types of memory). Monoamines are neurotransmitters and neuromodulators that contain one amino group; examples include dopamine, serotonin, adrenaline, and noradrenaline. All monoamines are derived from aromatic amino acids such as phenylalanine, tyrosine, and tryptophan by the action of aromatic amino acid decarboxylase. Among the NN pesticides, we administered dinotefuran (currently the most frequently used) at the NOAEL (the maximum dose at which animals show no adverse effects) via free water intake during the development and maturation of mice. The effects on the dopaminergic nervous system of the substantia nigra were analyzed neurobehaviorally and immunohistochemically (a histochemical method that visualizes the localization of a specific substance and the cellular elements expressing it using an antigen–antibody reaction). The results revealed enhanced dopamine production and increased spontaneous movement (hyperactivity) [16].

Neurobehavioral study on dinotefuran and the serotonin nervous system. Various kinds of stress can reduce the concentration of monoamines in the brain, which can trigger the onset of mental illness. Antidepressants increase the amount of serotonin and noradrenaline and activate brain activity to improve symptoms such as depression. In the midbrain dorsal raphe nucleus (one of the nuclei in the vertebrate brainstem, where most serotonin neurons are concentrated), the secretion of serotonin is regulated via the acetylcholine receptor (α4β2 type). We have demonstrated through experiments in mice that depression-like behavior is observed when
dinotefuran is administered to mice during the embryonic and postnatal developmental stages. The materials and methods were the same as those described above. The tail suspension test and the forced swimming test (both often used to develop depression models and to evaluate antidepressants) were conducted on the last day of drug administration. Mice exposed to dinotefuran during the developmental and fetal stages showed no increase in immobility time in either test; rather, immobility time decreased significantly in the tail suspension test after both developmental and fetal administration, and showed a decreasing trend in the forced swimming test. In addition, we did not observe a decrease in the number of serotonin-positive cells after dinotefuran administration, nor an elevation of depression-like behavior [11, 12].

12.3.4 Developmental Neurotoxicity Evaluation

Developmental neurotoxicity refers to adverse effects on the structure and function of the nervous system caused by exposure to heavy metals and chemicals during fetal or postnatal development. Exposure of the mother to chemicals during pregnancy or lactation may severely affect the development of the nervous system of the fetus and the infant, especially the brain, indirectly through the placenta and breast milk [9]. However, current non-clinical studies involving animal experiments do not treat developmental neurotoxicity as an essential examination element, so it is sidelined in actual toxicity studies. As a result, numerous chemicals are readily available on the market without information about their toxic effects on brain development. The risk assessment of chemicals for possible adverse effects in fetuses and children, people who are hypersensitive to chemical substances, and elderly people has largely been ignored and overlooked. For pesticides targeting neuroreceptors in particular, safety and risk assessments that "do not miss" and "do not overlook" adverse effects are extremely important.

12.4 Conclusion

According to the Food and Agriculture Organization Corporate Statistical Database (FAOSTAT), operated by the United Nations Food and Agriculture Organization (2013), the amount of pesticide used per area of cultivated land in Japan is one of the highest worldwide: for example, 3.5–7.0 times that of Germany, Britain, and the United States, and approximately 15 times that of the Scandinavian countries. NN pesticides are used in large quantities in rice seedling nursery boxes; in summer, they are sprayed from manned and unmanned helicopters. The indiscriminate use of NN pesticides has led to a reduction in biodiversity and widespread pollution from paddy ecosystems to water systems. Researchers worldwide are alarmed at the situation, with their eyes set on Japan, whose land and people are at profound risk. In addition, the increasing number of reports every year
on NN-mediated developmental neurotoxicity has made this a serious health concern for children, who are our future [references]. The current situation warrants immediate regulatory action based on the precautionary principle. In this regard, the EU has implemented legal controls on pesticides and other environmental hormones (endocrine disruptors), and in North America regulations on the use of NNs have been tightened, primarily because of their toxicity to bees. Even in Japan, where standards have been relaxed and regulation is yet to start, there is a growing understanding of this risk: the number of local governments and organizations opposing the use of NN pesticides is rising, with a consequent increase in organic agriculture. The pressing issue of pesticides could be addressed gradually through personal effort, as individuals choose what products to purchase and what food is served in nursery schools, kindergartens, and elementary and junior high schools. In addition, promoting regional agriculture through "local production for local consumption" of fresh organic agricultural products would serve as a "kill two birds with one stone" solution and would attract young people. To avoid health hazards caused by other environmental chemicals, it is important to take seriously "what science is beginning to identify as dangerous" and, following the precautionary principle, to remain attentive to information about food and other exposures. Biodiversity conservation will, in turn, protect regional diversity and uniqueness: the foundation of all life on earth (supply of oxygen, formation of rich soil, etc.), resources useful for living (food, timber, medicines, etc.), the roots of a rich culture (locally colored culture, a sense of coexistence with nature, etc.), and the safety of living (reduction in natural disasters, increased food security, etc.), all of which are indispensable to regional revitalization.

References

1. European Food Safety Authority. (2013). EFSA Journal, 11, 3471.
2. Hirano, T., Minagawa, S., Furusawa, Y., Yunoki, T., Ikenaka, Y., Yokoyama, T., Hoshi, N., & Tabuchi, Y. (2019). Growth and neurite stimulating effects of the neonicotinoid pesticide clothianidin on human neuroblastoma SH-SY5Y cells. Toxicology and Applied Pharmacology, 383, 114777. https://doi.org/10.1016/j.taap.2019.114777.
3. Hirano, T., Yanai, S., Omotehara, T., Hashimoto, R., Umemura, Y., Kubota, N., et al. (2015). The combined effect of clothianidin and environmental stress on the behavioral and reproductive function in male mice. Journal of Veterinary Medical Science, 77, 1207–1215.
4. Hirano, T., Yanai, S., Takada, T., Yoneda, N., Omotehara, T., Kubota, N., et al. (2018). NOAEL-dose of a neonicotinoid pesticide, clothianidin, acutely induce anxiety-related behavior with human-audible vocalizations in male mice in a novel environment. Toxicology Letters, 282, 57–63.
5. Hoshi, N., Hirano, T., Omotehara, T., Tokumoto, J., Umemura, Y., Mantani, Y., et al. (2014). Insight into the mechanism of reproductive dysfunction caused by neonicotinoid pesticides. Biological and Pharmaceutical Bulletin, 37, 1439–1443.
6. Kimura-Kuroda, J., Komuta, Y., Kuroda, Y., Hayashi, M., & Kawano, H. (2012). Nicotine-like effects of the neonicotinoid insecticides acetamiprid and imidacloprid on cerebellar neurons from neonatal rats. PLoS ONE, 7, e32432.
7. Kuroda, Y., & Kimura-Kuroda, J. (2014). The etiology of increased developmental disorders. Tokyo: Kawade Shobo Shinsha (in Japanese).
8. Mori, C., & Todaka, E. (2011). Environmental contaminants and children's health. Tokyo: Maruzen Planet Co. Ltd.
9. Ohno, S., Ikenaka, Y., Onaru, K., Kubo, S., Sakata, N., Hirano, T., et al. (2020). Quantitative elucidation of maternal-to-fetal transfer of neonicotinoid pesticide clothianidin and its metabolites in mice. Toxicology Letters, 322, 32–38.
10. Onaru, K., Ohno, S., Kubo, S., Nakanishi, S., Hirano, T., Mantani, Y., Yokoyama, T., & Hoshi, N. (2020). Immunotoxicity evaluation by subacute oral administration of clothianidin in Sprague-Dawley rats. Journal of Veterinary Medical Science. https://doi.org/10.1292/jvms.19-0689 [Epub ahead of print].
11. Takada, T., Yoneda, N., Hirano, T., Onaru, K., Mantani, Y., Yokoyama, T., et al. (2020). Combined exposure to dinotefuran and chronic mild stress counteracts the change of the emotional and monoaminergic neuronal activity induced by either exposure singly despite corticosterone elevation in mice. Journal of Veterinary Medical Science. https://doi.org/10.1292/jvms.19-0635 [Epub ahead of print].
12. Takada, T., Yoneda, N., Hirano, T., Yanai, S., Yamamoto, A., Mantani, Y., et al. (2018). Verification of the causal relationship between subchronic exposures to dinotefuran and depression-related phenotype in juvenile mice. Journal of Veterinary Medical Science, 80, 720–724.
13. Tanaka, T. (2012). Reproductive and neurobehavioral effects of clothianidin administered to mice in the diet. Birth Defects Research Part B: Developmental and Reproductive Toxicology, 95, 151–159.
14. Tokumoto, J., Danjo, M., Kobayashi, Y., Kinoshita, K., Omotehara, T., Tatsumi, A., et al. (2013). Effects of exposure to clothianidin on the reproductive system of male quails. Journal of Veterinary Medical Science, 75, 755–760.
15. Yanai, S., Hirano, T., Omotehara, T., Takada, T., Yoneda, N., Kubota, N., et al. (2017). Prenatal and early postnatal NOAEL-dose clothianidin exposure leads to a reduction of germ cells in juvenile male mice. Journal of Veterinary Medical Science, 79, 1196–1203.
16. Yoneda, N., Takada, T., Hirano, T., Yanai, S., Yamamoto, A., Mantani, Y., et al. (2018). Peripubertal exposure to the neonicotinoid pesticide dinotefuran affects dopaminergic neurons and causes hyperactivity in male mice. Journal of Veterinary Medical Science, 80, 634–637.
17. WHO/UNEP. (2013). State of the science of endocrine disrupting chemicals—2012. Geneva, Switzerland (pp. 1–298).

Chapter 13

Reconsidering Precautionary Attitudes and Sin of Omission for Emerging Technologies: Geoengineering and Gene Drive

Atsushi Fujiki

Abstract Precautionary attitudes, including the "precautionary principle", are widely accepted in science and technology governance. This concept can potentially be applied to emerging technologies such as geoengineering and gene drive. However, precautionary and preemptive attitudes may also be obstacles to decision making because they are invoked by both the advocates and opponents of these new fields. Therefore, when we examine "innovative emerging technologies that will have unquantifiable and unpredictable influences over a wide spatio-temporal range and produce catastrophic irreversible consequences in a worst-case scenario" from the viewpoint of precautionary attitudes, it is necessary to identify the situations and risks that we really want to avoid, because the words "precautionary" and "omission" can have different meanings for different stakeholders.

Keywords Geoengineering · Gene drive · Dual use · Ecosystem

13.1 Introduction

Precautionary attitudes such as the "precautionary principle" are widely accepted in science and technology governance. However, these attitudes are difficult to apply to some cutting-edge technologies because the word "precautionary" can be understood differently by their advocates and opponents. How should we consider the "innovative emerging technologies that will have unquantifiable and unpredictable influences over a wide spatio-temporal range and produce catastrophic and irreversible consequences in a worst-case scenario" from the viewpoint of precautionary attitudes? In this study, I will attempt to answer this question for two representative cases: geoengineering and gene drive.

A. Fujiki (B) Kobe City College of Nursing, 3-4 Gakuen-nishi, Nishi-ku, Kobe 651-2103, Japan e-mail: [email protected] © Kobe University 2021 T. Matsuda et al. (eds.), Risks and Regulation of New Technologies, Kobe University Monograph Series in Social Science Research, https://doi.org/10.1007/978-981-15-8689-7_13

13.2 From Precautionary Principle to Responsible Research and Innovation

An early version of the precautionary principle was the "precautionary approach" formulated in principle 15 of the Rio Declaration on Environment and Development issued in 1992 (a stricter definition was adopted in the Wingspread Statement six years later). This principle articulates that "where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation". The Wingspread Statement emphasizes that "when an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically". The precautionary principle was thus established, out of bitter lessons of the past, to cope with the scientific uncertainty inherent in various technologies.

In the European Union (EU), the European Environment Agency (EEA) published two official reports titled "Late lessons from early warnings" that emphasized the importance of the precautionary principle [25, 26]. Some changes were made in the second report. First, it introduced typical cases of "false positives", in which early indications of harm were reported but the risks to be prevented were subsequently found to be absent. Second, the word "innovation" was used in the subtitle, which apparently represented a conciliatory approach. The recently developed science and technology policy in the EU called "Horizon 2020" (2014–2020) and its key concept, "responsible research and innovation (RRI)", appear to carry over the fundamental idea of the precautionary principle [54, 59].

The concept and operation of the precautionary principle are complicated and controversial. Whatever we choose, we may need to select some middle-way approach, because both extremes have their own challenges. Cass Sunstein, for example, criticized both the weak and strong versions of the precautionary principle. He said that "The Precautionary Principle takes many forms. […] [I]n its strongest forms, the Precautionary Principle is literally incoherent, and for one reason: There are risks on all sides of social situations. It is therefore paralyzing; it forbids the very steps that it requires. Because risks are on all sides, the Precautionary Principle forbids action, inaction, and everything in between [63, p. 4]". He also added that "[t]he most cautious and weak versions suggest, quite sensibly, that a lack of decisive evidence of harm should not be a ground for refusing to regulate [Ibid., p. 18]," but "the weakest versions are unobjectionable, even banal [Ibid., p. 24]". Naturally, we need to seek the golden mean in all aspects of life; however, in this type of problem, exercising moderation is itself a challenge. Moreover, how should we consider the "innovative emerging technologies that will have unquantifiable and unpredictable influences over a wide spatio-temporal range and produce catastrophic and irreversible consequences in a worst-case scenario" from the viewpoint of precautionary attitudes? I will address this question by focusing on two emerging technologies: geoengineering and gene drive.

13.3 Geoengineering

Currently, there is no clear and universal definition of geoengineering. However, we can find common features in its different formulations. Geoengineering is generally regarded as "the deliberate large-scale intervention in the Earth's natural systems to counteract climate change [42]". In Japan, Kenji Miyazawa's The Life of Budori Gusuko, originally published in 1932, is famous for its story related to geoengineering. In this story, Budori starts studying volcanoes with scientists to find a solution to a cold weather phenomenon. In the end, Budori artificially makes a volcano erupt, in exchange for his life, to mitigate the damage caused by the cold weather. Some scientists and engineers also seek to utilize geoengineering for natural disaster prevention in addition to counteracting climate change. However, this field has many ethical, legal, and social issues that must be solved before deployment. Several difficulties, such as the lack of transparency in the decision-making process, uncertain and unpredictable risks, and irreversible outcomes in unplanned situations, have already been pointed out.

13.3.1 Two Basic Classes of Geoengineering

As mentioned above, geoengineering has mainly been considered a countermeasure against climate change, including global warming. The Royal Society divided the existing geoengineering methods into two basic classes: carbon dioxide removal (CDR) and solar radiation management (SRM) [57]. The former includes techniques that "address the root cause of climate change by removing greenhouse gases from the atmosphere [Ibid., p. ix]". The latter comprises technologies that "attempt to offset effects of increased greenhouse gas concentrations by causing the Earth to absorb less solar radiation [Ibid., p. ix]". The Royal Society admitted that "[t]he greatest challenges to the successful deployment of geoengineering may be the social, ethical, legal and political issues associated with governance, rather than scientific and technical issues [Ibid., p. xi]".

The National Academy of Sciences (NAS) in the United States published two reports titled Climate Intervention in 2015 focusing on CDR and SRM [46, 47]. The tone of their arguments was critical. The committee considered that SRM techniques might carry uncertain risks and therefore "reiterate[d] that it is opposed to large-scale deployment of albedo modification techniques". The committee also said that "even the best albedo modification strategies are currently limited by unfamiliar and unquantifiable risks and governance issues rather than direct costs [47, p. 192]". This type of problem is widely known as the "unknown unknowns" issue. The committee also acknowledged the existence of ethical issues with intergenerational implications. They pointed out that "potential intergenerational implications compound the ethical issues regarding who has the authority, whether legal or moral, to enter
into deliberate actions that might precipitate profound effects or place obligations on future generations [Ibid., p. 168]”.

13.3.2 Principles and Policy Statement of Geoengineering

Because of the unfamiliar and uncertain risks, the governance of geoengineering must be performed very carefully. A regulatory geoengineering framework has not been established on either the global or the local scale yet; however, several guidelines and recommendations do exist. The Oxford Geoengineering Programme, founded in 2010 as an initiative of the Oxford Martin School at the University of Oxford, formulated the "Oxford Principles of Geoengineering", a proposed set of initial guiding principles for the governance of geoengineering [43]:

1. Regulation of geoengineering as a public good;
2. Public participation in geoengineering decision-making;
3. Disclosure of geoengineering research and open publication of results;
4. Independent assessment of impacts;
5. Governance before deployment.

The American Meteorological Society issued three recommendations in a policy statement that remained in force until January 2017 [5]. They comprised:

1. Enhanced research of the scientific and technological potential for geoengineering the climate system, including studies on intended and unintended environmental responses;
2. Coordinated study of the historical, ethical, legal, and social implications of geoengineering that integrates international, interdisciplinary, and intergenerational issues and perspectives and includes lessons from past efforts to modify the weather and climate;
3. Development and analysis of policy options aimed at promoting transparency and international cooperation in exploring geoengineering options, along with restrictions on reckless efforts to manipulate the climate system.

Both these documents and the NAS reports apparently acknowledge the existence of ethical, legal, and social issues related to geoengineering and suggest conducting research studies to mitigate them. Regardless of the actual implementation of geoengineering, public participation and transparency remain the key factors affecting its desirability. The Asilomar Scientific Organizing Committee also gave recommendations regarding public participation and transparency in its conference report [8]. It cites principle 10 of the Rio Declaration, which states that "[e]nvironmental issues are best handled with the participation of all concerned citizens, at the relevant level [70]". After looking back at our bitter experiences with various environmental issues, this sentence appears to be highly persuasive. However, we will face a new problem: who are the "all concerned citizens" that can participate in discussing this global intergenerational question? The climate system as an operational object of geoengineering
includes the entire earth and definitely produces transboundary impacts over a long period. Who, then, appropriately defines the stakeholders in each particular case, and how? Can we really do so in the first place? Naturally, we should learn from past lessons, but several types of emerging technologies, including geoengineering, have characteristics that differ from those of conventional environmental issues. Hence, it is important to consider these differences because they affect discussions on the deployment of geoengineering. These guidelines and statements are helpful, and we have to think about possible ways to realize them while keeping in mind that preparing guidelines and implementing them in practice are two different tasks.

13.3.3 Difficulties of Conducting Geoengineering Experiments

Despite these guidelines and recommendations, conducting geoengineering field experiments is difficult because of the ongoing disputes over their actual and potential risks. The Stratospheric Particle Injection for Climate Engineering (SPICE) project, whose participants include several UK universities and the Cambridge-based Marshall Aerospace, is aimed at investigating the possibility of spraying particles into the stratosphere to mitigate global warming [17]. The SPICE project was cleared by various ethics committees at these universities with very few or no comments because the proposed research did not involve human volunteers or animals and was unlikely to have a direct effect on the environment [61, p. 1575]. However, the experiment that attempted "to spray a 150 L of water (approximately 2 bath loads) from a height of 1 km" was cancelled by the researchers. The principal investigator of SPICE wrote that "one factor in the cancellation was the lack of rules governing such geoengineering experiments," although "it is hard to imagine a more environmentally benign experiment [53, p. 27]". This episode clearly shows that most geoengineering methods (except for computer simulations and small-scale experiments in a closed environment) are hard to conduct within the framework of responsible research outlined in the above-mentioned guidelines and statements. How can we test specific methods and assess their risks without any field experiments? This question, common to many emerging technologies, reveals a major geoengineering challenge.

Other international problems exist as well. Representative questions about ethical, legal, and social geoengineering issues can be summarized as follows: "Who gets to decide when to pull the trigger? How do we determine "correct" average temperatures when the same ones will affect different nations in markedly different ways? Can one nation be held responsible for the negative effects of its geoengineering scheme on another country's weather? Can these tools be used to deliberately attack a neighboring nation? And can conflicts over these questions tip into a war [67]?" Some of these questions are partially related to the "dual use" problem (geoengineering may also be exploited for military use in addition to deliberate intervention in climate change).

There is another worry: that geoengineering will increase moral hazard. In other words, if geoengineering achieves tremendous success, people may grow complacent about controlling greenhouse gas emissions and one day forget about temperance altogether. It remains uncertain whether this apprehension will actually materialize. Rather, it is noteworthy that if the original goal of mitigating the influence of climate change is achieved, another problem unrelated to geoengineering itself will appear. More precisely, this implies that there is little point in discussing only the results of a risk–benefit analysis of geoengineering. As mentioned earlier, we have to deal with both ethical and social issues when predicting the future of this field. From the same standpoint, when debating the appropriateness of geoengineering, we may consider at least four different scenarios: (1) abandoning geoengineering, (2) deploying it successfully, (3) having it go amiss after deployment, and (4) increasing moral hazard through its achievements. In any case, it is necessary to contemplate all possible scenarios, compare them, and avoid the worst one. For this purpose, various professionals, including philosophers and ethicists, must cooperate with each other to make an accurate estimate of each event's probability. For example, Preston, an environmental ethicist studying the ethical issues of geoengineering, stated that "environmental ethicists should orient themselves to the rapidly moving geoengineering debate" [52].
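
To make the scenario comparison concrete, consider the following toy decision matrix (a minimal sketch of my own, not taken from this chapter or its sources; every probability and payoff is a hypothetical placeholder). It shows why "avoid the worst case" and "maximize the expected outcome" can point in opposite directions, which is one reason precautionary attitudes alone cannot settle the dispute:

```python
# Toy decision matrix over the four scenarios discussed above.
# All probabilities and payoffs are hypothetical placeholders.

SCENARIOS = {
    # action -> {scenario: (probability, payoff)}
    "deploy": {
        "works as intended": (0.70, 5.0),
        "goes amiss after deployment": (0.20, -50.0),
        "moral hazard erodes mitigation": (0.10, -10.0),
    },
    "abandon": {
        "mitigation alone suffices": (0.50, 3.0),
        "climate tipping point is crossed": (0.50, -40.0),
    },
}

def expected_value(action: str) -> float:
    """Probability-weighted payoff of an action."""
    return sum(p * v for p, v in SCENARIOS[action].values())

def worst_case(action: str) -> float:
    """Payoff of the worst scenario; the maximin rule compares these."""
    return min(v for _, v in SCENARIOS[action].values())

if __name__ == "__main__":
    for action in SCENARIOS:
        print(f"{action:8s} expected = {expected_value(action):6.1f}, "
              f"worst case = {worst_case(action):6.1f}")
    # deploy   expected =   -7.5, worst case =  -50.0
    # abandon  expected =  -18.5, worst case =  -40.0
```

With these made-up numbers, expected value favors deployment while the maximin ("avoid the worst") rule favors abandonment; shifting the assumed figures can reverse either verdict, which is why estimating each event's probability matters so much.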

13.3.4 Is Geoengineering a Unique Solution or No More than an Alternative?

One of the factors complicating the existing geoengineering disputes is the necessity to reconcile divergent conflicts of interest on a broad basis. In this respect, the inevitability of geoengineering is a key issue: to establish its legitimacy, geoengineering must be presented as a nearly unique solution to the climate change problem in a situation where a climate catastrophe is likely to occur in the near future. However, such a claim is not easy to substantiate. Geoengineering is advocated on two rationales: "plan B" and "climate emergency" [7, p. 90, 48]. "Plan B" is "a necessary emergency option if climate tipping points are crossed leading to abrupt, nonlinear and irreversible climate change" and, in short, "an alternative if mitigation fails" [6, p. 2]. This argument claims that geoengineering can serve as "insurance" in case mitigation policy fails and that, therefore, at least geoengineering research should be pursued [7, p. 90]. From this perspective, geoengineering is just a necessary emergency option rather than a unique solution for climate change. As Mizutani pointed out, geoengineering may not be a fundamental solution but a symptomatic treatment that seeks a technical fix for the climate change problem [41]. It is widely acknowledged that climate change has exerted the strongest effect on human lifestyles since the Industrial Revolution. That is exactly why greenhouse gas emissions reduction is
required and why CDR has attracted our attention. We may be able to reduce greenhouse gas emissions by methods other than geoengineering. Indeed, the amount of global greenhouse gas emissions has undoubtedly increased [50, p. 6]. Although a sufficient number of advocates are trying to push geoengineering research forward, citing the political failure of the developed countries to implement emission control [35], it is unclear whether our efforts apart from geoengineering will fail or not. Furthermore, what is the "climate emergency" claim? Asayama suggests that "the rhetoric of climate emergency rests on the logic of preemption [Ibid.]."

Anticipation of potential emergencies in the future – but as yet unknown about whether and how they may occur – necessitates action now to prevent such emergencies. The fear of future impacts of a changed climate rather than a changed climate itself legitimizes the option of geoengineering. As such, what underlies the idea of a ‘technofix’ through geoengineering is catastrophism that imagines the future climate as the apocalypse [Ibid.].

"Climate emergencies" is an ambiguous idea, partially because it is "yet unknown about whether and how they may occur." It appears very difficult to legitimize climate emergencies based solely on scientific certainty. The point here is that we should consider not only the possibility of scientifically determined climate emergencies but also our fearful emotions, catastrophism, and precautionary attitudes. Thus, geoengineering may be legitimized when we cope with likely climate catastrophes or climate emergencies using precautionary and preemptive attitudes. Interestingly, these attitudes are common to both the advocates and opponents of geoengineering. Both fear "the worst-case scenario" and try to avoid it with precautionary and preemptive attitudes; however, there are important differences between the two groups. The advocates fear a situation in which we reach a climate tipping point having done nothing, a negligence or "omission" we would then have to answer for. The opponents fear a situation in which the unpredictable potential risks of deploying climate engineering techniques cause calamities or disasters, and they consequently insist on perpetual abandonment or a moratorium based on precautionary attitudes. In such a case, precautionary and preemptive attitudes may not be useful for making decisions because they are common to both the advocates and opponents of geoengineering. This structure, typical of other emerging technologies as well, will be illustrated for gene drive in the next sections.

13.4 Gene Drive

Gene drive is "a system of biased inheritance, in which the ability of a genetic element to pass from a parent to its offspring through sexual reproduction is enhanced. Thus,
the result of a gene drive is the preferential increase of a specific genotype that determines a specific phenotype from one generation to the next, and potentially throughout a population" [68, p. 182]. Gene drive organisms can theoretically modify or eradicate entire species. Researchers consider gene drive technology a promising method for controlling various pests and the vectors of serious infectious diseases, as well as for eradicating invasive alien species. At the same time, many people, including scientists, express deep concerns regarding the actual and potential risks to both public health and ecosystem health.

13.4.1 Prehistory and Emergence of Gene Drive Technology

The idea of gene drive existed long before the emergence of genome editing technology. "A wide variety of gene drives occur in nature. Researchers have been studying these natural mechanisms throughout the 20th century but, until the advent of CRISPR/Cas for gene editing, have not been able to develop a gene drive" [69, p. 1]. "Gene-editing tools have not been used to date in the conservation of wildlife, but their use in the control of non-native invasive organisms is being explored in the laboratory with the creation of sterile insects, and the use of 'gene drives'" [58]. "The creation of sterile insects" is an effective measure to suppress or control populations of insect pests. It has been used within confined geographical areas such as islands and has in some cases achieved great success [33]. These techniques are called sterile insect techniques (SITs). SIT is "an environmentally friendly insect pest control method involving the mass-rearing and sterilization, using radiation, of a target pest, followed by the systematic area-wide release of the sterile males by air over defined areas, where they mate with wild females resulting in no offspring and a declining pest population" [31]. Conventional SIT methods entail serious time- and cost-related problems because those running a program must keep producing vast numbers of infertile individuals and releasing them into the area until the ultimate goal is reached. In the case of malaria, for example, "previous work had shown that mosquitoes could be engineered to rebuff the parasite P. falciparum, but researchers lacked a way to ensure that the resistance genes would spread rapidly through a wild population" [38]. Clustered regularly interspaced short palindromic repeats (CRISPR)-based gene drive is expected to change this situation. Once genetically engineered individuals are released into the target area, theoretically "the CRISPR-based gene drive will spread the change relentlessly until it is in every single individual in the population [32]". Because of these circumstances, gene drive can be regarded as a derivative of genome editing technology and, at the same time, as a variation of SIT [40]. Gene drives have already proven to be extremely useful for engineering mosquito populations in terms of their fertility [30]. The first gene drive demonstration was published in March 2015 by the developmental biologists Valentino Gantz and Ethan
Bier, working at the University of California, San Diego [28]. Soon after, their team developed an autonomous CRISPR/Cas9-mediated gene drive system in the Asian malaria vector Anopheles stephensi [29]. Niwa stated that "one of the future prospects of genome editing technology for insects is application to social issues such as pest management" and raised the question of "whether genetic manipulation modifying the characteristics of agricultural and sanitary pests by utilizing genome editing technology can give a solution to various problems caused by pests" [49]. In a sense, gene drive may be described as a dramatic improvement on existing technology, a breakthrough past its constraints; however, some qualitative differences exist. The main purpose of sterilization is to eradicate specific species by mass release. A gene drive organism, such as a malaria-resistant mosquito, does not necessarily have to be a sterile individual. We therefore have to consider the possibility that traces of the modified genome will remain in populations over a long period and affect entire ecosystems in the future. A conventional SIT program can easily be suspended, and its environmental effect is reversible unless the target species is exterminated completely. By contrast, a CRISPR-based gene drive may be difficult to stop once genetically engineered individuals are released into the wild population, while its effect on the entire ecosystem remains unknown and irreversible. For better or worse, gene drive technology has a high potential to change the world and, accordingly, has caused controversy over its safety and security in recent years [37, 55].
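
The contrast between conventional SIT and a homing drive can be made concrete with a minimal population-genetics sketch (my own illustration, not taken from this chapter or its sources). It assumes random mating, an effectively infinite population, no fitness cost, and a hypothetical conversion efficiency c: in a drive/wild-type heterozygote the wild-type allele is converted with probability c, so heterozygotes transmit the drive allele with probability (1 + c)/2 instead of the Mendelian 1/2.

```python
# Minimal sketch: deterministic allele-frequency dynamics of a homing
# gene drive. Assumptions (hypothetical): random mating, infinite
# population, no fitness cost, conversion efficiency c in heterozygotes.

def next_freq(p: float, c: float) -> float:
    """One generation: p' = p^2 + 2*p*(1-p)*(1+c)/2 = p*(1 + c*(1-p))."""
    return p * (1.0 + c * (1.0 - p))

def generations_to(p0: float, c: float, target: float = 0.99) -> int:
    """Generations until the drive allele exceeds `target` (needs c > 0)."""
    p, gen = p0, 0
    while p < target:
        p, gen = next_freq(p, c), gen + 1
    return gen

if __name__ == "__main__":
    p0 = 0.005  # a small release: 0.5% of all alleles in the population
    print("c = 0.00 (Mendelian): frequency stays at", next_freq(p0, 0.0))
    for c in (0.5, 0.95):
        print(f"c = {c:.2f}: ~{generations_to(p0, c)} generations to 99%")
```

Under Mendelian inheritance (c = 0), a rare, cost-free allele simply stays at its release frequency, whereas even a modest conversion efficiency carries it from 0.5% to near-fixation within a few tens of generations. This single multiplicative term is the dynamic behind both the hoped-for efficiency and the feared irreversibility discussed above.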

13.4.2 Availabilities and Risks of Gene Drive

Gene drive is a versatile and widely applicable technology. Currently, eradication plans utilizing gene drive technology mainly focus on disease-carrying insects, so it is necessary to consider the future possibility of applying gene drive in other fields (see Table 13.1). Owing to these multiple applications, the controversy over the safety, regulation, and ecological impact of living modified organisms (LMOs) and genetically modified crops (GMOs) may well arise again. Gene drive is considered an ultimate weapon in the fight against mosquito-borne diseases such as malaria, Zika, and dengue fever. Bill Gates, the founder of Microsoft Corporation, and the Bill and Melinda Gates Foundation set up a not-for-profit research consortium, "Target Malaria", that aims to develop and share technologies for malaria control. They endorsed the gene drive project and the release of genetically modified mosquitoes to combat malaria [9, 65]. Bill Gates also called gene drive a "key tool to reduce malaria deaths" and stated positively in a 2016 interview that "gene drives, I do think, over the next three to five years will be developed in a form that will be extremely beneficial" [66]. Nina Fedoroff, a molecular biologist, likewise pointed out in her TED (technology, entertainment, design) conference talk that "biological control of harmful insects can be both more effective and very much more environmentally friendly than using insecticides, which are toxic chemicals. That was true in Rachel Carson's time; it's true today" [27].

Table 13.1 Potential applications for gene drive research [68, p. 4, Table S-1]

Public health
• Control or alter organisms that carry infectious diseases that affect humans, such as dengue, malaria, Chagas, and Lyme disease
• Control or alter organisms that directly cause infection or disease, such as Schistosomiasis
• Control or alter organisms that serve as reservoirs of disease, such as bats and rodents

Ecosystem conservation
• Control or alter organisms that carry infectious diseases that threaten the survival of other species
• Eliminate invasive species that threaten native ecosystems and biodiversity
• Alter organisms that are threatened or endangered

Agriculture
• Control or alter organisms that damage crops or carry crop diseases
• Eliminate weedy plants that compete with cultivated crops

Basic research
• Alter model organisms to carry out research on gene drive function and effects, species biology, and mechanisms of disease

Gene drive may also serve as a secret weapon in the battle against invasive alien species. Dr. Leitschuh's research team published a paper in 2017 suggesting gene drive technologies as an alternative to toxicants for rodent eradication [39]. Island Conservation, a conservation group based in Santa Cruz, California, is "pursuing the creation of "daughterless" mice, which, due to a gene drive, are only able to have male offspring. The gender-biasing effect would drive down mouse populations on an island, possibly to zero if it proves effective. […] Released in large enough numbers (of genetically modified mice) on an island, the daughterless rodents could, over the course of several months to a few years, result in a mouse population that is, so to speak, all Mickey and no Minnie" [56]. Predator Free 2050 Limited, an organization established to help reach the New Zealand Government's ambitious goal of eradicating possums, stoats, and rats by 2050, said that it is "currently keeping a watching brief on developments in gene editing technologies" [51]. To date, "despite the buzz around gene drives in New Zealand's conservation circles, there are no concrete plans to actually use them" [73].

Naturally, not all stakeholders agree with these eradication projects, because of unknown interactions and unpredictable risks [12]. Kevin Esvelt, one of the early pioneers of gene drive at the Massachusetts Institute of Technology (MIT), discussed various safeguards and control strategies, possibilities of reversibility, and release precautions, as well as transparency, public discussion, and evaluation approaches, in his study [21]. Later, he published a paper titled "Conservation demands safe gene drive" with his colleague Neil Gemmell and cautiously noted that a "self-propagating CRISPR-based gene drive system is likely equivalent to creating a new, highly invasive species: both will likely spread to any ecosystem in which they are viable, possibly causing ecological change" [22, p. 2]. The National Academies of Sciences, Engineering, and Medicine (NASEM) suggested that "the considerable gaps in knowledge about potential off-target (within the organism)
and non-target (in other species or the environment) effects necessitate a collaborative, multidisciplinary approach to research, ecological risk assessment, development of public policy, and decision making for each proposed application of a gene drive technology" [68, 69, p. 1]. However, Kenneth Oye, a political scientist who studies emerging technologies at MIT in Cambridge, notes that such technological advances are outpacing the regulatory and policy discussions surrounding the use of gene drive to engineer wild populations [44]. Gene drives are controversial because of their ability to alter entire ecosystems and because of their military potential. Fifteen years ago, the National Research Council stated in Biotechnology Research in an Age of Terrorism (the Fink Report) that "biotechnology represents a "dual use" dilemma in which the same technologies can be used legitimately for human betterment and misused for bioterrorism" [45, p. 15]. Gene drive is certainly no exception. Suda expressed concerns about the dual use problem in synthetic biology, including gene drive [62], pointing out the potential for deliberate misuse or abuse of gene drive, such as enhancing the infectious capacity of pests or annihilating agricultural resources.

13.4.3 Regulatory Frameworks and Precautionary Attitudes Towards Gene Drive

At present, gene drive technology requires the establishment of regulatory frameworks based on precautionary attitudes because "gene drives are so effective that even an accidental release could change an entire species, and often very quickly. […] [I]t could be a disaster if your drive is designed to eliminate the species entirely" [32]. A NASEM committee concluded that "there is insufficient evidence available at this time to support the release of gene-drive modified organisms into the environment. However, the potential of gene drives for basic and applied research are significant and justify proceeding with laboratory research and highly controlled field trials" and suggested, in the final chapter of its report on gene drive technology, the necessity of introducing phased testing and robust ecological risk assessments [68, pp. 177–178]. It also stated that phased testing, as "outlined by the World Health Organization (WHO) for testing genetically modified mosquitoes [72], can facilitate a precautionary step-by-step approach to research on gene drives" [Ibid., p. 6]. In Europe, the European Food Safety Authority (EFSA) published Guidance on the environmental risk assessment of genetically modified plants (2010) and animals (2013) [23, 24]. The Panel for the Future of Science and Technology (STOA), an integral part of the European Parliament, held a workshop on "the science and ethics of gene drive technology" on March 19, 2019 [60]. Critical Scientists Switzerland (CSS), the Federation of German Scientists (VDW), and the European Network of Scientists for Social and Environmental Responsibility (ENSSER) jointly released a statement on biodiversity and gene drives [18]. By considering the
idea of the precautionary principle, they concluded that "a moratorium is required":

This is clearly a case for applying the Precautionary Principle laid down as Principle 15 in the Rio Declaration on Environment and Development. It means in this case to omit risky measures for the time being, even if their intention is an ethical one. Therefore, we call for a moratorium on the release, including experimental release, of organisms containing engineered gene drives.

They published a report on gene drives again in 2019 and emphasized the importance of the precautionary principle in Chap. 8 [19]. The three scientific organizations expressed the opinion that "[t]he wisdom of applying the Precautionary Principle may be our best guide when facing this new and potent technology [Ibid., p. 13]". The Academic Association for Promotion of Genetic Studies in Japan (AAPGS) issued an official statement in both Japanese and English in 2017 that gave the following three recommendations regarding gene drive:

1. Share information on gene drive with all researchers at the institution;
2. Record protocols or planned experiments involving gene drive technology;
3. Be certain that appropriate containment measures are taken.

In the statement, AAPGS mentioned that "it is extremely important to take appropriate containment measures in compliance with the Act on the Conservation and Sustainable Use of Biological Diversity through Regulations on the Use of Living Modified Organisms," or the "Cartagena Act" [1, p. 2]. "Gene drive organisms (GDOs) are equivalent to living modified organisms (LMOs) mentioned in the Cartagena Protocol on Biosafety because genome editing tool is incorporated into them. Consequently, unleashing GDOs from a laboratory under the Type II Use, namely use under the closed system, is just illegal" [64]. In Japan, gene drive organisms are thus subject to regulation under the Cartagena Act. However, the discussion on gene drive experiments is far from complete [Ibid.]. The African Center for Biodiversity (ACB) published two brief papers in 2018 on releasing genetically modified mosquitoes for the elimination of malaria. Both papers pointed to the insufficient compliance efforts of the organizations planning gene drive experiments and further field releases, and called for the governance of this type of emerging technology in African countries [2, 3].

Although very few researchers and countries think that no regulatory efforts are required, a moratorium on gene drive research is very hard to implement. The United Nations (UN) Convention on Biological Diversity (CBD) rejected calls for a global moratorium on gene drives in 2016 [10]. Two years later, in spite of scientists' worries about banning gene drive [13], the UN agreed to limit gene drives but again rejected the idea of a moratorium: a proposal to temporarily ban the release of organisms carrying gene drives was turned down on November 29, 2018, at a meeting of the UN CBD in Sharm El-Sheikh, Egypt [14]. The UN representatives
“agreed to changes to the treaty that were vague enough that both proponents and sceptics of gene-drive technology are touting victory. Signatories to the treaty agreed on the need to assess the risks of gene-drive releases on a case-by-case basis. They also said that local communities and indigenous groups potentially affected by such a release should be consulted” [Ibid.].

13.4.4 Developing Technical Solutions for Safeguards

Developing safer research and experimental procedures for gene drive organisms is an important task for the future. Researchers have proposed "reversal drives" as a countermeasure to overwrite a previous drive if a problem arises [21, 71]. It has also been found, however, that resistance mechanisms to gene drives can emerge and act as barriers to their use [11, 15]. Despite gradual progress in fundamental research, the environmental impacts of gene drive have not been fully examined yet. Safer gene drives are strongly desired, and scientists have already conducted related studies [4, 16, 22, 36]. The Defense Advanced Research Projects Agency (DARPA) sponsored a "Safe Genes" program that "supports force protection and military health and readiness by protecting Service members from accidental or intentional misuse of genome editing technologies" [20]. The purpose of its "three primary technical focus areas", consisting of "control of gene editing," "countermeasures and prophylaxis," and "genetic remediation", is "to develop tools and methodologies to control, counter, and even reverse the effects of genome editing—including gene drives—in biological systems across scales" [Ibid.]. DARPA is one of the biggest funding agencies in the synthetic biology domain and gene drive research [34]. Note that DARPA is an agency of the United States Department of Defense, and its mission is "to make pivotal investments in breakthrough technologies for national security". In this respect, there are people who are willing to use gene drive technology for military applications. The fact that DARPA funds many research programs on synthetic biology does not necessarily mean that the outcomes of these programs are directly related to weapon development. However, we cannot completely dismiss concerns about the "dual use" aspect described above, because "gene drive R&D for civilian use and for military use cannot be separated" [19, p. 13].
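
Returning to the resistance mechanisms mentioned at the beginning of this section, the earlier sketch can be extended (again a hypothetical illustration of my own, not a model from the cited studies) with a third allele R that arises when cutting in a heterozygote is misrepaired, at rate rho per conversion attempt, and that can never be converted afterwards. Even in this deliberately neutral model, with no fitness costs, the drive allele stops short of fixation:

```python
# Extension of the homing-drive sketch with resistance alleles.
# Hypothetical neutral model: D = drive, w = wild type, R = resistant.
# In a D/w heterozygote the wild-type allele is converted to D with
# probability c and misrepaired into an uncleavable R with probability
# rho. Under random mating this yields, per generation:
#   p' = p + p*q*c      (drive allele D)
#   s' = s + p*q*rho    (resistant allele R)
# where q = 1 - p - s is the wild-type frequency.

def step(p: float, s: float, c: float, rho: float):
    q = 1.0 - p - s
    return p + p * q * c, s + p * q * rho

if __name__ == "__main__":
    p, s = 0.005, 0.0  # small release, no pre-existing resistance
    for _ in range(200):
        p, s = step(p, s, c=0.9, rho=0.1)
    # D plateaus near c / (c + rho) = 0.9 instead of fixing at 1.0.
    print(f"drive allele: {p:.3f}, resistant allele: {s:.3f}")
```

In this cost-free setting, resistance merely caps the drive below fixation; with the fitness costs that real suppression drives impose, selection actively favors R, which is one reason resistance formation has been reported as a barrier to drive efficiency [11, 15].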

13.5 Two Emerging Technologies and a Sin of Omission

Human beings are now undoubtedly facing global-scale climate change. On the one hand, climate change, especially global warming, may be mitigated by using climate intervention and geoengineering technologies as countermeasures. However, if negative effects that no one can predict in advance were to surface, these technologies would become a subject of condemnation: "our ancestors did what they should not have done". On the other hand, if traditional countermeasures against climate change, such as the
development of alternative technologies and changes in people's lifestyles, turn out to be less effective than expected, and we have refrained from using geoengineering technology, future generations will accuse us of a sin of omission. They will say that "our ancestors did nothing although there were many available technologies to reduce the damage from the expected climate change to a minimum". Thus, geoengineering, as a powerful tool for addressing climate change, may cause intergenerational ethical problems.

An analogy can be drawn between geoengineering and gene drive. Prevention and treatment of various infectious diseases, as well as conservation of existing ecosystems, are major issues for humankind, and gene drive technology could contribute to the accomplishment of these goals. However, it too leads to a "sin of omission" problem. On the one hand, gene drive may dramatically reduce the number of affected individuals by eradicating vectors and may maintain or even improve the biodiversity of specific regions. Yet if negative effects become apparent later, it will become a subject of condemnation: "our ancestors did what they should not have done." On the other hand, if traditional countermeasures against various infectious diseases and invasive alien species fail to function effectively despite enormous expenditures of time and money, resulting in vast losses of human lives and biodiversity, future generations will accuse us of a sin of omission. They will say that "our ancestors did nothing although there were many available technologies to avoid the risks of disease-carrying pests, invasive alien species, and the use of large amounts of pesticides". Because gene drive technology, like geoengineering, is a powerful tool for tackling pressing problems, it may likewise cause problems in the field of intergenerational ethics.

Emerging technologies exhibit some common features. Geoengineering and gene drive are innovative but unpredictable. They can have irreversible and catastrophic consequences for our planet in a worst-case scenario and can affect a wide spatio-temporal range. Finally, these technologies may produce undesirable outcomes regardless of any action or inaction. At the same time, we have to pay attention to the fact that although these emerging technologies contain unknowns in addition to actual and potential risks, they certainly provide us with solutions for some serious issues we are currently facing. Thus, it is necessary to weigh the risks and benefits of using these technologies against those of not using them and to consider the ethical aspects of the problem (e.g., "Is it justifiable to set an open-ended moratorium in exchange for exposing people in infectious areas to the risk of acquiring mosquito-borne diseases?"). Before doing that, I suggest that we think about the possible situations and risks that we really want to avoid. The meanings of words such as "precautionary" and "environmentally friendly" can be very broad and depend on our viewpoints and attitudes. For example, the sentences "geoengineering should be banned in accordance with precautionary attitudes" and "geoengineering should be promoted in accordance with precautionary attitudes" are compatible.
The sentences "gene drive is environmentally friendly because it will reduce the amount of pesticides used" and "gene drive is not environmentally friendly because it can damage the ecosystem" are also compatible. Precautionary attitudes would support
both opinions. In such a case, it is hard to reach a consensus between the advocates and opponents of these technologies by appealing to precautionary attitudes. Nevertheless, even if we cannot reach an agreement on a specific issue, it is better to identify the risks we want to avoid or prevent as far as possible, because we ultimately have to take some risks in order to avoid others.

13.6 Conclusions

The precautionary principle, or precautionary attitude, is an essential concept for humans living in the Anthropocene. Owing to the emergence of geoengineering for manipulating the climate and of gene editing technologies including gene drive, we must urgently reconsider the conventional ideas and concepts related to environmental health. In the analysis of these emerging technologies, terms such as "precautionary", "environmentally friendly", and "omission" may have different meanings for different stakeholders. Under certain circumstances, both the advocates and opponents of a specific emerging technology can claim that their opinions rest on the same premises. For that reason, we need to identify the situations and risks that we really want to avoid, in addition to conducting a risk–benefit analysis. In other words, we have to cooperate with all stakeholders to consider every conceivable scenario associated with these emerging technologies and to prepare ourselves for an uncertain future as much as we can. In this respect, as we can see in the codes of ethics of professional engineering societies, "the public health, safety, and welfare" should have priority over other considerations. Have a safe drive.


Part IV

Science and Society

Chapter 14

Exploring the Contexts of ELSI and RRI in Japan: Case Studies in Dual-Use, Regenerative Medicine, and Nanotechnology

Ken Kawamura, Daisuke Yoshinaga, Shishin Kawamoto, Mikihito Tanaka, and Ryuma Shineha

Abstract In this paper, we focus on the Japanese context of issues related to ELSI (Ethical, Legal and Social Implications) and RRI (Responsible Research and Innovation), delving into the cases of the dual-use debates led by Japanese physicists, stem cell research (SCR) and regenerative medicine (RM), and the media coverage of nanotechnology risk in Japan. Through our quantitative analysis of discussions on these three topics, we identify the diverse ways in which people shape ELSI/RRI discussions of the three technologies. In the first two cases, dual-use and SCR, a similar structure of discourse is identified: those in favor of the technology tend to emphasize its technical and economic aspects, while opponents criticize such moves on the basis of ideals of pacifism or responsible governance. In the third case, nanotechnology, by contrast, criticism of the concentration on the technical and economic aspects of technology was shared across the arguments. We show that the Japanese media drew on the memory of post-war pollution disasters to help people imagine the risks of the emerging nanotechnology, and thereby argued for overcoming the economy-first principle. These findings show how the social context shapes people's imagination of the benefits and risks of certain technologies. We must take such "socio-technical imaginaries" into consideration during ELSI and RRI discussions.

Keywords ELSI · RRI · Dual-use · Regenerative medicine · Nanotechnology

K. Kawamura · R. Shineha, Osaka University, 2-8 Yamadaoka, Suita, Osaka 565-0871, Japan
D. Yoshinaga, Waseda Institute of Political Economy, 1-6-1 Nishiwaseda, Shinjuku, Tokyo 169-8050, Japan
S. Kawamoto, Hokkaido University, Kita 10, Nishi 8, Kita, Sapporo, Hokkaido 060-0809, Japan
M. Tanaka, Waseda University, 1-6-1 Nishiwaseda, Shinjuku, Tokyo 169-8050, Japan

14.1 Introduction

Since the inclusion of an Ethical, Legal, and Social Implications (ELSI) program in the early stages of the Human Genome Project by the National Institutes of Health, the scope and limits of the concept of ELSI have been extensively discussed. To highlight the implications of ELSI in discussions concerning science and technology policy (STP) and its governance, new varieties of technology assessment (TA) and public engagement trials have been considered. Guston and Sarewitz, in particular, discussed real-time technology assessment (RT-TA) as a new concept for the governance of science and technology. RT-TA consists of four key approaches: "analogical case studies," "research program mapping," "communication and early warning," and "technology assessment and choice" [1]. In short, RT-TA aims to serve as the core process for the ELSI of emerging science and to contextualize innovation processes by monitoring the current contexts of emerging research and considering ELSI with insights from previous cases. Accordingly, Guston and Sarewitz state that "real-time TA can inform and support natural science and engineering research, and it can provide an explicit mechanism for observing, critiquing, and influencing social values as they become embedded in innovations."

Another significant discussion of TA for the social aspects of emerging science and technology, the concept of "anticipatory governance," should also be introduced [2–4]. Anticipatory governance is defined as "a broad-based capacity extended through society that can act on a variety of inputs to manage emerging knowledge-based technologies while such management is still possible" [3]. Stilgoe and Guston likewise state that "anticipatory governance proposes the development of foresight, engagement, and integration as a response to various pathologies to innovation that are conventionally realized only with hindsight" [4]. In short, the key point of "anticipatory governance" is how to construct the necessary governance system through co-creation and sharing processes that consider future directions, examining ELSI through upstream and public engagement with various actors. Through the discussions described above, many related key themes have been examined, including scientific autonomy and changes in the social responsibilities of experts, the social construction of technology, uncertainty, the path-dependency of issues, ethical dilemmas, deliberation, and meta-regulation.

Based on these alternative concepts of RT-TA and anticipatory governance, the integrative concepts of "responsible innovation" (RI) and "responsible research and innovation" (RRI) have emerged. Particularly in the EU context since 2011, RRI has become the central concept of the "Science with/for Society" program of Horizon 2020, the basic framework of EU science policy [5–7]. Thus it was expressed that "Responsible innovation means taking care of the future through collective stewardship of science and innovation in the present" [8]. With this idea, Horizon 2020 claimed that RRI policy would engage society more broadly in its research and innovation activities, increase access to scientific results, ensure gender equality in both the research process and research content, take into account the ethical dimension, and promote formal and informal science education.

The emergence of the new integrative term, however, does not mean that the arguments behind it are entirely new and disconnected from older issues in the governance of science and technology. Stilgoe and Guston pointed out that "[a]t least since World War II, recognition of the power of the science and technology has forced reconsideration of the responsibilities that should follow such power" [4]. As this passage implies, they stressed the historical continuity between the discussion of social responsibility in science and the RRI discussion, from the physicists' response to nuclear weapons and their commitment to arms control, through the life scientists' gathering at the Asilomar conference and the arguments that followed, to the inclusion of "responsible development" in the U.S. National Nanotechnology Initiative (NNI) in 2001.

With this historical background, scholars of science and technology studies (STS) have categorized the key components of RRI into four groups: "anticipation," "reflexivity," "inclusion," and "responsiveness" [8, 9]. With this concept, upstream engagement with various actors is expected to actively encourage discussions on ethical concerns, governance systems, and responsibilities [8, 10]. More recently, RRI has been discussed in the contexts of specific fields. For example, the European Commission published a report entitled Current RRI in Nano Landscape, discussing the social aspects of nanotechnology from an RRI perspective [11]. Other previous studies explore how to think about and deal with gaps that may be generated by differential R&D progress on nanotechnology across European countries. Concurrently, critical examination of RRI has already begun; in particular, the effects of alibi-making and the politics concerning RRI deserve attention hereafter.

Considering these previous studies, we focus on the Japanese context of RRI-related issues, delving into and comparing the cases of the arms control and dual-use debates led by Japanese physicists, stem cell research (SCR) and regenerative medicine (RM) as exemplars of the current life sciences, and the media coverage of nanotechnology risk in Japan. One of the important points of an RRI perspective is not only examining prospective issues and debating measures for dealing with them as early as possible, but also opening up the arena of public dialogue. Practices and discussions related to RRI represent different framings among various actors. How do people conceive of "issues"? How can we understand the politics of RRI discussions in real contexts? The main purpose of this paper is to describe the ethical, political, and media concerns and conflicts in discussions of emerging science and technologies among scientists, government entities, and the public in Japan. We postulate that there are common structural issues behind discussions on ELSI and RRI across various emerging sciences and technologies.


14.2 Dual-Use Issues in Japan

What kinds of discourses and boundary work have been generated in the politics of the use of science and technology? Discussions on ELSI and RRI cannot avoid politics, and we must therefore extract lessons from previous cases. In this section, we analyze how people comprehend the dual-use issue of science and technology in the Japanese context, with its strong taboo on military research. We will show that the constellation of this taboo, the arguments on the responsibilities of scientists, and the government's position on research funding formed a peculiar space of discourse on the dual-use issue. First, we briefly examine the history of the relevant concepts; we then analyze minutes of the National Diet; finally, we show the results of a content analysis of the media coverage of dual-use in Japan.

14.2.1 Japan, the Ambiguous, and Dual-Use

Dual-use is itself a difficult concept to approach: Galileo's telescope might fall under the category of dual-use in the broadest sense. As is well known, the term "dual-use" has multiple meanings. In the context of Cold War science policy, it refers to technologies that have both civilian and military applications. In the context of post-9/11 anti-terrorism policy, it refers to technology that can be misused, especially in violation of the international law regarding the conduct of war. This latter conception of dual-use was formalized in the so-called Fink Report, in which a "dual-use dilemma" is defined as the situation in which "the same technologies can be used legitimately for human betterment and misused for bioterrorism" [12].

Although the term dual-use has different meanings depending on the context, these meanings are perceived as connected to each other in the dual-use arguments in the United States and the EU: some technologies have both civilian and military applications, and such technologies might be used as a means of terrorism rather than in accordance with the rules of war [13]. In the Japanese context, however, it has been pointed out that dual-use in the sense of dual applicability and dual-use in the sense of misusability are sometimes sharply distinguished [14]. Military research has long been prohibited in Japan since WWII, and the structure of the dilemma concerning misuse is not broadly understood. Under such circumstances, as will be further discussed, the concept of dual-use has long been a target of dispute in post-war Japan, with many different actors arguing about dual-use issues based on their own conceptions of dual-use.

In this sense, dual-use research poses a serious problem for the concept of RRI: while the concept of security has recently been broadened to include human security or safety, thus encompassing many aspects of RRI, the ambiguity derived from this extension of meaning might obscure the boundaries traditionally drawn between civil and military science, posing new risks. We briefly outline the historical context surrounding the concept of dual-use in Japan from WWII through the Cold War to the post-Cold War era, attempting to clarify how the dual-use argument in Japan can be analyzed in terms of the concept of RRI.

14.2.2 WWII to the Cold War: Formation of the Japanese Context

World War II was a total war that involved massive scientific mobilization, and Japan was no exception. Well-known physicists such as Yoshio Nishina and Bunsaku Arakatsu cooperated with the military's atomic bomb project; many medical scientists performed major roles in the biological warfare program of the notorious Unit 731 in China. In reaction to scientists' wartime cooperation, the Science Council of Japan, established in 1949, adopted a resolution at its 6th General Assembly in 1950 never to conduct military research. The Council declared that "as a founder of the cultural state and an angel of World Peace, hoping that the ravages of war will never recur (…) in order to protect the integrity of scientists, it resolves never to commit scientific research which is conducted for the purpose of war" [15].

Despite the resolution of 1950, however, Japanese scientists were never insulated from military research. Since the outbreak of the Korean War and the escalation of the US-Soviet Cold War, research funded by the US military has been conducted intermittently by Japanese scientists [16]. Against the backdrop of the rising anti-war movement in reaction to the Vietnam War, a scandal was revealed in 1967 wherein Japanese physicists had received funding from the US military, leading the Physical Society of Japan to announce that its members would never receive "funding from the military" nor "cooperate with the military in any sense".

Progress in space development made it difficult for scientists to remain unrelated to military research. The National Space Development Agency of Japan (NASDA) was established in 1969, accelerating the development of space technology, which could clearly be diverted to military use. According to the National Space Development Agency Law, NASDA was to develop space technology "only for peaceful purposes". However, it remained ambiguous whether this "peaceful purposes" clause meant that the technology must not be used for any military or military-related purposes, or that the technology could be used as long as it did not contribute to invasions of other countries.

The Japanese-local constellation of security, dual-use, and military research was crystalized by the policy advisors to the Ohira Cabinet as the concept of "comprehensive security" in the late Détente era. This conceptualization emphasized the non-military aspects of security, including energy security, food security, and countermeasures against natural disasters (especially large earthquakes), and formalized official development assistance (ODA) as a chief measure for stabilizing food security. A military conceptualization of security would not have been supported given the historical context of post-war Japan.


The concept of "comprehensive security" can be seen as a progressive formulation of security, anticipating the post-Cold War concept of human security [17]. While the concept of human security did not completely diverge from the traditional concept of national security, it emphasized aspects such as freedom from fear (humanitarian intervention) and freedom from poverty (development assistance), arguing that excessive militarization and a lack of development are national danger signals. Human security was rapidly introduced to Japan, where the concept of comprehensive security served as a pretext for its introduction. This conceptualization of security was succeeded by the concept of "safety and security" in the 2000s, as will be further discussed below.

14.2.3 Cold War II and Military-Industry Cooperation in Japan

During what is commonly referred to as Cold War II, the dual-use debate revived. In the wake of the Soviet invasion of Afghanistan in 1979, the Détente came to an end. Two years later, the Reagan administration took office and reinforced its well-known tough stance on the Soviet Union. The administration focused on semiconductor technology, the basic technology for missile defense against the Soviet Union, and established the Very High-Speed Integrated Circuit (VHSIC) program, which aimed to develop integrated circuits that could be used for both military and civilian purposes. It was the first governmental dual-use technology development program.

In Japan, the Defense Agency and the military industry were steadily strengthening their cooperation. In 1980, the Defense Technology Association was launched with the aim of developing armaments in Japan, with Sony and Honda participating. When the Reagan administration announced the Strategic Defense Initiative (SDI), the Japanese government decided to participate and signed an agreement under which private companies would take part in the SDI regime according to the inter-governmental arrangement. University researchers responded against this progressing formation of a military-industrial complex in Japan during Cold War II, but their response was confined to sporadic efforts, such as the Nagoya University charter against military research.

14.2.4 Post-Cold War / Post-9/11: Remilitarization of Dual-Use

The 9/11 attacks and the anthrax bioterrorism that followed shocked not only the security community but also the science community, which was suspected of enabling bioterrorism, and the concept of dual-use was reformulated to focus more on the problem of misuse. The most significant contribution to this reformulation was the Fink Report, published by the National Research Council, according to which the dual-use dilemma arises when "the same technologies can be used legitimately for human betterment and misused for bioterrorism" [12]. In the United States, the concept of dual-use was thus re-introduced with a strengthened regulatory aspect, while remaining based on the old notion of dual applicability for civilian and military purposes.

When this concept of dual-use with an enhanced military connotation was introduced to Japan, the historical context we have discussed was relevant. The reintroduction was first carried out by governmental sectors (the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Japan Science and Technology Agency (JST)) utilizing the broad concept of security. In the 2nd Science and Technology Basic Plan of 2001, the concept of dual-use was virtually reintroduced under the label of "safety and security (including resilience)". Japanese governmental sectors thus used less militaristic concepts as proxies for security and dual-use.

Recently, however, the governmental stance has become more straightforward. The Law for Facilitating Governmental Research Exchange of 1986 was revised into the Research and Development Enhancement Act, and in 2013 the Act was further revised to include Article 28, which defined "science and technology as a basis for the safety of our people, our nation, and our economy and society", clearly implying the concept of dual-use technology. Furthermore, the introduction of the Research Promotion System by the Ministry of Defense enabled a more outright promotion of research on dual-use technology.

The reaction of academia to the concept of dual-use has been complicated by these governmental maneuvers. In the early 2010s, the Science Council of Japan proposed that the misuse aspect of dual-use should be addressed. However, the re-introduction of the militaristic dual-use concept by governmental sectors, as described above, has caused a regression to the traditional form of dual-use in the context of dual applicability for civilian and military use [14].

14.2.5 Current Situation of Dual-Use Issues in Japan: Focusing on Discussion in the Diet and the Media

Against the backdrop of the historical context summarized so far, how can we grasp the structure of arguments on RRI/dual-use issues? In this section, through a content analysis of the RRI/dual-use discussion, we try to explicate the patterns of argument developed in the National Diet and the media.

14.2.5.1 Diet Discussion

First, we collected from the Minutes of the Diet Search System (http://kokkai.ndl.go.jp) all remarks since 1986 mentioning phrases such as "academic research," "security," "dual-use," "military research," and "military & civil purposes," and cleaned the data. We then conducted a correspondence analysis on the resulting 1,754 floors (speaker turns; 8,091,816 characters in total) after natural language processing. The result is shown in Fig. 14.1, in which the above-mentioned keywords and related words are plotted (the data analyzed in this figure were cleaned with respect to the important words). The figure clearly shows that remarks related to "security" occupy a position around the origin of the coordinate axes, indicating that those arguments are conducted in terms of security. In contrast, the components positioned in the left field demonstrate that issues concerning "military research" or "dual-use" were argued relatively separately, with the technological aspect of "dual-use" emphasized.

The content of the actual remarks by lawmakers of the ruling and opposition parties implied that the term "dual-use" was used in Japanese arguments to bleach out the abhorrent militaristic aspect of military research and to emphasize the technical aspect, stressing the prospective economic gains that dual-use research can generate. In other words, the term "dual-use" was used by supportive lawmakers to reinforce the imagery that Japan could increase its national wealth by making maximum use of value-neutral technology, an imagery passed down from the success story of post-war reconstruction and rapid economic growth.

Fig. 14.1 Correspondence analysis of the Diet minutes including keywords related to dual-use issues
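The chapter does not name the software used for the correspondence analysis described above; purely as an illustration, the following Python sketch shows how such an analysis of a keyword-by-remark-group contingency table could be computed with standard numerical tools. The group labels, keyword list, and counts below are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical contingency table: rows are groups of Diet remarks
# (e.g., by party or period), columns are tracked keywords. In the
# actual study, the table was built from 1,754 floors after Japanese
# morphological analysis; these counts are invented for illustration.
keywords = ["security", "dual-use", "military research", "academic research"]
N = np.array([
    [120.0, 30.0, 5.0, 10.0],
    [40.0, 80.0, 60.0, 25.0],
    [10.0, 20.0, 90.0, 70.0],
])

P = N / N.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
# Standardized residuals: (P - rc') / sqrt(rc'), then SVD
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates place remark groups and keywords in the same
# low-dimensional space, as plotted in Fig. 14.1.
row_coords = (U * sv) / np.sqrt(r)[:, None]
col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
for kw, (x, y) in zip(keywords, col_coords[:, :2]):
    print(f"{kw:20s} dim1={x:+.3f} dim2={y:+.3f}")
```

In such a plot, words near the origin (like "security" in Fig. 14.1) co-occur with most row profiles, while words far from the origin mark distinctive clusters of remarks.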

14.2.5.2 Media Coverage of Dual-Use Issues

Next, in order to visualize the trends of the arguments occurring within the vicinity of the general public, we extracted the agendas set in newspaper articles. Targeting the two top-selling newspapers in Japan, The Asahi Shimbun and The Yomiuri Shimbun, we collected all articles referring to "security," "dual-use," and "military research" since 1986. Of these, 55 articles came from The Yomiuri Shimbun, which is known for its center-right stance and positive attitude toward the promotion of dual-use science and technology, while 446 articles came from The Asahi Shimbun, the traditional center-left media outlet. As our purpose here is to depict the general trends of argument in the major newspapers, not to delve into the ideological factor, we processed the aggregated 501 articles. The resulting co-occurrence word network is presented in Fig. 14.2.

This result clearly attests to the confusion and division in the dual-use arguments in Japan. Words related to military research conducted under the Japan-U.S. alliance are positioned in the center of the field. Words concerning arguments on nuclear weapons and the Cold War spread over the lower half of the field, forming clusters that indicate continuity with the arguments on the social responsibility of science in the Cold War era. Words related to statements by the Science Council of Japan and by universities form clusters in the upper half of the field. While the Council clearly criticized the promotion of military research, the stances of universities were divided, with some following the Council and others admitting military science. The network also shows that the concept of "freedom of academic research" is central to the arguments in the upper field. Considering that the data span 30 years, the configuration of the co-occurrence network can be summarized as follows: while arguments for the promotion of military science have gradually increased with changes in the post-Cold War international situation, Japanese academia has resisted such arguments.

From this analysis, we can recognize a very naive structure in the arguments on dual-use issues: the economic aspect of dual-use technology is emphasized in the Diet and the conservative press, while the military aspect is foregrounded but categorically refuted, with reference to the statements of the Science Council, in the liberal media. The question of how to materialize preventive measures covering the full scope of the dual-use issue, the question of foremost importance in RRI arguments, has been neglected. The divisiveness of the arguments between conservatives and liberals makes it difficult for the general public to participate in the open discussion on dual-use technology, which is necessary for implementing the concept of RRI.

Fig. 14.2 Co-word network of 501 news articles concerning dual-use (the 60 most frequent words; degree centrality; Jaccard similarity coefficient > 0.14)
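As a minimal sketch of how a co-word network like Fig. 14.2 might be constructed, the snippet below links words whose Jaccard similarity across articles exceeds the 0.14 threshold given in the caption and ranks them by degree centrality. The tokenized articles are invented placeholders; the real study used the 60 most frequent words over 501 articles.

```python
from itertools import combinations
import networkx as nx

# Hypothetical tokenized articles (sets of content words per article).
articles = [
    {"security", "military", "research", "university"},
    {"security", "alliance", "military", "technology"},
    {"university", "research", "freedom", "council"},
    {"council", "statement", "military", "research"},
]

# Document frequency of each word
words = sorted(set().union(*articles))
df = {w: sum(w in a for a in articles) for w in words}

# Add an edge wherever the Jaccard coefficient exceeds the threshold
G = nx.Graph()
THRESHOLD = 0.14  # as in Fig. 14.2
for u, v in combinations(words, 2):
    both = sum(u in a and v in a for a in articles)
    union = df[u] + df[v] - both
    if union and both / union > THRESHOLD:
        G.add_edge(u, v, weight=both / union)

# Degree centrality identifies the hub words of the discourse
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```

Thresholding on Jaccard similarity, rather than raw co-occurrence counts, keeps very frequent words from dominating the network simply by appearing everywhere.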


14.3 Anticipation of RRI: A Case of Stem Cell Science and Regenerative Medicine

The post-WWII arguments on the social responsibility of science and on dual-use were largely led by physicists who were aware of their responsibility for lending their hands to the creation of nuclear weapons. In the 1970s, technological advances in molecular biology drove the scientists in that field to raise consciousness of regulatory issues, epitomized by the precautionary guidelines of the Asilomar conference in 1975. Since then, biotechnology and the life sciences have emerged as a locus for arguments on ELSI and social responsibility in science.

In the current Japanese context, the government has sought to invest significant resources in SCR and RM, particularly after the establishment of induced pluripotent stem cells (iPSC) in 2006 and human iPSC in 2007. A variety of ELSI related to SCR and RM has been discussed: the destruction of embryos, which are sometimes regarded as potential humans whose dignity should be protected; the relationship with nationalism; the exploitation of women and of marginalized and disabled persons; and the effects of power and politics in bio-capitalization [18–29]. After the emergence of iPSC, new ELSI have been generated and highlighted: the creation of human-animal chimeras, the derivation of gametes from iPSC, and the potential for mass production and consumption, for example [30, 31].

According to the media survey we conducted elsewhere, using a co-word network analysis of more than 7,400 articles from three major Japanese newspapers (The Asahi Shimbun, The Yomiuri Shimbun, and The Mainichi Shimbun) ranging over 30 years, the first rise of Japanese media attention to ELSI issues in SCR and RM was triggered by the fabrication scandal of Hwang Woo-Suk in South Korea from 2004 to 2006 [32]. During the Hwang scandal, various ELSI were exposed, including the purchase and procurement of eggs from women and marginalized persons. As a result, a large and concrete co-word network concerning "ethics" was produced by the Japanese media. However, this ELSI framing disappeared after the appearance of iPSC: the connection of keywords to "ethics" decreased rapidly, while the framing of "national promotion" appeared with increasing clarity over time. This trend was common to the three newspapers and can thus be interpreted as a "peripheralization" of ELSI in the Japanese media framing of SCR and RM [32].

As a next step, we focus on the communication gap between scientists and the general public on the ELSI issues of SCR and RM. In collaboration with the Japanese Society for Regenerative Medicine (JSRM), one of the authors conducted a large-scale survey with 2,160 public responses and 1,115 responses from JSRM members, the majority of whom are scientists and medical doctors, on their views of the public communication of SCR and RM [33]. The results showed that more than 70% of the public wished to know more about "risk" and "anticipated new medical care". These topics were also of high interest to JSRM members (53.2% and 78.9%; see Fig. 14.3). However, stark differences were noted regarding the factors considered important for the acceptance of SCR and RM (Fig. 14.4). While JSRM members emphasized the importance of scientific validation (55.0%) and the necessity of research (36.3%), the public regarded the effectiveness of regulation (50.5%), the probabilities of potential risks and accidents (33.5%), and the clarification of responsibility and liability (32.2%) as more important. In short, the public was more interested in the post-realization aspects of RM than in its scientific and technological aspects, attesting to a growing concern about the ELSI issues of SCR and RM among the general public [33].

Fig. 14.3 Differences in the responses of JSRM members and the public to "What do you want to know?" (for the public) and "What do you want to convey?" (for JSRM members); respondents were asked to choose five factors. "Don't know or other" answers were omitted because of their low ratio. A chi-square test was conducted; **, p < 0.01. Abbreviation: RM, regenerative medicine [33]

Fig. 14.4 Differences in the responses of Japanese Society for Regenerative Medicine members and the public to "What factors are important for your acceptance of regenerative medicine? Please choose three factors." "Don't know or other" answers were omitted because of their low ratio. A chi-square test was conducted; **, p < 0.01. Abbreviation: RM, regenerative medicine [33]
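The group differences in Figs. 14.3 and 14.4 are reported as chi-square tests. A minimal sketch of how one such comparison could be run is shown below; the counts are back-calculated from the reported shares (50.5% of 2,160 public respondents choosing "effectiveness of regulation"), and the members' share is an assumed placeholder, since the raw cross-tabulation is not given in the text.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 contingency table for one survey item:
# rows = groups, columns = chose / did not choose the factor.
public_chose = round(0.505 * 2160)   # reported: 50.5% of the public
public_not = 2160 - public_chose
member_chose = round(0.300 * 1115)   # assumed share, for illustration only
member_not = 1115 - member_chose

table = [[public_chose, public_not],
         [member_chose, member_not]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}, dof = {dof}")
```

A p-value below 0.01 for an item would correspond to the "**" marks in the figures, indicating that the public and JSRM members choose that factor at significantly different rates.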


14.4 Media Reports on Nanotechnology Risks in Japan

We now look at another case of ELSI/RRI-related discourse on emerging science and technology. In this section, we analyze how the local context of ELSI considerations functions in discussions of emerging technology by giving a brief portrait of nanotechnology media coverage in Japan. The U.S. National Nanotechnology Initiative of 2001 included "responsible development" as one of its strategic goals, and thus paved the way for revitalizing the arguments on social responsibility in science and technology. Analyzing the media coverage and the local context of the ELSI issues of nanotechnology provides useful perspectives for the discussion on ELSI and RRI.

The mass media is the main, and in most cases the only, informational source on technoscience for the general public; people's knowledge, understanding, and attitudes are shaped by the information the media provide [34, 35]. Media analysis is therefore indispensable for considering the public communication of emerging technology. However, as is often the case with media studies in general [36], previous studies on the public communication of technoscience have presupposed an Anglo-American perspective. Recently, some researchers have set out to conduct media analyses that consider local or national elements [37, 38]. Nanotechnology is a highly suitable subject for this type of comparative analysis: it has come to be considered a model case for the public communication of technoscience, and many related surveys have been conducted in many countries.

The story of nanotechnology in Japan has been colored by an ambivalent self-perception of the relationship between national identity and technoscience. Since the modernization of the late 19th century, technology has always been, as it were, imported. The international prominence of post-war Japan's civil engineering has been recognized as the result of ingenuity, rather than creativity, applied to imported knowledge and technology [39]. From the 1970s to the 1980s, Japan achieved the worldwide fame of "Japan as Number One" [40], preparing the ground for the national identity of the 1990s, according to which the country's prominence in nanotechnology derived from a long tradition of small technology [41]. This national identity was reinforced by stories stressing Japan's foresight, such as that the invention of the Esaki diode (1957) by Nobel Prize winner Leo Esaki was the harbinger of nanotechnology, and that the concept of nanotechnology itself was coined by Norio Taniguchi. It is true that Japan led the field of nanotechnology until the first half of the 1990s, epitomized by Sumio Iijima's discovery of carbon nanotubes in 1991; it seemed that Japan's achievements in nanotechnology overcame the criticism often directed at it throughout the 1980s, namely that Japan had been capitalizing on the outcomes of other countries' more basic research. Japan's economic recession since the latter half of the 1990s, however, drastically changed the research environment of Japanese nanotechnology, shifting the focus from basic to applied research, which was supposed to contribute to the recovery of the nation's competitiveness. The announcement of the National Nanotechnology Initiative by the United States further sharpened Japan's sense of urgency regarding the industrialization of nanotechnology.


From then on, while Japan retained its technological advantage to some extent, skeptical views not only of the international competitiveness of Japan's nanotechnology but also of its innovation system came to be shared among experts [42]. Given the historical background summarized here, caught between pride as a leading country in nanotechnology and a sense of crisis over declining competitiveness, how did the Japanese media report the risks of nanotechnology?

To explore this question, we collected news articles about nanotechnology from four major national newspapers (The Asahi Shimbun, The Yomiuri Shimbun, The Mainichi Shimbun, and The Nihon Keizai Shimbun) published from January 1998 to December 2017. The online database of each newspaper was used to search for articles including the word "nanoteku" (the shortened form of nanotechnology in Japanese). From the search results, 41 articles were manually identified that contained statements about the risks of nanotechnology (Fig. 14.5). This number is relatively small compared to previous surveys conducted in other countries, according to which the volume of risk descriptions is much smaller than that of benefits [43].

What is most contrastive about this result of the content analysis of the Japanese media is that phrases such as "runaway" or "Pandora's box", commonly used to depict catastrophic future visions in other countries' media, rarely appear. The typical narrative of nanotechnology risk in such visions is that of grey goo, a fictional scenario in which out-of-control self-replicating robots rapidly spread across the world. The grey goo scenario, first presented in the work of molecular nanotechnology pioneer Eric Drexler [44] and featured in Michael Crichton's entertainment novel Prey [45], gained popularity and worldwide media coverage when Prince Charles mentioned its risk in 2003 [46]. Such fictional scenarios are often used by journalists to effectively promote public understanding of risks that are difficult to imagine when presented quantitatively [47]. In Japanese media coverage, however, Drexler and Prey were rarely mentioned, and the grey goo scenario was not utilized as an argument frame (Fig. 14.6).

Fig. 14.5 Changes over time in the number of news articles containing statements about the risks of nanotechnology in four newspapers (The Asahi Shimbun, The Yomiuri Shimbun, The Mainichi Shimbun, and The Nihon Keizai Shimbun)
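The risk coding behind Fig. 14.5 was done manually; the tally itself reduces to counting coded articles per year. A minimal sketch is given below, in which the records and the keyword filter are invented stand-ins for the study's manual coding.

```python
from collections import Counter

# Hypothetical screened records: (year, text) pairs for articles
# retrieved with the query "nanoteku". The keyword filter is only a
# stand-in for the manual risk coding used in the actual study.
RISK_TERMS = ("risk", "health", "asbestos")  # assumed coding terms

articles = [
    (2005, "carbon nanotube health risk compared to asbestos"),
    (2005, "nanotech investment grows"),
    (2008, "regulation of nanomaterial risk debated"),
]

# Count risk-coded articles per year, the quantity plotted in Fig. 14.5
risk_by_year = Counter(
    year for year, text in articles
    if any(term in text for term in RISK_TERMS)
)
for year in sorted(risk_by_year):
    print(year, risk_by_year[year])
```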


Fig. 14.6 Risk comparisons and analogies in newspaper articles on nanotechnology

Instead of grey goo, asbestos appears as the most utilized analogy, which is presumably associated with the fact that media coverage of nanotechnology risks peaked in 2005: the 13 articles mentioning nanotechnology risks in that year comprise approximately one-third of all such articles. The reports involved the shocking incidents of that year, in which people living in the neighborhood of a Kubota Corporation plant had been exposed to asbestos scattered from the plant and suffered from mesothelioma (the "Kubota Shock"). As the media widely reported the incident, the asbestos disaster gained major recognition among the Japanese public as a case of pollution [48]. During the peak of the coverage, it was pointed out that nanoparticles, particularly carbon nanotubes, have physical features similar to asbestos, leading to the argument that similar damage might be caused by nanotubes, and to the counterargument that nanoparticles have always existed in the natural world and that there was no need to overreact.

It is noteworthy that a clear correspondence between asbestos and precautionary principles is observed. As previously known, Japan, along with the United States, is a country in which most people tend to consider precautionary principles as constraints on the free market and innovation [49]; thus, precautionary views would be expected to appear only rarely. However, our analysis shows that 10 articles argued that the government should take precautionary measures, many of them mentioning asbestos. This finding indicates the possibility that the history of environmental pollution in Japan shaped risk perceptions of nanotechnology. In the past, Japan has suffered major pollution disasters, including Minamata disease caused by mercury poisoning, knowledge of which is widespread through public education. As these disasters came to be recognized as byproducts of Meiji-era industrialization and post-war economic growth, educators were inclined to be critical of promoting rapid economic growth [50]. Health damage caused by the products of technology has therefore been recognized as the consequence of runaway industry, not of science and technology. By associating potential technological issues with the collective memory of pollution disasters, nanotechnology risks have been perceived as health damage caused by industrial applications, rather than as inevitable catastrophe. This association provided the context for articles arguing for the overcoming of economy-first principles and for the introduction of precautionary principles, materialized in restrictions and legislation on product shipment at the pre-commercialization stage.

The analysis presented so far demonstrates that the Japanese media utilized the memory of post-war pollution disasters to help people imagine the risks of emerging nanotechnology as products of the post-war techno-economic complex formed in the era of rapid economic growth. Despite the differences between the heavy and chemical industries that caused the pollution disasters and the hi-tech industries producing nanotechnology, at least in the early stage of the media coverage, the collective memory of public health crises shaped the media's reaction, most likely through audiences and journalists who had received Japanese public education.

The local and national context is thus widely visible and should be considered when arguing ELSI issues in a given country. Of course, this does not mean that such context always helps to clarify ELSI arguments. If one over-relies on analogies to past tragedies, one may underestimate the potential of a particular technology, as was the case with GMO crops. More importantly, such routine reliance on the past may function as an inclination or cultural tradition that undermines public argument. Given the linguistic distance of the Japanese public from English-speaking societies, the focus on risks remained a significantly minor part of the arguments on nanotechnology, and the rise of media coverage on nanotechnology risks associated with the pollution disasters was only temporary, never leading to effective public arguments. For over a century and a half, technology has been the chief means for Japan to catch up with the West. In one of the articles analyzed above, a scientist testified to the difficulty of discussing the safety and risks of nanotechnology while strong expectations are oriented toward the economic value of its applications. To develop constructive discussions, we must attend to the social context that prevents people from fully imagining the risks of certain technologies.

14.5 Discussion and Conclusion

Through our analysis of discussions related to ELSI and RRI in the cases of dual-use, SCR and RM, and nanotechnology, we have confirmed the following results.

In the first case, we discussed the historical context and the current situation of the dual-use issue in Japan. The results of our quantitative text analysis demonstrated the naive structure of discourse concerning dual-use issues both in the Diet and in the Japanese media: while those on the right use the term dual-use in such a way as to bleach out the abhorrent militaristic aspect of military research and to emphasize the technical and economic aspects of the technology, those on the left criticize such moves on the basis of the statements of the Science Council of Japan categorically prohibiting cooperation in military research. This divisive structure of discourse makes it difficult for scientists and the general public to participate in the discussion on preventive measures concerning dual-use, which are of foremost importance in the arguments on RRI and the social responsibility of science and technology.

Excessive focus on the technical and economic aspects can also be found in the second case of SCR and RM, where we confirmed the "peripheralization of ELSI" in the Japanese media's framing of SCR/RM. In addition, a large-scale questionnaire showed that the public was more interested in the post-realization aspects of RM, such as the cost of care, countermeasures for risks and accidents, and the clarification of responsibility and liability, than in its scientific aspects. We interpreted this as the public's anticipation of the "responsible governance" of SCR and RM. Interestingly, these results are similar to those of previous studies of public attitudes toward nuclear energy in the 1990s.

In the third case of nanotechnology, unlike the first two cases, criticism of the concentration on the technical and economic aspects of technology was shared throughout the arguments on risk. We showed that the Japanese media utilized the memory of post-war pollution disasters to help people imagine the risks of the emerging nanotechnology, and thus argued for overcoming the economy-first principle. Our results implied, somewhat ironically, that the collective memory of health damage diverted the media's attention from the specific character of nanotechnology and its associated risks. This finding shows that the social context sometimes prevents people from fully imagining the risks of certain technologies. In other words, we must take such "socio-technical imaginaries" [51] into consideration during ELSI and RRI discussions.

Our findings indicate how ELSI- and RRI-related issues are shaped in the Japanese context. It should be noted that the media played decisive roles in shaping the arguments on ELSI/RRI issues among the general public, academia, and the governmental sectors. As in the case of SCR and RM, the media depicted a simple success story of science and technology, marginalizing the ELSI/RRI implications the technology might have; in other cases, such as nanotechnology and dual-use technologies, the media relied on historical analogies to problematize the research and development of such technologies. It might be argued that ELSI/RRI discussions in Japan tend to be polarized and that the media lend a hand to this tendency. On the other hand, the public tends to exhibit interest in "responsible governance," which focuses on potential risks, measures for accidents, and the clarification of responsibility and liability. Historical memories and imaginaries of pollution disasters and WWII appear to affect the public's understanding of risk, ethical concerns, and the politics concerning emerging science and technologies. Without insight into the effects of historical contexts and imaginaries on discussions and politics in emerging science and technology, the dynamics and structural issues concerning ELSI and RRI cannot be fully understood. However, research examining the socio-technical imaginaries of emerging science and technologies in the Japanese context has thus far been limited. Research attempting to outline and understand the politics and dynamics of imaginaries of emerging sciences and technologies should be explored further.


Acknowledgements This work is a collaboration between two research projects: “Co-Creation and Communication for Real-Time Technology Assessment (CoRTTA) on Information Technology and Molecular Robotics”, supported by the Human-Information Technology Ecosystem R&D Focus Area of JST-RISTEX, and “Theoretical and Practical Study for new RRI Framework: a study series of education, evaluation, and politics”, supported by the Topic-Setting Program to Advance Cutting-Edge Humanities and Social Sciences Research of JSPS. The authors express their deep appreciation to Dr. Go Yoshizawa and other collaborators for their valuable comments during our project.

References
1. Guston, D. H., & Sarewitz, D. (2002). Real-time technology assessment. Technology in Society, 24(1–2), 93–109.
2. Barben, D., Fisher, E., Selin, C., & Guston, D. H. (2008). Anticipatory governance of nanotechnology: Foresight, engagement, and integration. In E. J. Hackett, O. Amsterdamska, M. Lynch, & J. Wajcman (Eds.), The handbook of science and technology studies (pp. 979–1000). The MIT Press.
3. Guston, D. H. (2014). Understanding ‘anticipatory governance’. Social Studies of Science, 44(2), 218–242.
4. Stilgoe, J., & Guston, D. H. (2017). Responsible research and innovation. In U. Felt, R. Fouché, C. A. Miller, & L. Smith-Doerr (Eds.), The handbook of science and technology studies (4th ed., pp. 853–880). The MIT Press.
5. von Schomberg, R. (2011). Prospects for technology assessment in a framework of responsible research and innovation. In M. Dusseldorp & R. Beecroft (Eds.), Technikfolgen abschätzen lehren: Bildungspotenziale transdisziplinärer Methoden (pp. 39–61). Wiesbaden: VS Verlag.
6. EU Commission. (2011). Commission staff working paper impact assessment. Retrieved from http://ec.europa.eu/research/horizon2020/pdf/proposals/horizon_2020_impact_assessment_report.pdf.
7. Sutcliffe, H. (2011). A report on responsible research & innovation. Retrieved from http://ec.europa.eu/research/science-society/document_library/pdf_06/rri-report-hilary-sutcliffe_en.pdf.
8. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580.
9. Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760.
10. Shineha, R. (2017). How can academic societies contribute to RRI education?: An analysis of their roles and situations. Journal of Science and Technology Studies, 14, 158–174 (in Japanese).
11. EU Commission. (2016). Current RRI in nano landscape report. Retrieved from http://www.nano2all.eu/wp-content/uploads/files/D2.1%20Current%20RRI%20in%20Nano%20Landscape%20Report.pdf.
12. Committee on Research Standards and Practices to Prevent the Destructive Application of Biotechnology. (2004). Biotechnology research in an age of terrorism. The National Academies Press.
13. Tucker, J. B. (2012). Innovation, dual use and security: Managing the risks of emerging biological and chemical technologies. The MIT Press.
14. Kawamoto, S. (2017). An attempt to re-conceptualize dual-use research in Japan: Critical review from the viewpoint of RRI. Journal of Science and Technology Studies, 14, 134–156 (in Japanese).
15. Science Council of Japan. (1950). Resolution never to commit to scientific research conducted for the purpose of war. Retrieved from http://www.scj.go.jp/ja/info/kohyo/01/01-49-s.pdf (in Japanese).


16. Sugiyama, S. (2017). Post-war history of “military research” in Japan: How the scientists faced the taboo? Minerva Shobo (in Japanese).
17. United Nations Development Programme. (1994). Human development report 1994. Retrieved from http://hdr.undp.org/sites/default/files/reports/255/hdr_1994_en_complete_nostats.pdf.
18. Lee, S. J. (2006). Nation of Hwang Woo-Suk. Seoul: Baba Publishing Co.
19. Kim, T.-H. (2008). How could a scientist become a national celebrity? Nationalism and the Hwang Woo-Suk scandal. East Asian Science, Technology and Society: An International Journal, 2, 27–45.
20. Kim, L. (2008). Explaining the Hwang scandal: National scientific culture and its global relevance. Science as Culture, 17(4), 397–415.
21. Leem, S. Y., & Park, J. H. (2008). Rethinking women and their bodies in the age of biotechnology: Feminist commentaries on the Hwang affair. East Asian Science, Technology and Society: An International Journal, 2, 9–26.
22. Hishiyama, Y. (2003). Handbook of bioethics: Ethical, legal and social implications of life science. Tsukiji Shokan (in Japanese).
23. Hishiyama, Y. (2010). Current life science policy: Linking science and society. Keiso Shobo (in Japanese).
24. Nisbet, M. C. (2005). The competition for worldviews: Values, information, and public support for stem cell research. International Journal of Public Opinion Research, 17, 90–112.
25. Fuchigami, K. (2009). Bio Korea and female body: Inside story of “egg donation” in cloned human ES cell research. Keiso Shobo (in Japanese).
26. Sleeboom-Faulkner, M. (2008). Debates on human embryonic stem cell research in Japan: Minority voices and their political amplifiers. Science as Culture, 17, 85–97.
27. Sleeboom-Faulkner, M. (2010). Contested embryonic culture in Japan-public discussion, and human embryonic stem cell research in an aging welfare society. Medical Anthropology, 29, 44–70.
28. Sawai, T. (2017). Human iPS cell research and ethics. Kyoto University Press (in Japanese).
29. Shineha, R. (2019). Bio-capitalism in STS. Journal of Science and Technology Studies, 17 (in Japanese).
30. Ishii, T., Pera, R. A. R., & Greely, H. T. (2013). Ethical and legal issues arising in research on inducing human germ cells from pluripotent stem cells. Cell Stem Cell, 13(2), 145–148.
31. Inoue, Y., Shineha, R., & Yashiro, Y. (2016). Current public support for human-animal chimera research in Japan is limited, despite high levels of scientific approval. Cell Stem Cell, 19(2), 152–153.
32. Shineha, R. (2016). Attention to stem cell research in Japanese mass media: Twenty-year macrotrends and the gap between media attention and ethical, legal, and social issues. East Asian Science, Technology and Society: An International Journal, 10, 229–246.
33. Shineha, R., Inoue, Y., Ikka, T., Kishimoto, A., & Yashiro, Y. (2018). A comparative analysis of attitudes on communication toward stem cell research and regenerative medicine between the public and the scientific community. Stem Cells Translational Medicine, 7(2), 251–257.
34. Gaskell, G., Ten Eyck, T., Jackson, J., & Veltri, G. (2005). Imagining nanotechnology: Cultural support for technological innovation in Europe and the United States. Public Understanding of Science, 14(1), 81–90.
35. Scheufele, D. A. (2006). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20–25). The Wellcome Trust.
36. Downing, J. D. H. (1996). Internationalizing media theory. Peace Review, 8(1), 113–117.
37. Kjærgaard, R. S. (2010). Making a small country count: Nanotechnology in Danish newspapers from 1996 to 2006. Public Understanding of Science, 19(1), 80–97.
38. Lemańczyk, S. (2012). Between national pride and the scientific success of ‘others’: The case of Polish press coverage of nanotechnology, 2004–2009. NanoEthics, 6(2), 101–115.
39. Iwabuchi, K. (2002). “Soft” nationalism and narcissism: Japanese popular culture goes global. Asian Studies Review, 26(4), 447–469.
40. Vogel, E. (1979). Japan as number one: Lessons for America. Harvard University Press.


41. Matsuda, M., Hunt, G., & Obayashi, M. (2006). Nanotechnologies and society in Japan. In G. Hunt & M. D. Mehta (Eds.), Nanotechnology: Risk, ethics and law (pp. 59–73). Routledge.
42. Kanama, D., & Kondo, A. (2007). Analysis of Japan’s nanotechnology competitiveness: Concern for declining competitiveness and challenges for nanosystematization. Science and Technology Trends Quarterly Review, 25, 36–49.
43. Dudo, A., Dunwoody, S., & Scheufele, D. A. (2011). The emergence of nano news: Tracking thematic trends and changes in U.S. newspaper coverage of nanotechnology. Journalism and Mass Communication Quarterly, 88(1), 55–75.
44. Drexler, K. E. (1986). Engines of creation: The coming era of nanotechnology. Doubleday.
45. Crichton, M. (2002). Prey. HarperCollins.
46. Anderson, A., Allan, S., Petersen, A., & Wilkinson, C. (2005). The framing of nanotechnologies in the British newspaper press. Science Communication, 27(2), 200–220.
47. Lundgren, R. E., & McMakin, A. H. (2009). Principles of risk communication. In Risk communication: A handbook for communicating environmental, safety, and health risks (5th ed., pp. 71–82). Wiley-IEEE Press.
48. Oshima, H. (2011). Asbestos. Iwanami Shoten (in Japanese).
49. O’Riordan, T., & Cameron, J. (1994). Interpreting the precautionary principle (1st ed.). Routledge.
50. Fujioka, S. (1981). Environmental education in Japan. Hitotsubashi Journal of Social Studies, 13(1), 9–16.
51. Jasanoff, S., & Kim, S.-H. (2009). Containing the atom: Sociotechnical imaginaries and nuclear power in the United States and South Korea. Minerva, 47, 119–146.

Chapter 15

Global Climate Change and Uncertainty: An Examination from the History of Science

Togo Tsukahara

Abstract In this paper, I specifically address the issue of uncertainty in climate change as a key concept, and discuss uncertainty over time factors from the perspective of the history of science. This paper is especially inspired by an argument for a new way of understanding uncertainty presented by Jerome Ravetz. Also, regarding my philosophical stance on science, I draw on the analysis of Brian Wynne. In addition, from L’Événement Anthropocène (2016) by Jean-Baptiste Fressoz and Christophe Bonneuil, I have received a great deal of impetus for reviewing climate change from a historical perspective and the history of science. It should be stated that this paper is an attempt to extend my earlier commentary on Fressoz and Bonneuil from the viewpoint of the history of science.

Keywords Uncertainty · Climate change · Post-normal science · Anthropocene · Brian Wynne

15.1 Introduction
The problems related to climate change are becoming recognized as a major issue in contemporary society. These phenomena are generally discussed as a warming issue. The current extremes in global weather are enough to make us give up optimism. There is an approach that considers climate change as a business opportunity. There are also approaches that consider the political and practical methods that could be applied to address global warming, and that examine it as a problem of the long-term outlook on a new way of distributing global resources and energy. It is also necessary to consider it as a cultural issue, in that it is often represented with catastrophic and apocalyptic prominence, as people lament the deterioration of the global environment. In terms of science studies and STS, what should we think about these various discussions of global climate change?


In this paper, I specifically address the issue of uncertainty in climate change as a key concept, and discuss uncertainty over time factors from the perspective of the history of science. This paper is an expansion of Tsukahara [1], originally written in Japanese. Among other things, Sect. 15.2 was especially inspired by an argument for a new way of understanding uncertainty presented by Jerome Ravetz; it is therefore revised from what was written in 2005. Also, regarding my philosophical stance on science, I draw on the analysis of Brian Wynne.1 In addition, from L’Événement Anthropocène (2016) by Jean-Baptiste Fressoz and Christophe Bonneuil [5], I have received a great deal of impetus for reviewing climate change from a historical perspective and the history of science. It should be stated that this paper is an attempt to extend my earlier commentary on Fressoz and Bonneuil from the viewpoint of the history of science. My 2005 research can now be considered a precursor to this work.

15.2 Uncertainty as a Criticism of Risk Theory
The concept of uncertainty is already often discussed in the fields of socio-statistical operations, such as financial theory and social insurance, and in the management of investment. Uncertainty there is about how to anticipate unexpected losses, for instance, how many people with life insurance will contract cancer. Galbraith’s The Age of Uncertainty (1977) was published in the days when this word was being widely adopted in the social sciences. Uncertainty, in this case, is a term that appears mainly as an economic concept and indicates political and social instability in economic terms. Uncertainty is claimed when there are uncertain factors in a sociopolitical situation that may harm economic growth. It was also a word that vaguely reflected/represented the anxiety about the regime of the capitalist camp during the Cold War. In recent years, the term has been used more in sociology and its discussions on risks in society; in the sociological arguments of Ulrich Beck and others, instability is inherent in modern society. It has also been discussed in the context of considering the risks of science and technology as an STS concept. I would like to give an overview of the points of contemporary discussions from the perspective of STS.
First, Lloyd Dumas [6] analysed the concept of uncertainty in the context of a critical discussion of risk theory, claiming that the two concepts of risk and uncertainty should be differentiated [6]. There he clarified that “when the list of possible things and their probability of occurrence are known”, it is considered as risk; ignorance about what can happen, or about the probability of occurrence, is uncertainty. Concerning his discussion, the Japanese historian/philosopher of science Makoto Hayashi points out the necessity of recalling that “probability is only an expression

1 About Brian Wynne, see: https://www.lancaster.ac.uk/people-profiles/Brian-Wynne. Among his works, I find [2, 3, 4] particularly important.


of our ignorance” in the first place [7]. Because we “use statistical and probabilistic methods only for uncertain things”, the distinction between uncertainty and risk is not so clear in that sense. How is it possible to assume that a certain situation will occur and then quantify it with a probability? In general, such assumptions are not sufficiently explicit in real science and technology, and the problems with this will not become clearer unless they are actually addressed. Therefore, the concept that will continue to be used in society is uncertainty, not risk. In other words, what can happen is released into the world in ignorance. Even if you try to analyse and evaluate the risk, the result and the possibilities are not sufficiently understood, so there is only uncertainty.
In addition, the STS scholar Hideyuki Hirakawa noted that the concept of risk itself is an ethos of modern commercial and merchant rationalism. According to Hirakawa, in discussions centred on risk, uncertainty is quantified and reduced in most cases [8]. This is assumed to be based on the desires and convictions that are intended to predict and control possible damages. Hirakawa, therefore, argues that this is “the conviction of the experience of sailors and merchants who have travelled far in the early days of the modern age” and, in Bernstein’s expression, it can be said to “put the future under the present rule”. Insofar as risk is seen as a chance, engagement with risky situations is constantly encouraged. If the capitalist rational principle is followed, then chance is the “business opportunity” in which the risk is calculated, and it also “should be taken (to gain greater profit)”. In this implicitly understood value judgment, silent pressure is socially constructed. To fully embrace the modern capitalist principle, one must invest all of one’s holdings even when there is risk.
Furthermore, the philosopher of science Tetsuji Iseda pointed out that risk science intentionally described uncertainty as “a symbol of non-scientificness”, and that it has narrow-mindedly defined risk in “full agreement with the position of regarding uncertainty as a problem within science” [9]. It is important to distinguish between the uncertainty that is addressed by science and the type of uncertainty whose assessment includes a non-professional point of view. Taking up such a problem as a part of the applied philosophy of science, Iseda claims that we should not neglect the various subtleties surrounding the recognition and handling of uncertainty.
Recently, the anthropologist Daiki Ayuba led a research project on risk and uncertainty at the Museum of Anthropology in Osaka, from 2015 through 2018. The project reported that the concept of a risk society actually requires us to assume any uncertainty is a risk; here, the response to risk is demanded as an individual responsibility. Ayuba and his research project aimed at giving anthropological responses to this [10, 11]. Studying the design of social insurance in Indonesia, he discovered an intended transformation of uncertainty into somewhat manageable risks. Even in that case, in the end, the existential judgment in the presence of uncertainty could not be ruled out.
There are subtle differences among the positions of Dumas, Hayashi, Hirakawa, Iseda and Ayuba, but there are connections and a common underlying theme. Their arguments share the theme that discussing uncertainty requires a different setting than the discourse space for discussions of risk. They also share the assumption that uncertainty is the broader concept.
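To make Dumas’s distinction concrete, here is a minimal sketch (my own illustration, not drawn from any of the authors discussed; the function name and the numbers are hypothetical): risk is the case in which an expected loss is computable, because both the list of outcomes and their probabilities are known, while uncertainty is the case in which that computation fails.

```python
# A minimal sketch of Dumas's risk/uncertainty distinction (illustrative only).

def expected_loss(losses, probabilities):
    """Risk: the outcomes and their probabilities are both fully known,
    so an expected loss can be computed. Otherwise: uncertainty."""
    known = set(losses) == set(probabilities)
    normalized = abs(sum(probabilities.values()) - 1.0) < 1e-9
    if not (known and normalized):
        return None  # uncertainty: unknown outcomes or unknown probabilities
    return sum(losses[event] * probabilities[event] for event in losses)

# Risk: an insurance-style calculation with a complete probability model.
print(expected_loss({"cancer": 100.0, "no cancer": 0.0},
                    {"cancer": 0.01, "no cancer": 0.99}))  # -> 1.0

# Uncertainty: a conceivable outcome has no assigned probability.
print(expected_loss({"cancer": 100.0, "unknown illness": 50.0, "no cancer": 0.0},
                    {"cancer": 0.01, "no cancer": 0.99}))  # -> None
```

On this toy reading, risk is the narrow, computable special case and uncertainty is everything that escapes it, which is exactly why the authors above treat uncertainty as the broader concept.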


The necessity of the distinction between risk and uncertainty is persuasively argued by Dumas. As Hirakawa analysed, the concept of uncertainty is implicitly reduced to something that can be controlled; Ayuba’s analysis endorses this. Risk is also a concept by which chance is being “domesticated”, as Ian Hacking has stated. As Iseda reminds us, there are various recognition gaps and differences in handling methods regarding uncertainty. In most cases, uncertainty is regarded as a symbol of non-scientificness. It is, therefore, not necessarily right to deal with uncertainty within the boundaries of scientific discourse. It is more important to look closely and acknowledge that there are differences between the positions offered by disciplined methodologies, and between those offered by an expert and by a non-professional. From this viewpoint, the fundamental problem of uncertainty concerns its setting and context, the conditions under which that uncertainty is expressed. When the concept of uncertainty is discussed, we should look more closely at the context of science and technology that generates this very concept of uncertainty. The differences of context are sometimes institutional, sometimes socio-political and economic, and in some cases even cultural and ideological. A detailed analysis of these concepts of risk and uncertainty is needed.

15.3 Uncertainty in PNS (Post-Normal Science)
In recent years, uncertainty has become a central conceptual tool of STS, mainly in connection with the EU’s environmental lobby. Together with buzzwords such as the precautionary principle, uncertainty is part of several discussions that are underway to clarify the EU’s conceptual stipulations and definitions.2 The policy advocacy research group centred on the London-based journal FUTURES applies the overarching concept of system uncertainty. The group uses this concept to examine various issues among science, technology and society, within the context of PNS (post-normal science).
The historian and philosopher of science Jerome Ravetz is at the centre of this research group around FUTURES. According to Ravetz, post-normal science goes beyond the traditional assumptions of “normal” science. This coincides with the viewpoint that science is not regarded as certain, independent, or value-free at all. Ravetz uses the terms system uncertainty and decision stakes (interest in the decision) as essential elements of analysis. Here, system is considered in the plural, and systems include not only science, but also politics and economics. They

2 For example, in 2004, a conference called “Uncertainty and precautionary principle in environmental policy” was held in Copenhagen, and since then, PNS has become a focus in other EU conferences, such as New Currents in Science: The Challenges of Quality, 2016, Ispra. I attended the conference at Ispra and published my comments on the occasion: Togo Tsukahara, “Commentary: New Currents in Science: The Challenge of Quality, examining the discrepancies and incongruities between Japanese techno-scientific policy and the citizens’ science movement in post-3/11 Japan”, Futures, 91 (2017) 84–89.


do not function independently of each other, and they are mutually influenced. It is a conceptual view committed to a greater focus on structure and context, in which functions are no longer definitive but interrelated. According to Ravetz et al., post-normal science is distinguished from other areas by reference to the two axes of system uncertainty and decision stakes, as shown in Fig. 15.1. When both values are small, the problem is within the normal domain, where expert knowledge or applied science is effective; so-called textbook knowledge can be applied. In the second case, if one of the two is moderate, the application of classic techniques will not be sufficient, requiring higher-level skills and expert/experienced judgment. This is what is called professional consultancy, which requires professional involvement: as in the case of a surgeon or a skilled technician, choices cannot be made on the basis of textbook knowledge alone. And lastly, it is post-normal science that is required when both system uncertainty and decision stakes are high.
As for science itself, its work was formerly considered to be absolute and disinterested, and it was speculated that it could overcome nature and solve problems. But such an optimistic view of science is no longer valid, and accordingly, techno-optimism is too shallow a belief. Science itself is extremely uncertain, and for global issues such as environmental problems, what science can tell is not risk (which can offer a tidy balance sheet of costs and benefits), but only uncertainty, without knowing any definitive results. There is a widespread recognition that risk exists, but while it should be handled within scientific management and technological application, it cannot be resolved intrinsically by scientific judgment alone. This is because science is not a closed system, but interrelated and contextualized. Specialists trained in academic courses and in specific disciplines do have valid knowledge of the problems, but they are all embedded in particular contexts. There are also problems with multiple solutions.

Fig. 15.1 Post-normal science diagram, originally presented by Silvio Funtowicz and Jerome Ravetz [12–16]
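As a rough illustration of how the diagram sorts problems, consider the following sketch (my own, not from the chapter; the 0–1 scales and the cut-off values are illustrative assumptions, not part of Funtowicz and Ravetz’s formulation). Since the bands of Fig. 15.1 are nested, a problem’s domain is set by the larger of its two axis values.

```python
# A toy rendering of the Funtowicz-Ravetz diagram as a classification rule.
# Axis scales (0-1) and thresholds are illustrative assumptions only.

def pns_domain(system_uncertainty: float, decision_stakes: float) -> str:
    """Map a problem scored on the two axes to one of the three domains."""
    level = max(system_uncertainty, decision_stakes)  # nested bands: outer wins
    if level < 0.33:
        return "applied science"            # textbook knowledge suffices
    if level < 0.66:
        return "professional consultancy"   # experienced judgment required
    return "post-normal science"            # extended peer community required

print(pns_domain(0.1, 0.2))  # applied science
print(pns_domain(0.2, 0.5))  # professional consultancy
print(pns_domain(0.8, 0.9))  # post-normal science
```

The point of the sketch is only that neither axis alone decides the matter: a high value on either, with even a modest value on the other, already pushes a problem out of the routine domain.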


Normal science is a concept coined by Thomas Kuhn: within a specific paradigm, science is normal. Hence, the validity of science is bounded but still applicable. But we are now living in post-normal days with trans-science. Thus, science itself is not valid beyond its boundary, and we need to invent new ways to create connections among science, technology and society.
The concept of post-normal science applies when scientific and empirical facts and predictions are uncertain, values are in a controversial state, and the stakes and gains/losses of the social groups involved can be affected. Claims of effective insight can often be influenced, consciously or not, by the potential for claiming these gains. Environmental problems are known to have enormous complexity that cannot be solved simply within the paradigmatic borders of disciplinary science, and/or by the technological aids of enhancing databases and increasing computing power. As for global climate change, scientific facts and disciplinary consensus alone cannot reach conclusions and solutions. Instead, the interests of the various stakeholders in society need to be taken into account in addressing the problem. Such problems are different from those of research science, whose quality assurance is done by peer review and the market (technological innovation in industrial development). Rather, open dialogue involving all relevant people and social elements is needed. Global climate change is dependent on such socio-political conditions; therefore, it is a post-normal science situation.
Here, Ravetz and others refer to the people related to these aspects of problem resolution as the extended peer community. This includes not only those with public or government qualifications, but also those who wish to participate in the solution of the problem and those who would potentially benefit or suffer from it. The knowledge of people who belong to the latter group places the problem in a social context, rather than in the so-called research science or normal science context; this is called the extended fact. One can think of this as so-called local knowledge, as a proper form of citizen participation in science, or simply as citizens’ science. Opinions from different viewpoints, and not just from within so-called normal science, create an open dialogue. This is conceived as a space in which we are able to learn from each other: a setting for post-normal science. The extended peer communities appear in various forms. For example, there are social experiments performed in several styles, such as citizen juries, focus groups and consensus conferences, in various places in Denmark, England and Japan.
In summary, post-normal science is a concept applied to: problems with both high uncertainty and high stakes; problems that cannot be solved with conventional scientific knowledge; and problems that require more people to be involved and to interact with each other. It is an approach that assesses the political decision-making process and tries to determine the value of the outcome.
A more detailed theoretical definition of the concept of uncertainty for post-normal situations is underway. For example, De Marchi et al. divided uncertainty into seven


types as follows:3
1. Institutional Uncertainty
2. Legal Uncertainty
3. Moral Uncertainty
4. Uncertainty of Information Sharing (Proprietary Uncertainty)
5. Situational Uncertainty
6. Scientific Uncertainty
7. Social Uncertainty

Also, the concept of so-called Management of Uncertainty has been proposed by a group led by J. van der Sluijs in the Netherlands. This group is one of the key players in the discussion surrounding the EU’s environmental issues [17]. In this paper, based on the development of these arguments, I would like to analyse the nature and context of the inherent uncertainty of science. Hereafter, I will discuss the historical characteristics of global climate change.

15.4 Time Factor: A Perspective from the History of Science
When science is considered in historical terms, the certainty of science is less credible. We now turn our viewpoint toward the historical background of science, with long-term, macroscopic and global perspectives. I would like to show that science in history was inherently uncertain, and argue that it will continue to be so.
Science aims at solid knowledge; throughout the history of science, humanity has tried carefully and strictly to remove any uncertain elements. Modern science and technology, long linked to the world’s political, economic and other sophisticated systems, underpins modern society. The analysis and judgment provided by the scientific system of knowledge, now fused with technology, can seem so certain that there is no room for a single question. However, thinking about this certainty from a historical perspective presents a different picture. For example, just imagine: what would we see if we looked at our twenty-first century from a fairly distant future, for instance, from the twenty-fifth century? That is 400 years back. Of course, it is almost science fiction, and absurd, to imagine looking 400 years into our future. All we can do is fight and struggle in our own age. But the past can be used as an example. So, I looked for the sources of the elements that make up our present knowledge, and clarified their history and origin: where the knowledge was born, how long it endured and had a useful life, and how it was passed on to the next generation. I would like to examine this material to determine whether our knowledge will last a long time, whether robust knowledge endures into the future. Learning from history will first lead to clarifying the front lines of science and giving them a current perspective. But before we gain confidence in our current view, what does this show about our future? By carefully polishing the mirror on which our current image appears, will the mirror reflect the future?
The history of science looks back on a scale of some hundreds of years to the early shaping of what we would now call science. How can we think of science that

3 These are published in the glossary at NUSAP.net, a website operated by Jeroen van der Sluijs, now at Oslo University, then at the Copernicus Institute at Utrecht University in the Netherlands.


dates back 500 years, that is, to the early 1500s? In this way, we see 1543 CE as a very important turning point in the history of science and technology, important to the relationships among science, technology and society on a global scale. In that year, Copernicus’s book setting out the theory of celestial rotation was published, and Vesalius’s anatomical book was also published; they represented new views in physics/astronomy and in biology/medicine, respectively. In terms of the global spread of science and technology and the East–West exchange of science and technology, it is also in this year that the Portuguese, who had been set adrift and stranded on the shore of Tanegashima Island off south-west Japan, brought firearms to the Japanese. A military revolution finally arrived there, at the archipelago on the eastern edge of the Eurasian continent, and Japan was drawn into an incipient race of imperialism. What we learn from this is that, a mere 500 years ago, the whole human race did not yet know that the earth travels around the sun, or the structure of the human body; even the power of firearms had not reached the islands of the Far East, where we live. And now, with our history as a fulcrum, we need to understand what would happen were we to extend this historical insight in reverse.
It may give the impression that 500 years is a somewhat large scale. But is what people of science believe today not as precarious as what was believed 500 years ago? In addition, the speed of technological progress in recent years has accelerated, which means that we are living in a world in which existing science and technology becomes obsolete extremely fast. Where is the certainty in our life? From this point of view, the certainty of the science of our era has been eroded by time.
To name but a few, we can see such instances in the history of science. It is an early highlight of the history of science that the centre theory of the Earth (geocentrism), which seemed so certain, was replaced by the centre theory of the sun (heliocentrism). Remember the challenges that Copernicus and Galileo faced. We now know the Earth is moving, but Galileo was sentenced as guilty by the authorities of the time for making this claim. Also, for example, the barrier of species seemed so certain, but we all know it was overcome decisively by Darwin with the concept of evolution. Humans and monkeys had a common ancestor, a fact that some Americans are still rejecting. The ancestors of human beings were savage, ferocious and barbarous beasts, not the likenesses of gods. The certain things, such as three-dimensional space and time, that Isaac Newton and Immanuel Kant believed in and constructed with detailed care were largely decomposed and almost completely overturned at the beginning of the twentieth century; the transformation of space–time concepts through relativistic and quantum concepts is also repeatedly analysed in the history of science. Furthermore, aside from the specific pros and cons, the knowledge of life is no longer a mystery (despite biotech being described as God’s area by Yuval Noah Harari in the age of Homo Deus). The certainty of life we had relied upon is shaking and being radically overturned and uprooted. Something that was supposed to be certain is already being re-classified into the uncertain. Areas considered to be guilty of intruding on


the “realm of God and Creators” are now the range where operations are possible: boundaries are actually crossed, supposed dignity is violated, and life itself is now manipulated.
Of course, even so, we should not laugh at the supposed ignorance of others or easily overlook the paths of thinking of past natural philosophers and scientists. Thinking in the past is generated from the place where it was situated, between the specific social situations of a certain time and its intellectual/cultural background. It is decided by its historical circumstances, bounded by its historical conditions. What we are doing now has similar constraints: we are thinking in a specific historical context, with a particular scientific and technological infrastructure, regarded as derived from modernized, so-called Western systems; and in terms of thinking and intelligence, it is impossible to say outright that the thinking of our age is superior to that of the past. It is not a way of thinking, but the way we use machines and instruments, that changed the so-called scientific view. For example, humans in an age without telescopes and microscopes thought that rabbits lived on the moon, that insects generated spontaneously, and that life occurred naturally without eggs. It is very difficult to imagine thinking in that way. However, it is actually impossible to change these kinds of beliefs without scientific instruments and proper equipment, a set of techno-scientific infrastructure. It goes without saying that the issue becomes even more troublesome when the problems of God and supernatural existence are present. Even in our modern age with the Hubble Telescope and the Big Bang theory, astrology is still very popular. In order to abandon past thinking and belief systems, and put them into the category of superstition, a highly charged ideology of scientific enlightenment is needed.
A negative judgment of the past in comparison with current achievements is criticized as anachronism in the history of science. Anachronism condemns the past while ignoring the temporality and culture of past thinking, its socio-political context and its techno-scientific infrastructure. The lens of history is by no means a guarantee of a non-temporal, transcendent gaze. The basic attitude of history is to consider things from the past, to place them in the context of their era, and to answer how that knowledge was rationalized in line with the limitations of that time. The task of historians is to seek answers in the kinds of experiments and experiences from which those particular forms of knowledge were derived. To consider specific thoughts in particular historical contexts is called synchronism. This is in opposition to an attitude of present-centeredness that is dependent on hindsight. Simplistic judgment of the past from the present is often characterized in such terms as Triumphantism, a Whiggish interpretation of history and a justification of the past by Authoritarianism.4 Instead of falling into these pitfalls, we need to stress, as a key word, the contemporariness of the players in the past in their specific situatedness, as S. Harding noted.5

4 For lengthy and voluminous discussions about historical Triumphantism and a Whiggish interpretation of science, see Simon Schaffer and Steven Shapin, A Social History of Truth, etc.
5 Situatedness is a concept defined by Sandra Harding. See Harding (1991) and Harding (2008): Whose Science? Whose Knowledge?: Thinking from Women’s Lives, Cornell University


With this concept, history is required to examine how human intelligence works within a specific time period, what type of infrastructure and socio-economic setting existed, what people thought about them in their philosophical contexts and contexts of authority, and what they worried about and hoped for, at the same level of gaze and voice as specific, explicit people in the past. Do not laugh at or simply denounce people and thought from the past, but carefully listen to their faint voices. Try to look at what they have seen. No matter how ambiguous it seems, it is necessary to have respect for what people of the past desperately thought. If someone from the future, say from the 25th century, looks at us living in the 21st century, we do not want to be easily laughed at or ignored for our trial and error, but to be seen as having taken our struggle seriously and ardently, given our settings.
With regard to the thesis that science advances knowledge, it is possible to raise doubts about certainty in a paradoxical way. Now that we have affirmed the thesis that science has progressed and developed, we may rightly assume that science should continue to progress and develop. Certainly, as a daily feeling, it cannot be denied that science and technology will continue to be the basis of human life. That means that if it is still developing, and we look at it from a certain future point of time, the science we currently rely on is only the technology of this limited point in history (which we can admit is quite immature).
Here is an example. We are alive in a time when good-looking digital cameras are produced one after another. Alas, I have just got a good-quality camera loaded with many new innovative features; in a very short period of time, a next batch of cameras will arrive with even more new functions, communication features and other embedded systems. It is not even surprising if the price for these new incredible features drops almost as quickly. The almost inertial strength that such technological innovation brings about is the issue we need to consider when we come to the topic of science and technology and society. An important aspect of science and technology is that technological achievements are the leading edge only in the latest version, and then they become obsolete. A cell phone with a high-resolution camera from several years ago would barely qualify as a children’s toy today. It looked like a good industry in which to invest; might it still be a good business choice? These experiences occur often in our modern digital age, and they give rise to a fundamental question of whether the certainty of progress in technology is an illusion. The speed and intensity of these advances in physical techno-artefacts would at the same time undermine the notion of certainty in science and technology.
Now we should look at this from a different angle. Science aims at solid knowledge without mistakes. The pursuit of certainty as a subjective judgment of the scientist and the uncertainty of science itself as an objective and historical reality can actually coexist without contradiction, because science is an endless quest. So, science is not

15 Global Climate Change and Uncertainty …

301

necessarily certain. We are aiming for certainty. It may be more accurate to say we continue to improve our aim. Here, we guarantee that it is important to make an effort and to keep aiming for something. But we should not confuse the effort and the result. No matter how hard you try, wishful thinking is different from the result. Even if you bet on a racehorse that continues to lose, no one will pay on your bet until the horse wins. Not everyone appreciates working hard, especially, if it doesn’t provide results. Scientific wishful thinking is the metaphorical losing race horse. It can’t be judged a winner by the number of bets placed on it. Of course, this does not condemn human values and moral judgment. Human beings have emotions, so we are delighted to see the figure of a racehorse destined to lose, but that still refuses to stop running. It may be common to project our own self-images on the never-give-up losers, especially when we’re invested in an effort and the outcomes don’t look good. But what is aimed for in science is often confused with the outcome. For science, certainty is not the result. Certainty is an eternal effort goal, and it is self-justification with complacency. The fact that science aims at certainty is a mere self-claim, and definitely different from the certainty that science inherits. What can we now consider as we look at the characteristics of scientific wishful thinking? First, although it is a traditional view, science has been progressing to reach a certain ultimate truth and benefit the human race. So far it has done this, but that is not a guarantee for the present or the future. In other words, this certainty may be limited at the present stage, and it is assumed that negative consequences may be steadily approaching. Galileo, Darwin and Einstein are all respected as representing a reason to a great victory. It is more so when they are symbolized as part of the Triumphant March of Reason. This is a typical Whiggish historiography, in a jargon from the world of the history of science. They are thought to have been approaching a certain ultimate truth, but this view is limited to within their specific situatedness and a matter of observation positioning, as D. Haraway and S. Harding have claimed.6 I admit it is important for the history of science to live up to the present and to admire the past as Project X did, a popular science series broadcast by Japanese NHK, the national broadcasting board. The drama of people of techno-science product development in the time of Japan’s economic growth, who have devoted much effort to seemingly trivial roles and have steadily solved technical problems one by one. Heroism in Japanese economic growth is eloquently depicted, sometimes boldly, mostly not examined or investigated within its historical context. The people who are shown in Project X are the symbols of great Japanese achievements of the techno-legacy; national pride is illustrated by such techno-supremacy as Japan Rail’s Shinkansen Express, SONY’s Walkman, cameras from Canon, and the automobiles of Honda and Toyota that conquered America and Germany. I agree with the fact that one of the important tasks of a techno-science historian is to build a monument on the anniversary of discovery and visit the tombs of the great scientists and engineers of the past. The history of science and technology will 6 For

those concepts, see Harding and Haraway, see f.n.14.

302

T. Tsukahara

have a family album with a dramatic view of the past and a convincing sense of it in retrospective. There may also be a function of collective healing after hard labour in the pursuit for techno-science. After all, the so-called history of popular science should be a nationally recognized project. It is a function just as gathering for a funeral in Buddhism plays an important social function and income-generating opportunity for religion in modern society, ancestor worship in science is also an important fund-raising opportunity. But it must not be reassuring. Looking back at the moment of victory and getting drunk on that is not something that should be aimed at for the critical/intellectual historian. For the true winners, euphoria is to be allowed only in the moment of victory, and those who have won once feel they have a destiny that they have to win forever. The modern age with the knowledge of science must continue to win. In the domain called the philosophy of science, it is clear to anyone that intellectual slogans such as truth and reason are the basis of this assumption, but we now know the validities of those slogans as big Modern concepts have already expired. It is also possible to think that scientific knowledge has reached some sort of saturation point. It may be convenient to think of this point as the attainment of “The Truth” by the human race. However, considering the state of modern science, the accumulation of knowledge seems to be expanding further via the two frontiers of advancement in information revolution and biotechnology. Various technologies are also expanding the scope of research in the areas of robotics, neurosciences and artificial intelligence, which are leading in different directions, rather than moving towards saturation. The frontiers of science seem to be moving forward, despite the harshness of the deterioration of the earth’s environment and the worsening global climate. This expansion also indicates we need other voices of criticism, or alternative wisdom, of science and technology in society. Science always continues to override what was believed to be certain at a specific time. The human activity of science is constantly expanding: examining various phenomena, assembling large experimental apparatuses and huge sets of instruments and repeating precise observations and measurements. Scientists elaborate diversities of experiments and articulate various series of thoughts. They examine the plots over and over again. They keep doubting with sound scepticism about initial settings, and review the whole process, trying to find solutions with different possibilities. The one thing that can be considered without contradiction is this: the science we have at this point has advanced in a given historical environment with unique socio-economic and cultural-political settings, and it can be changed at any time now and in the future. What can be changed is already changing, and this has been occurring for a long time. Science is always the best thing at a given moment, and it could be better or worse, or at least different in some other time in the future. Any medical treatment could be “the best treatment” and “the most advanced” at the time when a patient received it. Time is always flowing, and possibly tomorrow, new medicine and/or a miraculous technology will be devised to cure your disease. So, it is not an ultimate certainty or an absolutely given truth what science and technology can offer. 
Yesterday’s best treatment may already be abandoned as too dangerous today.

15 Global Climate Change and Uncertainty …

303

This view is not intended as either a denial or a reduction in the value of modern science. Instead, I would like to argue that the concept of certainty in science is always placed within a historical normativity, and to insist that the various issues related to scientific certainty should be the object of further serious consideration. I am not just aiming for a violent attack on modern science and claim to throw away its results and the profits that they bring. My point is that modern science has not acquired nor deserved to claim the ultimate certainty: it is today’s “best treatment”, and this should naturally be judged as acceptable for our daily life. Realistically, for the most part, there is no alternative that can sufficiently replace modern science and technology. We should consider the expanding frontier, while being aware of the fundamental uncertainty of the science that we currently have, is the best option at the present given our limits. Choosing the best for the present, always leaving collateral for the possibility of the second best, always assuming it can be trial and error and taking measures against the limitations is little more than a lesson in life. There is no proof that just because there is no evidence to deny the present best, we have not fallen into self-righteousness. In the first place, methodological scepticism to doubt the present best should have been a fundamental principle of science. Sustaining scepticism is also a fundamental respect for the purpose of science and its methodology. No matter how overwhelming the strength of present techno-science is, always remind yourself the Confucian adage, “Don’t get lost in revising your mistakes”. This is not a lesson from science, but valuable human wisdom from moral ethics of ages ago.

15.5 What Kind of Historical Features Does the Warming Paradigm Have? So, what are the historical definition and characteristics of the scientific frontier of climate change, and the newly emerged field, called environment geo-sciences, its research and related response? The current scientific status of global warming research from the point of view of the history of science includes a detailed examination on the series of concepts concerning geo-chemistry, environmental physics, meteorology and climatology. With global warming as a central hypothesis, many working hypotheses are set to explore and examine various areas to determine what is being affected and in what way. From the research on the atmosphere, and by the studies of radiation from the sun and reconstruction of paleoclimate, to name a few, climate change is becoming paradigmatic. These hypothesized systems are considered to be incorporated into mobilization, and the whole functions as a paradigm. For instance, geochemistry is used to examine the presence/distribution and reaction systems of elements on the earth, in the atmosphere and water on the surface of earth. Paul Crutzen was the one from this discipline who contributed to the establishment of the current paradigm of climate change, and ultimately coined the term Anthropocene.

304

T. Tsukahara

The awareness of the problematized change in climate is shared now in various fields. And a grand puzzle solving over warming, namely normal science, has been put into practice worldwide. A variety of disciplines have been mobilized in order to form some social groups of scientists. Because sly economists are trying to make the most profit from the climate change, new patterns in the distribution of wealth and poverty caused by the changing global environment is providing them another “business opportunity”. A large amount of research funding has been introduced, making it a huge politically institutionalized knowledge production industry. Those who are benefiting from this global knowledge industry are now even referred to as geo-crats, a new type of technocrat that deals with the politics concerning geosciences and its institutionalization/bureaucratization. The climate change paradigm appears to be as strong as ever, although there are still some odd and eccentric individuals, who preferred to be called sceptics. In this paper, we shall examine this paradigm by looking at the image reflected in the mirror of the past, with special reference to the histories of meteorology and climatology.

15.5.1 Techno-Natural-History: Galilean and Humboldtian Sciences

As far as modern meteorology and climatology are concerned, the warming paradigm is based on a Galilean-Humboldtian methodology, and the recent explosive advancement of computation technology has provided the basis for its upgraded version. Galilean science was the first stage of the quantification of nature, mostly carried out in the limited space of the laboratory. Humboldtian science followed and expanded its objects to the wider natural world: natural history and the field sciences became objects for quantification by scientific instruments. Both laboratory science and field science have since been upgraded by the information revolution of computers.

Meteorology began in the time of Aristotle with his Meteorologica and developed into a modern science. Most conspicuously, Galileo's ideas for thermometers and barometers were a landmark.7 Galileo characterized modern science by the instrumentalization of nature, which transforms qualities of nature into quantified, numerical data that can be recorded, processed, compared and examined mathematically. The thermometer and barometer are in this sense typically Galilean. The former replaces warmth and coldness, a relative human sense-feeling, with standardized numbers that can be described and operated on mathematically and statistically: daily temperatures are compared; monthly temperatures averaged. The latter makes invisible atmospheric pressure visible and predicts changes of weather that no prophet could have given; the prediction is delivered by the mechanical readings of instruments.

7 Galileo's name is given to a specific type of thermometer, but who invented it is still in doubt. Galileo did not invent the barometer either, but his idea led his disciple Evangelista Torricelli to a famous experiment, and finally to the invention of the barometer.
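The arithmetic of this Galilean move is trivial, but it may help to see it spelled out. The following lines are a minimal sketch in Python, with invented readings rather than any historical record, of how a felt quality becomes standardized numbers that can be compared and averaged:

from statistics import mean

# Hypothetical daily thermometer readings for one month, in degrees Celsius.
# None of these numbers come from the chapter; they are illustrative only.
daily_readings_c = [4.1, 3.8, 5.0, 6.2, 5.9, 4.4, 3.7, 4.9, 5.5, 6.0]

# Daily temperatures can be compared...
coldest, warmest = min(daily_readings_c), max(daily_readings_c)

# ...and the monthly temperature averaged: a number that can be operated on
# mathematically and statistically, replacing the relative sense of warm/cold.
monthly_mean_c = mean(daily_readings_c)

print(f"coldest {coldest:.1f} °C, warmest {warmest:.1f} °C, "
      f"monthly mean {monthly_mean_c:.2f} °C")

Nothing more is involved in "daily temperatures are compared; monthly temperatures averaged", yet this replacement of sense-feeling by number is the whole of the Galilean operation.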


The most marked advancement that followed was the systematization of synoptic meteorology, based on continuous and simultaneous meteorological observation, which emerged with the Societas Meteorologica Palatina in Mannheim, Germany, from 1778.8

15.5.1.1 Humboldtian Science

Alexander von Humboldt launched a new style of research into nature, different from the previous natural sciences: he took laboratory instruments out into the field, applying scientific instruments to field science. Humboldtian science refers to a style of scientific practice in the nineteenth century characterized by the works of Alexander von Humboldt. Humboldt was committed to what he called terrestrial physics, which maintained numerical precision in the observation of nature and was based on scientific fieldwork with scientific instruments. An extensive array of precise instrumentation had to be available for Humboldt's terrestrial physicist, including chronometers, telescopes, sextants, microscopes, magnetic compasses, thermometers, hygrometers, barometers, electrometers and eudiometers. Humboldt did not consider himself an explorer, but rather a scientific traveller, who accurately measured what explorers had previously reported inaccurately. The term Humboldtian science was coined by Susan Faye Cannon (1978).9 According to Cannon, Humboldt's scientific approach required a new type of scientist: Humboldtian science demanded a transition from the naturalist to the physicist. It has often been compared with, and thought to have supplanted, the Baconian methodology of science. It involved the application of laboratory methods in the field and continuous, extensive data collection. This method is also called the Humboldtian paradigm.

This Humboldtian approach leads directly to the present age. Humboldt brought his scientific instruments to the field, and the field, that is, the earth, became a laboratory object for his science. At a time when Europe's violence and imperial abuse of the world were in full swing in the afterglow of the Great Voyage Era, Humboldtian science was a tool for the expansion of European imperialism and served to justify it. Describing a terra incognita in a standardized European way was a first step towards control: accurate maps of an area and descriptions of its climate, geography and so on were produced by the Humboldtian methodology in preparation for invasion.

8 For Aristotle's meteorology, the Galilean paradigm and engineering, and the Societas Meteorologica Palatina, see Tsukahara et al. (2002), The Fleming Papers on Encounters of History and Meteorology (in Japanese); Stefan Emeis and Cornelia Lüdecke, From Beaufort to Bjerknes and Beyond: Critical Perspectives on Observing, Analyzing, and Predicting Weather and Climate (2005).

9 Cannon, Susan Faye: Science in Culture: The Early Victorian Period, New York 1978. Also see: Dettelbach, Michael: "Humboldtian Science", in: Jardine, N./Secord, J./Spary, E. C. (Eds.): Cultures of Natural History, Cambridge: Cambridge University Press 1996; Nicolson, Malcolm. 1987. "Alexander von Humboldt, Humboldtian science, and the origins of the study of vegetation." History of Science 25: 167–194; Home, Roderick Weir. 1995. "Humboldtian Science revisited: an Australian case study." History of Science 33: 1–22.


15.5.1.2 Humboldtian Science with Upgraded Computers

The establishment of the so-called digital paradigm was also greatly dependent on the Humboldtian methodology of instrumentalization, but this stage was defined by a new technology: the computer. Computer-based technology has dramatically increased the ability to process numerical data, and the explosive increase in the quantity of data brought about a qualitative shift. Nakayama defined this shift as the emergence of a new major method, "imaginary science" (simulation).10 Nakayama contrasts this computer-based method with "empirical experiment": instead of actually experimenting with material objects, simulations of numerical data are carried out by calculations on a computer.11 He called these operations in an imaginary world, carried out in a "void", in contrast with those done in the "practical world".

In meteorology, an electronic communication network was established under the digital paradigm. Real-time weather observation from satellites greatly improved the accuracy of weather forecasts. Observation of the atmosphere at high altitudes had revolutionized meteorology by the middle of the century, and the Humboldtian influence was in full swing in the 1960s and 70s. In the late twentieth century, atmospheric observations and geochemical findings revealed the general circulation of the wind on a global scale, and wider movements in the hydrosphere. Studies of heat radiation and of the sunspot cycle in solar activity, together with studies of trace elements in the atmosphere, were also important parts of the establishment of this paradigm.

It was not so long ago, in historical terms, that global warming research first attracted attention. Global warming due to carbon dioxide gas was suggested by Svante Arrhenius, building on John Tyndall's studies of infrared radiation. When Guy Stewart Callendar matched the hypothesis with actual observation data, it drew much attention, and the hypothesis was substantiated once a global observation network had been established. During the International Geophysical Year (1957–58), scientists cooperated internationally in many new ways; in the aftermath of World War II, and in the very middle of the Cold War, science was trusted as an unmistakable system for monitoring natural phenomena across the globe. Based on the data collected by these various efforts, it was Manabe Shukuro who submitted a three-dimensional general circulation model in 1975; he was actually the first to make a global warming prediction.12 His prediction was later published in Nature as a global atmosphere–ocean coupled model under the authorship of Stouffer, Manabe and Bryan, and the warming forecast in that paper became the basis of the first IPCC report.

10 Shigeru Nakayama, 20th and 21st Century History of Science (in Japanese), 2000.

11 This point is similar to remarks made by Jean Baudrillard, who criticized a society that can be driven by simulation in terms of the simulacrum. But I will not touch on it further in this paper.

12 In the early 1960s, Shukuro Manabe wrote, he had already developed ideas on a radiative-convective model of the atmosphere and explored the role of greenhouse gases such as water vapor, carbon dioxide and ozone in maintaining and changing the thermal structure of the atmosphere. This was the beginning of long-term research on global warming, which he continued in collaboration with the Geophysical Fluid Dynamics Laboratory (GFDL) of NOAA. In the late 1960s, Kirk Bryan and Manabe began to develop a general circulation model of the coupled atmosphere–ocean–land system, which became a tool for the simulation of global warming. The analysis of deep-sea sediments and continental ice sheets indicated that the Earth's climate had fluctuated greatly during the geological past. With regard to Manabe's achievements and biographical account, see the special issue of Illume, published by TEPCO, 2008.

In the 1990s, research such as the WOCE (World Ocean Circulation Experiment) was carried out on a global scale. It can be said without exaggeration that the scientific observation network came to cover the Earth. Monitoring the weather and climate on a global scale had always been the ultimate goal of meteorology, which was in its origin a military technology.

The warming paradigm can thus be understood as having three major elements. First, it is a Humboldt-like collection of field data, utilizing the imaginary science of the digital paradigm and its formidable analysis machines. Second, it is supported by a global observation network of international cooperation, whose observation and data collection rely on advanced technology. Lastly, it depends on the worldwide organization and institutionalization of scientists' observation networks, indispensably financed by national and international agencies.

This advanced, technology-dependent data collection is quite different in flavour from the traditional, pastoral, collection-type natural history of the eighteenth and nineteenth centuries. It is fundamentally different from the practices of the natural historians of the so-called good old days, who wandered the mountains in search of new species of butterflies and unusual insects. Those were idyllic researchers finding pleasure in nature, their equipment merely a small insect net. Instead of finding joy in discovering small life, cutting-edge scientists in the warming paradigm act as surveyors, holding their subjects in an electronic net. Their survey nets are not as limited as probes and timid searching needles, which may accidentally be stabbed into the researcher rather than the subject. The electronic net consciously covers all samples, including the observers; the gaze of the observer is a panoramic surveillance. It is like the Panopticon, the system for keeping watch on prisoners that Michel Foucault discussed. Scientists in the warming paradigm see the weather phenomena of the whole world, and this whole world is the actual Earth, represented as if it were one complete small sphere.

Humboldt brought laboratory equipment out into nature and tried to quantify and describe natural phenomena. The result was a new system of observation and information collection, which became the basis for looking at nature entirely through the lens of technology. In the current warming paradigm this continues. The great change is in the position of the nature monitored there. Humboldt's nature was described in romantic language; his image of the world was of something immense, a united body of movement beyond human recognition. The observer was a small recognizer and detector of the great unity of nature, in awe of its sanctity, prostrate before that wholeness. The nature of the warming paradigm, by contrast, rests on the assumption that nature is being "observed and transformed" and is even operable. In short, the Earth has become smaller under the scientist's gaze and consciousness. The scope of science in dealing
with the Earth has become enormously huge, like the gaze of God as seen from a certain point in the universe. This assumed transformation of nature is an artificial one. Nowadays, the laboratory has been extended: observation and sensing probes are everywhere on the earth, and the trends they measure are monitored. Unlike in the days of the humble Humboldt, nature is now overwhelmed with laboratory equipment. The spread of the observation system, the explosive increase of monitoring data, and the acceleration of processing speed are all enormous. The metaphor of the whole earth being treated as an in-vitro sample in the laboratory is not entirely figurative. The image of the Earth that appears on the computer display is not something we fear, nor does it strike us with awe: it is an experimental guinea pig. In other words, the earth is now exposed under the gaze of a technology that has grown and expanded to match the Earth's scale. While there is historical continuity with Humboldt-like instrumental description, the sheer extent of the observational networks has transformed the quality of the scientific gaze. The warming paradigm looks at the Earth as a sample, with a gaze created by a technological monitoring network and a huge numerical processing system. Is this an illusion and a skewed image, a product of techno-civilization? Is the simulated earth one of the illusions and distorted images we see in our time? Cutting-edge sciences based on the warming paradigm do not seem close to perfect, even at their best approximations. Yet when exposed to abnormal weather, we are treated to multimedia spectacles of ultra-advanced monitoring, modelling and simulation-packed visuals that surpass those of the extreme weather itself.
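What Nakayama's "imaginary science" means in practice can be suggested with a toy calculation. The following Python sketch is a zero-dimensional energy-balance model; it is emphatically not Manabe's radiative-convective or coupled atmosphere–ocean model, and every number in it is a generic, textbook-style assumption introduced here for illustration, not a figure from this chapter. The point is only that a "climate" is computed rather than experimented upon.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0       # solar constant, W m^-2 (assumed)
ALBEDO = 0.3         # planetary albedo (assumed)
EMISSIVITY = 0.61    # effective emissivity, a crude greenhouse proxy (assumed)
HEAT_CAP = 4.0e8     # surface heat capacity, J m^-2 K^-1 (assumed)

def step(temp_k, emissivity, dt):
    """Advance the surface temperature by one explicit Euler step."""
    absorbed = SOLAR * (1.0 - ALBEDO) / 4.0      # mean absorbed shortwave
    emitted = emissivity * SIGMA * temp_k ** 4   # outgoing longwave
    return temp_k + dt * (absorbed - emitted) / HEAT_CAP

def equilibrium(emissivity, temp_k=280.0, days=20000):
    """Integrate day by day until the modelled temperature settles."""
    for _ in range(days):
        temp_k = step(temp_k, emissivity, 86400.0)
    return temp_k

base = equilibrium(EMISSIVITY)
warmer = equilibrium(EMISSIVITY - 0.01)  # slightly stronger greenhouse effect
print(f"baseline {base:.1f} K, perturbed {warmer:.1f} K, "
      f"warming {warmer - base:.2f} K")

Lowering the effective emissivity by 0.01, a crude stand-in for added greenhouse gases, raises the computed equilibrium by roughly one degree: an "experiment" performed entirely in the void of the computer, exactly the operation Nakayama contrasted with empirical experiment.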

15.5.1.3 New Imagery of the Earth as a Huge Noah's Ark

The technology network covers the entire earth, and it monitors human activities indiscriminately. Yet human senses and everyday life are spontaneous and not always controllable, no matter how refined the points of observation become. Computer networks should provide more than mere exemplars: a person's life should be more than a life-insurance policy or a mortgage repayment plan, and should not be defined solely by the simulations presented by data and models. The actual thinking and philosophy of people should operate on a larger scale than the framework of technological simultaneous monitoring and simulation.

It was in the middle of the U.S.-Soviet space race that this global awareness took hold on the ground. That the entire Earth could be put under a single gaze derived its inspiration from the Russian spaceship Vostok 1 and the first astronaut, Yuri Gagarin, who actually looked at the blue earth from high above.13 It was also inspired by the notion of Spaceship Earth conceived by Richard Buckminster Fuller. Frank Drake's Project Ozma and SETI (the Search for Extraterrestrial Intelligence), which Carl Sagan carried forward, made the idea most welcome in popular science. Stanley Kubrick's and Steven Spielberg's SFX movies gave space imagery a leading role in the cultural movements of the 1960s and 1970s, overlapping with the luminous, even illusory image of Earth as our vehicle through space brought by pop culture. This concept became a source of much creative inspiration, and a new era of SF, from Arthur C. Clarke, Philip K. Dick and others, was born.

However, the Spaceship Earth of this era was only just emerging, and for the time being it was a sensation; it was only later made real with more data and observation networks. At the beginning it was rather intuitive and ideological, at an abstract stage that sought a universal value and was motivated by the peace movement against the Vietnam war. It carried a solidarity of humanity that should be called global consciousness, and it was deeply connected with the counterculture: the earth-conscious, space-age faction met the hippie culture of the Flower Age, the Gaia hypothesis, the New Science, and so on. These dreamers differed from the global consciousness of the current global warming paradigm, which is data-driven and simulation-building in its concern for the environment. The current paradigm carries much less of the earth-conscious ideological tint of omnipotence, yet a romantic connection to the 1960s has somehow remained ever-present in the background. In particular, there is an excessive heroism and sorrow visible on the side of global warming and environmental change (which may be an implicit continuation of that consciousness).14

13 Oda Makoto published Let's See Anything (on the Earth) in the same year, 1961. This remarkable book, based on a world travel diary in which he was exposed to hippy culture and Cold War realities, became one of the best-sellers in Japan. Oda was a member of the Japanese elite, a graduate of Tokyo University and a Fulbright scholar, but he ended up a writer and one of the most prominent anti-Vietnam-war activists. The publication of this debut book coincided with the year that Yuri Gagarin's Vostok 1 left the Earth and Gagarin watched the Earth from above it. There is a difference in that the latter looked at the Earth from up in the sky and the former from the ground, but there is no doubt that they shared a worldview that they helped pioneer.

14 It may be counterproductive to the solution of this global environmental problem that too much exposure is given to the Flower people. They often constitute too many members of the earth-conscious lineage of that generation, but they are now mostly seen as self-complacent.

15.5.2 Global Warming Paradigm: Diversified Time Resolution and Future-Orientedness

There are several scenarios for climate change. These scenarios are extrapolations that show the transformation of various physical quantities in time, based on data from the near past to the present. They resemble extrapolations of human population growth, which began to increase during the Industrial Revolution and then increased dramatically at some point. An extrapolation often saturates or continues by inertia; with regard to population, it is feared that the growth will never stop. Of course, actual forecasting is not that simple, and it is interactive: there are elaborate statistical operations, quite complicated mathematically, involving various factors of living conditions, food and reproductive behaviour, and the effects of population-control policies are also intricately organized and studied.

What, then, is the nature of the extrapolation methodology applied to the future of the global climate? The systems it targets are weather phenomena, which cannot be
manipulated freely, like chemical substances in test tubes. There is no reproducibility, and no verification by hypothesis and experiment. The methodologies used are the reconstruction of the past, the creation of a model from current observation data, and simulation empowered by ever greater calculating capacity. In climate reconstruction, the past is analysed in order to know the vector of the inertial force currently in effect; the present provides factors that correct that vector's direction and determine the magnitude and directionality of any newly added force. An implication of this extrapolation-simulation approach is that its predictions are not reproducible: a simulation can be validated against the current state of affairs, but that does not directly provide evidence for predictions of the future.

Attention to future-oriented problems is often sacrificed in order to address urgent immediate ones. Improving the performance of observation and monitoring, and improving the accuracy of model construction and simulation, are both important; but those are issues of the present and the future. We always stand on the past, while a forward-oriented mentality has to leave the past behind. This creates a certain degree of uncertainty.

Within the IPCC there is a department of history, but it is more scientific than humanistic: it deals with data such as proxy, solar and astronomical data, some on shorter time scales. Different sets of climate records of human activity coexist, accumulated at different time-space resolutions, and it is thought that, through adjustment and collation, an extrapolation curve can be drawn from them. One problem with this method is the diversity of densities in the temporal resolution of these records. The differences in time scale should be considered, along with how those data were quantified and collated. The examination should include the insight of so-called meta-data from the humanities, about the social situation and how those climatic phenomena were observed at the time.

Certainly, electronic networks have increased the amount of information we can collect. Along with the addition of new processing capacity, this has brought about a huge increase in the area covered by our consciousness and scientific gaze.15

15 Rather than being paradoxical, so-called withdrawal is a problem precisely because of this huge expansion of consciousness and the resulting sense of almightiness.
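To see what "drawing an extrapolation curve" amounts to in its barest form, the following Python sketch fits a least-squares trend to a short instrumental-era series and projects it forward. The anomaly values are invented for illustration, not real observational or proxy data, and actual scenario construction is incomparably more elaborate; the sketch shows only the naive logic.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical decadal temperature anomalies (degrees C) from the 1880s on.
years = [1880, 1900, 1920, 1940, 1960, 1980, 2000, 2020]
anomaly = [-0.16, -0.08, -0.27, 0.12, -0.03, 0.26, 0.39, 1.01]

slope, intercept = linear_fit(years, anomaly)
for future in (2050, 2100):
    # Carry the fitted past forward: the essence of a naive extrapolation.
    print(f"{future}: {slope * future + intercept:+.2f} °C "
          f"(naive linear extrapolation)")

As the text notes, nothing in such a projection is verifiable by hypothesis and experiment: the curve simply carries the fitted past forward, and that is exactly where the uncertainty enters.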


Meanwhile, due to the rapid expansion of data throughput, various discussions are intermingled at truly random scales. For the global warming problem, extrapolation can be performed using observation data from the 1880s onward as actual, instrumentally-measured data; for earlier periods, most studies use substitute (proxy) data. This does not mean that the temporal resolution of these methods is uniformly high. In the history of science, we should reflect on how humans have recognized climate and the environment.16 Global warming must be considered not only by geo-scientists, social and political scientists and economists; it should also be examined by more scholars from the humanities, and by philosophers. Global climate change is a central issue for humanity, and it should be examined from all the viewpoints offered by human society's accumulated wisdom on environmental change. One recent development is the project that Atsushi Ota of Keio University has been advocating as necessary and has just announced: the establishment of the Environmental Humanities. One of its essential parts, as designed by Ota, is a humanistic historical meteorology of colonial Southeast Asia, for which a JSPS research grant project has been launched for 2019–2023.17

15.6 Concluding Remarks

The arguments over global warming mostly lack the human dimension, as Fressoz and Bonneuil have remarked. From Japanese history, we can describe further examples to support their discussion. To name just a few: in the face of climate change, there is still something to learn from the droughts and famines of the early modern Edo period, and from how farmers tried to survive hunger. In times of famine, China and Japan produced a number of books of salvation.18 It is timely to recall the desperate efforts of our recent ancestors as we stand at the doorstep of an age of global warming disaster.

Yet the warming paradigm carries uncertainty about the time factor in both a macro- and a micro-sense. Viewed macroscopically, the warming paradigm can be defined historically as a digitized Humboldtian science. Such a paradigm includes the possibility of transforming into a different paradigm as the technological infrastructure changes, for instance with increasing computer performance. Microscopically, we need to consider differences in the time scales on which warming itself is treated: in interpretations on the human scale, the climate and the culture created by humans are often treated as mutually exclusive.

16 Tsukahara, Mikami, Naito (Eds.), "Meteorology and history encounters", in History of the Environment, 2003, Kobe STS Study Group.

17 Ota Atsushi's ambitious research agenda can be seen in the JSPS database: https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-19H01322/.

18 For the Edo period, there were several relief books for times of bad harvest (guide books for times of famine), referred to in papers such as Shirasugi's articles of 1996 and 2003: Shirasugi, in Tohoku Gaku, 8, 154–165, 2003–04; Chugoku Shiso-shi Kenkyu (19), 211–230, 1996–12.


Several more scientific features of this problem could still be pointed out, and many points remain to be considered, such as cultural issues in representations of the Earth and the climate politics surrounding warming. So far, I have presented a historical sketch, from the history of science, of global warming with uncertainty as its keyword. I should admit this paper is still crude, but we are constantly repeating our examinations as time flows. Still, I believe in our struggles for climate justice, which point in a different direction than the futures designed by geo-crats, political economists and politicians, experts and scientists who proudly show us their scientific-evangelical prophecies built on extrapolation and simulation.

References

1. Tsukahara, T. (2005). Koiki Kiko Hendo to Fukakujitsusei. In Kokyo Gijutsu no Governance (pp. 279–294). https://www.lib.kobe-u.ac.jp/handle_kernel/81011198.
2. Wynne, B. (1996). Misunderstood misunderstandings: Social identities and public uptake of science. In A. Irwin & B. Wynne (Eds.), Misunderstanding science? The public reconstruction of science and technology (pp. 19–46). Cambridge University Press.
3. Wynne, B. (1991). Knowledges in context. Science, Technology & Human Values, 16(1), 111–121.
4. Wynne, B. (1993). Public uptake of science: A case for institutional reflexivity. Public Understanding of Science, 2(4), 321–337.
5. Fressoz, J.-B., & Bonneuil, C. (2016). L'Événement Anthropocène: La Terre, l'histoire et nous. Seuil. English edition: The Shock of the Anthropocene: The Earth, History and Us. Verso.
6. Dumas, L. (1999). Lethal arrogance: Human fallibility and dangerous technology. St. Martin's Press; op. cit. in Hayashi.
7. Hayashi, M. About the concept of risk. Kogakuin Daigaku Bulletin, 38(1), 1–13; also see Hayashi, Concept of risk and STS. Kagaku Gijutsu Shakairon Kenkyu, 1, 75–80.
8. Hirakawa, H. (2003). Risk, uncertainty and tragedy. Gendai Shiso, 31(9), 156–164.
9. Iseda, T. (2005). Risk as a problem for applied philosophy.
10. Ayuba, D. (2018). Anthropology of probability and uncertainty. Minpaku Tsushin, 163, 18–19.
11. Higashi, K. (2014). Anthropology of risk. Minpaku Tsushin, 153, 10–11.
12. Funtowicz, S., & Ravetz, J. (1993). Science for the post-normal age. Futures, 25(7), 739–755.
13. Funtowicz, S. O., & Ravetz, J. R. (1992). Three types of risk assessment and the emergence of post-normal science. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 251–273). Westport, Connecticut: Greenwood.
14. Several articles by Funtowicz and Ravetz have been translated into Japanese (2016). Also see T. Tsukahara, Gendai Shiso, 44(12), 172–191, 2016–06.
15. Tsukahara, T. Science (Iwanami), 83(3), 334–342.
16. Tsukahara, T. New definition of social responsibility of scientists by post-normal science. Gendai Shiso, 39(10), 98–120.
17. Van der Sluijs, J. (2000). Anchoring amid Uncertainty.